Dataset fields (name: type, with value-length range where given):

- title: string (lengths 1–322)
- pmid: string (lengths 7–8)
- background_abstract: string (lengths 7–2.19k)
- background_abstract_label: string (lengths 7–30)
- methods_abstract: string (lengths 18–2.32k)
- methods_abstract_label: string (lengths 5–43)
- results_abstract: string (lengths 35–2.65k)
- results_abstract_label: string (lengths 6–30)
- conclusions_abstract: string (lengths 33–1.83k)
- conclusions_abstract_label: string (lengths 6–45)
- mesh_descriptor_names: sequence
- pmcid: string (lengths 5–8)
- background_title: string (lengths 10–95)
- background_text: string (lengths 107–140k)
- methods_title: string (lengths 4–129)
- methods_text: string (lengths 99–196k)
- results_title: string (lengths 6–172)
- results_text: string (lengths 112–685k)
- conclusions_title: string (lengths 8–58)
- conclusions_text: string (lengths 64–66.1k)
- other_sections_titles: sequence
- other_sections_texts: sequence
- other_sections_sec_types: sequence
- all_sections_titles: sequence
- all_sections_texts: sequence
- all_sections_sec_types: sequence
- keywords: sequence
Functional magnetic resonance imaging of a parametric working memory task in schizophrenia: relationship with performance and effects of antipsychotic treatment.
21331519
Working memory dysfunction is frequently observed in schizophrenia. The neural mechanisms underlying this dysfunction remain unclear, with functional neuroimaging studies reporting increased, decreased or unchanged activation compared to controls.
RATIONALE
We used functional magnetic resonance imaging and studied the blood-oxygen-level-dependent (BOLD) response of 45 schizophrenia outpatients and 19 healthy controls during a parametric spatial n-back task.
METHOD
Performance in both groups deteriorated with increasing memory load (0-back, 1-back, 2-back), but the two groups did not significantly differ in performance overall or as a function of load. Patients produced stronger BOLD signal in occipital and lateral prefrontal cortex during task performance than controls. This difference increased with increasing working memory load in the prefrontal areas. We also found that in patients with good task performance, the BOLD response in left prefrontal cortex showed a stronger parametric increase with working memory load than in patients with poor performance. Second-generation antipsychotics were independently associated with left prefrontal BOLD increase in response to working memory load, whereas first-generation antipsychotics were associated with BOLD decrease with increasing load in this area.
RESULTS
Together, these findings suggest that in schizophrenia patients, normal working memory task performance may be achieved through compensatory neural activity, especially in well-performing patients and in those treated with second-generation antipsychotics.
CONCLUSIONS
[ "Adult", "Antipsychotic Agents", "Case-Control Studies", "Female", "Functional Laterality", "Humans", "Magnetic Resonance Imaging", "Male", "Memory, Short-Term", "Neuropsychological Tests", "Oxygen", "Prefrontal Cortex", "Schizophrenia", "Schizophrenic Psychology", "Task Performance and Analysis" ]
3111549
Introduction
Working memory is defined as the capacity to mentally maintain and manipulate information over short time periods. As such, it is an important cognitive function underlying a range of behaviours and everyday tasks. Working memory deficits are frequently observed in schizophrenia (Barch 2005; Goldman-Rakic 1994; Park and Holzman 1992). The deficit has been observed at different illness stages and may be a predictor of clinical and functional outcome as well as a relevant target for treatments aimed at cognitive enhancement (Green and Nuechterlein 2004; Greenwood et al. 2005; Liddle 2000). Basic neuroscience studies have shown that the dorsolateral prefrontal cortex (DLPFC), as well as parietal cortex and subcortical projection targets, is of critical importance for working memory (Barch 2005; Gruber and von Cramon 2003).

However, despite the often replicated observation of working memory impairments in schizophrenia, brain activation studies using functional magnetic resonance imaging (fMRI) have not yet reliably identified the neural circuits underlying this impairment. While initial studies observed reduced activation in frontal areas in patients (termed hypofrontality), other studies have reported increased activation or no significant differences between patients and controls (for review, see Glahn et al. 2005; Linden 2009; Manoach 2003). A variety of factors such as task design and patient characteristics are likely to play a role, but task difficulty, subject performance levels, and pharmacological treatment status are considered to be particularly important.

Regarding the related factors of task difficulty and performance, it has been argued (Manoach 2003) and demonstrated empirically (Callicott et al. 2003; Potkin et al. 2009) that activation levels may follow an inverted U-shaped curve as a function of task difficulty. This curve may differ between patients and healthy controls, with patients showing hyperactivation relative to controls at easier task difficulty levels (or when no behavioural impairments are seen) and hypoactivation at greater task difficulty (or when behavioural impairments are seen).

Regarding pharmacological treatment, the effects of antipsychotic medication have been addressed in a number of studies. These compounds have been shown to influence the blood-oxygen-level-dependent (BOLD) signal during performance of a variety of cognitive and behavioural paradigms. Specifically, a number of studies have shown normalisation of hypofunction in schizophrenia with second-generation antipsychotics (SGA) but not first-generation antipsychotics (FGA) across a range of neurocognitive paradigms (Braus et al. 1999; Honey et al. 1999; Jones et al. 2004; Kumari et al. 2007; Meisenzahl et al. 2006; Stephan et al. 2001; for review see Davis et al. 2005; Kumari and Cooke 2006). However, not all studies have observed this effect (Surguladze et al. 2007), and other work points to task dependence of treatment effects. For example, Schlagenhauf et al. (2008) found that switching from FGA to olanzapine led to increased BOLD in the left DLPFC during a 0-back attentional condition but not during a 2-back working memory condition, relative to baseline.

The aims of this study were to further explore the macroscopic neural circuits underlying working memory task performance in a large sample of schizophrenia patients. More work is needed in this area given the inconsistencies in the literature, and to build up the evidence base, particularly with respect to the still unresolved issue of hypo- versus hyperfrontality (Karch et al. 2009; Manoach 2003; Potkin et al. 2009; Schneider et al. 2007; Tan et al. 2005). Additionally, we aimed to examine differences in BOLD signal between patients treated with FGA and those treated with SGA. This aim is of relevance considering that working memory, as well as the underlying BOLD response, may represent a promising treatment target or biomarker in drug development (Green and Nuechterlein 2004; Migo et al. 2011). To study working memory, we used a spatial, parametric n-back task. The n-back task is a widely used paradigm that may represent an intermediate phenotype reflecting genetic vulnerability to schizophrenia as well as a potential treatment target (Barch and Smith 2008; Egan et al. 2001; Linden 2009). The parametric design of the task allowed us to test for effects of working memory load on the BOLD signal as well as its modulation by antipsychotic treatment.
Results
[SUBTITLE] Sample description [SUBSECTION]
A total of 64 participants completed the study, consisting of 45 patients with schizophrenia and 19 controls. Demographic and clinical data are summarised in Table 1. The patient and control groups did not differ significantly on demographic variables with the exception of years spent in full-time education, which may be expected given that schizophrenia is commonly associated with lower than expected educational achievement (Green 2001; see also Surguladze et al. 2007).

Table 1. Demographic and clinical data

| | Patients (N = 45) | Controls (N = 19) | Group comparison |
|---|---|---|---|
| Age (years) | 37.33 (8.19) | 33.32 (9.21) | t = −1.73, df = 62, p = 0.09 |
| Gender (N male/N female) | 35/10 | 12/7 | χ² = 1.46, df = 1, p = 0.23 |
| Ethnicity (N Caucasian/N other) | 19/26 | 12/7 | χ² = 2.34, df = 1, p = 0.13 |
| Years of education | 13.36 (2.33) | 14.95 (2.92) | t = 2.32, df = 62, p = 0.02 |
| Parental SES | 2.58 (1.08) | 2.26 (1.20) | Z = −1.06, p = 0.29 |
| Duration of illness (years) | 13.49 (9.78) | – | – |
| Age of onset (years) | 23.84 (6.58) | – | – |
| Antipsychotic treatment (N SGA/N FGA) | 38/6 (a) | – | – |
| PANSS positive symptoms | 16.18 (4.72) | – | – |
| PANSS negative symptoms | 17.67 (4.48) | – | – |
| PANSS general psychopathology | 32.49 (6.01) | – | – |
| PANSS total score | 66.33 (12.66) | – | – |

Data represent means (and standard deviations) unless indicated otherwise. Socio-economic status (SES) is measured as professional achievement from 1 (professional) to 4 (manual). All participants were right-handed. PANSS = positive and negative syndrome scale; SGA = second-generation antipsychotics; FGA = first-generation antipsychotics. (a) One patient was untreated at the time of fMRI scanning.

The majority of patients (N = 38) were treated with SGA (clozapine = 10, risperidone = 7, olanzapine = 16, amisulpride = 2, aripiprazole = 3), six patients were treated with FGA (haloperidol = 1, flupentixol = 2, sulpiride = 2, chlorpromazine = 1), and one patient was untreated at the time of fMRI testing.

A number of patients were co-medicated with anticholinergic, benzodiazepine, mood stabilising, or antidepressant compounds. Anticholinergic compounds were administered to three patients on FGA and to two patients on SGA. Benzodiazepines were administered to four patients on SGA. Mood stabilisers were administered to one patient on FGA and eight patients on SGA. Antidepressants were administered to two patients on FGA and 15 patients on SGA.

[SUBTITLE] Working memory task performance [SUBSECTION]
Working memory task performance data are shown in Table 2. The patient and control groups did not significantly differ in percent correct trials (F[1,62] = 2.17, p = 0.15, ηp² = 0.03). There was an effect of load on percent correct trials (F[2,124] = 73.47, p < 0.001, ηp² = 0.54), indicating fewer correct responses with increasing load, but no group-by-load interaction (F[2,124] = 1.27, p = 0.29, ηp² = 0.02).

Table 2. N-back task performance data by group

| | Patients (N = 45) | Controls (N = 19) |
|---|---|---|
| 0-back correct responses | 84.89 (19.15) | 87.51 (13.91) |
| 1-back correct responses | 62.10 (28.92) | 74.89 (22.42) |
| 2-back correct responses | 43.18 (26.54) | 51.58 (23.19) |
| 0-back omissions | 8.03 (9.62) | 8.56 (9.26) |
| 1-back omissions | 15.75 (16.23) | 14.81 (10.96) |
| 2-back omissions | 27.49 (20.92) | 29.72 (18.15) |
| 0-back latency | 233.95 (130.96) | 191.36 (145.83) |
| 1-back latency | 296.41 (210.07) | 293.70 (145.88) |
| 2-back latency | 490.24 (353.46) | 390.63 (213.42) |

Data represent means. Standard deviations are given in parentheses. Correct responses and omissions are given in percent; latency is given in milliseconds.

The groups also did not significantly differ in the latency to correct trials (F[1,61] = 1.00, p = 0.32, ηp² = 0.02). As with percent correct responses, there was an effect of load on latency (F[2,122] = 17.11, p < 0.001, ηp² = 0.22), indicating longer latencies with increasing load, but no group-by-load interaction (F[2,122] = 0.80, p = 0.45, ηp² = 0.01). Similarly, there was an effect of load on the percentage of omission errors (F[2,124] = 38.24, p < 0.001, ηp² = 0.38), indicating more omission errors with increasing load, but no effect of group (F[1,62] = 0.04, p = 0.85, ηp² = 0.001) or group-by-load interaction (F[2,124] = 0.23, p = 0.80, ηp² = 0.004).
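The group (patients, controls) by load (0-back, 1-back, 2-back) repeated-measures ANOVAs above were run in SPSS. As a purely illustrative sketch, and not the authors' analysis, an equivalent mixed ANOVA can be computed in Python with the pingouin package, assuming a hypothetical long-format table of per-subject accuracy scores (all column and file names below are assumptions):

```python
# Illustrative sketch only: the study used SPSS repeated-measures ANOVA.
# Assumes a long-format DataFrame with hypothetical columns
# 'subject', 'group' (patient/control), 'load' (0/1/2-back), 'pct_correct'.
import pandas as pd
import pingouin as pg

df = pd.read_csv("nback_performance_long.csv")  # hypothetical file

# Mixed ANOVA: 'group' varies between subjects, 'load' within subjects.
aov = pg.mixed_anova(data=df, dv="pct_correct",
                     within="load", subject="subject", between="group")
print(aov)  # F values, p values and partial eta squared per effect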
[SUBTITLE] BOLD response by group and load [SUBSECTION]
Both groups activated an extensive fronto-parieto-striato-thalamo-cerebellar network during performance of the n-back task. Figure 1 shows the activation of the combined group as a function of load (i.e., the increase in BOLD signal with increasing working memory load); the main effect of task (across conditions of load) in the combined sample gave a very similar result.

Fig. 1 Main effect of load on BOLD across groups. The figure depicts the significant BOLD response as a function of load on the n-back working memory task across both groups (p < 0.05, FWE corrected voxel level).

The main effect of group (across conditions of load) revealed three clusters of increased activation (p < 0.05, corrected cluster level) in patients relative to controls: (1) occipital cortex (BA17; Talairach coordinates of peak voxel, x = −20, y = −88, z = −9; 857 voxels; Z = 4.39) extending into the cerebellum; (2) left middle frontal gyrus (BA9; x = −46, y = 15, z = 36; 1,913 voxels; Z = 4.26) extending into pre- and postcentral gyrus; and (3) right inferior frontal gyrus (BA45/47; x = 46, y = 18, z = 18; 854 voxels; Z = 4.06) extending into right middle frontal gyrus. In contrast, there were no significant increases in controls relative to patients at the corrected or uncorrected cluster level, and there were no significant voxels at the chosen height threshold (p < 0.001).

The group-by-load interaction (Fig. 2) revealed two significant clusters (p < 0.05, corrected cluster level): (1) left middle frontal gyrus (BA9; x = −46, y = 17, z = 36; 2,032 voxels; Z = 4.43) and (2) right inferior frontal gyrus (BA45/47; x = 46, y = 18, z = 18; 749 voxels; Z = 3.94) extending into right middle frontal gyrus. A third cluster with a peak in the occipital cortex (BA17; x = −22, y = −86, z = −11; 535 voxels; Z = 4.10) showed the same pattern but narrowly missed statistical significance (p = 0.054, corrected cluster level). These clusters showed stronger increases across load in patients than in controls. Conversely, there were no significant clusters showing stronger increases across load in controls than in patients at the corrected or uncorrected cluster level, and there were no significant voxels at the chosen height threshold (p < 0.001).

Fig. 2 Group-by-load interactions in BOLD response. The upper part of the figure depicts in red the areas that show a significant main effect of load and in blue the two prefrontal areas that show a significant group-by-load interaction. The selection of the two coronal slices corresponds to the Talairach y coordinates of the peak voxel for the right (y = 18) and left (y = 17) PFC clusters, respectively. The lower parts of the figure depict the nature of the interaction effects separately for left and right PFC.

The prefrontal clusters that emerged in this interaction analysis appeared not to be part of the activation seen in response to load across both groups (Fig. 1) but to neighbour it. To verify this, the group-by-load interaction contrast was masked with the contrast image resulting from the main effect of load; the two prefrontal areas from the interaction analysis also emerged in this masked analysis, indicating that these are voxels that show an additional increase with load in the patient group but not in the controls and not in the combined group.

To better understand the origin of these group-by-load interaction effects, the mean BOLD signal in each of the two frontal clusters showing a significant interaction was extracted as described above, and repeated-measures ANOVAs with the within-subjects factor of load were run in SPSS separately for the two groups. In the left prefrontal cluster, we found a linear increase in BOLD as a function of load in the patient group (F[2,88] = 6.65, p = 0.002, ηp² = 0.13) but not in the controls (F[2,36] = 0.35, p = 0.71, ηp² = 0.02). Similarly, in the right prefrontal cluster, there was a linear increase in BOLD as a function of load in the patient group (F[2,88] = 8.26, p = 0.001, ηp² = 0.16) but not in the controls (F[2,36] = 0.77, p = 0.47, ηp² = 0.04). Taken together, this pattern indicates that patients showed a significantly greater BOLD increase in response to working memory load than controls in lateral prefrontal cortical areas that did not show a significant main effect of load.

[SUBTITLE] Association of BOLD with performance level in the patient group [SUBSECTION]
In keeping with the analysis plan, the patient group was split into those with high and low performance along the patient group's median (62.49%), yielding two groups of N = 22 each (the subject with the median score was excluded). N-back data of the two groups are summarised in Table 3. Importantly, the high- and low-performing patient groups did not differ from each other in any demographic or clinical variables (all p > 0.05). However, the two patient groups differed significantly from each other in percent correct responses at each level of load (all F > 11.88, p < 0.002), as expected. They also differed with regard to omission errors (all F > 6.82, p < 0.02) but not in reaction time (all F < 1.85, p > 0.18).

Table 3. Performance data by patient subgroups

| | High performance (N = 22) | Low performance (N = 22) | First-generation (N = 6) | Second-generation (N = 38) |
|---|---|---|---|---|
| 0-back correct responses | 93.52 (5.77) | 75.64 (23.63) | 89.11 (9.22) | 84.07 (20.50) |
| 1-back correct responses | 85.07 (10.73) | 39.87 (23.55) | 60.48 (31.03) | 63.27 (28.81) |
| 2-back correct responses | 62.17 (23.96) | 24.20 (12.11) | 43.59 (30.75) | 43.77 (26.32) |
| 0-back omissions | 4.61 (4.15) | 11.76 (12.16) | 8.00 (6.14) | 8.07 (10.24) |
| 1-back omissions | 7.34 (6.11) | 24.74 (18.65) | 24.05 (21.22) | 14.21 (15.39) |
| 2-back omissions | 18.88 (17.24) | 36.92 (20.80) | 37.95 (24.36) | 25.26 (20.11) |
| 0-back latency | 220.16 (103.08) | 254.48 (153.69) | 268.35 (129.35) | 218.62 (117.80) |
| 1-back latency | 252.88 (144.17) | 303.00 (195.38) | 442.97 (228.72) | 264.07 (193.76) |
| 2-back latency | 403.39 (267.89) | 537.96 (380.03) | 460.99 (237.67) | 488.19 (372.91) |

The first two columns are the performance subgroups and the last two the treatment subgroups. Data represent means. Standard deviations are given in parentheses. Correct responses and omissions are given in percent; latency is given in milliseconds. The performance and treatment groups were not significantly associated (see "Results").

At the level of BOLD, each patient subgroup showed similar activation increases in response to load as the combined group (see Online Resource 1). An anatomically unconstrained voxelwise comparison between the two patient groups did not find a main effect of performance, and there was no performance-by-load interaction (at p = 0.05, corrected cluster level). A comparison of the prefrontal clusters that had shown group-by-load interactions above similarly did not yield significant main effects of performance for either cluster (both p > 0.79). However, there was a significant performance-by-load interaction for the left (F[2,84] = 4.09, p = 0.02, ηp² = 0.09) but not the right (p = 0.24, ηp² = 0.03) cluster. To better understand this interaction, within-subject ANOVAs of load in the left prefrontal cluster were calculated separately for the two performance groups. These showed that the high-performing (F[2,42] = 9.63, p < 0.001, ηp² = 0.31) but not the low-performing patients (p = 0.83, ηp² = 0.009) showed a statistically significant linear increase in BOLD with load (Fig. 3).

Fig. 3 Performance-by-load interaction in BOLD response. The figure shows the performance-by-load interaction for the left PFC. The patient sample is split along the median correct response rate into those with high performance (N = 22) and those with low performance (N = 22). The x axis shows the three conditions of the n-back working memory task.
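The follow-up analyses above rest on extracting each subject's mean signal from the significant interaction clusters and testing for a load effect within each subgroup. As a hedged illustration (not the authors' MarsBaR/SPSS pipeline), the sketch below assumes a hypothetical NumPy array of per-subject ROI means for the three load conditions and tests a linear trend with contrast weights, followed by a one-way repeated-measures ANOVA:

```python
# Illustrative sketch; the study extracted ROI means with MarsBaR and ran the
# ANOVAs in SPSS. 'roi_means' is a hypothetical array of shape (n_subjects, 3)
# holding mean BOLD per subject for 0-, 1- and 2-back in one prefrontal cluster.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

roi_means = np.load("left_pfc_roi_means.npy")  # hypothetical file
n_subjects = roi_means.shape[0]

# Linear trend across load, one value per subject, using contrast weights (-1, 0, 1).
trend = roi_means @ np.array([-1.0, 0.0, 1.0])
t_stat, p_val = stats.ttest_1samp(trend, popmean=0.0)
print(f"linear load trend: t = {t_stat:.2f}, p = {p_val:.3f}")

# One-way repeated-measures ANOVA with load as the within-subject factor.
long = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), 3),
    "load": np.tile(["0-back", "1-back", "2-back"], n_subjects),
    "bold": roi_means.ravel(),
})
print(AnovaRM(long, depvar="bold", subject="subject", within=["load"]).fit())
```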
[SUBTITLE] Effects of antipsychotic treatment [SUBSECTION]
The split according to treatment yielded two groups of patients, those treated with FGA (N = 6) and those treated with SGA (N = 38); the one untreated patient was excluded. N-back performance data of the treatment groups are summarised in Table 3. There were no significant associations of treatment with any demographic variables, clinical variables, or anticholinergic, antidepressant, mood stabiliser, or benzodiazepine treatment (all p > 0.34), and the treatment factor was not significantly associated with the performance split (χ² = 0.89, df = 1, p = 0.35): in the SGA group, there were 17 low- and 20 high-performing patients, and in the FGA group, there were four low- and two high-performing patients. There were no main effects of treatment (all p > 0.18) or treatment-by-load interactions (all p > 0.22) on n-back task performance; therefore, any treatment effects on BOLD reported below are considered to be independent of performance effects.

At the level of BOLD, an unrestricted voxelwise analysis did not yield any significant differences between the two treatment groups or treatment-by-load interactions (all p > 0.05, corrected cluster level). When considering the extracted prefrontal ROIs, no main effects of treatment were seen in either cluster (both p > 0.51). However, there was a significant treatment-by-load interaction (F[2,84] = 5.16, p = 0.008, ηp² = 0.11) in the left prefrontal cluster. This interaction (Fig. 4) indicated that patients treated with SGA showed an increase in BOLD as a function of load, whereas patients treated with FGA showed a reduction. The interaction effect appeared similar in the right prefrontal cluster but did not reach statistical significance (p = 0.13). A further split of the SGA group into patients treated with clozapine (N = 10), risperidone (N = 7), and olanzapine (N = 16) did not yield any significant group or interaction effects for the prefrontal ROIs. Therefore, the SGA group as a whole showed a stronger parametric left prefrontal BOLD increase than the FGA group, with no significant differences detected between different SGA compounds.

Fig. 4 Treatment-by-load interaction in BOLD response. The figure shows the treatment-by-load interaction for the left PFC. The patient sample is split into those treated with first-generation antipsychotics (N = 6) and those treated with second-generation antipsychotics (N = 38). The x axis shows the three conditions of the n-back working memory task.

[SUBTITLE] Association of BOLD with symptom severity [SUBSECTION]
There were no significant correlations of BOLD from the combined n-back activations with PANSS total or subscale scores (all p > 0.05, corrected cluster level). Similarly, there were no correlations between the extracted ROIs and PANSS symptom scores (all p > 0.06).
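The reported lack of association between treatment and the performance split can be checked arithmetically from the cell counts given above (SGA: 17 low / 20 high; FGA: 4 low / 2 high). A minimal sketch, assuming a standard Pearson chi-square without continuity correction, closely reproduces the reported statistic:

```python
# Closely reproduces the reported chi-square for the 2x2 treatment-by-performance
# table (SGA: 17 low / 20 high; FGA: 4 low / 2 high), assuming no continuity correction.
from scipy.stats import chi2_contingency

table = [[17, 20],   # SGA: low-, high-performing
         [4, 2]]     # FGA: low-, high-performing
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.2f}")  # approx. chi2 = 0.89, df = 1, p = 0.35
```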
[ "Participants", "fMRI task", "fMRI data acquisition", "N-back performance data analysis", "fMRI data analysis", "Sample description", "Working memory task performance", "BOLD response by group and load", "Association of BOLD with performance level in the patient group", "Effects of antipsychotic treatment", "Association of BOLD with symptom severity", "Patient-control differences", "Performance effects in patients", "Treatment effects in patients", "Limitations", "" ]
[ "Patients with a DSM-IV diagnosis of schizophrenia (American Psychiatric Association 1994) were recruited from the South London and Maudsley catchment area. Patients were (ii) if treated, on stable doses of antipsychotics for ≥2 years and on their present antipsychotic for >3 months, (iii) in a stable phase of the illness with stable symptoms for at least 3 months, (iv) living in the community or on long-stay/rehabilitation wards, and (v) free of any co-morbid psychiatric diagnosis. Symptom severity was rated using the Positive and Negative Syndrome Scale (PANSS; Kay et al. 1987). Healthy controls were recruited via local advertisements from the same geographical area as the patients. Controls were required to be free from past or current drug abuse and were screened by an experienced psychiatrist to exclude current psychiatric conditions using the Structured Clinical Interview for DSM Axis I Disorders (First et al. 1996). For both groups, only right-handed participants were included. An additional inclusion criterion for all participants was the absence of a history of neurological conditions or head injury. Participants provided written informed consent, and the study had research ethics committee permission.",

"The task (see Kumari et al. 2009) was a parametric spatial n-back paradigm that involved monitoring locations of dots within a diamond-shaped box on the screen at a given delay from the original occurrence (0-back, 1-back, or 2-back). Participants viewed the paradigm projected onto a screen through a prismatic mirror. They were required to press, with the right thumb, the button corresponding to the correct location of the 0-back, 1-back, or 2-back stimulus on every trial (location of dots random). The level of chance performance was 25%. Each condition (0-back, 1-back, 2-back) was presented five times in 30-s blocks in pseudo-random order, controlling for order effects. Each block contained 15 stimulus presentations (stimulus duration = 450 ms, interstimulus interval = 1,500 ms) and began with a 750-ms text delay allowing the participants to notice a change in task demand/condition. There were 15-s rest blocks (presentation of the word “Rest” on the screen) between active blocks and following the last active block (thus, there was a total of 15 rest blocks). The experiment lasted 11.25 min. Participants were requested to abstain from alcohol for at least 24 h before testing and underwent task familiarisation before scanning.",

"Echoplanar MR images of the whole brain were acquired using a Signa scanner (General Electric, Milwaukee, Wisconsin) at 1.5-Tesla field strength. In each of 16 near-axial non-contiguous planes parallel to the intercommissural plane, 225 T2*-weighted MR images depicting BOLD contrast were acquired over the experiment with echo time (TE) = 40 ms, repetition time (TR) = 3,000 ms, field of view (FOV) = 240 mm, in-plane resolution = 3.75 × 3.75 mm, slice thickness = 7.0 mm, and interslice gap = 0.7 mm.",

"The key dependent variable measuring the success of n-back task performance was the percentage of correct responses. Additionally, we investigated the percentage of omission errors (failures to respond) and the latency of correct responses (in milliseconds). Data were analysed in SPSS Release 15.0.0 using repeated-measures analysis of variance (ANOVA) with group (patients, controls) as between-subjects factor and load (0-back, 1-back, 2-back) as within-subjects factor to assess main and interaction effects. Eta squared (ηp²) is given as a measure of effect size.",

"fMRI data analysis was carried out using Statistical Parametric Mapping software (SPM5; http://www.fil.ion.ucl.ac.uk/spm) running in Matlab R2008a (The MathWorks Inc.). Images were first motion corrected, spatially transformed to the MNI template, and smoothed with an 8-mm full-width-at-half-maximum Gaussian filter. A high-pass filter with a cut-off of 128 s was applied. Data were analysed within the framework of the general linear model implemented in SPM5. At the single-subject level, contrast maps for each of the three conditions (0-back, 1-back, 2-back) were created with rest as implicit baseline by modelling, at each voxel, each condition using a boxcar function that incorporates the delay inherent in the haemodynamic response (an illustrative sketch of such regressor construction follows at the end of this section). Motion parameters obtained from the realignment pre-processing step were included as covariates at this stage. The resulting maps were entered into a random-effects procedure at the second level to investigate the main effect of group (patients, controls), the main effect of load (0-back, 1-back, 2-back), and the group-by-load interaction. The threshold of significance for these analyses was set at p = 0.05 (corrected cluster level) with a height threshold of p < 0.001 at the voxel level. The group-by-load interaction effect was considered the first key result in this study, viz. differences between patients and controls in the brain functional response to parametric working memory increase. Therefore, while subsequent analyses of performance and treatment effects within the patient group were carried out first using an anatomically unconstrained whole-brain voxelwise method in SPM5, any such analyses were also repeated using a regions-of-interest (ROI) approach. ROIs were created by extracting the signal (using the MarsBaR toolbox implemented in SPM5; Brett et al. 2002) from clusters that showed a significant group-by-load interaction. Statistical analyses of these ROIs in relation to performance and treatment were carried out using ANOVA in SPSS 15.0.0 as described below; these analyses thus allowed us to address the question of how areas in which patients differ from controls as a function of processing demands are associated with task performance and antipsychotic treatment within the patient group. In order to investigate the role of performance levels on BOLD signal within the patient group, a performance (high, low) variable was created by splitting the patient group into those with high and those with low overall percentage of correct responses along the median of the patient group (the subject with the median score was excluded). These two patient groups were first compared to each other in clinical, demographic, and performance data. The two groups were then compared to each other in BOLD data, first across the entire brain in SPM5 and then restricted to ROIs using ANOVA in SPSS with performance (high, low) as between-subjects factor and load (0-back, 1-back, 2-back) as within-subjects factor. The effect size measure eta squared (ηp²) is given for these effects. In order to assess effects of antipsychotic treatment on task performance and BOLD response, the patient sample was split into those treated with FGA (N = 6) and those treated with SGA (N = 38) antipsychotics (one patient was unmedicated at the time of fMRI scanning and was excluded for the purpose of this analysis). These two groups were then compared to each other in clinical, demographic, and performance data. The two groups were also compared to each other in BOLD data, first across the entire brain in SPM5 and then restricted to ROIs using ANOVA in SPSS with treatment (FGA, SGA) as between-subjects factor and load (0-back, 1-back, 2-back) as within-subjects factor. Eta squared (ηp²) is again given. Finally, the relationship between BOLD and symptom severity (PANSS) was investigated in the patient sample (first across all voxels in the whole brain in SPM5 and subsequently in the extracted ROIs in SPSS). MNI coordinates of peaks within significant clusters were transformed to Talairach coordinates using a non-linear algorithm (see http://imaging.mrc-cbu.cam.ac.uk/imaging/MniTalairach; an illustrative sketch of such a conversion also follows at the end of this section). The atlas of Talairach and Tournoux (1988) was used to identify anatomic labels and Brodmann areas (BA)." ]
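The task timing described above (fifteen 30-s active blocks, each followed by a 15-s rest block; TR = 3 s; 225 volumes; 11.25 min in total) and the boxcar-plus-haemodynamic-delay modelling can be illustrated with a small, self-contained sketch. This is not the authors' SPM5 pipeline; the block order, the double-gamma HRF parameters, and all variable names are illustrative assumptions.

```python
# Illustrative sketch of a block-design regressor set (not the authors' SPM5 code).
# Assumed/illustrative: block order, HRF parameters, variable and file names.
import numpy as np
from scipy.stats import gamma

TR, n_vols = 3.0, 225                 # repetition time (s), volumes acquired
dt = 0.1                              # high-resolution time grid (s)
t_hr = np.arange(0, n_vols * TR, dt)  # 0 .. 674.9 s in 0.1-s steps

# Illustrative block order (the study used a pseudo-random order, 5 blocks per condition).
block_order = ["0-back", "1-back", "2-back"] * 5
onsets = 45.0 * np.arange(15)         # each cycle: 30-s task block + 15-s rest block

# Simple double-gamma HRF (peak around 5-6 s, later undershoot) on the fine grid.
hrf_t = np.arange(0, 32, dt)
hrf = gamma.pdf(hrf_t, 6) - gamma.pdf(hrf_t, 16) / 6.0
hrf /= hrf.sum()

step = round(TR / dt)                 # fine-grid samples per volume
design = {}
for cond in ("0-back", "1-back", "2-back"):
    boxcar = np.zeros_like(t_hr)
    for onset, name in zip(onsets, block_order):
        if name == cond:
            boxcar[(t_hr >= onset) & (t_hr < onset + 30.0)] = 1.0
    # Convolve with the HRF to incorporate the haemodynamic delay, then sample at TR.
    reg = np.convolve(boxcar, hrf)[: len(t_hr)]
    design[cond] = reg[::step][:n_vols]

X = np.column_stack([design[c] for c in ("0-back", "1-back", "2-back")])
print(X.shape)  # (225, 3): one regressor per load condition
```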
Antidepressants were administered to two patients on FGA and 15 patients on SGA.", "Working memory task performance data are shown in Table 2. The patient and control groups did not significantly differ in percent correct trials (F[1,62]  =  2.17, p = 0.15, η\np2  = 0.03). There was an effect of load on percent correct trials (F[2,124]  =  73.47, p  <  0.001, η\np2  =  0.54), indicating fewer correct responses with increasing load, but no group-by-load interaction (F[2,124]  =  1.27, p  =  0.29, η\np2  =  0.02).\nTable 2\nN-back task performance data by group Patients (N = 45)Controls (N = 19)0-back correct responses84.89 (19.15)87.51 (13.91)1-back correct responses62.10 (28.92)74.89 (22.42)2-back correct responses43.18 (26.54)51.58 (23.19)0-back omissions8.03 (9.62)8.56 (9.26)1-back omissions15.75 (16.23)14.81 (10.96)2-back omissions27.49 (20.92)29.72 (18.15)0-back latency233.95 (130.96)191.36 (145.83)1-back latency296.41 (210.07)293.70 (145.88)2-back latency490.24 (353.46)390.63 (213.42)Data represent means. Standard deviations are given in parentheses. Correct responses and omissions are given in percent; latency is given in milliseconds.\n\n\nN-back task performance data by group\nData represent means. Standard deviations are given in parentheses. Correct responses and omissions are given in percent; latency is given in milliseconds.\nThe groups also did not significantly differ in the latency to correct trials (F[1,61]  =  1.00, p  =  0.32, η\np2  =  0.02). As with percent correct responses, there was an effect of load on latency (F[2,122]  =  17.11, p  <  0.001, η\np2  =  0.22), indicating longer latencies with increasing load, but no group-by-load interaction (F[2,122]  =  0.80, p  =  0.45, η\np2  =  0.01).\nSimilarly, there was an effect of load on the percentage of omission errors (F[2,124]  =  38.24, p < 0.001, η\np2  = 0.38), indicating more omission errors with increasing load, but no effect of group (F[1,62]  =  0.04, p  =  0.85, η\np2  =  0.001) or group-by-load interaction (F[2,124]  =  0.23, p  =  0.80, η\np2  =  0.004).", "Both groups activated an extensive fronto-parieto-striato-thalamo-cerebellar network during performance of the n-back task. Figure 1 shows the activation of the combined group as a function of load (i.e., increase in BOLD signal with increasing working memory load); the main effect of task (across conditions of load) in the combined sample gave a very similar result.\nFig. 1Main effect of load on BOLD across groups. The figure depicts significant BOLD response as a function of load on the n-back working memory task across both groups (p  <  0.05, FWE corrected voxel level)\n\nMain effect of load on BOLD across groups. The figure depicts significant BOLD response as a function of load on the n-back working memory task across both groups (p  <  0.05, FWE corrected voxel level)\nThe main effect of group (across conditions of load) revealed three clusters of increased activation (p  <  0.05 corrected cluster level) in patients relative to controls: (1) occipital cortex (BA17; Talairach coordinates of peak voxel, x  =  −20, y  =  −88, z  =  −9; 857 voxels; Z  =  4.39) extending into the cerebellum; (2) left middle frontal gyrus (BA9; x  =  −46, y  =  15, z  =  36; 1,913 voxels; Z  =  4.26) extending into pre- and postcentral gyrus; and (3) right inferior frontal gyrus (BA45/47; x  =  46, y  =  18, z  =  18; 854 voxels; Z  =  4.06) extending into right middle frontal gyrus. 
In contrast, there were no significant increases in controls relative to patients at the corrected or uncorrected cluster level and there were no significant voxels at the chosen height threshold (p  <  0.001).\nThe group-by-load interaction (Fig. 2) revealed two significant clusters (p  <  0.05 corrected cluster level): (1) left middle frontal gyrus (BA9; x  =  −46, y  =  17, z  =  36; 2,032 voxels; Z  =  4.43) and (2) right inferior frontal gyrus (BA45/47; x  =  46, y  =  18, z  =  18; 749 voxels; Z  =  3.94) extending into right middle frontal gyrus. A third cluster with a peak in the occipital cortex (BA17; x  = −22, y  =  −86, z  =  −11; 535 voxels; Z  =  4.10) showed the same pattern but narrowly missed the level of statistical significance (p  =  0.054, corrected cluster level). These clusters showed stronger increases across load in patients than in controls. On the other hand, there were no significant clusters showing stronger increases across load in controls than in patients at the corrected or uncorrected cluster level, and there were no significant voxels at the chosen height threshold (p  <  0.001).\nFig. 2Group-by-load interactions in BOLD response. The upper part of the figure depicts in red the areas that show a significant main effect of load and in blue the two prefrontal areas that show a significant group-by-load interaction. The selection of the two coronal slices corresponds to the Talairach y coordinates of the peak voxel for the right (y  = 18) and left (y  = 17) PFC clusters, respectively. The lower parts of the figure depict the nature of the interaction effects separately for left and right PFC\n\nGroup-by-load interactions in BOLD response. The upper part of the figure depicts in red the areas that show a significant main effect of load and in blue the two prefrontal areas that show a significant group-by-load interaction. The selection of the two coronal slices corresponds to the Talairach y coordinates of the peak voxel for the right (y  = 18) and left (y  = 17) PFC clusters, respectively. The lower parts of the figure depict the nature of the interaction effects separately for left and right PFC\nThe prefrontal clusters that emerged in this interaction analysis appeared not to be part of the activation seen in response to load across both groups (Fig. 1) but appeared to be neighbouring it. In order to verify this, the group-by-load interaction contrast was masked with the contrast image resulting from the main effect of load; the two prefrontal areas from the interaction analysis also emerged in this masked analysis, indicating that these are voxels that show an additional increase with load in the patient group but not in the controls and not in the combined group.\nTo better understand the origin of these group-by-load interaction effects, the mean BOLD signal in each of the two frontal clusters that showed a significant interaction effect was extracted as described above and repeated measures ANOVAs with the within-subjects factor of load were run in SPSS separately for the two groups. In the left prefrontal cluster, we found a linear increase in BOLD as a function of load in the patient group (F[2,88]  =  6.65, p  = 0.002, η\np2  =  0.13) but not in the controls (F[2,36]  =   0.35, p  =  0.71, η\np2  =  0.02). 
Similarly, in the right prefrontal cluster, there was a linear increase in BOLD as a function of load in the patient group (F[2,88] = 8.26, p = 0.001, ηp² = 0.16) but not in the controls (F[2,36] = 0.77, p = 0.47, ηp² = 0.04).

Taken together, this pattern indicates that patients showed a significantly greater BOLD increase in response to working memory load than controls in lateral prefrontal cortical areas that did not show a significant main effect of load.", "In keeping with the analysis plan, the patient group was split into those with high and low performance along the patient group's median (62.49%), yielding two groups of N = 22 each (the subject with the median score was excluded). N-back data of the two groups are summarised in Table 3. Importantly, the high- and low-performing patient groups did not differ from each other in any demographic or clinical variables (all p > 0.05). However, as expected, the two patient groups differed significantly from each other in percent correct responses at each level of load (all F > 11.88, p < 0.002). They also differed with regard to omission errors (all F > 6.82, p < 0.02) but not in reaction time (all F < 1.85, p > 0.18).

Table 3 Performance data by patient subgroups

                            Performance subgroups              Treatment subgroups
                            High (N = 22)    Low (N = 22)      First-generation (N = 6)  Second-generation (N = 38)
  0-back correct responses  93.52 (5.77)     75.64 (23.63)     89.11 (9.22)              84.07 (20.50)
  1-back correct responses  85.07 (10.73)    39.87 (23.55)     60.48 (31.03)             63.27 (28.81)
  2-back correct responses  62.17 (23.96)    24.20 (12.11)     43.59 (30.75)             43.77 (26.32)
  0-back omissions           4.61 (4.15)     11.76 (12.16)      8.00 (6.14)               8.07 (10.24)
  1-back omissions           7.34 (6.11)     24.74 (18.65)     24.05 (21.22)             14.21 (15.39)
  2-back omissions          18.88 (17.24)    36.92 (20.80)     37.95 (24.36)             25.26 (20.11)
  0-back latency            220.16 (103.08)  254.48 (153.69)   268.35 (129.35)           218.62 (117.80)
  1-back latency            252.88 (144.17)  303.00 (195.38)   442.97 (228.72)           264.07 (193.76)
  2-back latency            403.39 (267.89)  537.96 (380.03)   460.99 (237.67)           488.19 (372.91)

Data represent means. Standard deviations are given in parentheses. Correct responses and omissions are given in percent; latency is given in milliseconds. The performance and treatment groups were not significantly associated (see “Results”).

At the level of BOLD, each patient subgroup showed similar activation increases in response to load as the combined group (see Online Resource 1). An anatomically unconstrained voxelwise comparison between the two patient groups did not find a main effect of performance, and there was no performance-by-load interaction (at p = 0.05, corrected cluster level). A comparison of the prefrontal clusters that had shown group-by-load interactions above similarly did not yield significant main effects of performance for either cluster (both p > 0.79). However, there was a significant performance-by-load interaction for the left (F[2,84] = 4.09, p = 0.02, ηp² = 0.09) but not the right (p = 0.24, ηp² = 0.03) cluster. In order to better understand this interaction, within-subject ANOVAs of load in the left prefrontal cluster were calculated separately for the two performance groups.
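The median split and the performance-by-load test described in this section can be sketched as follows; this is an illustration with simulated numbers rather than study data, and pingouin's mixed_anova stands in for the mixed-design ANOVA run in SPSS.

```python
# Minimal sketch with simulated data: split 45 patients at the median of overall percent
# correct (excluding the patient at the median) and test the performance-by-load
# interaction on the extracted left prefrontal ROI signal.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
subjects = [f"p{i:02d}" for i in range(1, 46)]            # 45 patients, as in the study
perf = pd.DataFrame({"subject": subjects,
                     "pct_correct": rng.uniform(20, 95, len(subjects))})
roi = pd.DataFrame({"subject": np.repeat(subjects, 3),
                    "load": ["0back", "1back", "2back"] * len(subjects),
                    "bold": rng.normal(0.2, 0.1, 3 * len(subjects))})  # placeholder ROI means

median = perf["pct_correct"].median()
split = perf[perf["pct_correct"] != median].copy()        # patient at the median is excluded
split["performance"] = np.where(split["pct_correct"] > median, "high", "low")

df = roi.merge(split[["subject", "performance"]], on="subject")

# performance (between) x load (within) mixed ANOVA on the ROI signal
aov = pg.mixed_anova(data=df, dv="bold", within="load", between="performance", subject="subject")
print(aov)
```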
These follow-up ANOVAs showed that the high-performing (F[2,42] = 9.63, p < 0.001, ηp² = 0.31), but not the low-performing (p = 0.83, ηp² = 0.009), patients displayed a statistically significant linear increase in BOLD with load (Fig. 3).

Fig. 3 Performance-by-load interaction in BOLD response. The figure shows the performance-by-load interaction for the left PFC. The patient sample is split along the median correct response rate into those with high performance (N = 22) and those with low performance (N = 22). The x axis shows the three conditions of the n-back working memory task", "The split according to treatment yielded two groups of patients, those treated with FGA (N = 6) and those treated with SGA (N = 38); the single unmedicated patient was excluded. N-back performance data of the treatment groups are summarised in Table 3. There were no significant associations of treatment with any demographic variables, clinical variables, or anticholinergic, antidepressant, mood stabiliser, or benzodiazepine co-medication (all p > 0.34), and treatment was not significantly associated with the performance classification (χ² = 0.89, df = 1, p = 0.35): in the SGA group, there were 17 low- and 20 high-performing patients, and in the FGA group, there were four low- and two high-performing patients. There were no main effects of treatment (all p > 0.18) or treatment-by-load interactions (all p > 0.22) on n-back task performance; therefore, any treatment effects on BOLD reported below are considered to be independent of performance effects.

At the level of BOLD, an unrestricted voxelwise analysis did not yield any significant differences between the two treatment groups or treatment-by-load interactions (all p > 0.05, corrected cluster level). When considering the extracted prefrontal ROIs, no main effects of treatment were seen in either cluster (both p > 0.51). However, there was a significant treatment-by-load interaction (F[2,84] = 5.16, p = 0.008, ηp² = 0.11) in the left prefrontal cluster. This interaction (Fig. 4) indicated that patients treated with SGA showed an increase in BOLD as a function of load, whereas the patients treated with FGA showed a reduction. The interaction effect appeared similar in the right prefrontal cluster but did not reach statistical significance (p = 0.13). A further split of the SGA group into patients treated with clozapine (N = 10), risperidone (N = 7), and olanzapine (N = 16) did not yield any significant group or interaction effects for the prefrontal ROIs. Therefore, the SGA group as a whole showed a stronger parametric left prefrontal BOLD increase than the FGA group, with no significant differences detected between different SGA compounds.

Fig. 4 Treatment-by-load interaction in BOLD response. The figure shows the treatment-by-load interaction for the left PFC. The patient sample is split into those treated with first-generation antipsychotics (N = 6) and those treated with second-generation antipsychotics (N = 38). The x axis shows the three conditions of the n-back working memory task
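As a worked check of the treatment-by-performance contingency reported above, the cell counts given in the text reproduce the reported statistic when Pearson's chi-square is computed without continuity correction (a sketch for illustration, not the authors' code):

```python
# 2 x 2 contingency of antipsychotic class by performance subgroup, using the counts
# reported in the text (SGA: 17 low / 20 high; FGA: 4 low / 2 high performers).
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([[17, 20],    # SGA: low, high performers
                   [4, 2]])     # FGA: low, high performers

chi2, p, dof, expected = chi2_contingency(counts, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2f}")   # chi2(1) = 0.89, p = 0.35
```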
", "There were no significant correlations of BOLD from the combined n-back activations with PANSS total or subscale scores (all p > 0.05, corrected cluster level). Similarly, there were no correlations between the extracted ROIs and PANSS symptom scores (all p > 0.06).", "Working memory deficits in schizophrenia have been demonstrated repeatedly, and it is important to understand this impairment in relation to the pathophysiology, genetics, and treatment of schizophrenia (Barch and Smith 2008; Goldman-Rakic 1994). However, there is still a considerable lack of clarity concerning the neural mechanisms that mediate these cognitive impairments. Given the importance of the neurophysiological level of analysis in genetic (Meyer-Lindenberg and Weinberger 2006) and pharmacological (Migo et al. 2011) studies of schizophrenia, the aim of this study was to further address this issue.

Patients and controls did not differ in working memory performance in this study (see also Tan et al. 2005). At the level of BOLD, patients with schizophrenia displayed significantly stronger activation than controls in lateral prefrontal cortex and left occipital cortex (extending into the cerebellum) during task performance. Additionally, the group difference in prefrontal cortex became stronger with increasing load. The prefrontal cortex clusters were located in the superior and middle frontal gyri as well as medial prefrontal cortex. These prefrontal hyperactivations in patients appeared to be recruited in addition to prefrontal areas that were activated in the main effect of working memory load in both groups.

Previous brain activation studies of schizophrenia patients have variably shown increased, decreased, or comparable frontal activation levels in patients. These discrepancies have been attributed to a number of variables including performance and task difficulty (Callicott et al. 2003; Manoach 2003; Potkin et al. 2009). As outlined in detail elsewhere (Linden 2009), the pattern of schizophrenia-control differences in BOLD signal and task performance can take on different forms, indicating qualitatively different, reduced, compensatory, aberrant, or inefficient neural responses. The results of the current study indicate that areas activated by both groups are activated to a similar extent; however, patients additionally activated lateral prefrontal areas, an effect that became more pronounced with increasing task demands. Bearing in mind that the patients have psychotic symptoms and general clinical impairments, this pattern suggests the operation of compensatory neural mechanisms that led to overall successful task performance when compared to healthy controls. A number of previous studies have similarly observed compensatory BOLD increases against a background of comparable task performance in schizophrenia patients (Karch et al. 2009; Lee et al. 2008; Potkin et al. 2009; Royer et al. 2009; Tan et al. 2005; van Raalten et al. 2008; see also meta-analysis by Minzenberg et al. 2009).
However, while the patients' recruitment of additional areas in our study may represent a behaviourally successful compensatory mechanism, it is also neurally inefficient, as it requires additional resources in order to achieve performance levels comparable to those of the controls. It is also of interest that compensatory hyperactivations have been observed not only in schizophrenia but also in other psychiatric conditions such as obsessive–compulsive disorder (Henseler et al. 2008).", "The left prefrontal BOLD increases in response to working memory load were more pronounced in patients with high levels of performance than in patients with low performance levels. The finding of performance-related activation levels in the patients is in part compatible with existing models which attribute group differences in BOLD to variation in task performance (Callicott et al. 2003; Manoach 2003). These models suggested that relatively unimpaired performance may be associated with increased BOLD signal, whereas lower performance may be associated with reductions in BOLD, e.g., hypofrontality. Given that high-performing patients in this study showed a stronger increase in activation with increasing task demands than low-performing patients, it may be speculated that low-performing patients were unable to increase their BOLD response further beyond a certain level of task difficulty, which could have resulted in (or stemmed from) their lower working memory capacity. Relating this pattern to the inverted U curves depicted by Manoach (2003) and Callicott et al. (2003), it may be surmised that low-performing patients are already at the top or on the downward slope of the inverted U.", "The cross-sectional treatment effects observed here are compatible with some previous evidence (see “Introduction”). Patients treated with SGA showed load-related increases in left prefrontal BOLD, whereas those on FGA did not; it is important to note that this effect represents an interaction of working memory load and treatment, not a treatment main effect. Importantly, the FGA and SGA patient groups did not differ in demographic, clinical, or performance variables. An analysis of individual SGA compounds (clozapine, risperidone, olanzapine) did not yield any differences amongst these. As stated earlier, a number of pharmacological fMRI studies of antipsychotic compounds in schizophrenia have shown stronger BOLD signal with SGA compared to FGA. The strongest evidence for this effect comes from longitudinal studies which have shown that SGA improve cortical activations, thus normalising previous hypoactivations (review, Davis et al. 2005; Kumari and Cooke 2006; Migo et al. 2011). The present study observed this effect in a cross-sectional design. The effect was seen despite the small number of patients treated with FGA, a factor that can be attributed to current guidelines in clinical practice.

However, it should be noted that not all previous studies have found this effect. For example, Surguladze et al. (2007) observed that patients on FGA showed an increase in ventromedial prefrontal BOLD as a function of working memory load, whereas patients treated with risperidone and healthy controls did not. Conversely, a lateral ventral prefrontal area studied by Surguladze et al. (2007) showed an increase with load only in controls but not in FGA- or risperidone-treated patients. An important difference between their study and ours is that in Surguladze et al.
(2007), patients treated with FGA displayed significantly impaired task performance relative to risperidone-treated patients, whereas in our study there were no significant drug effects on the level of task performance. Furthermore, there is evidence for relative functional specificity within prefrontal cortices. For example, studies have shown bilaterally increased dorsal prefrontal (DPFC; BA9/46) activity during rule discovery but increased ventral prefrontal (VPFC; BA47/12) activity with changes in card-sorting rule on the WCST (Monchi et al. 2001). It is possible that SGAs have different effects on dorsal and ventral prefrontal brain regions.

The observation that working memory task performance did not differ between the two treatment groups in this study is noteworthy, given that there were differences between the two groups at the level of BOLD in response to working memory load increase. Dissociations of effects at behavioural and neural levels of analysis in fMRI designs have previously been described in relation to experimental cognitive neuroscience (Wilkinson and Halligan 2004), but similar arguments may apply to pharmacological fMRI studies (Migo et al. 2011). Specifically, in this instance, it may be argued that our data support the notion that the BOLD signal may be a more sensitive measure of pharmacological treatment effects than behavioural measures (Honey and Bullmore 2004), analogous to findings in the neurochemical imaging literature (Fannon et al. 2003).

A likely pharmacological explanation of the treatment effects in this study may be the richer pharmacological profile of SGA compounds, which rely not just on D2 receptor antagonism but also act on other neurotransmitter systems such as the serotonergic system. However, the group of SGA compounds used in this study is heterogeneous, thus not allowing a definitive explanation of the neurotransmitter mechanisms underlying the effects observed here.", "This study has a number of limitations. First, the patient group treated with FGA was small (N = 6). This limitation stems from the fact that we used a naturalistic design in a clinical environment where the majority of schizophrenia patients are treated with SGA. The small size of the FGA group could also explain the lack of differences between the two patient groups in demographic, clinical, or performance measures. An additional limitation is that the group of SGA compounds is pharmacologically heterogeneous; therefore, it cannot be concluded that there is a single mechanism common to all these compounds by which they affected BOLD in this study. It should also be noted that the performance-by-load and treatment-by-load interactions were observed only when considering individual ROIs but not at the voxelwise level. Previous studies have noted that antipsychotic effects at the level of BOLD may be of small to moderate effect size (Meisenzahl et al. 2006); therefore, replication using larger samples will be important. A further limitation is the fact that some patients were co-medicated with other compounds, which may influence cognition and brain function. Finally, the effects of pharmacological treatment should ideally be investigated using more powerful longitudinal designs rather than cross-sectional studies.
Future longitudinal studies are needed to further advance this field.", "Below is the link to the electronic supplementary material.

Online Resource 1: The figure depicts significant BOLD response as a function of load on the n-back working memory task in each of the two patient subgroups (patients with high performance and patients with low performance) (p < 0.05, FWE corrected voxel level) (DOC 252 kb)" ]
[ "Introduction", "Method", "Participants", "fMRI task", "fMRI data acquisition", "N-back performance data analysis", "fMRI data analysis", "Results", "Sample description", "Working memory task performance", "BOLD response by group and load", "Association of BOLD with performance level in the patient group", "Effects of antipsychotic treatment", "Association of BOLD with symptom severity", "Discussion", "Patient-control differences", "Performance effects in patients", "Treatment effects in patients", "Limitations", "Electronic supplementary material", "" ]
[ "Working memory is defined as the capacity to mentally maintain and manipulate information over short time periods. As such, it is an important cognitive function underlying a range of behaviours and everyday tasks. Working memory deficits are frequently observed in schizophrenia (Barch 2005; Goldman-Rakic 1994; Park and Holzman 1992). The deficit has been observed at different illness stages and may be a predictor of clinical and functional outcome as well as a relevant target for treatments aimed at cognitive enhancement (Green and Nuechterlein 2004; Greenwood et al. 2005; Liddle 2000).\nBasic neuroscience studies have shown that the dorsolateral prefrontal cortex (DLPFC) as well as parietal cortex and subcortical projection targets are of critical importance in working memory (Barch 2005; Gruber and von Cramon 2003). However, despite the often replicated observation of working memory impairments in schizophrenia, brain activation studies using functional magnetic resonance imaging (fMRI) have not yet reliably identified the neural circuits underlying this impairment. While initial studies observed reduced activation in frontal areas in patients (termed hypofrontality), other studies have reported increased activation or no significant differences between patients and controls (for review, see (Glahn et al. 2005; Linden 2009; Manoach 2003). A variety of factors such as task design and patient characteristics are likely to play a role, but task difficulty, subject performance levels, and pharmacological treatment status are considered to be particularly important factors.\nRegarding the related factors of task difficulty and performance, it has been argued (Manoach 2003) and demonstrated empirically (Callicott et al. 2003; Potkin et al. 2009) that activation levels may fall along an inverted U shape distribution reflecting task difficulty. This curve may differ between patients and healthy controls, with patients showing hyperactivations relative to controls at easier task difficulty levels (or when no behavioural impairments are seen) and hypoactivations at greater task difficulty (or when behavioural impairments are seen).\nRegarding pharmacological treatment, the effects of antipsychotic medication have been addressed in a number of studies. These compounds have been shown to influence the blood-oxygen-level-dependent (BOLD) signal during performance of a variety of cognitive and behavioural paradigms. Specifically, a number of studies have shown normalisation of hypofunction in schizophrenia with second-generation (SGA) but not first-generation antipsychotics (FGA) across a range of neurocognitive paradigms (Braus et al. 1999; Honey et al. 1999; Jones et al. 2004; Kumari et al. 2007; Meisenzahl et al. 2006; Stephan et al. 2001; for review see Davis et al. 2005; Kumari and Cooke 2006). However, not all studies have observed this effect (Surguladze et al. 2007), and other work points to task dependence of treatment effects. For example, Schlagenhauf et al. (2008) found that switching from FGA to olanzapine led to increased BOLD in the left DLPFC during a 0-back attentional condition but not during a 2-back working memory condition, relative to baseline.\nThe aims of this study were to further explore the macroscopic neural circuits underlying working memory task performance in a large sample of schizophrenia patients. 
Antidepressants were administered to two patients on FGA and 15 patients on SGA.\nA total of 64 participants completed the study, consisting of 45 patients with schizophrenia and 19 controls. Demographic and clinical data are summarised in Table 1. The patient and control groups did not differ significantly on demographic variables with the exception of years spent in full-time education, which may be expected given that schizophrenia is commonly associated with lower than expected educational achievement (Green 2001; see also Surguladze et al. 2007).\nTable 1Demographic and clinical data Patients (N = 45)Controls (N = 19)Group comparisonAge (years)37.33 (8.19)33.32 (9.21)\nt =−1.73, df = 62, p = 0.09Gender (N male/N female)35/1012/7\nχ\n2 = 1.46, df = 1, p = 0.23Ethnicity (N Caucasian/N other)19/2612/7\nχ\n2 = 2.34, df = 1, p = 0.13Years of education13.36 (2.33)14.95 (2.92)\nt = 2.32, df = 62, p = 0.02Parental SES2.58 (1.08)2.26 (1.20)\nZ = −1.06, p = 0.29Duration of illness (years)13.49 (9.78)––Age of onset (years)23.84 (6.58)––Antipsychotic treatment (N SGA/N FGA)38/6a\n––PANSS positive symptoms16.18 (4.72)––PANSS negative symptoms17.67 (4.48)––PANSS general psychopathology32.49 (6.01)––PANSS total score66.33 (12.66)––Data represent means (and standard deviations) unless indicated otherwise. Socio-economic status (SES) is measured in professional achievement from 1 (professional) to 4 (manual). All participants were right-handed\nPANSS positive and negative syndrome scale, SGA second-generation antipsychotics, FGA first-generation antipsychotics\naOne patient was untreated at the time of fMRI scanning\n\nDemographic and clinical data\nData represent means (and standard deviations) unless indicated otherwise. Socio-economic status (SES) is measured in professional achievement from 1 (professional) to 4 (manual). All participants were right-handed\n\nPANSS positive and negative syndrome scale, SGA second-generation antipsychotics, FGA first-generation antipsychotics\n\naOne patient was untreated at the time of fMRI scanning\nThe majority of patients (N  =  38) were treated with SGA (clozapine  =  10, risperidone  =  7, olanzapine  =  16, amisulpride  =  2, aripiprazole  =  3), six patients were treated with FGA (haloperidol  =  1, flupentixol  =  2, sulpiride  =  2, chlorpromazine  =  1), and one patient was untreated at time of fMRI testing.\nA number of patients were co-medicated with anticholinergic, benzodiazepine, mood stabilising, or antidepressant compounds. Anticholinergic compounds were administered to three patients on FGA and to two patients on SGA. Benzodiazepines were administered to four patients on SGA. Mood stabilisers were administered to one patient on FGA and eight patients on SGA. Antidepressants were administered to two patients on FGA and 15 patients on SGA.\n[SUBTITLE] Working memory task performance [SUBSECTION] Working memory task performance data are shown in Table 2. The patient and control groups did not significantly differ in percent correct trials (F[1,62]  =  2.17, p = 0.15, η\np2  = 0.03). 
There was an effect of load on percent correct trials (F[2,124]  =  73.47, p  <  0.001, η\np2  =  0.54), indicating fewer correct responses with increasing load, but no group-by-load interaction (F[2,124]  =  1.27, p  =  0.29, η\np2  =  0.02).\nTable 2\nN-back task performance data by group Patients (N = 45)Controls (N = 19)0-back correct responses84.89 (19.15)87.51 (13.91)1-back correct responses62.10 (28.92)74.89 (22.42)2-back correct responses43.18 (26.54)51.58 (23.19)0-back omissions8.03 (9.62)8.56 (9.26)1-back omissions15.75 (16.23)14.81 (10.96)2-back omissions27.49 (20.92)29.72 (18.15)0-back latency233.95 (130.96)191.36 (145.83)1-back latency296.41 (210.07)293.70 (145.88)2-back latency490.24 (353.46)390.63 (213.42)Data represent means. Standard deviations are given in parentheses. Correct responses and omissions are given in percent; latency is given in milliseconds.\n\n\nN-back task performance data by group\nData represent means. Standard deviations are given in parentheses. Correct responses and omissions are given in percent; latency is given in milliseconds.\nThe groups also did not significantly differ in the latency to correct trials (F[1,61]  =  1.00, p  =  0.32, η\np2  =  0.02). As with percent correct responses, there was an effect of load on latency (F[2,122]  =  17.11, p  <  0.001, η\np2  =  0.22), indicating longer latencies with increasing load, but no group-by-load interaction (F[2,122]  =  0.80, p  =  0.45, η\np2  =  0.01).\nSimilarly, there was an effect of load on the percentage of omission errors (F[2,124]  =  38.24, p < 0.001, η\np2  = 0.38), indicating more omission errors with increasing load, but no effect of group (F[1,62]  =  0.04, p  =  0.85, η\np2  =  0.001) or group-by-load interaction (F[2,124]  =  0.23, p  =  0.80, η\np2  =  0.004).\nWorking memory task performance data are shown in Table 2. The patient and control groups did not significantly differ in percent correct trials (F[1,62]  =  2.17, p = 0.15, η\np2  = 0.03). There was an effect of load on percent correct trials (F[2,124]  =  73.47, p  <  0.001, η\np2  =  0.54), indicating fewer correct responses with increasing load, but no group-by-load interaction (F[2,124]  =  1.27, p  =  0.29, η\np2  =  0.02).\nTable 2\nN-back task performance data by group Patients (N = 45)Controls (N = 19)0-back correct responses84.89 (19.15)87.51 (13.91)1-back correct responses62.10 (28.92)74.89 (22.42)2-back correct responses43.18 (26.54)51.58 (23.19)0-back omissions8.03 (9.62)8.56 (9.26)1-back omissions15.75 (16.23)14.81 (10.96)2-back omissions27.49 (20.92)29.72 (18.15)0-back latency233.95 (130.96)191.36 (145.83)1-back latency296.41 (210.07)293.70 (145.88)2-back latency490.24 (353.46)390.63 (213.42)Data represent means. Standard deviations are given in parentheses. Correct responses and omissions are given in percent; latency is given in milliseconds.\n\n\nN-back task performance data by group\nData represent means. Standard deviations are given in parentheses. Correct responses and omissions are given in percent; latency is given in milliseconds.\nThe groups also did not significantly differ in the latency to correct trials (F[1,61]  =  1.00, p  =  0.32, η\np2  =  0.02). 
As with percent correct responses, there was an effect of load on latency (F[2,122]  =  17.11, p  <  0.001, η\np2  =  0.22), indicating longer latencies with increasing load, but no group-by-load interaction (F[2,122]  =  0.80, p  =  0.45, η\np2  =  0.01).\nSimilarly, there was an effect of load on the percentage of omission errors (F[2,124]  =  38.24, p < 0.001, η\np2  = 0.38), indicating more omission errors with increasing load, but no effect of group (F[1,62]  =  0.04, p  =  0.85, η\np2  =  0.001) or group-by-load interaction (F[2,124]  =  0.23, p  =  0.80, η\np2  =  0.004).\n[SUBTITLE] BOLD response by group and load [SUBSECTION] Both groups activated an extensive fronto-parieto-striato-thalamo-cerebellar network during performance of the n-back task. Figure 1 shows the activation of the combined group as a function of load (i.e., increase in BOLD signal with increasing working memory load); the main effect of task (across conditions of load) in the combined sample gave a very similar result.\nFig. 1Main effect of load on BOLD across groups. The figure depicts significant BOLD response as a function of load on the n-back working memory task across both groups (p  <  0.05, FWE corrected voxel level)\n\nMain effect of load on BOLD across groups. The figure depicts significant BOLD response as a function of load on the n-back working memory task across both groups (p  <  0.05, FWE corrected voxel level)\nThe main effect of group (across conditions of load) revealed three clusters of increased activation (p  <  0.05 corrected cluster level) in patients relative to controls: (1) occipital cortex (BA17; Talairach coordinates of peak voxel, x  =  −20, y  =  −88, z  =  −9; 857 voxels; Z  =  4.39) extending into the cerebellum; (2) left middle frontal gyrus (BA9; x  =  −46, y  =  15, z  =  36; 1,913 voxels; Z  =  4.26) extending into pre- and postcentral gyrus; and (3) right inferior frontal gyrus (BA45/47; x  =  46, y  =  18, z  =  18; 854 voxels; Z  =  4.06) extending into right middle frontal gyrus. In contrast, there were no significant increases in controls relative to patients at the corrected or uncorrected cluster level and there were no significant voxels at the chosen height threshold (p  <  0.001).\nThe group-by-load interaction (Fig. 2) revealed two significant clusters (p  <  0.05 corrected cluster level): (1) left middle frontal gyrus (BA9; x  =  −46, y  =  17, z  =  36; 2,032 voxels; Z  =  4.43) and (2) right inferior frontal gyrus (BA45/47; x  =  46, y  =  18, z  =  18; 749 voxels; Z  =  3.94) extending into right middle frontal gyrus. A third cluster with a peak in the occipital cortex (BA17; x  = −22, y  =  −86, z  =  −11; 535 voxels; Z  =  4.10) showed the same pattern but narrowly missed the level of statistical significance (p  =  0.054, corrected cluster level). These clusters showed stronger increases across load in patients than in controls. On the other hand, there were no significant clusters showing stronger increases across load in controls than in patients at the corrected or uncorrected cluster level, and there were no significant voxels at the chosen height threshold (p  <  0.001).\nFig. 2Group-by-load interactions in BOLD response. The upper part of the figure depicts in red the areas that show a significant main effect of load and in blue the two prefrontal areas that show a significant group-by-load interaction. 
To better understand the origin of these group-by-load interaction effects, the mean BOLD signal in each of the two frontal clusters that showed a significant interaction effect was extracted as described above, and repeated measures ANOVAs with the within-subjects factor of load were run in SPSS separately for the two groups. In the left prefrontal cluster, we found a linear increase in BOLD as a function of load in the patient group (F[2,88] = 6.65, p = 0.002, ηp² = 0.13) but not in the controls (F[2,36] = 0.35, p = 0.71, ηp² = 0.02). Similarly, in the right prefrontal cluster, there was a linear increase in BOLD as a function of load in the patient group (F[2,88] = 8.26, p = 0.001, ηp² = 0.16) but not in the controls (F[2,36] = 0.77, p = 0.47, ηp² = 0.04).

Taken together, this pattern indicates that patients showed a significantly greater BOLD increase in response to working memory load than controls in lateral prefrontal cortical areas that did not show a significant main effect of load.
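For illustration, the cluster-mean extraction and per-group load ANOVA described above could be sketched as follows outside SPM and SPSS, assuming nilearn and pingouin are available; the mask and contrast-image file names and subject IDs are hypothetical placeholders rather than the authors' pipeline.

```python
# Minimal sketch: extract mean BOLD within a cluster mask per subject and load
# condition, then test the load effect. Assumes nilearn and pingouin; all file
# names and subject IDs are hypothetical placeholders.
import pandas as pd
import pingouin as pg
from nilearn.maskers import NiftiMasker

masker = NiftiMasker(mask_img="left_pfc_cluster_mask.nii.gz")   # hypothetical mask

rows = []
for subj in ["sub-01", "sub-02", "sub-03"]:                     # hypothetical subjects
    for load in ["0back", "1back", "2back"]:
        con_img = f"{subj}_con_{load}.nii.gz"                   # hypothetical contrast image
        cluster_values = masker.fit_transform(con_img)          # voxel values in the cluster
        rows.append({"subject": subj, "load": load,
                     "bold": float(cluster_values.mean())})

roi = pd.DataFrame(rows)
# One-way repeated-measures ANOVA on the cluster means (run separately per group
# in the study); prints F, p and effect-size columns.
print(pg.rm_anova(data=roi, dv="bold", within="load", subject="subject"))
```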
Association of BOLD with performance level in the patient group

In keeping with the analysis plan, the patient group was split into those with high and low performance along the patient group's median (62.49%), yielding two groups of N = 22 each (the subject with the median score was excluded). N-back data of the two groups are summarised in Table 3. Importantly, the high- and low-performing patient groups did not differ from each other in any demographic or clinical variables (all p > 0.05). However, the two patient groups differed significantly from each other in percent correct responses at each level of load (all F > 11.88, p < 0.002), as expected. They also differed with regard to omission errors (all F > 6.82, p < 0.02) but not in reaction time (all F < 1.85, p > 0.18).

Table 3  Performance data by patient subgroups

                            Performance subgroups                 Treatment subgroups
                            High (N = 22)     Low (N = 22)        First-generation (N = 6)   Second-generation (N = 38)
0-back correct responses    93.52 (5.77)      75.64 (23.63)       89.11 (9.22)               84.07 (20.50)
1-back correct responses    85.07 (10.73)     39.87 (23.55)       60.48 (31.03)              63.27 (28.81)
2-back correct responses    62.17 (23.96)     24.20 (12.11)       43.59 (30.75)              43.77 (26.32)
0-back omissions             4.61 (4.15)      11.76 (12.16)        8.00 (6.14)                8.07 (10.24)
1-back omissions             7.34 (6.11)      24.74 (18.65)       24.05 (21.22)              14.21 (15.39)
2-back omissions            18.88 (17.24)     36.92 (20.80)       37.95 (24.36)              25.26 (20.11)
0-back latency              220.16 (103.08)   254.48 (153.69)     268.35 (129.35)            218.62 (117.80)
1-back latency              252.88 (144.17)   303.00 (195.38)     442.97 (228.72)            264.07 (193.76)
2-back latency              403.39 (267.89)   537.96 (380.03)     460.99 (237.67)            488.19 (372.91)

Data represent means; standard deviations are given in parentheses. Correct responses and omissions are given in percent; latency is given in milliseconds. The performance and treatment groups were not significantly associated (see "Results").
At the level of BOLD, each patient subgroup showed similar activation increases in response to load as the combined group (see Online Resource 1). An anatomically unconstrained voxelwise comparison between the two patient groups did not find a main effect of performance, and there was no performance-by-load interaction (at p = 0.05, corrected cluster level). A comparison of the prefrontal clusters that had shown group-by-load interactions above similarly did not yield significant main effects of performance for either cluster (both p > 0.79). However, there was a significant performance-by-load interaction for the left (F[2,84] = 4.09, p = 0.02, ηp² = 0.09) but not the right (p = 0.24, ηp² = 0.03) cluster. In order to better understand this interaction, within-subject ANOVAs of load in the left prefrontal cluster were calculated separately for the two performance groups. These analyses indicated that the high-performing (F[2,42] = 9.63, p < 0.001, ηp² = 0.31) but not the low-performing patients (p = 0.83, ηp² = 0.009) showed a statistically significant linear increase in BOLD with load (Fig. 3).

Fig. 3  Performance-by-load interaction in BOLD response. The figure shows the performance-by-load interaction for the left PFC. The patient sample is split along the median correct response rate into those with high performance (N = 22) and those with low performance (N = 22). The x axis shows the three conditions of the n-back working memory task
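As an illustration of the median split described above, the following minimal sketch (assuming pandas and NumPy; subject IDs and scores are hypothetical placeholders) divides a patient summary table into high and low performers and excludes the patient scoring exactly at the median.

```python
# Minimal sketch of the median-split procedure (hypothetical values, not study data).
import numpy as np
import pandas as pd

patients = pd.DataFrame({
    "subject": [f"sub-{i:02d}" for i in range(1, 8)],
    "pct_correct": [80.1, 62.5, 55.0, 71.3, 40.2, 66.8, 58.4],
})

median = patients["pct_correct"].median()
# The patient scoring exactly at the median is excluded, as in the text.
split = patients[patients["pct_correct"] != median].copy()
split["performance"] = np.where(split["pct_correct"] > median, "high", "low")
print(median)
print(split["performance"].value_counts())
```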
Effects of antipsychotic treatment

The split according to treatment yielded two groups of patients, those treated with FGA (N = 6) and those treated with SGA (N = 38) antipsychotics (N = 1 excluded). N-back performance data of the treatment groups are summarised in Table 3. There were no significant associations of treatment with any demographic variables, clinical variables, or anticholinergic, antidepressant, mood stabiliser, or benzodiazepine treatment (all p > 0.34), and the treatment factor did not significantly interact with the performance variable (χ² = 0.89, df = 1, p = 0.35): in the SGA group, there were 17 low- and 20 high-performing patients, and in the FGA group, there were four low- and two high-performing patients. There were no main effects of treatment (all p > 0.18) or treatment-by-load interactions (all p > 0.22) on n-back task performance; therefore, any treatment effects on BOLD reported below are considered to be independent of performance effects.
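As a cross-check, the treatment-by-performance association can be computed directly from the cell counts given above; the following sketch assumes SciPy and, with the continuity correction disabled, reproduces the reported χ² = 0.89 (df = 1, p = 0.35).

```python
# Chi-square test of the association between treatment group and performance group,
# using the cell counts reported in the text (SGA: 20 high / 17 low; FGA: 2 high / 4 low).
import numpy as np
from scipy.stats import chi2_contingency

counts = np.array([[20, 17],   # SGA: high, low performers
                   [2, 4]])    # FGA: high, low performers
chi2, p, dof, expected = chi2_contingency(counts, correction=False)
print(round(chi2, 2), dof, round(p, 2))   # prints: 0.89 1 0.35
```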
At the level of BOLD, an unrestricted voxelwise analysis did not yield any significant differences between the two treatment groups or treatment-by-load interactions (all p > 0.05, corrected cluster level). When considering the extracted prefrontal ROIs, no main effects of treatment were seen in either cluster (both p > 0.51). However, there was a significant treatment-by-load interaction (F[2,84] = 5.16, p = 0.008, ηp² = 0.11) in the left prefrontal cluster. This interaction (Fig. 4) indicated that patients treated with SGA showed an increase in BOLD as a function of load, whereas the patients treated with FGA showed a reduction. The interaction effect appeared similar in the right prefrontal cluster but did not reach statistical significance (p = 0.13). A further split of the SGA group into patients treated with clozapine (N = 10), risperidone (N = 7), and olanzapine (N = 16) did not yield any significant group or interaction effects for the prefrontal ROIs. Therefore, the SGA group as a whole showed a stronger parametric left prefrontal BOLD increase than the FGA group, with no significant differences detected between different SGA compounds.

Fig. 4  Treatment-by-load interaction in BOLD response. The figure shows the treatment-by-load interaction for the left PFC. The patient sample is split into those treated with first-generation antipsychotics (N = 6) and those treated with second-generation antipsychotics (N = 38). The x axis shows the three conditions of the n-back working memory task
Association of BOLD with symptom severity

There were no significant correlations of BOLD from the combined n-back activations with PANSS total or subscale scores (all p > 0.05, corrected cluster level). Similarly, there were no correlations between the extracted ROIs and PANSS symptom scores (all p > 0.06).

Participants

A total of 64 participants completed the study, consisting of 45 patients with schizophrenia and 19 controls. Demographic and clinical data are summarised in Table 1. The patient and control groups did not differ significantly on demographic variables with the exception of years spent in full-time education, which may be expected given that schizophrenia is commonly associated with lower than expected educational achievement (Green 2001; see also Surguladze et al. 2007).

Table 1  Demographic and clinical data

                                         Patients (N = 45)   Controls (N = 19)   Group comparison
Age (years)                              37.33 (8.19)        33.32 (9.21)        t = −1.73, df = 62, p = 0.09
Gender (N male/N female)                 35/10               12/7                χ² = 1.46, df = 1, p = 0.23
Ethnicity (N Caucasian/N other)          19/26               12/7                χ² = 2.34, df = 1, p = 0.13
Years of education                       13.36 (2.33)        14.95 (2.92)        t = 2.32, df = 62, p = 0.02
Parental SES                             2.58 (1.08)         2.26 (1.20)         Z = −1.06, p = 0.29
Duration of illness (years)              13.49 (9.78)        –                   –
Age of onset (years)                     23.84 (6.58)        –                   –
Antipsychotic treatment (N SGA/N FGA)    38/6 (a)            –                   –
PANSS positive symptoms                  16.18 (4.72)        –                   –
PANSS negative symptoms                  17.67 (4.48)        –                   –
PANSS general psychopathology            32.49 (6.01)        –                   –
PANSS total score                        66.33 (12.66)       –                   –

Data represent means (and standard deviations) unless indicated otherwise. Socio-economic status (SES) is measured as professional achievement from 1 (professional) to 4 (manual).
All participants were right-handed. PANSS, Positive and Negative Syndrome Scale; SGA, second-generation antipsychotics; FGA, first-generation antipsychotics.
(a) One patient was untreated at the time of fMRI scanning.

The majority of patients (N = 38) were treated with SGA (clozapine = 10, risperidone = 7, olanzapine = 16, amisulpride = 2, aripiprazole = 3), six patients were treated with FGA (haloperidol = 1, flupentixol = 2, sulpiride = 2, chlorpromazine = 1), and one patient was untreated at the time of fMRI testing.

A number of patients were co-medicated with anticholinergic, benzodiazepine, mood stabilising, or antidepressant compounds. Anticholinergic compounds were administered to three patients on FGA and to two patients on SGA. Benzodiazepines were administered to four patients on SGA. Mood stabilisers were administered to one patient on FGA and eight patients on SGA. Antidepressants were administered to two patients on FGA and 15 patients on SGA.
Discussion

This study used fMRI to investigate the BOLD response during a parametric spatial working memory task in stable outpatients with schizophrenia and healthy control participants. It was found that (1) patients and controls did not differ significantly in task performance, (2) both groups showed a deterioration in performance and an increase in BOLD signal in response to increasing working memory load, (3) the patient group as a whole showed stronger BOLD signal in occipital and prefrontal cortical areas, over and above the prefrontal areas activated in both groups, (4) this difference increased with increasing working memory load in prefrontal areas, (5) patients with good performance showed a greater left prefrontal BOLD increase in response to increasing working memory load than patients with poor performance, and (6) second-generation antipsychotics (SGA) were associated with a greater parametric increase in left prefrontal BOLD in response to increasing working memory load than first-generation antipsychotics (FGA), independent of the performance effects.

Patient-control differences

Working memory deficits in schizophrenia have often been demonstrated, and it is important to understand this impairment in relation to the pathophysiology, genetics, and treatment of schizophrenia (Barch and Smith 2008; Goldman-Rakic 1994). However, there is still a considerable lack of clarity concerning the neural mechanisms that mediate these cognitive impairments. Given the importance of the neurophysiological level of analysis in genetic (Meyer-Lindenberg and Weinberger 2006) and pharmacological (Migo et al. 2011) studies of schizophrenia, the aim of this study was to further address this issue.

Patients and controls did not differ in working memory performance in this study (see also Tan et al. 2005).
At the level of BOLD, it was found that patients with schizophrenia displayed significantly stronger activation than controls in lateral prefrontal cortex and left occipital cortex (extending into the cerebellum) during task performance. Additionally, the group difference in prefrontal cortex became stronger with increasing load. The prefrontal cortex clusters were located in the superior and middle frontal gyri as well as medial prefrontal cortex. These prefrontal hyperactivations in patients appeared to be recruited in addition to the prefrontal areas that were activated in the main effect of working memory load in both groups.

Previous brain activation studies of schizophrenia patients have variably shown increased, decreased, or comparable frontal activation levels in patients. These discrepancies have been attributed to a number of variables including performance and task difficulty (Callicott et al. 2003; Manoach 2003; Potkin et al. 2009). As outlined in detail elsewhere (Linden 2009), the pattern of schizophrenia-control differences in BOLD signal and task performance can take on different forms, indicating qualitatively different, reduced, compensatory, aberrant, or inefficient neural responses. The results of the current study indicate that areas activated by both groups are activated to a similar extent; however, patients additionally activated lateral prefrontal areas, an effect that became more pronounced with increasing task demands. Bearing in mind that the patients have psychotic symptoms and general clinical impairments, this pattern suggests the operation of compensatory neural mechanisms that led to overall successful task performance when compared to healthy controls. A number of previous studies have similarly observed compensatory BOLD increases on the background of comparable task performance in schizophrenia patients (Karch et al. 2009; Lee et al. 2008; Potkin et al. 2009; Royer et al. 2009; Tan et al. 2005; van Raalten et al. 2008; see also the meta-analysis by Minzenberg et al. 2009). However, while the patients' recruitment of additional areas in our study may represent a behaviourally successful compensatory mechanism, it is also neurally inefficient, as it requires additional resources in order to achieve performance levels comparable to those of the controls. Of interest, compensatory hyperactivations have been observed not only in schizophrenia but also in other psychiatric conditions such as obsessive–compulsive disorder (Henseler et al. 2008).

Performance effects in patients

The left prefrontal BOLD increases in response to working memory load were more pronounced in patients with high levels of performance than in patients with low performance levels. The finding of performance-related activation levels in the patients is in part compatible with existing models which attribute group differences in BOLD to variation in task performance (Callicott et al. 2003; Manoach 2003). These models suggested that relatively unimpaired performance may be associated with increased BOLD signal, whereas lower performance may be associated with reductions in BOLD, e.g., hypofrontality. Given that high-performing patients in this study showed a stronger increase in activation with increasing task demands than low-performing patients, it may be speculated that low-performing patients were unable to increase their BOLD response further beyond a certain level of task difficulty, which could have resulted in (or stemmed from) their lower working memory capacity. Relating this pattern to the inverted-U curves depicted by Manoach (2003) and Callicott et al. (2003), it may be surmised that low-performing patients are already at the top or on the downward slope of the inverted U.

Treatment effects in patients

The cross-sectional treatment effects observed here are compatible with some previous evidence (see "Introduction"). Patients treated with SGA showed load-related increases in left prefrontal BOLD, whereas those on FGA did not; it is important to note that this effect represents an interaction of working memory load and treatment rather than a main effect of treatment. Importantly, the FGA and SGA patient groups did not differ in demographic, clinical, or performance variables. An analysis of individual SGA compounds (clozapine, risperidone, olanzapine) did not yield any significant differences amongst these. As stated earlier, a number of pharmacological fMRI studies of antipsychotic compounds in schizophrenia have shown stronger BOLD signal with SGA compared to FGA. The strongest evidence for this effect comes from longitudinal studies which have shown that SGA improve cortical activations, thus normalising previous hypoactivations (for review, see Davis et al. 2005; Kumari and Cooke 2006; Migo et al. 2011). The present study observed this effect in a cross-sectional design. The effect was seen despite the small number of patients treated with FGA, a factor that can be attributed to current guidelines in clinical practice.

However, it should be noted that not all previous studies have found this effect. For example, Surguladze et al. (2007) observed that patients on FGA showed an increase in ventromedial prefrontal BOLD as a function of working memory load, whereas patients treated with risperidone and healthy controls did not. Conversely, a lateral ventral prefrontal area studied by Surguladze et al. (2007) showed an increase with load only in controls but not in FGA- or risperidone-treated patients. An important difference between their study and ours is that in Surguladze et al. (2007), patients treated with FGA displayed significantly impaired task performance relative to risperidone-treated patients, whereas in our study there were no significant drug effects on the level of task performance. Furthermore, there is evidence for relative functional specificity within prefrontal cortices.
For example, studies have shown bilaterally increased dorsal prefrontal cortex (DPFC; BA9/46) activity during rule discovery but increased ventral prefrontal cortex (VPFC; BA47/12) activity with changes in the card-sorting rule on the Wisconsin Card Sorting Test (Monchi et al. 2001). It is possible that SGAs have different effects on dorsal and ventral prefrontal brain regions.

The observation that working memory task performance did not differ between the two treatment groups in this study is noteworthy, given that the two groups did differ at the level of BOLD in their response to increasing working memory load. Dissociations of effects at behavioural and neural levels of analysis in fMRI designs have previously been described in relation to experimental cognitive neuroscience (Wilkinson and Halligan 2004), but similar arguments may apply to pharmacological fMRI studies (Migo et al. 2011). Specifically, in this instance, it may be argued that our data provide evidence for the notion that the BOLD signal may be a more sensitive measure of pharmacological treatment effects than behavioural measures (Honey and Bullmore 2004), analogous to findings in the neurochemical imaging literature (Fannon et al. 2003).

A likely pharmacological explanation of the treatment effects in this study may be the richer pharmacological profile of SGA compounds, which rely not only on D2 receptor antagonism but also act on other neurotransmitter systems such as the serotonergic system. However, the group of SGA compounds used in this study is heterogeneous, thus not allowing a definitive explanation of the neurotransmitter mechanisms underlying the effects observed here.

Limitations

This study has a number of limitations. First, the patient group treated with FGA was small (N = 6). This limitation stems from the fact that we used a naturalistic design in a clinical environment where the majority of schizophrenia patients are treated with SGA. The small size of the FGA group could also be an explanation for the lack of differences between the two patient groups in demographic, clinical, or performance measures. An additional limitation is that the group of SGA is pharmacologically heterogeneous. Therefore, it cannot be concluded that there is a single mechanism common to all these compounds by which they affected BOLD in this study. It should also be noted that the performance-by-load and treatment-by-load interactions were observed only when considering the individual ROIs and not at the voxelwise level. Previous studies have noted that antipsychotic effects at the level of BOLD may be of small to moderate effect size (Meisenzahl et al. 2006); therefore, replication using larger samples will be important. A further limitation is that some patients were co-medicated with other compounds, which may influence cognition and brain function. Finally, the effects of pharmacological treatment should ideally be investigated using more powerful longitudinal designs rather than cross-sectional studies. Future longitudinal studies are needed to further advance this field.
This limitation of the study stems from the fact that we used a naturalistic design in a clinical environment where the majority of schizophrenia patients are treated with SGA. The small size of the FGA group could also be an explanation for the lack of differences between the two patient groups in demographic, clinical, or performance measures. An additional limitation is that the group of SGA is pharmacologically heterogeneous. Therefore, it cannot be concluded that there is a single mechanism common to all these compounds by which they affected BOLD in this study. It should also be noted that the performance-by-load and treatment-by-load interactions were observed only when considering individual ROIs but not at the voxelwise level. Previous studies have noted that antipsychotic effects at the level of BOLD may be of small to moderate effect size (Meisenzahl et al. 2006); therefore, replication using larger samples will be important. A further limitation is the fact that some patients were co-medicated with other compounds; these may influence cognition and brain function. Finally, the effects of pharmacological treatment should ideally be investigated using more powerful longitudinal designs rather than cross-sectional studies. Future longitudinal studies are needed to further advance this field.", "Working memory deficits in schizophrenia have often been demonstrated and it is important to understand this impairment in relation to the pathophysiology, genetics, and treatment of schizophrenia (Barch and Smith 2008; Goldman-Rakic 1994). However, there is still considerable lack of clarity concerning the neural mechanisms that mediate these cognitive impairments. Given the importance of the neurophysiological level of analysis in genetic (Meyer-Lindenberg and Weinberger 2006) and pharmacological (Migo et al. 2011) studies of schizophrenia, the aim of this study was to further address this issue.\nPatients and controls did not differ in working memory performance in this study (see also Tan et al. 2005). At the level of BOLD, it was found that patients with schizophrenia displayed significantly stronger activation than controls in lateral prefrontal cortex and left occipital cortex (extending into the cerebellum) during task performance. Additionally, the group difference in prefrontal cortex became stronger with increasing load. The prefrontal cortex clusters were located to superior and middle frontal gyrus as well as medial prefrontal cortex. These prefrontal hyperactivations in patients appeared to be recruited in addition to prefrontal areas that were activated in the main effect of working memory load in both groups.\nPrevious brain activation studies of schizophrenia patients have variably shown increased, decreased, or comparable frontal activation levels in patients. These discrepancies have been attributed to a number of variables including performance and task difficulty (Callicott et al. 2003; Manoach 2003; Potkin et al. 2009). As outlined in detail elsewhere (Linden 2009), the pattern of schizophrenia-control differences in BOLD signal and task performance can take on different forms, indicating qualitatively different, reduced, compensatory, aberrant, or inefficient neural response. The results of the current study indicate that areas activated by both groups are activated to a similar extent; however, patients additionally activated lateral prefrontal areas, an effect that became more pronounced with increasing task demands. 
Bearing in mind that the patients have psychotic symptoms and general clinical impairments, this pattern suggests the operation of compensatory neural mechanisms that led to overall successful task performance when compared to healthy controls. A number of previous studies have similarly observed compensatory BOLD increases on the background of comparable task performance in schizophrenia patients (Karch et al. 2009; Lee et al. 2008; Potkin et al. 2009; Royer et al. 2009; Tan et al. 2005; van Raalten et al. 2008; see also meta-analysis by Minzenberg et al. 2009). However, while the patients' recruitment of additional areas in our study may represent a behaviourally successful compensatory mechanism, it is also neurally inefficient, as it requires additional resources in order to achieve performance levels comparable to those of the controls. Of interest, compensatory hyperactivations have been observed not only in schizophrenia but also in other psychiatric conditions such as obsessive–compulsive disorder (Henseler et al. 2008).", "The left prefrontal BOLD increases in response to working memory load were more pronounced in patients with high levels of performance than in patients with low performance levels. The finding of performance-related activation levels in the patients is in part compatible with existing models which attribute group differences in BOLD to variation in task performance (Callicott et al. 2003; Manoach 2003). These models suggested that relatively unimpaired performance may be associated with increased BOLD signal, whereas lower performance may be associated with reductions in BOLD, e.g., hypofrontality. Given that high-performing patients in this study showed a stronger increase in activation with increasing task demands than low-performing patients, it may be speculated that low-performing patients are unable to increase their BOLD response further beyond a certain level of task difficulty, which could have resulted in (or stemmed from) their lower working memory capacity. Relating this pattern to the inverted U curves depicted by Manoach (2003) and Callicott et al. (2003), it may be surmised that low-performing patients are already at the top or on the downward slope of the inverted U.", "The cross-sectional treatment effects observed here are compatible with some previous evidence (see “Introduction”). Patients treated with SGA showed load-related increases in left prefrontal BOLD, whereas those on FGA did not; it is important to note that this effect represents an interaction effect of working memory load and treatment but not a treatment main effect. Importantly, the FGA and SGA patient groups did not differ in demographic, clinical, or performance variables. An analysis of individual SGA compounds (clozapine, risperidone, olanzapine) did not yield any differences amongst these. As stated earlier, a number of pharmacological fMRI studies of antipsychotic compounds in schizophrenia have shown stronger BOLD signal with SGA compared to FGA. The strongest evidence for this effect comes from longitudinal studies which have shown that SGA improve cortical activations, thus normalising previous hypoactivations (review, Davis et al. 2005; Kumari and Cooke 2006; Migo et al. 2011). The present study observed this effect in a cross-sectional design. 
The effect was seen despite the small number of patients treated with FGA, a factor that can be attributed to current guidelines in clinical practice.\nHowever, it should be noted that not all previous studies have found this effect. For example, Surguladze et al. (2007) observed that patients on FGA showed an increase in ventromedial prefrontal BOLD as a function of working memory load, whereas patients treated with risperidone and healthy controls did not. Conversely, a lateral ventral prefrontal area studied by Surguladze et al. (2007) showed an increase with load only in controls but not in FGA- or risperidone-treated patients. An important difference between their study and ours is that in Surguladze et al. (2007), patients treated with FGA displayed significantly impaired task performance relative to risperidone-treated patients, whereas in our study, there were no significant drug effects on the level of task performance. Furthermore, there is evidence for relative functional specificity within prefrontal cortices. For example, studies have shown bilaterally increased DPFC (BA9/46) activity during rule discovery but increased VPFC (BA47/12) activity with changes in card-sorting rule on the WCST (Monchi et al. 2001). It is possible that SGAs have different effects on dorsal and ventral prefrontal brain regions.\nThe observation that in this study, working memory task performance did not differ between the two treatment groups, is noteworthy given that there were differences between the two groups at the level of BOLD in response to working memory load increase. Dissociations of effects at behavioural and neural levels of analysis in fMRI designs have previously been described in relation to experimental cognitive neuroscience (Wilkinson and Halligan 2004), but similar arguments may apply to pharmacological fMRI studies (Migo et al. 2011). Specifically, in this instance, it may be argued that our data provide evidence for the notion that the BOLD signal may be a more sensitive measure of pharmacological treatment effects than behavioural measures (Honey and Bullmore 2004) analogous to findings in the neurochemical imaging literature (Fannon et al. 2003).\nA likely pharmacological explanation of the treatment effects in this study may be the richer pharmacological profile of SGA compounds which rely not just on D2 receptor antagonism but also act on other neurotransmitter systems such as the serotonergic system. However, the group of SGA compounds used in this study is heterogeneous, thus not allowing a definitive explanation of the neurotransmitter mechanisms underlying the effects observed here.", "This study has a number of limitations. First, the patient group treated with FGA was small (N  =  6). This limitation of the study stems from the fact that we used a naturalistic design in a clinical environment where the majority of schizophrenia patients are treated with SGA. The small size of the FGA group could also be an explanation for the lack of differences between the two patient groups in demographic, clinical, or performance measures. An additional limitation is that the group of SGA is pharmacologically heterogeneous. Therefore, it cannot be concluded that there is a single mechanism common to all these compounds by which they affected BOLD in this study. It should also be noted that the performance-by-load and treatment-by-load interactions were observed only when considering individual ROIs but not at the voxelwise level. 
Previous studies have noted that antipsychotic effects at the level of BOLD may be of small to moderate effect size (Meisenzahl et al. 2006); therefore, replication using larger samples will be important. A further limitation is the fact that some patients were co-medicated with other compounds; these may influence cognition and brain function. Finally, the effects of pharmacological treatment should ideally be investigated using more powerful longitudinal designs rather than cross-sectional studies. Future longitudinal studies are needed to further advance this field.", "[SUBTITLE] [SUBSECTION] Below is the link to the electronic supplementary material.\nOnline Resource 1: The figure depicts significant BOLD response as a function of load on the n-back working memory task in each of the two patient subgroups (patients with high performance and patients with low performance) (p < 0.05, FWE corrected voxel level) (DOC 252 kb)", "Below is the link to the electronic supplementary material.\nOnline Resource 1: The figure depicts significant BOLD response as a function of load on the n-back working memory task in each of the two patient subgroups (patients with high performance and patients with low performance) (p < 0.05, FWE corrected voxel level) (DOC 252 kb)
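The ROI-based load-by-treatment interaction discussed above can be illustrated with a small sketch. This is not the authors' analysis code: the design (ROI-averaged BOLD estimates at each n-back load for hypothetical FGA and SGA groups), the variable names and all numbers are assumptions, and the model is a plain least-squares fit of BOLD on load, treatment group and their interaction.

```python
# Illustrative sketch (not the study's analysis script): testing a
# load-by-treatment interaction on ROI-averaged BOLD with ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)

n_per_group = 20                                     # hypothetical group sizes
load = np.tile([0.0, 1.0, 2.0], n_per_group * 2)     # 0-back, 1-back, 2-back
group = np.repeat([0.0, 1.0], n_per_group * 3)       # 0 = FGA, 1 = SGA (hypothetical coding)
noise = rng.normal(0.0, 0.2, load.size)
# Simulated ROI betas in which the SGA group has a steeper load slope
bold = 0.5 + 0.1 * load + 0.15 * load * group + noise

# Design matrix: intercept, load, group, load x group interaction
X = np.column_stack([np.ones_like(load), load, group, load * group])
coef, *_ = np.linalg.lstsq(X, bold, rcond=None)
resid = bold - X @ coef
dof = bold.size - X.shape[1]
cov = np.linalg.inv(X.T @ X) * (resid @ resid / dof)
t_interaction = coef[3] / np.sqrt(cov[3, 3])
print(f"load x treatment interaction: beta = {coef[3]:.3f}, t = {t_interaction:.2f}")
```

A full analysis would of course model subject-level variability and multiple ROIs; the sketch only shows the shape of the interaction term being tested.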
[ "introduction", "materials|methods", null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", null, null, null, null, "supplementary-material", null ]
[ "Schizophrenia", "Spatial working memory", "Functional brain imaging", "Antipsychotics", "Biomarker" ]
Early postoperative MRI overestimates residual tumour after resection of gliomas with no or minimal enhancement.
21331595
Standards for residual tumour measurement after resection of gliomas with no or minimal enhancement have not yet been established. In this study residual volumes on early and late postoperative T2-/FLAIR-weighted MRI are compared.
BACKGROUND
A retrospective cohort included 58 consecutive glioma patients with no or minimal preoperative gadolinium enhancement. Inclusion criteria were first-time resection between 2007 and 2009 with a T2-/FLAIR-based target volume and availability of preoperative, early (<48 h) and late (1-7 months) postoperative MRI. The volumes of non-enhancing T2/FLAIR tissue and diffusion restriction areas were measured.
METHODS
Residual tumour volumes were 22% smaller on late postoperative compared with early postoperative T2-weighted MRI and 49% smaller for FLAIR-weighted imaging. Postoperative restricted diffusion volume correlated with the difference between early and late postoperative FLAIR volumes and with the difference between T2 and FLAIR volumes on early postoperative MRI.
RESULTS
We observed a systematic and substantial overestimation of residual non-enhancing volume on MRI within 48 h of resection compared with months postoperatively, in particular for FLAIR imaging. Resection-induced ischaemia contributes to this overestimation, as may other operative effects. This indicates that early postoperative MRI is less reliable for determining the extent of non-enhancing residual glioma, and that assessment of restricted diffusion volumes is imperative.
CONCLUSION
[ "Adult", "Brain Neoplasms", "Contrast Media", "Female", "Glioma", "Humans", "Linear Models", "Magnetic Resonance Imaging", "Male", "Middle Aged", "Neoplasm, Residual", "Postoperative Period", "Retrospective Studies", "Statistics, Nonparametric" ]
3101346
Introduction
Initial treatment for gliomas usually consists of resective surgery in order to obtain a histopathological diagnosis and to accomplish as radical a glioma removal as possible with preservation of brain functions. Important determinants of survival in gliomas are histopathological grading, patient age and clinical condition before treatment [1, 2]. Residual tumour after resection is an independent prognostic factor for survival of low-grade and high-grade gliomas [3–6]. Therefore, a reliable quantitative measure of residual tumour after resection is important to accurately evaluate the impact of surgery on the course of the disease and to compare the extent of resection between patient series [6, 7]. For high-grade gliomas residual tumour is usually determined by tumour volume calculation on MRI of the elements that enhance after administration of gadolinium on T1-weighted imaging. To avoid artefacts from enhancing gliotic tissue in the resection margins and from adjuvant radiotherapy that can complicate delineation of residual tumour, the postoperative MRI is performed within 48 h of surgery [8]. Gliomas may also present with no or minimal gadolinium enhancement on MRI. A lower grade glioma is then suspected and the target volume for resection is concordantly based on the non-enhancing elements that are hyperintense on T2-weighted sequences. FLAIR-weighted sequences have been particularly useful for this purpose, because the contrast between tumour and normal brain is superior to T2-weighted sequences as the signal from cerebrospinal fluid is suppressed [9]. Standards for the timing of postoperative MRI of gliomas with no or minimal enhancement have however not been established. The effects of the timing of postoperative MRI and of the pulse-sequence on residual tumour measurements have not been determined. We hypothesised that resection-induced ischaemia might contribute to differences in residual tumour volumes. This study aimed to compare residual tumour volumes measured on early versus late postoperative T2- and FLAIR-weighted MRI after resection of gliomas that show no or minimal gadolinium enhancement on preoperative imaging.
null
null
Results
[SUBTITLE] Patient and MRI characteristics and histopathology [SUBSECTION] The study population was a subset of 58 patients out of 223 with a glioma diagnosis in the inclusion time interval. Table 1 lists the details of the study population, consisting of 25 women and 33 men with a mean age of 44.7 years (range 20 to 70).

Table 1 Patient demographics and MRI characteristics
| | WHO 1 | WHO 2 | WHO 3 | WHO 4 | total |
| n (%) | 2 (3%) | 35 (60%) | 14 (24%) | 7 (12%) | 58 |
| Patient demographics | | | | | |
| age, mean (SE) | 36.0 (0.4) | 45.4 (1.7) | 45.2 (3.7) | 42.8 (7.0) | 44.7 (1.6) |
| female/male | 0/2 | 18/17 | 5/9 | 1/6 | 25/33 |
| no. pts with radiotherapy | 0 | 2 | 6 | 2 | 10 |
| no. pts with chemo-irradiation | 0 | 0 | 2 | 5 | 7 |
| Tumour lateralisation and location | | | | | |
| L/R | 0/2 | 15/20 | 5/9 | 3/4 | 23/35 |
| no. pts with eloquent tumour location | 2 | 26 | 7 | 6 | 41 |
| frontal | 0 | 8 | 7 | 1 | 16 |
| SMA | 0 | 7 | 2 | 2 | 11 |
| parietal | 0 | 6 | 3 | 2 | 11 |
| temporal | 0 | 2 | 1 | 1 | 4 |
| insula | 2 | 12 | 1 | 1 | 16 |
| MRI characteristics | | | | | |
| preoperative T2 hyperintense tumour volume in cm3, mean (SE) | 135.0 (58.2) | 71.6 (8.7) | 129.1 (19.8) | 76.6 (12.2) | 88.3 (8.0) |
| preoperative FLAIR hyperintense tumour volume in cm3, mean (SE) | 129.9 (52.0) | 72.3 (9.9) | 138.7 (29.9) | 89.8 (16.2) | 90.3 (9.7) |
| days preoperative MRI, mean (SE) | 162 (121) | 56 (15) | 27 (9) | 38 (14) | 50 (10) |
| days early postoperative MRI, mean (SE) | 1 (0) | 1.4 (0.1) | 1.8 (0.3) | 1.6 (0.4) | 1.5 (0.1) |
| days late postoperative MRI, mean (SE) | 189 (10) | 104 (7) | 117 (21) | 96 (15) | 109 (7) |

Preoperative MRI was performed at a median of 23 days (range: 1 to 282) before resection and showed no T1 gadolinium enhancement in 32 patients and minimal enhancement in 26. The median preoperative T2 and FLAIR tumour volumes were 80.7 cm3 (range: 0.8 to 288.2) and 77.9 cm3 (range: 1.3 to 344.5), respectively. Histopathological diagnosis established WHO grade 1 in 2 patients (3%), grade 2 in 35 (60%), grade 3 in 14 (24%) and grade 4 in 7 (12%). Histopathological subtypes included 2 pilocytic astrocytomas (WHO grade 1); 14 astrocytomas, 18 oligodendrogliomas and 3 oligoastrocytomas (WHO grade 2); 6 anaplastic astrocytomas, 5 anaplastic oligodendrogliomas and 3 anaplastic oligoastrocytomas (WHO grade 3); and 7 glioblastomas (WHO grade 4).

Early and late postoperative T2 images were available for 52 patients; early and late postoperative FLAIR images for 14 patients. Early postoperative T2 and FLAIR images were available for 23 patients; late postoperative T2 and FLAIR images for 24 patients. Early postoperative diffusion-weighted images were available for 33 patients. Early postoperative MRI was obtained within 48 hours of resection in all patients, 44 on the first postoperative day and 14 on the second day. Late postoperative MRI was performed at a median of 100 days (range: 34 to 192) after resection; 18 within 3 months postoperatively, 31 between four and 6 months, and 7 between seven and 9 months.

[SUBTITLE] Residual tumour volumes on early compared with late postoperative MRI [SUBSECTION] The median early and late T2 residual volumes were 28.7 cm3 and 22.4 cm3, respectively, resulting in a 22% smaller residual volume on late T2 images. Residual T2 tumour volumes on early postoperative MRI demonstrated a systematically larger residual tumour volume (on average 4.3 cm3) compared with late postoperative MRI (Fig. 1a, c), with a regression coefficient of 0.767 (95%CI: 0.665–0.870), a correlation coefficient of 0.81 (p < 0.0001) and a paired Wilcoxon rank sum of V = 746.5 (p = 0.295). Similarly, the median early and late FLAIR residual volumes were 27.3 cm3 and 13.9 cm3, respectively, resulting in a 49% smaller residual volume on late FLAIR images. Systematically larger residual FLAIR tumour volumes (on average 5.7 cm3) were observed on early postoperative MRI compared with late postoperative MRI (Fig. 1b, d), with a regression coefficient of 0.833 (95%CI: 0.693–0.973), a correlation coefficient of 0.95 (p < 0.0001) and a paired Wilcoxon rank sum of V = 78 (p = 0.119). Data plots confirmed that the differential residual tumour volumes were independent of glioma grading, subtyping and timing of late postoperative MRI.

Fig. 1 Residual tumour volumes in mL comparing early and late postoperative MRI based on T2-weighted imaging and FLAIR-weighted imaging, shown as data plots (a and b) and Bland-Altman plots (c and d) respectively, depicting systematic overestimation of residual tumour volume by early postoperative MRI. Each data point represents measurements obtained from one patient, tagged by glioma grade according to the legend. The straight diagonal line in a and b represents hypothetical perfect agreement and the dotted lines the actual linear regression fit and corresponding 95% confidence interval. The three dotted horizontal lines in c and d represent the average differential volume and corresponding 95% confidence interval.

[SUBTITLE] Tumour volumes on FLAIR- compared with T2-weighted imaging [SUBSECTION] FLAIR tumour volumes were marginally larger (on average 4.7 cm3) than T2 volumes on preoperative MRI (Fig. 2a, d), with a regression coefficient of 1.068 (95%CI: 1.033–1.102), a correlation coefficient of 0.98 (p < 0.0001) and a paired Wilcoxon rank sum of V = 419.5 (p = 0.056). FLAIR residual tumour volumes were substantially larger (on average 7.2 cm3) than T2 tumour volumes on early postoperative MRI (Fig. 2b, e), with a regression coefficient of 1.156 (95%CI: 1.049–1.262), a correlation coefficient of 0.96 (p < 0.0001) and a paired Wilcoxon rank sum of V = 40 (p = 0.009). On the late postoperative MRI, FLAIR and T2 residual tumour volumes were comparable (Fig. 2c, f): T2 tumour volumes were on average 1.5 cm3 larger, with a regression coefficient of 0.948 (95%CI: 0.880–1.015), a correlation coefficient of 0.94 (p < 0.0001) and a paired Wilcoxon rank sum of V = 194 (p = 0.218). Again, data plots confirmed that the differential residual tumour volumes were independent of glioma grading and timing of late postoperative MRI.

Fig. 2 Residual tumour volumes in mL comparing T2- and FLAIR-weighted imaging based on preoperative, early postoperative and late postoperative MRI respectively, shown as data plots (a, b and c) and Bland-Altman plots (d, e and f), depicting good agreement in residual tumour volume on preoperative and late postoperative MRI and systematic overestimation by FLAIR-weighted imaging on early postoperative MRI. Data points, tagging and line styles as in Fig. 1.

[SUBTITLE] Postoperative ischaemia [SUBSECTION] The median restricted diffusion volume was 6.1 cm3. The restricted diffusion volume correlated only weakly with the difference between early and late postoperative volumes on T2-weighted imaging (Fig. 3a), with a correlation coefficient of 0.32 (p = 0.089). However, the restricted diffusion volume did correlate more strongly with the difference between early and late postoperative volumes on FLAIR-weighted imaging (Fig. 3b), with a regression coefficient of 0.877 (95%CI: 0.486–1.267) and a correlation coefficient of 0.76 (p = 0.004). In addition, the restricted diffusion volume correlated with the difference between FLAIR and T2 volumes on early postoperative MRI (Fig. 3c), with a regression coefficient of 0.655 (95%CI: 0.440–0.871) and a correlation coefficient of 0.77 (p = 0.0001).

Fig. 3 Plots of diffusion restriction volume on early postoperative MRI against (a) the difference in T2 residual tumour volumes between early and late postoperative MRI, (b) the difference in FLAIR residual tumour volumes between early and late postoperative MRI and (c) the difference between T2 and FLAIR residual tumour volumes on early postoperative MRI. Data points, tagging and line styles as in Fig. 1.
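The agreement statistics reported above (regression through the origin, Spearman correlation, paired Wilcoxon signed-rank test, Bland-Altman summary, and the percentage difference between median volumes) can be reproduced for any set of paired volume measurements with standard tooling. The sketch below uses invented volumes purely for illustration; it is not the study's analysis script.

```python
# Minimal sketch of paired-agreement statistics for early vs late residual volumes.
import numpy as np
from scipy import stats

# Hypothetical paired residual volumes in cm3 (early vs late postoperative MRI)
early = np.array([35.1, 12.4, 58.0, 27.3, 9.8, 44.6, 19.5, 31.2])
late = np.array([28.0, 10.1, 50.2, 13.9, 8.5, 39.0, 15.7, 24.8])

# Linear regression through the origin: late = slope * early
slope = np.sum(early * late) / np.sum(early ** 2)

rho, rho_p = stats.spearmanr(early, late)        # rank correlation
w_stat, w_p = stats.wilcoxon(early, late)        # paired signed-rank test

diff = early - late
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                    # Bland-Altman 95% limits of agreement

pct_smaller = 100 * (np.median(early) - np.median(late)) / np.median(early)

print(f"slope through origin: {slope:.3f}")
print(f"Spearman rho = {rho:.2f} (p = {rho_p:.3f}); Wilcoxon V = {w_stat:.1f} (p = {w_p:.3f})")
print(f"Bland-Altman bias = {bias:.1f} cm3, limits of agreement = +/- {loa:.1f} cm3")
print(f"late volumes smaller by {pct_smaller:.0f}% of the early median")
```

The percentage calculation at the end is the same arithmetic that yields the 22% (T2) and 49% (FLAIR) figures quoted above from the reported medians.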
null
null
[ "Inclusion criteria", "Resective surgery", "MRI data acquisition", "MRI volumetry", "Comparison of volume measurements", "Statistical analysis", "Patient and MRI characteristics and histopathology", "Residual tumour volumes on early compared with late postoperative MRI", "Tumour volumes on FLAIR- compared with T2-weighted imaging", "Postoperative ischaemia" ]
[ "All adult patients with a histopathological diagnosis of a glioma grade 1 to 4 according to the WHO 2007 criteria [10], who had surgery between January 2007 and December 2009 were retrieved from a consecutive patient series in an electronic database of our institution, which is a tertiary referral centre for brain tumour treatment. Then, patients with first-time resective surgery were selected for analysis, excluding stereotactic or open biopsies. In order to ensure that the target volume for resection was based on T2-/FLAIR-weighted imaging, only gliomas with no or minimal gadolinium enhancement on preoperative T1-weighted imaging were selected. However, focal or faint enhancement arbitrarily up to 15% of the T2/FLAIR volume, as determined volumetrically, was allowed. Furthermore, MR imaging was required to be available (a) preoperatively, (b) within 48 h of surgery, i.e. early postoperatively, and (c) between 2 and 6 months after surgery, i.e. late postoperatively.\nThe local ethical committee approved the research protocol and waived informed consent of participants.", "The patients in this study were preoperatively considered to have either a low-grade glioma or an anaplastic focus in a previously low-grade glioma. Therefore, the target volume for resection consisted of the T2/FLAIR volume. To minimise the risk of neurological morbidity this target volume was restricted by eloquent brain areas as determined by intraoperative cortical and subcortical electrostimulation mapping. The relation between the tumour and eloquent areas was determined according to the definitions used in the prognostic classification system of Chang et al [2]. Preoperative functional MRI, magnetic source imaging and diffusion tensor imaging tractography were obtained, as required by the location of the tumour. In general, the resection proceeded until eloquent brain areas were reached or volumetric tumour resection was complete. The tumour was subpially resected along sulci and fissures to minimise ischaemia around the resection cavity.", "MR imaging was performed on a 1.5T system (Siemens Sonata or Avanto; Siemens Medical Systems, Erlangen, Germany). The imaging protocol included non-enhanced axial T1-weighted spin echo images [repetition time/echo time (TR/TE) 520-600/8-12 ms] with 5-mm section thickness and axial T2-weighted turbo spin echo images (TR/TE 5190-8670/93-101 ms) with 5-mm section thickness. Sagittal 3D turbo fluid-attenuated inversion-recovery (tFLAIR) images [repetition time/echo time/inversion time (TR/TE/TI) 6500/355/2200 ms] with 1.3-mm section thickness plus axial multiplanar reconstructions (MPR) and axial single shot spin echo echo-planar diffusion-weighted (DWI) images (TR/TE 3400/122 ms) with 5-mm section thickness were added to the imaging protocol in 2009. Diffusion gradients were applied along 3 orthogonal directions using b-values of 0, 500 and 1000 s/ mm2. Apparent diffusion coefficient (ADC) maps were calculated from the DWI images. Post-contrast (0.2 mmol/Kg) sagittal 3D T1-weighted gradient-echo (MPRAGE) images (TR/TE/TI 2300-2700/5-4.5/950 ms) with 1- to 1.5-mm section thickness and axial T1-weighted spin echo images with 5-mm section thickness were obtained. In all patients preoperative imaging was repeated at our institution for navigation protocol purposes, even if preoperative imaging was available from a referring hospital.", "Tumour volumes were measured using image fusion and volumetric software (BrainLab iPlan Cranial 2.6; BrainLab AG, Feldkirchen, Germany). 
MRI volumetry was based on the Cavalieri principle which provides unbiased volume estimates [11]. For this purpose tumour contours were manually segmented on sequential axial images and verified in the coronal and sagittal reconstruction planes. The sum of tumour contour surfaces of an MRI study was multiplied by slice thickness to obtain the estimated volume in cm3. This method has been demonstrated to be reproducible and accurate [12, 13]. In this way preoperative tumour volumes were determined for T2-, FLAIR- and gadolinium-enhanced T1-weighted images. The residual tumour after resection was also determined on T2- and FLAIR-weighted sequences for early and late postoperative imaging. The postoperative ischaemic volume was measured on an early postoperative diffusion-weighted sequence. Tumour and ischaemia were manually segmented independently by two observers (SB, PW) and disagreement was resolved by consensus with a third observer (ES).", "Several volume measurements were compared. To determine the effect of MRI timing on tumour volume measurements, early and late postoperative volumes were compared for T2- and FLAIR-weighted MRI. To determine the effect of pulse-sequence on tumour volume assessment, T2 and FLAIR volumes were compared on preoperative, early and late postoperative MRI. To determine the effect of resection-induced ischaemia on differences in residual tumour volumes, tissue volumes with restricted diffusion on early postoperative MRI were compared with the difference between early and late postoperative volumes for T2- and FLAIR-weighted imaging.", "Tumour volumes for individual patients were plotted for visual inspection. In the case of perfect agreement between two volumes, a linear regression coefficient of 1.0 with an intercept of 0 was expected (plotted as a dashed line) with a correlation coefficient of 1.0. The actual regression coefficients with 95% confidence intervals were calculated by linear regression analysis with a fixed intercept of 0. As the volumes were considered not to be normally distributed, Spearman’s correlation coefficient was calculated. Bland-Altman plots were created to visualise agreement between volume estimates. Absolute volume differences were compared using the Wilcoxon signed rank test for paired samples.", "The study population was a subset of 58 patients out of 223 with a glioma diagnosis in the inclusion time interval. Table 1 lists the details of the study population, consisting of 25 women and 33 men with a mean age of 44.7 years (range 20 to 70).
Table 1 Patient demographics and MRI characteristics
| | WHO 1 | WHO 2 | WHO 3 | WHO 4 | total |
| n (%) | 2 (3%) | 35 (60%) | 14 (24%) | 7 (12%) | 58 |
| Patient demographics | | | | | |
| age, mean (SE) | 36.0 (0.4) | 45.4 (1.7) | 45.2 (3.7) | 42.8 (7.0) | 44.7 (1.6) |
| female/male | 0/2 | 18/17 | 5/9 | 1/6 | 25/33 |
| no. pts with radiotherapy | 0 | 2 | 6 | 2 | 10 |
| no. pts with chemo-irradiation | 0 | 0 | 2 | 5 | 7 |
| Tumour lateralisation and location | | | | | |
| L/R | 0/2 | 15/20 | 5/9 | 3/4 | 23/35 |
| no. pts with eloquent tumour location | 2 | 26 | 7 | 6 | 41 |
| frontal | 0 | 8 | 7 | 1 | 16 |
| SMA | 0 | 7 | 2 | 2 | 11 |
| parietal | 0 | 6 | 3 | 2 | 11 |
| temporal | 0 | 2 | 1 | 1 | 4 |
| insula | 2 | 12 | 1 | 1 | 16 |
| MRI characteristics | | | | | |
| preoperative T2 hyperintense tumour volume in cm3, mean (SE) | 135.0 (58.2) | 71.6 (8.7) | 129.1 (19.8) | 76.6 (12.2) | 88.3 (8.0) |
| preoperative FLAIR hyperintense tumour volume in cm3, mean (SE) | 129.9 (52.0) | 72.3 (9.9) | 138.7 (29.9) | 89.8 (16.2) | 90.3 (9.7) |
| days preoperative MRI, mean (SE) | 162 (121) | 56 (15) | 27 (9) | 38 (14) | 50 (10) |
| days early postoperative MRI, mean (SE) | 1 (0) | 1.4 (0.1) | 1.8 (0.3) | 1.6 (0.4) | 1.5 (0.1) |
| days late postoperative MRI, mean (SE) | 189 (10) | 104 (7) | 117 (21) | 96 (15) | 109 (7) |
\nPreoperative MRI was performed at a median of 23 days (range: 1 to 282) before resection which showed no T1 gadolinium enhancement in 32 and minimal enhancement in 26 patients. The median preoperative T2 and FLAIR tumour volumes were 80.7 cm3 (range: 0.8 to 288.2) and 77.9 cm3 (range: 1.3 to 344.5), respectively. Histopathological diagnosis established WHO grade 1 in 2 patients (3%), grade 2 in 35 (60%), grade 3 in 14 (24%) and grade 4 in 7 (12%). Histopathological subtypes included 2 pilocytic astrocytomas (WHO grade 1); 14 astrocytomas, 18 oligodendrogliomas and 3 oligoastrocytomas (WHO grade 2); 6 anaplastic astrocytomas, 5 anaplastic oligodendrogliomas, 3 anaplastic oligoastrocytomas (WHO grade 3) and 7 glioblastomas (WHO grade 4).\nEarly and late postoperative T2 images were available for 52 patients; early and late postoperative FLAIR images for 14 patients. Early postoperative T2 and FLAIR images were available for 23 patients; late postoperative T2 and FLAIR images for 24 patients. Early postoperative diffusion-weighted images were available for 33 patients. Early postoperative MRI was obtained within 48 hours of resection in all patients, 44 on the first postoperative day, 14 on the second day. Late postoperative MRI was performed at a median of 100 days (range: 34 to 192) after resection; 18 within 3 months postoperatively, 31 between four and 6 months and 7 between seven and 9 months.", "The median early and late T2 residual volumes were 28.7 cm3 and 22.4 cm3, respectively, resulting in a 22% smaller residual volume on late T2 images. Residual T2 tumour volumes on early postoperative MRI demonstrated a systematically larger residual tumour volume (on average 4.3 cm3) compared with late postoperative MRI (Fig. 1a, c) with a regression coefficient of 0.767 (95%CI: 0.665–0.870), a correlation coefficient of 0.81 (p < 0.0001) and a paired Wilcoxon rank sum of V = 746.5 (p = 0.295). Similarly, the median early and late FLAIR residual volumes were 27.3 cm3 and 13.9 cm3, respectively, resulting in a 49% smaller residual volume on late FLAIR images. Systematically larger residual FLAIR tumour volumes (on average 5.7 cm3) were observed based on early postoperative MRI compared with late postoperative MRI (Fig. 1b, d) with a regression coefficient of 0.833 (95%CI: 0.693–0.973), a correlation coefficient of 0.95 (p < 0.0001) and a paired Wilcoxon rank sum of V = 78 (p = 0.119). Data plots confirmed that the differential residual tumour volumes were independent of glioma grading, subtyping and timing of late postoperative MRI.\nFig. 1 Residual tumour volumes in mL comparing early and late postoperative MRI based on T2-weighted imaging and FLAIR-weighted imaging shown as data plots (a and b) and Bland-Altman plots (c and d) respectively depicting systematic overestimation of residual tumour volume by early postoperative MRI. Each data point represents measurements obtained from one patient, tagged by glioma grade according to the legend. The straight diagonal line in a and b represents hypothetical perfect agreement and the dotted lines the actual linear regression fit and corresponding 95% confidence interval. The three dotted horizontal lines in c and d represent the average differential volume and corresponding 95% confidence interval", "FLAIR tumour volumes were marginally larger (on average 4.7 cm3) compared with T2 volumes on preoperative MRI (Fig. 2a, d) with a regression coefficient of 1.068 (95%CI: 1.033–1.102), a correlation coefficient of 0.98 (p < 0.0001) and a paired Wilcoxon rank sum of V = 419.5 (p = 0.056). FLAIR residual tumour volumes were substantially larger (on average 7.2 cm3) than T2 tumour volumes on early postoperative MRI (Fig. 2b, e) with a regression coefficient of 1.156 (95%CI: 1.049–1.262), a correlation coefficient of 0.96 (p < 0.0001) and a paired Wilcoxon rank sum of V = 40 (p = 0.009). On the late postoperative MRI, FLAIR and T2 residual tumour volumes were comparable (Fig. 2c, f): T2 tumour volumes on average 1.5 cm3 larger with a regression coefficient of 0.948 (95%CI: 0.880–1.015), a correlation coefficient of 0.94 (p < 0.0001) and a paired Wilcoxon rank sum of V = 194 (p = 0.218). Again, data plots confirmed that the differential residual tumour volumes were independent of glioma grading and timing of late postoperative MRI.\nFig. 2 Residual tumour volumes in mL comparing T2- and FLAIR-weighted imaging respectively based on preoperative, early postoperative and late postoperative MRI shown as data plots (a, b and c) and Bland Altman plots (d, e and f) depicting good agreement in residual tumour volume based on preoperative and late postoperative MRI and systematic overestimation of FLAIR-weighted imaging on early postoperative MRI. Data points, tagging and line styles as in Fig. 1\n", "The median restricted diffusion volume was 6.1 cm3. The restricted diffusion volume only weakly correlated with the difference between early and late postoperative volumes on T2-weighted imaging (Fig. 3a) with a correlation coefficient of 0.32 (p = 0.089). The restricted diffusion volume however did correlate more strongly with the difference between early and late postoperative volumes on FLAIR-weighted imaging (Fig. 3b) with a regression coefficient of 0.877 (95%CI: 0.486–1.267) and a correlation coefficient of 0.76 (p = 0.004). In addition, the restricted diffusion volume also correlated with the difference between FLAIR and T2 volumes on early postoperative MRI (Fig. 3c) with a regression coefficient of 0.655 (95%CI: 0.440–0.871) and a correlation coefficient of 0.77 (p = 0.0001).\nFig. 3 Plots of diffusion restriction volume on early postoperative MRI and (a) the difference in T2 residual tumour volumes of early and late postoperative MRI, (b) the difference in FLAIR residual tumour volumes of early and late postoperative MRI and (c) the difference in early postoperative MRI residual tumour volumes of T2- and FLAIR-weighted imaging. Data points, tagging and line styles as in Fig. 1\n" ]
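For readers less familiar with the Cavalieri estimator described in the "MRI volumetry" subsection above, the volume calculation itself is simple: the manually segmented tumour area on each axial slice is summed and multiplied by the slice thickness. The following is a minimal sketch with made-up contour areas; it is not output from, or a reimplementation of, the BrainLab software used in the study.

```python
# Minimal Cavalieri volume estimate: sum of per-slice contour areas x slice thickness.
import numpy as np

slice_thickness_mm = 5.0                                    # axial slices, 5 mm thick
# Hypothetical segmented tumour areas per slice (mm2)
contour_areas_mm2 = np.array([120.0, 460.0, 820.0, 910.0, 640.0, 215.0])

volume_mm3 = contour_areas_mm2.sum() * slice_thickness_mm   # Cavalieri estimate
volume_cm3 = volume_mm3 / 1000.0                            # 1 cm3 = 1000 mm3
print(f"estimated tumour volume: {volume_cm3:.1f} cm3")
```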
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Inclusion criteria", "Resective surgery", "MRI data acquisition", "MRI volumetry", "Comparison of volume measurements", "Statistical analysis", "Results", "Patient and MRI characteristics and histopathology", "Residual tumour volumes on early compared with late postoperative MRI", "Tumour volumes on FLAIR- compared with T2-weighted imaging", "Postoperative ischaemia", "Discussion" ]
[ "Initial treatment for gliomas usually consists of resective surgery in order to obtain a histopathological diagnosis and to accomplish as radical a glioma removal as possible with preservation of brain functions. Important determinants of survival in gliomas are histopathological grading, patient age and clinical condition before treatment [1, 2]. Residual tumour after resection is an independent prognostic factor for survival of low-grade and high-grade gliomas [3–6]. Therefore, a reliable quantitative measure of residual tumour after resection is important to accurately evaluate the impact of surgery on the course of the disease and to compare the extent of resection between patient series [6, 7].\nFor high-grade gliomas residual tumour is usually determined by tumour volume calculation on MRI of the elements that enhance after administration of gadolinium on T1-weighted imaging. To avoid artefacts from enhancing gliotic tissue in the resection margins and from adjuvant radiotherapy that can complicate delineation of residual tumour, the postoperative MRI is performed within 48 h of surgery [8]. Gliomas may also present with no or minimal gadolinium enhancement on MRI. A lower grade glioma is then suspected and the target volume for resection is concordantly based on the non-enhancing elements that are hyperintense on T2-weighted sequences. FLAIR-weighted sequences have been particularly useful for this purpose, because the contrast between tumour and normal brain is superior to T2-weighted sequences as the signal from cerebrospinal fluid is suppressed [9]. Standards for the timing of postoperative MRI of gliomas with no or minimal enhancement have however not been established. The effects of the timing of postoperative MRI and of the pulse-sequence on residual tumour measurements have not been determined. We hypothesised that resection-induced ischaemia might contribute to differences in residual tumour volumes.\nThis study aimed to compare residual tumour volumes measured on early versus late postoperative T2- and FLAIR-weighted MRI after resection of gliomas that show no or minimal gadolinium enhancement on preoperative imaging.", "[SUBTITLE] Inclusion criteria [SUBSECTION] All adult patients with a histopathological diagnosis of a glioma grade 1 to 4 according to the WHO 2007 criteria [10], who had surgery between January 2007 and December 2009 were retrieved from a consecutive patient series in an electronic database of our institution, which is a tertiary referral centre for brain tumour treatment. Then, patients with first-time resective surgery were selected for analysis, excluding stereotactic or open biopsies. In order to ensure that the target volume for resection was based on T2-/FLAIR-weighted imaging, only gliomas with no or minimal gadolinium enhancement on preoperative T1-weighted imaging were selected. However, focal or faint enhancement arbitrarily up to 15% of the T2/FLAIR volume, as determined volumetrically, was allowed. Furthermore, MR imaging was required to be available (a) preoperatively, (b) within 48 h of surgery, i.e. early postoperatively, and (c) between 2 and 6 months after surgery, i.e. 
late postoperatively.\nThe local ethical committee approved the research protocol and waived informed consent of participants.\nAll adult patients with a histopathological diagnosis of a glioma grade 1 to 4 according to the WHO 2007 criteria [10], who had surgery between January 2007 and December 2009 were retrieved from a consecutive patient series in an electronic database of our institution, which is a tertiary referral centre for brain tumour treatment. Then, patients with first-time resective surgery were selected for analysis, excluding stereotactic or open biopsies. In order to ensure that the target volume for resection was based on T2-/FLAIR-weighted imaging, only gliomas with no or minimal gadolinium enhancement on preoperative T1-weighted imaging were selected. However, focal or faint enhancement arbitrarily up to 15% of the T2/FLAIR volume, as determined volumetrically, was allowed. Furthermore, MR imaging was required to be available (a) preoperatively, (b) within 48 h of surgery, i.e. early postoperatively, and (c) between 2 and 6 months after surgery, i.e. late postoperatively.\nThe local ethical committee approved the research protocol and waived informed consent of participants.\n[SUBTITLE] Resective surgery [SUBSECTION] The patients in this study were preoperatively considered to have either a low-grade glioma or an anaplastic focus in a previously low-grade glioma. Therefore, the target volume for resection consisted of the T2/FLAIR volume. To minimise the risk of neurological morbidity this target volume was restricted by eloquent brain areas as determined by intraoperative cortical and subcortical electrostimulation mapping. The relation between the tumour and eloquent areas was determined according to the definitions used in the prognostic classification system of Chang et al [2]. Preoperative functional MRI, magnetic source imaging and diffusion tensor imaging tractography were obtained, as required by the location of the tumour. In general, the resection proceeded until eloquent brain areas were reached or volumetric tumour resection was complete. The tumour was subpially resected along sulci and fissures to minimise ischaemia around the resection cavity.\nThe patients in this study were preoperatively considered to have either a low-grade glioma or an anaplastic focus in a previously low-grade glioma. Therefore, the target volume for resection consisted of the T2/FLAIR volume. To minimise the risk of neurological morbidity this target volume was restricted by eloquent brain areas as determined by intraoperative cortical and subcortical electrostimulation mapping. The relation between the tumour and eloquent areas was determined according to the definitions used in the prognostic classification system of Chang et al [2]. Preoperative functional MRI, magnetic source imaging and diffusion tensor imaging tractography were obtained, as required by the location of the tumour. In general, the resection proceeded until eloquent brain areas were reached or volumetric tumour resection was complete. The tumour was subpially resected along sulci and fissures to minimise ischaemia around the resection cavity.\n[SUBTITLE] MRI data acquisition [SUBSECTION] MR imaging was performed on a 1.5T system (Siemens Sonata or Avanto; Siemens Medical Systems, Erlangen, Germany). 
The imaging protocol included non-enhanced axial T1-weighted spin echo images [repetition time/echo time (TR/TE) 520-600/8-12 ms] with 5-mm section thickness and axial T2-weighted turbo spin echo images (TR/TE 5190-8670/93-101 ms) with 5-mm section thickness. Sagittal 3D turbo fluid-attenuated inversion-recovery (tFLAIR) images [repetition time/echo time/inversion time (TR/TE/TI) 6500/355/2200 ms] with 1.3-mm section thickness plus axial multiplanar reconstructions (MPR) and axial single shot spin echo echo-planar diffusion-weighted (DWI) images (TR/TE 3400/122 ms) with 5-mm section thickness were added to the imaging protocol in 2009. Diffusion gradients were applied along 3 orthogonal directions using b-values of 0, 500 and 1000 s/ mm2. Apparent diffusion coefficient (ADC) maps were calculated from the DWI images. Post-contrast (0.2 mmol/Kg) sagittal 3D T1-weighted gradient-echo (MPRAGE) images (TR/TE/TI 2300-2700/5-4.5/950 ms) with 1- to 1.5-mm section thickness and axial T1-weighted spin echo images with 5-mm section thickness were obtained. In all patients preoperative imaging was repeated at our institution for navigation protocol purposes, even if preoperative imaging was available from a referring hospital.\nMR imaging was performed on a 1.5T system (Siemens Sonata or Avanto; Siemens Medical Systems, Erlangen, Germany). The imaging protocol included non-enhanced axial T1-weighted spin echo images [repetition time/echo time (TR/TE) 520-600/8-12 ms] with 5-mm section thickness and axial T2-weighted turbo spin echo images (TR/TE 5190-8670/93-101 ms) with 5-mm section thickness. Sagittal 3D turbo fluid-attenuated inversion-recovery (tFLAIR) images [repetition time/echo time/inversion time (TR/TE/TI) 6500/355/2200 ms] with 1.3-mm section thickness plus axial multiplanar reconstructions (MPR) and axial single shot spin echo echo-planar diffusion-weighted (DWI) images (TR/TE 3400/122 ms) with 5-mm section thickness were added to the imaging protocol in 2009. Diffusion gradients were applied along 3 orthogonal directions using b-values of 0, 500 and 1000 s/ mm2. Apparent diffusion coefficient (ADC) maps were calculated from the DWI images. Post-contrast (0.2 mmol/Kg) sagittal 3D T1-weighted gradient-echo (MPRAGE) images (TR/TE/TI 2300-2700/5-4.5/950 ms) with 1- to 1.5-mm section thickness and axial T1-weighted spin echo images with 5-mm section thickness were obtained. In all patients preoperative imaging was repeated at our institution for navigation protocol purposes, even if preoperative imaging was available from a referring hospital.\n[SUBTITLE] MRI volumetry [SUBSECTION] Tumour volumes were measured using image fusion and volumetric software (BrainLab iPlan Cranial 2.6; BrainLab AG, Feldkirchen, Germany). MRI volumetry was based on the Cavalieri principle which provides unbiased volume estimates [11]. For this purpose tumour contours were manually segmented on sequential axial images and verified in the coronal and sagittal reconstruction planes. The sum of tumour contour surfaces of an MRI study was multiplied by slice thickness to obtain the estimated volume in cm3. This method has been demonstrated to be reproducible and accurate [12, 13]. In this way preoperative tumour volumes were determined for T2-, FLAIR- and gadolinium-enhanced T1-weighted images. The residual tumour after resection was also determined on T2- and FLAIR-weighted sequences for early and late postoperative imaging. 
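(Illustrative aside, not part of the original methods: the Cavalieri estimate described above amounts to summing the manually segmented contour areas over the axial slices and multiplying by the section thickness. A minimal sketch with hypothetical function and variable names; the actual measurements were performed in the BrainLab iPlan software, not in code.)

```python
def cavalieri_volume_cm3(slice_areas_mm2, slice_thickness_mm):
    """Cavalieri estimate: summed contour areas times slice thickness.

    slice_areas_mm2: one segmented tumour contour area per axial slice (mm^2).
    slice_thickness_mm: MRI section thickness in mm (e.g. 5 mm for the axial T2 series).
    Returns the estimated volume in cm^3 (1 cm^3 = 1000 mm^3).
    """
    volume_mm3 = sum(slice_areas_mm2) * slice_thickness_mm
    return volume_mm3 / 1000.0

# Example: ten 5-mm slices with 400 mm^2 of tumour each -> 20 cm^3
print(cavalieri_volume_cm3([400.0] * 10, 5.0))
```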
The postoperative ischaemic volume was measured on an early postoperative diffusion-weighted sequence. Tumour and ischaemia were manually segmented independently by two observers (SB, PW) and disagreement was resolved by consensus with a third observer (ES).\nTumour volumes were measured using image fusion and volumetric software (BrainLab iPlan Cranial 2.6; BrainLab AG, Feldkirchen, Germany). MRI volumetry was based on the Cavalieri principle which provides unbiased volume estimates [11]. For this purpose tumour contours were manually segmented on sequential axial images and verified in the coronal and sagittal reconstruction planes. The sum of tumour contour surfaces of an MRI study was multiplied by slice thickness to obtain the estimated volume in cm3. This method has been demonstrated to be reproducible and accurate [12, 13]. In this way preoperative tumour volumes were determined for T2-, FLAIR- and gadolinium-enhanced T1-weighted images. The residual tumour after resection was also determined on T2- and FLAIR-weighted sequences for early and late postoperative imaging. The postoperative ischaemic volume was measured on an early postoperative diffusion-weighted sequence. Tumour and ischaemia were manually segmented independently by two observers (SB, PW) and disagreement was resolved by consensus with a third observer (ES).\n[SUBTITLE] Comparison of volume measurements [SUBSECTION] Several volume measurements were compared. To determine the effect of MRI timing on tumour volume measurements, early and late postoperative volumes were compared for T2- and FLAIR-weighted MRI. To determine the effect of pulse-sequence on tumour volume assessment, T2 and FLAIR volumes were compared on preoperative, early and late postoperative MRI. To determine the effect of resection-induced ischaemia on differences in residual tumour volumes, tissue volumes with restricted diffusion on early postoperative MRI were compared with the difference between early and late postoperative volumes for T2- and FLAIR-weighted imaging.\nSeveral volume measurements were compared. To determine the effect of MRI timing on tumour volume measurements, early and late postoperative volumes were compared for T2- and FLAIR-weighted MRI. To determine the effect of pulse-sequence on tumour volume assessment, T2 and FLAIR volumes were compared on preoperative, early and late postoperative MRI. To determine the effect of resection-induced ischaemia on differences in residual tumour volumes, tissue volumes with restricted diffusion on early postoperative MRI were compared with the difference between early and late postoperative volumes for T2- and FLAIR-weighted imaging.\n[SUBTITLE] Statistical analysis [SUBSECTION] Tumour volumes for individual patients were plotted for visual inspection. In the case of perfect agreement between two volumes, a linear regression coefficient of 1.0 with an intercept of 0 was expected (plotted as a dashed line) with a correlation coefficient of 1.0. The actual regression coefficients with 95% confidence intervals were calculated by linear regression analysis with a fixed intercept of 0. As the volumes were considered not to be normally distributed, Spearman’s correlation coefficient was calculated. Bland-Altman plots were created to visualise agreement between volume estimates. Absolute volume differences were compared using the Wilcoxon signed rank test for paired samples.\nTumour volumes for individual patients were plotted for visual inspection. 
In the case of perfect agreement between two volumes, a linear regression coefficient of 1.0 with an intercept of 0 was expected (plotted as a dashed line) with a correlation coefficient of 1.0. The actual regression coefficients with 95% confidence intervals were calculated by linear regression analysis with a fixed intercept of 0. As the volumes were considered not to be normally distributed, Spearman’s correlation coefficient was calculated. Bland-Altman plots were created to visualise agreement between volume estimates. Absolute volume differences were compared using the Wilcoxon signed rank test for paired samples.", "All adult patients with a histopathological diagnosis of a glioma grade 1 to 4 according to the WHO 2007 criteria [10], who had surgery between January 2007 and December 2009 were retrieved from a consecutive patient series in an electronic database of our institution, which is a tertiary referral centre for brain tumour treatment. Then, patients with first-time resective surgery were selected for analysis, excluding stereotactic or open biopsies. In order to ensure that the target volume for resection was based on T2-/FLAIR-weighted imaging, only gliomas with no or minimal gadolinium enhancement on preoperative T1-weighted imaging were selected. However, focal or faint enhancement arbitrarily up to 15% of the T2/FLAIR volume, as determined volumetrically, was allowed. Furthermore, MR imaging was required to be available (a) preoperatively, (b) within 48 h of surgery, i.e. early postoperatively, and (c) between 2 and 6 months after surgery, i.e. late postoperatively.\nThe local ethical committee approved the research protocol and waived informed consent of participants.", "The patients in this study were preoperatively considered to have either a low-grade glioma or an anaplastic focus in a previously low-grade glioma. Therefore, the target volume for resection consisted of the T2/FLAIR volume. To minimise the risk of neurological morbidity this target volume was restricted by eloquent brain areas as determined by intraoperative cortical and subcortical electrostimulation mapping. The relation between the tumour and eloquent areas was determined according to the definitions used in the prognostic classification system of Chang et al [2]. Preoperative functional MRI, magnetic source imaging and diffusion tensor imaging tractography were obtained, as required by the location of the tumour. In general, the resection proceeded until eloquent brain areas were reached or volumetric tumour resection was complete. The tumour was subpially resected along sulci and fissures to minimise ischaemia around the resection cavity.", "MR imaging was performed on a 1.5T system (Siemens Sonata or Avanto; Siemens Medical Systems, Erlangen, Germany). The imaging protocol included non-enhanced axial T1-weighted spin echo images [repetition time/echo time (TR/TE) 520-600/8-12 ms] with 5-mm section thickness and axial T2-weighted turbo spin echo images (TR/TE 5190-8670/93-101 ms) with 5-mm section thickness. Sagittal 3D turbo fluid-attenuated inversion-recovery (tFLAIR) images [repetition time/echo time/inversion time (TR/TE/TI) 6500/355/2200 ms] with 1.3-mm section thickness plus axial multiplanar reconstructions (MPR) and axial single shot spin echo echo-planar diffusion-weighted (DWI) images (TR/TE 3400/122 ms) with 5-mm section thickness were added to the imaging protocol in 2009. Diffusion gradients were applied along 3 orthogonal directions using b-values of 0, 500 and 1000 s/ mm2. 
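(For illustration only: the ADC maps mentioned in the following sentence are conventionally obtained from a log-linear fit of the diffusion-weighted signal over the acquired b-values, S(b) ≈ S0·exp(−b·ADC). The sketch below assumes that standard monoexponential model and is not the scanner manufacturer's implementation.)

```python
import numpy as np

def adc_map(dwi, b_values=(0.0, 500.0, 1000.0)):
    """Per-voxel least-squares fit of S(b) = S0 * exp(-b * ADC).

    dwi: array of shape (n_b, ...) with one diffusion-weighted volume per b-value (s/mm^2).
    Returns the ADC in mm^2/s with the spatial shape of a single volume.
    """
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.clip(dwi, 1e-6, None))   # avoid log(0) in background voxels
    b_centered = b - b.mean()
    # slope of log-signal versus b equals -ADC (ordinary least squares)
    slope = np.tensordot(b_centered, log_s - log_s.mean(axis=0), axes=(0, 0)) / (b_centered ** 2).sum()
    return -slope

# Example: a single voxel with a true ADC of 1e-3 mm^2/s
signal = np.exp(-np.array([0.0, 500.0, 1000.0]) * 1e-3)[:, None]
print(adc_map(signal))   # ~[0.001]
```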
Apparent diffusion coefficient (ADC) maps were calculated from the DWI images. Post-contrast (0.2 mmol/Kg) sagittal 3D T1-weighted gradient-echo (MPRAGE) images (TR/TE/TI 2300-2700/5-4.5/950 ms) with 1- to 1.5-mm section thickness and axial T1-weighted spin echo images with 5-mm section thickness were obtained. In all patients preoperative imaging was repeated at our institution for navigation protocol purposes, even if preoperative imaging was available from a referring hospital.", "Tumour volumes were measured using image fusion and volumetric software (BrainLab iPlan Cranial 2.6; BrainLab AG, Feldkirchen, Germany). MRI volumetry was based on the Cavalieri principle which provides unbiased volume estimates [11]. For this purpose tumour contours were manually segmented on sequential axial images and verified in the coronal and sagittal reconstruction planes. The sum of tumour contour surfaces of an MRI study was multiplied by slice thickness to obtain the estimated volume in cm3. This method has been demonstrated to be reproducible and accurate [12, 13]. In this way preoperative tumour volumes were determined for T2-, FLAIR- and gadolinium-enhanced T1-weighted images. The residual tumour after resection was also determined on T2- and FLAIR-weighted sequences for early and late postoperative imaging. The postoperative ischaemic volume was measured on an early postoperative diffusion-weighted sequence. Tumour and ischaemia were manually segmented independently by two observers (SB, PW) and disagreement was resolved by consensus with a third observer (ES).", "Several volume measurements were compared. To determine the effect of MRI timing on tumour volume measurements, early and late postoperative volumes were compared for T2- and FLAIR-weighted MRI. To determine the effect of pulse-sequence on tumour volume assessment, T2 and FLAIR volumes were compared on preoperative, early and late postoperative MRI. To determine the effect of resection-induced ischaemia on differences in residual tumour volumes, tissue volumes with restricted diffusion on early postoperative MRI were compared with the difference between early and late postoperative volumes for T2- and FLAIR-weighted imaging.", "Tumour volumes for individual patients were plotted for visual inspection. In the case of perfect agreement between two volumes, a linear regression coefficient of 1.0 with an intercept of 0 was expected (plotted as a dashed line) with a correlation coefficient of 1.0. The actual regression coefficients with 95% confidence intervals were calculated by linear regression analysis with a fixed intercept of 0. As the volumes were considered not to be normally distributed, Spearman’s correlation coefficient was calculated. Bland-Altman plots were created to visualise agreement between volume estimates. Absolute volume differences were compared using the Wilcoxon signed rank test for paired samples.", "[SUBTITLE] Patient and MRI characteristics and histopathology [SUBSECTION] The study population was a subset of 58 patients out of 223 with a glioma diagnosis in the inclusion time interval. Table 1 lists the details of the study population, consisting of 25 women and 33 men with a mean age of 44.7 years (range 20 to 70).\nTable 1 Patient demographics and MRI characteristics\nCharacteristic | WHO 1 | WHO 2 | WHO 3 | WHO 4 | Total\nn (%) | 2 (3%) | 35 (60%) | 14 (24%) | 7 (12%) | 58\nPatient demographics:\nAge, mean (SE) | 36.0 (0.4) | 45.4 (1.7) | 45.2 (3.7) | 42.8 (7.0) | 44.7 (1.6)\nFemale/male | 0/2 | 18/17 | 5/9 | 1/6 | 25/33\nNo. pts with radiotherapy | 0 | 2 | 6 | 2 | 10\nNo. pts with chemo-irradiation | 0 | 0 | 2 | 5 | 7\nTumour lateralisation and location:\nL/R | 0/2 | 15/20 | 5/9 | 3/4 | 23/35\nNo. pts with eloquent tumour location | 2 | 26 | 7 | 6 | 41\nFrontal | 0 | 8 | 7 | 1 | 16\nSMA | 0 | 7 | 2 | 2 | 11\nParietal | 0 | 6 | 3 | 2 | 11\nTemporal | 0 | 2 | 1 | 1 | 4\nInsula | 2 | 12 | 1 | 1 | 16\nMRI characteristics:\nPreoperative T2 hyperintense tumour volume in cm3, mean (SE) | 135.0 (58.2) | 71.6 (8.7) | 129.1 (19.8) | 76.6 (12.2) | 88.3 (8.0)\nPreoperative FLAIR hyperintense tumour volume in cm3, mean (SE) | 129.9 (52.0) | 72.3 (9.9) | 138.7 (29.9) | 89.8 (16.2) | 90.3 (9.7)\nDays preoperative MRI, mean (SE) | 162 (121) | 56 (15) | 27 (9) | 38 (14) | 50 (10)\nDays early postoperative MRI, mean (SE) | 1 (0) | 1.4 (0.1) | 1.8 (0.3) | 1.6 (0.4) | 1.5 (0.1)\nDays late postoperative MRI, mean (SE) | 189 (10) | 104 (7) | 117 (21) | 96 (15) | 109 (7)\nPreoperative MRI was performed at a median of 23 days (range: 1 to 282) before resection which showed no T1 gadolinium enhancement in 32 and minimal enhancement in 26 patients. The median preoperative T2 and FLAIR tumour volumes were 80.7 cm3 (range: 0.8 to 288.2) and 77.9 cm3 (range: 1.3 to 344.5), respectively. Histopathological diagnosis established WHO grade 1 in 2 patients (3%), grade 2 in 35 (60%), grade 3 in 14 (24%) and grade 4 in 7 (12%). Histopathological subtypes included 2 pilocytic astrocytomas (WHO grade 1); 14 astrocytomas, 18 oligodendrogliomas and 3 oligoastrocytomas (WHO grade 2); 6 anaplastic astrocytomas, 5 anaplastic oligodendrogliomas, 3 anaplastic oligoastrocytomas (WHO grade 3) and 7 glioblastomas (WHO grade 4).\nEarly and late postoperative T2 images were available for 52 patients; early and late postoperative FLAIR images for 14 patients. Early postoperative T2 and FLAIR images were available for 23 patients; late postoperative T2 and FLAIR images for 24 patients. Early postoperative diffusion-weighted images were available for 33 patients. Early postoperative MRI was obtained within 48 hours of resection in all patients, 44 on the first postoperative day, 14 on the second day. Late postoperative MRI was performed at a median of 100 days (range: 34 to 192) after resection; 18 within 3 months postoperatively, 31 between four and 6 months and 7 between seven and 9 months.\n[SUBTITLE] Residual tumour volumes on early compared with late postoperative MRI [SUBSECTION] The median early and late T2 residual volumes were 28.7 cm3 and 22.4 cm3, respectively, resulting in a 22% smaller residual volume on late T2 images. Residual T2 tumour volumes on early postoperative MRI demonstrated a systematically larger residual tumour volume (on average 4.3 cm3) compared with late postoperative MRI (Fig. 1a, c) with a regression coefficient of 0.767 (95%CI: 0.665–0.870), a correlation coefficient of 0.81 (p < 0.0001) and a paired Wilcoxon rank sum of V = 746.5 (p = 0.295). Similarly, the median early and late FLAIR residual volumes were 27.3 cm3 and 13.9 cm3, respectively, resulting in a 49% smaller residual volume on late FLAIR images. Systematically larger residual FLAIR tumour volumes (on average 5.7 cm3) were observed based on early postoperative MRI compared with late postoperative MRI (Fig. 1b, d) with a regression coefficient of 0.833 (95%CI: 0.693–0.973), a correlation coefficient of 0.95 (p < 0.0001) and a paired Wilcoxon rank sum of V = 78 (p = 0.119). Data plots confirmed that the differential residual tumour volumes were independent of glioma grading, subtyping and timing of late postoperative MRI.\nFig. 
1Residual tumour volumes in mL comparing early and late postoperative MRI based on T2-weighted imaging and FLAIR-weighted imaging shown as data plots (a and b) and Bland-Altman plots (c and d) respectively depicting systematic overestimation of residual tumour volume by early postoperative MRI. Each data point represents measurements obtained from one patient, tagged by glioma grade according to the legend. The straight diagonal line in a and b represents hypothetical perfect agreement and the dotted lines the actual linear regression fit and corresponding 95% confidence interval. The three dotted horizontal lines in c and d represent the average differential volume and corresponding 95% confidence interval\n\nResidual tumour volumes in mL comparing early and late postoperative MRI based on T2-weighted imaging and FLAIR-weighted imaging shown as data plots (a and b) and Bland-Altman plots (c and d) respectively depicting systematic overestimation of residual tumour volume by early postoperative MRI. Each data point represents measurements obtained from one patient, tagged by glioma grade according to the legend. The straight diagonal line in a and b represents hypothetical perfect agreement and the dotted lines the actual linear regression fit and corresponding 95% confidence interval. The three dotted horizontal lines in c and d represent the average differential volume and corresponding 95% confidence interval\nThe median early and late T2 residual volumes were 28.7 cm3 and 22.4 cm3, respectively, resulting in a 22% smaller residual volume on late T2 images. Residual T2 tumour volumes on early postoperative MRI demonstrated a systematically larger residual tumour volume (on average 4.3 cm3) compared with late postoperative MRI (Fig. 1a, c) with a regression coefficient of 0.767 (95%CI: 0.665–0.870), a correlation coefficient of 0.81 (p < 0.0001) and a paired Wilcoxon rank sum of V = 746.5 (p = 0.295). Similarly, the median early and late FLAIR residual volumes were 27.3 cm3 and 13.9 cm3, respectively, resulting in a 49% smaller residual volume on late FLAIR images. Systematically larger residual FLAIR tumour volumes (on average 5.7 cm3) were observed based on early postoperative MRI compared with late postoperative MRI (Fig. 1b, d) with a regression coefficient of 0.833 (95%CI: 0.693–0.973), a correlation coefficient of 0.95 (p < 0.0001) and a paired Wilcoxon rank sum of V = 78 (p = 0.119). Data plots confirmed that the differential residual tumour volumes were independent of glioma grading, subtyping and timing of late postoperative MRI.\nFig. 1Residual tumour volumes in mL comparing early and late postoperative MRI based on T2-weighted imaging and FLAIR-weighted imaging shown as data plots (a and b) and Bland-Altman plots (c and d) respectively depicting systematic overestimation of residual tumour volume by early postoperative MRI. Each data point represents measurements obtained from one patient, tagged by glioma grade according to the legend. The straight diagonal line in a and b represents hypothetical perfect agreement and the dotted lines the actual linear regression fit and corresponding 95% confidence interval. 
The three dotted horizontal lines in c and d represent the average differential volume and corresponding 95% confidence interval\n\nResidual tumour volumes in mL comparing early and late postoperative MRI based on T2-weighted imaging and FLAIR-weighted imaging shown as data plots (a and b) and Bland-Altman plots (c and d) respectively depicting systematic overestimation of residual tumour volume by early postoperative MRI. Each data point represents measurements obtained from one patient, tagged by glioma grade according to the legend. The straight diagonal line in a and b represents hypothetical perfect agreement and the dotted lines the actual linear regression fit and corresponding 95% confidence interval. The three dotted horizontal lines in c and d represent the average differential volume and corresponding 95% confidence interval\n[SUBTITLE] Tumour volumes on FLAIR- compared with T2-weighted imaging [SUBSECTION] FLAIR tumour volumes were marginally larger (on average 4.7 cm3) compared with T2 volumes on preoperative MRI (Fig. 2a, d) with a regression coefficient of 1.068 (95%CI: 1.033–1.102), a correlation coefficient of 0.98 (p < 0.0001) and a paired Wilcoxon rank sum of V = 419.5 (p = 0.056). FLAIR residual tumour volumes were substantially larger (on average 7.2 cm3) than T2 tumour volumes on early postoperative MRI (Fig. 2b, e) with a regression coefficient of 1.156 (95%CI: 1.049–1.262), a correlation coefficient of 0.96 (p < 0.0001) and a paired Wilcoxon rank sum of V = 40 (p = 0.009). On the late postoperative MRI, FLAIR and T2 residual tumour volumes were comparable (Fig. 2c, f): T2 tumour volumes on average 1.5 cm3 larger with a regression coefficient of 0.948 (95%CI: 0.880–1.015), a correlation coefficient of 0.94 (p < 0.0001) and a paired Wilcoxon rank sum of V = 194 (p = 0.218). Again, data plots confirmed that the differential residual tumour volumes were independent of glioma grading and timing of late postoperative MRI.\nFig. 2Residual tumour volumes in mL comparing T2- and FLAIR-weighted imaging respectively based on preoperative, early postoperative and late postoperative MRI shown as data plots (a, b and c) and Bland Altman plots (d, e and f) depicting good agreement in residual tumour volume based on preoperative and late postoperative MRI and systematic overestimation of FLAIR-weighted imaging on early postoperative MRI. Data points, tagging and line styles as in Fig. 1\n\n\nResidual tumour volumes in mL comparing T2- and FLAIR-weighted imaging respectively based on preoperative, early postoperative and late postoperative MRI shown as data plots (a, b and c) and Bland Altman plots (d, e and f) depicting good agreement in residual tumour volume based on preoperative and late postoperative MRI and systematic overestimation of FLAIR-weighted imaging on early postoperative MRI. Data points, tagging and line styles as in Fig. 1\n\nFLAIR tumour volumes were marginally larger (on average 4.7 cm3) compared with T2 volumes on preoperative MRI (Fig. 2a, d) with a regression coefficient of 1.068 (95%CI: 1.033–1.102), a correlation coefficient of 0.98 (p < 0.0001) and a paired Wilcoxon rank sum of V = 419.5 (p = 0.056). FLAIR residual tumour volumes were substantially larger (on average 7.2 cm3) than T2 tumour volumes on early postoperative MRI (Fig. 2b, e) with a regression coefficient of 1.156 (95%CI: 1.049–1.262), a correlation coefficient of 0.96 (p < 0.0001) and a paired Wilcoxon rank sum of V = 40 (p = 0.009). 
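(Purely as an illustration of the statistical workflow described in the Methods and reported here, i.e. regression through the origin, Spearman correlation, the paired Wilcoxon signed-rank test and Bland-Altman limits of agreement: a sketch with hypothetical volumes, assuming scipy is available; the numbers are not the study data, and the paper's V statistics come from R-style output whose definition may differ slightly from scipy's default.)

```python
import numpy as np
from scipy import stats

def agreement_stats(early, late):
    """Agreement statistics for paired early/late residual volumes (cm^3)."""
    early, late = np.asarray(early, float), np.asarray(late, float)
    slope = (early * late).sum() / (early ** 2).sum()   # regression of late on early, intercept fixed at 0
    rho, rho_p = stats.spearmanr(early, late)            # rank correlation (volumes not normally distributed)
    w_stat, w_p = stats.wilcoxon(early, late)             # paired Wilcoxon signed-rank test
    diff = early - late
    half_width = 1.96 * diff.std(ddof=1)
    return {
        "slope_through_origin": slope,
        "spearman_rho": rho, "spearman_p": rho_p,
        "wilcoxon_statistic": w_stat, "wilcoxon_p": w_p,
        "bland_altman_mean_diff": diff.mean(),
        "bland_altman_limits": (diff.mean() - half_width, diff.mean() + half_width),
    }

# Hypothetical five-patient example (not the study data)
print(agreement_stats([30.1, 12.4, 55.0, 8.2, 40.3], [24.0, 10.1, 50.2, 7.5, 33.8]))
```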
On the late postoperative MRI, FLAIR and T2 residual tumour volumes were comparable (Fig. 2c, f): T2 tumour volumes on average 1.5 cm3 larger with a regression coefficient of 0.948 (95%CI: 0.880–1.015), a correlation coefficient of 0.94 (p < 0.0001) and a paired Wilcoxon rank sum of V = 194 (p = 0.218). Again, data plots confirmed that the differential residual tumour volumes were independent of glioma grading and timing of late postoperative MRI.\nFig. 2Residual tumour volumes in mL comparing T2- and FLAIR-weighted imaging respectively based on preoperative, early postoperative and late postoperative MRI shown as data plots (a, b and c) and Bland Altman plots (d, e and f) depicting good agreement in residual tumour volume based on preoperative and late postoperative MRI and systematic overestimation of FLAIR-weighted imaging on early postoperative MRI. Data points, tagging and line styles as in Fig. 1\n\n\nResidual tumour volumes in mL comparing T2- and FLAIR-weighted imaging respectively based on preoperative, early postoperative and late postoperative MRI shown as data plots (a, b and c) and Bland Altman plots (d, e and f) depicting good agreement in residual tumour volume based on preoperative and late postoperative MRI and systematic overestimation of FLAIR-weighted imaging on early postoperative MRI. Data points, tagging and line styles as in Fig. 1\n\n[SUBTITLE] Postoperative ischaemia [SUBSECTION] The median restricted diffusion volume was 6.1 cm3. The restricted diffusion volume only weakly correlated with the difference between early and late postoperative volumes on T2-weighted imaging (Fig. 3a) with a correlation coefficient of 0.32 (p = 0.089). The restricted diffusion volume however did correlate more strongly with the difference between early and late postoperative volumes on FLAIR-weighted imaging (Fig. 3b) with a regression coefficient of 0.877 (95%CI: 0.486–1.267) and a correlation coefficient of 0.76 (p = 0.004). In addition, the restricted diffusion volume also correlated with the difference between FLAIR and T2 volumes on early postoperative MRI (Fig. 3c) with a regression coefficient of 0.655 (95%CI: 0.440–0.871) and a correlation coefficient of 0.77 (p = 0.0001).\nFig. 3Plots of diffusion restriction volume on early postoperative MRI and (a) the difference in T2 residual tumour volumes of early and late postoperative MRI, (b) the difference in FLAIR residual tumour volumes of early and late postoperative MRI and (c) the difference in early postoperative MRI residual tumour volumes of T2- and FLAIR-weighted imaging. Data points, tagging and line styles as in Fig. 1\n\n\nPlots of diffusion restriction volume on early postoperative MRI and (a) the difference in T2 residual tumour volumes of early and late postoperative MRI, (b) the difference in FLAIR residual tumour volumes of early and late postoperative MRI and (c) the difference in early postoperative MRI residual tumour volumes of T2- and FLAIR-weighted imaging. Data points, tagging and line styles as in Fig. 1\n\nThe median restricted diffusion volume was 6.1 cm3. The restricted diffusion volume only weakly correlated with the difference between early and late postoperative volumes on T2-weighted imaging (Fig. 3a) with a correlation coefficient of 0.32 (p = 0.089). The restricted diffusion volume however did correlate more strongly with the difference between early and late postoperative volumes on FLAIR-weighted imaging (Fig. 
3b) with a regression coefficient of 0.877 (95%CI: 0.486–1.267) and a correlation coefficient of 0.76 (p = 0.004). In addition, the restricted diffusion volume also correlated with the difference between FLAIR and T2 volumes on early postoperative MRI (Fig. 3c) with a regression coefficient of 0.655 (95%CI: 0.440–0.871) and a correlation coefficient of 0.77 (p = 0.0001).\nFig. 3 Plots of diffusion restriction volume on early postoperative MRI and (a) the difference in T2 residual tumour volumes of early and late postoperative MRI, (b) the difference in FLAIR residual tumour volumes of early and late postoperative MRI and (c) the difference in early postoperative MRI residual tumour volumes of T2- and FLAIR-weighted imaging. Data points, tagging and line styles as in Fig. 1\n", "The study population was a subset of 58 patients out of 223 with a glioma diagnosis in the inclusion time interval. Table 1 lists the details of the study population, consisting of 25 women and 33 men with a mean age of 44.7 years (range 20 to 70).\nTable 1 Patient demographics and MRI characteristics\nCharacteristic | WHO 1 | WHO 2 | WHO 3 | WHO 4 | Total\nn (%) | 2 (3%) | 35 (60%) | 14 (24%) | 7 (12%) | 58\nPatient demographics:\nAge, mean (SE) | 36.0 (0.4) | 45.4 (1.7) | 45.2 (3.7) | 42.8 (7.0) | 44.7 (1.6)\nFemale/male | 0/2 | 18/17 | 5/9 | 1/6 | 25/33\nNo. pts with radiotherapy | 0 | 2 | 6 | 2 | 10\nNo. pts with chemo-irradiation | 0 | 0 | 2 | 5 | 7\nTumour lateralisation and location:\nL/R | 0/2 | 15/20 | 5/9 | 3/4 | 23/35\nNo. pts with eloquent tumour location | 2 | 26 | 7 | 6 | 41\nFrontal | 0 | 8 | 7 | 1 | 16\nSMA | 0 | 7 | 2 | 2 | 11\nParietal | 0 | 6 | 3 | 2 | 11\nTemporal | 0 | 2 | 1 | 1 | 4\nInsula | 2 | 12 | 1 | 1 | 16\nMRI characteristics:\nPreoperative T2 hyperintense tumour volume in cm3, mean (SE) | 135.0 (58.2) | 71.6 (8.7) | 129.1 (19.8) | 76.6 (12.2) | 88.3 (8.0)\nPreoperative FLAIR hyperintense tumour volume in cm3, mean (SE) | 129.9 (52.0) | 72.3 (9.9) | 138.7 (29.9) | 89.8 (16.2) | 90.3 (9.7)\nDays preoperative MRI, mean (SE) | 162 (121) | 56 (15) | 27 (9) | 38 (14) | 50 (10)\nDays early postoperative MRI, mean (SE) | 1 (0) | 1.4 (0.1) | 1.8 (0.3) | 1.6 (0.4) | 1.5 (0.1)\nDays late postoperative MRI, mean (SE) | 189 (10) | 104 (7) | 117 (21) | 96 (15) | 109 (7)\nPreoperative MRI was performed at a median of 23 days (range: 1 to 282) before resection which showed no T1 gadolinium enhancement in 32 and minimal enhancement in 26 patients. The median preoperative T2 and FLAIR tumour volumes were 80.7 cm3 (range: 0.8 to 288.2) and 77.9 cm3 (range: 1.3 to 344.5), respectively. Histopathological diagnosis established WHO grade 1 in 2 patients (3%), grade 2 in 35 (60%), grade 3 in 14 (24%) and grade 4 in 7 (12%). Histopathological subtypes included 2 pilocytic astrocytomas (WHO grade 1); 14 astrocytomas, 18 oligodendrogliomas and 3 oligoastrocytomas (WHO grade 2); 6 anaplastic astrocytomas, 5 anaplastic oligodendrogliomas, 3 anaplastic oligoastrocytomas (WHO grade 3) and 7 glioblastomas (WHO grade 4).\nEarly and late postoperative T2 images were available for 52 patients; early and late postoperative FLAIR images for 14 patients. Early postoperative T2 and FLAIR images were available for 23 patients; late postoperative T2 and FLAIR images for 24 patients. Early postoperative diffusion-weighted images were available for 33 patients. 
Early postoperative MRI was obtained within 48 hours of resection in all patients, 44 on the first postoperative day, 14 on the second day. Late postoperative MRI was performed at a median of 100 days (range: 34 to 192) after resection; 18 within 3 months postoperatively, 31 between four and 6 months and 7 between seven and 9 months.", "The median early and late T2 residual volumes were 28.7 cm3 and 22.4 cm3, respectively, resulting in a 22% smaller residual volume on late T2 images. Residual T2 tumour volumes on early postoperative MRI demonstrated a systematically larger residual tumour volume (on average 4.3 cm3) compared with late postoperative MRI (Fig. 1a, c) with a regression coefficient of 0.767 (95%CI: 0.665–0.870), a correlation coefficient of 0.81 (p < 0.0001) and a paired Wilcoxon rank sum of V = 746.5 (p = 0.295). Similarly, the median early and late FLAIR residual volumes were 27.3 cm3 and 13.9 cm3, respectively, resulting in a 49% smaller residual volume on late FLAIR images. Systematically larger residual FLAIR tumour volumes (on average 5.7 cm3) were observed based on early postoperative MRI compared with late postoperative MRI (Fig. 1b, d) with a regression coefficient of 0.833 (95%CI: 0.693–0.973), a correlation coefficient of 0.95 (p < 0.0001) and a paired Wilcoxon rank sum of V = 78 (p = 0.119). Data plots confirmed that the differential residual tumour volumes were independent of glioma grading, subtyping and timing of late postoperative MRI.\nFig. 1Residual tumour volumes in mL comparing early and late postoperative MRI based on T2-weighted imaging and FLAIR-weighted imaging shown as data plots (a and b) and Bland-Altman plots (c and d) respectively depicting systematic overestimation of residual tumour volume by early postoperative MRI. Each data point represents measurements obtained from one patient, tagged by glioma grade according to the legend. The straight diagonal line in a and b represents hypothetical perfect agreement and the dotted lines the actual linear regression fit and corresponding 95% confidence interval. The three dotted horizontal lines in c and d represent the average differential volume and corresponding 95% confidence interval\n\nResidual tumour volumes in mL comparing early and late postoperative MRI based on T2-weighted imaging and FLAIR-weighted imaging shown as data plots (a and b) and Bland-Altman plots (c and d) respectively depicting systematic overestimation of residual tumour volume by early postoperative MRI. Each data point represents measurements obtained from one patient, tagged by glioma grade according to the legend. The straight diagonal line in a and b represents hypothetical perfect agreement and the dotted lines the actual linear regression fit and corresponding 95% confidence interval. The three dotted horizontal lines in c and d represent the average differential volume and corresponding 95% confidence interval", "FLAIR tumour volumes were marginally larger (on average 4.7 cm3) compared with T2 volumes on preoperative MRI (Fig. 2a, d) with a regression coefficient of 1.068 (95%CI: 1.033–1.102), a correlation coefficient of 0.98 (p < 0.0001) and a paired Wilcoxon rank sum of V = 419.5 (p = 0.056). FLAIR residual tumour volumes were substantially larger (on average 7.2 cm3) than T2 tumour volumes on early postoperative MRI (Fig. 2b, e) with a regression coefficient of 1.156 (95%CI: 1.049–1.262), a correlation coefficient of 0.96 (p < 0.0001) and a paired Wilcoxon rank sum of V = 40 (p = 0.009). 
On the late postoperative MRI, FLAIR and T2 residual tumour volumes were comparable (Fig. 2c, f): T2 tumour volumes on average 1.5 cm3 larger with a regression coefficient of 0.948 (95%CI: 0.880–1.015), a correlation coefficient of 0.94 (p < 0.0001) and a paired Wilcoxon rank sum of V = 194 (p = 0.218). Again, data plots confirmed that the differential residual tumour volumes were independent of glioma grading and timing of late postoperative MRI.\nFig. 2Residual tumour volumes in mL comparing T2- and FLAIR-weighted imaging respectively based on preoperative, early postoperative and late postoperative MRI shown as data plots (a, b and c) and Bland Altman plots (d, e and f) depicting good agreement in residual tumour volume based on preoperative and late postoperative MRI and systematic overestimation of FLAIR-weighted imaging on early postoperative MRI. Data points, tagging and line styles as in Fig. 1\n\n\nResidual tumour volumes in mL comparing T2- and FLAIR-weighted imaging respectively based on preoperative, early postoperative and late postoperative MRI shown as data plots (a, b and c) and Bland Altman plots (d, e and f) depicting good agreement in residual tumour volume based on preoperative and late postoperative MRI and systematic overestimation of FLAIR-weighted imaging on early postoperative MRI. Data points, tagging and line styles as in Fig. 1\n", "The median restricted diffusion volume was 6.1 cm3. The restricted diffusion volume only weakly correlated with the difference between early and late postoperative volumes on T2-weighted imaging (Fig. 3a) with a correlation coefficient of 0.32 (p = 0.089). The restricted diffusion volume however did correlate more strongly with the difference between early and late postoperative volumes on FLAIR-weighted imaging (Fig. 3b) with a regression coefficient of 0.877 (95%CI: 0.486–1.267) and a correlation coefficient of 0.76 (p = 0.004). In addition, the restricted diffusion volume also correlated with the difference between FLAIR and T2 volumes on early postoperative MRI (Fig. 3c) with a regression coefficient of 0.655 (95%CI: 0.440–0.871) and a correlation coefficient of 0.77 (p = 0.0001).\nFig. 3Plots of diffusion restriction volume on early postoperative MRI and (a) the difference in T2 residual tumour volumes of early and late postoperative MRI, (b) the difference in FLAIR residual tumour volumes of early and late postoperative MRI and (c) the difference in early postoperative MRI residual tumour volumes of T2- and FLAIR-weighted imaging. Data points, tagging and line styles as in Fig. 1\n\n\nPlots of diffusion restriction volume on early postoperative MRI and (a) the difference in T2 residual tumour volumes of early and late postoperative MRI, (b) the difference in FLAIR residual tumour volumes of early and late postoperative MRI and (c) the difference in early postoperative MRI residual tumour volumes of T2- and FLAIR-weighted imaging. Data points, tagging and line styles as in Fig. 1\n", "This study population consisted of a consecutive series of patients in whom resective surgery targeted a FLAIR-based tumour volume. Owing to the absence of overt T1 gadolinium enhancement these lesions were deemed low-grade based on preoperative diagnostic neuroimaging. The histopathological diagnosis however established that 21 (36%) of the gliomas were in fact of high grade (WHO grade 3 or 4), reflecting the well-known discordance between MRI and histopathological grading of glioma [14–16]. Three main observations were made in this study. 
First, a systematic and substantial overestimation of residual tumour on MRI within 48 h of resection, indicating that late MRI months after surgery is more suitable to determine the extent of resection than an early MRI days after resection. Second, our data do not support a preference for either FLAIR- or T2-weighted imaging for either pre- or late postoperative imaging, although FLAIR-weighted imaging is more sensitive for early postoperative overestimation than T2-weighted imaging. Third, resection-induced diffusion restriction contributes to the residual volume overestimation of early compared with late postoperative MRI. These main findings are illustrated with a patient example in Fig. 4.\nFig. 4An example of preoperative, early and late postoperative MRI displaying T2-, FLAIR- and diffusion-weighted axial images and volume contours. This 36-year old man with an oligoastrocytoma in the left cingulate gyrus had preoperative T2 and FLAIR tumour volumes of 101 and 94 cm3, respectively, outlined in green, with faint (2%) T1 gadolinium enhancement 3 weeks before resection. The residual T2 and FLAIR volumes 2 days after surgery were 35 and 61 cm3, respectively, outlined in red, with a diffusion restriction volume of 37 cm3. The residual T2 and FLAIR volumes 97 days after resection measured 27 and 21 cm3, respectively, outlined in yellow. Note the involution of the hyperintensity at the lateral margin of the resection cavity with restricted diffusion, interpreted as resection-induced ischaemia, and the stable hyperintensity at the posterior margin of the resection cavity, presumably genuine residual glioma within the corticospinal tract\n\nAn example of preoperative, early and late postoperative MRI displaying T2-, FLAIR- and diffusion-weighted axial images and volume contours. This 36-year old man with an oligoastrocytoma in the left cingulate gyrus had preoperative T2 and FLAIR tumour volumes of 101 and 94 cm3, respectively, outlined in green, with faint (2%) T1 gadolinium enhancement 3 weeks before resection. The residual T2 and FLAIR volumes 2 days after surgery were 35 and 61 cm3, respectively, outlined in red, with a diffusion restriction volume of 37 cm3. The residual T2 and FLAIR volumes 97 days after resection measured 27 and 21 cm3, respectively, outlined in yellow. Note the involution of the hyperintensity at the lateral margin of the resection cavity with restricted diffusion, interpreted as resection-induced ischaemia, and the stable hyperintensity at the posterior margin of the resection cavity, presumably genuine residual glioma within the corticospinal tract\nSeveral factors likely contribute to the reduction of T2/FLAIR hyperintense regions between early and late postoperative MRI. It is unlikely that residual tumour actually regresses in the months following resection, although in theory involution of tumoural tissue induced by hypoperfusion due to surgical devascularisation can be postulated. The contribution of oedema and contusion of brain tissue surrounding the resection cavity, which subsides in weeks to months after resection, seems to be a more likely explanation. This reversible oedema can be either due to tumour compression that was present before resection, similar to peritumoral oedema as observed in some meningiomas for instance, or due to surgical manipulation of the tissue surrounding the resection cavity. 
Furthermore, the diffusion restricted areas may evolve in ischaemic (non-tumoral) tissue with volume involution over weeks to months similar to spontaneous infarction [17].\nIt is not surprising that T2 and FLAIR sequences provide similar tumour volumes on preoperative and late postoperative MRI because these sequences essentially measure the same physical properties of tissue except for the suppression of cerebrospinal fluid signal in the FLAIR sequence. This suppression provides superior contrast among tumour, cerebrospinal fluid and brain, thereby facilitating for instance segmentation algorithms to delineate tumour volume in neuronavigation preplanning [18–20]. Nevertheless, FLAIR tumour volumes tend to be larger than T2 volumes on early postoperative MRI. This could be explained by differences in sensitivity in detecting ischaemia especially in grey matter [21–24] rather than by differences in the detection of residual tumour or oedema originating from tumour compression or surgical manipulation.\nRecent studies on the extent of resection of non-enhancing gliomas have used various postoperative MRI timings and sequences to determine residual tumour. Examples include using T2 or FLAIR MRI within 48 h postoperatively [25], MRI less than 48 h postoperatively and FLAIR-weighted imaging with subtraction of areas that were likely to contain oedema [3], unspecified postoperative timing of T2- or FLAIR-weighted MRI when available [26], unspecified timing of FLAIR-weighted MRI [4], T1-weighted MRI at three to 6 months postoperatively [27], unspecified postoperative timing of T2- and FLAIR-weighted MRI [28] and T2-weighted MRI at five or six weeks postoperatively [29]. In order to determine the impact of residual tumour on the course of disease and to be able to compare the extent of resection between patient series, it seems reasonable to use standardised objective criteria for measurements of residual tumour. Based on our observations, we recommend avoiding measuring the extent of resection on MRI immediately after resection. We also recommend incorporating diffusion-weighted imaging into the immediate postoperative MRI for distinction between residual tumour and ischaemia in the reporting of results on residual tumour after non-enhancing glioma resections.\nNotwithstanding this limitation, early postoperative MRI is clearly useful for a number of other purposes. First, as several of these patients will be diagnosed with high-grade glioma, baseline imaging for response evaluation of adjuvant treatment is assured by early postoperative MRI. Second, incidental tumour progression between resection and late postoperative MRI can be detected only in comparison with an early postoperative MRI. Third, new gadolinium enhancement on late postoperative MRI can be recognised as ischaemia-related luxury perfusion rather than be misinterpreted as anaplastic transformation [30]. Fourth, immediate postoperative deficits can be placed in the context of proximity of the resection cavity to eloquent structures and resection-induced diffusion restriction of these structures. Fifth, it is likely that more educational benefit is obtained for the surgical team from early postoperative MRI with feedback information on the surgical decisions rather than from postoperative MRI months after resection.\nOne limitation of this study is that the timing of the late postoperative MRI was not standardised and serial late postoperative measurements are lacking. 
Therefore, the optimal timing for late postoperative MRI remains uncertain. Another limitation is that for only a subset of patients FLAIR-weighted imaging was available both early and late postoperatively. This is due to the fact that the FLAIR and DWI sequences were added to the routine imaging protocol in the last year of patient inclusion. Despite this potential lack of power, a systematic difference was detected between early and late postoperative residual volumes based on FLAIR-weighted imaging in 14 patients. A further limitation is that some patients were adjuvantly treated with radio- or chemotherapy protocolised by trial participation, which could potentially interfere with late residual tumour volumes. This would however result in an underestimation of the observed postoperative reduction of residual tumour volume.\nAs a consequence of the overestimation of residual tumour on early MRI, we propose that quantitation of extent of resection is preferably based on MRI months after resection. The optimal timing of postoperative MRI for standardized criteria is to be determined in future prospective studies. Furthermore, estimates of residual tumour based on early MRI are to be carefully distinguished from regions with resection-induced diffusion restriction to avoid undue underestimation of surgical treatment effects.\nIn conclusion, our observation of a systematic and substantial overestimation of residual non-enhancing volume on MRI within 48 h of resection compared with months after resection indicates that early postoperative MRI is less reliable for volumetry of non-enhancing residual glioma. Diffusion-weighted imaging to detect resection-induced ischaemia is imperative in the interpretation of postresection tumour volumes." ]
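(Worked illustration of the volumetric end points discussed above, not part of the original analysis: extent of resection is commonly expressed as the preoperative minus the residual volume, divided by the preoperative volume, and the early-versus-late discrepancies quoted in the Results follow directly from the reported median residual volumes.)

```python
def extent_of_resection_pct(preop_cm3, residual_cm3):
    """Extent of resection as a percentage of the preoperative tumour volume."""
    return 100.0 * (preop_cm3 - residual_cm3) / preop_cm3

def relative_reduction_pct(early_cm3, late_cm3):
    """How much smaller the late residual volume is, relative to the early one."""
    return 100.0 * (early_cm3 - late_cm3) / early_cm3

# Median residual volumes quoted in the Results section:
print(round(relative_reduction_pct(28.7, 22.4)))   # T2: ~22% smaller on late MRI
print(round(relative_reduction_pct(27.3, 13.9)))   # FLAIR: ~49% smaller on late MRI
```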
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", null, null, null, null, "discussion" ]
[ "Glioma", "Brain", "Neurosurgical procedures", "Magnetic resonance imaging", "Diagnostic techniques and procedures", "Diffusion magnetic resonance imaging" ]
High-resolution 3D X-ray imaging of intracranial nitinol stents.
21331601
To assess an optimized 3D imaging protocol for intracranial nitinol stents in 3D C-arm flat detector imaging. For this purpose, an image quality simulation and an in vitro study were carried out.
INTRODUCTION
Nitinol stents of various brands were placed inside an anthropomorphic head phantom, using iodine contrast. Experiments with objects were preceded by image quality and dose simulations. We varied X-ray imaging parameters in a commercially available interventional X-ray system to set 3D image quality in the contrast-noise-sharpness space. Beam quality was varied to evaluate contrast of the stents while keeping absorbed dose below recommended values. Two detector formats were used, paired with an appropriate pixel size and X-ray focus size. Zoomed reconstructions were carried out and snapshot images acquired. High contrast spatial resolution was assessed with a CT phantom.
METHODS
We found an optimal protocol for imaging intracranial nitinol stents. Contrast resolution was optimized for nickel-titanium-containing stents. A spatial resolution of more than 2.1 lp/mm allows the struts to be visualized. We obtained images of stents of various brands and a representative set of images is shown. Independent of the make, struts can be imaged with virtually continuous strokes. Measured absorbed doses are shown to be lower than 50 mGy Computed Tomography Dose Index (CTDI).
RESULTS
By balancing the modulation transfer of the imaging components and tuning the high-contrast imaging capabilities, we have shown that thin nitinol stent wires can be reconstructed with high contrast-to-noise ratio and good detail, while keeping radiation doses within recommended values. Experimental results compare well with imaging simulations.
CONCLUSION
[ "Alloys", "Humans", "Imaging, Three-Dimensional", "Intracranial Aneurysm", "Phantoms, Imaging", "Radiation Dosage", "Radiographic Image Interpretation, Computer-Assisted", "Stents", "Tomography, X-Ray Computed", "X-Rays" ]
3261414
Introduction
The use of flat detectors allows display of vessel anatomy with submillimeter resolution and a high contrast-to-noise ratio (CNR). Three-dimensional (3D) cone beam imaging using a flat detector in a C-arm system was adopted and eventually displayed an image quality approaching that of CT with respect to contrast resolution [1, 2]. However, despite this improvement, 3D imaging of objects with high X-ray transparency and small detail may still be difficult [3]. Materials such as nitinol, with its excellent biocompatibility and self-deployment by shape memory [4], are widely used for intracranial stents and generally yield good clinical results [5]. The use of nitinol stents as a coiling scaffold has become common practice in endovascular treatment [6–8] to prevent wire herniation [9]. The visualization of nitinol stents in the treatment of atherosclerotic stenoses [10, 11] is challenging and necessitates a highly developed X-ray imaging technique. Generally, in 2D clinical imaging protocols, only the stent end-markers made of tantalum or platinum can be recognized, but the stent body itself and its struts are barely visible owing to the low absorption of the constituent materials. To improve visualization, a high contrast resolution combined with high spatial resolution imaging is needed. With 3D cone beam imaging based on flat detectors, both CNR and spatial resolution can be tailored such that fine detail rendition is sufficient to visualize the stent's struts. We describe a vascular imaging technique validated by an image quality assessment, using phantom objects with a variety of commercially available nitinol stents. The purpose of this study is to visualize details of a stent and to support an improved analysis of its placement by exploiting a joint optimization of all components of the imaging chain and the associated 3D vascular imaging platform.
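(Hedged, illustrative aside using standard definitions rather than the authors' image quality simulation: contrast-to-noise ratio can be estimated from mean signals in a stent-strut region and a background region, and the detector pixel pitch sets a Nyquist ceiling on the achievable spatial resolution in line pairs per millimetre. The 0.154-mm pitch below is an example value, not a parameter taken from this study.)

```python
import numpy as np

def cnr(object_roi, background_roi):
    """Contrast-to-noise ratio: |difference of means| over background noise (standard definition)."""
    object_roi = np.asarray(object_roi, float)
    background_roi = np.asarray(background_roi, float)
    return abs(object_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)

def nyquist_lp_per_mm(pixel_pitch_mm):
    """Sampling-limited upper bound on resolution: 1 / (2 * pixel pitch)."""
    return 1.0 / (2.0 * pixel_pitch_mm)

# Example: a 0.154-mm pixel pitch samples up to ~3.2 lp/mm; the system MTF
# (focus size, detector, reconstruction voxel) lowers the value actually measured.
print(nyquist_lp_per_mm(0.154))
```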
null
null
Results
The high contrast limiting resolution has been optimized by balancing the relevant MTF of the imaging components, taking noise transfer into account. Focus size, detector pixel size, and the reconstruction voxel size are matched such that the transfer contributions of each parameter are balanced, resulting in an optimal voxel size and voxel number. The nearest predefined volume is chosen such that it matches the size of the stent under test. A further reduction of the voxel size would increase the image noise, whereas an increase of the voxel size would result in a spatial resolution loss. The results on spatial limiting resolution are shown in Table 3. The true resolution of the high-resolution protocol is better than the maximum reading of the Catphan™ 500 phantom and could thus not be read by this phantom. Deviations of 10–20% between measured and simulated values were found, which can be accounted for by the choice of the threshold modulation of 4%. We optimized for the MTF and the inherent system impulse response, including the stent strut diameter of up to 0.08 mm, as well as for contrast and noise. Since the basic materials of the stents are identical for all types (see Table 1), we decided to use only one stent type, namely a Wingspan nitinol stent. Only in the case of the high-resolution/high-contrast protocol are the imaging results of all stents shown, including the Pharos reference stent.\nTable 3 Limiting high contrast spatial resolution as measured with a Catphan® 500 phantom (The Phantom Factory) and simulated with the IQ analysis program\nSpatial resolution | Volume definition | Measured (lp/mm) | Simulated (lp/mm)\nStandard | Non-zoomed reconstruction volume | 0.5 | 0.5\nStandard | 33% reconstruction volume | 1.3 | 1.6\nStandard | 17% reconstruction volume | 1.6 | 1.8\nHIRES | Non-zoomed reconstruction volume | 1.1 | 1.2\nHIRES | 50% reconstruction volume | 2.0 | 2.5\nHIRES | 33% reconstruction volume | 2.1a | 3.2\na Catphan® 500 phantom range limit\n[SUBTITLE] Reconstructed stent images inside a head phantom [SUBSECTION] Figures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stent's axes and centered within the volume, where window settings on the work station were such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU, where for reference, an optical image of the stent is shown also. The reconstruction scaling for the high- and standard resolution cases is found in Table 2. Figure 2 shows the images for the high contrast tube filling of 550 HU. Apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, referring to a sequence of reconstructed images, displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the effect of streaking due to highly opaque platinum coils for 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance in the high-resolution/high-contrast mode, again for 31-HU contrast liquid. Figure 5 also shows example results of the more radiopaque material platinum in the Silk and Leo stents and the Co–Cr steel alloy-based Pharos stent. Fig. 1A Wingspan nitinol stent. 
[SUBTITLE] Reconstructed stent images inside a head phantom [SUBSECTION] Figures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stents' axes and centered within the volume, with window settings on the workstation chosen such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU; for reference, an optical image of the stent is also shown. The reconstruction scaling for the high- and standard-resolution cases is given in Table 2. Figure 2 shows the images for the high-contrast tube filling of 550 HU; apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, a sequence of reconstructed images displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the effect of streaking due to highly opaque platinum coils for the 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance of the high-resolution/high-contrast mode, again for 31-HU contrast liquid. Figure 5 also shows example results for the more radiopaque platinum in the Silk and Leo stents and for the Co–Cr steel alloy-based Pharos stent.

Fig. 1  A Wingspan nitinol stent. A comparison between (a) the high-resolution/high-contrast case, (b) the standard resolution/high-contrast case, and (c) the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub)volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast-inverted optical image is shown on the far left.

Fig. 2  A Wingspan nitinol stent. A comparison between (a) the high-resolution/high-contrast case, (b) the standard resolution/high-contrast case, and (c) the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub)volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown.

Fig. 3  A Wingspan nitinol stent. A comparison between the high-resolution cases for high contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU.

Fig. 4  Streak artifacts for Pt coils neighboring the Neuroform stent. (a) High-resolution/high-contrast case, (b) standard resolution/high-contrast case, and (c) standard resolution/standard contrast case.

Fig. 5  High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at the 33% reconstruction volume was intentionally left out because its large length does not fit this reduced volume. The steel-based Pharos stent was included for reference.

[SUBTITLE] High contrast limiting spatial resolution [SUBSECTION] Measured and modeled limiting resolutions are summarized in Table 3. The measured resolution was determined by reading the reconstructed images of a Catphan® 500 phantom in a transaxial plane.

[SUBTITLE] Dose CTDI [SUBSECTION] By adjusting the technique factors, the CTDI-type weighted dose was kept below the target value of 50 mGy for all protocols. The dose measurement and the analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was ±0.5 mGy. We observed only a small systematic difference between measured and analytically modeled dose values.

Table 4  Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol

  Protocol   Measured dose (mGy)   Simulated dose (mGy)
  Standard   45                    43
  HIRES      49                    55
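The "CTDI-type weighted dose" quoted above is conventionally computed from a pencil-chamber reading at the centre bore of the 16-cm PMMA phantom and the mean of the peripheral bore readings, weighted one third and two thirds respectively. A minimal sketch of that bookkeeping is given below; the probe readings are invented for illustration and are not the study's raw measurements:

```python
def ctdi_w(center_mgy: float, peripheral_mgy: list[float]) -> float:
    """Weighted CTDI for the 16-cm head phantom: 1/3 centre + 2/3 mean periphery."""
    mean_periphery = sum(peripheral_mgy) / len(peripheral_mgy)
    return center_mgy / 3.0 + 2.0 * mean_periphery / 3.0

# Hypothetical pencil-chamber readings in mGy -- not the study's raw data.
print(f"CTDIw = {ctdi_w(41.0, [47.0, 46.0, 48.0, 47.5]):.1f} mGy")  # ~45 mGy
```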
Conclusions
We have optimized nitinol stent imaging within the framework of a full-scale 3D X-ray imaging model. By balancing the sharpness and noise transfer of the imaging components and tuning the contrast-rendering capabilities of a 3D X-ray imaging system, we showed that thin nitinol stent struts can be viewed with a high CNR and good detail rendering. Throughout the optimization, the CTDI radiation dose was kept below recommended values. The quality of the 3D images produced with optimum system settings showed that, independent of stent type or manufacturer, the detail rendering is adequate to assess the post-deployment shape. The stent struts can be imaged as virtually continuous strokes.
[ "Reconstructed stent images inside a head phantom", "High contrast limiting spatial resolution", "Dose CTDI" ]
[ "Figures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stent's axes and centered within the volume, where window settings on the work station were such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU, where for reference, an optical image of the stent is shown also. The reconstruction scaling for the high- and standard resolution cases is found in Table 2. Figure 2 shows the images for the high contrast tube filling of 550 HU. Apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, referring to a sequence of reconstructed images, displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the effect of streaking due to highly opaque platinum coils for 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance in the high-resolution/high-contrast mode, for again 31-HU contrast liquid. Figure 5 also shows example results of the more radiopaque material platinum in the Silk and Leo stents and the Co–Cr steel alloy-based Pharos stent.\nFig. 1A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nFig. 2A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nFig. 3A Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nFig. 4Streak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nFig. 5High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nA Wingspan nitinol stent. 
A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nA Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nStreak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nHigh-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference", "Measured and modeled limiting resolutions are summarized in Table 3. The results on measured resolution were found by reading the reconstructed images from a Catphan™ 500 phantom in a transaxial plane.", "By adjusting the technique factors, a CTDI-type weighted dose is kept at the desired low value of 50 mGy for all protocols. The actual dose measurement and the analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was +/−0.5 mGy. We observed a systematic low error between measured and analytically modeled dose values.\nTable 4Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocolCTDIMeasured dose (mGy)Simulated dose (mGy)Standard4543HIRES4955\n\nMeasured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol" ]
[ null, null, null ]
[ "Introduction", "Materials and methods", "Results", "Reconstructed stent images inside a head phantom", "High contrast limiting spatial resolution", "Dose CTDI", "Discussion", "Conclusions" ]
[ "The use of flat detectors allows display of vessel anatomy with submillimeter resolution and a high contrast-to-noise ratio (CNR). Three-dimensional (3D) cone beam imaging using a flat detector in a C-arc system was adopted and eventually displayed an image quality approaching that of CT with respect to contrast resolution [1, 2]. However, and despite the above improvement, 3D imaging of objects with high X-ray transparency and small detail may still be difficult [3]. Materials such as nitinol with its excellent biocompatibility and self deployment by shape memory [4] are widely used for intracranial stents, and generally yield good clinical results [5]. The usage of nitinol stents has become a common practice in the endovascular treatment as a coiling scaffold [6–8] to prevent wire herniation [9]. The visualization of nitinol stents in treatment of atherosclerotic stenoses [10, 11] is challenging and necessitates a highly developed X-ray imaging technique. Generally, in 2D clinical imaging protocols, only the stent end-markers made of tantalum or platinum can be recognized, but the stent body itself and struts are barely visible due to the low absorption of the constituents. To improve visualization, we need a high contrast resolution combined with high spatial resolution imaging. With 3D cone beam imaging based on flat detectors, both CNR and spatial resolution can be tailored such that fine detail rendition is sufficient to visualize the stent's struts. We describe a vascular imaging technique validated by an image quality assessment, using phantom objects with a variety of commercially available Nitinol stents. The purpose of this study is to visualize details of a stent, to support an improved analysis of its placement, by exploiting a joint optimization of all components of the imaging chain and the associated 3D vascular imaging platform.", "A proprietary SW package calculates the image quality in terms of signal flow through the entire imaging chain, using image quality descriptors like Modulation Transfer Function (MTF), impulse response, noise power spectrum, low-contrast and high-contrast (spatial) resolution. Associated with these descriptors, the acquired dose can be calculated at any point in the physical chain, including a computed tomography dose index (CTDI) type of dose assessment for cone beam imaging. The analytical model also considers all the intra- and inter-component relations within the imaging system (also before and after detection). The quality description also incorporates an image quality degradation resulting from, e.g., system remnant blurring upon geometrical calibration and arc movement. The analytical model, comprehensively described by Kroon et al. [12, 13], allows us to vary the system parameters in X-ray generation, absorption, dose, and detection as well as in image processing quality, yielding accurate verified results.\nA number of self-expandable nitinol stents (Table 1), as well as one steel-based stent for reference, were deployed in plastic (infuse) tubes with an inner diameter of 3.5 mm which were inserted in a channel of an anthropomorphic head phantom (CIRS, Norfolk, Virginia; model 603), placed in the system isocenter. The tubes were filled with a diluted contrast agent: 500 ml H2O and 50 ml Visip 270 (GE), in order to produce a 550 HU density mimicking a contrast-filled vessel. The remainder of the phantom channel was filled with diluted contrast agent: 5,000 ml H2O and 50 ml Visip 270, producing 31-HU density. 
Another set of experiments was carried out by filling the tubes with 31-HU diluted contrast agent and the remaining volume again with 31 HU, representing blood. Yet, another experiment was performed by inserting platinum coiling wires close to the stents, in an air-filled cavity in the phantom, enabling an assessment of streaking effects.\nTable 1An overview of the stent propertiesTypeManufacturerLength (mm)Diameter (mm)Strut cross-section (μm)MaterialNeuroformBoston Scientific153w68*t66NitinolNatick, MAEnterpriseCordis374.5–a\nNitinolBridgewater, NJSolitaireev3204Ø 80NitinolPlymouth, MNWingspanBoston Scientific203.5w68*t73NitinolNatick, MASilkBalt Extrusion253.5–a\nNitinolMontmorency, FrFour Pt wiresLeo+Balt Extrusion252.5–a\nNitinolMontmorency, FrTwo Pt wiresPharosMicrus/Biotroniks252.75Ø 60SteelSan Jose, CACr–Co alloy\naThe manufacturers did not supply the dimensions\n\nAn overview of the stent properties\n\naThe manufacturers did not supply the dimensions\nWe used a Philips Xper™ vascular system equipped with a large (30 × 40 cm) flat detector and a 3D workstation for producing the 3D rendered images and hosting the typical 3D image post-processing modules. Images were acquired with a standard 3D protocol (with 45–49 mGy CTDI dose) using a large detector zoom format in combination with a large tube focus, or in a zoomed detector format (22-cm imaging diagonal) and a small tube focus. A set of 620 images was acquired with a 30 fr/s, resulting in a 20-s scan-time over a 200-degree arc-travel. CTDI was measured using a standard measurement protocol with a 16-cm diameter polymethyl methacrylate (PMMA) cylinder, and dose was measured with the Unfors (Billdal, Sweden) Mult-O-Meter 601-PMS. The CTDI dose was kept below 50 mGy. We determined the values using the largest irradiation field while keeping the technique factors identical to those for the high-resolution case with its smaller field. The CTDI protocol would be meaningless for a smaller beam format, as it is not irradiating the complete cylinder with a diameter of 16 cm, and thus not the peripheral probe positions. Spatial resolution was measured with a Catphan® 500 phantom (The Phantom Factory, Salem NY). The resolution reading was limited by the phantom maximum of 2.1 lp/mm. For the modeled limiting resolution analysis, a 4% modulation threshold was chosen. Object (physics governed) contrast was set by three tube voltages. This led us to the following protocols: standard 3D neuro protocol (120 kV), intracranial stent protocol (ics, 80 kV) and ics high resolution (hires, 80 kV, zoomed detector format).\nThe reconstructions were carried out in a zoomed mode, i.e., a region of interest was chosen as a fraction of the maximum volume, determined by the detector size and the projection geometry. This volume is divided into the desired voxel matrix. The corresponding technical parameters are shown in Table 2, where linear reconstruction zoom factors of 50% and 33% for the HIRES protocol and 33% and 17% for the standard protocol are listed. A voxel matrix of 2563 was applied for all cases. The final images were rendered by maximum intensity projection with a slice thickness of 5.0 mm, accommodating the stent's radial dimensions. 
The zoomed secondary reconstructions were carried out by panning the volume such that the relevant stent phantom portions were centered therein.\nTable 2Volume and voxel dimensions for the standard protocol and the high-resolution/high-contrast imaging protocolReconstruction zooming (%)Volume (2563 voxel matrix) (mm3)Voxel size (mm)HIRES5052.8 × 52.8 × 52.80.213334.4 × 34.4 × 34.40.13Standard3382.7 × 82.7 × 63.90.321742.6 × 42.6 × 32.90.17\n\nVolume and voxel dimensions for the standard protocol and the high-resolution/high-contrast imaging protocol", "The high contrast limiting resolution has been optimized by balancing the relevant MTF of the imaging components, taking noise transfer into account. Focus size, detector pixel size, and the reconstruction voxel size are matched such that the transfer contributions of each parameter are balanced, resulting in an optimal voxel size and voxel number. The nearest predefined volume is chosen such that it matches the size of the stent under test. A further reduction of the voxel size would increase the image noise, whereas an increase of the voxel size would result in a spatial resolution loss. The results on spatial limiting resolution are shown in Table 3. The true resolution of the high-resolution protocol is better than the maximum reading of the Catphan™ 500 phantom and could thus not be read by this phantom. Deviations of 10–20% between measured and simulated values were found, which can be accounted for by the choice of the threshold modulation of 4%. We optimized for the MTF and the inherent system impulse response, including the stent strut diameter of up to 0.08 mm, as well as for contrast and noise. Since the basic materials of the stents are identical for all types (see Table 1), we decided to only use one stent type, namely a Wingspan nitinol stent. Only in case of the high-resolution/high-contrast case, the imaging results of all stents are shown, including the Pharos reference stent.\nTable 3Limiting high contrast spatial resolution as measured with a Catphan® 500 phantom (The Phantom Factory) and simulated with the IQ analysis programSpatial resolutionVolume definitionMeasured (lp/mm)Simulated (lp/mm)StandardNon-zoomed reconstruction volume0.50.533% reconstruction volume1.31.617% reconstruction volume1.61.8HIRESNon-zoomed reconstruction volume1.11.250% reconstruction volume2.02.533% reconstruction volume2.1a\n3.2\naCatphan® 500 phantom range limit\n\nLimiting high contrast spatial resolution as measured with a Catphan® 500 phantom (The Phantom Factory) and simulated with the IQ analysis program\n\naCatphan® 500 phantom range limit\n[SUBTITLE] Reconstructed stent images inside a head phantom [SUBSECTION] Figures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stent's axes and centered within the volume, where window settings on the work station were such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU, where for reference, an optical image of the stent is shown also. The reconstruction scaling for the high- and standard resolution cases is found in Table 2. Figure 2 shows the images for the high contrast tube filling of 550 HU. Apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 
3, referring to a sequence of reconstructed images, displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the effect of streaking due to highly opaque platinum coils for 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance in the high-resolution/high-contrast mode, for again 31-HU contrast liquid. Figure 5 also shows example results of the more radiopaque material platinum in the Silk and Leo stents and the Co–Cr steel alloy-based Pharos stent.\nFig. 1A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nFig. 2A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nFig. 3A Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nFig. 4Streak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nFig. 5High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nA Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nStreak artifacts for Pt coils neighboring the Neuroform stent. 
a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nHigh-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\nFigures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stent's axes and centered within the volume, where window settings on the work station were such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU, where for reference, an optical image of the stent is shown also. The reconstruction scaling for the high- and standard resolution cases is found in Table 2. Figure 2 shows the images for the high contrast tube filling of 550 HU. Apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, referring to a sequence of reconstructed images, displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the effect of streaking due to highly opaque platinum coils for 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance in the high-resolution/high-contrast mode, for again 31-HU contrast liquid. Figure 5 also shows example results of the more radiopaque material platinum in the Silk and Leo stents and the Co–Cr steel alloy-based Pharos stent.\nFig. 1A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nFig. 2A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nFig. 3A Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nFig. 4Streak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nFig. 5High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\n\nA Wingspan nitinol stent. 
A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nA Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nStreak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nHigh-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\n[SUBTITLE] High contrast limiting spatial resolution [SUBSECTION] Measured and modeled limiting resolutions are summarized in Table 3. The results on measured resolution were found by reading the reconstructed images from a Catphan™ 500 phantom in a transaxial plane.\nMeasured and modeled limiting resolutions are summarized in Table 3. The results on measured resolution were found by reading the reconstructed images from a Catphan™ 500 phantom in a transaxial plane.\n[SUBTITLE] Dose CTDI [SUBSECTION] By adjusting the technique factors, a CTDI-type weighted dose is kept at the desired low value of 50 mGy for all protocols. The actual dose measurement and the analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was +/−0.5 mGy. We observed a systematic low error between measured and analytically modeled dose values.\nTable 4Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocolCTDIMeasured dose (mGy)Simulated dose (mGy)Standard4543HIRES4955\n\nMeasured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol\nBy adjusting the technique factors, a CTDI-type weighted dose is kept at the desired low value of 50 mGy for all protocols. The actual dose measurement and the analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was +/−0.5 mGy. 
We observed a systematic low error between measured and analytically modeled dose values.\nTable 4Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocolCTDIMeasured dose (mGy)Simulated dose (mGy)Standard4543HIRES4955\n\nMeasured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol", "Figures 1, 2, 3, 4, and 5 show snapshots of reconstructed images in planes through the stent's axes and centered within the volume, where window settings on the work station were such that contrast and noise were comparable. For absolute stent dimensions, see Table 1. Figure 1 shows the results of the three protocols for 31 HU, where for reference, an optical image of the stent is shown also. The reconstruction scaling for the high- and standard resolution cases is found in Table 2. Figure 2 shows the images for the high contrast tube filling of 550 HU. Apart from the object contrast, the conditions are identical to those described for Fig. 1. The effect of varying the tube voltage is shown in Fig. 3, referring to a sequence of reconstructed images, displaying the image contrast for the infuse tubes filled with 31-HU contrast liquid. Figure 4 depicts the effect of streaking due to highly opaque platinum coils for 31-HU filling. Finally, in Fig. 5, a number of stent types illustrate the performance in the high-resolution/high-contrast mode, for again 31-HU contrast liquid. Figure 5 also shows example results of the more radiopaque material platinum in the Silk and Leo stents and the Co–Cr steel alloy-based Pharos stent.\nFig. 1A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nFig. 2A Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nFig. 3A Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nFig. 4Streak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nFig. 5High-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. 
In all cases, an intra-tube contrast agent of 31 HU was used. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown. For comparison, a contrast inverted optical image is shown on the far left\n\nA Wingspan nitinol stent. A comparison between a the high-resolution/high-contrast case, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case. In all cases, an intra-tube high-density contrast agent of 550 HU was applied. The relevant reconstruction (sub) volumes are denoted as 50% and 33% for the HIRES case. For the standard resolution, 17% and 33% images are shown\nA Wingspan nitinol stent. A comparison between the high-resolution cases for high-contrast (80 kV) (a), intermediate contrast (100 kV) (b), and standard contrast (120 kV) (c). The 50% and 33% reconstruction volumes are shown. The applied intra-tube density was 31 HU\nStreak artifacts for Pt coils neighboring the Neuroform stent. a High-resolution/high-contrast, b standard resolution/high-contrast case, and c the standard resolution/standard contrast case\nHigh-resolution/high-contrast cases for 50% and 33% reconstruction volumes and various stent types. The tubes were filled with 31-HU contrast. The Enterprise stent at 33% reconstruction volume was intentionally left out due to its large length not fitting this reduced volume. The steel-based Pharos stent was included for reference", "Measured and modeled limiting resolutions are summarized in Table 3. The results on measured resolution were found by reading the reconstructed images from a Catphan™ 500 phantom in a transaxial plane.", "By adjusting the technique factors, a CTDI-type weighted dose is kept at the desired low value of 50 mGy for all protocols. The actual dose measurement and the analysis were carried out with a radiation field covering the 16-cm CTDI phantom. Table 4 shows the measured and simulated values. The measured intersystem CTDI dose variability was +/−0.5 mGy. We observed a systematic low error between measured and analytically modeled dose values.\nTable 4Measured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocolCTDIMeasured dose (mGy)Simulated dose (mGy)Standard4543HIRES4955\n\nMeasured and simulated CTDI dose for the standard protocol and the high-resolution/high-contrast protocol", "Intracranial stenting became feasible as a routine treatment by the introduction of very flexible stents and have become a single treatment for a number of neurovascular conditions (Biondi et al. [14], Kurre et al. [15]). The stents are manufactured from a very thin material and most are self expanding, i.e., made from an alloy such as nitinol. However, most nitinol stents are difficult to visualize with fluoroscopy or C-arm cone beam CT. Visualization can be improved by increasing the density of the stent, more radiation, or enhanced image acquisition settings and processing. Increasing the density will add material and therefore unfavorably alters the stent characteristics and is therefore undesirable. An increase in dose will not be accepted (Struelens et al. [16]). Consequently, the visibility is preferably improved by further optimizing the imaging chain. To support and judge correct positioning, the nitinol stent should be visualized in its wall apposition, while conforming to varying diameters throughout the stent's length. 
To this end, it is advantageous to view details with virtually optical quality.\nDuring the past 6 years, the image quality of CT-like reconstructions, using a C-arm interventional X-ray system equipped with a flat detector, became increasingly notable. High-resolution imaging, with submillimeter isotropic spatial resolution, outperformed digital radiography, fluoroscopy, and even conventional CT (Kamran et al. [17]). A continuous effort to enhance image quality enabled imaging of details with low absorption. Evidence on high-quality in vitro imaging of the proper deployment of nitinol stents, related to area coverage, kinking, prolapse, and flattening, is given by Aurboonyawat [11], Ebrahimi [18] and Alvarado [19]. The stent conformity in curved vascular models and simulated aneurysm necks could be studied in detail. In a clinical setting, Benndorf [20] showed in vivo flat panel imaging of the Neuroform nitinol stent. Imagery of balloon mounted stents, based on a Cr–Co steel alloy, with intravenous administration of contrast medium, was carried out by Buhk et al. [21]. The reconstructed images allowed an accurate assessment of the stented lumen. Recently, Patel et al. [22] demonstrated high-resolution and contrast-enhanced simultaneous imaging of intraarterial cerebrovascular stents and their host arteries.\nIn our study, a Wingspan stent was chosen as an arbitrary example object for an evaluation through reconstructed images. The image quality improvement is assessed by an evaluation of the reconstructed images. Clearly the spatial impulse response improved by introducing the HIRES protocol, as is shown in Fig. 1 for the 31-HU contrast-filled tube, where the CNR was improved as well by the increased object contrast. These two measures lead to an improved visibility of the stent's struts, when comparing the standard protocol (17% zooming) with the results of the high-resolution/high-contrast protocol (33% zooming). The open cell structure of the stent is clearly discerned by virtually continuous strokes of the struts, so that its X-ray quality is approaching an optical quality. In Fig. 2, a comparison can be made for the 550-HU contrast filling. In this case, the contrast between struts and background is lower due to the denser filling of the tube, giving an accordingly reduced CNR. The sensitivity to object contrast can be readily viewed in Fig. 3, where the comparison is made for a varying tube voltage: the obvious increase in the CNR makes the stent more pronounced with respect to the (noisy) background and accounts for the choice of the lowest voltage, 80 kV, for the ICS and HIRES protocols. In Fig. 4, streak artifacts associated with high-density platinum coils are shown. Although the window settings are optimal for stent perception, details are still rendered with a sufficient contrast to artifact distance. Finally, Fig. 5 displays reconstructions of all stent types for the high-resolution/high-contrast case, showing the detail rendering of the protocol. As the cross-sections of the struts of the investigated types vary between 60 and 80 μm, the rendered contrasts vary accordingly, since the optimized system impulse response dominates for all cases. The Silk and Leo stents stand out more by high object contrast of the platinum auxiliary wires. In spite of the minimal strut diameter, the Pharos struts are rendered with the highest contrast. The Co–Cr steel alloy accounts for an increased visibility due to its inherent higher absorption. 
The remaining stent types are comparable in their visualization quality, which is obvious, considering the similarity in dimensions and composition.\nFor all protocols under test, the maximum weighted CTDI dose of 50 mGy is lower than the European guidelines, as according to EUR 16262 [23]. A dose of 60 mGy is recommended for routine head examinations. Due to the limited anatomical coverage, by using a smaller irradiated field in the high-resolution case, the dose-length product is smaller as well. Moreover, we found that the difference between modeled and measured doses is sufficiently small.", "We have optimized nitinol stent imaging, in the framework of a full-scale 3D imaging model for X-ray imaging. By balancing the sharpness transfer and noise transfer of the imaging components and tuning the contrast-rendering capabilities of a 3D X-ray imaging system, we showed that thin nitinol stent struts can be viewed with a high CNR and good detail rendering. While optimizing, the CTDI radiation doses were kept below recommended values.\nThe quality of 3D images, produced with optimum system settings, proved that independent of the type or manufacturer of the stents, the detail rendering is adequate to assess the post-deployment shape. The stent's struts can be imaged with virtually continuous strokes." ]
[ "introduction", "materials|methods", "results", null, null, null, "discussion", "conclusion" ]
[ "X-ray imaging", "Cone beam 3D", "Intracranial stents", "Nitinol", "Aneurysms" ]
Association between single nucleotide polymorphisms within genes encoding sirtuin families and diabetic nephropathy in Japanese subjects with type 2 diabetes.
21331741
Sirtuins are members of the nicotinamide adenine dinucleotide (NAD)-dependent deacetylases and have been reported to play pivotal roles in energy expenditure, mitochondrial function, and the pathogenesis of metabolic diseases, including in the aging kidney. In this study, we focused on the genes encoding the sirtuin family and examined the association between single nucleotide polymorphisms (SNPs) within these genes and diabetic nephropathy.
BACKGROUND
We examined 52 SNPs within the SIRT genes (11 in SIRT1, 7 in SIRT2, 14 in SIRT3, 7 in SIRT4, 9 in SIRT5, and 4 in SIRT6) in 3 independent Japanese populations with type 2 diabetes (study 1: 747 cases (overt proteinuria), 557 controls; study 2: 455 cases (overt proteinuria) and 965 controls; study 3: 300 cases (end-stage renal disease) and 218 controls). The associations of these SNPs with diabetic nephropathy were analyzed by the Cochran-Armitage trend test, and the results of the 3 studies were combined in a meta-analysis. We further examined an independent cohort (195 proteinuria cases and 264 controls) for validation of the original association.
METHODS
We identified 4 SNPs in SIRT1 that were nominally associated with diabetic nephropathy (P < 0.05), and subsequent haplotype analysis revealed that a haplotype consisting of the 11 SNPs within the SIRT1 locus had a stronger association (P = 0.0028).
RESULTS
These results indicate that SIRT1 may play a role in susceptibility to diabetic nephropathy in Japanese subjects with type 2 diabetes.
CONCLUSION
[ "Asian People", "Diabetes Mellitus, Type 2", "Diabetic Nephropathies", "Gene Frequency", "Humans", "Middle Aged", "Polymorphism, Single Nucleotide", "Sirtuin 1", "Sirtuins" ]
3110272
Introduction
Diabetic nephropathy is a serious microvascular complication of diabetes and a leading cause of end-stage renal disease in Western countries [1] and in Japan [2]. The escalating prevalence and the limitations of currently available therapeutic options highlight the need for a more accurate understanding of the pathogenesis of diabetic nephropathy. Several environmental factors, such as medication, daily energy consumption, and daily sodium intake, are likely to cooperate with genetic factors to contribute to its development and progression [3, 4]; however, the precise mechanism of this contribution is unknown. Krolewski et al. [5] reported that the cumulative incidence of diabetic retinopathy increased linearly with duration of diabetes, whereas new cases of nephropathy became rare after 20–25 years of diabetes duration, and only a modest proportion of individuals with diabetes (~30%) developed diabetic nephropathy. Familial clustering of diabetic nephropathy has also been reported in both type 1 [4] and type 2 diabetes [6]; thus, the involvement of genetic factors in the development of diabetic nephropathy is strongly suggested. Both candidate gene approaches and genome-wide linkage analyses have suggested several candidate genes with a potential impact on diabetic nephropathy. These findings, however, have not been robustly replicated, and many genes responsible for susceptibility to diabetic nephropathy remain to be identified. To identify loci involved in susceptibility to common diseases, we initiated the first round of a genome-wide association study (GWAS) using 100,000 single nucleotide polymorphisms (SNPs) from a Japanese SNP database (JSNP: http://snp.ims.u-tokyo.ac.jp/index_ja.html). Through this project, we have previously identified the genes encoding solute carrier family 12 (sodium/chloride) member 3 (SLC12A3, MIM 600968, Online Mendelian Inheritance in Man: http://www.ncbi.nlm.nih.gov/omim) [7], engulfment and cell motility 1 (ELMO1, MIM 606420) [8], neurocalcin δ (NCALD, MIM 606722) [9], and acetyl-coenzyme A carboxylase beta (ACACB, MIM 601557) [10] as being associated with susceptibility to diabetic nephropathy. The association between ELMO1 or ACACB and diabetic nephropathy has been confirmed in different ethnic populations [11–13]. A GWAS for diabetic nephropathy using European American populations (the Genetics of Kidneys in Diabetes (GoKinD) collection) led to the identification of 4 distinct loci as novel candidate loci for susceptibility to diabetic nephropathy in European American subjects with type 1 diabetes [14]: the CPVL/CHN2 locus on chromosome 7, the FRMD3 locus on chromosome 9, the CARS locus on chromosome 11, and a locus near IRS2 on chromosome 13. Among these 4 loci, only one (near IRS2 on chromosome 13) could be replicated in Japanese subjects with type 2 diabetes [15]. Although these loci are considered convincing susceptibility loci for diabetic nephropathy across different ethnic groups, a considerable number of susceptibility genes for diabetic nephropathy remain to be identified. Sirtuins, the silent information regulator 2 (SIR2) family, are NAD-dependent deacetylases, and the sir2 gene was originally identified as a gene affecting the mating ability of yeast. Mammalian sirtuins comprise seven members, SIRT1–SIRT7, and some of them, especially SIRT1, have been shown to play pivotal roles in the regulation of aging and longevity and in the pathogenesis of age-related metabolic diseases such as type 2 diabetes [16–18].
Sirtuin family members are also expressed in the kidney, and SIRT1 has recently been shown to mediate the protective effect of calorie restriction (CR) on the progression of the aging kidney [19]. These observations raise the possibility that mammalian sirtuins are candidates for conferring susceptibility to diabetic nephropathy. To test this hypothesis, we focused on the genes encoding mammalian sirtuins as candidate genes for diabetic nephropathy and investigated the association between SNPs within the SIRT genes and diabetic nephropathy in Japanese subjects with type 2 diabetes.
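The association results reported in the following sections rest on standard case-control genetics statistics: SNPs deviating from Hardy-Weinberg equilibrium in controls are excluded, and allelic odds ratios with 95% confidence intervals are reported for each study and for the combined analysis. As a generic illustration only, and not the authors' actual analysis code, a minimal Python sketch of these two calculations might look as follows; all counts are hypothetical and are not taken from the study's tables:

```python
import math

def allelic_odds_ratio(case_alt: int, case_ref: int, ctrl_alt: int, ctrl_ref: int):
    """Allelic odds ratio with a 95% CI (Woolf/logit method) from allele counts."""
    or_ = (case_alt * ctrl_ref) / (case_ref * ctrl_alt)
    se_log = math.sqrt(1 / case_alt + 1 / case_ref + 1 / ctrl_alt + 1 / ctrl_ref)
    low = math.exp(math.log(or_) - 1.96 * se_log)
    high = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, low, high

def hwe_chi2(n_aa: int, n_ab: int, n_bb: int) -> float:
    """1-df chi-square statistic for deviation from Hardy-Weinberg proportions."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                      # frequency of allele A
    expected = (p * p * n, 2 * p * (1 - p) * n, (1 - p) ** 2 * n)
    observed = (n_aa, n_ab, n_bb)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical allele and genotype counts -- for illustration only.
print(allelic_odds_ratio(case_alt=520, case_ref=974, ctrl_alt=334, ctrl_ref=780))
print(hwe_chi2(n_aa=120, n_ab=480, n_bb=514))
```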
null
null
Results
Among the 55 SNPs examined, the genotype distributions of 3 SNPs (rs12576565 in SIRT3, and rs2804923 and rs2841514 in SIRT5) deviated significantly from Hardy–Weinberg equilibrium (HWE) proportions in the control groups (P < 0.01, Supplementary Table 2), and these 3 SNPs were excluded from the association study. As shown in Table 1, 8 of the 11 SNPs in SIRT1 showed a directionally consistent association with diabetic nephropathy in all 3 studies, although the individual associations were not significant (P > 0.05, Supplementary Table 2). In a combined meta-analysis, we identified a nominally significant association between rs4746720 and proteinuria, and between 4 SNPs (rs2236319, rs10823108, rs3818292, rs4746720) and the combined phenotype (proteinuria + end-stage renal disease (ESRD); P < 0.05). Subsequent haplotype analysis revealed that the 11 SNPs formed one haplotype block (Fig. 1), and 7 common haplotypes covered >99% of the present Japanese population. Among them, one haplotype (TATAGATAGTA) had a stronger association with diabetic nephropathy than any single SNP alone (P = 0.016, odds ratio (OR) 1.31, 95% confidence interval (CI) 1.05–1.62). No SNPs or haplotypes in SIRT2–6 were associated with diabetic nephropathy in the combined analysis (Tables 2, 3, 4, 5, 6), although 3 SNPs in SIRT5 (rs4712047, rs3734674, rs3757261) were associated with diabetic nephropathy in the study 2 population (Supplementary Table 2). To validate the association between SIRT1 and diabetic nephropathy, we examined another 195 cases (overt proteinuria) and 264 controls registered in the BioBank Japan (study 4). As shown in Table 7, most SNPs showed associations consistent with the original finding, and the association of this haplotype was strengthened further (P = 0.0028, OR 1.36, 95% CI 1.11–1.66). We further examined the association between SIRT1 SNPs and microalbuminuria in studies 1 and 2, but could not identify a significant association (Supplementary Table 3), suggesting that SIRT1 SNPs might contribute to the progression of nephropathy rather than to its onset in patients with type 2 diabetes.

Table 1. Association between SNPs in SIRT1 and diabetic nephropathy. Allele and haplotype frequencies are given as nephropathy cases/controls; "Proteinuria" combines studies 1 and 2, "Combined" combines proteinuria and ESRD (studies 1–3); haplotypes are allele strings across the 11 SNPs in the order listed; * marks tag SNPs.

SNP or haplotype | Study 1 | Study 2 | Proteinuria P | OR (95% CI) | Study 3 (ESRD) | Combined P | OR (95% CI)
rs12778366* T>C | 0.111/0.103 | 0.125/0.124 | 0.672 | 1.04 (0.86–1.26) | 0.101/0.119 | 0.981 | 0.998 (0.84–1.18)
rs3740051* A>G | 0.291/0.277 | 0.316/0.301 | 0.299 | 1.07 (0.94–1.22) | 0.310/0.274 | 0.138 | 1.09 (0.97–1.23)
rs2236318* T>A | 0.121/0.129 | 0.099/0.111 | 0.327 | 0.91 (0.75–1.10) | 0.106/0.119 | 0.236 | 0.90 (0.76–1.07)
rs2236319 A>G | 0.339/0.317 | 0.358/0.339 | 0.165 | 1.09 (0.96–1.24) | 0.349/0.300 | 0.048 | 1.12 (1.00–1.26)
rs10823108 G>A | 0.335/0.318 | 0.357/0.335 | 0.169 | 1.09 (0.96–1.24) | 0.351/0.302 | 0.049 | 1.12 (1.00–1.26)
rs10997868* C>A | 0.187/0.184 | 0.187/0.174 | 0.520 | 1.05 (0.90–1.23) | 0.180/0.173 | 0.482 | 1.05 (0.91–1.21)
rs2273773 T>C | 0.339/0.325 | 0.361/0.347 | 0.325 | 1.07 (0.94–1.21) | 0.353/0.306 | 0.113 | 1.10 (0.98–1.23)
rs3818292 A>G | 0.336/0.317 | 0.360/0.335 | 0.134 | 1.10 (0.97–1.25) | 0.352/0.306 | 0.042 | 1.13 (1.00–1.26)
rs3818291 G>A | 0.111/0.101 | 0.127/0.129 | 0.650 | 1.04 (0.87–1.26) | 0.101/0.124 | 0.927 | 0.99 (0.84–1.17)
rs4746720* T>C | 0.366/0.394 | 0.331/0.364 | 0.041 | 0.88 (0.77–0.99) | 0.367/0.400 | 0.021 | 0.88 (0.78–0.98)
rs10823116* A>G | 0.446/0.442 | 0.441/0.448 | 0.905 | 0.99 (0.88–1.12) | 0.459/0.394 | 0.428 | 1.05 (0.94–1.16)
Haplotype
TGTGACCGGTG | 0.294/0.279 | 0.316/0.300 | 0.250 | 1.08 (0.95–1.23) | 0.315/0.273 | 0.095 | 1.10 (0.98–1.24)
TATAGCTAGCA | 0.255/0.273 | 0.251/0.252 | 0.464 | 0.95 (0.83–1.09) | 0.253/0.304 | 0.143 | 0.91 (0.81–1.03)
CATAGCTAATA | 0.112/0.103 | 0.124/0.129 | 0.817 | 1.02 (0.85–1.23) | 0.100/0.119 | 0.841 | 0.98 (0.83–1.16)
TAAAGATAGTA | 0.123/0.128 | 0.104/0.112 | 0.484 | 0.94 (0.78–1.13) | 0.105/0.122 | 0.319 | 0.92 (0.78–1.08)
TATAGCTAGCG | 0.109/0.123 | 0.085/0.111 | 0.037 | 0.81 (0.67–0.99) | 0.113/0.099 | 0.117 | 0.87 (0.73–1.03)
TATAGATAGTA | 0.065/0.055 | 0.078/0.059 | 0.051 | 1.27 (0.998–1.61) | 0.077/0.053 | 0.016 | 1.31 (1.05–1.62)
TATGACCGGTG | 0.042/0.039 | 0.040/0.036 | 0.57 | 1.09 (0.81–1.48) | 0.036/0.028 | 0.421 | 1.12 (0.85–1.48)

Fig. 1. Position of the 11 SNPs in SIRT1, and pair-wise linkage disequilibrium coefficients (r²) among the 11 SNPs in the present Japanese population.

Table 2. Association between SNPs in SIRT2 and diabetic nephropathy: no SNP or haplotype in SIRT2 was significantly associated in the combined analysis (per-SNP results in Supplementary Table 2). Haplotype block 1: rs892034, rs2015, rs2241703, rs2082435; block 2: rs11575003, rs2053071.

Table 3. Association between SNPs in SIRT3 and diabetic nephropathy: no SNP or haplotype in SIRT3 was significantly associated in the combined analysis (per-SNP results in Supplementary Table 2). Haplotype block 1: rs11246002, rs2293168, rs3216, rs10081; block 2: rs6598074, rs4758633, rs11246007, rs3782117; block 3: rs1023430, rs536715, rs3829998.

Table 4. Association between SNPs in SIRT4 and diabetic nephropathy: no SNP or haplotype in SIRT4 was significantly associated in the combined analysis (per-SNP results in Supplementary Table 2). Haplotype block 1: rs3847968, rs12424555, rs7137625, rs2261612, rs2070873.

Table 5. Association between SNPs in SIRT5 and diabetic nephropathy: no SNP or haplotype in SIRT5 was significantly associated in the combined analysis (per-SNP results in Supplementary Table 2). Haplotype block 1: rs9382227, rs2804916; block 2: rs3734674, rs11751539, rs3757261, rs2253217, rs2841514.

Table 6. Association between SNPs in SIRT6 and diabetic nephropathy: no SNP or haplotype in SIRT6 was significantly associated in the combined analysis (per-SNP results in Supplementary Table 2). Haplotype block 1: rs7246235, rs107251, rs350844.

Table 7. Replication study for the association between SNPs in SIRT1 and diabetic nephropathy. Allele and haplotype frequencies are given as nephropathy cases/controls in study 4; "Proteinuria" combines studies 1, 2 and 4, "Proteinuria + ESRD" combines studies 1–4; * marks tag SNPs.

SNP or haplotype | Study 4 | Proteinuria P | OR (95% CI) | Proteinuria + ESRD P | OR (95% CI)
rs12778366* T>C | 0.089/0.131 | 0.676 | 0.96 (0.81–1.15) | 0.448 | 0.94 (0.80–1.10)
rs3740051* A>G | 0.311/0.291 | 0.226 | 1.08 (0.96–1.21) | 0.106 | 1.09 (0.98–1.22)
rs2236318* T>A | 0.113/0.116 | 0.350 | 0.92 (0.78–1.09) | 0.257 | 0.91 (0.78–1.07)
rs2236319 A>G | 0.360/0.344 | 0.142 | 1.09 (0.97–1.22) | 0.044 | 1.12 (1.00–1.24)
rs10823108 G>A | 0.358/0.337 | 0.127 | 1.09 (0.97–1.23) | 0.038 | 1.12 (1.01–1.24)
rs10997868* C>A | 0.181/0.175 | 0.490 | 1.05 (0.91–1.21) | 0.456 | 1.05 (0.92–1.20)
rs2273773 T>C | 0.364/0.342 | 0.239 | 1.07 (0.95–1.20) | 0.085 | 1.10 (0.99–1.22)
rs3818292 A>G | 0.358/0.344 | 0.120 | 1.10 (0.98–1.23) | 0.040 | 1.12 (1.01–1.24)
rs3818291 G>A | 0.090/0.132 | 0.696 | 0.97 (0.81–1.15) | 0.412 | 0.94 (0.80–1.10)
rs4746720* T>C | 0.371/0.361 | 0.084 | 0.90 (0.81–1.01) | 0.044 | 0.90 (0.81–0.997)
rs10823116* A>G | 0.453/0.450 | 0.939 | 0.996 (0.89–1.11) | 0.446 | 1.04 (0.94–1.15)
Haplotype
TGTGACCGGTG | 0.306/0.297 | 0.240 | 1.07 (0.95–1.21) | 0.098 | 1.09 (0.98–1.22)
TATAGCTAGCA | 0.269/0.243 | 0.809 | 0.96 (0.87–1.11) | 0.336 | 0.95 (0.85–1.06)
CATAGCTAATA | 0.105/0.129 | 0.741 | 0.97 (0.82–1.15) | 0.496 | 0.95 (0.81–1.10)
TAAAGATAGTA | 0.122/0.116 | 0.621 | 0.96 (0.81–1.13) | 0.430 | 0.94 (0.80–1.09)
TATAGCTAGCG | 0.095/0.112 | 0.022 | 0.82 (0.69–0.97) | 0.071 | 0.86 (0.74–1.01)
TATAGATAGTA | 0.072/0.059 | 0.0091 | 1.34 (1.07–1.66) | 0.0028 | 1.36 (1.11–1.66)
TATGACCGGTG | 0.031/0.044 | 0.942 | 1.01 (0.77–1.33) | 0.746 | 1.04 (0.81–1.35)
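The combined P values, ORs and confidence intervals above were obtained by pooling the per-study allele counts with the Mantel–Haenszel fixed-effects procedure described under Statistical analyses. As a purely illustrative aid, the following minimal Python sketch shows how such a pooled estimate can be computed; it is not the analysis code used for the study, and the allele-count tables, study labels and function name are hypothetical.

```python
"""Minimal sketch (not the authors' code) of a Mantel-Haenszel fixed-effects
meta-analysis over per-study 2x2 allele-count tables, as used for the
'combined' columns. All counts below are invented, for illustration only."""
import math

# One 2x2 table per study: (a, b, c, d) =
# (risk alleles in cases, risk alleles in controls,
#  other alleles in cases, other alleles in controls)
hypothetical_tables = [
    (520, 390, 988, 726),   # "study 1" -- invented numbers
    (320, 650, 578, 1280),  # "study 2" -- invented numbers
    (210, 135, 390, 313),   # "study 3" -- invented numbers
]

def mantel_haenszel(tables):
    """Return (pooled OR, 95% CI, two-sided P) for a list of 2x2 tables."""
    R = S = prs = pqs = qss = 0.0   # accumulators for the pooled OR and its variance
    obs = exp = var = 0.0           # accumulators for the MH chi-square statistic
    for a, b, c, d in tables:
        n = a + b + c + d
        r, s = a * d / n, b * c / n
        p, q = (a + d) / n, (b + c) / n
        R += r; S += s
        prs += p * r
        pqs += p * s + q * r
        qss += q * s
        obs += a
        exp += (a + b) * (a + c) / n
        var += (a + b) * (c + d) * (a + c) * (b + d) / (n * n * (n - 1))
    or_mh = R / S
    # Robins-Breslow-Greenland variance of log(pooled OR)
    se_log = math.sqrt(prs / (2 * R * R) + pqs / (2 * R * S) + qss / (2 * S * S))
    ci = (math.exp(math.log(or_mh) - 1.96 * se_log),
          math.exp(math.log(or_mh) + 1.96 * se_log))
    # Mantel-Haenszel chi-square (1 df, no continuity correction)
    chi2 = (obs - exp) ** 2 / var
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return or_mh, ci, p_value

or_mh, ci, p = mantel_haenszel(hypothetical_tables)
print(f"pooled OR {or_mh:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), P = {p:.3f}")
```

The study applied the fixed-effects model only after testing for heterogeneity; a Cochran Q or Breslow–Day test would typically serve that purpose and is omitted from this sketch.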
[ "Subjects, DNA preparation", "Study 1", "Study 2", "Study 3", "SNP genotyping", "Statistical analyses", "" ]
[ "[SUBTITLE] Study 1 [SUBSECTION] DNA samples were obtained from the peripheral blood of patients with type 2 diabetes who regularly visited the outpatient clinic at Shiga University of Medical Science, Tokyo Women’s Medical University, Juntendo University, Kawasaki Medical School, Iwate Medical University, Toride Kyodo Hospital, Kawai Clinic, Osaka City General Hospital, Chiba Tokushukai Hospital, or Osaka Rosai Hospital. Diabetes was diagnosed according to the World Health Organization criteria. Type 2 diabetes was clinically defined as a disease with gradual adult onset. Subjects who tested positive for anti-glutamic acid decarboxylase antibodies and those diagnosed with mitochondrial disease (mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS)) or maturity onset diabetes of the young were not included. The patients were divided into 2 groups: (1) the nephropathy group (n = 754, age 60.1 ± 0.4, diabetes duration 19.3 ± 0.4, body mass index (BMI) 23.7 ± 0.2, mean ± SE) comprised patients with diabetic retinopathy and overt nephropathy indicated by a urinary albumin excretion rate (AER) ≥200 μg/min or a urinary albumin/creatinine ratio (ACR) ≥300 mg/g creatinine (Cr), and (2) the control group (n = 558, age 62.4 ± 0.5, diabetes duration 15.3 ± 0.4, BMI 23.6 ± 0.2) comprised patients who had diabetic retinopathy but no evidence of renal dysfunction (i.e. AER <20 μg/min or ACR <30 mg/g Cr). The AER or ACR were measured at least twice for each patient.\nDNA samples were obtained from the peripheral blood of patients with type 2 diabetes who regularly visited the outpatient clinic at Shiga University of Medical Science, Tokyo Women’s Medical University, Juntendo University, Kawasaki Medical School, Iwate Medical University, Toride Kyodo Hospital, Kawai Clinic, Osaka City General Hospital, Chiba Tokushukai Hospital, or Osaka Rosai Hospital. Diabetes was diagnosed according to the World Health Organization criteria. Type 2 diabetes was clinically defined as a disease with gradual adult onset. Subjects who tested positive for anti-glutamic acid decarboxylase antibodies and those diagnosed with mitochondrial disease (mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS)) or maturity onset diabetes of the young were not included. The patients were divided into 2 groups: (1) the nephropathy group (n = 754, age 60.1 ± 0.4, diabetes duration 19.3 ± 0.4, body mass index (BMI) 23.7 ± 0.2, mean ± SE) comprised patients with diabetic retinopathy and overt nephropathy indicated by a urinary albumin excretion rate (AER) ≥200 μg/min or a urinary albumin/creatinine ratio (ACR) ≥300 mg/g creatinine (Cr), and (2) the control group (n = 558, age 62.4 ± 0.5, diabetes duration 15.3 ± 0.4, BMI 23.6 ± 0.2) comprised patients who had diabetic retinopathy but no evidence of renal dysfunction (i.e. AER <20 μg/min or ACR <30 mg/g Cr). The AER or ACR were measured at least twice for each patient.\n[SUBTITLE] Study 2 [SUBSECTION] We selected diabetic nephropathy patients and control patients among the subjects enrolled in the BioBank Japan. Nephropathy cases were defined as patients with type 2 diabetes having both overt diabetic nephropathy and diabetic retinopathy (n = 449, age 64.7 ± 0.4, BMI 23.5 ± 0.2). 
The control subjects were patients with type 2 diabetes who had diabetic retinopathy and normoalbuminuria (n = 965, age 64.8 ± 0.3, BMI 23.8 ± 0.1).\nWe selected diabetic nephropathy patients and control patients among the subjects enrolled in the BioBank Japan. Nephropathy cases were defined as patients with type 2 diabetes having both overt diabetic nephropathy and diabetic retinopathy (n = 449, age 64.7 ± 0.4, BMI 23.5 ± 0.2). The control subjects were patients with type 2 diabetes who had diabetic retinopathy and normoalbuminuria (n = 965, age 64.8 ± 0.3, BMI 23.8 ± 0.1).\n[SUBTITLE] Study 3 [SUBSECTION] Patients with type 2 diabetes who regularly visited Tokai University Hospital or its affiliated hospitals were enrolled in this study. All the nephropathy patients (n = 300, age 64.4 ± 0.6, diabetes duration 21.9 ± 0.9, BMI 22.1 ± 0.2, mean ± SE) were receiving chronic hemodialysis therapy, and the control patients (n = 224, age 65.0 ± 0.7, diabetes duration 16.3 ± 0.4, BMI 23.4 ± 0.3, mean ± SE) included those with normoalbuminuria as determined by at least 2 measurements of urinary ACR and with diabetes for >10 years.\nAll the patients participating in this study provided written informed consent, and the study protocol was approved by the ethics committees of RIKEN Yokohama Institute and of each participating institution.\nThe clinical characteristics of the subjects are shown in Supplementary Table 1.\nPatients with type 2 diabetes who regularly visited Tokai University Hospital or its affiliated hospitals were enrolled in this study. All the nephropathy patients (n = 300, age 64.4 ± 0.6, diabetes duration 21.9 ± 0.9, BMI 22.1 ± 0.2, mean ± SE) were receiving chronic hemodialysis therapy, and the control patients (n = 224, age 65.0 ± 0.7, diabetes duration 16.3 ± 0.4, BMI 23.4 ± 0.3, mean ± SE) included those with normoalbuminuria as determined by at least 2 measurements of urinary ACR and with diabetes for >10 years.\nAll the patients participating in this study provided written informed consent, and the study protocol was approved by the ethics committees of RIKEN Yokohama Institute and of each participating institution.\nThe clinical characteristics of the subjects are shown in Supplementary Table 1.", "DNA samples were obtained from the peripheral blood of patients with type 2 diabetes who regularly visited the outpatient clinic at Shiga University of Medical Science, Tokyo Women’s Medical University, Juntendo University, Kawasaki Medical School, Iwate Medical University, Toride Kyodo Hospital, Kawai Clinic, Osaka City General Hospital, Chiba Tokushukai Hospital, or Osaka Rosai Hospital. Diabetes was diagnosed according to the World Health Organization criteria. Type 2 diabetes was clinically defined as a disease with gradual adult onset. Subjects who tested positive for anti-glutamic acid decarboxylase antibodies and those diagnosed with mitochondrial disease (mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS)) or maturity onset diabetes of the young were not included. 
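The case−control assignment described above reduces to simple albuminuria thresholds. The short Python sketch below illustrates how a single measurement could be mapped to the overt-nephropathy, normoalbuminuria or intermediate (microalbuminuria) category; it only restates the cut-offs given in the text, the function and argument names are invented, and the actual assignment relied on repeated AER/ACR measurements rather than one value.

```python
# Illustrative sketch of the albuminuria-based grouping used for studies 1 and 2.
# Thresholds follow the text: overt nephropathy if AER >= 200 ug/min or
# ACR >= 300 mg/g Cr; normoalbuminuria (control) if AER < 20 ug/min or
# ACR < 30 mg/g Cr; in this sketch, anything in between is labelled microalbuminuria.
from typing import Optional

def albuminuria_group(aer_ug_min: Optional[float] = None,
                      acr_mg_g: Optional[float] = None) -> str:
    """Classify one albuminuria measurement; names are hypothetical."""
    if aer_ug_min is not None:
        if aer_ug_min >= 200:
            return "overt nephropathy (case)"
        if aer_ug_min < 20:
            return "normoalbuminuria (control)"
        return "microalbuminuria"
    if acr_mg_g is not None:
        if acr_mg_g >= 300:
            return "overt nephropathy (case)"
        if acr_mg_g < 30:
            return "normoalbuminuria (control)"
        return "microalbuminuria"
    raise ValueError("need either an AER or an ACR value")

print(albuminuria_group(acr_mg_g=350))   # -> overt nephropathy (case)
print(albuminuria_group(aer_ug_min=12))  # -> normoalbuminuria (control)
```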
SNP genotyping

We searched the HapMap database (http://hapmap.ncbi.nlm.nih.gov/) for SNPs within the genes encoding the sirtuin family, and selected 55 SNPs (39 tagging SNPs) for genotyping: 11 in SIRT1 (rs12778366, rs3740051, rs2236318, rs2236319, rs10823108, rs10997868, rs2273773, rs3818292, rs3818291, rs4746720, rs10823116), 7 in SIRT2 (rs1001413, rs892034, rs2015, rs2241703, rs2082435, rs11575003, rs2053071), 15 in SIRT3 (rs11246002, rs2293168, rs3216, rs10081, rs511744, rs6598074, rs4758633, rs11246007, rs3782117, rs3782116, rs3782115, rs1023430, rs12576565, rs536715, rs3829998), 7 in SIRT4 (rs6490288, rs7298516, rs3847968, rs12424555, rs7137625, rs2261612, rs2070873), 11 in SIRT5 (rs2804923, rs9382227, rs2804916, rs2804918, rs9370232, rs4712047, rs3734674, rs11751539, rs3757261, rs2253217, rs2841514), and 4 in SIRT6 (rs350852, rs7246235, rs107251, rs350844). We could not identify any confirmed SNPs within SIRT7 in the Japanese population. The genotyping of these SNPs was performed by using multiplex polymerase chain reaction (PCR)-invader assays, as described previously [7–10].

Statistical analyses

We tested the genotype distributions for Hardy–Weinberg equilibrium (HWE) proportions by using the chi-squared test. We analyzed the differences between the case−control groups in terms of the distribution of genotypes with the Cochran–Armitage trend test. The analyses of haplotype structures within each gene were performed using Haploview software version 4.1 [20]. A combined meta-analysis was performed using the Mantel–Haenszel procedure with a fixed-effects model after testing for heterogeneity.

Electronic supplementary material

Supplementary table 1 (DOC 47 kb)
Supplementary table 2 (XLS 74 kb)
Supplementary table 3 (XLS 37 kb)
Supplementary table 4 (XLS 23 kb)
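To make the first two steps of the statistical analyses concrete, the sketch below implements a Hardy–Weinberg check (1-df chi-squared test against expected genotype proportions) and an additive Cochran–Armitage trend test for one SNP, using only the Python standard library and invented genotype counts. It is not the analysis code used for the study; the haplotype analysis (Haploview) is not reproduced, and a Mantel–Haenszel sketch for the combined analysis appears earlier, after Table 7 in the Results.

```python
"""Minimal sketches (invented counts, not study data) of the HWE check and the
Cochran-Armitage trend test described in this section."""
import math

def chi2_sf_1df(x: float) -> float:
    """Survival function of a chi-squared variable with 1 degree of freedom."""
    return math.erfc(math.sqrt(x / 2))

def hwe_chi2(n_aa: int, n_ab: int, n_bb: int) -> float:
    """P value of a chi-squared test against Hardy-Weinberg proportions."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # frequency of allele A
    q = 1 - p
    expected = (n * p * p, 2 * n * p * q, n * q * q)
    chi2 = sum((o - e) ** 2 / e for o, e in zip((n_aa, n_ab, n_bb), expected))
    return chi2_sf_1df(chi2)

def cochran_armitage(cases, controls) -> float:
    """Cochran-Armitage trend test P value with additive weights (0, 1, 2).
    `cases` and `controls` are genotype counts ordered as (AA, AB, BB)."""
    w = (0, 1, 2)
    R, S = sum(cases), sum(controls)
    N = R + S
    n = [c + d for c, d in zip(cases, controls)]        # per-genotype totals
    T = sum(wi * (S * ci - R * di) for wi, ci, di in zip(w, cases, controls))
    var = (R * S / N) * (
        sum(wi ** 2 * ni * (N - ni) for wi, ni in zip(w, n))
        - 2 * sum(w[i] * w[j] * n[i] * n[j]
                  for i in range(3) for j in range(i + 1, 3))
    )
    return chi2_sf_1df(T * T / var)

# Hypothetical genotype counts (AA, AB, BB) for one SNP:
cases, controls = (610, 520, 124), (500, 380, 78)
print(f"HWE P (controls) = {hwe_chi2(*controls):.3f}")
print(f"Trend test P     = {cochran_armitage(cases, controls):.4f}")
```

Because the tables report allele-based ORs, an allelic 2x2 chi-squared test would give similar information; the trend test is sketched here because it is the test named in the text.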
[ "Introduction", "Materials and methods", "Subjects, DNA preparation", "Study 1", "Study 2", "Study 3", "SNP genotyping", "Statistical analyses", "Results", "Discussion", "Electronic supplementary material", "" ]
[ "Diabetic nephropathy is a serious microvascular complication of diabetes, and is a leading cause of end-stage renal disease in Western countries [1] and in Japan [2]. The escalating prevalence and limitation of currently available therapeutic options highlight the need for a more accurate understanding of the pathogenesis of diabetic nephropathy. Several environmental factors, such as medication, daily energy consumptions, and daily sodium intake, are likely to cooperate with genetic factors to contribute to its development and progression [3, 4]; however, the precise mechanism for this contribution is unknown. Krolewski et al. [5] reported that the cumulative incidence of diabetic retinopathy increased linearly with duration of diabetes, whereas the occurrence of nephropathy was almost none after 20–25 years of diabetes duration, and only a modest number of individuals with diabetes (~30%) developed diabetic nephropathy. Familial clustering of diabetic nephropathy was also reported in both type 1 [4] and type 2 diabetes [6]; thus, the involvement of genetic factors in the development of diabetic nephropathy is strongly suggested. Both candidate gene approaches and genome-wide linkage analyses have suggested several candidate genes with a potential impact on diabetic nephropathy. These findings, however, have not been robustly replicated and many genes responsible for susceptibility to diabetic nephropathy remain to be identified. To identify loci involved in susceptibility to common diseases, we initiated the first round of a genome-wide association study (GWAS) using 100,000 single nucleotide polymorphisms (SNPs) from a Japanese SNP database (JSNP: http://snp.ims.u-tokyo.ac.jp/index_ja.html). Through this project, we have previously identified genes encoding solute carrier family 12 (sodium/chloride) member 3 (SLC12A3, MIM 600968, Online Mendelian Inheritance in Man: http://www.ncbi.nlm.nih.gov/omim) [7]; engulfment and cell motility 1 (ELMO1, MIM 606420) [8]; neurocalcin δ (NCALD, MIM 606722) [9]; and acetyl-coenzyme A carboxylase beta gene (ACACB, MIM: 601557) [10] as being associated with susceptibility to diabetic nephropathy. The association between ELMO1 or ACACB and diabetic nephropathy has been confirmed in different ethnic populations [11–13]. The GWAS for diabetic nephropathy using European American populations (the Genetics of Kidneys in Diabetes (GoKinD) collection) led to the identification of 4 distinct loci as novel candidate loci for susceptibility to diabetic nephropathy in European American subjects with type 1 diabetes [14]: the CPVL/CHN2 locus on chromosome 7, the FRMD3 locus on chromosome 9, the CARS locus on chromosome 11, and a locus near IRS2 on chromosome 13. Among those 4 loci, only one locus (near IRS2 in chromosome 13) could be replicated in Japanese subjects with type 2 diabetes [15]. Although these loci are considered as convincing susceptibility loci for diabetic nephropathy across different ethnic groups, a considerable number of susceptibility genes for diabetic nephropathy still remain to be identified.\nSirtuins, the silent information regulator-2 (SIR2) family, is a member of NAD-dependent deacetylases, and the sir2 gene was originally identified as a gene affecting the malting ability of yeast. 
Mammalian sirtuins consist of seven members, SIRT1–SIRT7, and some of them, especially SIRT1, have been shown to play pivotal roles in the regulation of aging, longevity, or in the pathogenesis of age-related metabolic diseases, such as type 2 diabetes [16–18]. The expressions of sirtuin families have also been observed in the kidneys, and recently SIRT1 has been shown to mediate a protective role of calorie restriction (CR) in the progression of the aging kidney [19]. These observations suggest the possibility that mammalian sirtuins are a candidate for conferring susceptibility to diabetic nephropathy.\nIn order to test this hypothesis, we focused on genes encoding mammalian sirtuins as candidate genes for diabetic nephropathy and investigated the association between SNPs within the SIRT genes and diabetic nephropathy in Japanese subjects with type 2 diabetes.", "[SUBTITLE] Subjects, DNA preparation [SUBSECTION] [SUBTITLE] Study 1 [SUBSECTION] DNA samples were obtained from the peripheral blood of patients with type 2 diabetes who regularly visited the outpatient clinic at Shiga University of Medical Science, Tokyo Women’s Medical University, Juntendo University, Kawasaki Medical School, Iwate Medical University, Toride Kyodo Hospital, Kawai Clinic, Osaka City General Hospital, Chiba Tokushukai Hospital, or Osaka Rosai Hospital. Diabetes was diagnosed according to the World Health Organization criteria. Type 2 diabetes was clinically defined as a disease with gradual adult onset. Subjects who tested positive for anti-glutamic acid decarboxylase antibodies and those diagnosed with mitochondrial disease (mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS)) or maturity onset diabetes of the young were not included. The patients were divided into 2 groups: (1) the nephropathy group (n = 754, age 60.1 ± 0.4, diabetes duration 19.3 ± 0.4, body mass index (BMI) 23.7 ± 0.2, mean ± SE) comprised patients with diabetic retinopathy and overt nephropathy indicated by a urinary albumin excretion rate (AER) ≥200 μg/min or a urinary albumin/creatinine ratio (ACR) ≥300 mg/g creatinine (Cr), and (2) the control group (n = 558, age 62.4 ± 0.5, diabetes duration 15.3 ± 0.4, BMI 23.6 ± 0.2) comprised patients who had diabetic retinopathy but no evidence of renal dysfunction (i.e. AER <20 μg/min or ACR <30 mg/g Cr). The AER or ACR were measured at least twice for each patient.\nDNA samples were obtained from the peripheral blood of patients with type 2 diabetes who regularly visited the outpatient clinic at Shiga University of Medical Science, Tokyo Women’s Medical University, Juntendo University, Kawasaki Medical School, Iwate Medical University, Toride Kyodo Hospital, Kawai Clinic, Osaka City General Hospital, Chiba Tokushukai Hospital, or Osaka Rosai Hospital. Diabetes was diagnosed according to the World Health Organization criteria. Type 2 diabetes was clinically defined as a disease with gradual adult onset. Subjects who tested positive for anti-glutamic acid decarboxylase antibodies and those diagnosed with mitochondrial disease (mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS)) or maturity onset diabetes of the young were not included. 
The patients were divided into 2 groups: (1) the nephropathy group (n = 754, age 60.1 ± 0.4, diabetes duration 19.3 ± 0.4, body mass index (BMI) 23.7 ± 0.2, mean ± SE) comprised patients with diabetic retinopathy and overt nephropathy indicated by a urinary albumin excretion rate (AER) ≥200 μg/min or a urinary albumin/creatinine ratio (ACR) ≥300 mg/g creatinine (Cr), and (2) the control group (n = 558, age 62.4 ± 0.5, diabetes duration 15.3 ± 0.4, BMI 23.6 ± 0.2) comprised patients who had diabetic retinopathy but no evidence of renal dysfunction (i.e. AER <20 μg/min or ACR <30 mg/g Cr). The AER or ACR were measured at least twice for each patient.\n[SUBTITLE] Study 2 [SUBSECTION] We selected diabetic nephropathy patients and control patients among the subjects enrolled in the BioBank Japan. Nephropathy cases were defined as patients with type 2 diabetes having both overt diabetic nephropathy and diabetic retinopathy (n = 449, age 64.7 ± 0.4, BMI 23.5 ± 0.2). The control subjects were patients with type 2 diabetes who had diabetic retinopathy and normoalbuminuria (n = 965, age 64.8 ± 0.3, BMI 23.8 ± 0.1).\nWe selected diabetic nephropathy patients and control patients among the subjects enrolled in the BioBank Japan. Nephropathy cases were defined as patients with type 2 diabetes having both overt diabetic nephropathy and diabetic retinopathy (n = 449, age 64.7 ± 0.4, BMI 23.5 ± 0.2). The control subjects were patients with type 2 diabetes who had diabetic retinopathy and normoalbuminuria (n = 965, age 64.8 ± 0.3, BMI 23.8 ± 0.1).\n[SUBTITLE] Study 3 [SUBSECTION] Patients with type 2 diabetes who regularly visited Tokai University Hospital or its affiliated hospitals were enrolled in this study. All the nephropathy patients (n = 300, age 64.4 ± 0.6, diabetes duration 21.9 ± 0.9, BMI 22.1 ± 0.2, mean ± SE) were receiving chronic hemodialysis therapy, and the control patients (n = 224, age 65.0 ± 0.7, diabetes duration 16.3 ± 0.4, BMI 23.4 ± 0.3, mean ± SE) included those with normoalbuminuria as determined by at least 2 measurements of urinary ACR and with diabetes for >10 years.\nAll the patients participating in this study provided written informed consent, and the study protocol was approved by the ethics committees of RIKEN Yokohama Institute and of each participating institution.\nThe clinical characteristics of the subjects are shown in Supplementary Table 1.\nPatients with type 2 diabetes who regularly visited Tokai University Hospital or its affiliated hospitals were enrolled in this study. 
All the nephropathy patients (n = 300, age 64.4 ± 0.6, diabetes duration 21.9 ± 0.9, BMI 22.1 ± 0.2, mean ± SE) were receiving chronic hemodialysis therapy, and the control patients (n = 224, age 65.0 ± 0.7, diabetes duration 16.3 ± 0.4, BMI 23.4 ± 0.3, mean ± SE) included those with normoalbuminuria as determined by at least 2 measurements of urinary ACR and with diabetes for >10 years.\nAll the patients participating in this study provided written informed consent, and the study protocol was approved by the ethics committees of RIKEN Yokohama Institute and of each participating institution.\nThe clinical characteristics of the subjects are shown in Supplementary Table 1.\n[SUBTITLE] Study 1 [SUBSECTION] DNA samples were obtained from the peripheral blood of patients with type 2 diabetes who regularly visited the outpatient clinic at Shiga University of Medical Science, Tokyo Women’s Medical University, Juntendo University, Kawasaki Medical School, Iwate Medical University, Toride Kyodo Hospital, Kawai Clinic, Osaka City General Hospital, Chiba Tokushukai Hospital, or Osaka Rosai Hospital. Diabetes was diagnosed according to the World Health Organization criteria. Type 2 diabetes was clinically defined as a disease with gradual adult onset. Subjects who tested positive for anti-glutamic acid decarboxylase antibodies and those diagnosed with mitochondrial disease (mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS)) or maturity onset diabetes of the young were not included. The patients were divided into 2 groups: (1) the nephropathy group (n = 754, age 60.1 ± 0.4, diabetes duration 19.3 ± 0.4, body mass index (BMI) 23.7 ± 0.2, mean ± SE) comprised patients with diabetic retinopathy and overt nephropathy indicated by a urinary albumin excretion rate (AER) ≥200 μg/min or a urinary albumin/creatinine ratio (ACR) ≥300 mg/g creatinine (Cr), and (2) the control group (n = 558, age 62.4 ± 0.5, diabetes duration 15.3 ± 0.4, BMI 23.6 ± 0.2) comprised patients who had diabetic retinopathy but no evidence of renal dysfunction (i.e. AER <20 μg/min or ACR <30 mg/g Cr). The AER or ACR were measured at least twice for each patient.\nDNA samples were obtained from the peripheral blood of patients with type 2 diabetes who regularly visited the outpatient clinic at Shiga University of Medical Science, Tokyo Women’s Medical University, Juntendo University, Kawasaki Medical School, Iwate Medical University, Toride Kyodo Hospital, Kawai Clinic, Osaka City General Hospital, Chiba Tokushukai Hospital, or Osaka Rosai Hospital. Diabetes was diagnosed according to the World Health Organization criteria. Type 2 diabetes was clinically defined as a disease with gradual adult onset. Subjects who tested positive for anti-glutamic acid decarboxylase antibodies and those diagnosed with mitochondrial disease (mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS)) or maturity onset diabetes of the young were not included. 
The patients were divided into 2 groups: (1) the nephropathy group (n = 754, age 60.1 ± 0.4, diabetes duration 19.3 ± 0.4, body mass index (BMI) 23.7 ± 0.2, mean ± SE) comprised patients with diabetic retinopathy and overt nephropathy indicated by a urinary albumin excretion rate (AER) ≥200 μg/min or a urinary albumin/creatinine ratio (ACR) ≥300 mg/g creatinine (Cr), and (2) the control group (n = 558, age 62.4 ± 0.5, diabetes duration 15.3 ± 0.4, BMI 23.6 ± 0.2) comprised patients who had diabetic retinopathy but no evidence of renal dysfunction (i.e. AER <20 μg/min or ACR <30 mg/g Cr). The AER or ACR were measured at least twice for each patient.\n[SUBTITLE] Study 2 [SUBSECTION] We selected diabetic nephropathy patients and control patients among the subjects enrolled in the BioBank Japan. Nephropathy cases were defined as patients with type 2 diabetes having both overt diabetic nephropathy and diabetic retinopathy (n = 449, age 64.7 ± 0.4, BMI 23.5 ± 0.2). The control subjects were patients with type 2 diabetes who had diabetic retinopathy and normoalbuminuria (n = 965, age 64.8 ± 0.3, BMI 23.8 ± 0.1).\nWe selected diabetic nephropathy patients and control patients among the subjects enrolled in the BioBank Japan. Nephropathy cases were defined as patients with type 2 diabetes having both overt diabetic nephropathy and diabetic retinopathy (n = 449, age 64.7 ± 0.4, BMI 23.5 ± 0.2). The control subjects were patients with type 2 diabetes who had diabetic retinopathy and normoalbuminuria (n = 965, age 64.8 ± 0.3, BMI 23.8 ± 0.1).\n[SUBTITLE] Study 3 [SUBSECTION] Patients with type 2 diabetes who regularly visited Tokai University Hospital or its affiliated hospitals were enrolled in this study. All the nephropathy patients (n = 300, age 64.4 ± 0.6, diabetes duration 21.9 ± 0.9, BMI 22.1 ± 0.2, mean ± SE) were receiving chronic hemodialysis therapy, and the control patients (n = 224, age 65.0 ± 0.7, diabetes duration 16.3 ± 0.4, BMI 23.4 ± 0.3, mean ± SE) included those with normoalbuminuria as determined by at least 2 measurements of urinary ACR and with diabetes for >10 years.\nAll the patients participating in this study provided written informed consent, and the study protocol was approved by the ethics committees of RIKEN Yokohama Institute and of each participating institution.\nThe clinical characteristics of the subjects are shown in Supplementary Table 1.\nPatients with type 2 diabetes who regularly visited Tokai University Hospital or its affiliated hospitals were enrolled in this study. 
All the nephropathy patients (n = 300, age 64.4 ± 0.6, diabetes duration 21.9 ± 0.9, BMI 22.1 ± 0.2, mean ± SE) were receiving chronic hemodialysis therapy, and the control patients (n = 224, age 65.0 ± 0.7, diabetes duration 16.3 ± 0.4, BMI 23.4 ± 0.3, mean ± SE) included those with normoalbuminuria as determined by at least 2 measurements of urinary ACR and with diabetes for >10 years.\nAll the patients participating in this study provided written informed consent, and the study protocol was approved by the ethics committees of RIKEN Yokohama Institute and of each participating institution.\nThe clinical characteristics of the subjects are shown in Supplementary Table 1.\n[SUBTITLE] SNP genotyping [SUBSECTION] We searched the HapMap database (http://hapmap.ncbi.nlm.nih.gov/) for SNPs within the genes encoding sirtuin families, and selected 55 SNPs (39 tagging SNPs) for genotyping; 11 in SIRT1 (rs12778366, rs3740051, rs2236318, rs2236319, rs10823108, rs10997868, rs2273773, rs3818292, rs3818291, rs4746720, rs10823116), 7 in SIRT2 (rs1001413, rs892034, rs2015, rs2241703, rs2082435, rs11575003, rs2053071), 15 in SIRT3 (rs11246002, rs2293168, rs3216, rs10081, rs511744, rs6598074, rs4758633, rs11246007, rs3782117, rs3782116, rs3782115, rs1023430, rs12576565, rs536715, rs3829998), 7 in SIRT4 (rs6490288, rs7298516, rs3847968, rs12424555, rs7137625, rs2261612, rs2070873), 11 in SIRT5 (rs2804923, rs9382227, rs2804916, rs2804918, rs9370232, rs4712047, rs3734674, rs11751539, rs3757261, rs2253217, rs2841514), and 4 in SIRT6 (rs350852, rs7246235, rs107251, rs350844). We could not identify any confirmed SNPs within SIRT7 in the Japanese population. The genotyping of these SNPs was performed by using multiplex polymerase chain reaction (PCR)-invader assays, as described previously [7–10].\nWe searched the HapMap database (http://hapmap.ncbi.nlm.nih.gov/) for SNPs within the genes encoding sirtuin families, and selected 55 SNPs (39 tagging SNPs) for genotyping; 11 in SIRT1 (rs12778366, rs3740051, rs2236318, rs2236319, rs10823108, rs10997868, rs2273773, rs3818292, rs3818291, rs4746720, rs10823116), 7 in SIRT2 (rs1001413, rs892034, rs2015, rs2241703, rs2082435, rs11575003, rs2053071), 15 in SIRT3 (rs11246002, rs2293168, rs3216, rs10081, rs511744, rs6598074, rs4758633, rs11246007, rs3782117, rs3782116, rs3782115, rs1023430, rs12576565, rs536715, rs3829998), 7 in SIRT4 (rs6490288, rs7298516, rs3847968, rs12424555, rs7137625, rs2261612, rs2070873), 11 in SIRT5 (rs2804923, rs9382227, rs2804916, rs2804918, rs9370232, rs4712047, rs3734674, rs11751539, rs3757261, rs2253217, rs2841514), and 4 in SIRT6 (rs350852, rs7246235, rs107251, rs350844). We could not identify any confirmed SNPs within SIRT7 in the Japanese population. The genotyping of these SNPs was performed by using multiplex polymerase chain reaction (PCR)-invader assays, as described previously [7–10].\n[SUBTITLE] Statistical analyses [SUBSECTION] We tested the genotype distributions for Hardy–Weinberg equilibrium (HWE) proportions by using the chi-squared test. We analyzed the differences between the case−control groups in terms of the distribution of genotypes with the Cochran–Armitage trend test. The analyses for haplotype structures within each gene were performed using Haploview software version 4.1 [20]. 
A combined meta-analysis was performed using the Mantel–Haenszel procedure with a fixed effects model after testing for heterogeneity.\nWe tested the genotype distributions for Hardy–Weinberg equilibrium (HWE) proportions by using the chi-squared test. We analyzed the differences between the case−control groups in terms of the distribution of genotypes with the Cochran–Armitage trend test. The analyses for haplotype structures within each gene were performed using Haploview software version 4.1 [20]. A combined meta-analysis was performed using the Mantel–Haenszel procedure with a fixed effects model after testing for heterogeneity.", "[SUBTITLE] Study 1 [SUBSECTION] DNA samples were obtained from the peripheral blood of patients with type 2 diabetes who regularly visited the outpatient clinic at Shiga University of Medical Science, Tokyo Women’s Medical University, Juntendo University, Kawasaki Medical School, Iwate Medical University, Toride Kyodo Hospital, Kawai Clinic, Osaka City General Hospital, Chiba Tokushukai Hospital, or Osaka Rosai Hospital. Diabetes was diagnosed according to the World Health Organization criteria. Type 2 diabetes was clinically defined as a disease with gradual adult onset. Subjects who tested positive for anti-glutamic acid decarboxylase antibodies and those diagnosed with mitochondrial disease (mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS)) or maturity onset diabetes of the young were not included. The patients were divided into 2 groups: (1) the nephropathy group (n = 754, age 60.1 ± 0.4, diabetes duration 19.3 ± 0.4, body mass index (BMI) 23.7 ± 0.2, mean ± SE) comprised patients with diabetic retinopathy and overt nephropathy indicated by a urinary albumin excretion rate (AER) ≥200 μg/min or a urinary albumin/creatinine ratio (ACR) ≥300 mg/g creatinine (Cr), and (2) the control group (n = 558, age 62.4 ± 0.5, diabetes duration 15.3 ± 0.4, BMI 23.6 ± 0.2) comprised patients who had diabetic retinopathy but no evidence of renal dysfunction (i.e. AER <20 μg/min or ACR <30 mg/g Cr). The AER or ACR were measured at least twice for each patient.\nDNA samples were obtained from the peripheral blood of patients with type 2 diabetes who regularly visited the outpatient clinic at Shiga University of Medical Science, Tokyo Women’s Medical University, Juntendo University, Kawasaki Medical School, Iwate Medical University, Toride Kyodo Hospital, Kawai Clinic, Osaka City General Hospital, Chiba Tokushukai Hospital, or Osaka Rosai Hospital. Diabetes was diagnosed according to the World Health Organization criteria. Type 2 diabetes was clinically defined as a disease with gradual adult onset. Subjects who tested positive for anti-glutamic acid decarboxylase antibodies and those diagnosed with mitochondrial disease (mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS)) or maturity onset diabetes of the young were not included. The patients were divided into 2 groups: (1) the nephropathy group (n = 754, age 60.1 ± 0.4, diabetes duration 19.3 ± 0.4, body mass index (BMI) 23.7 ± 0.2, mean ± SE) comprised patients with diabetic retinopathy and overt nephropathy indicated by a urinary albumin excretion rate (AER) ≥200 μg/min or a urinary albumin/creatinine ratio (ACR) ≥300 mg/g creatinine (Cr), and (2) the control group (n = 558, age 62.4 ± 0.5, diabetes duration 15.3 ± 0.4, BMI 23.6 ± 0.2) comprised patients who had diabetic retinopathy but no evidence of renal dysfunction (i.e. 
AER <20 μg/min or ACR <30 mg/g Cr). The AER or ACR were measured at least twice for each patient.\n[SUBTITLE] Study 2 [SUBSECTION] We selected diabetic nephropathy patients and control patients among the subjects enrolled in the BioBank Japan. Nephropathy cases were defined as patients with type 2 diabetes having both overt diabetic nephropathy and diabetic retinopathy (n = 449, age 64.7 ± 0.4, BMI 23.5 ± 0.2). The control subjects were patients with type 2 diabetes who had diabetic retinopathy and normoalbuminuria (n = 965, age 64.8 ± 0.3, BMI 23.8 ± 0.1).\nWe selected diabetic nephropathy patients and control patients among the subjects enrolled in the BioBank Japan. Nephropathy cases were defined as patients with type 2 diabetes having both overt diabetic nephropathy and diabetic retinopathy (n = 449, age 64.7 ± 0.4, BMI 23.5 ± 0.2). The control subjects were patients with type 2 diabetes who had diabetic retinopathy and normoalbuminuria (n = 965, age 64.8 ± 0.3, BMI 23.8 ± 0.1).\n[SUBTITLE] Study 3 [SUBSECTION] Patients with type 2 diabetes who regularly visited Tokai University Hospital or its affiliated hospitals were enrolled in this study. All the nephropathy patients (n = 300, age 64.4 ± 0.6, diabetes duration 21.9 ± 0.9, BMI 22.1 ± 0.2, mean ± SE) were receiving chronic hemodialysis therapy, and the control patients (n = 224, age 65.0 ± 0.7, diabetes duration 16.3 ± 0.4, BMI 23.4 ± 0.3, mean ± SE) included those with normoalbuminuria as determined by at least 2 measurements of urinary ACR and with diabetes for >10 years.\nAll the patients participating in this study provided written informed consent, and the study protocol was approved by the ethics committees of RIKEN Yokohama Institute and of each participating institution.\nThe clinical characteristics of the subjects are shown in Supplementary Table 1.\nPatients with type 2 diabetes who regularly visited Tokai University Hospital or its affiliated hospitals were enrolled in this study. All the nephropathy patients (n = 300, age 64.4 ± 0.6, diabetes duration 21.9 ± 0.9, BMI 22.1 ± 0.2, mean ± SE) were receiving chronic hemodialysis therapy, and the control patients (n = 224, age 65.0 ± 0.7, diabetes duration 16.3 ± 0.4, BMI 23.4 ± 0.3, mean ± SE) included those with normoalbuminuria as determined by at least 2 measurements of urinary ACR and with diabetes for >10 years.\nAll the patients participating in this study provided written informed consent, and the study protocol was approved by the ethics committees of RIKEN Yokohama Institute and of each participating institution.\nThe clinical characteristics of the subjects are shown in Supplementary Table 1.", "DNA samples were obtained from the peripheral blood of patients with type 2 diabetes who regularly visited the outpatient clinic at Shiga University of Medical Science, Tokyo Women’s Medical University, Juntendo University, Kawasaki Medical School, Iwate Medical University, Toride Kyodo Hospital, Kawai Clinic, Osaka City General Hospital, Chiba Tokushukai Hospital, or Osaka Rosai Hospital. Diabetes was diagnosed according to the World Health Organization criteria. Type 2 diabetes was clinically defined as a disease with gradual adult onset. Subjects who tested positive for anti-glutamic acid decarboxylase antibodies and those diagnosed with mitochondrial disease (mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS)) or maturity onset diabetes of the young were not included. 
The patients were divided into 2 groups: (1) the nephropathy group (n = 754, age 60.1 ± 0.4, diabetes duration 19.3 ± 0.4, body mass index (BMI) 23.7 ± 0.2, mean ± SE) comprised patients with diabetic retinopathy and overt nephropathy indicated by a urinary albumin excretion rate (AER) ≥200 μg/min or a urinary albumin/creatinine ratio (ACR) ≥300 mg/g creatinine (Cr), and (2) the control group (n = 558, age 62.4 ± 0.5, diabetes duration 15.3 ± 0.4, BMI 23.6 ± 0.2) comprised patients who had diabetic retinopathy but no evidence of renal dysfunction (i.e. AER <20 μg/min or ACR <30 mg/g Cr). The AER or ACR were measured at least twice for each patient.", "We selected diabetic nephropathy patients and control patients among the subjects enrolled in the BioBank Japan. Nephropathy cases were defined as patients with type 2 diabetes having both overt diabetic nephropathy and diabetic retinopathy (n = 449, age 64.7 ± 0.4, BMI 23.5 ± 0.2). The control subjects were patients with type 2 diabetes who had diabetic retinopathy and normoalbuminuria (n = 965, age 64.8 ± 0.3, BMI 23.8 ± 0.1).", "Patients with type 2 diabetes who regularly visited Tokai University Hospital or its affiliated hospitals were enrolled in this study. All the nephropathy patients (n = 300, age 64.4 ± 0.6, diabetes duration 21.9 ± 0.9, BMI 22.1 ± 0.2, mean ± SE) were receiving chronic hemodialysis therapy, and the control patients (n = 224, age 65.0 ± 0.7, diabetes duration 16.3 ± 0.4, BMI 23.4 ± 0.3, mean ± SE) included those with normoalbuminuria as determined by at least 2 measurements of urinary ACR and with diabetes for >10 years.\nAll the patients participating in this study provided written informed consent, and the study protocol was approved by the ethics committees of RIKEN Yokohama Institute and of each participating institution.\nThe clinical characteristics of the subjects are shown in Supplementary Table 1.", "We searched the HapMap database (http://hapmap.ncbi.nlm.nih.gov/) for SNPs within the genes encoding sirtuin families, and selected 55 SNPs (39 tagging SNPs) for genotyping; 11 in SIRT1 (rs12778366, rs3740051, rs2236318, rs2236319, rs10823108, rs10997868, rs2273773, rs3818292, rs3818291, rs4746720, rs10823116), 7 in SIRT2 (rs1001413, rs892034, rs2015, rs2241703, rs2082435, rs11575003, rs2053071), 15 in SIRT3 (rs11246002, rs2293168, rs3216, rs10081, rs511744, rs6598074, rs4758633, rs11246007, rs3782117, rs3782116, rs3782115, rs1023430, rs12576565, rs536715, rs3829998), 7 in SIRT4 (rs6490288, rs7298516, rs3847968, rs12424555, rs7137625, rs2261612, rs2070873), 11 in SIRT5 (rs2804923, rs9382227, rs2804916, rs2804918, rs9370232, rs4712047, rs3734674, rs11751539, rs3757261, rs2253217, rs2841514), and 4 in SIRT6 (rs350852, rs7246235, rs107251, rs350844). We could not identify any confirmed SNPs within SIRT7 in the Japanese population. The genotyping of these SNPs was performed by using multiplex polymerase chain reaction (PCR)-invader assays, as described previously [7–10].", "We tested the genotype distributions for Hardy–Weinberg equilibrium (HWE) proportions by using the chi-squared test. We analyzed the differences between the case−control groups in terms of the distribution of genotypes with the Cochran–Armitage trend test. The analyses for haplotype structures within each gene were performed using Haploview software version 4.1 [20]. 
A combined meta-analysis was performed using the Mantel–Haenszel procedure with a fixed effects model after testing for heterogeneity.", "Among the 55 SNPs examined, genotype distributions of 3 SNPs, rs12576565 in SIRT3, and rs2804923 and rs2841514 in SIRT 5, showed significant deviation from HWE proportion in control groups (P < 0.01, Supplementary Table 2), and these 3 SNPs were excluded from the association study. As shown in Table 1, 8 out of 11 SNPs in SIRT1 showed a directionally consistent association with diabetic nephropathy in all 3 studies, although individual associations were not significant (P > 0.05, Supplementary Table 2). In a combined meta-analysis, we could identify a nominally significant association between rs4746720 and proteinuria, and between 4 SNPs, rs2236319, rs10823108, rs3818292, rs4746720, and combined phenotypes (proteinuria + ESRD, P < 0.05). Subsequent haplotype analysis revealed that the 11 SNPs formed one haplotype block (Fig. 1), and 7 common haplotypes covered >99% of the present Japanese population. Among them one haplotype had a stronger association with diabetic nephropathy than single SNPs alone (P = 0.016, odds ratio (OR) 1.31 95% confidence interval (CI) 1.05–1.62]. Any SNPs or haplotypes in SIRT2–6 were not associated with diabetic nephropathy in the combined analysis (Tables 2, 3, 4, 5, 6), although there was an association between 3 SNPs (rs4712047, rs3734674, rs3757261) in SIRT5 and diabetic nephropathy in the study 2 population (Supplementary Table 2). To validate the association between SIRT1 and diabetic nephropathy, we examined another 195 cases (overt proteinuria) and 264 controls registered in the BioBank Japan (study 4). As shown in Table 7, most SNPs showed a consistent association with those in the original finding, and the association of the haplotype was strengthened further (P = 0.0028, OR 1.36, 95% CI 1.11–1.66). 
We further examined the association between SIRT1 SNPs and microalbuminuria in studies 1 and 2, but could not identify a significant association (Supplementary Table 3), suggesting that SIRT1 SNPs might contribute to the progression of nephropathy rather than to its onset in patients with type 2 diabetes.

Table 1 Association between SNPs in SIRT1 and diabetic nephropathy. Allele frequencies are given as nephropathy case/control; the first P and OR (95% CI) refer to the proteinuria case-control comparison (studies 1 and 2), study 3 is the ESRD case-control panel, and the second P and OR (95% CI) refer to the combined analysis.

SNP | Study 1 | Study 2 | P | OR (95% CI) | Study 3 | P | OR (95% CI)
rs12778366^a T>C | 0.111/0.103 | 0.125/0.124 | 0.672 | 1.04 (0.86–1.26) | 0.101/0.119 | 0.981 | 0.998 (0.84–1.18)
rs3740051^a A>G | 0.291/0.277 | 0.316/0.301 | 0.299 | 1.07 (0.94–1.22) | 0.310/0.274 | 0.138 | 1.09 (0.97–1.23)
rs2236318^a T>A | 0.121/0.129 | 0.099/0.111 | 0.327 | 0.91 (0.75–1.10) | 0.106/0.119 | 0.236 | 0.90 (0.76–1.07)
rs2236319 A>G | 0.339/0.317 | 0.358/0.339 | 0.165 | 1.09 (0.96–1.24) | 0.349/0.300 | 0.048 | 1.12 (1.00–1.26)
rs10823108 G>A | 0.335/0.318 | 0.357/0.335 | 0.169 | 1.09 (0.96–1.24) | 0.351/0.302 | 0.049 | 1.12 (1.00–1.26)
rs10997868^a C>A | 0.187/0.184 | 0.187/0.174 | 0.520 | 1.05 (0.90–1.23) | 0.180/0.173 | 0.482 | 1.05 (0.91–1.21)
rs2273773 T>C | 0.339/0.325 | 0.361/0.347 | 0.325 | 1.07 (0.94–1.21) | 0.353/0.306 | 0.113 | 1.10 (0.98–1.23)
rs3818292 A>G | 0.336/0.317 | 0.360/0.335 | 0.134 | 1.10 (0.97–1.25) | 0.352/0.306 | 0.042 | 1.13 (1.00–1.26)
rs3818291 G>A | 0.111/0.101 | 0.127/0.129 | 0.650 | 1.04 (0.87–1.26) | 0.101/0.124 | 0.927 | 0.99 (0.84–1.17)
rs4746720^a T>C | 0.366/0.394 | 0.331/0.364 | 0.041 | 0.88 (0.77–0.99) | 0.367/0.400 | 0.021 | 0.88 (0.78–0.98)
rs10823116^a A>G | 0.446/0.442 | 0.441/0.448 | 0.905 | 0.99 (0.88–1.12) | 0.459/0.394 | 0.428 | 1.05 (0.94–1.16)
Haplotype TGTGACCGGTG | 0.294/0.279 | 0.316/0.300 | 0.250 | 1.08 (0.95–1.23) | 0.315/0.273 | 0.095 | 1.10 (0.98–1.24)
Haplotype TATAGCTAGCA | 0.255/0.273 | 0.251/0.252 | 0.464 | 0.95 (0.83–1.09) | 0.253/0.304 | 0.143 | 0.91 (0.81–1.03)
Haplotype CATAGCTAATA | 0.112/0.103 | 0.124/0.129 | 0.817 | 1.02 (0.85–1.23) | 0.100/0.119 | 0.841 | 0.98 (0.83–1.16)
Haplotype TAAAGATAGTA | 0.123/0.128 | 0.104/0.112 | 0.484 | 0.94 (0.78–1.13) | 0.105/0.122 | 0.319 | 0.92 (0.78–1.08)
Haplotype TATAGCTAGCG | 0.109/0.123 | 0.085/0.111 | 0.037 | 0.81 (0.67–0.99) | 0.113/0.099 | 0.117 | 0.87 (0.73–1.03)
Haplotype TATAGATAGTA | 0.065/0.055 | 0.078/0.059 | 0.051 | 1.27 (0.998–1.61) | 0.077/0.053 | 0.016 | 1.31 (1.05–1.62)
Haplotype TATGACCGGTG | 0.042/0.039 | 0.040/0.036 | 0.57 | 1.09 (0.81–1.48) | 0.036/0.028 | 0.421 | 1.12 (0.85–1.48)
^a Tag SNPs

Fig. 1 Position of the 11 SNPs in SIRT1, and pair-wise linkage disequilibrium coefficients (r²) among the 11 SNPs in the present Japanese population.

Table 2 Association between SNPs in SIRT2 and diabetic nephropathy: case/control allele frequencies for seven SNPs and for haplotypes in two blocks, with P values and ORs (95% CI) for the proteinuria and combined comparisons (same layout as Table 1). Block 1: rs892034, rs2015, rs2241703, rs2082435; Block 2: rs11575003, rs2053071. ^a Tag SNPs.

Table 3 Association between SNPs in SIRT3 and diabetic nephropathy: case/control allele frequencies for 14 SNPs and for haplotypes in three blocks, with P values and ORs (95% CI) for the proteinuria and combined comparisons (same layout as Table 1). Block 1: rs11246002, rs2293168, rs3216, rs10081; Block 2: rs6598074, rs4758633, rs11246007, rs3782117; Block 3: rs1023430, rs536715, rs3829998. ^a Tag SNPs.

Table 4 Association between SNPs in SIRT4 and diabetic nephropathy: case/control allele frequencies for seven SNPs and for haplotypes in one block, with P values and ORs (95% CI) for the proteinuria and combined comparisons (same layout as Table 1). Block 1: rs3847968, rs12424555, rs7137625, rs2261612, rs2070873. ^a Tag SNPs.

Table 5 Association between SNPs in SIRT5 and diabetic nephropathy: case/control allele frequencies for nine SNPs and for haplotypes in two blocks, with P values and ORs (95% CI) for the proteinuria and combined comparisons. Block 1: rs9382227, rs2804916; Block 2: rs3734674, rs11751539, rs3757261, rs2253217, rs2841514. ^a Tag SNPs.

Table 6 Association between SNPs in SIRT6 and diabetic nephropathy: case/control allele frequencies for four SNPs and for haplotypes in one block, with P values and ORs (95% CI) for the proteinuria and combined comparisons (same layout as Table 1). Block 1: rs7246235, rs107251, rs350844. ^a Tag SNPs.

Table 7 Replication study for the association between SNPs in SIRT1 and diabetic nephropathy. Allele frequencies are given as nephropathy case/control in study 4; the first P and OR (95% CI) refer to the proteinuria analysis (studies 1, 2 and 4) and the second to the combined proteinuria + ESRD analysis (studies 1, 2, 3 and 4).

SNP | Study 4 | P | OR (95% CI) | P | OR (95% CI)
rs12778366^a T>C | 0.089/0.131 | 0.676 | 0.96 (0.81–1.15) | 0.448 | 0.94 (0.80–1.10)
rs3740051^a A>G | 0.311/0.291 | 0.226 | 1.08 (0.96–1.21) | 0.106 | 1.09 (0.98–1.22)
rs2236318^a T>A | 0.113/0.116 | 0.350 | 0.92 (0.78–1.09) | 0.257 | 0.91 (0.78–1.07)
rs2236319 A>G | 0.360/0.344 | 0.142 | 1.09 (0.97–1.22) | 0.044 | 1.12 (1.00–1.24)
rs10823108 G>A | 0.358/0.337 | 0.127 | 1.09 (0.97–1.23) | 0.038 | 1.12 (1.01–1.24)
rs10997868^a C>A | 0.181/0.175 | 0.490 | 1.05 (0.91–1.21) | 0.456 | 1.05 (0.92–1.20)
rs2273773 T>C | 0.364/0.342 | 0.239 | 1.07 (0.95–1.20) | 0.085 | 1.10 (0.99–1.22)
rs3818292 A>G | 0.358/0.344 | 0.120 | 1.10 (0.98–1.23) | 0.040 | 1.12 (1.01–1.24)
rs3818291 G>A | 0.090/0.132 | 0.696 | 0.97 (0.81–1.15) | 0.412 | 0.94 (0.80–1.10)
rs4746720^a T>C | 0.371/0.361 | 0.084 | 0.90 (0.81–1.01) | 0.044 | 0.90 (0.81–0.997)
rs10823116^a A>G | 0.453/0.450 | 0.939 | 0.996 (0.89–1.11) | 0.446 | 1.04 (0.94–1.15)
Haplotype TGTGACCGGTG | 0.306/0.297 | 0.240 | 1.07 (0.95–1.21) | 0.098 | 1.09 (0.98–1.22)
Haplotype TATAGCTAGCA | 0.269/0.243 | 0.809 | 0.96 (0.87–1.11) | 0.336 | 0.95 (0.85–1.06)
Haplotype CATAGCTAATA | 0.105/0.129 | 0.741 | 0.97 (0.82–1.15) | 0.496 | 0.95 (0.81–1.10)
Haplotype TAAAGATAGTA | 0.122/0.116 | 0.621 | 0.96 (0.81–1.13) | 0.430 | 0.94 (0.80–1.09)
Haplotype TATAGCTAGCG | 0.095/0.112 | 0.022 | 0.82 (0.69–0.97) | 0.071 | 0.86 (0.74–1.01)
Haplotype TATAGATAGTA | 0.072/0.059 | 0.0091 | 1.34 (1.07–1.66) | 0.0028 | 1.36 (1.11–1.66)
Haplotype TATGACCGGTG | 0.031/0.044 | 0.942 | 1.01 (0.77–1.33) | 0.746 | 1.04 (0.81–1.35)
^a Tag SNPs",
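The per-SNP statistics in Tables 1-7 are conventional allelic case-control measures. As a point of reference only, a minimal sketch of how an allelic odds ratio and its Wald 95% CI can be computed from allele counts is shown below; the counts are hypothetical and are not taken from these studies.

```python
from math import exp, log, sqrt

def allelic_or(case_minor: int, case_major: int, ctrl_minor: int, ctrl_major: int):
    """Allelic odds ratio and Wald 95% CI from a 2x2 table of allele counts."""
    or_ = (case_minor * ctrl_major) / (case_major * ctrl_minor)
    se_log_or = sqrt(1 / case_minor + 1 / case_major + 1 / ctrl_minor + 1 / ctrl_major)
    ci = (exp(log(or_) - 1.96 * se_log_or), exp(log(or_) + 1.96 * se_log_or))
    return or_, ci

# Hypothetical allele counts (about 1,000 case and 1,000 control chromosomes); not study data.
odds_ratio, (lo, hi) = allelic_or(case_minor=360, case_major=640, ctrl_minor=310, ctrl_major=690)
print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```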
SNPs", "In the present study, we identified that SNPs within SIRT1 were nominally associated with susceptibility to diabetic nephropathy. We also identified one haplotype consisting of the 11 SNPs in SIRT1 had a stronger association with diabetic nephropathy than single SNPs alone.\nSIRT1 encodes a member of NAD(+)-dependent histone deacetylase, involved in various nuclear events such as transcription, DNA replication, and DNA repair. Cumulative evidence during the past decade has demonstrated that SIRT1 plays an important role not only in the regulation of aging and longevity, but also in the development and/or progression of age-associated metabolic diseases, such as type 2 diabetes. SIRT1 activation is considered to be a key mediator for favorable effects on lifespan or on metabolic activity in animals under calorie restriction (CR) [21–24]. Recently, Kume et al. [19] reported that mice under 40% CR were protected from the development of glomerular sclerosis in aging mice kidneys through increasing mitochondrial biogenesis caused by sirt1 activation. From these observations, it is suggested that SIRT1 has a pivotal role in the pathogenesis of aging-related metabolic diseases, such as type 2 diabetes or glomerulosclerosis, and a genetic difference in SIRT1 activity among individuals, if it is present, may contribute to conferring susceptibility to these diseases.\nIn the present study, we identified that SNPs within SIRT1 were nominally associated with diabetic nephropathy, whereas SNPs in other sirtuin families did not show any association with diabetic nephropathy. Combining the present finding with a previous report, SIRT1 may be considered a good new candidate for diabetic nephropathy, although, the role of sirtuin families other than SIRT1 in age-related metabolic diseases has not been well evaluated. The mechanism by which the SIRT1 polymorphism contributes to conferring susceptibility to diabetic nephropathy remains to be elucidated. Since SIRT1 could affect various metabolic activities, the effects of SIRT1 polymorphisms on susceptibility to diabetic nephropathy might be mediated by differences in the metabolic state among individuals, including glycemic control, obesity, blood pressure, etc. We then examined the association between SNPs in SIRT1 and BMI, hemoglobin A1c (HbA1c), fasting plasma glucose, or systolic blood pressure in the present subjects with type 2 diabetes, but we could not observe any association between the SIRT1 SNPs and those quantitative traits (P > 0.05, Supplementary Table 4). In contrast to our present finding, SNPs within the SIRT1, rs7895833 and rs1467568, were shown to be significantly associated with BMI in Dutch populations [25]. We did not examine those SNPs, but the present study includes an SNP in high linkage disequilibrium (LD) to these 2 SNPs (rs10997868; r\n2 = 1 and 0.64 to rs1467568 and rs7895833, respectively). Interestingly, there is a dramatic difference in the frequency of the reported protected allele (A allele of rs1467568) between European and Japanese populations (0.25 in the European population vs. 0.841 in the Japanese population, HapMap database, http://www.ncbi.nlm.nih.gov/projects/SNP/snp_ref.cgi?rs=1467568). Since rs10997868 was not associated with either BMI or susceptibility to the disease, ethnic differences may contribute to the discrepancy between the Dutch and Japanese populations, and the contribution of SIRT1 SNPs to BMI, if it is present, is considered very minor in the Japanese population. 
It has been also reported that SNPs in SIRT1 were associated with energy expenditure in a small number of Finnish healthy nondiabetic offspring of patients with type 2 diabetes [23]. The alleles associated with higher energy expenditure, supposed to be favorable alleles for glucose metabolism, are G for rs3740051, G for rs2236319, and C for rs2273773, respectively; although these alleles increase the risk of diabetic nephropathy in the present Japanese population. From these observations, we speculate that the effects of SIRT1 gene polymorphisms on diabetic nephropathy are independent of these metabolic parameters; however, there are limitations to the present cross-sectional study and further longitudinal prospective studies are required to obtain a precise conclusion. The association between individual SIRT1 SNPs and diabetic nephropathy did not attain statistically significant levels after correction for multiple-testing errors, and a haplotype consisting of 11 SIRT1 SNPs had a stronger association with the disease, suggesting the existence of other true causal variations within this locus. In addition, since nephropathy cases in the present study were at a more advanced stages of diabetic nephropathy, the findings on SNPs and the haplotype within SIRT1 may be applicable mainly to advanced diabetic nephropathy. Therefore, further extensive analyses for this locus, re-sequencing, dense LD mapping, or further confirmation studies are also required to link the SIRT1 locus to the genetic susceptibility of diabetic nephropathy as a whole.\nIn conclusion, we found that the SNPs and a haplotype within SIRT1 were nominally associated with susceptibility to diabetic nephropathy in four independent Japanese case–control studies. The present data suggest that SIRT1 may be a good candidate for diabetic nephropathy, although the association should be evaluated further in independent studies.", "[SUBTITLE] [SUBSECTION] Below is the link to the electronic supplementary material.\nSupplementary table 1 (DOC 47 kb)\nSupplementary table 2 (XLS 74 kb)\nSupplementary table 3 (XLS 37 kb)\nSupplementary table 4 (XLS 23 kb)\n\nSupplementary table 1 (DOC 47 kb)\nSupplementary table 2 (XLS 74 kb)\nSupplementary table 3 (XLS 37 kb)\nSupplementary table 4 (XLS 23 kb)\nBelow is the link to the electronic supplementary material.\nSupplementary table 1 (DOC 47 kb)\nSupplementary table 2 (XLS 74 kb)\nSupplementary table 3 (XLS 37 kb)\nSupplementary table 4 (XLS 23 kb)\n\nSupplementary table 1 (DOC 47 kb)\nSupplementary table 2 (XLS 74 kb)\nSupplementary table 3 (XLS 37 kb)\nSupplementary table 4 (XLS 23 kb)", "Below is the link to the electronic supplementary material.\nSupplementary table 1 (DOC 47 kb)\nSupplementary table 2 (XLS 74 kb)\nSupplementary table 3 (XLS 37 kb)\nSupplementary table 4 (XLS 23 kb)\n\nSupplementary table 1 (DOC 47 kb)\nSupplementary table 2 (XLS 74 kb)\nSupplementary table 3 (XLS 37 kb)\nSupplementary table 4 (XLS 23 kb)" ]
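The r² values quoted in the discussion above are the usual pairwise linkage disequilibrium coefficients, as also displayed in Fig. 1. As a minimal illustration, with hypothetical frequencies that are not estimates from this study, r² can be computed from a haplotype frequency and the two allele frequencies as follows.

```python
def ld_r_squared(p_ab: float, p_a: float, p_b: float) -> float:
    """Pairwise LD coefficient r^2 between two SNPs, from the frequency of the A-B
    haplotype (p_ab) and the frequencies of allele A and allele B (p_a, p_b)."""
    d = p_ab - p_a * p_b                                   # linkage disequilibrium coefficient D
    return d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Hypothetical frequencies for two tightly linked SNPs; not estimates from this study.
print(round(ld_r_squared(p_ab=0.28, p_a=0.30, p_b=0.32), 3))   # roughly 0.74
```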
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", "discussion", "supplementary-material", null ]
[ "Single nucleotide polymorphism (SNP)", "Association study", "SIRT1", "Diabetic nephropathy" ]
Constipation and diarrhoea - common adverse drug reactions? A cross sectional study in the general population.
21332973
Constipation and diarrhoea are common complaints and often reported as adverse drug reactions. This study aimed at finding associations between drugs and constipation and diarrhoea in a general population.
BACKGROUND
A selection of inhabitants in Oppland County, Norway, participated in a cross-sectional survey. Information about demographics, diseases (including gastrointestinal complaints classified according to the Rome II criteria) and use of drugs was collected on questionnaires. Constipation was defined as functional constipation and constipation predominant Irritable Bowel Syndrome (IBS), and diarrhoea as functional diarrhoea and diarrhoea predominant IBS. Associations between drugs and constipation and diarrhoea were examined with multivariable logistic regression models. Based on the multivariable model, the changes in prevalence (risk difference) of the abdominal complaints for non-users and users of drugs were calculated.
METHODS
In total, 11078 subjects were invited; 4622 completed the questionnaires, of whom 640 (13.8%) had constipation and 407 (8.8%) had diarrhoea. Starting to use drugs increased the prevalence of constipation and diarrhoea by 2.5% and 2.3%, respectively. Polypharmacy was an additional risk factor for diarrhoea. Use of furosemide, levothyroxine sodium and ibuprofen was associated with constipation, and use of lithium and carbamazepine with diarrhoea. The excess drug-related prevalence varied from 5.3% for the association between ibuprofen and constipation to 27.5% for the association between lithium and diarrhoea.
RESULTS
Use of drugs was associated with constipation and diarrhoea in the general population. The associations are most likely adverse drug reactions and show that drug-induced symptoms need to be considered in subjects with these complaints.
CONCLUSIONS
[ "Constipation", "Cross-Sectional Studies", "Diarrhea", "Drug-Related Side Effects and Adverse Reactions", "Female", "Humans", "Male", "Middle Aged", "Multivariate Analysis", "Norway", "Prevalence", "Surveys and Questionnaires" ]
3049147
null
null
Methods
[SUBTITLE] Participants [SUBSECTION] In 2001, all persons in Oppland County, Norway, born in 1970, 1960, 1955, 1940 and 1925 were invited to a health study conducted by the Norwegian Institute of Public Health, which also collected the data [19].
[SUBTITLE] Design [SUBSECTION] The study was population-based with a cross-sectional design. No remuneration was given. At attendance, standardized questionnaires were completed and a physical examination was performed. All subjects were asked to take home, complete and return by post a supplementary questionnaire about the complaints. Non-responders received two reminders.
[SUBTITLE] Variables [SUBSECTION]
[SUBTITLE] Questionnaires [SUBSECTION] All participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated as less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8-point ordinal scale from 1 = "never" to 8 = "4-7 times/week") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints during the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations), and the intensity for each location was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying the number of locations by the intensity, giving a score with range 0-12. Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10), with range 1.0-4.0 and 1.85 as the upper normal limit [20]. The subjects gave a written report on all drugs used regularly during the last four weeks. The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. This survey has previously been used to study the prevalence, co-morbidity and impact of irritable bowel syndrome [22].
[SUBTITLE] Definition of constipation and diarrhoea [SUBSECTION] The gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.
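As a small illustration of the outcome definitions above, the mapping from a Rome II classification to the two study outcomes can be written as follows; the category labels are hypothetical, since the questionnaire's internal coding is not given in the paper.

```python
# Hypothetical labels for the Rome II categories derived from the questionnaire.
CONSTIPATION_CATEGORIES = {"functional constipation", "IBS, constipation predominant"}
DIARRHOEA_CATEGORIES = {"functional diarrhoea", "IBS, diarrhoea predominant"}

def outcome_flags(rome_ii_category: str) -> dict:
    """Map a Rome II classification to the study's two outcome definitions."""
    return {
        "constipation": rome_ii_category in CONSTIPATION_CATEGORIES,
        "diarrhoea": rome_ii_category in DIARRHOEA_CATEGORIES,
    }

print(outcome_flags("IBS, constipation predominant"))  # {'constipation': True, 'diarrhoea': False}
```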
[SUBTITLE] Classification of drugs [SUBSECTION] Drug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as groups of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs. Analyses were performed for individual drugs (ATC level 5) and for groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. However, no analyses were performed for drugs used for gastrointestinal disorders (ATC classes A02, A03, A06 and A07), since drug-related gastrointestinal symptoms could not be distinguished from the disorder under treatment. Nor were drugs or groups of drugs used by fewer than 10 persons analysed.
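For orientation, a minimal sketch of how drug-exposure variables of this kind can be derived from person-level, ATC-coded drug lists is given below; the records and ATC codes are made up for illustration and are not data from the survey.

```python
import pandas as pd

# Illustrative person-drug records (one row per reported drug).
drugs = pd.DataFrame({
    "person_id": [1, 1, 2, 3, 3, 3, 3, 3],
    "atc": ["C03CA01", "H03AA01", "A02BC01", "N05AN01", "M01AE01", "C07AB02", "N03AF01", "A06AD11"],
})

# Drop gastrointestinal drug classes (ATC A02, A03, A06, A07): their use cannot be
# separated from the complaint under treatment, so they were excluded from the analyses.
gi_classes = {"A02", "A03", "A06", "A07"}
drugs = drugs[~drugs["atc"].str[:3].isin(gi_classes)]

# Count the remaining drugs per person and derive the exposure variables used in the study.
exposure = drugs.groupby("person_id").size().rename("n_drugs").to_frame()
exposure["any_drug"] = exposure["n_drugs"] > 0
exposure["category"] = pd.cut(exposure["n_drugs"], bins=[0, 1, 3, float("inf")], labels=["1", "2-3", ">3"])
exposure["polypharmacy"] = exposure["n_drugs"] > 3      # more than three drugs
print(exposure)   # persons using only excluded GI drugs simply drop out of this frame
```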
[SUBTITLE] Statistics [SUBSECTION] Bivariate analyses of the association between the abdominal complaint and one variable at a time were performed with Student's t-test, the Wilcoxon-Mann-Whitney test or Fisher's exact test (Tables 1 and 2). In multivariable logistic regression models with the abdominal complaint as outcome, the effects of drugs were adjusted for covariates selected as follows: candidate covariates were all background variables listed in Table 1; forward likelihood ratio (LR) selection with p-enter = 0.15 and p-remove = 0.20 was followed by backward LR selection with p-enter = 0.05 and p-remove = 0.06, a procedure adapted from Hosmer and Lemeshow [24]. Age and gender were included in all models. Separate analyses were performed for drug use no/yes (Table 3), number of drugs (Table 4) and drug substances (Table 2). Candidate drug substances (for Table 5) were those with unadjusted p-values ≤ 0.2, and groups of drugs with p-values ≤ 0.2 if none of the substances in the group were candidates. Based on the multivariable model, we estimated the average change in prevalence (also called the average risk difference) of the abdominal complaint for non-users and users of drugs and of the separate drug substances, if they were to start or stop using the drugs [25]. Unlike in randomized controlled trials, the characteristics of users and non-users of the drugs differ. Hence, the effect of the drug in terms of risk difference is not the same among users and non-users of the drug. The two risk differences are estimated using the method of Bender et al. [26].
Table 1. Characteristics of all participants (independent of complaints), participants with constipation and participants with diarrhoea, and comparisons between subjects with and without diarrhoea and with and without constipation. The results are given as number of subjects, proportion (in percentage) or mean (with SD in brackets), and the proportion of missing data for each question in square brackets. † = p < 0.20; * = p < 0.05; ** = p < 0.01; *** = p < 0.001. ¤ Frequency of use of alcohol: 8-point ordinal scale, 1 = never used to 8 = 4-7 times a week. § Musculoskeletal complaints: sum score (six locations and intensity), 0-12. # HSCL-10: Hopkins Symptom Check List 10 (mood disorders), 1-4.
Table 2. The prevalence of constipation and diarrhoea among users and non-users of drugs significantly associated with the complaints (bivariate analyses, all participants).
Table 3. Independent predictors for constipation and diarrhoea. Multivariable logistic regression analyses with stepwise selection of variables. Use of drugs, and not the individual drugs, was included in the models. Age and gender were included in the models independent of significance.
Table 4. Number of drugs as predictors for constipation and diarrhoea. Multivariable logistic regression analyses adjusted for significant predictors. Significant predictors for constipation: age, gender, BMI, frequency of use of alcohol, musculoskeletal complaints, angina and multiple sclerosis. Significant predictors for diarrhoea: age, gender, frequency of use of alcohol, years of education, mood disorders and osteoporosis.
Table 5. Observed prevalence (in per cent) of constipation and diarrhoea in users and non-users of drugs, calculated prevalence if treatment is stopped or started, and changes of prevalence (average risk difference) with 95% CI when stopping and starting treatment (multivariable analyses; 4586 and 4268 cases available for the analyses of constipation and diarrhoea, respectively).
Two-sided p-values ≤ 0.05 were regarded as statistically significant. Due to hypothesis testing for many substances, p-values above 0.01 should be interpreted with caution. We report 95% confidence intervals (CI) where appropriate. The analyses were performed in SPSS 16. The SPSS code "nne_ein.sps", available from Ralf Bender's software page, was used for calculating the change in prevalence [26].
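The average change in prevalence described above can be obtained from a fitted logistic model by marginal standardisation: predict every subject's risk with the drug term switched on and off, then average the differences separately within users and non-users. The study itself used SPSS and Bender's "nne_ein.sps" code; the sketch below is an illustration of the same idea on simulated data, with variable names that are not the study's actual variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for the survey; effect sizes are arbitrary and illustrative.
rng = np.random.default_rng(1)
n = 4000
df = pd.DataFrame({
    "age": rng.choice([31, 41, 46, 61, 76], size=n),   # ages of the invited birth cohorts in 2001
    "female": rng.integers(0, 2, size=n),
    "drug_use": rng.integers(0, 2, size=n),
})
logit_p = -3.0 + 0.02 * df["age"] + 0.3 * df["female"] + 0.3 * df["drug_use"]
df["constipation"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Multivariable logistic regression of the complaint on drug use, adjusted for covariates
# (the real models used covariates chosen by forward/backward likelihood-ratio selection).
model = smf.logit("constipation ~ drug_use + age + female", data=df).fit(disp=0)

# Average risk differences: predicted risk with drug_use forced to 1 and to 0, averaged
# among current non-users ("what if they started") and current users ("what if they stopped").
p_exposed = model.predict(df.assign(drug_use=1))
p_unexposed = model.predict(df.assign(drug_use=0))
start_effect = (p_exposed - p_unexposed)[df["drug_use"] == 0].mean()
stop_effect = (p_unexposed - p_exposed)[df["drug_use"] == 1].mean()
print(f"change in prevalence if non-users started drugs: {start_effect:+.3%}")
print(f"change in prevalence if users stopped drugs:     {stop_effect:+.3%}")
```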
[SUBTITLE] Ethics [SUBSECTION] This survey was approved by the Regional Committee for Medical Research Ethics and the Data Inspectorate, Oslo, Norway, and performed in accordance with the Declaration of Helsinki.
null
null
null
null
[ "Background", "Participants", "Design", "Variables", "Questionnaires", "Definition of constipation and diarrhoea", "Classification of drugs", "Statistics", "Ethics", "Results", "Participants", "Use of drugs and associations with constipation and diarrhoea", "Drug substances associated with constipation and diarrhoea", "Discussion", "Strengths and weaknesses", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Constipation and diarrhoea are worldwide complaints with some variations between geographical regions, populations and definitions [1-7]. The use of drugs is common, increases with age and has increased over time with a doubling of prescriptions to elderly from 1996 to 2006 [8,9]. Unfortunately, drugs are associated with adverse drug reactions (ADRs) which increase with polypharmacy [3,10-16]. Gastrointestinal complaints like constipation and diarrhoea are common ADRs [16-18]. The reported prevalence rates of gastrointestinal ADRs, especially constipation and diarrhoea, are based mainly on clinical drug trials and observational studies in selected, often elderly, populations, and use heterogeneous definitions of constipation and diarrhoea [3,11,14,17,18]. The prevalence of constipation and diarrhoea related to everyday use of drugs in an unselected general population is in large unknown.\nIncreased knowledge about ADRs allows individual adjustment of drug treatment in patients with gastrointestinal symptoms. This population based cross-sectional study aimed at finding associations between drugs and constipation and diarrhoea.", "In 2001, all persons in Oppland County, Norway, born in 1970, 1960, 1955, 1940 and 1925, were invited to a health study conducted by the Norwegian Institute of Public Health who also collected the data [19].", "The study was population-based with a cross-sectional design. No remuneration was given. At attendance, standardized questionnaires were completed and a physical examination was performed. All subjects were asked to take home, complete and return by post a supplementary questionnaire about the complaints. Non-responders received two reminders.", "[SUBTITLE] Questionnaires [SUBSECTION] All participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8 point ordinal scale from 1 = \"never\" to 8 = \"4-7 times/week\") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations) and the intensity for each of them was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying number of locations and intensity giving a score with range 0-12. Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10) with range 1.0-4.0 and 1.85 as upper normal limit [20]. The subjects gave a written report on all drugs used regularly the last four weeks. The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. 
This survey has previously been used for the study of the prevalence, co-morbidity and impact of irritable bowel syndrome [22].\nAll participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8 point ordinal scale from 1 = \"never\" to 8 = \"4-7 times/week\") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations) and the intensity for each of them was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying number of locations and intensity giving a score with range 0-12. Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10) with range 1.0-4.0 and 1.85 as upper normal limit [20]. The subjects gave a written report on all drugs used regularly the last four weeks. The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. This survey has previously been used for the study of the prevalence, co-morbidity and impact of irritable bowel syndrome [22].\n[SUBTITLE] Definition of constipation and diarrhoea [SUBSECTION] The gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.\nThe gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.\n[SUBTITLE] Classification of drugs [SUBSECTION] Drug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as group of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (Yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs.\nAnalyses were performed for individual drugs (ATC-level 5) and groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. However, no analyses were performed for drugs used for gastrointestinal disorders (ATC-classes A02, A03, A06 and A07) since drug related gastrointestinal symptoms could not be distinguished from the disorder under treatment. 
Nor were drugs or groups of drugs used by less than 10 persons analysed.\nDrug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as group of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (Yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs.\nAnalyses were performed for individual drugs (ATC-level 5) and groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. However, no analyses were performed for drugs used for gastrointestinal disorders (ATC-classes A02, A03, A06 and A07) since drug related gastrointestinal symptoms could not be distinguished from the disorder under treatment. Nor were drugs or groups of drugs used by less than 10 persons analysed.", "All participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8 point ordinal scale from 1 = \"never\" to 8 = \"4-7 times/week\") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations) and the intensity for each of them was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying number of locations and intensity giving a score with range 0-12. Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10) with range 1.0-4.0 and 1.85 as upper normal limit [20]. The subjects gave a written report on all drugs used regularly the last four weeks. The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. This survey has previously been used for the study of the prevalence, co-morbidity and impact of irritable bowel syndrome [22].", "The gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.", "Drug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as group of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (Yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs.\nAnalyses were performed for individual drugs (ATC-level 5) and groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. 
However, no analyses were performed for drugs used for gastrointestinal disorders (ATC-classes A02, A03, A06 and A07) since drug related gastrointestinal symptoms could not be distinguished from the disorder under treatment. Nor were drugs or groups of drugs used by less than 10 persons analysed.", "Bivariate analyses of the association between the abdominal complaint and one variable at a time were analysed with Student's t-test, Wilcoxon-Mann-Whitney's test, or Fisher's exact test (Tables 1 and 2). In multivariable logistic regression models with abdominal complaint as outcome, the effects of drugs were adjusted for covariates selected as follows: Candidate covariates were all background variables listed in Table 1. Forward likelihood ratio (LR) selection with p-enter = 0.15 and p-remove = 0.20 was followed by backwards LR with p-enter = 0.05 and p-remove = 0.06, a procedure adapted from Hosmer and Lemeshow [24]. Age and gender were included in all models. Separate analyses were performed for drug use no/yes (Table 3), number of drugs (Table 4), and drug substances (Table 2). Candidate drug substances (for Table 5) were those with unadjusted p-values ≤ 0.2 and groups of drugs with p-value ≤0.2 if none of the substances in the group were candidates. Based on the multivariable model, we estimated the average change in prevalence (also called average risk difference) of the abdominal complaint for non-users and users of drugs and the separate drug substances, if they would start or stop using the drugs [25]. Unlike in randomized controlled trials, the characteristics of users and non-users of the drugs differ. Hence, the effect of the drug in terms of risk difference is not the same among users and non-users of the drug. The two risk differences are estimated using the method of Bender et al [26].\nCharacteristics of all participants (independent of complaints), participants with constipations and participants with diarrhoea, and comparisons between subjects with and without diarrhoea and with and without constipation.\nThe results are given as number of subjects, proportion (in percentage) or mean (with SD in brackets) and the proportion of missing data to each question in square brackets.\n† = p < 0.20; * = p < 0.05; ** = p < 0.01; *** = p < 0.001.\n¤ Frequency of use of alcohol: 8 point ordinal scale: 1 = never used to 8 = 4-7 times a week\n§ Musculoskeletal complaints: Sum score (six location and intensity) 0-12\n# HSCL 10: Hopkins Symptom Check List 10 (Mood disorders) 1-4\nThe prevalence of constipation and diarrhoea among users and non-users of drugs significantly associated with the complaints (bivariate analyses, all participants)\nIndependent predictors for constipation and diarrhoea\nMultivariable logistic regression analyses with stepwise selection of variables. Use of drugs and not the individual drugs were included in the models. 
Age and gender were included in the models independent of significance.\nNumber of drugs as predictors for constipation and diarrhoea\nMultivariable logistic regression analyses - adjusted for significant predictors.\nSignificant predictors for constipation: Age, gender, BMI, frequency of use of alcohol, musculoskeletal complaints, angina and multiple sclerosis.\nSignificant predictors for diarrhoea: Age, gender, frequency of use of alcohol, years of education, mood disorders and osteoporosis.\nObserved prevalence (in per cent) of constipation and diarrhoea in users and non-users of drugs, calculated prevalence if treatment is stopped or started, and changes of prevalence (average risk difference) with 95% CI when stopping and starting treatment (multivariable analyses, 4586 and 4268 cases available for the analyses of constipation and diarrhoea respectively)\nTwo-sided p-values ≤ 0.05 were regarded as statistically significant. Due to hypotheses testing for many substances, p-values above 0.01 should be interpreted with caution. We report 95% confidence intervals (CI) where appropriate. The analyses were performed in SPSS 16. The SPSS code \"nne_ein.sps\" available at Ralf Bender's software page was used for calculating change in prevalence [26].", "This survey was approved by the Regional Committee for Medical Research Ethics and the Data Inspectorate, Oslo, Norway, and performed in accordance with the Declaration of Helsinki.", "[SUBTITLE] Participants [SUBSECTION] Out of 11078 subjects invited to the health survey, 6141 participated. In the participants and non-participants the proportions of women was 55% and 45% respectively, and mean age was 49.2 and 45.0 years respectively.\nThe gastrointestinal questionnaire was completed by 4622 of the participants, 640 (13.8%) had constipation and 407 (8.8%) had diarrhoea. Two patients had both constipation and diarrhoea (diarrhoea was probably induced by use of laxatives). The responders were more likely to be women than the non-responders were (56% and 51% respectively), and younger (mean age 48.9 and 50.1 years respectively). Figure 1 shows the subjects in the study and the classification of functional gastrointestinal disorders in more detail. Table 1 presents the characteristics of all participants and of participants with constipation and diarrhoea, and gives comparisons between those with and without constipation and with and without diarrhoea. Age, gender, body mass index, use of alcohol, osteoporosis, musculoskeletal complaints, mood disorders, use of drugs and number of drugs were highly significantly associated with constipation; and smoking, musculoskeletal complaints, mood disorders, use of drugs and number of drugs with diarrhoea.\nFlow chart of the participants in the study with indication of complaints. The study focuses on the groups with constipation (N = 640) and diarrhoea (N = 407).\nOut of 11078 subjects invited to the health survey, 6141 participated. In the participants and non-participants the proportions of women was 55% and 45% respectively, and mean age was 49.2 and 45.0 years respectively.\nThe gastrointestinal questionnaire was completed by 4622 of the participants, 640 (13.8%) had constipation and 407 (8.8%) had diarrhoea. Two patients had both constipation and diarrhoea (diarrhoea was probably induced by use of laxatives). The responders were more likely to be women than the non-responders were (56% and 51% respectively), and younger (mean age 48.9 and 50.1 years respectively). 
Figure 1 shows the subjects in the study and the classification of functional gastrointestinal disorders in more detail. Table 1 presents the characteristics of all participants and of participants with constipation and diarrhoea, and gives comparisons between those with and without constipation and with and without diarrhoea. Age, gender, body mass index, use of alcohol, osteoporosis, musculoskeletal complaints, mood disorders, use of drugs and number of drugs were highly significantly associated with constipation; and smoking, musculoskeletal complaints, mood disorders, use of drugs and number of drugs with diarrhoea.\nFlow chart of the participants in the study with indication of complaints. The study focuses on the groups with constipation (N = 640) and diarrhoea (N = 407).\n[SUBTITLE] Use of drugs and associations with constipation and diarrhoea [SUBSECTION] In all, 288 different generic drugs were used, 252 after exclusion of drugs for gastrointestinal disorders. After exclusion of drugs and groups of drugs used by <10 persons, 98 substances (ATC-level 5) and 20 groups at ATC-level 4 were included in the analyses.\nThe mean number of drugs per person was 1.34 (range 0-9), and 2891 (62.5%) used one or more drugs. Use of drugs was associated with constipation and diarrhoea, the unadjusted ORs were 1.69 and 1.46 respectively (p ≤ 0.001), and likewise, adjusted OR were 1.30 (p = 0.012) and 1.37 (p = 0.011). Table 3 gives independent predictors of constipation and diarrhoea.\nTable 4 shows the association between numbers of drugs and constipation and diarrhoea. Using more than one drug was not associated with constipation over that of one drug alone. However, polypharmacy increased the prevalence of diarrhoea significantly, compared with using one drug and 2-3 drugs the ORs were 1.46 (CI 1.01 to 2.10, p = 0.042), and 1.47 (CI 1.03 to 2.11, p = 0.034) respectively.\nIn all, 288 different generic drugs were used, 252 after exclusion of drugs for gastrointestinal disorders. After exclusion of drugs and groups of drugs used by <10 persons, 98 substances (ATC-level 5) and 20 groups at ATC-level 4 were included in the analyses.\nThe mean number of drugs per person was 1.34 (range 0-9), and 2891 (62.5%) used one or more drugs. Use of drugs was associated with constipation and diarrhoea, the unadjusted ORs were 1.69 and 1.46 respectively (p ≤ 0.001), and likewise, adjusted OR were 1.30 (p = 0.012) and 1.37 (p = 0.011). Table 3 gives independent predictors of constipation and diarrhoea.\nTable 4 shows the association between numbers of drugs and constipation and diarrhoea. Using more than one drug was not associated with constipation over that of one drug alone. However, polypharmacy increased the prevalence of diarrhoea significantly, compared with using one drug and 2-3 drugs the ORs were 1.46 (CI 1.01 to 2.10, p = 0.042), and 1.47 (CI 1.03 to 2.11, p = 0.034) respectively.\n[SUBTITLE] Drug substances associated with constipation and diarrhoea [SUBSECTION] Drugs and group of drugs associated with constipation and diarrhoea with p≤0.05 in bivariate analyses are listed in table 2. In addition, 15 and 12 drugs were associated with constipations and diarrhoea respectively with p≤0.20. 
Use of drugs and associations with constipation and diarrhoea

In all, 288 different generic drugs were used, 252 after exclusion of drugs for gastrointestinal disorders. After exclusion of drugs and groups of drugs used by fewer than 10 persons, 98 substances (ATC level 5) and 20 groups at ATC level 4 were included in the analyses.

The mean number of drugs per person was 1.34 (range 0-9), and 2891 subjects (62.5%) used one or more drugs. Use of drugs was associated with both constipation and diarrhoea: the unadjusted ORs were 1.69 and 1.46, respectively (p ≤ 0.001), and the adjusted ORs were 1.30 (p = 0.012) and 1.37 (p = 0.011). Table 3 gives the independent predictors of constipation and diarrhoea.

Table 4 shows the association between the number of drugs and constipation and diarrhoea. Using more than one drug was not associated with constipation beyond the effect of using one drug alone. However, polypharmacy significantly increased the prevalence of diarrhoea; compared with using one drug and with using 2-3 drugs, the ORs were 1.46 (CI 1.01 to 2.10, p = 0.042) and 1.47 (CI 1.03 to 2.11, p = 0.034), respectively.
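For a binary exposure such as use of one or more drugs, the unadjusted OR can be read directly off a 2x2 table of complaint by drug use, whereas the adjusted ORs come from the multivariable logistic models. The snippet below shows the standard crude OR calculation with a Wald-type 95% CI; the cell counts are placeholders and are not taken from the study tables.

```python
# Illustrative: crude (unadjusted) odds ratio with a Wald-type 95% CI from a
# 2x2 table. a/b: cases/non-cases among drug users; c/d: cases/non-cases among
# non-users. The counts below are placeholders, not values from the study.
import math

def crude_or(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

print(crude_or(a=100, b=900, c=60, d=940))   # placeholder counts
```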
Drug substances associated with constipation and diarrhoea

Drugs and groups of drugs associated with constipation and diarrhoea with p ≤ 0.05 in the bivariate analyses are listed in Table 2. In addition, 15 and 12 drugs were associated with constipation and diarrhoea, respectively, with p ≤ 0.20. Three drugs were independent predictors of constipation: furosemide (OR 2.16, CI 1.14 to 4.10, p = 0.019), levothyroxine sodium (OR 1.55, CI 1.04 to 2.31, p = 0.033) and ibuprofen (OR 1.54, CI 1.17 to 2.03, p = 0.002); two were independent predictors of diarrhoea: lithium (OR 6.09, CI 1.73 to 21.48, p = 0.005) and carbamazepine (OR 4.07, CI 1.52 to 10.89, p = 0.005) (logistic regression analyses corrected for the independent predictors given in Table 3). No group of drugs at ATC level 4 was an independent predictor of constipation or diarrhoea unless one of the substances in the group was itself associated with the disorder.

Table 5 gives the observed prevalence of constipation and diarrhoea for users and non-users of drugs and for the drugs significantly associated with each complaint, the calculated prevalence if users were to stop and non-users were to start treatment, and the change in prevalence when stopping and starting treatment. Starting drug treatment would increase the prevalence of constipation and diarrhoea by 2.5% and 2.3%, respectively. The excess drug-related prevalence varied from 5.3% for the association between ibuprofen and constipation to 27.5% for the association between lithium and diarrhoea.
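The drug-specific estimates above are adjusted odds ratios from logistic models that also contain the independent background predictors in Table 3. Complementing the risk-difference sketch given in the statistics section, the sketch below shows how such an adjusted OR and its 95% CI are read off a fitted model; the column names and the data frame (survey) are hypothetical, and the study's own models were fitted in SPSS.

```python
# Illustrative sketch: adjusted odds ratio (with 95% CI) for a single drug from
# a logistic regression that also contains the background predictors.
# Column names are hypothetical; the study's analyses were run in SPSS.
import numpy as np
import statsmodels.formula.api as smf

def adjusted_or(df, outcome, drug, covariates):
    formula = f"{outcome} ~ {drug} + " + " + ".join(covariates)
    fit = smf.logit(formula, data=df).fit(disp=False)
    lower, upper = np.exp(fit.conf_int().loc[drug])   # CI limits on the OR scale
    return float(np.exp(fit.params[drug])), (float(lower), float(upper))

# Example call (hypothetical data frame 'survey'):
# adjusted_or(survey, "diarrhoea", "lithium",
#             ["age", "sex", "alcohol_freq", "education_years",
#              "mood_score", "osteoporosis"])
```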
Discussion

Use of drugs was significantly associated with constipation and diarrhoea in this study of the general population, as it has been in studies performed in general practice, among the elderly and in other settings [3,7,14,16,18]. The prevalence of drug-associated constipation and diarrhoea was judged to be high (2.3%-3.0%), particularly because compliance declines markedly in subjects with ADRs [27]. Subjects with ADRs can be expected to reduce or stop treatment, or to switch to alternatives to avoid the complaints, and thereby to reduce the prevalence of ADRs. However, some subjects are probably unaware of the connection between their complaints and the drug. The results indicate an unfavourable effect of everyday use of drugs on constipation and diarrhoea in the general population. Changes in drug therapy might be the most important and most easily modifiable factor in subjects with these disorders.

Polypharmacy is a risk factor for ADRs in general, and especially among the elderly [3,10-15]. In this study, however, polypharmacy was associated only with diarrhoea. One possible explanation for the difference between constipation and diarrhoea is that more drugs are associated with diarrhoea than with constipation.

As expected, several drugs were associated with constipation and diarrhoea in the bivariate analyses (Table 2), but the number of specific drugs associated with constipation and diarrhoea in the multivariable analyses was lower than expected: three and two, respectively. Constipation has been mentioned as an ADR of both furosemide and ibuprofen in some reports and in high-quality information about marketed drugs [7,16,28]. The pathophysiology is uncertain. Dehydration might cause constipation in users of furosemide, and inhibition of prostaglandins might explain constipation in ibuprofen users, since prostaglandin analogues cause diarrhoea [29]. Diarrhoea has been related to excessive doses of levothyroxine sodium; constipation has not [28]. Since severe hypothyroidism is associated with constipation, the association between levothyroxine sodium and constipation might have been confounded by the disorder under treatment (hypothyroidism) or by insufficient treatment, or it might be a type I error. Carbamazepine and lithium were both associated with diarrhoea, which is a known ADR of these drugs [28]. The excess prevalence in users was surprisingly high (19% and 27%, respectively), particularly for lithium, because gastrointestinal ADRs are expected to level off or decline with continuous use [30]. The fact that it might be undesirable to change lithium therapy when its effect is satisfactory could explain the high ADR rate. In all, the excess prevalence of constipation and diarrhoea associated with these five drugs was high (5-27%) relative to the authorities' definition of an ADR as common if its prevalence is between 1% and 10% [31]. The Rome II definitions of constipation and diarrhoea include mild and intermittent symptoms, which might be missed in clinical trials. The clinical relevance of the findings is therefore uncertain, but they call attention to these drugs in patients bothered by constipation or diarrhoea. The overall prevalence of constipation and diarrhoea in this study (13.8% and 8.8%, respectively) was of the same order as reported in other studies using the same definitions (13.1-20.3% and 13.5%, respectively) [5,6].

Other drugs known to be associated with constipation (such as iron, codeine (opiates), calcium channel blockers, anticholinergic drugs, anticonvulsant drugs, anti-parkinson drugs and antipsychotics) or with diarrhoea (such as antibiotics, NSAIDs, psycholeptics, selective serotonin reuptake inhibitors and antihypertensives) did not show significant associations with constipation and diarrhoea in this study [3,7,16,18]. Some of them were significantly associated in the bivariate analyses (Table 2) but not in the multivariable analyses. This illustrates the importance of pragmatic studies in the general population, and that such studies differ from traditional clinical trials.
Reasons could include dose reduction or reduced compliance when ADRs occur, waning of symptoms during long-term treatment, switching to other drugs without the ADR, or a type II error.

Other predictors of constipation and diarrhoea were broadly in accordance with other reports. There was a predominance of women among subjects with constipation and of men among those with diarrhoea [4,5,7,16]. Constipation increased with age and appeared to be related to inactivity, coronary disease and neurological diseases [4,7,16,32]. The association between diarrhoea and mood disorders has been reported in other studies, whereas the associations with low education and osteoporosis remain unexplained [33].

Strengths and weaknesses

Unlike most reports on ADRs, this study describes associations between drugs and symptoms related to everyday use of drugs in the general population. The design avoids problems related to clinical trials and to surveys in selected, often elderly, populations. Except for the possibility that "trained complainers" were more prone to respond, there was no selection bias or loss to follow-up, compliance and the reporting of complaints were unaffected by the personnel's interest in the reports, and the treatment duration was probably longer than in most clinical trials. It is not known how the drugs were used, and incorrect use might increase ADRs. These results therefore reflect associations related to everyday use of drugs, used correctly or not, but the cross-sectional design gives no information about causality.

The ATC classification of drugs used in this study relates ADRs to specific chemical compounds or to groups of similar chemical compounds, which is a strength [23]. Most studies relate ADRs to broader and poorly defined groups of drugs, e.g. antipsychotics and antihistamines, which contain drugs with different ADRs and therefore give insufficient information about the ADRs of each compound.

In addition to the commonly reported OR, we present the estimated changes in the prevalence of the complaints when non-users start and users stop treatment. Unlike in randomized controlled trials, the distribution of covariates differs between subjects using and not using the drug. These risk differences are therefore estimated taking into account differences in the background variables among users and non-users of the drug, as well as the uncertainties in the estimates for the background variables [25]. This is a clinically more meaningful measure than the OR.

The low response rate, 42% of all invited subjects and 75% of the participants in the health survey, might reduce the external validity. However, analyses of a similar study conducted by the Norwegian Institute of Public Health in 2001, with a response rate of 46%, showed no impact of the low response rate on self-selection [34].
The size of the study allows only the demonstration of common ADRs, which are defined by the authorities as those with prevalence rates between 1% and 10% [31]. The power of the study was 80% to detect an adverse event with a prevalence of 1%, given that 5% of the participants used the drug (α = 0.05). Clinically relevant information about individual drugs might have been missed, since most drugs were used by less than 1% of the population.
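The 80% power figure rests on assumptions (background prevalence, proportion of users, effect size) that are not fully spelled out in the text. The sketch below shows a generic normal-approximation power calculation for comparing complaint prevalence between users and non-users of a drug; the input values are placeholders chosen only to illustrate the form of the calculation, not to reproduce the authors' figure.

```python
# Generic two-proportion power calculation (normal approximation).
# The prevalences and group sizes below are placeholders, not the assumptions
# behind the study's reported 80% power.
from scipy.stats import norm

def power_two_proportions(p_users, p_nonusers, n_users, n_nonusers, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    p_pool = (n_users * p_users + n_nonusers * p_nonusers) / (n_users + n_nonusers)
    se_null = (p_pool * (1 - p_pool) * (1 / n_users + 1 / n_nonusers)) ** 0.5
    se_alt = (p_users * (1 - p_users) / n_users
              + p_nonusers * (1 - p_nonusers) / n_nonusers) ** 0.5
    return norm.cdf((abs(p_users - p_nonusers) - z * se_null) / se_alt)

# About 5% of the 4622 responders using a given drug (placeholder prevalences):
print(power_two_proportions(p_users=0.20, p_nonusers=0.12, n_users=231, n_nonusers=4391))
```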
The temporal relationship between drugs and symptoms in this study is uncertain. The questionnaire asked about symptoms during the last 3 months and about use of drugs during the last 4 weeks. However, the temporal relationship is probably a minor problem, since symptoms according to the Rome criteria are long-lasting and regularly used drugs most often represent long-term treatment.

Because the participants were asked only about drugs used regularly during the last four weeks, information about compliance, over-the-counter drugs, drugs taken to relieve symptoms, drugs taken on demand, general lifestyle and food habits is insufficient. Analgesics (e.g. those containing codeine or acetylsalicylic acid) and other drugs that influence gastrointestinal function might therefore have been left out, and irregular, and therefore unregistered, intake of laxatives and anti-diarrhoeal drugs might have reduced the prevalence of the complaints. The associations between drugs and the complaints therefore do not prove, but only indicate, causality.
Conclusions

Everyday use of drugs was associated with an increased prevalence of constipation and diarrhoea, and polypharmacy with an additional risk of diarrhoea, in the general population. Furosemide, levothyroxine sodium and ibuprofen were significantly associated with constipation, and carbamazepine and lithium with diarrhoea. The associations do not prove, but indicate, that constipation and diarrhoea are common ADRs. In patients with constipation or diarrhoea, drug-induced symptoms need to be considered and changes in drug regimens made along with other interventions.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

GSF worked up the data file, performed the statistical analyses under the supervision of the medical statistician, interpreted the results and wrote the manuscript. SL is responsible for the medical statistics and participated in the interpretation of the results and the preparation of the manuscript. PGF prepared the questionnaire and parts of the survey in collaboration with the Norwegian Institute of Public Health, is responsible for the conception and design, participated in the statistical analyses and the preparation of the manuscript, and is the main supervisor of the project. All authors have read and approved the final version of the manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1472-6904/11/2/prepub
[ "Constipation and diarrhoea are worldwide complaints with some variations between geographical regions, populations and definitions [1-7]. The use of drugs is common, increases with age and has increased over time with a doubling of prescriptions to elderly from 1996 to 2006 [8,9]. Unfortunately, drugs are associated with adverse drug reactions (ADRs) which increase with polypharmacy [3,10-16]. Gastrointestinal complaints like constipation and diarrhoea are common ADRs [16-18]. The reported prevalence rates of gastrointestinal ADRs, especially constipation and diarrhoea, are based mainly on clinical drug trials and observational studies in selected, often elderly, populations, and use heterogeneous definitions of constipation and diarrhoea [3,11,14,17,18]. The prevalence of constipation and diarrhoea related to everyday use of drugs in an unselected general population is in large unknown.\nIncreased knowledge about ADRs allows individual adjustment of drug treatment in patients with gastrointestinal symptoms. This population based cross-sectional study aimed at finding associations between drugs and constipation and diarrhoea.", "[SUBTITLE] Participants [SUBSECTION] In 2001, all persons in Oppland County, Norway, born in 1970, 1960, 1955, 1940 and 1925, were invited to a health study conducted by the Norwegian Institute of Public Health who also collected the data [19].\nIn 2001, all persons in Oppland County, Norway, born in 1970, 1960, 1955, 1940 and 1925, were invited to a health study conducted by the Norwegian Institute of Public Health who also collected the data [19].\n[SUBTITLE] Design [SUBSECTION] The study was population-based with a cross-sectional design. No remuneration was given. At attendance, standardized questionnaires were completed and a physical examination was performed. All subjects were asked to take home, complete and return by post a supplementary questionnaire about the complaints. Non-responders received two reminders.\nThe study was population-based with a cross-sectional design. No remuneration was given. At attendance, standardized questionnaires were completed and a physical examination was performed. All subjects were asked to take home, complete and return by post a supplementary questionnaire about the complaints. Non-responders received two reminders.\n[SUBTITLE] Variables [SUBSECTION] [SUBTITLE] Questionnaires [SUBSECTION] All participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8 point ordinal scale from 1 = \"never\" to 8 = \"4-7 times/week\") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations) and the intensity for each of them was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying number of locations and intensity giving a score with range 0-12. Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10) with range 1.0-4.0 and 1.85 as upper normal limit [20]. The subjects gave a written report on all drugs used regularly the last four weeks. 
The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. This survey has previously been used for the study of the prevalence, co-morbidity and impact of irritable bowel syndrome [22].\nAll participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8 point ordinal scale from 1 = \"never\" to 8 = \"4-7 times/week\") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations) and the intensity for each of them was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying number of locations and intensity giving a score with range 0-12. Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10) with range 1.0-4.0 and 1.85 as upper normal limit [20]. The subjects gave a written report on all drugs used regularly the last four weeks. The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. This survey has previously been used for the study of the prevalence, co-morbidity and impact of irritable bowel syndrome [22].\n[SUBTITLE] Definition of constipation and diarrhoea [SUBSECTION] The gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.\nThe gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.\n[SUBTITLE] Classification of drugs [SUBSECTION] Drug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as group of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (Yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs.\nAnalyses were performed for individual drugs (ATC-level 5) and groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. 
However, no analyses were performed for drugs used for gastrointestinal disorders (ATC-classes A02, A03, A06 and A07) since drug related gastrointestinal symptoms could not be distinguished from the disorder under treatment. Nor were drugs or groups of drugs used by less than 10 persons analysed.\nDrug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as group of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (Yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs.\nAnalyses were performed for individual drugs (ATC-level 5) and groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. However, no analyses were performed for drugs used for gastrointestinal disorders (ATC-classes A02, A03, A06 and A07) since drug related gastrointestinal symptoms could not be distinguished from the disorder under treatment. Nor were drugs or groups of drugs used by less than 10 persons analysed.\n[SUBTITLE] Questionnaires [SUBSECTION] All participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8 point ordinal scale from 1 = \"never\" to 8 = \"4-7 times/week\") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations) and the intensity for each of them was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying number of locations and intensity giving a score with range 0-12. Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10) with range 1.0-4.0 and 1.85 as upper normal limit [20]. The subjects gave a written report on all drugs used regularly the last four weeks. The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. This survey has previously been used for the study of the prevalence, co-morbidity and impact of irritable bowel syndrome [22].\nAll participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8 point ordinal scale from 1 = \"never\" to 8 = \"4-7 times/week\") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations) and the intensity for each of them was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying number of locations and intensity giving a score with range 0-12. 
Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10) with range 1.0-4.0 and 1.85 as upper normal limit [20]. The subjects gave a written report on all drugs used regularly the last four weeks. The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. This survey has previously been used for the study of the prevalence, co-morbidity and impact of irritable bowel syndrome [22].\n[SUBTITLE] Definition of constipation and diarrhoea [SUBSECTION] The gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.\nThe gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.\n[SUBTITLE] Classification of drugs [SUBSECTION] Drug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as group of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (Yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs.\nAnalyses were performed for individual drugs (ATC-level 5) and groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. However, no analyses were performed for drugs used for gastrointestinal disorders (ATC-classes A02, A03, A06 and A07) since drug related gastrointestinal symptoms could not be distinguished from the disorder under treatment. Nor were drugs or groups of drugs used by less than 10 persons analysed.\nDrug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as group of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (Yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs.\nAnalyses were performed for individual drugs (ATC-level 5) and groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. However, no analyses were performed for drugs used for gastrointestinal disorders (ATC-classes A02, A03, A06 and A07) since drug related gastrointestinal symptoms could not be distinguished from the disorder under treatment. 
Nor were drugs or groups of drugs used by less than 10 persons analysed.\n[SUBTITLE] Statistics [SUBSECTION] Bivariate analyses of the association between the abdominal complaint and one variable at a time were analysed with Student's t-test, Wilcoxon-Mann-Whitney's test, or Fisher's exact test (Tables 1 and 2). In multivariable logistic regression models with abdominal complaint as outcome, the effects of drugs were adjusted for covariates selected as follows: Candidate covariates were all background variables listed in Table 1. Forward likelihood ratio (LR) selection with p-enter = 0.15 and p-remove = 0.20 was followed by backwards LR with p-enter = 0.05 and p-remove = 0.06, a procedure adapted from Hosmer and Lemeshow [24]. Age and gender were included in all models. Separate analyses were performed for drug use no/yes (Table 3), number of drugs (Table 4), and drug substances (Table 2). Candidate drug substances (for Table 5) were those with unadjusted p-values ≤ 0.2 and groups of drugs with p-value ≤0.2 if none of the substances in the group were candidates. Based on the multivariable model, we estimated the average change in prevalence (also called average risk difference) of the abdominal complaint for non-users and users of drugs and the separate drug substances, if they would start or stop using the drugs [25]. Unlike in randomized controlled trials, the characteristics of users and non-users of the drugs differ. Hence, the effect of the drug in terms of risk difference is not the same among users and non-users of the drug. The two risk differences are estimated using the method of Bender et al [26].\nCharacteristics of all participants (independent of complaints), participants with constipations and participants with diarrhoea, and comparisons between subjects with and without diarrhoea and with and without constipation.\nThe results are given as number of subjects, proportion (in percentage) or mean (with SD in brackets) and the proportion of missing data to each question in square brackets.\n† = p < 0.20; * = p < 0.05; ** = p < 0.01; *** = p < 0.001.\n¤ Frequency of use of alcohol: 8 point ordinal scale: 1 = never used to 8 = 4-7 times a week\n§ Musculoskeletal complaints: Sum score (six location and intensity) 0-12\n# HSCL 10: Hopkins Symptom Check List 10 (Mood disorders) 1-4\nThe prevalence of constipation and diarrhoea among users and non-users of drugs significantly associated with the complaints (bivariate analyses, all participants)\nIndependent predictors for constipation and diarrhoea\nMultivariable logistic regression analyses with stepwise selection of variables. Use of drugs and not the individual drugs were included in the models. 
Age and gender were included in the models independent of significance.\nNumber of drugs as predictors for constipation and diarrhoea\nMultivariable logistic regression analyses - adjusted for significant predictors.\nSignificant predictors for constipation: Age, gender, BMI, frequency of use of alcohol, musculoskeletal complaints, angina and multiple sclerosis.\nSignificant predictors for diarrhoea: Age, gender, frequency of use of alcohol, years of education, mood disorders and osteoporosis.\nObserved prevalence (in per cent) of constipation and diarrhoea in users and non-users of drugs, calculated prevalence if treatment is stopped or started, and changes of prevalence (average risk difference) with 95% CI when stopping and starting treatment (multivariable analyses, 4586 and 4268 cases available for the analyses of constipation and diarrhoea respectively)\nTwo-sided p-values ≤ 0.05 were regarded as statistically significant. Due to hypotheses testing for many substances, p-values above 0.01 should be interpreted with caution. We report 95% confidence intervals (CI) where appropriate. The analyses were performed in SPSS 16. The SPSS code \"nne_ein.sps\" available at Ralf Bender's software page was used for calculating change in prevalence [26].\nBivariate analyses of the association between the abdominal complaint and one variable at a time were analysed with Student's t-test, Wilcoxon-Mann-Whitney's test, or Fisher's exact test (Tables 1 and 2). In multivariable logistic regression models with abdominal complaint as outcome, the effects of drugs were adjusted for covariates selected as follows: Candidate covariates were all background variables listed in Table 1. Forward likelihood ratio (LR) selection with p-enter = 0.15 and p-remove = 0.20 was followed by backwards LR with p-enter = 0.05 and p-remove = 0.06, a procedure adapted from Hosmer and Lemeshow [24]. Age and gender were included in all models. Separate analyses were performed for drug use no/yes (Table 3), number of drugs (Table 4), and drug substances (Table 2). Candidate drug substances (for Table 5) were those with unadjusted p-values ≤ 0.2 and groups of drugs with p-value ≤0.2 if none of the substances in the group were candidates. Based on the multivariable model, we estimated the average change in prevalence (also called average risk difference) of the abdominal complaint for non-users and users of drugs and the separate drug substances, if they would start or stop using the drugs [25]. Unlike in randomized controlled trials, the characteristics of users and non-users of the drugs differ. Hence, the effect of the drug in terms of risk difference is not the same among users and non-users of the drug. 
The two risk differences are estimated using the method of Bender et al [26].\nCharacteristics of all participants (independent of complaints), participants with constipations and participants with diarrhoea, and comparisons between subjects with and without diarrhoea and with and without constipation.\nThe results are given as number of subjects, proportion (in percentage) or mean (with SD in brackets) and the proportion of missing data to each question in square brackets.\n† = p < 0.20; * = p < 0.05; ** = p < 0.01; *** = p < 0.001.\n¤ Frequency of use of alcohol: 8 point ordinal scale: 1 = never used to 8 = 4-7 times a week\n§ Musculoskeletal complaints: Sum score (six location and intensity) 0-12\n# HSCL 10: Hopkins Symptom Check List 10 (Mood disorders) 1-4\nThe prevalence of constipation and diarrhoea among users and non-users of drugs significantly associated with the complaints (bivariate analyses, all participants)\nIndependent predictors for constipation and diarrhoea\nMultivariable logistic regression analyses with stepwise selection of variables. Use of drugs and not the individual drugs were included in the models. Age and gender were included in the models independent of significance.\nNumber of drugs as predictors for constipation and diarrhoea\nMultivariable logistic regression analyses - adjusted for significant predictors.\nSignificant predictors for constipation: Age, gender, BMI, frequency of use of alcohol, musculoskeletal complaints, angina and multiple sclerosis.\nSignificant predictors for diarrhoea: Age, gender, frequency of use of alcohol, years of education, mood disorders and osteoporosis.\nObserved prevalence (in per cent) of constipation and diarrhoea in users and non-users of drugs, calculated prevalence if treatment is stopped or started, and changes of prevalence (average risk difference) with 95% CI when stopping and starting treatment (multivariable analyses, 4586 and 4268 cases available for the analyses of constipation and diarrhoea respectively)\nTwo-sided p-values ≤ 0.05 were regarded as statistically significant. Due to hypotheses testing for many substances, p-values above 0.01 should be interpreted with caution. We report 95% confidence intervals (CI) where appropriate. The analyses were performed in SPSS 16. The SPSS code \"nne_ein.sps\" available at Ralf Bender's software page was used for calculating change in prevalence [26].\n[SUBTITLE] Ethics [SUBSECTION] This survey was approved by the Regional Committee for Medical Research Ethics and the Data Inspectorate, Oslo, Norway, and performed in accordance with the Declaration of Helsinki.\nThis survey was approved by the Regional Committee for Medical Research Ethics and the Data Inspectorate, Oslo, Norway, and performed in accordance with the Declaration of Helsinki.", "In 2001, all persons in Oppland County, Norway, born in 1970, 1960, 1955, 1940 and 1925, were invited to a health study conducted by the Norwegian Institute of Public Health who also collected the data [19].", "The study was population-based with a cross-sectional design. No remuneration was given. At attendance, standardized questionnaires were completed and a physical examination was performed. All subjects were asked to take home, complete and return by post a supplementary questionnaire about the complaints. 
Non-responders received two reminders.", "[SUBTITLE] Questionnaires [SUBSECTION] All participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8 point ordinal scale from 1 = \"never\" to 8 = \"4-7 times/week\") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations) and the intensity for each of them was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying number of locations and intensity giving a score with range 0-12. Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10) with range 1.0-4.0 and 1.85 as upper normal limit [20]. The subjects gave a written report on all drugs used regularly the last four weeks. The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. This survey has previously been used for the study of the prevalence, co-morbidity and impact of irritable bowel syndrome [22].\nAll participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8 point ordinal scale from 1 = \"never\" to 8 = \"4-7 times/week\") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations) and the intensity for each of them was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying number of locations and intensity giving a score with range 0-12. Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10) with range 1.0-4.0 and 1.85 as upper normal limit [20]. The subjects gave a written report on all drugs used regularly the last four weeks. The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. This survey has previously been used for the study of the prevalence, co-morbidity and impact of irritable bowel syndrome [22].\n[SUBTITLE] Definition of constipation and diarrhoea [SUBSECTION] The gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. 
In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.\nThe gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.\n[SUBTITLE] Classification of drugs [SUBSECTION] Drug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as group of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (Yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs.\nAnalyses were performed for individual drugs (ATC-level 5) and groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. However, no analyses were performed for drugs used for gastrointestinal disorders (ATC-classes A02, A03, A06 and A07) since drug related gastrointestinal symptoms could not be distinguished from the disorder under treatment. Nor were drugs or groups of drugs used by less than 10 persons analysed.\nDrug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as group of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (Yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs.\nAnalyses were performed for individual drugs (ATC-level 5) and groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. However, no analyses were performed for drugs used for gastrointestinal disorders (ATC-classes A02, A03, A06 and A07) since drug related gastrointestinal symptoms could not be distinguished from the disorder under treatment. Nor were drugs or groups of drugs used by less than 10 persons analysed.", "All participants filled in detailed questionnaires with information about age, gender, height, weight, cohabiting, years of education, physical activity (rated less than 1 hour/week, 1-2 hours/week and more than 3 hours/week), number of cups of coffee per day, use of alcohol (8 point ordinal scale from 1 = \"never\" to 8 = \"4-7 times/week\") and smoking habits (never, past, current). All present and previous diseases were noted. Musculoskeletal complaints the last four weeks were assessed for six locations (neck/shoulder, arms/hands, upper back, lower back, hip/legs/feet and other locations) and the intensity for each of them was rated as none, mild or severe. A musculoskeletal complaints score was calculated by multiplying number of locations and intensity giving a score with range 0-12. Mood disorders (mainly anxiety and depression) were measured with Hopkins' Symptom Check List-10 (HSCL-10) with range 1.0-4.0 and 1.85 as upper normal limit [20]. The subjects gave a written report on all drugs used regularly the last four weeks. 
The questionnaires have been translated into Norwegian, validated and extensively used in national surveys by the Norwegian Institute of Public Health [19]. Gastrointestinal complaints were assessed with a questionnaire based on the Rome II criteria for functional gastrointestinal disorders [21]. This survey has previously been used for the study of the prevalence, co-morbidity and impact of irritable bowel syndrome [22].", "The gastrointestinal symptom questionnaire allowed classification of the disorders into functional constipation, functional diarrhoea and Irritable Bowel Syndrome (IBS) with the subgroups diarrhoea predominant, constipation predominant and alternating, in accordance with the Rome II criteria [21]. In this study, constipation includes functional constipation and constipation predominant IBS, and diarrhoea includes functional diarrhoea and diarrhoea predominant IBS.", "Drug substances were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC) at level 5 and as group of drugs at ATC level 4 [23]. Use of drugs was measured as yes/no (Yes = use of one or more drugs), as number of drugs and on a categorical scale (0 = no drugs, 1 = use of one drug, 2 = use of 2-3 drugs, and 3 = use of >3 drugs). Polypharmacy was defined as using more than three drugs.\nAnalyses were performed for individual drugs (ATC-level 5) and groups of drugs at ATC level 4, which are rather similar and probably have concurrent ADRs. However, no analyses were performed for drugs used for gastrointestinal disorders (ATC-classes A02, A03, A06 and A07) since drug related gastrointestinal symptoms could not be distinguished from the disorder under treatment. Nor were drugs or groups of drugs used by less than 10 persons analysed.", "Bivariate analyses of the association between the abdominal complaint and one variable at a time were analysed with Student's t-test, Wilcoxon-Mann-Whitney's test, or Fisher's exact test (Tables 1 and 2). In multivariable logistic regression models with abdominal complaint as outcome, the effects of drugs were adjusted for covariates selected as follows: Candidate covariates were all background variables listed in Table 1. Forward likelihood ratio (LR) selection with p-enter = 0.15 and p-remove = 0.20 was followed by backwards LR with p-enter = 0.05 and p-remove = 0.06, a procedure adapted from Hosmer and Lemeshow [24]. Age and gender were included in all models. Separate analyses were performed for drug use no/yes (Table 3), number of drugs (Table 4), and drug substances (Table 2). Candidate drug substances (for Table 5) were those with unadjusted p-values ≤ 0.2 and groups of drugs with p-value ≤0.2 if none of the substances in the group were candidates. Based on the multivariable model, we estimated the average change in prevalence (also called average risk difference) of the abdominal complaint for non-users and users of drugs and the separate drug substances, if they would start or stop using the drugs [25]. Unlike in randomized controlled trials, the characteristics of users and non-users of the drugs differ. Hence, the effect of the drug in terms of risk difference is not the same among users and non-users of the drug. 
The two risk differences are estimated using the method of Bender et al [26].\nCharacteristics of all participants (independent of complaints), participants with constipations and participants with diarrhoea, and comparisons between subjects with and without diarrhoea and with and without constipation.\nThe results are given as number of subjects, proportion (in percentage) or mean (with SD in brackets) and the proportion of missing data to each question in square brackets.\n† = p < 0.20; * = p < 0.05; ** = p < 0.01; *** = p < 0.001.\n¤ Frequency of use of alcohol: 8 point ordinal scale: 1 = never used to 8 = 4-7 times a week\n§ Musculoskeletal complaints: Sum score (six location and intensity) 0-12\n# HSCL 10: Hopkins Symptom Check List 10 (Mood disorders) 1-4\nThe prevalence of constipation and diarrhoea among users and non-users of drugs significantly associated with the complaints (bivariate analyses, all participants)\nIndependent predictors for constipation and diarrhoea\nMultivariable logistic regression analyses with stepwise selection of variables. Use of drugs and not the individual drugs were included in the models. Age and gender were included in the models independent of significance.\nNumber of drugs as predictors for constipation and diarrhoea\nMultivariable logistic regression analyses - adjusted for significant predictors.\nSignificant predictors for constipation: Age, gender, BMI, frequency of use of alcohol, musculoskeletal complaints, angina and multiple sclerosis.\nSignificant predictors for diarrhoea: Age, gender, frequency of use of alcohol, years of education, mood disorders and osteoporosis.\nObserved prevalence (in per cent) of constipation and diarrhoea in users and non-users of drugs, calculated prevalence if treatment is stopped or started, and changes of prevalence (average risk difference) with 95% CI when stopping and starting treatment (multivariable analyses, 4586 and 4268 cases available for the analyses of constipation and diarrhoea respectively)\nTwo-sided p-values ≤ 0.05 were regarded as statistically significant. Due to hypotheses testing for many substances, p-values above 0.01 should be interpreted with caution. We report 95% confidence intervals (CI) where appropriate. The analyses were performed in SPSS 16. The SPSS code \"nne_ein.sps\" available at Ralf Bender's software page was used for calculating change in prevalence [26].", "This survey was approved by the Regional Committee for Medical Research Ethics and the Data Inspectorate, Oslo, Norway, and performed in accordance with the Declaration of Helsinki.", "[SUBTITLE] Participants [SUBSECTION] Out of 11078 subjects invited to the health survey, 6141 participated. In the participants and non-participants the proportions of women was 55% and 45% respectively, and mean age was 49.2 and 45.0 years respectively.\nThe gastrointestinal questionnaire was completed by 4622 of the participants, 640 (13.8%) had constipation and 407 (8.8%) had diarrhoea. Two patients had both constipation and diarrhoea (diarrhoea was probably induced by use of laxatives). The responders were more likely to be women than the non-responders were (56% and 51% respectively), and younger (mean age 48.9 and 50.1 years respectively). Figure 1 shows the subjects in the study and the classification of functional gastrointestinal disorders in more detail. 
Table 1 presents the characteristics of all participants and of participants with constipation and diarrhoea, and gives comparisons between those with and without constipation and with and without diarrhoea. Age, gender, body mass index, use of alcohol, osteoporosis, musculoskeletal complaints, mood disorders, use of drugs and number of drugs were highly significantly associated with constipation, and smoking, musculoskeletal complaints, mood disorders, use of drugs and number of drugs with diarrhoea.

Figure 1 caption: Flow chart of the participants in the study with indication of complaints. The study focuses on the groups with constipation (N = 640) and diarrhoea (N = 407).

[SUBTITLE] Use of drugs and associations with constipation and diarrhoea [SUBSECTION] In all, 288 different generic drugs were used, 252 after exclusion of drugs for gastrointestinal disorders. After exclusion of drugs and groups of drugs used by <10 persons, 98 substances (ATC level 5) and 20 groups at ATC level 4 were included in the analyses.

The mean number of drugs per person was 1.34 (range 0-9), and 2891 participants (62.5%) used one or more drugs. Use of drugs was associated with constipation and diarrhoea: the unadjusted ORs were 1.69 and 1.46, respectively (p ≤ 0.001), and the adjusted ORs were 1.30 (p = 0.012) and 1.37 (p = 0.011). Table 3 gives the independent predictors of constipation and diarrhoea.

Table 4 shows the association between the number of drugs and constipation and diarrhoea. Using more than one drug was not associated with constipation beyond the association seen for one drug alone. However, polypharmacy significantly increased the prevalence of diarrhoea; compared with using one drug and with using 2-3 drugs, the ORs were 1.46 (CI 1.01 to 2.10, p = 0.042) and 1.47 (CI 1.03 to 2.11, p = 0.034), respectively.
[SUBTITLE] Drug substances associated with constipation and diarrhoea [SUBSECTION] Drugs and groups of drugs associated with constipation and diarrhoea with p ≤ 0.05 in the bivariate analyses are listed in Table 2. In addition, 15 and 12 drugs were associated with constipation and diarrhoea, respectively, with p ≤ 0.20. Three drugs were independent predictors of constipation: furosemide (OR 2.16, CI 1.14 to 4.10, p = 0.019), levothyroxine sodium (OR 1.55, CI 1.04 to 2.31, p = 0.033) and ibuprofen (OR 1.54, CI 1.17 to 2.03, p = 0.002); two drugs were independent predictors of diarrhoea: lithium (OR 6.09, CI 1.73 to 21.48, p = 0.005) and carbamazepine (OR 4.07, CI 1.52 to 10.89, p = 0.005) (logistic regression analyses with correction for the independent predictors given in Table 3). No group of drugs at ATC level 4 was an independent predictor of constipation or diarrhoea without one substance in the group being associated with the disorders.

Table 5 gives the observed prevalence of constipation and diarrhoea for users and non-users of drugs and for the drugs significantly associated with the complaints, the calculated prevalence if users stop and non-users start treatment, and the change in prevalence when stopping and starting treatment. Starting drug treatment would increase the prevalence of constipation and diarrhoea by 2.5% and 2.3%, respectively. The excess drug-related prevalence varied from 5.3% for the association between ibuprofen and constipation to 27.5% for the association between lithium and diarrhoea.
Use of drugs was significantly associated with constipation and diarrhoea in this study in the general population, as it has been in studies performed in general practice, among the elderly and in other settings [3,7,14,16,18]. The prevalence of drug-associated constipation and diarrhoea was judged to be high (2.3%-3.0%), particularly because compliance declines markedly in subjects with ADRs [27]. Subjects with ADRs can be expected to reduce or stop treatment, or to switch to alternatives to avoid the complaints, and thereby reduce the prevalence of ADRs. However, it is likely that some subjects are unaware of the connection between the complaints and the drug. The results indicate an unfavourable effect of everyday use of drugs on constipation and diarrhoea in the general population. Changes in drug therapy might be the most readily modifiable factor in subjects with these disorders.

Polypharmacy is a risk factor for ADRs in general, and especially among the elderly [3,10-15]. This study showed, however, only an association between polypharmacy and diarrhoea. One possible explanation of the difference between constipation and diarrhoea is that more drugs are associated with diarrhoea than with constipation.

As expected, several drugs were associated with constipation and diarrhoea in the bivariate analyses (Table 2), but the number of specific drugs associated with constipation and diarrhoea in the multivariable analyses was lower than expected: three and two, respectively. Constipation has been mentioned as an ADR of both furosemide and ibuprofen in some reports and in high-quality information on marketed drugs [7,16,28]. The pathophysiology is uncertain. Dehydration might cause constipation in users of furosemide, and inhibition of prostaglandins might explain constipation in ibuprofen users, since prostaglandin analogues cause diarrhoea [29]. Diarrhoea has been related to excessive doses of levothyroxine sodium; constipation has not [28]. Since severe hypothyroidism is associated with constipation, the association between levothyroxine sodium and constipation might have been confounded by the disorder under treatment (hypothyroidism) or by insufficient treatment, or it might be a type I error. Carbamazepine and lithium were both associated with diarrhoea, which is a known ADR of these drugs [28]. The excess prevalence in users was surprisingly high (19% and 27%, respectively), particularly for lithium, because gastrointestinal ADRs are expected to level off and/or decline with continuous use [30]. The fact that it might be undesirable to change lithium therapy if the effect is satisfactory could explain the high ADR rate.
In all, the excess prevalence of constipation and diarrhoea associated with these five drugs was high (5-27%) relative to the authorities' definition of a common ADR as one with a prevalence between 1% and 10% [31]. The Rome II definition of constipation and diarrhoea includes mild and intermittent symptoms, which might be missed in clinical trials. The clinical relevance of the findings is therefore uncertain, but they call attention to these drugs in patients bothered by constipation and diarrhoea. The overall prevalence of constipation and diarrhoea in this study (13.8% and 8.8%, respectively) was of the same order as reported in other studies using the same definitions (13.1-20.3% and 13.5%, respectively) [5,6].

Other drugs known to be associated with constipation (such as iron, codeine (opiates), calcium channel blockers, anticholinergic drugs, anticonvulsant drugs, anti-parkinson drugs, and antipsychotics) and diarrhoea (such as antibiotics, NSAIDs, psycholeptics, selective serotonin reuptake inhibitors and antihypertensives) did not show significant associations with constipation and diarrhoea in this study [3,7,16,18]. Some of them were significantly associated in the bivariate analyses (Table 2) but not in the multivariable analyses. This shows the importance of pragmatic studies in the general population, and that such studies differ from traditional clinical trials. Reasons could be dose reduction or reduced compliance when ADRs occur, waning symptoms during long-term treatment, switching to other drugs without the ADR, or a type II error.

Other predictors of constipation and diarrhoea were overall in accordance with other reports. There was a predominance of females with constipation and of males with diarrhoea [4,5,7,16]. Constipation increased with age and seemed to be related to inactivity, coronary disease and neurological diseases [4,7,16,32]. The association between diarrhoea and mood disorders has been reported in other studies, whereas the associations with low education and osteoporosis remain unexplained [33].

[SUBTITLE] Strengths and weaknesses [SUBSECTION] Unlike most reports on ADRs, this study describes associations between drugs and symptoms related to everyday use of drugs in the general population. The design avoids problems related to clinical trials and surveys in selected, often elderly, populations. Except for the possibility that "trained complainers" were more prone to respond, there was no selection bias or loss to follow-up, compliance and reporting of complaints were unaffected by the personnel's interest in the reports, and the treatment duration was probably longer than in most clinical trials. It is not known how the drugs were used, and incorrect use might increase ADRs. However, these results reflect associations related to everyday use of drugs, used correctly or not, although the cross-sectional design gives no information about causality.

The ATC classification of drugs used in this study relates ADRs to specific chemical compounds or to groups of similar chemical compounds, which is a strength [23]. Most studies relate ADRs to broader and poorly defined groups of drugs, e.g. antipsychotics and antihistamines, which contain drugs with different ADRs and therefore give insufficient information about the ADRs related to each compound.

In addition to the commonly reported ORs, we present the estimated changes in prevalence of the complaints when non-users start and users stop treatment.
Unlike in randomized controlled trials, the distribution of covariates in subjects using and not using the drug differs. These risk differences are estimated taking into account differences in background variables among users and non-users of the drug, as well as uncertainties in the estimates for the background variables [25]. This is a clinically more meaningful measure than the OR.

The low response rate (42% of all invited subjects and 75% of the participants in the health survey) might reduce the external validity. However, analyses of a similar study conducted by the Norwegian Institute of Public Health in 2001, with a response rate of 46%, showed no impact of the low response rate on self-selection [34].

The size of the study allows only the demonstration of common ADRs. Common ADRs are defined by the authorities as those with prevalence rates between 1% and 10% [31]. The power of the study was 80% to detect an adverse event with a prevalence of 1%, given that 5% of the participants use the drug (α = 0.05); one possible reconstruction of this calculation is sketched at the end of this section. Clinically relevant information about individual drugs might have been missed, since most drugs were used by less than 1% of the population.

The temporal relationship between drugs and symptoms in this study is uncertain. The questionnaire asked about symptoms during the last 3 months and use of drugs during the last 4 weeks. However, the temporal relationship is probably a minor problem, since symptoms according to the Rome criteria are long-lasting and regularly used drugs most often represent long-term treatment.

Because the participants were asked about drugs used regularly during the last four weeks, information about compliance, over-the-counter drugs, drugs taken to relieve symptoms, drugs taken on demand, general lifestyle and food habits is insufficient. Analgesics (e.g. with codeine and acetylsalicylic acid) and other drugs that influence gastrointestinal function might therefore have been left out, and irregular, and therefore unregistered, intake of laxatives and antidiarrhoeal drugs might have reduced the prevalence of the complaints. Therefore, the associations between drugs and the complaints do not prove, but only indicate, causality.
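As a rough, non-authoritative illustration of the kind of power calculation referred to above: the exact inputs the authors used are not stated, so the prevalences, the sample split, and the use of the arcsine effect size below are all assumptions, and the printed value depends entirely on them.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Assumed, illustrative inputs: ~4600 respondents, 5% exposed to a given drug,
# alpha = 0.05, and an adverse event occurring in 1% of users versus (for
# illustration) not at all among non-users. With these choices the computed
# power comes out at roughly 0.8.
n_total = 4600
n_exposed = int(0.05 * n_total)
n_unexposed = n_total - n_exposed

effect = proportion_effectsize(0.01, 0.0)          # Cohen's h for the two proportions
power = NormalIndPower().solve_power(
    effect_size=effect,
    nobs1=n_exposed,
    alpha=0.05,
    ratio=n_unexposed / n_exposed,
)
print(f"approximate power: {power:.2f}")
```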
Everyday use of drugs was associated with an increased prevalence of constipation and diarrhoea, and polypharmacy with an additional risk of diarrhoea, in the general population. Furosemide, levothyroxine sodium and ibuprofen were significantly associated with constipation, and carbamazepine and lithium with diarrhoea. The associations do not prove, but do indicate, that constipation and diarrhoea are common ADRs. In patients with constipation or diarrhoea, drug-induced symptoms need to be considered, and changes in drug regimens should be made alongside other interventions.

The authors declare that they have no competing interests.

GSF has worked up the data file, performed the statistical analyses under the supervision of the medical statistician, interpreted the results, and written the manuscript. SL is responsible for the medical statistics and has participated in the interpretation of the results and the preparation of the manuscript. PGF prepared the questionnaire and parts of the survey in collaboration with the Norwegian Institute of Public Health, is responsible for the conception and design, has participated in the statistical analyses and the preparation of the manuscript, and is the main supervisor of the project. All authors have read and approved the final version of the manuscript.

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1472-6904/11/2/prepub
]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Spatial and temporal EEG dynamics of dual-task driving performance.
21332977
Driver distraction is a significant cause of traffic accidents. The aim of this study is to investigate electroencephalography (EEG) dynamics in relation to distraction during driving. To study human cognition under a specific driving task, a simulated real-driving environment was built using virtual reality (VR)-based simulation together with designed dual-task events, which include unexpected car deviations and mathematics questions.
BACKGROUND
We designed five cases with different stimulus onset asynchronies (SOA) to investigate the distraction effects between the deviations and the equations. The EEG channel signals are first converted into separate brain sources by independent component analysis (ICA). Then, event-related spectral perturbation (ERSP) changes of the EEG power spectrum are used to evaluate brain dynamics in the time-frequency domain.
METHODS
Power increases in the theta and beta bands are observed in relation to distraction effects in the frontal cortex. In the motor area, alpha and beta power suppressions are also observed. All of the above results are consistently observed across 15 subjects. Additionally, further analysis demonstrates that response time and cortical EEG power in multiple areas both change significantly with different SOAs.
RESULTS
This study suggests that the theta power increase in the frontal area is related to driver distraction and represents the strength of distraction in real-life situations.
CONCLUSIONS
[ "Adult", "Attention", "Automobile Driving", "Brain", "Brain Mapping", "Computer Simulation", "Electroencephalography", "Humans", "Male", "Reaction Time", "Task Performance and Analysis", "Young Adult" ]
3050807
null
null
Methods
[SUBTITLE] Subjects [SUBSECTION] Fifteen healthy participants (all males), between 20 and 28 years of age, were recruited from the university population. They had normal or corrected-to-normal vision, were right-handed, held a driver's license, and reported being free of psychiatric or neurological disorders. Written informed consent was obtained prior to the study.

Each subject participated in four simulated driving sessions, seated inside a car with hands on the steering wheel to keep the car in the center of the third lane (numbered from the left lane) of a four-lane freeway presented in a VR surround scene [23]. Thirty scalp Ag/AgCl electrodes, with a unipolar reference at the right earlobe, were mounted on the subject's head using the NuAmp system (Compumedics Ltd., VIC, Australia) to record the physiological EEG [25]. The EEG electrodes were placed according to a modified international 10-20 system. The contact impedance between the EEG electrodes and the scalp was calibrated to be less than 10 kΩ. Before beginning the first session, each subject completed a 15-30 minute practice session. In each session, subjects performed fifteen minutes of simulated freeway driving while the corresponding EEG signals were recorded synchronously. Across the four sessions, subjects were required to rest for ten minutes between every two sessions to avoid fatigue.

[SUBTITLE] Recordings and experimental conditions [SUBSECTION] For this study, a simulated freeway scene was built using VR technology with a WTK library on a 6-DOF motion platform [23]. The four-lane freeway scene was displayed in a surround environment. Since the main purpose of this paper is to investigate distraction effects in dual-task conditions, two tasks involving unexpected car deviations and mathematical questions were designed. In the driving task, the car frequently and randomly drifted from the center of the third lane, and subjects were required to steer the car back to the center of the third lane. This task mimicked the effects of driving on a non-ideal road surface. In the mathematical task, two-digit addition equations were presented to the subjects.
The answers were designed to be either valid or invalid. Subjects were asked to press the right or left button on the steering wheel for correct or incorrect equations, respectively. Correct and incorrect equations were presented in a 50-50 ratio. The mathematics task was chosen to allow control over the task demands [26]. All drivers could perform this task well without training.

To investigate the effects of SOA between the two tasks, combinations of the two tasks were designed to provide different distraction conditions, as shown in Figure 1. Five cases were developed to study the interaction of the two tasks; the bottom insets of the figure show the onset sequences of the two tasks. This design made it possible to investigate the relationship between the math task and the driving task and how the two tasks affected each other under the different SOA conditions.

Figure 1 caption: The illustration shows the temporal relationship between the deviation and math tasks. D: deviation task onset. M: math task onset. (a) Case 1: the math task appears 400 ms before the deviation onset. (b) Case 2: the math and deviation tasks occur at the same time. (c) Case 3: the math task appears 400 ms after the deviation onset. (d) Case 4: only the math task is presented. (e) Case 5: only the deviation task occurs. The bottom insets show the onset sequences of the two tasks.
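Purely as an illustration of this five-case design (not the authors' stimulus software), the snippet below generates onset pairs for a randomized trial sequence; the 400 ms SOA comes from the description above, while the inter-trial interval is an arbitrary assumption.

```python
import random

SOA = 0.4  # seconds between the math and deviation onsets in cases 1 and 3

def trial_onsets(case, t):
    """Return (deviation_onset, math_onset) in seconds; None means the task is absent."""
    if case == 1:            # math 400 ms before the deviation
        return t + SOA, t
    if case == 2:            # simultaneous onsets
        return t, t
    if case == 3:            # math 400 ms after the deviation
        return t, t + SOA
    if case == 4:            # math task only
        return None, t
    if case == 5:            # deviation task only
        return t, None
    raise ValueError(f"unknown case {case}")

# Example: 20 trials in random case order, with an assumed 8-12 s gap between trial onsets
random.seed(1)
t, schedule = 0.0, []
for case in random.choices([1, 2, 3, 4, 5], k=20):
    t += random.uniform(8.0, 12.0)
    schedule.append((case, *trial_onsets(case, t)))
```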
[SUBTITLE] Statistical analysis of behavior performance [SUBSECTION] After recording the behavior data, the Statistical Package for the Social Sciences (SPSS) Version 13.0 for Windows is used for significance testing of the behavior data. The response times of the two tasks (the driving deviation and the math equation) are analyzed to study the behavior of the subjects in the experiments.

Using analysis of variance (ANOVA), the significance of the response times of the two tasks is tested for every subject. A non-parametric test is also utilized to study the trends in the behavior data. Firstly, this study excludes outliers, comprising around 6.57% of all trials, based on the criterion that the response time falls outside the mean response time plus three times the standard deviation of each single session. Secondly, the case with the fewest trials is used as a benchmark, and the same number of trials is randomly selected from each of the other cases. Thirdly, the single-task condition is taken as the baseline, and the behavior data are normalized as Xi/Xmean (Xi: mean response time in case i; Xmean: mean response time in the single-task case). For example, in order to compare the distraction effects from the math equation, case 4 (the single math task) is the baseline.

[SUBTITLE] Measurement of distraction effects in dual-task EEG time series [SUBSECTION] EEG epochs are extracted from the recorded EEG signals with 16-bit quantization at a sampling rate of 500 Hz. The data are then preprocessed with a simple low-pass filter with a cut-off frequency of 50 Hz to remove line noise and other high-frequency noise, and with an additional high-pass filter with a cut-off frequency of 0.5 Hz to remove DC drift. This study adopts ICA to separate independent brain sources [27-29]; a simplified sketch of this preprocessing is given below. ERSP analysis is then applied to these independent component (IC) signals (the separated brain sources) to transform them into the time-frequency domain for the event-related frequency study. Finally, the stability of the component activations and scalp topographies of meaningful components is investigated with component clustering.
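The following is a minimal Python sketch of the preprocessing just described, using a zero-phase 0.5-50 Hz band-pass filter and FastICA as a stand-in for the ICA implementation actually used; the array name, file name and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 500.0                                 # sampling rate (Hz)
eeg = np.load("eeg_channels.npy")          # hypothetical array, shape (30, n_samples)

# Zero-phase band-pass: remove DC drift (<0.5 Hz) and line/high-frequency noise (>50 Hz)
b, a = butter(4, [0.5 / (fs / 2), 50.0 / (fs / 2)], btype="bandpass")
eeg_filt = filtfilt(b, a, eeg, axis=1)

# ICA assumes as many sources as sensors: 30 channels are unmixed into 30 components
ica = FastICA(n_components=eeg.shape[0], max_iter=1000, random_state=0)
sources = ica.fit_transform(eeg_filt.T).T  # component activations, shape (30, n_samples)
scalp_maps = ica.mixing_.T                 # rows approximate the components' scalp maps
```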
Because different cases with various combinations of the driving and math tasks are designed, EEG responses for the five cases are extracted separately.

EEG source segregation, identification and localization are difficult because EEG data collected from the human scalp reflect brain activity mixed over a large brain area. Although the conductivities of the skull and brain differ, the spatial "smearing" of EEG data caused by volume conduction does not introduce a significant time delay. This suggests that the ICA algorithm is suitable for performing blind source separation on EEG data. The first applications of ICA to biomedical time-series analysis were presented by Makeig and Inlow [30]. Their report shows the segregation of eye movements from brain EEG phenomena and separates EEG data into constituent components defined by spatial stability and temporal independence. Subsequent technical experiments demonstrated that ICA could also be used to remove artifacts from both continuous and event-related (single-trial) EEG data [27,28]. Presumably, multi-channel EEG recordings are mixtures of underlying brain sources and artifactual signals. Three assumptions are made: (a) the mixing medium is linear and propagation delays are negligible; (b) the time courses of the sources are independent; and (c) the number of sources equals the number of sensors, i.e. with N sensors the ICA algorithm can separate N sources [27].

The time sequences of the ICA component signals are subjected to a Fast Fourier Transform with overlapping moving windows. In addition, the spectrum in each epoch is smoothed with a 3-window (768-point) moving average to reduce random errors. The spectrum prior to event onset is taken as the baseline spectrum for every epoch. The mean of the baseline spectrum is subtracted from the power spectra after stimulus onset so that spectral "perturbations" can be visualized. This procedure is applied repeatedly to every epoch, and the results are averaged to yield ERSP images [31]. These measures evaluate averaged dynamic changes in the amplitudes of the broad-band EEG spectrum as a function of time following cognitive events. The ERSP images mainly show spectral differences after an event, since the baseline spectrum prior to event onset has been removed. After performing a bootstrap analysis on the ERSP (with a significance level of usually 0.01, 0.03 or 0.05; here 0.01 was applied), only statistically significant (p < 0.01) spectral changes are shown in the ERSP images. Non-significant time/frequency points are masked (replaced with zero). Consequently, any perturbations in the frequency domain become relatively prominent.

To study the cross-subject component stability of the ICA decomposition, components from multiple subjects are clustered based on their spatial distributions and EEG characteristics. However, components from different subjects differ in many ways, such as scalp maps, power spectra, ERPs and ERSPs. Some studies have attempted to solve this problem by calculating similarities among different ICs [32-34]. Based on these studies, ICs of interest are selected and clustered semi-automatically based on their scalp maps, dipole source locations and within-subject consistency. To match the scalp maps of ICs within and across subjects in this paper, the gradients of the IC scalp maps from different sessions of the same subject are computed, and components are grouped together based on the highest correlations of these gradients over the common electrodes retained in all sessions.
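As an illustration of the ERSP computation described above, the sketch below builds a time-frequency decomposition of single-component epochs with overlapping FFT windows, subtracts the mean pre-stimulus baseline spectrum, averages over trials, and masks non-significant bins with a simple baseline bootstrap. It is a simplified stand-in for the EEGLAB routines, not their implementation; the epoch layout, window length and baseline duration are assumptions, and the 3-window smoothing step is omitted.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 500.0
# Hypothetical single-component epochs: shape (n_trials, n_samples), stimulus at t = 0
epochs = np.load("component_epochs.npy")
t0 = 1.0                                    # assumed seconds of pre-stimulus baseline per epoch

def epoch_tf(x):
    """Time-frequency power (dB) of one epoch using overlapping FFT windows."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
    return f, t - t0, 10.0 * np.log10(Sxx + 1e-20)

f, times, _ = epoch_tf(epochs[0])
tf = np.stack([epoch_tf(ep)[2] for ep in epochs])            # (n_trials, n_freqs, n_times)

baseline = tf[:, :, times < 0].mean(axis=2, keepdims=True)   # per-trial baseline spectrum
ersp = (tf - baseline).mean(axis=0)                          # trial-averaged perturbation (dB)

# Bootstrap significance mask (two-tailed p < 0.01): resample baseline time points to
# build a surrogate distribution of perturbations, then zero out non-significant bins.
rng = np.random.default_rng(0)
n_boot = 200
base_cols = np.where(times < 0)[0]
surrogate = np.empty((n_boot, tf.shape[1]))
for i in range(n_boot):
    cols = rng.choice(base_cols, size=base_cols.size, replace=True)
    surrogate[i] = (tf[:, :, cols] - baseline).mean(axis=(0, 2))
lo, hi = np.percentile(surrogate, [0.5, 99.5], axis=0)
masked_ersp = np.where((ersp < lo[:, None]) | (ersp > hi[:, None]), ersp, 0.0)
```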
For dipole source locations, DIPFIT2 routines from EEGLAB are used to fit single-dipole source models to the remaining IC scalp topographies using a four-shell spherical head model [35]. In the DIPFIT software, the spherical head model is co-registered with an average brain model (Montreal Neurological Institute), and approximate Talairach coordinates are returned for each equivalent dipole source.
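To make the component-matching step concrete, here is a toy sketch that pairs independent components across two sessions by the similarity of their scalp maps; it uses plain correlation of the map weights over the shared electrodes as a simplification of the gradient-correlation matching described above, and the array names are hypothetical.

```python
import numpy as np

# Hypothetical scalp maps: one (n_components, n_electrodes) array per session,
# restricted to the electrodes retained in all sessions.
maps_s1 = np.load("session1_scalp_maps.npy")
maps_s2 = np.load("session2_scalp_maps.npy")

def match_components(a, b):
    """For each map in a, find the map in b with the highest absolute correlation."""
    pairs = []
    for i, m in enumerate(a):
        corrs = np.array([np.corrcoef(m, n)[0, 1] for n in b])
        j = int(np.argmax(np.abs(corrs)))   # the sign of an IC scalp map is arbitrary
        pairs.append((i, j, corrs[j]))
    return pairs

for i, j, r in match_components(maps_s1, maps_s2):
    print(f"session 1 IC{i:02d} <-> session 2 IC{j:02d}  (r = {r:+.2f})")
```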
null
null
null
null
[ "Background", "Subjects", "Recordings and experimental conditions", "Statistical analysis of behavior performance", "Measurement of distraction effects in dual-task EEG time series", "Results", "Behavior performance", "Independent component clustering", "Frontal and left motor clusters", "Discussion", "Frontal cluster", "Motor cluster", "Brain dynamics related to behavior performance", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Driver distraction has been identified as the leading cause of car accidents. The U.S. National Highway Traffic Safety Administration had reported driver distraction as a high priority area about 20-30% of car accidents [1]. Distraction during driving by any cause is a significant contributor to road traffic accidents [2,3]. Driving is a complex task in which several skills and abilities are simultaneously involved. Distractions found during driving are quite widespread, including eating, drinking, talking with passengers, using cell phones, reading, feeling fatigue, solving problems, and using in-car equipment. Commercial vehicle operators with complex in-car technologies also cause an increased risk as they may become increasingly distracting in the years to come [4,5]. Some literature studied the behavioral effect of driver's distraction in car. Tijerina showed driver distraction from measurements of the static completion time of an in-vehicle task [6]. Similarly, distraction effects caused by talking on cellular phones during driving have been a focal point of recent in-car studies [7-9]. Experimental studies have been conducted to assess the impact of specific types of driver distraction on driving performance. Though these studies generally reported significant driving impairment, simulator studies cannot provide information about accidents due to impairment resulting in hospitalization of the driver [10,11]. To provide information before the occurrence of crashes, the drivers' physiological responses are investigated in this paper. However, monitoring drivers' attention-related brain resources is still a challenge for researchers and practitioners in the field of cognitive brain research and human-machine interaction.\nRegarding neural physiological investigation, some literature focused on the brain activities of \"divided attention,\" referring to attention divided between two or more sources of information, such as visual, auditory, shape, and color stimuli. Positron emission tomography (PET) measurements were taken while subjects discriminated among shape, color, and speed of a visual stimulus under conditions of selective and divided attention. The divided attention condition activated the anterior cingulated and prefrontal cortex in the right hemisphere [12]. In another study, functional magnetic resonance imaging (fMRI) was used to investigate brain activity during a dual-task (visual stimulus) experiment. Findings revealed activation in the posterior dorsolateral prefrontal cortex (middle frontal gyrus) and lateral parietal cortex [13]. In addition, several neuroimaging studies showed the importance of the prefrontal network in dual-task management [14,15]. Some studies investigated traffic scenarios recorded the EEG to compare P300 amplitudes [16]. During simulated traffic scenarios, resource allocation was assessed as an event-related potential (ERP) novelty oddball paradigm [17]. In these EEG studies, however, only the time course was analyzed. Deiber took one more step to analyze the relation between time and frequency courses [18]. Their study used EEG to investigate mental arithmetic-induced workload and found theta band power increases in areas of the frontal cortex. Despite so much research on brain activities, the above-mentioned studies only investigated brain activities during dual-task interactions without considering the SOA problem during driving, which is with the temporal gap between presentations of two stimuli. 
When dual tasks are presented within a short SOA, the response time of each task is typically lower than when they are presented with a longer SOA [19]. Therefore, the current study investigates the effects of different temporal relationships between the stimuli.

Clinical practice as well as basic scientific studies have used the EEG for 80 years. Presently, EEG measurement is widely used as a standard procedure in research such as sleep studies, the assessment of epileptic abnormalities, and the diagnosis of other disorders [20,21]. Compared with another widely used neuroimaging modality, fMRI, the EEG is much less expensive and has superior temporal resolution for investigating SOA problems. To avoid interference and decrease risks relative to operating a vehicle on the road, researchers have adopted driving simulations for vehicle design, and studies of drivers' behavior and cognitive states are also expanding rapidly [22]. However, static driving simulation cannot fully recreate real-life driving conditions, such as the vibrations experienced when driving an actual vehicle on the road. Therefore, a VR-based simulation with a motion platform was developed [23,24]. This VR technique allows subjects to interact directly with a virtual environment rather than with only monotonous auditory or visual stimuli. Integrating realistic VR scenes with visual stimuli makes it easy to study the brain response to attention during driving. In recent years, VR-based simulation combined with EEG monitoring has therefore become a beneficial innovation in cognitive engineering research.

The main goal of this study is to investigate the brain dynamics related to distraction by using EEG and a VR-based realistic driving environment. Unlike previous studies, the experimental design has three main characteristics. First, the SOA design, with different onset times of the two tasks, has the benefit of investigating the driver's behavioral and physiological responses under multiple conditions and multiple distraction levels. Second, ICA-based advanced analysis methods are used to extract the brain responses and the cortical locations related to distraction. Third, this study investigates the interaction and effects of dual-task-related brain activities, in contrast to a single task.
EEG epochs were extracted from the recorded signals, which were digitized with 16-bit quantization at a sampling rate of 500 Hz. The data were preprocessed with a low-pass filter with a cut-off frequency of 50 Hz to remove line noise and other high-frequency noise, and with a high-pass filter with a cut-off frequency of 0.5 Hz to remove DC drift. ICA was then used to separate independent brain sources [27-29], and ERSP analysis was applied to the resulting independent component (IC) activations to transform the signals into the time-frequency domain for the study of event-related spectral changes. Finally, the stability of component activations and scalp topographies of the meaningful components was examined with component clustering. Because the cases combine the driving and math tasks in different ways, EEG responses were extracted separately for each of the five cases.

EEG source segregation, identification, and localization are difficult because the data collected at the scalp reflect summed activity from large areas of the brain. Although the conductivities of the skull and brain differ, the spatial "smearing" of EEG data caused by volume conduction does not introduce a significant time delay, which makes ICA suitable for blind source separation of EEG data. The first applications of ICA to biomedical time series analysis were presented by Makeig and Inlow [30], who showed that eye movements could be segregated from brain EEG phenomena and that EEG data could be separated into constituent components defined by spatial stability and temporal independence. Subsequent technical work demonstrated that ICA can also be used to remove artifacts from both continuous and event-related (single-trial) EEG data [27,28]. Multi-channel EEG recordings are assumed to be mixtures of underlying brain sources and artifactual signals. ICA assumes that (a) the mixing medium is linear and propagation delays are negligible, (b) the time courses of the sources are independent, and (c) the number of sources equals the number of sensors, so that with N sensors the algorithm can separate N sources [27].

The time courses of the IC activations were subjected to a Fast Fourier Transform using overlapping moving windows, and the spectrum in each epoch was smoothed with a three-window (768-point) moving average to reduce random error. The spectrum prior to event onset served as the baseline for each epoch; the mean baseline spectrum was subtracted from the post-stimulus power spectra so that spectral "perturbations" could be visualized. This procedure was applied to every epoch and the results were averaged to yield ERSP images [31]. These measures quantify the average dynamic changes in broadband EEG spectral power as a function of time following cognitive events, and the ERSP images mainly show spectral differences after an event because the pre-onset baseline spectrum has been removed. A bootstrap analysis was then performed on the ERSP (typical significance levels are 0.01, 0.03, or 0.05; 0.01 was used here), and only statistically significant (p < 0.01) spectral changes are shown in the ERSP images. Non-significant time-frequency points are masked (set to zero), so that significant perturbations in the frequency domain stand out clearly.
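The following sketch illustrates the kind of computation described above: per-trial spectrograms, subtraction of the pre-onset baseline spectrum, averaging, and bootstrap masking of non-significant points. The window length, dB scaling, and bootstrap scheme are assumptions; the EEGLAB routines actually used by the authors may differ in detail.

```python
import numpy as np
from scipy.signal import spectrogram

def ersp(epochs, fs=500.0, onset_s=1.0, n_boot=200, p=0.01, seed=0):
    """Baseline-subtracted, bootstrap-masked ERSP for one IC.

    epochs : (n_trials, n_samples) array of single-trial IC activations,
             each with `onset_s` seconds of pre-stimulus baseline.
    """
    rng = np.random.default_rng(seed)
    trial_db = []
    for x in epochs:
        f, t, sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=192)
        db = 10.0 * np.log10(sxx + 1e-12)            # power in dB
        base = db[:, t < onset_s].mean(axis=1)        # pre-onset baseline
        trial_db.append(db - base[:, None])           # per-trial perturbation
    trial_db = np.stack(trial_db)                     # (trials, freqs, times)
    mean_db = trial_db.mean(axis=0)

    # Bootstrap null: average one randomly chosen baseline column per trial.
    base_cols = np.flatnonzero(t < onset_s)
    n_trials = len(epochs)
    null = np.empty((n_boot, len(f)))
    for b in range(n_boot):
        cols = rng.choice(base_cols, size=n_trials)
        null[b] = trial_db[np.arange(n_trials), :, cols].mean(axis=0)

    # Keep only values outside the two-tailed bootstrap interval.
    lo = np.percentile(null, 100 * p / 2, axis=0)[:, None]
    hi = np.percentile(null, 100 * (1 - p / 2), axis=0)[:, None]
    masked = np.where((mean_db < lo) | (mean_db > hi), mean_db, 0.0)
    return f, t, masked
```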
To study the cross-subject stability of the ICA decomposition, components from multiple subjects were clustered on the basis of their spatial distributions and EEG characteristics. Components from different subjects differ in many respects, such as scalp maps, power spectra, ERPs, and ERSPs, and several studies have addressed this problem by computing similarities among ICs [32-34]. Following these studies, ICs of interest were selected and clustered semi-automatically based on their scalp maps, dipole source locations, and within-subject consistency. To match IC scalp maps within and across subjects, the gradients of the IC scalp maps from different sessions of the same subject were computed and grouped according to the highest correlations of the gradients over the electrodes common to all sessions. For dipole source locations, the DIPFIT2 routines from EEGLAB were used to fit single-dipole source models to the IC scalp topographies using a four-shell spherical head model [35]. In DIPFIT, the spherical head model is co-registered with an average brain model (Montreal Neurological Institute), yielding approximate Talairach coordinates for each equivalent dipole source.

Results

Behavior performance

To characterize overall behavior, nonparametric tests were used because several extremely large scores made the distributions markedly skewed. First, trials were randomly selected so that all cases contained the same number of trials. Then, the response times for the deviation and math tasks in the five cases were normalized to the corresponding single-deviation and single-math cases, respectively. SPSS was used for the Friedman test, and the results are shown in Figure 2. Dual-task cases are marked for easy discrimination from single-task cases.

Figure 2. Bar charts of normalized response times (a) for the math task and (b) for the deviation task across the 15 subjects. Filled black bar: case 1; dark gray bar: case 2; light gray bar: case 3; open bar: single-task case. The response time for the math task in the dual-task cases (cases 1, 2, and 3) is significantly longer than in the single-task case (case 4), and the shortest response time to the math onset is in case 4. The response time for the deviation task in case 1 is significantly shorter than in the other cases, and the longest response time to the deviation onset is in case 5. The bottom insets show the onset sequences of the two tasks.

To determine how the cases differ, the Student-Newman-Keuls test was used for post hoc comparisons (Table 1). For the response times of the math task in cases 1-4, the Friedman ANOVA gave χ2(3) = 903.926, p < 0.01; the Student-Newman-Keuls test identified three significant groups (case 1 with case 2, then case 3, then case 4), with the response time for the math task longest in case 1. For the response times of the deviation task in cases 1-3 and case 5, the Friedman ANOVA gave χ2(3) = 493.98, p < 0.01; the Student-Newman-Keuls test identified two significant groups (case 1 versus the other cases), with the response time for the deviation task shortest in case 1.

Table 1. The normalized response times to the deviation and math tasks.
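As an illustration of the statistical comparison just described, the sketch below runs a Friedman test over matched cases and follows it with pairwise comparisons. SciPy provides no Student-Newman-Keuls routine, so Bonferroni-corrected Wilcoxon signed-rank tests are used here purely as a stand-in for the post hoc grouping reported in the paper.

```python
from itertools import combinations
from scipy.stats import friedmanchisquare, wilcoxon

def compare_cases(rt_by_case, alpha=0.01):
    """Friedman test across cases plus pairwise follow-up comparisons.

    rt_by_case : dict mapping case label -> array of normalized response
    times with matched trial counts (as prepared above).
    """
    labels = sorted(rt_by_case)
    chi2, p = friedmanchisquare(*(rt_by_case[c] for c in labels))
    print(f"Friedman chi2({len(labels) - 1}) = {chi2:.2f}, p = {p:.3g}")

    pairs = list(combinations(labels, 2))
    for a, b in pairs:
        stat, p_ab = wilcoxon(rt_by_case[a], rt_by_case[b])
        flag = "*" if p_ab < alpha / len(pairs) else ""
        print(f"  {a} vs {b}: p = {p_ab:.3g} {flag}")
```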
Independent component clustering

EEG epochs were extracted from the recorded signals, and ICA was used to decompose them into independent brain sources. Because the experiment involves distraction, many brain processes are engaged: the motor component is active when subjects steer the car, while attention-related activations appear in the frontal component. ICA components, including frontal and motor components, were therefore selected for IC clustering so that cross-subject data could be analyzed on the basis of their EEG characteristics.

IC clustering groups the large number of components from multiple sessions and subjects into a small set of meaningful clusters. K-means cluster analysis was applied to the normalized scalp topographies and power spectra of all 450 components (30 channels × 15 subjects). The analysis identified at least seven component clusters with similar power spectra and scalp projections: frontal, central midline, parietal, left/right motor, and left/right occipital. Table 2 gives the number of components in each cluster. This investigation uses the frontal and left motor components to analyze distraction effects. Figure 3 shows the scalp maps and equivalent dipole source locations for the frontal and left motor clusters; the EEG sources of different subjects within the same cluster can thus be attributed to the same physiological component.

Table 2. The number of components in the different clusters.

Figure 3. Scalp maps and equivalent dipole source locations after IC clustering across the 15 subjects. (a) The frontal components and (b) the left motor components; 14 subjects contribute to the frontal cluster and 11 subjects to the left motor cluster. The grand scalp map is the mean of all component maps in each cluster, and the smaller maps are the individual scalp maps. The right panels (c) and (d) show the 3-D dipole source locations (colored spheres) and their projections onto average brain images. Each source location is colored to match the text label above its corresponding scalp map.
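The k-means grouping described above can be sketched as follows; the feature construction (row-normalized scalp maps concatenated with log power spectra) is an assumption and simplifies the pre-clustering options available in EEGLAB.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_ics(scalp_maps, spectra, n_clusters=7, seed=0):
    """K-means clustering of ICs pooled across subjects.

    scalp_maps : (n_components, n_channels) IC scalp projections
    spectra    : (n_components, n_freq_bins) IC power spectra
    Returns one cluster label per component.
    """
    def zscore_rows(x):
        x = np.asarray(x, dtype=float)
        return (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-12)

    features = np.hstack([zscore_rows(scalp_maps),
                          zscore_rows(np.log10(np.asarray(spectra) + 1e-12))])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(features)
```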
Frontal and left motor clusters

Figure 4a shows the cross-subject averaged ERSP of the frontal cluster for the five cases. It reveals significant (p < 0.01) power increases time-locked to the math task, indicating that the power increases in the frontal cluster are related to math processing. The theta power increases in the three dual-task cases (cases 1-3) differ slightly from each other, and the power in the dual-task cases is stronger than in the single math task (case 4), with the strongest increase in case 1. Power increases also appear in the beta band; these occur only in the cases containing the math task and are time-locked to the math onsets.

Figure 4. ERSP images of the frontal cluster for the five cases. (a) The ERSP images of the frontal cluster; the right column shows the onset sequences of the two tasks.
Color bars indicate the magnitude of the ERSPs. Red solid lines mark the onset of the math task and red dashed lines the mean response time for the math task; blue solid lines mark the onset of the deviation task and blue dashed lines the mean response time for the deviation task. The red circle indicated by the red arrow in case 2 marks where the red and blue solid lines coincide. (b) Latencies derived from (a), measured from the math task onset to the first occurrence of a power increase; open bars give the latencies in the theta (4.5-9 Hz) band and gray bars the latencies in the beta (11-15 Hz) band. (c) Comparison across cases of total power in the cross-subject (14 subjects) averaged ERSP images of the frontal cluster; total power is obtained by summing all power increases within the same time window and frequency band. Open bars give the total power in the theta band and gray bars the total power in the beta band.

Figures 4b and 4c compare latency and total power across the four cases in Figure 4a. The latencies of the power increases in both frequency bands vary with SOA: the shortest latencies in both bands occur in case 1, and the longest theta latency occurs in case 4. The amount of theta power increase also varies with SOA, with the largest increase in case 1.

Figure 5a shows the cross-subject averaged ERSP of the left motor cluster for the five cases. Significant (p < 0.01) power suppressions appear around the event onsets (0 ms) and end at different times depending on the case. In case 4, the alpha and beta power suppressions persist until the red dashed line, which indicates the mean response time for the math task. Compared with case 4, the alpha and beta suppressions in case 5 are stronger and last longer. In the other cases, the alpha and beta suppressions continue beyond the blue dashed lines, a pattern suggested to reflect steering the car back to the center of the third lane.

Figure 5. ERSP images of the left motor cluster for the five cases. (a) The ERSP images of the left motor cluster; the right column shows the onset sequences of the two tasks. Color bars indicate the magnitude of the ERSPs. Red solid lines mark the onset of the math task and red dashed lines the mean response time for the math task; blue solid lines mark the onset of the deviation task and blue dashed lines the mean response time for the deviation task. The red circle indicated by the red arrow in case 2 marks where the red and blue solid lines coincide. (b) Latencies derived from (a), measured from the deviation task onset to the first occurrence of a power suppression; open bars give the latencies in the alpha (8-14 Hz) band and gray-blue bars the latencies in the beta (16-20 Hz) band. (c) Comparison across cases of total power in the cross-subject (11 subjects) averaged ERSP images of the left motor cluster; total power is obtained by summing all power suppressions within the same time window and frequency band. Open bars give the total power in the alpha band
and gray bars the total power in the beta band.

Figures 5b and 5c compare latency and total power across the four cases in Figure 5a. The beta-band suppression latencies vary with SOA: the shortest latency occurs in case 1 and the longest in case 5. The amount of alpha-band suppression also varies with SOA, with the strongest suppression in case 5 (the single driving task) and the weakest in case 4 (the single math task).

Figures 6a and 6d show the ERSPs of the frontal and left motor clusters without significance masking. Columns (b) and (e) show the differences among the three dual-task cases, and columns (c) and (f) show the differences between single- and dual-task cases. In columns (b), (c), (e), and (f), a Wilcoxon signed-rank test is used to retain the regions with significant power differences, outlined by black circles. Columns (b) and (c) compare power increases between cases; the retained regions show greater power increases in the single-task case than in the dual-task cases. Columns (e) and (f) compare power suppressions between cases; the retained regions show greater power suppressions in the dual-task cases than in the single-task case.

Figure 6. ERSPs without significance masking and the differences between cases. Column (a) shows the ERSP of the frontal cluster without significance masking, containing the full detail of cases 1, 2, 3, and 4. Column (b) shows the differences among the three dual-task cases in column (a), and column (c) the differences between single- and dual-task cases in column (a). Column (d) shows the ERSP of the left motor cluster without significance masking, containing the full detail of cases 1, 2, 3, and 5. Column (e) shows the differences among the three dual-task cases in column (d), and column (f) the differences between single- and dual-task cases in column (d). A Wilcoxon signed-rank test (p < 0.01) is used for the statistical comparisons in (b), (c), (e), and (f).
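The latency and total-power measures reported in Figures 4b/4c and 5b/5c can be derived from a significance-masked ERSP matrix as in the following sketch; the sign convention for summing increases versus suppressions is an assumption.

```python
import numpy as np

def quantify_band(ersp_db, freqs, times, band, onset_s):
    """Latency and total power of masked ERSP changes in one band.

    ersp_db : (n_freqs, n_times) bootstrap-masked ERSP (non-significant
              points already set to zero)
    band    : (f_lo, f_hi) in Hz, e.g. (4.5, 9) for theta or (8, 14) for alpha
    Latency = time from task onset to the first significant point in the
    band; total power = sum of all significant values in the band after onset.
    """
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    after = times >= onset_s
    patch = ersp_db[np.ix_(in_band, after)]
    active = np.flatnonzero(np.any(patch != 0, axis=0))
    latency = times[after][active[0]] - onset_s if active.size else np.nan
    return latency, patch.sum()
```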

Discussion

Frontal cluster

The frontal lobe, located at the front of each cerebral hemisphere, is involved in impulse control, judgment, language production, working memory, motor function, and problem solving [36,37]. In Figure 4a, the pronounced frontal power increases in cases 1-4 arise from solving the math questions: increases in the theta (4.5-9 Hz) and beta (11-15 Hz) bands appear shortly after the math onset. Figures 4b and 4c quantify the frontal power latencies and power increases in the four conditions to characterize the EEG dynamics evoked by solving the math question. The shortest theta latency occurs in case 1, and the theta power increases in the three dual-task cases are larger than in the single-task case, with the greatest power in case 1. These findings suggest that dual tasks induce more event-related theta activity and that subjects need more brain resources to accomplish dual tasks. Theta increases are associated with processes such as mental workload, problem solving, encoding, and self-monitoring [34]. On this basis, the results indicate that the subjects were indeed distracted under the dual-task conditions of the experiment.

Because the human visual system needs about 300 ms to perceive a stimulus (the P300 latency), 400 ms between the first and second tasks is sufficient for a subject to perceive both stimuli [38]. In case 1, one task is already being processed and subjects must recruit additional brain resources to manage the high-priority task presented 400 ms later. Accordingly, the total theta-band power in case 1 is the highest, as shown in Figure 4c.
The theta power increase also appears earliest in case 1, as shown in Figure 4b. This early theta response in the frontal area primarily reflects the activation of neural networks involved in allocating attention to the target stimulus [39].

The trends in response time for the math task (Figure 2a) and in the frontal theta increases (Figure 4c) are consistent with one another: in the single math task the response time is shortest and the theta power increase weakest, whereas among the dual-task cases the longest response time and the greatest theta increase both occur in case 1. This evidence suggests that frontal theta activity during dual tasks is related to distraction and reflects its strength. In addition, beta-band power increases appear in all cases and, in the ERSP images, are time-locked to the onset of the math task. Fernández suggested that significant EEG beta-band differences in the frontal area reflect a specific component of mental calculation [40].

Motor cluster

The mu rhythm is an EEG rhythm usually recorded over the motor cortex of the dominant hemisphere. It is suppressed by simple motor activity, such as clenching the contralateral fist, and also by passive movement [41-43]. The mu rhythm is believed to arise from synchronized activity in large populations of pyramidal neurons in the motor cortex that control hand and arm movements; motor activity desynchronizes these neurons and suppresses the rhythm.

In this study, the mu (8-14 Hz) and beta (16-20 Hz) power suppressions are mostly caused by subjects steering the wheel and pressing the buttons, as shown in Figure 5a. The mu suppressions caused by steering are almost time-locked to the response onset of the driving task in cases 1-3 and case 5, whereas button pressing alone produces no comparable effect in case 4. In the dual-task cases, the mu suppressions are weaker than in the single-task case, which may reflect competition for the brain resources required by wheel steering and button pressing.

Figures 5b and 5c show the motor power latencies and power suppressions in the four cases, allowing the EEG dynamics caused by the driving task to be examined. In (b), the longest beta suppression latency is observed in case 5 and the shortest in case 1.
This may reflect motor planning involved in preparing to steer the wheel and answer the math questions [44]. In (c), the power suppressions in the three dual-task cases are weaker than in the single-task case. Together, these observations suggest that math processing occupies more brain resources in the frontal area during the dual-task cases, so that less activation is induced in the motor area.

Brain dynamics related to behavior performance

Posner postulated that two tasks performed simultaneously do not interfere with each other's performance when different brain areas are used for the two tasks [45]. In this study, however, the two visual tasks compete within the frontal and motor areas for the control of action, and the results show that they interfere with each other both in behavioral performance (Figure 2) and in brain dynamics (Figure 6).

To compare brain dynamics among the different cases (Figure 6), a statistical analysis was conducted to assess the significance of the ERSP differences of the independent clusters across cases. Because the true sampling distribution of the cluster ERSPs was unknown and the sample sizes were small (N = 14 and N = 11, as 1 and 4 of the 15 subjects were excluded from the frontal and left motor clusters, respectively), a nonparametric paired-sample Wilcoxon signed-rank test was used to assess the statistically significant ERSP differences between cases, with the significance level set to p < 0.01.

In Figure 6c, the significant differences between the dual-task cases and case 4 arise because subjects' reaction to a math question is impaired when they are also handling a car deviation. Lavie demonstrated that dual-task load increases distraction effects [46], and because of these distraction effects the behavioral response times are significantly longer in the dual-task cases than in the single-task case. To compare the dual-task cases with each other, their differences are shown in Figure 6b. The behavioral data in Figure 2 show that the response times in cases 1 and 2 are the longest, indicating that the strongest distraction occurred in these two cases; the same pattern appears in Figure 6b, with the distraction effects in case 1 slightly stronger than in case 2. This suggests that certain sequential task arrangements produce distraction effects as strong as, or even stronger than, two simultaneous tasks.

Jong investigated how the performance of two overlapping discrete tasks is organized and controlled [47]. The sequential performance of overlapping tasks can be scheduled in advance and regulated by initially allocating brain resources to one task and subsequently switching to the other. Thus, in case 1, the math task occupies the brain resources when it is presented; when the driving task then appears, resources are immediately switched to the driving task and the math task is temporarily dropped, and afterwards the resources are switched back to the math task. This processing consumes the most brain resources and produces the longest response time for the math question: the response time in case 1 is significantly longer than in case 3 and case 4.
The occurrence of distraction effects is thus due in large part to the switching of brain resources.

The absence of significant behavioral differences for the driving task between the simultaneous-task case 2 and the single-task case 5 (Figure 2) suggests that the driving task is simple enough not to require many brain resources, and also reflects the first priority given to the driving task. The absence of behavioral differences among cases 2, 3, and 5 supports the same conclusion: subjects always chose to respond to the driving task when it occurred, even while handling a math task. In case 1, however, the math question served as a cue that led subjects to respond rapidly to the driving task to avoid hitting the wall. This shortened the response time for the driving task in case 1, because the subjects were under a high perceptual load; consistently, Lavie demonstrated that high perceptual load reduces response time [46]. This also explains why case 1 and case 3, although designed as a symmetrical pair, differ markedly from each other (Figure 2).

In Figure 6, the strongest power suppression occurs in case 5 (Figure 6f), with only the driving task, whereas the three dual-task cases show a similar level of suppression. The weaker suppression in the dual-task cases suggests that most brain resources are occupied in the frontal area to handle the two tasks rather than in the motor area. This implies that the motor area is not directly related to distraction effects, which is further supported by the low correlation between EEG dynamics in the motor area and the corresponding response times.

In summary, this study observed several differences between dual-task and single-task cases and examined the relationship between the brain dynamics associated with dual-task management and behavioral performance across response modalities. The results suggest that frontal theta activity during dual tasks is related to distraction and reflects its strength, and that the order in which two tasks of different difficulty appear is an important factor in dual-task performance.
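The low correlation between motor-area EEG dynamics and response time noted above can be expressed, descriptively, as a rank correlation across cases. The sketch below is only an illustration of that check; the paper states the conclusion qualitatively and does not report this statistic.

```python
from scipy.stats import spearmanr

def power_rt_relation(case_power, case_rt):
    """Rank correlation between per-case ERSP band power and mean RT.

    case_power, case_rt : dicts keyed by case label. With only a handful of
    cases this is purely descriptive rather than a formal test.
    """
    cases = sorted(set(case_power) & set(case_rt))
    values_p = [case_power[c] for c in cases]
    values_r = [case_rt[c] for c in cases]
    rho, p = spearmanr(values_p, values_r)
    return rho, p
```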
Mu suppression is believed to reflect the desynchronization of large populations of pyramidal neurons in the motor cortex that control hand and arm movements.

In this study, the mu (8-14 Hz) and beta (16-20 Hz) power suppressions are mostly caused by subjects steering the wheel and pressing buttons, as shown in Figure 5a. The mu suppressions caused by steering the wheel are almost time-locked to the response onset of the driving task in cases 1-3 and case 5, whereas the mu suppressions caused by pressing the buttons have no clear effect in case 4. In the dual-task cases, the mu suppressions are weaker than in the single-task case, which may be due to competition for the brain resources required by wheel steering and button pressing.

Figures 5b and 5c show the motor power latencies and power suppressions in four cases in order to examine the EEG dynamics caused by the driving task. In (b), the longest latency of beta power suppression is observed in case 5 and the shortest appears in case 1; motor planning may be involved in preparing to steer the wheel and answer the math question [44]. In (c), the power suppressions in the three dual-task cases are weaker than in the single-task case. Based on this evidence, it appears that math processing occupies more brain resources in the frontal area during the dual-task cases, so less activation is induced in the motor area.

Brain dynamics related to behavior performance

Posner postulated that two tasks performed simultaneously do not interfere with each other's performance when different brain areas are used for the two tasks [45]. However, this study uses two visual tasks that compete within the frontal and motor areas for action. The results show that these two visual tasks interfere with each other in both behavior performance (Figure 2) and brain dynamics (Figure 6).

To compare brain dynamics among the different cases (Figure 6), a statistical analysis was conducted to assess the significance of the ERSP differences of the independent clusters across cases. Since the true sample distribution of the cluster ERSPs was unknown and the sample sizes were small (N = 14 and N = 11, as 1 of 15 and 4 of 15 subjects were excluded from the frontal and left motor clusters, respectively), a nonparametric paired-sample Wilcoxon signed-rank test was employed to assess statistically significant ERSP differences between cases. The level of significance was set to p < 0.01.
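As a minimal sketch of the pairwise comparison just described, the following Python code runs a paired Wilcoxon signed-rank test at every time-frequency point of two cases' cluster ERSPs and masks non-significant points; the function and array names are illustrative assumptions, not the study's original (presumably MATLAB/EEGLAB-based) implementation.

```python
import numpy as np
from scipy.stats import wilcoxon

def ersp_case_difference(ersp_a, ersp_b, alpha=0.01):
    """Paired Wilcoxon signed-rank test at every time-frequency point.

    ersp_a, ersp_b : arrays of shape (n_subjects, n_freqs, n_times)
        Per-subject ERSPs (dB) for the two cases being compared.
    Returns the mean difference (a - b) with non-significant points set to 0.
    """
    n_subjects, n_freqs, n_times = ersp_a.shape
    diff = ersp_a.mean(axis=0) - ersp_b.mean(axis=0)
    mask = np.zeros((n_freqs, n_times), dtype=bool)
    for f in range(n_freqs):
        for t in range(n_times):
            a, b = ersp_a[:, f, t], ersp_b[:, f, t]
            if np.allclose(a, b):          # wilcoxon is undefined when all differences are zero
                continue
            _, p = wilcoxon(a, b)          # paired, two-sided by default
            mask[f, t] = p < alpha
    return np.where(mask, diff, 0.0)

# Example: compare the frontal-cluster ERSPs of case 1 and case 4 across 14 subjects
# ersp_case1, ersp_case4 = ...  # each of shape (14, n_freqs, n_times)
# masked_diff = ersp_case_difference(ersp_case1, ersp_case4)
```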
In Figure 6c, the significant differences between the dual-task cases and case 4 arise because subjects' reaction to a math question is impaired when they are also handling a car deviation. Lavie demonstrated that dual-task load increases distraction effects [46]. Because of these distraction effects, the behavioral response times are significantly longer in the dual-task cases than in the single-task case. To compare the dual-task cases with each other, their differences are shown in Figure 6b. From the behavior performance in Figure 2, the response times in case 1 and case 2 are the longest, which means that the strongest distraction effects occurred in these two cases; this is also visible in Figure 6b. In particular, the distraction effects in case 1 are slightly greater than those in case 2. This suggests that some sequential task pairs produce distraction effects that are as strong as, or even stronger than, those of two simultaneous tasks.

Jong investigated how the performance of two overlapping discrete tasks is organized and controlled [47]. The sequential performance of overlapping tasks can be scheduled in advance and regulated by initially allocating brain resources to one task and subsequently switching to the other. Thus, in case 1, when the math task is presented, it occupies the brain resources; when the driving task then appears, the brain resources are immediately switched to the driving task and the math task is temporarily dropped. Subsequently, the brain resources are switched back to the math task. This processing consumes the most brain resources and produces the longest response time for the math question: the response time in case 1 is significantly longer than in case 3 and case 4. The occurrence of distraction effects is therefore due in large part to the switching of brain resources.

The fact that no significant differences in behavior performance occur for the driving task between the simultaneous-task case 2 and the single-task case 5 (Figure 2) suggests that the driving task is too simple to require many brain resources. These results also reflect the first priority given to the driving task; the absence of behavioral differences among case 2, case 3, and case 5 supports this interpretation. Thus, the subjects always chose to respond to the driving task when it occurred, even if they were handling a math task. In case 1, however, the math question acts as a cue that prompts the subjects to respond rapidly to the driving task in order to avoid hitting the wall. This shortens the response time for the driving task in case 1, because the subjects are under a high perceptual load; consistently, Lavie demonstrated that a high perceptual load reduces response time [46]. This also causes case 1 and case 3, which form a symmetrical paradigm, to differ considerably from each other (Figure 2).

In Figure 6, the strongest power suppression occurs in case 5 (Figure 6f), which contains only the driving task, while the three dual-task cases show a similar level of power suppression. The weaker power suppression in the motor area during the dual-task cases is likely because most brain resources are occupied in the frontal area to handle the two tasks rather than in the motor area. This suggests that the motor area is not related to distraction effects, which is further supported by the low correlation between the EEG dynamics in the motor area and the corresponding response times.

In summary, this study observes several differences between dual-task and single-task cases and investigates the relationship between the brain dynamics associated with dual-task management and the behavior performance of the response modalities. The theta activity of the EEG in the frontal area during dual tasks appears to be related to distraction effects and to represent the strength of distraction. In addition, the order in which the two tasks of different difficulty appear is an important factor in dual-task performance.

Conclusions

This study investigates behavioral and physiological (EEG) responses under multiple cases and multiple distraction levels. First, the response time for mathematical problem solving in the dual-task conditions is significantly longer than in the single-task condition; distraction effects therefore occur while processing two tasks during driving.
In contrast to the mathematical problems, however, the response times for the driving task are almost identical across cases. This is due to the order of task appearance and the relative difficulty of the two tasks, suggesting that these factors are important considerations in dual-task performance. Second, theta power increases in the frontal area are larger when response times are longer. The phasic changes in the theta band in the case where the mathematical task is presented before the deviation task show the strongest increase, comparable to that in the simultaneous-task case. This is because subjects are already processing one task and need more brain resources to manage the second task presented after the first. In conclusion, this study suggests that the power increase in the 4.5-9 Hz frequency band in the frontal area is related to driver distraction and represents the strength of distraction in real-life driving.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

CTL conceived the main idea of this study, participated in its design, and led the team to complete it. SAC participated in the design of the study, the acquisition of data, the analysis and interpretation of data, and the revision of the manuscript for submission. TTC participated in the design of the study and performed the statistical analysis. HZL participated in the design of the study and drafted the manuscript. LWK conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript.
[ "Background", "Methods", "Subjects", "Recordings and experimental conditions", "Statistical analysis of behavior performance", "Measurement of distraction effects in dual-task EEG time series", "Results", "Behavior performance", "Independent component clustering", "Frontal and left motor clusters", "Discussion", "Frontal cluster", "Motor cluster", "Brain dynamics related to behavior performance", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Driver distraction has been identified as the leading cause of car accidents. The U.S. National Highway Traffic Safety Administration had reported driver distraction as a high priority area about 20-30% of car accidents [1]. Distraction during driving by any cause is a significant contributor to road traffic accidents [2,3]. Driving is a complex task in which several skills and abilities are simultaneously involved. Distractions found during driving are quite widespread, including eating, drinking, talking with passengers, using cell phones, reading, feeling fatigue, solving problems, and using in-car equipment. Commercial vehicle operators with complex in-car technologies also cause an increased risk as they may become increasingly distracting in the years to come [4,5]. Some literature studied the behavioral effect of driver's distraction in car. Tijerina showed driver distraction from measurements of the static completion time of an in-vehicle task [6]. Similarly, distraction effects caused by talking on cellular phones during driving have been a focal point of recent in-car studies [7-9]. Experimental studies have been conducted to assess the impact of specific types of driver distraction on driving performance. Though these studies generally reported significant driving impairment, simulator studies cannot provide information about accidents due to impairment resulting in hospitalization of the driver [10,11]. To provide information before the occurrence of crashes, the drivers' physiological responses are investigated in this paper. However, monitoring drivers' attention-related brain resources is still a challenge for researchers and practitioners in the field of cognitive brain research and human-machine interaction.\nRegarding neural physiological investigation, some literature focused on the brain activities of \"divided attention,\" referring to attention divided between two or more sources of information, such as visual, auditory, shape, and color stimuli. Positron emission tomography (PET) measurements were taken while subjects discriminated among shape, color, and speed of a visual stimulus under conditions of selective and divided attention. The divided attention condition activated the anterior cingulated and prefrontal cortex in the right hemisphere [12]. In another study, functional magnetic resonance imaging (fMRI) was used to investigate brain activity during a dual-task (visual stimulus) experiment. Findings revealed activation in the posterior dorsolateral prefrontal cortex (middle frontal gyrus) and lateral parietal cortex [13]. In addition, several neuroimaging studies showed the importance of the prefrontal network in dual-task management [14,15]. Some studies investigated traffic scenarios recorded the EEG to compare P300 amplitudes [16]. During simulated traffic scenarios, resource allocation was assessed as an event-related potential (ERP) novelty oddball paradigm [17]. In these EEG studies, however, only the time course was analyzed. Deiber took one more step to analyze the relation between time and frequency courses [18]. Their study used EEG to investigate mental arithmetic-induced workload and found theta band power increases in areas of the frontal cortex. Despite so much research on brain activities, the above-mentioned studies only investigated brain activities during dual-task interactions without considering the SOA problem during driving, which is with the temporal gap between presentations of two stimuli. 
When dual tasks are presented within a short SOA, the response time of each task is typically lower than when they are presented within a longer SOA [19]. Therefore, the current study investigates the effects of different temporal relationships between the stimuli.

Clinical practice as well as basic scientific studies have been using the EEG for 80 years. Presently, EEG measurement is widely used as a standard procedure in research such as sleep studies, epileptic abnormalities, and other disorder diagnoses [20,21]. Compared to another widely used neuroimaging modality, fMRI, the EEG is much less expensive and has superior temporal resolution for investigating SOA problems. To avoid interference and decrease risks while operating a vehicle on the road, researchers have adopted driving simulations for vehicle design, and studies of drivers' behavior and cognitive states are expanding rapidly [22]. However, static driving simulation cannot fully recreate real-life driving conditions, such as the vibrations experienced when driving an actual vehicle on the road. Therefore, VR-based simulation with a motion platform was developed [23,24]. This VR technique allows subjects to interact directly with a virtual environment rather than with only monotonic auditory or visual stimuli. Integrating realistic VR scenes with visual stimuli makes it easy to study the brain response to attention during driving. Hence, VR-based simulation combined with EEG monitoring has become a beneficial innovation in cognitive engineering research in recent years.

The main goal of this study is to investigate the brain dynamics related to distraction by using EEG and a VR-based realistic driving environment. Unlike previous studies, the experimental design has three main characteristics. First, the SOA design, with different appearance times of the two tasks, allows the driver's behavioral and physiological responses to be investigated under multiple conditions and multiple distraction levels. Second, ICA-based analysis methods are used to extract the brain responses and cortical locations related to distraction. Third, this study investigates the interaction and effects of dual-task-related brain activities, in contrast to a single task.

Methods

Subjects

Fifteen healthy participants (all males), between 20 and 28 years of age, were recruited from the university population. They had normal or corrected-to-normal vision, were right handed, held a driver's license, and reported being free from psychiatric or neurological disorders. Written informed consent was obtained prior to the study.

Each subject participated in four simulated sessions inside a car, with hands on the steering wheel to keep the car in the center of the third lane (numbered from the left lane) in a VR surround scene on a four-lane freeway [23]. Thirty scalp electrodes (Ag/AgCl electrodes with a unipolar reference at the right earlobe) of the NuAmp system (Compumedics Ltd., VIC, Australia) were mounted on the subject's head to record the EEG [25]. The electrodes were placed according to a modified international 10-20 system, and the contact impedance between the electrodes and the scalp was kept below 10 kΩ. Before the first session, each subject completed a 15-30 minute practice session. In each session, subjects performed simulated freeway driving lasting fifteen minutes while the corresponding EEG signals were synchronously recorded. For these four-session experiments, subjects were required to rest for ten minutes between every two sessions to avoid fatigue.
Recordings and experimental conditions

For this study, a simulated freeway scene was built using VR technology with a WTK library on a 6-DOF motion platform [23]. The four-lane freeway scene was displayed in a surround environment. Since the main purpose of this paper is to investigate distraction effects in dual-task conditions, two tasks were designed, involving unexpected car deviations and mathematical questions. In the driving task, the car frequently and randomly drifted from the center of the third lane, and subjects were required to steer it back to the center of that lane. This task mimicked the effects of driving on a non-ideal road surface. In the mathematical task, two-digit addition equations were presented to the subjects. The displayed answers were designed to be either valid or invalid, and subjects were asked to press the right or left button on the steering wheel for correct or incorrect equations, respectively. Correct and incorrect equations were presented in a 50-50 ratio. The mathematical task was chosen to allow control over task demands [26], and all drivers could perform it well without training.

To investigate the effects of SOA between the two tasks, combinations of the two tasks were designed to provide different distracting conditions to the subjects, as shown in Figure 1. Five cases were developed to study the interaction of the two tasks; the bottom insets of Figure 1 show the onset sequences of the two tasks. In this way, the study investigated the relationship between the math task and the driving task and how the two tasks affected each other under the SOA conditions.

Figure 1. The illustration shows the temporal relationship between the deviation and math tasks. D: deviation task onset. M: math task onset. (a) Case 1: the math task is presented 400 ms before the deviation task onset. (b) Case 2: the math and deviation tasks occur at the same time. (c) Case 3: the math task is presented 400 ms after the deviation task onset. (d) Case 4: only the math task is presented. (e) Case 5: only the deviation task occurs. The bottom insets show the onset sequences of the two tasks.
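To make the five SOA conditions concrete, the following minimal Python sketch encodes the onset relationship between the two events in each case; the dictionary name and structure are purely illustrative assumptions and are not part of the original experimental software.

```python
# Minimal sketch (illustrative only) of the five SOA conditions described above.
# Onsets are in ms relative to the trial's first event; None means the task is absent.
CASES = {
    1: {"math_onset_ms": 0,    "deviation_onset_ms": 400},   # math 400 ms before deviation
    2: {"math_onset_ms": 0,    "deviation_onset_ms": 0},     # simultaneous
    3: {"math_onset_ms": 400,  "deviation_onset_ms": 0},     # math 400 ms after deviation
    4: {"math_onset_ms": 0,    "deviation_onset_ms": None},  # math task only
    5: {"math_onset_ms": None, "deviation_onset_ms": 0},     # deviation task only
}

def soa_ms(case):
    """Return math onset minus deviation onset, or None for single-task cases."""
    m, d = CASES[case]["math_onset_ms"], CASES[case]["deviation_onset_ms"]
    return None if m is None or d is None else m - d
```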
Statistical analysis of behavior performance

After recording the behavior data, the Statistical Package for the Social Sciences (SPSS) Version 13.0 for Windows is used for significance testing of the behavior data. The response times of the two tasks (the driving deviation and the math equation) are analyzed to study the behavior of the subjects in the experiments.

Using ANOVA (analysis of variance), the significance of the response times of the two tasks is tested for every subject. A non-parametric test is also utilized to study the trends of the behavior data. First, outliers, comprising around 6.57% of all trials, are excluded based on the criterion that the response time falls outside the mean response time plus three times the standard deviation of each single session. Second, the smallest number of trials among the five cases is used as a benchmark, and the same number of trials is randomly selected from each of the other cases. Third, a single-task case is taken as the baseline to normalize the behavior data as Xi/Xmean (Xi: mean response time in case i; Xmean: mean response time in the single-task case). For example, to assess the distraction effects on the math equation, case 4 (the single math task) is the baseline.
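The following is a minimal NumPy sketch of the three preprocessing steps just described (outlier exclusion at mean + 3 SD, trial-count matching, and normalization to the single-task mean); the array names and the random seed are assumptions, since the original analysis was run in SPSS.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen only for reproducibility (assumption)

def exclude_outliers(rt):
    """Drop trials whose response time exceeds the session mean + 3 SD."""
    rt = np.asarray(rt, dtype=float)
    return rt[rt <= rt.mean() + 3 * rt.std()]

def match_trial_counts(case_rts):
    """Randomly subsample every case to the smallest trial count across cases."""
    n_min = min(len(rt) for rt in case_rts.values())
    return {case: rng.choice(rt, size=n_min, replace=False)
            for case, rt in case_rts.items()}

def normalize_to_single_task(case_rts, baseline_case):
    """Express each case's mean RT as the ratio X_i / X_mean of the single-task mean."""
    x_mean = case_rts[baseline_case].mean()
    return {case: rt.mean() / x_mean for case, rt in case_rts.items()}

# Usage sketch: math-task RTs for cases 1-4, with case 4 (single math task) as baseline
# case_rts   = {1: rt1, 2: rt2, 3: rt3, 4: rt4}
# cleaned    = {c: exclude_outliers(rt) for c, rt in case_rts.items()}
# matched    = match_trial_counts(cleaned)
# normalized = normalize_to_single_task(matched, baseline_case=4)
```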
Measurement of distraction effects in dual-task EEG time series

EEG epochs are extracted from the recorded EEG signals, which are digitized with 16-bit quantization at a sampling rate of 500 Hz. The data are first preprocessed with a low-pass filter with a cut-off frequency of 50 Hz to remove line noise and other high-frequency noise, and with a high-pass filter with a cut-off frequency of 0.5 Hz to remove DC drift. This study adopts ICA to separate independent brain sources [27-29]. ERSP analysis is then applied to the independent component (IC) signals (the separated brain sources) to transform them into the time-frequency domain for the event-related frequency study. Finally, the stability of component activations and scalp topographies of meaningful components is investigated with component clustering. Because the different cases combine the driving and math tasks in various ways, EEG responses from the five cases are extracted separately.
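A minimal SciPy sketch of the band-limiting described above (0.5 Hz high-pass and 50 Hz low-pass on 500 Hz data) is shown below; the filter order and zero-phase filtering are assumptions, since the original report specifies only the cut-off frequencies.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 500.0  # sampling rate (Hz), as stated above

def preprocess_eeg(data, fs=FS, hp=0.5, lp=50.0, order=4):
    """Band-limit multi-channel EEG: high-pass at `hp` Hz, low-pass at `lp` Hz.

    data : array of shape (n_channels, n_samples)
    The 4th-order Butterworth filters and zero-phase filtfilt are assumptions.
    """
    b_hp, a_hp = butter(order, hp / (fs / 2), btype="highpass")
    b_lp, a_lp = butter(order, lp / (fs / 2), btype="lowpass")
    data = filtfilt(b_hp, a_hp, data, axis=-1)  # remove DC drift
    data = filtfilt(b_lp, a_lp, data, axis=-1)  # remove line/high-frequency noise
    return data
```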
EEG source segregation, identification, and localization are difficult because EEG data collected from the human scalp reflect activity mixed from large brain areas. Although the conductivities of the skull and brain differ, the spatial "smearing" of EEG data caused by volume conduction does not introduce a significant time delay, which suggests that the ICA algorithm is suitable for performing blind source separation on EEG data. The first applications of ICA to biomedical time series analysis were presented by Makeig and Inlow [30]; their report shows the segregation of eye movements from brain EEG phenomena and separates EEG data into constituent components defined by spatial stability and temporal independence. Subsequent technical experiments demonstrated that ICA could also be used to remove artifacts from both continuous and event-related (single-trial) EEG data [27,28]. Multi-channel EEG recordings are presumed to be mixtures of underlying brain sources and artifactual signals. It is assumed that (a) the mixing medium is linear and propagation delays are negligible, (b) the time courses of the sources are independent, and (c) the number of sources equals the number of sensors, so that with N sensors the ICA algorithm can separate N sources [27].
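The original decomposition was presumably performed with EEGLAB's ICA routines in MATLAB; the Python sketch below only illustrates the same idea with scikit-learn's FastICA, so the function and variable names are assumptions rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

def decompose_eeg(data):
    """Blind source separation of band-limited EEG.

    data : array of shape (n_channels, n_samples)
    Returns (sources, mixing), where `sources` holds one independent component
    per row and the columns of `mixing` give each component's scalp projection.
    """
    ica = FastICA(n_components=data.shape[0], random_state=0)
    sources = ica.fit_transform(data.T).T   # (n_components, n_samples)
    mixing = ica.mixing_                    # (n_channels, n_components) scalp maps
    return sources, mixing

# Usage sketch with 30-channel data sampled at 500 Hz:
# sources, mixing = decompose_eeg(preprocess_eeg(raw_data))
```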
The time sequences of the ICA component signals are subjected to a Fast Fourier Transform with overlapping moving windows, and the spectrum in each epoch is smoothed by a 3-window (768-point) moving average to reduce random error. The spectrum prior to event onset is taken as the baseline spectrum for every epoch, and its mean is subtracted from the power spectra after stimulus onset so that the spectral "perturbation" can be visualized. This procedure is applied to every epoch, and the results are averaged to yield ERSP images [31]. These measures evaluate the averaged dynamic changes in the amplitude of the broad-band EEG spectrum as a function of time following cognitive events. The ERSP images mainly show spectral differences after an event, since the baseline spectrum prior to event onset has been removed. After performing a bootstrap analysis on the ERSP (with a significance level of typically 0.01, 0.03, or 0.05; here 0.01 was applied), only statistically significant (p < 0.01) spectral changes are shown in the ERSP images; non-significant time/frequency points are masked (replaced with zero). Consequently, any perturbations in the frequency domain become relatively prominent.

To study the cross-subject stability of the ICA decomposition, components from multiple subjects are clustered based on their spatial distributions and EEG characteristics. However, components from different subjects differ in many ways, such as scalp maps, power spectra, ERPs, and ERSPs. Some studies have attempted to solve this problem by calculating similarities among different ICs [32-34]. Following these studies, ICs of interest are selected and clustered semi-automatically based on their scalp maps, dipole source locations, and within-subject consistency. To match scalp maps of ICs within and across subjects, the gradients of the IC scalp maps from different sessions of the same subject are computed and grouped together based on the highest correlations of the gradients over the common electrodes retained in all sessions. For dipole source locations, DIPFIT2 routines from EEGLAB are used to fit single-dipole source models to the remaining IC scalp topographies using a four-shell spherical head model [35]. In the DIPFIT software, the spherical head model is co-registered with an average brain model (Montreal Neurological Institute), and approximate Talairach coordinates are returned for each equivalent dipole source.
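A minimal NumPy/SciPy sketch of the ERSP computation described above is given here: a moving-window spectrogram per epoch, dB baseline subtraction using the pre-onset interval, averaging across epochs, and a simple bootstrap mask. The window length, overlap, and this particular bootstrap scheme are assumptions; the original study used an EEGLAB-style ERSP procedure whose exact parameters are not reported.

```python
import numpy as np
from scipy.signal import spectrogram

def ersp(epochs, fs=500.0, t_onset=1.0, nperseg=256, noverlap=192,
         n_boot=200, alpha=0.01, rng=np.random.default_rng(0)):
    """Event-related spectral perturbation with a simple bootstrap mask.

    epochs : array of shape (n_epochs, n_samples), single-trial activity of one IC
    t_onset : event onset in seconds from epoch start (pre-onset part = baseline)
    """
    f, t, sxx = spectrogram(epochs, fs=fs, nperseg=nperseg, noverlap=noverlap, axis=-1)
    power_db = 10.0 * np.log10(sxx + 1e-20)            # (n_epochs, n_freqs, n_times)
    base_cols = t < t_onset
    baseline = power_db[:, :, base_cols].mean(axis=-1, keepdims=True)
    rel = power_db - baseline                           # per-epoch baseline-corrected power
    ersp_img = rel.mean(axis=0)                         # average over epochs

    # Bootstrap null: resample baseline columns to estimate chance-level perturbations
    null = np.empty((n_boot, len(f)))
    base_idx = np.flatnonzero(base_cols)
    for i in range(n_boot):
        cols = rng.choice(base_idx, size=base_cols.sum(), replace=True)
        null[i] = rel[:, :, cols].mean(axis=(0, 2))
    lo, hi = np.percentile(null, [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    mask = (ersp_img < lo[:, None]) | (ersp_img > hi[:, None])
    return f, t, np.where(mask, ersp_img, 0.0)          # non-significant points set to 0
```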
Results

Behavior performance

To investigate the overall behavior index, this study uses nonparametric tests because the response-time distributions are strongly skewed by several extremely large scores. First, trials are randomly selected so that all cases contain the same number of trials. Then, the response times of the deviation and math tasks in the five cases are normalized with respect to the single-deviation and single-math cases, respectively. SPSS software is used for the Friedman test, and the results are shown in Figure 2. Dual-task cases are marked for easy discrimination from single-task cases.

Figure 2. Bar charts of normalized response times, (a) for the math task and (b) for the deviation task, across the 15 subjects. Filled black bar: case 1; dark gray bar: case 2; light gray bar: case 3; open bar: single-task case. The response time for the math task in the dual-task cases (cases 1, 2, and 3) is significantly longer than in the single-task case (case 4); the shortest response time to the math onset is in case 4. The response time for the deviation task in case 1 is significantly shorter than in the other cases; the longest response time to the deviation onset is in case 5. The bottom insets show the onset sequences of the two tasks.

To determine how the cases differ, the Student-Newman-Keuls test is used as the post hoc test (Table 1). For the math-task response times in cases 1-4, Friedman's ANOVA gives χ2(3) = 903.926, p < 0.01. The Student-Newman-Keuls test shows three groups: case 1 together with case 2, case 3, and case 4, with the math-task response time in case 1 being the longest. For the deviation-task response times in cases 1-3 and case 5, Friedman's ANOVA gives χ2(3) = 493.98, p < 0.01. The Student-Newman-Keuls test shows two groups: case 1, and the other cases, with the deviation-task response time in case 1 being the shortest.

Table 1. The normalized response times to the deviation and math tasks.
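As a minimal SciPy illustration of the nonparametric comparison described above: the original analysis was run in SPSS, and SciPy has no Student-Newman-Keuls routine, so Bonferroni-corrected pairwise Wilcoxon tests are shown here as a stand-in for the post hoc step; all names are assumptions.

```python
from itertools import combinations
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

def compare_cases(rt_by_case, alpha=0.01):
    """Friedman test across cases, then Bonferroni-corrected pairwise Wilcoxon tests.

    rt_by_case : dict mapping case label -> 1-D array of normalized RTs,
                 with matching trial counts across cases (trials treated as blocks).
    """
    labels = sorted(rt_by_case)
    stat, p = friedmanchisquare(*[rt_by_case[c] for c in labels])
    pairs = list(combinations(labels, 2))
    posthoc = {}
    for a, b in pairs:
        _, p_ab = wilcoxon(rt_by_case[a], rt_by_case[b])
        posthoc[(a, b)] = min(1.0, p_ab * len(pairs))   # Bonferroni correction
    return {"friedman_chi2": stat, "friedman_p": p,
            "significant_pairs": [k for k, v in posthoc.items() if v < alpha]}

# Usage sketch for the math task (cases 1-4 after normalization):
# result = compare_cases({1: rt1, 2: rt2, 3: rt3, 4: rt4})
```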
Independent component clustering

EEG epochs are extracted from the recorded EEG signals, and ICA is then used to decompose them into independent brain sources.
Given the distraction effects in this study, many brain resources are involved in the experiment. In particular, the motor component is active when subjects steer the car, while attention-related activations appear in the frontal component at the same time. Therefore, ICA components including the frontal and motor components are selected for IC clustering, so that cross-subject data can be analyzed on the basis of their EEG characteristics.

First, IC clustering groups the large number of components from multiple sessions and subjects into several significant clusters. K-means cluster analysis is applied to the normalized scalp topographies and power spectra of all 450 components (30 channels × 15 subjects). The cluster analysis identifies at least 7 component clusters with similar power spectra and scalp projections: frontal, central midline, parietal, left/right motor, and left/right occipital. Table 2 gives the number of components in the different clusters. This investigation uses the frontal and left motor components to analyze distraction effects. Figure 3 shows the scalp maps and equivalent dipole source locations for the frontal and left motor clusters. Based on this result, the EEG sources of different subjects in the same cluster are taken to reflect the same physiological component.

Table 2. The number of components in the different clusters.

Figure 3. Scalp maps and equivalent dipole source locations after IC clustering across the 15 subjects. (a) The frontal components and (b) the left motor components. There are 14 subjects in the frontal cluster and 11 subjects in the left motor cluster. The grand scalp map is the mean of all component maps in each cluster; the smaller maps are the individual scalp maps. The right panels (c) and (d) show the 3-D dipole source locations (colored spheres) and their projections onto average brain images. The colored source locations correspond to the scalp maps labeled in the same color above.
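The following scikit-learn sketch illustrates the k-means grouping described above; concatenating a normalized scalp map with a normalized power spectrum and using 7 clusters follows the text, while everything else (function names, z-scoring details) is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_components(scalp_maps, spectra, n_clusters=7, random_state=0):
    """Group ICs from all subjects/sessions by scalp topography and power spectrum.

    scalp_maps : array (n_components, n_channels)   - one scalp projection per IC
    spectra    : array (n_components, n_freq_bins)  - log power spectrum per IC
    Returns an array of cluster labels, one per component.
    """
    def zscore(x):
        return (x - x.mean(axis=1, keepdims=True)) / (x.std(axis=1, keepdims=True) + 1e-12)

    features = np.hstack([zscore(scalp_maps), zscore(spectra)])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    return km.fit_predict(features)

# Usage sketch: 450 components from 15 subjects x 30 channels
# labels = cluster_components(scalp_maps, spectra)   # 7 clusters, as in Table 2
```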
Frontal and left motor clusters

Figure 4a shows the cross-subject averaged ERSP in the frontal cluster for the five cases. Figure 4 also reveals significant (p < 0.01) power increases related to the math task, demonstrating that the power increases in the frontal cluster are related to the math task. The theta power increases in the three dual-task cases (cases 1-3) differ slightly from each other; compared to the single math task (case 4), the power in the dual-task cases is stronger, with the strongest increase in case 1. Power increases also appear in the beta band; these occur only in the math-task conditions and are time-locked to the math onsets.

Figure 4. ERSP images of the frontal cluster for the five cases. (a) ERSP images of the frontal cluster for the five cases; the right column shows the onset sequences of the two tasks. Color bars indicate the magnitude of the ERSPs. Red solid lines show the onset of the math task, and red dashed lines the mean response time for the math task. Blue solid lines show the onset of the deviation task, and blue dashed lines the mean response time for the deviation task. The red circle indicated by the red arrow in case 2 marks where the red and blue solid lines coincide. (b) Latencies calculated from (a) as the time from the math task onset to the first occurrence of a power increase; open bars represent latencies in the theta (4.5-9 Hz) band and gray bars latencies in the beta (11-15 Hz) band. (c) Comparison of total power between cases in the cross-subject (14 subjects) averaged ERSP images of the frontal cluster; total power is calculated by summing all power increases within the same temporal period and frequency band. Open bars represent total power in the theta band and gray bars total power in the beta band.

Figures 4b and 4c compare the latency and total power of the four cases in Figure 4a. The latencies of the power increases in the two frequency bands differ with the SOA: the shortest latencies in both bands occur in case 1, and the longest theta-band latency occurs in case 4. The amount of theta-band power increase also differs with the SOA, with the largest increase occurring in case 1.

Figure 5a shows the cross-subject averaged ERSP in the left motor cluster for the five cases. Significant (p < 0.01) power suppressions appear around the event onsets (at 0 ms) and end at different times depending on the case.
In case 4, the alpha and beta power suppressions continue until the red dashed line, which indicates the mean response time for the math task. Compared with case 4, the alpha and beta power suppressions in case 5 are stronger and last longer. In the other cases, the alpha and beta power suppressions continue after the blue dashed lines. This phenomenon is suggested to be related to steering the car back to the center of the third lane.

Figure 5. ERSP images of the left motor cluster for the five cases. (a) ERSP images of the left motor cluster for the five cases; the right column shows the onset sequences of the two tasks. Color bars indicate the magnitude of the ERSPs. Red solid lines show the onset of the math task, and red dashed lines the mean response time for the math task. Blue solid lines show the onset of the deviation task, and blue dashed lines the mean response time for the deviation task. The red circle indicated by the red arrow in case 2 marks where the red and blue solid lines coincide. (b) Latencies calculated from (a) as the time from the deviation task onset to the first occurrence of a power suppression; open bars represent latencies in the alpha (8-14 Hz) band and gray bars latencies in the beta (16-20 Hz) band. (c) Comparison of total power between cases in the cross-subject (11 subjects) averaged ERSP images of the left motor cluster; total power is calculated by summing all power suppressions within the same temporal period and frequency band. Open bars represent total power in the alpha band and gray bars total power in the beta band.

Figures 5b and 5c compare the latency and total power of the four cases in Figure 5a. The beta-band power suppression latencies differ with the SOA: the shortest suppression latency occurs in case 1 and the longest occurs in case 5. The amount of alpha-band power suppression also differs with the SOA: the strongest suppression occurs in case 5 (the single driving task) and the weakest in case 4 (the single math task).

Figures 6a and 6d show the ERSPs in the frontal and left motor clusters without a significance test. Columns (b) and (e) show the differences among the three dual-task cases; columns (c) and (f) show the differences between single- and dual-task cases. In columns (b), (c), (e), and (f), a Wilcoxon signed-rank test is used to retain the regions with significant power, marked by black circles. Columns (b) and (c) compare the power increases between cases; the retained regions show greater power increases in the single-task case than in the dual-task cases. Columns (e) and (f) compare the power suppressions between cases; the retained regions show greater power suppressions in the dual-task cases than in the single-task case.

Figure 6. ERSPs without a significance test and the differences between cases. Column (a) shows the ERSP in the frontal cluster without a significance test, containing the full detail of cases 1, 2, 3, and 4. Column (b) shows the differences among the three dual-task cases in column (a). Column (c) shows the differences between single- and dual-task cases in column (a).
Column (d) shows the ERSP in the left motor cluster without a significance test, which contains all the details of case 1, case 2, case 3, and case 5. Column (e) shows the differences among the three dual-task cases in column (d). Column (f) shows the differences between single- and dual-task cases in column (d). A Wilcoxon signed-rank test (p < 0.01) is used for the statistical tests in (b), (c), (e), and (f).", "To investigate the overall behavior index, this study uses nonparametric tests because the score distributions are strongly skewed by several extremely large values. First, trials are randomly selected so that all cases contain the same number of trials. Then, the response times of the deviation and math tasks in the five cases are normalized with respect to the single-deviation and single-math cases, respectively. SPSS software is used for the Friedman test, the results of which are shown in Figure 2 (an illustrative re-implementation of this step follows this section). Dual-task cases are marked for easy discrimination from single-task cases.\nBar charts of normalized response times, (a) for the math task and (b) for the deviation task, across 15 subjects. The filled black bar: case 1; dark gray bar: case 2; light gray bar: case 3; the open bar: single-task case. The response time for the math task in the dual-task cases (case 1, case 2, and case 3) is significantly longer than that in the single-task case (case 4). The shortest response time to the math onset is in case 4. The response time for the deviation task in case 1 is significantly shorter than those in the other cases. The longest response time to the deviation onset is in case 5. The bottom insets show the onset sequences of the two tasks.\nTo determine which cases account for these differences, the Student-Newman-Keuls test is used as the post hoc test (Table 1). For the math-task response times in cases 1-4, the Friedman's ANOVA test statistic is χ2(3) = 903.926, p < 0.01. The Student-Newman-Keuls test shows three significant groups (case 1 with case 2, case 3, and case 4), in which the response time for the math task in case 1 is the longest. For the deviation-task response times in cases 1-3 and case 5, the Friedman's ANOVA test statistic is χ2(3) = 493.98, p < 0.01. Using the Student-Newman-Keuls test, there are two significant groups (case 1, and the other cases), in which the response time for the deviation task in case 1 is the shortest.\nThe normalized response time to deviation and math",
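The normalization and Friedman testing described above can be reproduced with standard scientific-Python tools. The sketch below is illustrative only: the data layout, the equal-trial subsampling rule, the synthetic response times, and the normalization to the single-task median are assumptions about the procedure, not the original SPSS analysis, and all variable names are hypothetical.

# Illustrative re-implementation of the behavioural analysis described above.
# `rt_math` maps case labels to per-trial response times; the names, the
# synthetic data, and the normalisation rule are assumptions, not the SPSS code.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)

def equalise_trials(rt_by_case):
    # Randomly subsample every case to the smallest common trial count.
    n = min(len(v) for v in rt_by_case.values())
    return {c: rng.choice(np.asarray(v), size=n, replace=False)
            for c, v in rt_by_case.items()}

def normalise(rt_by_case, single_task_case):
    # Express each case's response times relative to the single-task median.
    ref = np.median(rt_by_case[single_task_case])
    return {c: v / ref for c, v in rt_by_case.items()}

# Synthetic math-task response times for cases 1-4 (dual-task cases slower).
rt_math = {c: rng.gamma(shape=9.0, scale=0.1, size=200) + off
           for c, off in zip(("case1", "case2", "case3", "case4"),
                             (0.4, 0.3, 0.2, 0.0))}
rt_math = normalise(equalise_trials(rt_math), single_task_case="case4")

stat, p = friedmanchisquare(*rt_math.values())
print(f"Friedman chi-square(3) = {stat:.2f}, p = {p:.3g}")

A post hoc grouping comparable to the Student-Newman-Keuls step would then be applied to the same normalized data, for example with a dedicated multiple-comparison package.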
"EEG epochs are extracted from the recorded EEG signals. Then, ICA is utilized to decompose independent brain sources from the EEG epochs. Because this study concerns distraction effects, many brain sources are engaged in this experiment. In particular, the motor component is active when subjects are steering the car. At the same time, attention-related activations appear in the frontal component. Therefore, ICA components, including the frontal and motor components, are selected for IC clustering to analyze cross-subject data based on their EEG characteristics.\nFirst, IC clustering groups the large number of components from multiple sessions and subjects into several significant clusters. Cluster analysis (k-means) is applied to the normalized scalp topographies and power spectra of all 450 (30 channels × 15 subjects) components from the 15 subjects (an illustrative sketch of this clustering step follows this section). The cluster analysis identifies at least 7 component clusters having similar power spectra and scalp projections. These 7 distinct component clusters consist of frontal, central midline, parietal, left/right motor and left/right occipital clusters. Table 2 gives the number of components in the different clusters. This investigation uses the frontal and left motor components to analyze distraction effects. Figure 3 shows the scalp maps and equivalent dipole source locations for the frontal and left motor clusters. Based on this finding, the EEG sources of different subjects in the same cluster are attributable to the same physiological component.\nThe Number of Components in Different Clusters\nThe scalp maps and equivalent dipole source locations after IC clustering across 15 subjects. (a) The frontal components and (b) the left motor components are shown here. There are 14 subjects in the frontal cluster and 11 subjects in the left motor cluster. The grand scalp map is the mean of all component maps in each cluster. The smaller maps are the individual scalp maps. The right panels (c) and (d) show the 3-D dipole source locations (colored spheres) and their projections onto average brain images. Each colored source location corresponds to the scalp map labeled in the same color above.",
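A minimal sketch of the component-clustering step follows, assuming each independent component is summarized by its normalized scalp map concatenated with a standardized power spectrum. The feature construction, the synthetic placeholder arrays, and the fixed k = 7 are illustrative choices, not the exact EEGLAB pipeline used in the study.

# Sketch of grouping ICA components across subjects with k-means, assuming each
# component is described by its scalp map and power spectrum (synthetic data).
import numpy as np
from sklearn.cluster import KMeans

def component_features(scalp_maps, spectra):
    # scalp_maps: (n_components, n_channels); spectra: (n_components, n_freqs).
    maps = scalp_maps / np.linalg.norm(scalp_maps, axis=1, keepdims=True)
    spec = spectra - spectra.mean(axis=1, keepdims=True)
    spec = spec / (spectra.std(axis=1, keepdims=True) + 1e-12)
    return np.hstack([maps, spec])

rng = np.random.default_rng(1)
scalp_maps = rng.standard_normal((450, 30))   # 30 components x 15 subjects
spectra = rng.standard_normal((450, 50))      # placeholder spectra in dB

features = component_features(scalp_maps, spectra)
labels = KMeans(n_clusters=7, n_init=10, random_state=1).fit_predict(features)
for k in range(7):
    print(f"cluster {k}: {np.sum(labels == k)} components")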
"[SUBTITLE] Frontal cluster [SUBSECTION] The frontal lobe is an area of the brain located at the front of each cerebral hemisphere. The frontal area deals with impulse control, judgment, language production, working memory, motor function, and problem solving [36,37]. In Figure 4a, the large frontal power increases in cases 1-4 arise from solving the math questions. The power increases in the theta (4.5 ~ 9 Hz) and beta (11 ~ 15 Hz) bands appear briefly after the math onset. Figure 4b and 4c show the quantified frontal power latencies and power increases in the four conditions, for the purpose of discussing the EEG dynamics produced by solving the math question (an illustrative computation of these two measures follows this subsection). In the theta band, the shortest latency is found in case 1. The power increases in the three dual-task cases are higher than that in the single-task case, with the greatest power occurring in case 1. These phenomena suggest that dual tasks induce more event-related theta activity and that subjects need more brain resources to accomplish dual tasks. The theta increase is associated with numerous processes such as mental workload, problem solving, encoding, or self-monitoring [34]. Based on this evidence, the study demonstrates that the subjects were distracted under the dual-task conditions in the experiment.\nSince human visual perception needs about 300 ms to register a stimulus (P300 activity), 400 ms between the first and second tasks is sufficient for a subject to perceive the stimulus [38]. In case 1, one task is already being processed in the brain, and subjects need more brain resources to manage the high-priority task presented 400 ms after it. Therefore, the total power in the theta band in case 1 is the highest, as shown in Figure 4c. Accordingly, the theta power increase appears earliest in case 1, as shown in Figure 4b. The early theta response in the frontal area primarily reflects the activation of neural networks involved in allocating attention to the target stimulus [39].\nThe trends of the response times for the math task (Figure 2a) and the EEG theta increases in the frontal cluster (Figure 4c) are consistent with one another. For the single math task, the response time is the shortest and the theta power increase is the weakest. Among the dual-task cases, the longest response time and the greatest theta power increase are in case 1. This evidence suggests that the theta activity of the EEG in the frontal area during dual tasks is related to distraction effects and represents the strength of distraction. In addition, power increases in the beta band appear in all cases. From the ERSP images, these patterns are time-locked to the onset of the math task. Fernández suggested that significant EEG beta band differences in the frontal area are due to a specific component of mental calculation [40].
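The two summary measures discussed above, the latency to the first significant post-onset power change and the total power within a band and time window, can be computed from a masked ERSP image. The sketch below is a rough illustration: the masking convention (non-significant bins set to zero) and the array names are assumptions, not the authors' code.

# Sketch of the latency and total-power measures used in Figures 4b/4c and
# 5b/5c, assuming `ersp` is a (n_freqs, n_times) array in dB whose bins that
# fail the p < 0.01 test have already been masked to zero.
import numpy as np

def band_mask(freqs, lo, hi):
    return (freqs >= lo) & (freqs <= hi)

def onset_latency_ms(ersp, times_ms, freqs, band=(4.5, 9.0), onset_ms=0.0,
                     increase=True):
    # First time at or after `onset_ms` with any significant change in `band`;
    # use increase=False for suppressions in the motor cluster.
    sel = ersp[band_mask(freqs, *band), :]
    sig = np.any(sel > 0, axis=0) if increase else np.any(sel < 0, axis=0)
    idx = np.flatnonzero(sig & (times_ms >= onset_ms))
    return float(times_ms[idx[0]]) if idx.size else float("nan")

def total_band_power(ersp, times_ms, freqs, band, window_ms):
    # Sum all masked power changes within one band and one time window.
    f = band_mask(freqs, *band)
    t = (times_ms >= window_ms[0]) & (times_ms <= window_ms[1])
    return float(ersp[np.ix_(f, t)].sum())

Applied case by case, these two functions yield the kind of latency and total-power comparisons across SOA conditions described above.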
[SUBTITLE] Motor cluster [SUBSECTION] Mu rhythm (μ rhythm) is an EEG rhythm usually recorded from the motor cortex of the dominant hemisphere. It can be suppressed by simple motor activities, such as clenching the fist of the contralateral side, or by passive movement [41-43]. Mu suppression is believed to reflect the desynchronization of large portions of pyramidal neurons in the motor cortex that control hand and arm movements.\nIn this study, the mu (8 ~ 14 Hz) and beta (16 ~ 20 Hz) power suppressions are mostly caused by subjects steering the wheel and pressing buttons, as shown in Figure 5a. The mu suppressions caused by steering the wheel are almost time-locked to the response onset of the driving task in cases 1-3 and case 5. However, the mu suppression caused by pressing the buttons has no clear effect in case 4. In the dual-task cases, the mu suppressions are weaker than in the single-task case. This may be due to competition for the brain resources required by wheel steering and button pressing.\nFigure 5b and Figure 5c show the motor power latencies and power suppressions in four cases, for the purpose of discussing the EEG dynamics caused by the driving task. In (b), the longest latency of beta power suppression is observed in case 5 and the shortest latency appears in case 1. Perhaps motor planning is involved in preparing to steer the wheel while answering the math questions [44]. In (c), the three dual-task power suppressions are weaker than that in the single task. Based on the above evidence, this suggests that math processing occupies more brain resources in the frontal area during the dual-task cases, so less activation is induced in the motor area.\n[SUBTITLE] Brain dynamics related to behavior performance [SUBSECTION] Posner postulated that two tasks performed simultaneously did not interfere with each other's performance when different brain areas were used for these two tasks [45]. However, this study uses two visual-stimulus tasks that compete within the frontal and motor areas for taking action. From the results, these two visual-stimulus tasks interfere with each other in both behavior performance (Figure 2) and brain dynamics (Figure 6).\nIn order to compare brain dynamics among the different cases (Figure 6), a statistical analysis was also conducted to assess the significance of the ERSP differences of the independent clusters under the different cases (a minimal sketch of this comparison is given at the end of this section). Since the true sample distribution of the cluster ERSP was unknown and the sample size was small (N = 14, as 1 of 15 subjects was excluded from the frontal cluster, and N = 11, as 4 of 15 subjects were excluded from the left motor cluster), a nonparametric statistical analysis, a paired-sample Wilcoxon signed-rank test, was employed to assess the statistically significant ERSP differences under the different cases.
The level of significance was set to p < 0.01.\nIn Figure 6c, the significant differences between the dual-task cases and case 4 arise because subjects' reactions to a math question are impaired when they are also facing a car deviation. Lavie demonstrated that dual-task load increases distraction effects [46]. Because of these distraction effects, the behavioral response times are significantly longer in the dual-task cases than in the single-task case. To compare the dual-task cases with one another, their differences are shown in Figure 6b. From the behavior performance in Figure 2, the response times in case 1 and case 2 are the longest, which means that the most distraction occurred in these two cases. This is also shown in Figure 6b. In particular, the distraction effects in case 1 are slightly higher than those in case 2. Therefore, it is suggested that some kinds of sequential tasks produce the same distraction effects as two simultaneous tasks, or even greater ones.\nJong investigated how the performance of two overlapping discrete tasks is organized and controlled [47]. The sequential performance of overlapping tasks can be scheduled in advance and regulated by initially allocating brain resources to one task and subsequently switching to the other task. Thus, in case 1, when the math task is presented to the subject, it occupies the brain resources. Then, because the driving task appears, the brain resources are immediately switched to the driving task and the math task is temporarily dropped. Subsequently, the brain resources are switched back to the math task. This processing consumes the most brain resources and produces the longest response time for the math question; the response time in case 1 is significantly higher than that in case 3 and case 4. The occurrence of distraction effects is due in large part to this switching of brain resources.\nThe fact that no significant differences occur in behavior performance for the driving task between the simultaneous-task case 2 and the single-task case 5 (Figure 2) suggests that the driving task is too simple to require many brain resources. These results also reflect the first priority given to the driving task. The absence of behavioral differences among case 2, case 3 and case 5 also supports this point. Thus, the subjects always chose to respond to the driving task when it occurred, even if they were handling a math task. In case 1, however, the math question is taken as a cue that lets the subjects respond rapidly to the driving task to avoid hitting the wall. This situation shortens the response time for the driving task in case 1, because the subjects are under a high perceptual load. Consistently, Lavie demonstrated that a high perceptual load reduces response time [46]. This also causes case 1 and case 3, which form a symmetrical paradigm, to differ considerably from each other (Figure 2).\nIn Figure 6, the greatest power suppression occurs in case 5 (Figure 6f), which involves only the driving task. The three dual-task cases have the same level of power suppression. The reason why less power suppression occurs in the dual-task cases in the motor area is suggested to be that most brain resources are occupied in the frontal area to deal with the two tasks rather than in the motor area. It is proposed that the motor area is not related to distraction effects. This is supported by the further result that the correlation between EEG dynamics in the motor area and the corresponding response time is low.\nIn summary, this study observes several differences between dual-task and single-task cases. We investigate the relationship between the brain dynamics associated with dual-task management and the behavior performance of the response modalities. It is suggested that the theta activity of the EEG in the frontal area during dual tasks is related to distraction effects and represents the strength of distraction. In addition, the order in which the two tasks of different difficulty appear is an important factor in dual-task performance.",
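The paired, bin-wise Wilcoxon comparison of cluster ERSPs between cases (p < 0.01) described in this discussion can be sketched as below. Bin-wise testing without further correction mirrors the description above; the array shapes and names are illustrative assumptions, not the authors' implementation.

# Sketch of the paired-sample Wilcoxon signed-rank comparison of ERSPs between
# two cases, applied bin by bin and masked at p < 0.01 (names are illustrative).
import numpy as np
from scipy.stats import wilcoxon

def ersp_difference_mask(ersp_a, ersp_b, alpha=0.01):
    # ersp_a, ersp_b: (n_subjects, n_freqs, n_times) per-subject ERSP images.
    # Returns the mean case difference with non-significant bins set to zero.
    diff = (ersp_a - ersp_b).mean(axis=0)
    keep = np.zeros(diff.shape, dtype=bool)
    n_freqs, n_times = diff.shape
    for i in range(n_freqs):
        for j in range(n_times):
            a, b = ersp_a[:, i, j], ersp_b[:, i, j]
            if np.allclose(a, b):
                continue               # identical samples: nothing to test
            _, p = wilcoxon(a, b)
            keep[i, j] = p < alpha
    return np.where(keep, diff, 0.0)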
"This study investigates behavioral and physiological (EEG) responses under multiple cases and multiple distraction levels. First, the response time for mathematical problem solving in the dual-task condition is significantly higher than that in the single-task condition; therefore, distraction effects occur while two tasks are processed during driving. In contrast to the mathematical problems, however, the response times for the driving task are almost the same across the cases, with no differences. This is due to the order of task appearance and the relative difficulty of the two tasks, suggesting that these factors are important considerations in dual-task performance. Second, the theta power increases in the frontal area are larger when the response times are longer. The phasic changes in the theta band in the case in which the mathematical task is presented before the deviation task show the strongest increase, matching that in the simultaneous-task case. This is because subjects are already processing one task in the brain and need more brain resources to manage the second task presented after the first. In conclusion, this study suggests that the power increase in the 4.5 ~ 9 Hz frequency band in the frontal area is related to driver distraction and represents the strength of distraction in real-life driving.", "The authors declare that they have no competing interests.", "CTL conceived the main idea of this study, participated in its design, and led the team to complete it. SAC participated in the design of the study, the acquisition of data, the analysis and interpretation of data, and the revision of the paper for submission. TTC participated in the design of the study and performed the statistical analysis. HZL participated in the design of the study and drafted the manuscript. LWK conceived of the study, participated in its design and coordination, and helped to draft the manuscript. All authors read and approved the final manuscript." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Evaluation of a single round polymerase chain reaction assay using dried blood spots for diagnosis of HIV-1 infection in infants in an African setting.
21332984
The aim of this study was to develop an economical 'in-house' single round polymerase chain reaction (PCR) assay using filter paper-dried blood spots (FP-DBS) for early infant HIV-1 diagnosis and to evaluate its performance in an African setting.
BACKGROUND
An 'in-house' single round PCR assay that targets conserved regions in the HIV-1 polymerase (pol) gene was validated for use with FP-DBS. First, we validated this assay using FP-DBS spiked with cell standards of known HIV-1 copy numbers. Next, we validated the assay by testing archived FP-DBS (N=115) from infants of known HIV-1 infection status. Subsequently, this 'in-house' HIV-1 pol PCR FP-DBS assay was established in Nairobi, Kenya for further evaluation on freshly collected FP-DBS (N=186) from infants, and compared with findings from a reference laboratory using the Roche Amplicor® HIV-1 DNA Test, version 1.5 assay.
METHODS
The HIV-1 pol PCR FP-DBS assay could detect one HIV-1 proviral copy in 38.7% of tests, 2 copies in 46.9% of tests, 5 copies in 72.5% of tests and 10 copies in 98.1% of tests performed with spiked samples. Using the archived FP-DBS samples from infants of known infection status, this assay was 92.8% sensitive and 98.3% specific for HIV-1 infant diagnosis. Using 186 FP-DBS collected from infants recently defined as HIV-1 positive using the commercially available Roche Amplicor v1.5 assay, 178 FP-DBS tested positive by this 'in-house' single-round HIV-1 pol FP-DBS PCR assay. Upon subsequent retesting, the 8 infant FP-DBS samples that were discordant were confirmed as HIV-1 negative by both assays using a second blood sample.
RESULTS
HIV-1 was detected with high sensitivity and specificity using both archived and more recently collected samples. This suggests that this 'in-house' HIV-1 pol FP-DBS PCR assay can provide an alternative cost-effective, reliable and rapid method for early detection of HIV-1 infection in infants.
CONCLUSIONS
[ "Africa", "DNA, Viral", "HIV Infections", "HIV-1", "Humans", "Infant", "Polymerase Chain Reaction", "Sensitivity and Specificity" ]
3050718
null
null
Methods
[SUBTITLE] PCR methods [SUBSECTION] The PCR method described here was a modification of a previously described real-time PCR assay that targets the HIV polymerase (pol) gene [14,15]. Minor changes were made by shifting the primers to minimize non-specific amplification. The primers used were the forward primer pol 151, 5'TACAGTGCAGGGGAAAGAATAATAG3' (corresponding to positions 4809-4833 in HXB2), and the reverse primer pol 40, 5'CTACTGCCCCTTCACCTTTCC3' (positions 4954-4974 in HXB2). The PCR reaction mixture contained 150 μmol/L of MgCl2, 200 μmol/L of dNTP, 1 μmol/L of each primer, 1.5 U of ABI AmpliTaq Gold Polymerase with the appropriate buffer mix (Applied Biosystems), 0.1% Bovine Serum Albumin, and 2 μl of the DNA template. The cycling parameters used were 50°C for 2 min and 95°C for 10 min (1 cycle), followed by 42 cycles of 95°C for 15 s and 60°C. The expected product is 166 bp, which was visualized by electrophoresis through a 2% agarose gel with ethidium bromide staining. We refer to this assay as the HIV-1 pol PCR FP-DBS assay. [SUBTITLE] Extraction of nucleic acids from FP-DBS [SUBSECTION] The nucleic acids were extracted from the FP-DBS by two different methods, depending on the assay performed on the sample. For the regular 'in-house' pol and gag PCR FP-DBS assay, a lysate was prepared by lysing the blood sample from the FP-DBS, using an ethanol-flamed 8 mm hole punch to detach a blood spot, which samples about one quarter of the total blood spot. Nucleic acids from the DBS spot were extracted using a quick lysis approach that required the addition of 100 μl of lysis buffer (10 mM Tris-HCl {pH 8.3}, 50 mM KCl, 100 μg of gelatin, 0.45% Tween 20, 0.45% Nonidet P-40, 60 μg of proteinase K per ml) and lysis for 90 minutes at 56°C, followed by incubation at 95°C for 20 minutes to inactivate the proteinase K, all performed in a single tube to minimize handling [4]. Each tube with lysed sample was then spun at 1000 g for 7 minutes to force the filter paper disc and other debris to the bottom of the tube, and the supernatants containing the lysed samples were either used immediately in PCR or stored at -20°C for later use. A volume of 2 μl of FP-DBS lysate was used in a 50 μl PCR reaction for all studies unless otherwise indicated. To validate the assay, the amplified product from one PCR was verified by sequence analysis as being the desired sequence (not shown). For testing of samples by HIV-1 pol real-time PCR to verify HIV-1 copy numbers, nucleic acids were extracted from the FP-DBS using a standard Qiagen DNA extraction kit.
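As a quick arithmetic check on the primer coordinates quoted in the PCR methods above, the expected amplicon size follows directly from the HXB2 positions; this is only an illustration of the calculation, not part of the assay protocol.

# Expected amplicon size from the HXB2 coordinates quoted above
# (forward primer 4809-4833, reverse primer 4954-4974).
FWD_START = 4809
REV_END = 4974
amplicon_len = REV_END - FWD_START + 1   # inclusive of both primer ends
print(amplicon_len)                      # 166, matching the reported 166 bp product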
[SUBTITLE] FP-DBS samples [SUBSECTION] For the initial studies, FP-DBSs with known quantities of HIV-infected cells were made by spotting approximately 50 μl of HIV-negative blood, along with ACH2 cells, which contain a single integrated copy of HIV-1 proviral DNA per cell [16,17], onto S&S 903 filter paper (Schleicher & Schuell, Keene, NH). HIV-negative blood spots were made from drops of blood with no anticoagulant, mimicking blood collected from an infant heel prick, and allowed to air-dry overnight. HIV-1-infected ACH2 cells were counted on a hemacytometer and diluted in sterile PBS to obtain an expected final 10, 5, 2 or 1 infected cells per 2 μl of eluate, the volume used for each PCR reaction. The quantified cells in PBS (10 μl) were then spotted onto an 8 mm punch of the FP-DBS prepared from HIV-negative blood, allowed to soak in and air-dry. In this case, the total number of cells changed very little from sample to sample, since the added ACH2 cells would represent a very small fraction of the total cells in a dried blood spot. To confirm the quantity of viral copies in the diluted HIV-1-infected ACH2 cell suspension, the DNA was extracted at a later date using the Qiagen DNA extraction kit, and the HIV copy number was quantified using HIV-1 pol real-time PCR [14,15]. Archived FP-DBS were collected on S&S 903 filter paper (Schleicher & Schuell, Keene, NH) from 115 infants from Nairobi, Kenya, of known infection status (56 HIV-1 positive and 59 HIV-1 negative), air-dried and shipped to Seattle. These archived FP-DBS, which had been stored in envelopes at ambient room temperature with no desiccant in the Seattle laboratory for 3 to 7 years, were tested with the HIV-1 pol PCR FP-DBS assay. The operator was blinded to the infection status of the infant when the HIV-1 pol PCR FP-DBS assay was performed. Subsequently, more recently collected FP-DBS (N = 186), obtained on S&S 903 filter paper (Schleicher & Schuell, Keene, NH) between 2007 and 2009 from infants aged < 1 year, were tested on site in Nairobi, Kenya.
The FP-DBS were prepared by spotting 50 μl of whole blood in EDTA, air-dried overnight and stored in zip-lock bags with a desiccant at ambient room temperatures before use, which was within one month of storage. The infant blood samples for the FP-DBS were obtained as part of NIH-funded research studies with consent from the mothers or the caregivers of the infant, and tested for HIV infection with ethical approval from the Institutional Review board at University of Washington (approval # 98-7407-A) and Fred Hutchinson Cancer Research Center, USA (# 6341), and Kenyatta National Hospital, Kenya (approval # P4/01/2006). For initial studies FP-DBSs with known quantities of HIV-infected cells were made by spotting approximately 50 μl HIV negative blood along with ACH2 cells, which contain a single integrated copy of HIV-1 proviral DNA per cell [16,17], on S&S 903 filter paper (Schleicher & Schuell, Keene, NH). HIV negative blood spots were made from drops of blood with no anticoagulant mimicking blood collected from infant heel prick, and allowed to air-dry overnight. HIV-1 infected ACH2 cells were counted on a hemacytometer and the cells were diluted in sterile PBS to obtain the final expected 10, 5, 2 or 1 infected cells per 2 μl of eluate, which was the volume used for each PCR reaction. The quantified cells in PBS (10 μl) were then spotted onto 8mm punch of the FP-DBS prepared from HIV-negative blood, allowed to soak in and air-dry. In this case, the total number of cells changed very little from sample to sample, since the added ACH2 cells would represent a very small fraction of the total cells in a dried blood spot. To confirm the quantity of viral copies in the diluted HIV-1 infected ACH-2 cell suspension, at a later date, the DNA was extracted using the Qiagen DNA extraction kit, and HIV copy number was quantified using HIV-1 pol real-time PCR [14,15]. Archived FP-DBS, collected on S&S 903 filter paper (Schleicher & Schuell, Keene, NH) from 115 infants from Nairobi, Kenya of known infection status (56 HIV-1 positive and 59 HIV-1 negative), were collected, air-dried and shipped to Seattle. These archived FP-DBS that had been stored at ambient room temperature for 3 to 7 years in envelopes, in the Seattle laboratory with no desiccant were tested with the HIV-1 pol PCR FP-DBS assay. The operator was blinded to the infection status of the infant when the HIV-1 pol PCR FP-DBS assay was performed. Subsequently, FP-DBS (N = 186) from more recent samples collected on S&S 903 filter paper (Schleicher & Schuell, Keene, NH) between the years 2007 to 2009 from infants aged < 1 year were tested on site in Nairobi, Kenya. The FP-DBS were prepared by spotting 50 μl of whole blood in EDTA, air-dried overnight and stored in zip-lock bags with a desiccant at ambient room temperatures before use, which was within one month of storage. The infant blood samples for the FP-DBS were obtained as part of NIH-funded research studies with consent from the mothers or the caregivers of the infant, and tested for HIV infection with ethical approval from the Institutional Review board at University of Washington (approval # 98-7407-A) and Fred Hutchinson Cancer Research Center, USA (# 6341), and Kenyatta National Hospital, Kenya (approval # P4/01/2006).
null
null
null
null
[ "Background", "PCR methods", "Extraction of nucleic acids from FP-DBS", "FP-DBS samples", "Results", "Performance of HIV-1 pol PCR FP-DBS assay with spiked FP-DBS prepared from known HIV-1 copy numbers", "pol PCR FP-DBS assay on stored samples from infants of known HIV-1 infection status", "Comparison of pol PCR FP-DBS assay with commercial DBS-FP assay", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Although interventions to prevent mother-to-child transmission of HIV-1 infection are increasingly implemented as part of national guidelines, the prevalence of pediatric HIV-1 infection remains high in Africa. It is projected that about 1000 new pediatric cases occur daily worldwide, with 90% occurring in sub-Saharan African countries [1,2]. Hence, an accurate economical and reliable early infant diagnosis of HIV-1 infection in Africa has become of paramount importance as such diagnosis can ensure that antiretroviral therapy is promptly provided for those in need. In addition infant HIV-1 diagnosis is the best measure for evaluation of mother-to-child transmission programs and can facilitate appropriate stratification of healthcare services [3].\nMolecular methods such as polymerase chain reaction (PCR) assays are the most sensitive method for infant HIV-1 diagnosis [3-10] because passively acquired maternal antibodies in the infant complicates the use of conventional HIV-1 serologic diagnostic assays. Currently, a variety of validated commercially available and 'in-house' PCR-based methods that detect HIV-1 nucleic acids are available [3,5-8,10-13]. Many of these methods have been adapted for HIV-1 diagnosis using either whole blood, or dried blood spots collected on filter papers (FP-DBS), which are more convenient for collection, transport and storage. However many of these commercial PCR-based assays on FP-DBS for early HIV-1 infant diagnosis are expensive (in the range of $20-$50 per assay), and therefore beyond the reach of the majority of the population that resides in low-resource settings where the epidemic is prevalent [3]. Hence, there has been an urgent need for cheaper and reliable assays for early HIV-1 infant diagnosis.\nPreviously, our laboratory evaluated an 'in-house' PCR assay for HIV diagnosis that relied on a two round, nested PCR amplification of the HIV-1-gag sequences from FP-DBS [4]. The PCR results using FP-DBS showed 100% specificity, and 96% sensitivity (based on quadruplicate testing) compared to results with blood mononuclear cells collected from paired venous blood [4]. However, an assay that relies on two rounds of PCR can be challenging in laboratories that do not have optimal facilities for minimizing PCR contamination.\nHere we describe an inexpensive single round PCR that requires minimal nucleic acid manipulation and compare its performance with the earlier HIV-1-gag PCR assay and the commercial Roche qualitative HIV Amplicor® DNA PCR, version 1.5 assay, which is currently the assay with extensive validation in Africa [3].", "The PCR method described here was a modification of a previously described real-time PCR assay that targets the HIV polymerase (pol) gene [14,15]. Minor changes were made by shifting the primers to minimize non-specific amplification. The primers used were forward primer pol 151 5'TACAGTGCAGGGGAAAGAATAATAG3' (corresponds to positions 4809 - 4833 in HXB2) and the reverse primer pol 40 5'CTACTGCCCCTTCACCTTTCC3' (position 4954- 4974 in HXB2). The PCR reaction mixture contained 150 μmol/L of MgCl2, 200 μmol/L of dNTP, 1 μmol/L of each primer, 1.5U of ABI AmpliTaq Gold Polymerase and appropriate buffer mix (Applied Biosystems), 0.1% of Bovine Serum Albumin, and 2 μl of the DNA template. The cycling parameters used were 50°C for 2 min; 95°C for 10 min, 1 cycle; 95°C for 15s and 60°C for 42 cycles. The expected product is 166bp, which was visualized by gel electrophoresis through 2% agarose and ethidium bromide staining. 
[SUBTITLE] Extraction of nucleic acids from FP-DBS [SUBSECTION] Nucleic acids were extracted from the FP-DBS by two different methods, depending on the assay performed on the sample.

For the regular 'in-house' pol and gag PCR FP-DBS assays, a lysate was prepared from the FP-DBS blood sample using an ethanol-flamed 8 mm hole punch to detach a disc, which samples about one quarter of the total blood spot. Nucleic acids from the disc were extracted using a quick lysis approach: 100 μl of lysis buffer (10 mM Tris-HCl {pH 8.3}, 50 mM KCl, 100 μg of gelatin, 0.45% Tween 20, 0.45% Nonidet P-40, 60 μg of proteinase K per ml) was added and the sample lysed for 90 minutes at 56°C, followed by incubation at 95°C for 20 minutes to inactivate the proteinase K, all performed in a single tube to minimize handling [4]. Each tube with lysed sample was then spun at 1000 g for 7 minutes to force the filter paper disc and other debris to the bottom of the tube, and the supernatants containing the lysed samples were either used immediately in PCR or stored at -20°C for later use. A volume of 2 μl of FP-DBS lysate was used in a 50 μl PCR reaction for all studies unless indicated otherwise. To validate the assay, the amplified product from one PCR was verified by sequence analysis as being the desired sequence (not shown).

For testing of samples by HIV-1 pol real-time PCR to verify HIV-1 copy numbers, nucleic acids were extracted from the FP-DBS using a standard Qiagen DNA extraction kit.

[SUBTITLE] FP-DBS samples [SUBSECTION] For initial studies, FP-DBSs with known quantities of HIV-infected cells were made by spotting approximately 50 μl of HIV-negative blood along with ACH-2 cells, which contain a single integrated copy of HIV-1 proviral DNA per cell [16,17], on S&S 903 filter paper (Schleicher & Schuell, Keene, NH). HIV-negative blood spots were made from drops of blood with no anticoagulant, mimicking blood collected from an infant heel prick, and allowed to air-dry overnight. HIV-1-infected ACH-2 cells were counted on a hemacytometer and diluted in sterile PBS to obtain a final expected 10, 5, 2 or 1 infected cells per 2 μl of eluate, which was the volume used for each PCR reaction. The quantified cells in PBS (10 μl) were then spotted onto an 8 mm punch of the FP-DBS prepared from HIV-negative blood, allowed to soak in, and air-dried. In this case, the total number of cells changed very little from sample to sample, since the added ACH-2 cells represent a very small fraction of the total cells in a dried blood spot. To confirm the quantity of viral copies in the diluted HIV-1-infected ACH-2 cell suspension, the DNA was extracted at a later date using the Qiagen DNA extraction kit and the HIV copy number was quantified using HIV-1 pol real-time PCR [14,15].

Archived FP-DBS, collected on S&S 903 filter paper (Schleicher & Schuell, Keene, NH) from 115 infants from Nairobi, Kenya of known infection status (56 HIV-1 positive and 59 HIV-1 negative), were air-dried and shipped to Seattle. These archived FP-DBS, which had been stored in envelopes at ambient room temperature for 3 to 7 years in the Seattle laboratory with no desiccant, were tested with the HIV-1 pol PCR FP-DBS assay. The operator was blinded to the infection status of the infant when the HIV-1 pol PCR FP-DBS assay was performed.

Subsequently, FP-DBS (N = 186) from more recent samples, collected on S&S 903 filter paper (Schleicher & Schuell, Keene, NH) between 2007 and 2009 from infants aged < 1 year, were tested on site in Nairobi, Kenya. These FP-DBS were prepared by spotting 50 μl of whole blood in EDTA, air-dried overnight and stored in zip-lock bags with a desiccant at ambient room temperature before use, which was within one month of storage.

The infant blood samples for the FP-DBS were obtained as part of NIH-funded research studies with consent from the mothers or caregivers of the infants, and were tested for HIV infection with ethical approval from the Institutional Review Board at the University of Washington (approval # 98-7407-A), the Fred Hutchinson Cancer Research Center, USA (# 6341), and Kenyatta National Hospital, Kenya (approval # P4/01/2006).
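The spiking scheme above implies a simple dilution calculation: to end up with a given number of ACH-2 proviral copies in each 2 μl aliquot of the 100 μl lysate, the 10 μl of PBS spotted onto the punch must carry 50 times that number of cells. The Python sketch below works through that back-calculation; it assumes the full spotted cell input is recovered in the lysate, which is a simplification rather than a statement of the actual protocol yields.

```python
# Hypothetical back-calculation of how many ACH-2 cells (1 provirus per cell) must be
# spotted so that a 2 ul aliquot of the 100 ul lysate carries the target copy number.
LYSATE_UL = 100.0    # lysis buffer volume per punch
ALIQUOT_UL = 2.0     # lysate volume used per PCR
SPOT_UL = 10.0       # PBS volume of counted cells spotted onto the punch

for target_copies_per_pcr in (10, 5, 2, 1):
    cells_per_punch = target_copies_per_pcr * LYSATE_UL / ALIQUOT_UL
    cells_per_ul_pbs = cells_per_punch / SPOT_UL
    print(f"{target_copies_per_pcr} copies/PCR -> "
          f"{cells_per_punch:.0f} cells per punch ({cells_per_ul_pbs:.0f} cells/ul PBS)")
```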
[SUBTITLE] Results [SUBSECTION] In initial studies, various amounts of lysate were tested in the HIV-1 pol FP-DBS PCR to determine whether there was inhibition due to heme and other factors in the lysed sample. As we had seen previously with the nested HIV-1 gag FP-DBS PCR assay [4], addition of 5 μl or more of the FP-DBS lysate to a 50 μl PCR resulted in some inhibition of the reaction (not shown). In other studies, we found this was true for blood collected with no anticoagulant or with EDTA or ACD; the inhibition was even more pronounced using blood collected in heparin (data not shown). For this reason, we used 2 μl of lysate per HIV-1 pol FP-DBS PCR from FP-DBS prepared by spotting whole blood or blood collected in EDTA.

[SUBTITLE] Performance of HIV-1 pol PCR FP-DBS assay with spiked FP-DBS prepared from known HIV-1 copy numbers [SUBSECTION] To examine the ability of this 'in-house' HIV-1 pol PCR assay to detect a range of HIV-1 copy numbers from FP-DBS samples, defined quantities of ACH-2 cells were applied to FP-DBS. For this purpose, ACH-2 cells were counted two different times (A and B) on two separate days (D1 and D2), as shown in Table 1. The total amount added to the FP-DBS was calculated so that 2 μl of the final lysate would be expected to contain 10, 5, 2 or 1 HIV-1 proviral copies. To verify these numbers in parallel, extracted DNA from an aliquot of each tube of manually counted ACH-2 cells was tested in triplicate with the HIV-1 pol real-time PCR assay [14,15]; this assay gave an expected HIV-1 copy number within 2-fold of that predicted by cell count in 14 of 16 cases, and the 2 discrepant cases were at the lowest cell count (Table 1). A total of 40 PCRs were performed on lysate from each cell preparation: in each case, four FP-DBSs were prepared and 10 HIV-1 pol PCRs were performed from every FP-DBS lysate (Table 1).

Table 1. Summary of HIV-1 pol FP-DBS PCR assay performed on DBS spiked with low copy number ACH-2 cells. A total of 40 HIV-1 pol PCR assays were performed on FP-DBS spiked in duplicate (A and B) with known copies of HIV-1 in ACH-2 cells (manually counted and quantified by HIV-1 pol real-time PCR assay) on two different days (D1 and D2). The results of the HIV-1 pol PCR assay are indicated as the average percentage of HIV-1 proviral copies detected from the spiked FP-DBS.

The results of the HIV-1 pol PCR FP-DBS assay showed that one HIV-1 proviral copy was detectable in 38.7% of tests (160 total tests), 2 copies in 46.9% of tests, 5 copies in 72.5% of tests, and 10 copies in 98.1% of tests, as expected based on a Poisson distribution (Table 1). This experiment was repeated a second time on two separate days with similar results (data not shown). Comparable results were also obtained in smaller studies using FP-DBS prepared from PBMCs infected with different HIV subtypes (A, C and D; data not shown), which is consistent with our previous studies showing that, with the same primers, the HIV-1 pol PCR assay in real-time format can detect these HIV subtypes [15]. Overall, these data with unpurified cell lysates in the HIV-1 pol PCR FP-DBS assay compared favourably with results of the real-time HIV-1 pol PCR assay in which nucleic acid was prepared using a Qiagen purification method prior to PCR [14]. These data suggest that the HIV-1 pol PCR FP-DBS assay is able to reliably detect as little as a single copy of HIV provirus from a FP-DBS with minimal nucleic acid purification.
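One way to put the copy-number detection rates reported above in context is a Poisson reference model: if the nominal copy number is the mean number of proviral copies actually present in a 2 μl aliquot, the chance that an aliquot contains at least one copy is 1 − e^(−mean). The Python sketch below computes those reference probabilities alongside the observed rates from Table 1; it is only an illustrative upper bound under perfect recovery and single-copy amplification, not the analysis performed in the study, so observed rates are expected to fall below it.

```python
# Poisson reference: probability that a 2 ul lysate aliquot contains >= 1 proviral copy
# when the nominal (mean) copy number per aliquot is lam. Illustrative only.
import math

observed = {1: 0.387, 2: 0.469, 5: 0.725, 10: 0.981}  # detection rates from Table 1
for lam, obs in observed.items():
    p_at_least_one = 1.0 - math.exp(-lam)
    print(f"{lam:>2} copies/aliquot: Poisson P(>=1 copy) = {p_at_least_one:.2f}, observed = {obs:.2f}")
```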
[SUBTITLE] pol PCR FP-DBS assay on stored samples from infants of known HIV-1 infection status [SUBSECTION] We next evaluated the HIV-1 pol PCR FP-DBS assay using stored FP-DBS from infants of known infection status, as defined by prior testing of two sequential samples with the HIV-1 gag PCR assay [4,18]. These FP-DBS had been prepared by spotting whole blood on the filter paper, air-dried and stored at ambient room temperature for several years. One hundred and fifteen FP-DBS (56 HIV-1 positive and 59 HIV-1 negative) were tested with the operator blinded to the infection status. The lysate was tested in parallel with the two-round, nested HIV-1 gag PCR used previously and the single-round HIV-1 pol PCR FP-DBS assay. The nested HIV-1 gag PCR, which amplified a 142 bp fragment, nearly the same size as the single-round HIV-1 pol PCR product, served as a control for the integrity of the FP-DBS samples that had been stored at ambient room temperature for several years.

Repeat testing of the stored FP-DBS from these infants with the original HIV-1 gag PCR showed 95.6% agreement with the prior testing that established infection status, suggesting that the long-term storage had not significantly compromised the samples. These same samples were tested with the HIV-1 pol PCR FP-DBS assay. The sensitivity and specificity of the HIV-1 pol PCR assay in relation to the known HIV-1 infection status of the infants were 92.8% and 98.3%, respectively (Table 2).

Table 2. A 2 × 2 table for performance of the HIV-1 pol FP-DBS PCR assay on archived FP-DBS samples. Samples were obtained from infants of known HIV-1 infection status (56 HIV-1 positive and 59 HIV-1 negative). The sensitivity of the HIV-1 pol PCR FP-DBS assay on archived FP-DBS was 92.8% and the specificity 98.3%.
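For readers reproducing these figures, sensitivity and specificity follow directly from the 2 × 2 counts (sensitivity = true positives / all infected; specificity = true negatives / all uninfected). The Python sketch below back-calculates the archived-sample result; the counts of 52 detected positives and 58 correctly negative samples are inferred from the reported 92.8% and 98.3% and the group sizes, not stated explicitly in the text.

```python
# Sensitivity/specificity from 2x2 counts; the 52/58 detected counts are inferred
# from the reported percentages (92.8%, 98.3%) and the 56/59 group sizes.
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

tp, fn = 52, 4   # archived FP-DBS from 56 HIV-1 positive infants (inferred split)
tn, fp = 58, 1   # archived FP-DBS from 59 HIV-1 negative infants (inferred split)

print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # ~92.9%, reported as 92.8%
print(f"specificity = {specificity(tn, fp):.1%}")  # ~98.3%
```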
[SUBTITLE] Comparison of pol PCR FP-DBS assay with commercial DBS-FP assay [SUBSECTION] The assay was transferred from the Seattle-based laboratory to a newly established molecular virology laboratory at the University of Nairobi. This laboratory included a PCR set-up room with established standard practices to minimize the potential for introduction of PCR product, plasmid and other possible PCR contaminants. In this laboratory, we tested follow-up FP-DBS samples prepared from whole blood in EDTA collected from infants initially reported as HIV-1 positive using the Roche Amplicor v1.5 assay at the National Reference Laboratory for Early Infant Diagnosis testing at the Kenya Medical Research Institute or at Kenyatta National Hospital, in Nairobi, Kenya, typically within the prior month. For this confirmatory HIV-1 testing, the infant was re-bled and a fresh FP-DBS was prepared from 50 μl of whole blood, air-dried and lysed for PCR analysis. The lysate was then tested in quadruplicate PCRs using both the HIV-1 gag and pol PCRs (see gel picture, Figure 1).

Figure 1. Gel picture of amplified products using the 'in-house' HIV-1 gag and pol FP-DBS PCR assay. The HIV-1 pol (top panel, 160 bp) and gag (lower panel, 152 bp) PCR products are shown. Lanes 1-10 are PCR products from the ACH-2 copy number controls: lanes 1-2 are 100 HIV copies, 3-4 are 10 copies, 5-8 are 2 copies, and 9 and 10 are negative controls using DNA from an uninfected T cell line and water as the PCR template, respectively. PCR results from two FP-DBS samples from infants who tested positive with the Roche Amplicor assay are labeled A1-A4 and B1-B4, with 1-4 indicating quadruplicate tests. Based on the results shown, both infants would be defined as HIV-1 DNA PCR positive according to the testing algorithm for the 'in-house' gag and pol FP-DBS PCR. The first and last lanes in both panels (labeled M) are the molecular weight marker, Hyperladder IV.

As a control, we also included, randomly within the test runs, 25 FP-DBS collected from HIV-1 seronegative adults, as defined by two parallel rapid HIV-1 serological assays. The test operator was blinded to the HIV-1 serostatus of the control samples, and all quadruplicate PCR tests on these samples were negative for both gag and pol products.

Of the 186 samples from infants defined as HIV-1 positive with the Roche Amplicor® HIV-1 DNA assay version 1.5, 178 were confirmed as positive by the HIV-1 pol PCR FP-DBS assay (Figure 2). For the 8 samples that were negative by both the HIV-1 pol and gag PCRs, the infants were re-bled and re-tested using all the tests: the Roche Amplicor® HIV-1 DNA v1.5 assay (at the reference laboratory) and the HIV-1 gag and pol PCR FP-DBS assays. Upon retesting, all 8 infants were identified as HIV-1 negative by all assays, suggesting that the initial results using the commercial Roche Amplicor® assay were false positives. However, we could not determine whether the initial results of the Roche Amplicor® assay for the 8 infants were the result of false-positive tests or whether sample mix-up could have contributed to these results. The HIV-1 gag PCR showed only 87% sensitivity (155 of 178 positives) compared to both the Roche Amplicor® and HIV-1 pol PCR FP-DBS assays.

Figure 2. Experimental approach for performance of the HIV-1 gag and pol FP-DBS PCR assays for detection of HIV-1 infection from FP-DBS obtained from HIV-1 positive infants as defined by the Roche Amplicor v1.5 assay. 186 recently collected FP-DBS that had been determined in the prior ~month to be HIV-1 positive by the Roche Amplicor v1.5 assay were tested by the HIV-1 gag and pol PCR assays. Of these, 178 FP-DBS from the infants were identified as HIV-1 positive. Retesting of subsequent blood samples from the 8 discordant infants showed that they were HIV-1 negative. Thus, the sensitivity of the HIV-1 pol PCR assay was 100% (178 of 178) for detection of confirmed HIV-1 positive infants from FP-DBS.

The infants were all under the age of 12 months. Most (46%) were between 3 and 6 months, but we also sampled younger infants (25% < 3 months) and older infants (29% > 6 months), as shown in Table 3. Of the 178 HIV-1 positive infants as determined by DNA PCR on FP-DBS using all 3 methods (Roche Amplicor® and the 'in-house' gag and pol assays), 110 infants had HIV-1 subtype data available based on pol sequence. The majority (69%) of the infants were identified as infected with HIV-1 subtype A, and the others with subtype D (23%), subtype C (7%) and intersubtype recombinant AD (1%) (Table 4). This is very similar to the subtype distribution in Nairobi [19].

Table 3. Age demographics of infants tested for HIV-1 DNA PCR from freshly collected DBS samples in Nairobi, Kenya.

Table 4. HIV-1 subtypes in infants who tested positive with the HIV-1 pol FP-DBS PCR assay in Nairobi, Kenya. Note: HIV-1 subtypes are based on a 667 bp polymerase fragment; subtype data were available for only 110 of the 178 HIV-1 positive FP-DBS from infants in the Nairobi study.

Overall, all 178 samples confirmed positive with the Roche Amplicor® assay were positive by HIV-1 pol PCR, indicating that the sensitivity and specificity of the HIV-1 pol PCR assay were 100% in this study. As this is based on data from 4 HIV-1 pol PCR tests per sample, we also examined the sensitivity if we considered the results from just the first (single test), the first two (duplicate) or the first three (triplicate) HIV-1 pol PCR tests. Based on our results, the likelihood of missing true positives had we performed the assay singly would be 14.6% (26 of 178), with duplicate testing 6.7% (12/178), with triplicate testing approximately 0.6% (1/178), and with quadruplicate testing none.
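The replicate-testing figures above are simple fractions of the 178 confirmed positives; the sketch below reproduces both the miss rates and the corresponding predicted detection rates (the ~85%, 93% and 99% quoted later in the Discussion). The counts of samples missed by the first k replicates are taken directly from the text.

```python
# Miss rates and implied detection rates when only the first k of the
# quadruplicate HIV-1 pol PCR tests are considered (counts from the text).
confirmed_positives = 178
missed_by_first_k = {1: 26, 2: 12, 3: 1, 4: 0}

for k, missed in missed_by_first_k.items():
    miss_rate = missed / confirmed_positives
    print(f"{k} replicate(s): miss rate {miss_rate:.1%}, detection {1 - miss_rate:.1%}")
# 1 replicate: ~14.6% missed (85.4% detected); 2: 6.7% (93.3%); 3: 0.6% (99.4%); 4: 0% (100%)
```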
[SUBTITLE] Discussion [SUBSECTION] In summary, we describe here a sensitive HIV-1 pol PCR FP-DBS assay for the detection of infant HIV-1 infection. The advantages of this assay include the fact that it requires minimal manipulation of the sample compared to assays that rely on extraction of nucleic acids and nested PCR methods. The method can detect a single copy of HIV provirus and has been validated on HIV-1 sequences of multiple subtypes.

The HIV-1 pol PCR FP-DBS assay was compared to results from historical studies in Seattle as well as to samples from infants who recently tested positive by the Roche Amplicor® HIV-1 DNA Test, version 1.5 assay in Nairobi. The sensitivity and specificity of the HIV-1 pol PCR FP-DBS assay were > 90% on archived samples stored for more than 3 years at room temperature. The combined sensitivity of the HIV-1 pol PCR FP-DBS assay using the archived (N = 56 positive) and recent samples (N = 178 positive) was 98.3% (Table 5). These comparisons are based on quadruplicate testing, which maximizes detection of low HIV copy numbers in a sample. Using this approach, we detected 100% of confirmed positive samples from infants recently identified as HIV positive by the Roche Amplicor® HIV-1 DNA assay. Based on initial field-site testing of FP-DBS from 178 HIV-positive infants, this HIV-1 pol PCR FP-DBS assay with single, duplicate or triplicate PCR testing would be predicted to detect ~85%, 93% and 99% of HIV-1 positive samples, respectively.

Table 5. A 2 × 2 table for performance of the HIV-1 pol FP-DBS PCR assay on all FP-DBS. A total of 115 archived and 211 current FP-DBS samples of known HIV-1 infection status were tested. Overall, the sensitivity of the HIV-1 pol PCR FP-DBS assay on all FP-DBS samples (archived and recent) was 98.3% and the specificity 98.9%. * A total of 56 archived and 178 recent FP-DBS samples obtained from infants. $ A total of 59 archived and 33 recent FP-DBS samples (8 obtained from infants and 25 from adults).

The HIV-1 pol PCR FP-DBS assay samples a lower total volume of blood than the commercial assays, which go through a purification step to remove inhibitors that lengthens and complicates those assays.
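The combined figures in Table 5 can be checked with the same 2 × 2 arithmetic used earlier, pooling the archived and recent samples. As before, the per-group true-positive and true-negative counts (52/56 and 58/59 archived; 178/178 and 33/33 recent) are inferred from the reported results rather than stated outright.

```python
# Pooled sensitivity/specificity across archived + recent FP-DBS (counts inferred
# from the reported per-group results; compare Tables 2 and 5).
tp = 52 + 178          # archived + recent true positives
pos = 56 + 178         # all samples from HIV-1 positive infants
tn = 58 + 33           # archived + recent true negatives
neg = 59 + 33          # all samples from HIV-1 negative individuals

print(f"combined sensitivity = {tp / pos:.1%}")  # ~98.3%, as reported
print(f"combined specificity = {tn / neg:.1%}")  # ~98.9%, as reported
```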
Thus, while the Roche Amplicor® HIV-1 DNA assay typically tests for HIV in ~12.5 μl of blood equivalent (assuming a 50 μl blood spot is processed by adding 200 μl of lysate and 50 μl of that lysate is tested), the HIV-1 pol PCR FP-DBS assay tests only 0.2-0.3 μl of blood equivalent per PCR: only about a quarter of the blood spot (~10-12 μl of blood) is sampled, this is lysed in 100 μl, and 2 μl of lysate is used per PCR. However, this lower blood volume should be adequate to sample HIV-infected cells in an infant sample. Infant blood contains an estimated 75.4 ± 104.3 HIV proviral copies per 1000 PBMC [20]. Assuming that 1 μl of blood contains 1 million total cells, of which 5000 are PBMCs [21], approximately 1000 PBMCs are sampled in each PCR. Thus, each PCR that includes 0.2 μl of infant blood typically contains multiple HIV copies that should be amplified with this single-copy-detection PCR method.
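The volume comparison above reduces to a few multiplications, sketched below in Python. The figure of roughly 75 proviral copies per reaction is simply the product of the sampled blood volume, the assumed PBMC density and the mean proviral load from reference [20]; the large standard deviation on that estimate should be kept in mind.

```python
# Blood equivalents per reaction and the expected proviral copies per PCR,
# using the per-microlitre assumptions quoted in the text.
roche_blood_per_pcr_ul = 50 * (50 / 200)    # 12.5 ul: 50 ul dried spot eluted in 200 ul, 50 ul tested
inhouse_blood_per_pcr_ul = 10 * (2 / 100)   # 0.2 ul: ~10 ul blood (quarter spot) eluted in 100 ul, 2 ul tested

pbmc_per_ul_blood = 5000                    # assumed PBMC density [21]
proviral_copies_per_1000_pbmc = 75.4        # mean estimate from [20]

pbmc_sampled = inhouse_blood_per_pcr_ul * pbmc_per_ul_blood        # ~1000 PBMC per PCR
expected_copies = pbmc_sampled * proviral_copies_per_1000_pbmc / 1000

print(f"blood per PCR: Roche ~{roche_blood_per_pcr_ul:.1f} ul, in-house ~{inhouse_blood_per_pcr_ul:.1f} ul")
print(f"expected HIV copies per in-house PCR: ~{expected_copies:.0f}")  # ~75
```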
Even when performed in quadruplicate, this qualitative DNA assay is economical and costs just a few dollars per patient, compared to some of the commercial assays that are nearly 5 to 10 times more expensive (~20-50 US$). While quadruplicate testing may provide optimal sensitivity, this assay may also be highly sensitive and specific when PCRs are performed in duplicate or triplicate. However, quadruplicate testing may increase the ability of the assay to detect infants destined to become slow progressors, who are estimated to have lower HIV proviral copy numbers (11.8 ± 18.8 HIV copies/1000 PBMC; [20]). Laboratories that adapt this assay may wish to compare the performance of quadruplicate testing versus fewer replicate tests to determine the number of PCR tests that best suits their needs.

Importantly, when transferred to a new laboratory in Nairobi for on-site testing in a setting that applied stringent measures to minimize PCR contamination, the assay showed very high sensitivity and specificity compared to the results of commercial assays from established reference laboratories in the region. Of course, the performance of this assay in other settings may vary depending on the established protocols and expertise of the laboratory, as is true for any PCR-based assay, whether 'in-house' or commercial. One limitation of the results described here is that they focused primarily on samples from infants recently detected as HIV-1 positive by the Roche Amplicor® HIV-1 DNA Test. Further studies of infants born to HIV-positive mothers who are not pre-screened in this manner will be needed to more precisely define the sensitivity and specificity of this assay in a clinical setting in real time. In that case, it will be important not only to verify the findings with a second assay, but also to test a follow-up sample from each infant, because other assays also have limitations in their performance, such as the false-positive results obtained with the Roche Amplicor® HIV-1 DNA Test described here. While further testing by other laboratories will be useful for validating the performance of this assay, these findings suggest that the HIV-1 pol PCR FP-DBS assay is a reliable, rapid and economical method for early detection of HIV-1 infection in infants.

[SUBTITLE] Conclusions [SUBSECTION] There is an urgent need for an economical and reliable assay for early HIV-1 infant diagnosis, especially in low-resource countries. This study validated an economical 'in-house' HIV-1 pol PCR FP-DBS assay that is highly sensitive and specific when compared to the commercial Roche Amplicor® v1.5 FP-DBS assay. The study also highlights the potential for adapting this qualitative DNA-based assay into a rapid point-of-care diagnostic for early infant HIV-1 diagnosis. This single-round pol PCR on FP-DBS can therefore be a useful tool for early infant HIV-1 diagnosis in Africa, especially where the HIV epidemic prevails and resources are limited.

[SUBTITLE] Competing interests [SUBSECTION] The authors declare that they have no competing interests.

[SUBTITLE] Authors' contributions [SUBSECTION] BHC helped design aspects of the study, validated and performed assays, analyzed the data and helped draft the manuscript. SE validated and performed assays, and provided input into the manuscript. MM and MN participated in the study by performing the assay on the samples in the field. SF helped set up the molecular virology laboratory in the field, assisted in transferring the technology and trained staff on the methods. GJ-S and DW, as Principal Investigators of the research project from which clinical samples were obtained, provided the samples in the field for evaluation of the assay and gave input into the study design. JO conceived the idea and led the study design, implementation of the program and drafting and editing of the manuscript. All authors contributed to the data analysis and read and approved the final manuscript.

[SUBTITLE] Pre-publication history [SUBSECTION] The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2431/11/18/prepub
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "PCR methods", "Extraction of nucleic acids from FP-DBS", "FP-DBS samples", "Results", "Performance of HIV-1 pol PCR FP-DBS assay with spiked FP-DBS prepared from known HIV-1 copy numbers", "pol PCR FP-DBS assay on stored samples from infants of known HIV-1 infection status", "Comparison of pol PCR FP-DBS assay with commercial DBS-FP assay", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Although interventions to prevent mother-to-child transmission of HIV-1 infection are increasingly implemented as part of national guidelines, the prevalence of pediatric HIV-1 infection remains high in Africa. It is projected that about 1000 new pediatric cases occur daily worldwide, with 90% occurring in sub-Saharan African countries [1,2]. Hence, an accurate economical and reliable early infant diagnosis of HIV-1 infection in Africa has become of paramount importance as such diagnosis can ensure that antiretroviral therapy is promptly provided for those in need. In addition infant HIV-1 diagnosis is the best measure for evaluation of mother-to-child transmission programs and can facilitate appropriate stratification of healthcare services [3].\nMolecular methods such as polymerase chain reaction (PCR) assays are the most sensitive method for infant HIV-1 diagnosis [3-10] because passively acquired maternal antibodies in the infant complicates the use of conventional HIV-1 serologic diagnostic assays. Currently, a variety of validated commercially available and 'in-house' PCR-based methods that detect HIV-1 nucleic acids are available [3,5-8,10-13]. Many of these methods have been adapted for HIV-1 diagnosis using either whole blood, or dried blood spots collected on filter papers (FP-DBS), which are more convenient for collection, transport and storage. However many of these commercial PCR-based assays on FP-DBS for early HIV-1 infant diagnosis are expensive (in the range of $20-$50 per assay), and therefore beyond the reach of the majority of the population that resides in low-resource settings where the epidemic is prevalent [3]. Hence, there has been an urgent need for cheaper and reliable assays for early HIV-1 infant diagnosis.\nPreviously, our laboratory evaluated an 'in-house' PCR assay for HIV diagnosis that relied on a two round, nested PCR amplification of the HIV-1-gag sequences from FP-DBS [4]. The PCR results using FP-DBS showed 100% specificity, and 96% sensitivity (based on quadruplicate testing) compared to results with blood mononuclear cells collected from paired venous blood [4]. However, an assay that relies on two rounds of PCR can be challenging in laboratories that do not have optimal facilities for minimizing PCR contamination.\nHere we describe an inexpensive single round PCR that requires minimal nucleic acid manipulation and compare its performance with the earlier HIV-1-gag PCR assay and the commercial Roche qualitative HIV Amplicor® DNA PCR, version 1.5 assay, which is currently the assay with extensive validation in Africa [3].", "[SUBTITLE] PCR methods [SUBSECTION] The PCR method described here was a modification of a previously described real-time PCR assay that targets the HIV polymerase (pol) gene [14,15]. Minor changes were made by shifting the primers to minimize non-specific amplification. The primers used were forward primer pol 151 5'TACAGTGCAGGGGAAAGAATAATAG3' (corresponds to positions 4809 - 4833 in HXB2) and the reverse primer pol 40 5'CTACTGCCCCTTCACCTTTCC3' (position 4954- 4974 in HXB2). The PCR reaction mixture contained 150 μmol/L of MgCl2, 200 μmol/L of dNTP, 1 μmol/L of each primer, 1.5U of ABI AmpliTaq Gold Polymerase and appropriate buffer mix (Applied Biosystems), 0.1% of Bovine Serum Albumin, and 2 μl of the DNA template. The cycling parameters used were 50°C for 2 min; 95°C for 10 min, 1 cycle; 95°C for 15s and 60°C for 42 cycles. 
The expected product is 166bp, which was visualized by gel electrophoresis through 2% agarose and ethidium bromide staining. We refer to this assay as the HIV-1 pol PCR FP-DBS assay.\n[SUBTITLE] Extraction of nucleic acids from FP-DBS [SUBSECTION] The nucleic acids were extracted from the FP-DBS by two different methods, depending on the assay performed on the sample.\nFor the regular 'in-house' pol and gag PCR FP-DBS assay, a lysate was prepared by lysing the blood sample from the FP-DBS, using an ethanol-flamed 8mm hole punch to detach a blood spot, which samples about one quarter of the total blood spot. Nucleic acids from the DBS spot were extracted using a quick lysis approach that required addition of 100 μl lysis buffer (10 mM Tris-HCl {pH 8.3}, 50 mM KCl, 100 μg of gelatin, 0.45% Tween 20, 0.45% Nonidet P-40, 60 μg of proteinase K per ml), and lysing for 90 minutes at 56°C, followed by incubation at 95°C for 20 minutes to inactivate the proteinase K, all performed in a single tube to minimize handling [4]. Each tube with lysed sample was then spun at 1000g for 7 minutes to force the filter paper disc and other debris to the bottom of the tube and supernatants containing lysed samples were either immediately used in PCR or stored at -20°C for later use. A volume of 2 μl of lysate from FP-DBS was used in a 50 μl PCR reaction for all studies unless indicated. To validate the assay, the amplified product from one PCR was verified by sequence analysis as being the desired sequence (not shown).\nFor testing of samples by HIV-1 pol real-time PCR to verify HIV-1 copy numbers, nucleic acids were extracted from the FP-DBS using a standard Qiagen DNA extraction kit.
\n[SUBTITLE] FP-DBS samples [SUBSECTION] For initial studies, FP-DBSs with known quantities of HIV-infected cells were made by spotting approximately 50 μl HIV negative blood along with ACH2 cells, which contain a single integrated copy of HIV-1 proviral DNA per cell [16,17], on S&S 903 filter paper (Schleicher & Schuell, Keene, NH). HIV negative blood spots were made from drops of blood with no anticoagulant mimicking blood collected from infant heel prick, and allowed to air-dry overnight. HIV-1 infected ACH2 cells were counted on a hemacytometer and the cells were diluted in sterile PBS to obtain the final expected 10, 5, 2 or 1 infected cells per 2 μl of eluate, which was the volume used for each PCR reaction. The quantified cells in PBS (10 μl) were then spotted onto an 8mm punch of the FP-DBS prepared from HIV-negative blood, allowed to soak in and air-dry. In this case, the total number of cells changed very little from sample to sample, since the added ACH2 cells would represent a very small fraction of the total cells in a dried blood spot. To confirm the quantity of viral copies in the diluted HIV-1 infected ACH-2 cell suspension, at a later date, the DNA was extracted using the Qiagen DNA extraction kit, and HIV copy number was quantified using HIV-1 pol real-time PCR [14,15].\nArchived FP-DBS, collected on S&S 903 filter paper (Schleicher & Schuell, Keene, NH) from 115 infants from Nairobi, Kenya of known infection status (56 HIV-1 positive and 59 HIV-1 negative), were air-dried and shipped to Seattle. These archived FP-DBS, which had been stored at ambient room temperature for 3 to 7 years in envelopes in the Seattle laboratory with no desiccant, were tested with the HIV-1 pol PCR FP-DBS assay. The operator was blinded to the infection status of the infant when the HIV-1 pol PCR FP-DBS assay was performed.\nSubsequently, FP-DBS (N = 186) from more recent samples collected on S&S 903 filter paper (Schleicher & Schuell, Keene, NH) between the years 2007 to 2009 from infants aged < 1 year were tested on site in Nairobi, Kenya. The FP-DBS were prepared by spotting 50 μl of whole blood in EDTA, air-dried overnight and stored in zip-lock bags with a desiccant at ambient room temperatures before use, which was within one month of storage.\nThe infant blood samples for the FP-DBS were obtained as part of NIH-funded research studies with consent from the mothers or the caregivers of the infant, and tested for HIV infection with ethical approval from the Institutional Review board at University of Washington (approval # 98-7407-A) and Fred Hutchinson Cancer Research Center, USA (# 6341), and Kenyatta National Hospital, Kenya (approval # P4/01/2006).", "The PCR method described here was a modification of a previously described real-time PCR assay that targets the HIV polymerase (pol) gene [14,15]. Minor changes were made by shifting the primers to minimize non-specific amplification.
The primers used were forward primer pol 151 5'TACAGTGCAGGGGAAAGAATAATAG3' (corresponds to positions 4809 - 4833 in HXB2) and the reverse primer pol 40 5'CTACTGCCCCTTCACCTTTCC3' (position 4954- 4974 in HXB2). The PCR reaction mixture contained 150 μmol/L of MgCl2, 200 μmol/L of dNTP, 1 μmol/L of each primer, 1.5U of ABI AmpliTaq Gold Polymerase and appropriate buffer mix (Applied Biosystems), 0.1% of Bovine Serum Albumin, and 2 μl of the DNA template. The cycling parameters used were 50°C for 2 min; 95°C for 10 min, 1 cycle; 95°C for 15s and 60°C for 42 cycles. The expected product is 166bp, which was visualized by gel electrophoresis through 2% agarose and ethidium bromide staining. We refer to this assay as the HIV-1 pol PCR FP-DBS assay.", "The nucleic acids were extracted from the FP-DBS by two different methods, depending on the assay performed on the sample.\nFor the regular 'in-house' pol and gag PCR FP-DBS assay, a lysate was prepared by lysing the blood sample from the FP-DBS, using an ethanol-flamed 8mm hole punch to detach a blood spot, which samples about one quarter of the total blood spot. Nucleic acids from the DBS spot were extracted using a quick lysis approach, that required addition of 100 μl lysis buffer (10 mM Tris-HCl {pH 8.3}, 50 mM KCl, 100 μg of gelatin, 0.45% Tween 20, 0.45% Nonidet P-40, 60 μg of proteinase K per ml), and lysing for 90 minutes at 56°C, followed by incubation at 95°C for 20 minutes to inactivate the proteinase K, all performed in a single tube to minimize handling [4]. Each tube with lysed sample was then spun at 1000g for 7 minutes to force the filter paper disc and other debris to the bottom of the tube and supernatants containing lysed samples were either immediately used in PCR or stored at -20°C for later use. A volume of 2 μl of lysate from FP-DBS was used in 50 μl PCR reaction for all studies unless indicated. To validate the assay, the amplified product from one PCR was verified by sequence analysis as being the desired sequence (not shown).\nFor testing of samples by HIV-1 pol real-time PCR to verify HIV-1 copy numbers, nucleic acids were extracted from the FP-DBS using standard Qiagen DNA extraction kit.", "For initial studies FP-DBSs with known quantities of HIV-infected cells were made by spotting approximately 50 μl HIV negative blood along with ACH2 cells, which contain a single integrated copy of HIV-1 proviral DNA per cell [16,17], on S&S 903 filter paper (Schleicher & Schuell, Keene, NH). HIV negative blood spots were made from drops of blood with no anticoagulant mimicking blood collected from infant heel prick, and allowed to air-dry overnight. HIV-1 infected ACH2 cells were counted on a hemacytometer and the cells were diluted in sterile PBS to obtain the final expected 10, 5, 2 or 1 infected cells per 2 μl of eluate, which was the volume used for each PCR reaction. The quantified cells in PBS (10 μl) were then spotted onto 8mm punch of the FP-DBS prepared from HIV-negative blood, allowed to soak in and air-dry. In this case, the total number of cells changed very little from sample to sample, since the added ACH2 cells would represent a very small fraction of the total cells in a dried blood spot. 
To confirm the quantity of viral copies in the diluted HIV-1 infected ACH-2 cell suspension, at a later date, the DNA was extracted using the Qiagen DNA extraction kit, and HIV copy number was quantified using HIV-1 pol real-time PCR [14,15].\nArchived FP-DBS, collected on S&S 903 filter paper (Schleicher & Schuell, Keene, NH) from 115 infants from Nairobi, Kenya of known infection status (56 HIV-1 positive and 59 HIV-1 negative), were air-dried and shipped to Seattle. These archived FP-DBS, which had been stored at ambient room temperature for 3 to 7 years in envelopes in the Seattle laboratory with no desiccant, were tested with the HIV-1 pol PCR FP-DBS assay. The operator was blinded to the infection status of the infant when the HIV-1 pol PCR FP-DBS assay was performed.\nSubsequently, FP-DBS (N = 186) from more recent samples collected on S&S 903 filter paper (Schleicher & Schuell, Keene, NH) between the years 2007 to 2009 from infants aged < 1 year were tested on site in Nairobi, Kenya. The FP-DBS were prepared by spotting 50 μl of whole blood in EDTA, air-dried overnight and stored in zip-lock bags with a desiccant at ambient room temperatures before use, which was within one month of storage.\nThe infant blood samples for the FP-DBS were obtained as part of NIH-funded research studies with consent from the mothers or the caregivers of the infant, and tested for HIV infection with ethical approval from the Institutional Review board at University of Washington (approval # 98-7407-A) and Fred Hutchinson Cancer Research Center, USA (# 6341), and Kenyatta National Hospital, Kenya (approval # P4/01/2006).", "In initial studies, various amounts of lysate were tested in the HIV-1 pol FP-DBS PCR to determine if there was inhibition due to heme and other factors in the lysed sample. As we had seen previously with the nested HIV-1 gag FP-DBS PCR assay [4], addition of 5 μl and greater of the FP-DBS lysate into a 50 μl PCR resulted in some inhibition of the reaction (not shown). In other studies, we found this was true for blood collected in no anticoagulant or with EDTA or ACD; the inhibition was even more pronounced using blood collected in heparin (data not shown). For this reason, we used 2 μl of lysate from FP-DBS for the HIV-1 pol FP-DBS PCR from FP-DBS prepared by spotting whole blood or blood collected in EDTA.
\n[SUBTITLE] Performance of HIV-1 pol PCR FP-DBS assay with spiked FP-DBS prepared from known HIV-1 copy numbers [SUBSECTION] To examine the ability of this 'in-house' HIV-1 pol PCR assay to detect the range of HIV-1 copy numbers from FP-DBS samples, defined quantities of ACH-2 cells were applied to FP-DBS. For this purpose, ACH2 cells were counted two different times (A & B) on two separate days (D1 & D2), as shown in Table 1. The total amount added to the FP-DBS was calculated so that 2 μl of the final lysate would be expected to have 10, 5, 2 and 1 HIV-1 proviral copies. To verify these numbers in parallel, extracted DNA from an aliquot of each tube of manually counted ACH2 cells was tested in triplicate with the HIV-1 pol real-time PCR assay [14,15], and the results from this assay gave an expected HIV-1 copy number that was within 2-fold of that predicted by cell count in 14 of 16 cases; the 2 discrepant cases were at the lowest cell count (Table 1). A total of 40 PCRs were performed on lysate from each cell preparation: in each case, four FP-DBSs were prepared and 10 HIV-1 pol PCRs were performed from every FP-DBS lysate (Table 1).\nSummary of HIV-1 pol FP-DBS PCR assay performed on DBS spiked with low copy number ACH2 cells\nA total of 40 HIV-1 pol PCR assays were performed on FP-DBS spiked in duplicate (A and B) with known copies of HIV-1 in ACH-2 cells (manually counted and quantified by HIV-1 pol real-time PCR assay done) on two different days (D1 and D2). The results of the HIV-1 pol PCR assay are indicated as an average percentage of HIV-1 proviral copies detected from the spiked FP-DBS.\nThe results of HIV-1 pol PCR FP-DBS assay showed that one HIV-1 proviral copy was detectable in 38.7% of tests (160 total tests), 2 copies in 46.9% of tests, 5 copies in 72.5% of tests, and 10 copies in 98.1% of tests, as expected based on Poisson distribution (Table 1). This experiment was repeated a second time on two separate days with similar results (data not shown). Comparable results were also obtained in smaller studies using FP-DBS prepared from PBMCs infected with different HIV subtypes (A, C and D; data not shown), which is consistent with our previous studies showing that with the same primers, HIV-1 pol PCR assay in the real-time format can detect these HIV subtypes [15]. Overall, these data with unpurified cell lysates using HIV-1 pol PCR FP-DBS assay compared favourably with results of the real-time HIV-1 pol PCR assay where nucleic acid was prepared using a Qiagen purification method prior to PCR [14]. These data suggest that the HIV-1 pol PCR FP-DBS assay is able to reliably detect as low as a single copy of HIV provirus from a FP-DBS with minimal nucleic acid purification.
\n[SUBTITLE] pol PCR FP-DBS assay on stored samples from infants of known HIV-1 infection status [SUBSECTION] We next evaluated the HIV-1 pol PCR FP-DBS assay using stored FP-DBS from infants of known infection status, as defined by prior testing of two sequential samples with the HIV-1 gag PCR assay [4,18]. These FP-DBS had been prepared by spotting whole blood on the filter paper, air-dried and stored at ambient room temperature for several years. One hundred and fifteen FP-DBS (56 HIV-1 positive and 59 HIV-1 negative) were tested with the operator blinded to the infection status. The lysate was tested in parallel with the two round, nested HIV-1 gag PCR that was used previously and the single round HIV-1 pol PCR FP-DBS assay. The nested HIV-1 gag PCR, which amplified a 142bp fragment, nearly the same size fragment as the single round HIV-1 pol PCR FP-DBS assay, served as a control for the integrity of the FP-DBS samples, which had been stored at ambient room temperature for several years.\nThe repeat testing of the stored FP-DBS from these infants with the original HIV-1 gag PCR showed 95.6% agreement with the prior testing that established infection status, suggesting that the long-term storage had not significantly compromised the samples. These same samples were tested with the HIV-1 pol PCR FP-DBS assay. The sensitivity and specificity of the HIV-1 pol PCR assay in relation to the known HIV-1 infection status of the infants was 92.8% and 98.3% respectively (Table 2).\nA 2 × 2 table for performance of HIV-1 pol FP-DBS PCR assay on archived FP-DBS samples\nSamples were obtained from infants of known HIV-1 infection status (56 HIV-1 positive and 59 HIV-1 negative). The sensitivity of HIV-1 pol PCR FP-DBS assay on archived FP-DBS was 92.8% and the specificity 98.3%.
\n[SUBTITLE] Comparison of pol PCR FP-DBS assay with commercial DBS-FP assay [SUBSECTION] The assay was transferred from the Seattle-based laboratory to a newly established molecular virology laboratory at the University of Nairobi. This laboratory included a PCR set-up room that had established standard practices to minimize the potential for introduction of PCR product, plasmid and other possible PCR contaminants. In this laboratory, we tested follow-up FP-DBS samples prepared from whole blood in EDTA collected from infants that were initially reported as HIV-1 positive using the Roche Amplicor v1.5 assay at the National Reference Laboratory for Early Infant Diagnosis testing at Kenya Medical Research Institute or at Kenyatta National Hospital, in Nairobi, Kenya, typically within the prior month. For this confirmatory HIV-1 testing, the infant was rebled and a fresh FP-DBS was prepared from 50 μl of whole blood, air-dried and lysed for PCR analysis. The lysate was then tested in quadruplicate PCRs using both the HIV-1 gag and pol PCRs (see gel picture Figure 1).\nGel picture of amplified products using 'in-house' HIV-1 gag and pol FP-DBS PCR assay. Gel picture of the HIV-1 pol (top panel - 160bp) and gag (lower panel - 152bp) PCR products. Lanes labeled 1-10 are PCR products from the ACH copy number controls: 1-2 are 100 HIV copies; 3-4 are 10 copies; 4-8 are 2 copies; 9 and 10 are negative controls using DNA from an uninfected T cell line and water as the PCR template, respectively. PCR results from two samples of FP-DBS from infants who tested positive with the Roche Amplicor assay are labeled as A1-A4 and B1-B4, with 1-4 indicating quadruplicate tests. Based on the results shown, both infants would be defined as HIV-1 DNA PCR positive according to the testing algorithm for the 'in-house' gag and pol FP-DBS PCR. The first and last lane in the gel picture (labeled M) in both the panels is the molecular weight marker - Hyperladder IV.\nAs a control, we also included randomly within the test runs, 25 FP-DBS collected from HIV-1 seronegative adults, as defined by two parallel rapid HIV-1 serological assays. The test operator was blinded to HIV-1 serostatus of the control samples and upon testing, all the quadruplicate PCR tests on the samples, both for gag and pol products, were negative.\nOf the 186 HIV-1 positive samples from the infants that were defined as positive with Roche Amplicor® HIV-1 DNA assay version 1.5, 178 samples were confirmed as positive by HIV-1 pol PCR FP-DBS assay (Figure 2). Of the 8 samples that were negative by both HIV-1 pol and gag PCRs, the infants were re-bled and re-tested using all the tests: the Roche Amplicor® HIV-1 DNA v1.5 assay (at the reference lab) and the HIV-1 gag and pol PCR FP-DBS assay. Upon retesting, all of the 8 infants were identified as HIV-1 negative by all assays, suggesting the initial results using the commercial Roche Amplicor® assay were false positive results. However, we could not determine whether the initial results of the Roche Amplicor® assay for the 8 infants were the result of false positive tests or whether sample mix-up could have contributed to these results. The HIV-1 gag PCR showed only 87% sensitivity (155 of 178 positives) compared to both the Roche Amplicor® and HIV-1 pol PCR FP-DBS assay.\nExperimental approach for performance of HIV-1 gag and pol FP-DBS PCR assay for detection of HIV-1 infection from FP-DBS obtained from HIV-1 positive infants as defined by Roche Amplicor v1.5 assay. 186 recently collected FP-DBS that were determined in the prior ~month to be HIV-1 positive by Roche Amplicor v1.5 assay were tested by HIV-1 gag and pol PCR assays. Of these, 178 FP-DBS from the infants were identified as HIV-1 positive. Retesting of subsequent blood samples from the 8 discordant infants showed that they were HIV-1 negative. Thus, the sensitivity of HIV-1 pol PCR assay was found to be 100% (178 of 178) for detection of confirmed HIV-1 positive infants from FP-DBS.\nThe infants were all under the age of 12 months. Most (46%) were between 3 and 6 months, but we also sampled younger infants (25% < 3 months) and older infants (29% > 6 months), as shown in Table 3. Of the 178 HIV-1 positive infants as determined by DNA PCR on FP-DBS using all 3 methods (Roche Amplicor® and 'in-house' gag and pol assay), 110 infants had HIV-1 subtype data available based on pol sequence. The majority (69%) of the infants were identified as infected with HIV-1 subtype A, and others with subtype D (23%), subtype C (7%) and intersubtype recombinant AD (1%) (Table 4). This is very similar to the subtype distribution in Nairobi [19].\nAge demographics of infants tested for HIV-1 DNA PCR from freshly collected DBS samples in Nairobi, Kenya\nHIV-1 subtypes in infants who tested positive with the HIV-1 pol FP-DBS PCR assay in Nairobi, Kenya\nNote: HIV-1 subtypes based on 667bp polymerase genome\nHIV-1 subtypes available for only 110 of the 178 HIV-1 positive DBS-FP from infants in the Nairobi study.\nOverall, from the 178 samples confirmed positive with Roche Amplicor® assay, all were positive by HIV-1 pol PCR, indicating that the sensitivity and specificity of the HIV-1 pol PCR assay was 100% in this study. As this is based on using data from 4 HIV-1 pol PCR tests to detect infection, we also examined the sensitivity if we considered the results from just the first (single test), the first two (duplicate) or the first 3 (triplicate) HIV-1 pol PCR tests. Based on our results of the HIV-1 pol PCR assay, the likelihood of missing true positives if we had performed the assay singly would be 14.6% (26 of 178), in duplicate testing would be 6.7% (12/178), in triplicate testing would be approximately 0.6% (1/178), and on quadruplicate testing would be none.", "To examine the ability of this 'in-house' HIV-1 pol PCR assay to detect the range of HIV-1 copy numbers from FP-DBS samples, defined quantities of ACH-2 cells were applied to FP-DBS. For this purpose, ACH2 cells were counted two different times (A & B) on two separate days (D1 & D2), as shown in Table 1. The total amount added to the FP-DBS was calculated so that 2 μl of the final lysate would be expected to have 10, 5, 2 and 1 HIV-1 proviral copies. To verify these numbers in parallel, extracted DNA from an aliquot of each tube of manually counted ACH2 cells was tested in triplicate with the HIV-1 pol real-time PCR assay [14,15], and the results from this assay gave an expected HIV-1 copy number that was within 2-fold of that predicted by cell count in 14 of 16 cases; the 2 discrepant cases were at the lowest cell count (Table 1).
A total of 40 PCRs were performed on lysate from each cell preparation: in each case, four FP-DBSs were prepared and 10 HIV-1 pol PCRs were performed from every FP-DBS lysate (Table 1).\nSummary of HIV-1 pol FP-DBS PCR assay performed on DBS spiked with low copy number ACH2 cells\nA total of 40 HIV-1 pol PCR assays were performed on FP-DBS spiked in duplicate (A and B) with known copies of HIV-1 in ACH-2 cells (manually counted and quantified by HIV-1 pol real-time PCR assay done) on two different days (D1 and D2). The results of the HIV-1 pol PCR assay are indicated as an average percentage of HIV-1 proviral copies detected from the spiked FP-DBS.\nThe results of HIV-1 pol PCR FP-DBS assay showed that one HIV-1 proviral copy was detectable in 38.7% of tests (160 total tests), 2 copies in 46.9% of tests, 5 copies in 72.5% of tests, and 10 copies in 98.1% of tests, as expected based on Poisson distribution (Table 1). This experiment was repeated a second time on two separate days with similar results (data not shown). Comparable results were also obtained in smaller studies using FP-DBS prepared from PBMCs infected with different HIV subtypes (A, C and D; data not shown), which is consistent with our previous studies showing that with the same primers, HIV-1 pol PCR assay in the real-time format can detect these HIV subtypes [15]. Overall, these data with unpurified cell lysates using HIV-1 pol PCR FP-DBS assay compared favourably with results of the real-time HIV-1 pol PCR assay where nucleic acid was prepared using a Qiagen purification method prior to PCR [14]. These data suggest that the HIV-1 pol PCR FP-DBS assay is able to reliably detect as low as a single copy of HIV provirus from a FP-DBS with minimal nucleic acid purification.", "We next evaluated the HIV-1 pol PCR FP-DBS assay using stored FP-DBS from infants of known infection status, as defined by prior testing of two sequential samples with the HIV-1 gag PCR assay [4,18]. These FP-DBS had been prepared by spotting whole blood on the filter paper, air-dried and stored at ambient room temperature for several years. One hundred and fifteen FP-DBS (56 HIV-1 positive and 59 HIV-1 negative) were tested with the operator blinded to the infection status. The lysate was tested in parallel with the two round, nested HIV-1 gag PCR that was used previously and the single round HIV-1 pol PCR FP-DBS assay. The nested HIV-1 gag PCR that amplified a 142bp of fragment, nearly the same size fragment as the single round HIV-1 pol PCR FP-DBS assay served as a control for the integrity of the FP-DBS samples, which had been stored at ambient room temperature for several years.\nThe repeat testing of the stored FP-DBS from these infants with the original HIV-1 gag PCR showed 95.6% agreement with the prior testing that established infection status, suggesting that the long-term storage had not significantly compromised the samples. These same samples were tested with the HIV-1 pol PCR FP-DBS assay. The sensitivity and specificity of the HIV-1 pol PCR assay in relation to the known HIV-1 infection status of the infants was 92.8% and 98.3% respectively (Table 2).\nA 2 × 2 table for performance of HIV-1 pol FP-DBS PCR assay on archived FP-DBS samples\nSamples were obtained from infants of known HIV-1 infection status (56 HIV-1 positive and 59 HIV-1 negative). 
The sensitivity of HIV-1 pol PCR FP-DBS assay on archived FP-DBS was 92.8% and specificity of 98.3%.", "The assay was transferred from the Seattle-based laboratory to a newly established molecular virology laboratory at the University of Nairobi. This laboratory included a PCR set-up room that had established standard practices to minimize the potential for introduction of PCR product, plasmid and other possible PCR contaminants. In this laboratory, we tested follow-up FP-DBS samples prepared from whole blood in EDTA collected from infants that were initially reported as HIV-1 positive using the Roche Amplicor v1.5 assay at the National Reference Laboratory for Early Infant Diagnosis testing at Kenya Medical Research Institute or at Kenyatta National Hospital, in Nairobi, Kenya, typically within the prior month. For this confirmatory HIV-1 testing, the infant was rebled and a fresh FP-DBS was prepared from 50 μl of whole blood, air-dried and lysed for PCR analysis. The lysate was then tested in quadruplicate PCRs using both the HIV-1 gag and pol PCRs (see gel picture Figure 1).\nGel picture of amplified products using 'in-house' HIV-1 gag and pol FP-DBS PCR assay. Gel picture of the HIV-1 pol (top panel - 160bp) and gag (lower panel - 152bp) PCR products. Lanes labeled from 1- 10 are PCR products from the ACH copy number controls: 1-2 are 100 HIV copies; 3-4 are 10 copies; 4-8 are 2 copies; 9 and 10 are negative controls using DNA from an uninfected T cell cell-line and water as the PCR template, respectively. PCR results from two samples of DBS-FP from infants who tested positive with the Roche Amplicor assay are labeled as A1-A4 and B1-B4, with 1-4 indicating quadruplicate tests. Based on the results show, both infants would be defined a HIV-1 DNA PCR positive according to testing algorithm for the' in-house' gag and pol FP-DBS FP PCR. The first and last lane in the gel picture (labeled M) in both the panels is the molecular weight marker - Hyperladder IV.\nAs a control, we also included randomly within the test runs, 25 FP-DBS collected from HIV-1 seronegative adults, as defined by two parallel rapid HIV-1 serological assays. The test operator was blinded to HIV-1 serostatus of the control samples and upon testing all the quadruplicate PCR tests on the samples, both for gag and pol products were negative.\nOf the 186 HIV-1 positive samples from the infants that were defined as positive with Roche Amplicor® HIV-1 DNA assay version 1.5, 178 samples were confirmed as positive by HIV-1 pol PCR FP-DBS assay (Figure 2). Of the 8 samples that were negative by both HIV-1 pol and gag PCRs, the infants were re-bled and re-tested using all the tests, the Roche Amplicor® HIV-1 DNA v1.5 assay (at the reference lab) and the HIV-1 gag and pol PCR FP-DBS assay. Upon retesting all of the 8 infants were identified as HIV-1 negative by all assays, suggesting the initial results using the commercial Roche Amplicor® assay were false positive results. However, we could not determine whether the initial results of the Roche Amplicor® assay for the 8 infants were the result of false positive tests or whether sample mix-up could have contributed to these results. The HIV- gag PCR showed only 87% sensitivity (155 of 178 positives) compared to both the Roche Amplicor® and HIV-1 pol PCR FP-DBS assay.\nExperimental approach for performance of HIV-1 gag and pol FP-DBS PCR assay for detection of HIV-1 infection from FP-DBS obtained from HIV-1 positive infants as defined by Roche Amplicor v1.5 assay. 
186 recently collected FP-DBS that were determined in the prior ~month to be HIV-1 positive by Roche Amplicor v1.5 assay were tested by HIV-1 gag and pol PCR assays. Of these 178 FP-DBS from the infants were identified as HIV-1 positive. Retesting of subsequent blood samples from the 8 discordant infants showed that they were HIV-1 negative. Thus, the sensitivity of HIV-1 pol PCR assay was found to be 100% (178 of 178) for detection of confirmed HIV-1 positive infants from FP-DBS.\nThe infants were all under the age of 12 months. Most (46%) were between 3 and 6 months, but we also sampled younger infants (25% < 3 months) and older infants (29% > 6 months), as shown on Table 3. Of the 178 HIV-1 positive infants as determined by DNA PCR on FP-DBS using all the 3 methods (Roche Amplicor® and 'in-house' gag and pol assay), 110 infants had HIV-1 subtype data available based on pol sequence. The majority (69%) of the infants were identified as infected with HIV-1 subtype A, and others with subtype D (23%), subtype C (7%) and intersubtype recombinant AD (1%) (Table 4). This is very similar to the subtype distribution in Nairobi [19].\nAge demographics of infants tested for HIV-1 DNA PCR from freshly collected DBS samples in Nairobi, Kenya\nHIV-1 subtypes in infants who tested positive with the HIV-1 pol FP-DBS PCR assay in Nairobi, Kenya\nNote: HIV-1 subtypes based on 667bp polymerase genome\nHIV-1 subtypes available for only 110 of the 178 HIV-1 positive DBS-FP from infants in the Nairobi study.\nOverall, from the 178 samples confirmed positive with Roche Amplicor® assay, all were positive by HIV-1 pol PCR, indicating that the sensitivity and specificity of the HIV-1 pol PCR assay was 100% in this study. As this is based on using data from 4 HIV-1 pol PCR tests to detection infection, we also examined the sensitivity if we considered the results from just the first (single test), the first two (duplicate) or the first 3 (triplicate) HIV-1 pol PCR tests. Based on our results of HIV-1 pol PCR assay the likelihood of missing true positives if we had performed the assay singly would be 14.6% (26 of 178), in duplicate testing would be 6.7% (12/178), in triplicate testing would be approximately 0.6% (1/178), and on quadruplicate testing would be none.", "In summary, we describe here a sensitive HIV-1 pol PCR FP-DBS assay for detection of infant infection. The advantages of this assay include the fact that it requires minimal manipulation of the sample compared to assays that rely on extraction of nucleic acids and nested PCR methods. This method can detect a single copy of HIV provirus and has been validated on HIV-1 sequences of multiple subtypes.\nThe HIV-1 pol PCR FP-DBS assay was compared to results from historical studies in Seattle as well as samples from infants who recently tested positive by the Roche Amplicor® HIV-1 DNA Test, version 1.5 assay in Nairobi. The sensitivity and specificity of the HIV-1 pol PCR FP-DBS assay was > 90% on archived samples stored for more than 3 years at room temperature. The combined sensitivity of the HIV-1 pol PCR FP-DB assay using the archived (N = 56 positive) and recent samples (N = 178 positive) was 98.3% (Table 5). These comparisons are based on quadruplicate testing, which maximizes detection of low HIV copies in a sample. Using this approach, we detected 100% of confirmed positive samples from infants recently identified as HIV positive by the Roche Amplicor® HIV-1 DNA assay. 
Based on initial field site testing of FP-DBS from 178 HIV positive infants, this HIV-1 pol PCR FP-DBS assay with single, duplicate or triplicate PCR testing would be predicted to detect ~85%, 93% and 99% of HIV-1 positive samples, respectively.\nA 2 × 2 table for performance of HIV-1 pol FP-DBS PCR assay on all FP-DBS\nA total of 115 archived and 211 current FP-DBS samples of known HIV-1 infection status were tested. Overall, the sensitivity of the HIV-1 pol PCR FP-DBS assay on all FP-DBS samples (archived and recent samples) was 98.3% and the specificity was 98.9%.\n* a total of 56 archived FP-DBS samples and 178 recent FP-DBS samples obtained from infants.\n$ a total of 59 archived and 33 recent FP-DBS samples (8 obtained from infants and 25 from adults).\nThe HIV-1 pol PCR FP-DBS assay samples a lower total volume of blood than the commercial assays that go through a purification step to remove inhibitors, which lengthens and complicates these assays. Thus, while the Roche Amplicor® HIV-1 DNA assay typically tests for HIV in ~12.5 μl of blood eluate (assuming processing a 50 μl blood spot by adding 200 μl lysate and testing 50 μl), the HIV-1 pol PCR FP-DBS assay only tests 0.2-0.3 μl blood eluate per PCR (in this assay only ~a quarter of the blood spot is sampled, ~10-12 μl of blood); this is lysed in 100 μl and 2 μl of lysate is used per PCR. However, this lower blood volume used in the HIV-1 pol PCR FP-DBS assay should be adequate to sample HIV-infected cells in an infant sample. Infant blood contains an estimated 75.4 ± 104.3 HIV proviral copies per 1000 PBMC [20]. Assuming that 1 μl of blood contains 1 million total cells, of which 5000 are PBMCs [21], then there are ~1000 PBMC sampled in each PCR. Thus, each PCR that includes 0.2 μl of infant blood typically contains multiple HIV copies that should be amplified with this single HIV-1 copy detection PCR method.\nEven when performed in quadruplicate, this qualitative DNA assay is economical and costs just a few dollars per patient, compared to some of the commercial assays that are nearly 5 to 10 times more expensive (~20-50 US$). While quadruplicate testing may provide optimal sensitivity, this assay may also be highly sensitive and specific when PCRs are performed in duplicate or triplicate. However, quadruplicate testing may increase the ability of this assay to detect infants that are destined to become slow progressors, who are estimated to have lower HIV proviral copy numbers (11.8 ± 18.8 HIV copies/1000 PBMC; [20]). Laboratories that adapt this assay may wish to compare the performance of quadruplicate testing versus using fewer replicate tests to determine the number of PCR tests that are optimal to suit their needs.\nImportantly, when transferred to a new laboratory in Nairobi for on-site testing in a setting that applied stringent measures to minimize PCR contamination, the assay showed very high sensitivity and specificity compared to the results of commercial assays from established reference laboratories in the region. Of course, the performance of this assay in other settings may vary depending on the established protocols and expertise of the laboratory, as is true with the use of any PCR-based assays, either 'in-house' or commercial. One limitation of the study results described here is that they focused primarily on samples from infants recently detected as HIV-1 positive by the Roche Amplicor® HIV-1 DNA Test.
Further studies of infants born to HIV-positive mothers that are not pre-screened in this manner will be needed to more precisely define the sensitivity and specificity of this assay in a clinical setting in real time. In this case, it will be important to not only verify the findings with a second assay, but also to test a follow-up sample from each infant because other assays also have limitations in their performance, such as the false-positive results using the Roche Amplicor® HIV-1 DNA Test described here. While further testing by other laboratories will be useful for validating the performance of this assay, these findings suggest that the HIV-1 pol PCR FP-DBS assay provides a reliable and rapid method, and is an economical assay for early detection of HIV-1 infection in infants.", "There is an urgent need for an economical and reliable assay for early HIV-1 infant diagnosis, especially for low-resource countries. This study has validated an economical 'in-house' HIV-1 pol PCR FP-DBS assay that is highly sensitive and specific when compared to a commercial Roche Amplicor® v1.5 FP-DBS assay. This study highlights the need for potential adaptation of this qualitative DNA-based assay in development of a rapid point-of care diagnostics assay for early infant HIV-1 diagnosis. This single round pol PCR FP-DBS can therefore be a useful tool for early infant HIV-1 diagnosis in Africa especially where the HIV epidemic prevails and resources are limited.", "The authors declare that they have no competing interests.", "BHC helped design aspects of the study, validated and performed assays, analyzed the data and helped draft the manuscript. SE validated and performed assays, and provided input into the manuscript. MM and MN participated in the study by performing the assay on the samples in the field. SF was helpful in setting up the molecular virology laboratory in the field and assisting in transferring the technology and training staff on the methods. GJ-S and DW as Principal Investigators of the research project from which clinical samples were obtained, provided the samples in the field for evaluation of the assay and gave input into the study design. JO conceived the idea and led the study design, implementation of the program and drafting and editing of the manuscript. All authors contributed to the data analysis and read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2431/11/18/prepub\n" ]
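The low-copy detection rates reported for the spiked FP-DBS (Table 1) can be compared against a simple Poisson sampling ceiling. The Python sketch below is illustrative only: it assumes template copies are Poisson-distributed across the 2 μl aliquots and that any aliquot containing at least one copy amplifies. The observed rates (38.7%, 46.9%, 72.5% and 98.1%) sit below this ceiling, which is what one would expect if recovery and amplification from the unpurified filter-paper lysate are incomplete.

```python
import math

def poisson_detection_probability(mean_copies_per_reaction: float) -> float:
    """Probability that a 2 ul aliquot contains at least one template copy,
    assuming copies are Poisson-distributed across aliquots and that any
    aliquot holding >= 1 copy amplifies (an idealised assumption)."""
    return 1.0 - math.exp(-mean_copies_per_reaction)

# Nominal copy numbers spiked per 2 ul of lysate in the ACH-2 experiments.
for copies in (1, 2, 5, 10):
    p = poisson_detection_probability(copies)
    print(f"{copies:>2} copies/reaction -> detection ceiling ~{p:.1%}")
```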
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[]
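The single/duplicate/triplicate/quadruplicate comparison in the Results reduces to simple proportions over the 178 confirmed-positive infants. The short snippet below just reproduces that arithmetic from the miss counts reported in the text.

```python
TOTAL_CONFIRMED_POSITIVE = 178

# Confirmed-positive infants that would have been missed had only the first
# k replicate PCRs been read out (counts as reported in the Results text).
missed_by_replicates = {1: 26, 2: 12, 3: 1, 4: 0}

for k, missed in missed_by_replicates.items():
    detected = TOTAL_CONFIRMED_POSITIVE - missed
    print(f"{k} replicate(s): sensitivity {detected / TOTAL_CONFIRMED_POSITIVE:.1%}, "
          f"miss rate {missed / TOTAL_CONFIRMED_POSITIVE:.1%}")
```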
Changes in body weight, body composition and cardiovascular risk factors after long-term nutritional intervention in patients with severe mental illness: an observational study.
21332986
Compared with the general population, individuals with severe mental illness (SMI) have increased prevalence rates of obesity and greater risk for cardiovascular disease. This study aimed to investigate the effects of a long term nutritional intervention on body weight, body fat and cardiovascular risk factors in a large number of patients with SMI.
BACKGROUND
Nine hundred and eighty-nine patients with a mean ± S.D age of 40 ± 11.7 yrs participated in a 9 mo nutritional intervention which provided personalised dietetic treatment and lifestyle counselling every two weeks. Patients had an average body mass index (BMI) of 34.3 ± 7.1 kg x m(-2) and body weight (BW) of 94.9 ± 21.7 kg. Fasted blood samples were collected for the measurement of glucose, total cholesterol, triglycerides and HDL-cholesterol. All measurements were undertaken at baseline and at 3 mo, 6 mo and 9 mo of the nutritional intervention.
METHODS
Four hundred and twenty-three of the 989 patients (42.8%) dropped out within the first 3 months. Two hundred and eighty-five patients completed 6 months of the program and 145 completed the entire 9 month nutritional intervention. There were progressive, statistically significant reductions in mean weight, fat mass, waist circumference and BMI throughout the duration of monitoring (p < 0.001). The mean final weight loss was 9.7 kg and BMI decreased to 30.7 kg x m(-2) (p < 0.001). The mean final fat mass loss was 8.0 kg and the mean final waist circumference reduction was 10.3 cm (p < 0.001) compared to baseline. Significant and continual reductions were observed in fasting plasma glucose, total cholesterol and triglyceride concentrations throughout the study (p < 0.001).
RESULTS
The nutritional intervention produced significant reductions in body weight and body fat and improved the cardiometabolic profile in patients with SMI. These findings indicate the importance of weight-reducing nutritional intervention in decreasing the cardiovascular risk in patients with SMI.
CONCLUSION
[ "Adult", "Blood Glucose", "Body Composition", "Body Weight", "Diet, Reducing", "Feeding Behavior", "Female", "Humans", "Life Style", "Lipids", "Male", "Mental Disorders", "Middle Aged", "Obesity", "Risk Factors", "Treatment Outcome", "Weight Loss" ]
3048521
null
null
Methods
[SUBTITLE] Subjects with SMI [SUBSECTION] A total of 989 psychiatric patients were recruited for the study (774 women and 215 men) and gave written informed consent. Patients were recommended to participate in the study by psychiatrists working either privately or in hospital offices in Thessaloniki (Greece). The study was carried out from January 2007 to November 2009. The study has been approved by the ethical committee of the Technological Educational Institute of Thessaloniki (Ref. No 20111). All patients were found competent by an independent psychiatrist, who was not involved in the study, to participate and to follow weight loss intervention at the enrollment visit. All patients continued on treatment with their medication. Antipsychotic drugs were being used by 28% of patients (n = 274), 30% of patients were taking antidepressants (n = 297), 23% of patients were taking both antipsychotics & antidepressants (n = 230) and 19% of patients (n = 288) were taking antipsychotics & antidepressants, as well other types of medication (e.g. acholytic, antiparkinson, antiepileptic). Medication was kept constant for every patient. A total of 989 psychiatric patients were recruited for the study (774 women and 215 men) and gave written informed consent. Patients were recommended to participate in the study by psychiatrists working either privately or in hospital offices in Thessaloniki (Greece). The study was carried out from January 2007 to November 2009. The study has been approved by the ethical committee of the Technological Educational Institute of Thessaloniki (Ref. No 20111). All patients were found competent by an independent psychiatrist, who was not involved in the study, to participate and to follow weight loss intervention at the enrollment visit. All patients continued on treatment with their medication. Antipsychotic drugs were being used by 28% of patients (n = 274), 30% of patients were taking antidepressants (n = 297), 23% of patients were taking both antipsychotics & antidepressants (n = 230) and 19% of patients (n = 288) were taking antipsychotics & antidepressants, as well other types of medication (e.g. acholytic, antiparkinson, antiepileptic). Medication was kept constant for every patient. [SUBTITLE] Anthropometric measurements [SUBSECTION] Prior to the baseline assessment, patients visited the dietitian for familiarization with study design and measurements. The dietitian explained the study design and measurements thoroughly and then patients' relevant questions were answered. At the beginning of the study (baseline-visit A), at 3 mo, 6 mo and 9 mo of the nutritional intervention (visit B, C and visit D respectively), several anthropometric measurements were undertaken to assess the outcome of the nutritional intervention program. All the measurements were carried out by the same two dietitians. Body weight was measured on a standing scale calibrated to 0.1 kg (Seca digital scale). Body height was measured on a wall-mounted stadiometer. The subjects stood with legs parallel and shoulder-width apart. Waist circumference (WC) was measured at the end of normal expiration at the minimal waist (smallest horizontal circumference above the umbilicus and below the xiphoid process). Hip circumference (HP) was measured around the maximum circumference over the buttocks. Body Fat was measured by the bioelectrical impedance analysis (BIA, Akern version 1.31). During the 9 mo period, subjects were asked to visit the nutrition unit every 2 wks. 
At these visits, body weight, waist circumference and body fat were measured by the same dietitian. For patients who dropped out, body weight was recorded and BMI was calculated when the drop out occurred. Prior to the baseline assessment, patients visited the dietitian for familiarization with study design and measurements. The dietitian explained the study design and measurements thoroughly and then patients' relevant questions were answered. At the beginning of the study (baseline-visit A), at 3 mo, 6 mo and 9 mo of the nutritional intervention (visit B, C and visit D respectively), several anthropometric measurements were undertaken to assess the outcome of the nutritional intervention program. All the measurements were carried out by the same two dietitians. Body weight was measured on a standing scale calibrated to 0.1 kg (Seca digital scale). Body height was measured on a wall-mounted stadiometer. The subjects stood with legs parallel and shoulder-width apart. Waist circumference (WC) was measured at the end of normal expiration at the minimal waist (smallest horizontal circumference above the umbilicus and below the xiphoid process). Hip circumference (HP) was measured around the maximum circumference over the buttocks. Body Fat was measured by the bioelectrical impedance analysis (BIA, Akern version 1.31). During the 9 mo period, subjects were asked to visit the nutrition unit every 2 wks. At these visits, body weight, waist circumference and body fat were measured by the same dietitian. For patients who dropped out, body weight was recorded and BMI was calculated when the drop out occurred. [SUBTITLE] Nutritional intervention [SUBSECTION] The intervention period lasted 9 months and consisted of 2 phases: a familiarization visit and an intensive 9 month nutritional intervention period. The dietary advice for weight control was given in each patient by a registered dietitian. It was based on a Mediterranean-style diet in combination with personalized healthy nutrition counselling. Each patient received personalized dietary regimen on the basis of dietary history and lifestyle. The dietary regimen was characterized by a moderate consumption of carbohydrates (50-55% of total energy per day) and a high fiber content, 15-20% protein and a fat intake of 30-35% of total energy per day. Moreover, patients were advised to consume fruits, vegetables, whole grains (legumes, rice, maize, and wheat) daily and to increase their consumption of olive oil. The dietary regimen was designed to produce an energy deficit of 500 kcal per week. The patients were visiting the dietitian every two weeks to discuss weight changes and treatment goals. The Resting Metabolic Rate (RMR) was measured by indirect calorimetry (Fitmate Pro, Cosmed USA Inc.) during their first visit. All patients completed a physical activity record. RMR was multiplied by an activity factor of 1.3-1.5, according to the physical activity level of each patient, and daily energy requirements of each patient were estimated. The intervention program consisted primarily of dietary counseling, physical activity counseling and behavioral interventions in order to aid patients' adherence to a healthy life plan during the nutritional intervention. Counseling sessions were undertaken individually by each patient and included teaching healthful weight management techniques, meal planning, food shopping and preparation, portion control, techniques to differentiate emotional from psychological hunger etc. 
[SUBTITLE] Biochemical measurements [SUBSECTION] Biochemical measurements were undertaken at the beginning of the study (baseline, visit A) and at 3 months (visit B), 6 months (visit C) and 9 months (visit D) of the nutritional intervention. Data on plasma glucose, total cholesterol, HDL cholesterol and triglycerides were recorded by the dietitian.
[SUBTITLE] Statistical Analysis [SUBSECTION] Data are expressed as means and standard deviations (SD). Within-subject paired t-tests compared baseline with end-point measures for subjects who completed the 9-month intervention. Comparisons between completers and drop-outs were performed using independent-sample t-tests. To compensate for missing data due to withdrawal, the last-observation-carried-forward (LOCF) method was used, and paired t-tests were also performed on the LOCF data. Correlation analysis examined the associations between change in body weight and change in body fat percentage (BF%) over time (baseline to 3, 6 and 9 months). Statistical significance was set at P < 0.05. Analyses were performed with SPSS 11 for Windows (SPSS, Inc., Chicago, IL, USA).
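The fragment below is a minimal sketch of the LOCF imputation and the paired baseline-versus-end-point comparison described above, written in Python with pandas and SciPy. The column names and values are hypothetical, and the actual analysis was carried out in SPSS.

```python
import pandas as pd
from scipy import stats

# Hypothetical wide-format data: one row per patient, body weight (kg) at each visit.
# NaN marks visits after a patient dropped out.
weights = pd.DataFrame({
    "visit_A": [102.0, 95.5, 88.0, 110.2, 79.4],
    "visit_B": [98.4, 93.0, 85.1, None, 77.0],
    "visit_C": [96.1, None, 83.0, None, 75.2],
    "visit_D": [94.0, None, 81.5, None, 74.1],
})

# Last observation carried forward: each missing visit takes the last available value.
locf = weights.ffill(axis=1)

# Paired t-test of baseline (visit A) against end point (visit D) on the LOCF data.
t_stat, p_value = stats.ttest_rel(locf["visit_A"], locf["visit_D"])
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```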
[SUBTITLE] Results [SUBSECTION]
[SUBTITLE] Characteristics of SMI subjects and their baseline condition [SUBSECTION] Figure 1 presents the participants' flow during the 9-month nutritional intervention. The first drop-out sample (before visit B) comprised 82 men and 341 women, with an average age of 40.7 ± 11.8 y and an average body weight of 94.9 ± 21.2 kg. The second drop-out sample (before visit C) comprised 70 men and 211 women, with an average age of 40.1 ± 11.2 y and an average body weight of 95.6 ± 23.1 kg (Figure 1). The third drop-out sample (before visit D) comprised 28 men and 112 women. Reasons for dropping out included inability or unwillingness to continue with the nutritional intervention, family problems, health problems and transportation. Table 1 shows the baseline characteristics of the subjects. At baseline, all patients were classified as obese (BMI > 30 kg.m-2), with an average body weight of 94.9 ± 21.7 kg and an average BMI of 34.3 ± 6.9 kg.m-2. The proportion of men who completed the 9-month nutritional intervention (completers) was significantly greater than the proportion of women (P = 0.009). No significant differences in anthropometric or biochemical characteristics were found between drop-outs and completers at baseline (P > 0.05) (Table 1).
Figure 1. Participants' flow.
Table 1. Baseline characteristics.
[SUBTITLE] Effect of the nutritional intervention on body composition [SUBSECTION] Table 2 shows the changes in adiposity parameters from baseline to 9 months of the nutritional intervention in completers and drop-outs. Body weight, BMI, waist circumference and hip circumference decreased significantly from baseline to 3, 6 and 9 months of the intervention in both completers and drop-outs (P < 0.001). In addition, body fat % and body fat mass (kg) decreased significantly at 3, 6 and 9 months relative to baseline in completers (P < 0.001). Baseline weight and BMI did not differ significantly between completers and drop-outs (Table 2). Completers at visit B (3 months) and visit C (6 months) had significantly lower weight and BMI than patients who dropped out before visit B and visit C, respectively. Weight and BMI were not significantly different between completers at visit D (9 months) and patients who dropped out before visit D. The average change in weight and BMI, however, was significantly greater in completers than in drop-outs at 9 months (Δweight 9.7 ± 8.4 vs 5.9 ± 6.2, P < 0.001; ΔBMI 3.6 ± 3.0 vs 2.1 ± 2.2, P < 0.001). RMR decreased significantly in completers at visits B and C compared to baseline (P < 0.001) (Table 2). The effect of the nutritional intervention on body weight and body composition was confirmed when LOCF analysis was performed (Table 3). There were positive associations between change in body weight and change in BF% (visit A to visit B, r = 0.46, P < 0.001; visit A to visit C, r = 0.46, P < 0.001; visit A to visit D, r = 0.62, P < 0.001). There was no significant difference in weight loss between patients receiving different psychotropic medications (P > 0.05).
Table 2. Changes in parameters of adiposity during the 9-month nutritional intervention. Values are means ± SD. n refers to the number of adiposity measurements obtained at each of visits B, C and D; the same number of baseline measurements is used for the comparisons. Significant differences were determined by paired t-tests: *P < 0.001 for the difference between baseline and visit B; †P < 0.001 for the difference between baseline and visit C; ††P < 0.001 for the difference between baseline and visit D. Symbols a and b denote significant differences by independent t-tests between completers and drop-outs at visits B and C, respectively (aP < 0.01, bP = 0.002).
Table 3. Last-observation-carried-forward (LOCF) analysis. Significant differences were determined by paired t-tests: *P < 0.001 for the difference between baseline and visit B; †P < 0.001 for the difference between baseline and visit C; ††P < 0.001 for the difference between baseline and visit D.
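As an illustration of the correlation just reported between weight change and change in body fat percentage, the short fragment below computes a Pearson r on hypothetical paired changes; the values are invented and only show the form of the calculation.

```python
from scipy import stats

# Hypothetical per-patient changes from baseline (visit A) to 9 months (visit D).
delta_weight_kg = [9.5, 4.0, 12.3, 7.1, 2.2, 10.8]
delta_body_fat_pct = [4.1, 1.5, 5.0, 2.9, 0.8, 4.6]

r, p = stats.pearsonr(delta_weight_kg, delta_body_fat_pct)
print(f"r = {r:.2f}, P = {p:.3f}")  # positive association, as reported above
```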
[SUBTITLE] Effects of the nutritional intervention on biochemical parameters [SUBSECTION] Table 4 shows the changes in plasma glucose and plasma lipid concentrations. Fasting plasma glucose and total cholesterol concentrations decreased significantly from baseline to 3, 6 and 9 months of the intervention (P < 0.05, P < 0.001 and P < 0.001, respectively). Fasting plasma triglyceride concentrations decreased significantly at 6 and 9 months of the nutritional intervention compared to baseline (P < 0.001). The nutritional intervention produced a small decrease in HDL-cholesterol compared to baseline, but this was not statistically significant (P > 0.05) (Table 4). The effect of the nutritional intervention on plasma glucose and plasma lipids was confirmed when LOCF analysis was performed (Table 3).
Table 4. Changes in biochemical parameters during the 9-month nutritional intervention. Values are means ± SD. n refers to the number of biochemical measurements obtained at each of visits B, C and D. Significant differences were determined by paired t-tests: *P < 0.001 for the difference between baseline and visit B; †P < 0.001 for the difference between baseline and visit C; ††P < 0.001 for the difference between baseline and visit D.
[SUBTITLE] Discussion [SUBSECTION] This study shows that a personalized nutritional intervention is effective in reducing adiposity measures and improving metabolic parameters in patients with severe mental illness. Previous lifestyle interventions have reported weight loss in patients with severe mental illness, but these results were derived from small numbers of patients over short-term intervals [6,9]. The present study used a large sample and a 9-month nutritional intervention to investigate changes in both adiposity and metabolic parameters in patients with severe mental illness.
The present study found a progressive, statistically significant decrease in mean adiposity parameters throughout the monitoring period compared to baseline. There is a paucity of clinical trials of obesity management in patients with severe mental illness. Randomized controlled studies have found significant or modest reductions in body weight in patients taking antipsychotic medication [10-17]. A small number of nonrandomized controlled studies reported significant weight change [18,19], while Ball and colleagues [20] reported no significant weight change between the nonrandomized intervention group and the control group. The present study found a mean weight loss of 4.3 kg at 3 months, which is in agreement with other studies [11-18]. However, the evidence on the long-term effects of nutritional intervention on adiposity parameters is limited. In our study, the mean weight loss of 7.4 kg at 6 months is greater than in previous open studies [16,21,22]. The mean weight reduction of 9.6 kg at 9 months was progressive and significant and exceeds the weight loss achieved in previous long-term studies with behavioral treatment programs [23,24]. In addition, weight loss was also significant and continual in the drop-outs, which probably indicates a general efficacy of the present nutritional intervention. Body weight management in our patients was undertaken with personalized dietetic treatment and lifestyle counseling; patients were seen every two weeks by a dietitian who assessed weight changes and treatment goals. The greater weight loss in our study may indicate that a personalized nutritional intervention can produce significant weight loss in psychiatric patients who manage to adhere to the intervention for more than three months.
The present nutritional intervention not only reduced body weight but also produced continual, significant decreases in body fat mass (kg) and percent body fat in our patients. Skouroliakou et al. [17] reported a significant reduction in fat mass, but only in the short term. The present decrease in fat mass is demonstrated for the first time in a long-term nutritional intervention in SMI patients. The mean fat mass reduction was continual and significant throughout the study (e.g. 6 kg fat mass loss at 3 months, 5.9 kg at 6 months and 8 kg at 9 months).
BMI also decreased significantly, confirming the decrease in total body fat and general obesity. Moreover, waist circumference, a well-documented proxy for visceral obesity [25], decreased significantly in our patients. Consistent with previous studies [26], weight loss produced a decrease in RMR. These findings in SMI patients are comparable to the reduction of obesity-related factors achieved with lifestyle modification in the general obese population [27]. Recent consensus guidelines for patients with severe mental illness recommend measuring both BMI and WC to monitor cardiovascular risk factors in this population [28]. The reductions found in waist circumference and body mass in our SMI patients indicate improvements in risk factors associated with cardiovascular disease.
The reduction in fasting glucose was significant throughout the nutritional intervention compared to baseline. This is important because abnormalities in glucose metabolism have been associated with antipsychotic treatment [29]. The significant reduction in fasting glucose is probably due primarily to weight loss, since medication was kept constant. Similarly, there were significant reductions in total cholesterol and triglycerides during the 9-month nutritional intervention. These reductions in lipid concentrations are also important, since psychiatric patients have been shown to have more dyslipidemia than the general population [30]. Both total cholesterol and triglycerides dropped significantly as weight loss became significant over the course of the intervention. The present results support the use of weight-reducing programs, and especially of nutritional intervention, in the management of metabolic dysregulation in patients with severe mental illness.
[SUBTITLE] Limitations [SUBSECTION] By design, the present study did not include a control group, so it is unknown whether a similar group of obese patients would have lost or gained weight over the same time period. Ideally, longer-term randomized controlled trials are needed to assess the effectiveness of nutritional interventions. In addition, we cannot draw conclusions on the long-term effectiveness of the intervention in terms of weight maintenance, as a follow-up period was not included.
However, the present results are derived from a relatively large sample compared with previous shorter- or longer-term studies of small subject numbers. Another limitation of the present study is the large drop-out. It is recognized that psychiatric disorders can be a significant barrier to weight loss success in obese individuals, so discontinuation of the study could have been expected. In a meta-analysis of compliance studies, DiMatteo et al. showed that patients with depression had a 3-fold higher rate of noncompliance with medical treatments, including diet recommendations [31]. However, the significant results of the LOCF analysis confirm the efficacy of the 9-month nutritional intervention in terms of successful weight loss and improvement of the metabolic profile in our SMI patients.
[SUBTITLE] Conclusions [SUBSECTION] This study has important clinical implications, indicating the effectiveness of a simple nutritional intervention on adiposity and lipid regulation in psychiatric patients, a high-risk group for the development of cardiovascular disease. The present results show that obese patients with severe mental illness can achieve weight control and improve their cardiometabolic profile by following a simple personalized nutritional program for 9 months.
[SUBTITLE] Competing interests [SUBSECTION] The authors declare that they have no competing interests.
[SUBTITLE] Authors' contributions [SUBSECTION] FT contributed to the interpretation of the data and the analysis of the results and prepared this manuscript. KP, VT, NA and IP were involved in data collection and analysis of the results. GV was involved in the statistical analysis of the revised manuscript. MH was the principal investigator and assisted in data collection, interpretation of the results and preparation of the manuscript. All authors read and approved the final version of the manuscript.
[SUBTITLE] Pre-publication history [SUBSECTION] The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-244X/11/31/prepub
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Subjects with SMI", "Anthropometric measurements", "Nutritional intervention", "Biochemical measurements", "Statistical Analysis", "Results", "Characteristics of SMI subjects and their baseline condition", "Effect of the nutritional intervention on body composition", "Effects of the nutritional intervention on biochemical parameters", "Discussion", "Limitations", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Psychiatric patients have a high prevalence of obesity or a greater risk for weight gain due to antipsychotic (neuroleptic) treatment. Recent studies suggest that patients with severe mental illness (SMI) might have an even higher proportion of obesity than individuals in the general population. For example, Dickerson et al. compared 149 psychiatric patients with matched controls and found that prevalence of obesity was twice as high as the general US adult population (men 41 vs. 20% and women 50 vs. 27%) [1]. As early as the mid-1960s, associations between conventional neuroleptic treatment and metabolic abnormalities were reported. Atypical antipsychotics are newer drugs that are increasingly replacing the conventional neuroleptics due to better efficacy and side effects profile. However evidence suggests that some of the atypical antipsychotics may have even greater associations with dramatic weight gain, diabetes and dyslipidemia [2].\nIt is well demonstrated that excessive body weight is a clearly established factor for type 2 diabetes and cardiovascular disease in the general population. Changes in some glucose and lipid parameters are commonly reported in patients with all forms of severe mental illness (SMI) (psychosis, depression, bipolar disease). These metabolic changes are probably related to a combination of genetic predisposition, lifestyle factors and psychotropic treatments [3]. Moreover, the burden of weight gain may affect compliance with medication which may predispose psychiatric patients in great health risk. Thus, psychiatric patients appear to be at increased risk of high morbidity and mortality [4].\nIt becomes clearly understood that controlling and decreasing the weight gain of psychiatric patients should be a priority within their treatment program. It is argued that managing obesity in SMI patients is a challenging task as these patients may have impaired attention, motivation and memory that may impair their ability to follow weight loss program. Behavioral approaches that combine reduced dietary intake and increased physical activity are recommend as most favorable and effective strategy for weight management than pharmacological approaches in psychiatric obese population [5]. In healthy overweight and obese individuals life style interventions through diet and exercise produce significant weight loss and reductions in body fat. Recent studies of dietary and behavioral modification interventions have found small significant weight decreases in SMI patients on antipsychotic medication over short-term intervals [6]. Evidence also suggests significant improvements in the metabolic profile of obese psychiatric patients after weight loss interventions [7].\nThe long-term effects of nutritional interventions on several adiposity parameters and cardiometabolic parameters are not clearly understood. Previous studies have mainly reported the effects of weight loss on body weight and little is known for the effects on body composition. In addition, although metabolic abnormalities are well documented in patients taking antipsychotics [8], the effects of weight loss on metabolic regulation is not clearly described in psychiatric patients. The previous evidence is derived from controlled clinical trials of small number of patients or from a few naturalistic observational studies of inpatients. Thus, more observational studies of large number of psychiatric outpatients are required to assess management of weight gain and of metabolic disorders. 
In addition, previous conclusions are tempered by the short term duration of the studies and the small sample sizes used in those studies. Therefore the present study aimed to investigate the effects of a long term nutritional intervention on body weight, body composition and cardiovascular risk factors in a large number of patients with severe mental illness.", "[SUBTITLE] Subjects with SMI [SUBSECTION] A total of 989 psychiatric patients were recruited for the study (774 women and 215 men) and gave written informed consent. Patients were recommended to participate in the study by psychiatrists working either privately or in hospital offices in Thessaloniki (Greece). The study was carried out from January 2007 to November 2009. The study has been approved by the ethical committee of the Technological Educational Institute of Thessaloniki (Ref. No 20111). All patients were found competent by an independent psychiatrist, who was not involved in the study, to participate and to follow weight loss intervention at the enrollment visit. All patients continued on treatment with their medication. Antipsychotic drugs were being used by 28% of patients (n = 274), 30% of patients were taking antidepressants (n = 297), 23% of patients were taking both antipsychotics & antidepressants (n = 230) and 19% of patients (n = 288) were taking antipsychotics & antidepressants, as well other types of medication (e.g. acholytic, antiparkinson, antiepileptic). Medication was kept constant for every patient.\nA total of 989 psychiatric patients were recruited for the study (774 women and 215 men) and gave written informed consent. Patients were recommended to participate in the study by psychiatrists working either privately or in hospital offices in Thessaloniki (Greece). The study was carried out from January 2007 to November 2009. The study has been approved by the ethical committee of the Technological Educational Institute of Thessaloniki (Ref. No 20111). All patients were found competent by an independent psychiatrist, who was not involved in the study, to participate and to follow weight loss intervention at the enrollment visit. All patients continued on treatment with their medication. Antipsychotic drugs were being used by 28% of patients (n = 274), 30% of patients were taking antidepressants (n = 297), 23% of patients were taking both antipsychotics & antidepressants (n = 230) and 19% of patients (n = 288) were taking antipsychotics & antidepressants, as well other types of medication (e.g. acholytic, antiparkinson, antiepileptic). Medication was kept constant for every patient.\n[SUBTITLE] Anthropometric measurements [SUBSECTION] Prior to the baseline assessment, patients visited the dietitian for familiarization with study design and measurements. The dietitian explained the study design and measurements thoroughly and then patients' relevant questions were answered.\nAt the beginning of the study (baseline-visit A), at 3 mo, 6 mo and 9 mo of the nutritional intervention (visit B, C and visit D respectively), several anthropometric measurements were undertaken to assess the outcome of the nutritional intervention program. All the measurements were carried out by the same two dietitians.\nBody weight was measured on a standing scale calibrated to 0.1 kg (Seca digital scale). Body height was measured on a wall-mounted stadiometer. The subjects stood with legs parallel and shoulder-width apart. 
Waist circumference (WC) was measured at the end of normal expiration at the minimal waist (smallest horizontal circumference above the umbilicus and below the xiphoid process). Hip circumference (HP) was measured around the maximum circumference over the buttocks.\nBody Fat was measured by the bioelectrical impedance analysis (BIA, Akern version 1.31). During the 9 mo period, subjects were asked to visit the nutrition unit every 2 wks. At these visits, body weight, waist circumference and body fat were measured by the same dietitian. For patients who dropped out, body weight was recorded and BMI was calculated when the drop out occurred.\nPrior to the baseline assessment, patients visited the dietitian for familiarization with study design and measurements. The dietitian explained the study design and measurements thoroughly and then patients' relevant questions were answered.\nAt the beginning of the study (baseline-visit A), at 3 mo, 6 mo and 9 mo of the nutritional intervention (visit B, C and visit D respectively), several anthropometric measurements were undertaken to assess the outcome of the nutritional intervention program. All the measurements were carried out by the same two dietitians.\nBody weight was measured on a standing scale calibrated to 0.1 kg (Seca digital scale). Body height was measured on a wall-mounted stadiometer. The subjects stood with legs parallel and shoulder-width apart. Waist circumference (WC) was measured at the end of normal expiration at the minimal waist (smallest horizontal circumference above the umbilicus and below the xiphoid process). Hip circumference (HP) was measured around the maximum circumference over the buttocks.\nBody Fat was measured by the bioelectrical impedance analysis (BIA, Akern version 1.31). During the 9 mo period, subjects were asked to visit the nutrition unit every 2 wks. At these visits, body weight, waist circumference and body fat were measured by the same dietitian. For patients who dropped out, body weight was recorded and BMI was calculated when the drop out occurred.\n[SUBTITLE] Nutritional intervention [SUBSECTION] The intervention period lasted 9 months and consisted of 2 phases: a familiarization visit and an intensive 9 month nutritional intervention period. The dietary advice for weight control was given in each patient by a registered dietitian. It was based on a Mediterranean-style diet in combination with personalized healthy nutrition counselling. Each patient received personalized dietary regimen on the basis of dietary history and lifestyle. The dietary regimen was characterized by a moderate consumption of carbohydrates (50-55% of total energy per day) and a high fiber content, 15-20% protein and a fat intake of 30-35% of total energy per day. Moreover, patients were advised to consume fruits, vegetables, whole grains (legumes, rice, maize, and wheat) daily and to increase their consumption of olive oil. The dietary regimen was designed to produce an energy deficit of 500 kcal per week. The patients were visiting the dietitian every two weeks to discuss weight changes and treatment goals.\nThe Resting Metabolic Rate (RMR) was measured by indirect calorimetry (Fitmate Pro, Cosmed USA Inc.) during their first visit. All patients completed a physical activity record. RMR was multiplied by an activity factor of 1.3-1.5, according to the physical activity level of each patient, and daily energy requirements of each patient were estimated. 
The intervention program consisted primarily of dietary counseling, physical activity counseling and behavioral interventions in order to aid patients' adherence to a healthy life plan during the nutritional intervention. Counseling sessions were undertaken individually by each patient and included teaching healthful weight management techniques, meal planning, food shopping and preparation, portion control, techniques to differentiate emotional from psychological hunger etc. In terms of physical activity counseling, subjects were instructed to participate in light or moderate exercise at least 30 min 3-5 times per week.\nThe intervention period lasted 9 months and consisted of 2 phases: a familiarization visit and an intensive 9 month nutritional intervention period. The dietary advice for weight control was given in each patient by a registered dietitian. It was based on a Mediterranean-style diet in combination with personalized healthy nutrition counselling. Each patient received personalized dietary regimen on the basis of dietary history and lifestyle. The dietary regimen was characterized by a moderate consumption of carbohydrates (50-55% of total energy per day) and a high fiber content, 15-20% protein and a fat intake of 30-35% of total energy per day. Moreover, patients were advised to consume fruits, vegetables, whole grains (legumes, rice, maize, and wheat) daily and to increase their consumption of olive oil. The dietary regimen was designed to produce an energy deficit of 500 kcal per week. The patients were visiting the dietitian every two weeks to discuss weight changes and treatment goals.\nThe Resting Metabolic Rate (RMR) was measured by indirect calorimetry (Fitmate Pro, Cosmed USA Inc.) during their first visit. All patients completed a physical activity record. RMR was multiplied by an activity factor of 1.3-1.5, according to the physical activity level of each patient, and daily energy requirements of each patient were estimated. The intervention program consisted primarily of dietary counseling, physical activity counseling and behavioral interventions in order to aid patients' adherence to a healthy life plan during the nutritional intervention. Counseling sessions were undertaken individually by each patient and included teaching healthful weight management techniques, meal planning, food shopping and preparation, portion control, techniques to differentiate emotional from psychological hunger etc. In terms of physical activity counseling, subjects were instructed to participate in light or moderate exercise at least 30 min 3-5 times per week.\n[SUBTITLE] Biochemical measurements [SUBSECTION] Biochemical measurements were undertaken at the beginning of the study (baseline-Visit A), at 3 mo (Visit B), 6 mo (Visit C) and 9 mo (Visit D) of the nutritional intervention. Data regarding plasma glucose, total cholesterol, HDL cholesterol and triglycerides were recorded by the dietitian.\nBiochemical measurements were undertaken at the beginning of the study (baseline-Visit A), at 3 mo (Visit B), 6 mo (Visit C) and 9 mo (Visit D) of the nutritional intervention. Data regarding plasma glucose, total cholesterol, HDL cholesterol and triglycerides were recorded by the dietitian.\n[SUBTITLE] Statistical Analysis [SUBSECTION] Data are expressed as means and standard deviations (SD). Within-subject paired t-tests compared initial vs end point measures for subjects that completed the 9 mo intervention. 
Comparisons between completers and drop-outs were performed using independent sample t-tests. In order to compensate for missing data due to withdrawal, the last-observation-carried-forward (LOCF) method was used and paired t-tests were performed against the LOCF data as well. Correlation analysis was also carried out for associations between body weight change and body fat percentage (BF %) change over time (baseline to 3 mo, 6 mo and 9 mo). Statistical significance was taken as P < 0.05. The statistical analysis was processed with SPSS 11 for Windows (SPSS, Inc., Chicago, IL, USA).\nData are expressed as means and standard deviations (SD). Within-subject paired t-tests compared initial vs end point measures for subjects that completed the 9 mo intervention. Comparisons between completers and drop-outs were performed using independent sample t-tests. In order to compensate for missing data due to withdrawal, the last-observation-carried-forward (LOCF) method was used and paired t-tests were performed against the LOCF data as well. Correlation analysis was also carried out for associations between body weight change and body fat percentage (BF %) change over time (baseline to 3 mo, 6 mo and 9 mo). Statistical significance was taken as P < 0.05. The statistical analysis was processed with SPSS 11 for Windows (SPSS, Inc., Chicago, IL, USA).", "A total of 989 psychiatric patients were recruited for the study (774 women and 215 men) and gave written informed consent. Patients were recommended to participate in the study by psychiatrists working either privately or in hospital offices in Thessaloniki (Greece). The study was carried out from January 2007 to November 2009. The study has been approved by the ethical committee of the Technological Educational Institute of Thessaloniki (Ref. No 20111). All patients were found competent by an independent psychiatrist, who was not involved in the study, to participate and to follow weight loss intervention at the enrollment visit. All patients continued on treatment with their medication. Antipsychotic drugs were being used by 28% of patients (n = 274), 30% of patients were taking antidepressants (n = 297), 23% of patients were taking both antipsychotics & antidepressants (n = 230) and 19% of patients (n = 288) were taking antipsychotics & antidepressants, as well other types of medication (e.g. acholytic, antiparkinson, antiepileptic). Medication was kept constant for every patient.", "Prior to the baseline assessment, patients visited the dietitian for familiarization with study design and measurements. The dietitian explained the study design and measurements thoroughly and then patients' relevant questions were answered.\nAt the beginning of the study (baseline-visit A), at 3 mo, 6 mo and 9 mo of the nutritional intervention (visit B, C and visit D respectively), several anthropometric measurements were undertaken to assess the outcome of the nutritional intervention program. All the measurements were carried out by the same two dietitians.\nBody weight was measured on a standing scale calibrated to 0.1 kg (Seca digital scale). Body height was measured on a wall-mounted stadiometer. The subjects stood with legs parallel and shoulder-width apart. Waist circumference (WC) was measured at the end of normal expiration at the minimal waist (smallest horizontal circumference above the umbilicus and below the xiphoid process). 
Hip circumference (HP) was measured around the maximum circumference over the buttocks.\nBody Fat was measured by the bioelectrical impedance analysis (BIA, Akern version 1.31). During the 9 mo period, subjects were asked to visit the nutrition unit every 2 wks. At these visits, body weight, waist circumference and body fat were measured by the same dietitian. For patients who dropped out, body weight was recorded and BMI was calculated when the drop out occurred.", "The intervention period lasted 9 months and consisted of 2 phases: a familiarization visit and an intensive 9 month nutritional intervention period. The dietary advice for weight control was given in each patient by a registered dietitian. It was based on a Mediterranean-style diet in combination with personalized healthy nutrition counselling. Each patient received personalized dietary regimen on the basis of dietary history and lifestyle. The dietary regimen was characterized by a moderate consumption of carbohydrates (50-55% of total energy per day) and a high fiber content, 15-20% protein and a fat intake of 30-35% of total energy per day. Moreover, patients were advised to consume fruits, vegetables, whole grains (legumes, rice, maize, and wheat) daily and to increase their consumption of olive oil. The dietary regimen was designed to produce an energy deficit of 500 kcal per week. The patients were visiting the dietitian every two weeks to discuss weight changes and treatment goals.\nThe Resting Metabolic Rate (RMR) was measured by indirect calorimetry (Fitmate Pro, Cosmed USA Inc.) during their first visit. All patients completed a physical activity record. RMR was multiplied by an activity factor of 1.3-1.5, according to the physical activity level of each patient, and daily energy requirements of each patient were estimated. The intervention program consisted primarily of dietary counseling, physical activity counseling and behavioral interventions in order to aid patients' adherence to a healthy life plan during the nutritional intervention. Counseling sessions were undertaken individually by each patient and included teaching healthful weight management techniques, meal planning, food shopping and preparation, portion control, techniques to differentiate emotional from psychological hunger etc. In terms of physical activity counseling, subjects were instructed to participate in light or moderate exercise at least 30 min 3-5 times per week.", "Biochemical measurements were undertaken at the beginning of the study (baseline-Visit A), at 3 mo (Visit B), 6 mo (Visit C) and 9 mo (Visit D) of the nutritional intervention. Data regarding plasma glucose, total cholesterol, HDL cholesterol and triglycerides were recorded by the dietitian.", "Data are expressed as means and standard deviations (SD). Within-subject paired t-tests compared initial vs end point measures for subjects that completed the 9 mo intervention. Comparisons between completers and drop-outs were performed using independent sample t-tests. In order to compensate for missing data due to withdrawal, the last-observation-carried-forward (LOCF) method was used and paired t-tests were performed against the LOCF data as well. Correlation analysis was also carried out for associations between body weight change and body fat percentage (BF %) change over time (baseline to 3 mo, 6 mo and 9 mo). Statistical significance was taken as P < 0.05. 
The statistical analysis was processed with SPSS 11 for Windows (SPSS, Inc., Chicago, IL, USA).", "[SUBTITLE] Characteristics of SMI subjects and their baseline condition [SUBSECTION] Figure 1 presents the participants' flow during the 9 mo nutritional intervention. From the first drop-out sample, 82 subjects were males and 341 subjects were females, with average age 40.7 ± 11.8 y and average body weight 94.9 ± 21.2 kg. From the second drop-out sample, 70 subjects were males and 211 subjects were females, with average age 40.1 ± 11.2 y and average body weight 95.6 ± 23.1 kg (Figure 1). From the 3rd drop-out sample, 28 subjects were males and 112 females. Reasons for dropping out of the study included an inability or unwillingness to continue with the nutritional intervention, family problems, health problems and transportation. Table 1 shows the characteristics of the subjects obtained from the baseline investigation. At baseline, all patients were classified obese (BMI > 30 kg.m-2) with an average body weight of 94.9 ± 21.7 kg and an average BMI of 34.3 ± 6.9 kg.m-2. The ratio of men that completed the 9 mo nutritional intervention (completers) was significantly greater than the ratio of women (P = 0.009). No significant differences were found in anthropometric and biochemical characteristics between drop-outs and completers at baseline (P > 0.05) (Table 1).\nParticipants' Flow.\nBaseline characteristics\n[SUBTITLE] Effect of the nutritional intervention on body composition [SUBSECTION] Table 2 shows the change in adiposity parameters from baseline to 9 mo of the nutritional intervention in completers and drop-outs. Body weight, BMI, waist and hip decreased significantly from baseline to 3 mo, 6 mo and 9 mo of the intervention in both completers and drop-outs (P < 0.001). In addition, body fat %, body fat mass (kg) decreased significantly at 3 mo, 6 mo and 9 mo of the nutritional intervention relative to baseline in completers (P < 0.001). Baseline measurements of weight and BMI were not significantly different between completers and drop-outs (Table 2). Completers at visit B (3 mo) and visit C (6 mo) had significantly lower weight and BMI than patients who dropped out before visit B and visit C, respectively. 
Weight and BMI were not significantly different between completers at visit D (9 mo) and patients who dropped out before visit D. The average change of weight and BMI, however, was significantly higher in completers than drop-outs at 9 mo (Δ (weight) 9.7 ± 8.4 vs 5.9 ± 6.2 respectively, P < 0.001; (Δ (BMI) 3.6 ± 3.0 vs 2.1 ± 2.2 respectively, P < 0.001 ). RMR decreased significantly in completers at visit B and C compared to baseline (P < 0.001) (Table 2). The effect of nutritional intervention on body weight and body composition was confirmed when LOCF analysis was performed (Table 3). There were positive associations between change in body weight and BF % change in SMI patients (Visit A to Visit B, r = 0.46 (P < 0.001); Visit A to Visit C, r = 0.46 (P < 0.001); Visit A to Visit C, r = 0.62 (P < 0.001). There was no significant difference in weight loss between patients receiving different psychotropic medication (P > 0.05).\nChanges in parameters of adiposity during the 9 mo nutritional intervention\nValues are means ± SD. n refers to number of adiposity measurements obtained in each visit B, C and D, consequently the same number of baseline measurements is used for the comparisons. Significance differences were determined by paired t-tests; *P < 0.001 for the difference between baseline and visit B; †P < 0.001 for the difference between baseline and visit C, ††P < 0.001 for the difference between baseline and visit D. Symbols a, b show significant differences by independent t-tests between completes and drop-outs at visit B and C respectively (a P < 0.01, bP = 0.002).\nLast Observation Carried Forward Analysis (LOCF)\nSignificance differences were determined by paired t-tests; *P < 0.001 for the difference between baseline and visit B; †P < 0.001 for the difference between baseline and visit C, ††P < 0.001 for the difference between baseline and visit D\n[SUBTITLE] Effects of the nutritional intervention on biochemical parameters [SUBSECTION] Table 4 shows the change in plasma glucose and plasma lipid concentrations. Fasting plasma glucose concentrations and total cholesterol concentrations decreased significantly from baseline to 3 mo, 6 mo and 9 mo of the intervention (P < 0.05, P < 0.001, P < 0.001, respectively). Fasting plasma triglycerides concentrations decreased significantly at 6 mo and 9 mo of the nutritional intervention compared to baseline (P < 0.001). The nutritional intervention produced a small decrease in HDL-cholesterol compared to baseline but this was not statistically significant (P > 0.05) (Table 4). The effect of nutritional intervention on plasma glucose and plasma lipids was confirmed when LOCF analysis was performed (Table 3).\nChange in biochemical parameters during the 9 mo nutritional intervention\nValues are means ± SD. n refers to number of biochemical measures obtained in each visit B, C and D. Significance differences were determined by paired t-tests; *P < 0.001 for the difference between baseline and visit B; †P < 0.001 for the difference between baseline and visit C, ††P < 0.001 for the difference between baseline and visit D.", "Figure 1 presents the participants' flow during the 9 mo nutritional intervention. 
From the first drop-out sample, 82 subjects were males and 341 subjects were females, with average age 40.7 ± 11.8 y and average body weight 94.9 ± 21.2 kg. From the second drop-out sample, 70 subjects were males and 211 subjects were females, with average age 40.1 ± 11.2 y and average body weight 95.6 ± 23.1 kg (Figure 1). From the 3rd drop-out sample, 28 subjects were males and 112 females. Reasons for dropping out of the study included an inability or unwillingness to continue with the nutritional intervention, family problems, health problems and transportation. Table 1 shows the characteristics of the subjects obtained from the baseline investigation. At baseline, all patients were classified obese (BMI > 30 kg.m-2) with an average body weight of 94.9 ± 21.7 kg and an average BMI of 34.3 ± 6.9 kg.m-2. The ratio of men that completed the 9 mo nutritional intervention (completers) was significantly greater than the ratio of women (P = 0.009). No significant differences were found in anthropometric and biochemical characteristics between drop-outs and completers at baseline (P > 0.05) (Table 1).\nParticipants' Flow.\nBaseline characteristics", "Table 2 shows the change in adiposity parameters from baseline to 9 mo of the nutritional intervention in completers and drop-outs. Body weight, BMI, waist and hip decreased significantly from baseline to 3 mo, 6 mo and 9 mo of the intervention in both completers and drop-outs (P < 0.001). In addition, body fat %, body fat mass (kg) decreased significantly at 3 mo, 6 mo and 9 mo of the nutritional intervention relative to baseline in completers (P < 0.001). Baseline measurements of weight and BMI were not significantly different between completers and drop-outs (Table 2). Completers at visit B (3 mo) and visit C (6 mo) had significantly lower weight and BMI than patients who dropped out before visit B and visit C, respectively. Weight and BMI were not significantly different between completers at visit D (9 mo) and patients who dropped out before visit D. The average change of weight and BMI, however, was significantly higher in completers than drop-outs at 9 mo (Δ (weight) 9.7 ± 8.4 vs 5.9 ± 6.2 respectively, P < 0.001; (Δ (BMI) 3.6 ± 3.0 vs 2.1 ± 2.2 respectively, P < 0.001 ). RMR decreased significantly in completers at visit B and C compared to baseline (P < 0.001) (Table 2). The effect of nutritional intervention on body weight and body composition was confirmed when LOCF analysis was performed (Table 3). There were positive associations between change in body weight and BF % change in SMI patients (Visit A to Visit B, r = 0.46 (P < 0.001); Visit A to Visit C, r = 0.46 (P < 0.001); Visit A to Visit C, r = 0.62 (P < 0.001). There was no significant difference in weight loss between patients receiving different psychotropic medication (P > 0.05).\nChanges in parameters of adiposity during the 9 mo nutritional intervention\nValues are means ± SD. n refers to number of adiposity measurements obtained in each visit B, C and D, consequently the same number of baseline measurements is used for the comparisons. Significance differences were determined by paired t-tests; *P < 0.001 for the difference between baseline and visit B; †P < 0.001 for the difference between baseline and visit C, ††P < 0.001 for the difference between baseline and visit D. 
Symbols a, b show significant differences by independent t-tests between completes and drop-outs at visit B and C respectively (a P < 0.01, bP = 0.002).\nLast Observation Carried Forward Analysis (LOCF)\nSignificance differences were determined by paired t-tests; *P < 0.001 for the difference between baseline and visit B; †P < 0.001 for the difference between baseline and visit C, ††P < 0.001 for the difference between baseline and visit D", "Table 4 shows the change in plasma glucose and plasma lipid concentrations. Fasting plasma glucose concentrations and total cholesterol concentrations decreased significantly from baseline to 3 mo, 6 mo and 9 mo of the intervention (P < 0.05, P < 0.001, P < 0.001, respectively). Fasting plasma triglycerides concentrations decreased significantly at 6 mo and 9 mo of the nutritional intervention compared to baseline (P < 0.001). The nutritional intervention produced a small decrease in HDL-cholesterol compared to baseline but this was not statistically significant (P > 0.05) (Table 4). The effect of nutritional intervention on plasma glucose and plasma lipids was confirmed when LOCF analysis was performed (Table 3).\nChange in biochemical parameters during the 9 mo nutritional intervention\nValues are means ± SD. n refers to number of biochemical measures obtained in each visit B, C and D. Significance differences were determined by paired t-tests; *P < 0.001 for the difference between baseline and visit B; †P < 0.001 for the difference between baseline and visit C, ††P < 0.001 for the difference between baseline and visit D.", "This study shows that a personalized nutritional intervention is effective in decreasing adiposity and metabolic parameters in patients with severe mental illness. Previous lifestyle interventions have clearly reported weight loss in patients with severe mental illness but these results were derived from small number of patients and over short term intervals [6,9]. The present study used a large sample size and a 9 month nutritional intervention in order to investigate changes on both adiposity and metabolic parameters in patients with severe mental illness.\nThe present study found a progressive statistically significant decrease in mean adiposity parameters throughout the duration of monitoring compared to baseline. There is a paucity of clinical trials of management of obesity in patients with severe mental illness. The randomized controlled studies found significant weight reductions or modest reductions on body weight in patients taking antipsychotic medication [10-17]. A small number of nonrandomized controlled studies reported significant weight change [18,19], while Ball and colleagues [20] reported no significant weight change between the nonrandomized intervention group and control group. The present study found a mean weight loss at 3 months of 4.3 kg which is in agreement with other studies [11-18]. However, the evidence is poor for the long term effects of nutritional intervention on adiposity parameters. In our study, the mean weight loss of 7.4 kg at 6 months is greater compared to previous open studies [16,21,22]. The mean weight reduction of 9.6 kg at 9 mo was progressive and significant and exceeds the weight loss achieved in previous long term studies with behavioral treatment programs [23,24]. In addition weight loss was also found significant and continual in the drop-outs which probably indicates a general efficacy of the present nutritional intervention. 
The body weight management in our patients was undertaken with personalized dietetic treatment and lifestyle counseling. Patients were seen by a dietitian who assessed weight changes and treatment goals every two weeks. The greater weight loss in our study might indicate that a personalized nutritional intervention can produce significant weight loss in psychiatric patients who manage to adhere to the nutritional intervention for more than three months.\nThe present nutritional intervention not only reduced body weight but demonstrated continual significant decrease in body fat mass (kg) and percent of body fat (%) in our patients. Skouroliakou et al. [17] reported significant reduction in fat mass but in the short term. The present decrease in fat mass is demonstrated for the first time in a long term nutritional intervention in SMI patients. The mean fat mass reduction was continual and significant throughout the study (e.g. 6 kg fat mass loss at 3 mo; 5.9 kg fat mass loss at 6 mo and 8 kg fat mass loss at 9 mo). BMI was also significantly decreased verifying the decrease in total body fat and general obesity. Moreover waist circumference, a well documented proxy for visceral obesity [25], was significantly decreased in our patients. Consistent with previous studies [26], weight loss produced a decrease in RMR. These findings in SMI patients are comparable to reduction of obesity-related factors with lifestyle modification within the general obese population [27]. Recent consensus guidelines for patients with severe mental illness recommend the measurements of both BMI and WC to monitor cardiovascular risk factors in this population [28]. The reductions found in waist circumference and body mass in our SMI patients indicate improvements in the risk factors associated with cardiovascular disease.\nThe reduction in fasting glucose was significant throughout the nutritional intervention compared to baseline. This is important since abnormalities in glucose metabolism have been associated with the use of antipsychotic treatment [29]. The significant reduction in fasting glucose may be primarily due to weight loss since medication was kept constant. Similarly there were significant reductions in total cholesterol and triglycerides during the 9 month nutritional intervention. These reductions in lipids concentrations are also important since psychiatric patients have been shown to have elevated dyslipidemia compared to general population [30]. Both total cholesterol and triglycerides dropped significantly since weight loss became significant throughout the intervention. The present results justify the important use of weight reducing programs and especially of nutritional intervention in the management of metabolic dysregulation in patients with severe mental illness.\n[SUBTITLE] Limitations [SUBSECTION] By design the present study did not include a control group, so it is unknown whether a similar group of obese patients would have lost or gained weight over the same time period. Ideally, longer term randomized controlled trials are needed to assess the effectiveness of the nutritional interventions. In addition, we can not draw conclusions on the long-term effectiveness of the intervention by means of weight maintenance as a follow-up period was not included. However, the present results are derived from a relatively large sample compared to previous shorter term or longer term studies of small-subject numbers. Another limitation of the present study is the large drop-out. 
It is recognized that psychiatric disorders can be a significant barrier to weight loss success in obese individuals, thus discontinuance of the study could have been expected. In a meta-analysis of compliance studies, DiMatteo et al. showed that patients with depression had a 3-fold higher rate of noncompliance with medical treatments, including diet recommendations [31]. However, the significant results from LOCF analysis confirm the efficacy of the 9 mo nutritional intervention in terms of successful weight loss and improvement of the metabolic profile in our SMI patients.", "By design the present study did not include a control group, so it is unknown whether a similar group of obese patients would have lost or gained weight over the same time period. Ideally, longer term randomized controlled trials are needed to assess the effectiveness of the nutritional interventions. In addition, we can not draw conclusions on the long-term effectiveness of the intervention by means of weight maintenance as a follow-up period was not included. However, the present results are derived from a relatively large sample compared to previous shorter term or longer term studies of small-subject numbers. Another limitation of the present study is the large drop-out. It is recognized that psychiatric disorders can be a significant barrier to weight loss success in obese individuals, thus discontinuance of the study could have been expected. In a meta-analysis of compliance studies, DiMatteo et al. showed that patients with depression had a 3-fold higher rate of noncompliance with medical treatments, including diet recommendations [31]. However, the significant results from LOCF analysis confirm the efficacy of the 9 mo nutritional intervention in terms of successful weight loss and improvement of the metabolic profile in our SMI patients.", "This study has important clinical implication, indicating the effectiveness of a simple nutritional intervention on adiposity and lipid regulation which is important in psychiatric patients who are a high risk group for the development of cardiovascular disease. 
The present results show that obese patients with severe mental illness can achieve weight control and improve cardiometabolic profile by following a simple personalized nutritional program for 9 months.", "The authors declare that they have no competing interests.", "FT contributed to the interpretation of the data, analysis of the results and prepared this manuscript. KP, VT, NA and IP were involved in data collection and analysis of the results. GV was involved in the statistical analysis of the revised manuscript. MH was the principal investigator and assisted in data collection, interpretation of the results and preparation of the manuscript. All authors read and approved the final version of the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-244X/11/31/prepub\n" ]
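The intervention text in the record above estimates each patient's daily energy requirement by multiplying the measured resting metabolic rate (RMR) by an activity factor of 1.3-1.5. A minimal sketch of that arithmetic follows; the function name and example values are illustrative only and are not taken from the study.

```python
def daily_energy_requirement(rmr_kcal_per_day: float, activity_factor: float) -> float:
    """Estimate daily energy needs as measured RMR times an activity factor (1.3-1.5)."""
    if not 1.3 <= activity_factor <= 1.5:
        raise ValueError("activity factor outside the 1.3-1.5 range described in the record")
    return rmr_kcal_per_day * activity_factor

# Example: an RMR of 1600 kcal/day from indirect calorimetry and a moderately
# active patient (factor 1.4) give an estimated requirement of 2240 kcal/day.
print(daily_energy_requirement(1600, 1.4))
```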
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
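The Statistical Analysis text in the record above describes paired t-tests of baseline versus end-point measures, last-observation-carried-forward (LOCF) imputation for patients who withdrew, and correlations between weight change and body-fat-percentage change. Below is a minimal sketch of that style of analysis, assuming a hypothetical wide-format table with one weight column per visit; the column names and values are invented, and the authors used SPSS 11 rather than Python.

```python
import pandas as pd
from scipy import stats

# Hypothetical wide-format data: one row per patient, one column per visit
# (A = baseline, D = 9 months); NaN marks visits missed after a patient dropped out.
weights = pd.DataFrame({
    "visit_A": [95.0, 102.3, 88.1, 110.4, 91.7],
    "visit_B": [91.2,  99.0, 85.5, 108.0, None],
    "visit_C": [88.4,  97.1, None, 105.2, None],
    "visit_D": [86.0,  None, None, 103.9, None],
})

# Completers-only analysis: paired t-test of baseline vs 9-month weight.
completers = weights.dropna()
t_comp, p_comp = stats.ttest_rel(completers["visit_A"], completers["visit_D"])

# LOCF analysis: carry each patient's last available weight forward, then repeat the test.
locf = weights.ffill(axis=1)
t_locf, p_locf = stats.ttest_rel(locf["visit_A"], locf["visit_D"])

print(f"completers: t={t_comp:.2f}, p={p_comp:.3f}; LOCF: t={t_locf:.2f}, p={p_locf:.3f}")
```

The correlation between weight change and body-fat-percentage change reported in the record would follow the same pattern, applying stats.pearsonr to the per-patient baseline-to-visit differences.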
Reductions in malaria and anaemia case and death burden at hospitals following scale-up of malaria control in Zanzibar, 1999-2008.
21332989
In Zanzibar, the Ministry of Health and partners accelerated malaria control from September 2003 onwards. The combined impact of scaling up insecticide-treated nets (ITN), indoor residual spraying (IRS) and artemisinin-based combination therapy (ACT) on the malaria burden was assessed at six out of seven in-patient health facilities.
BACKGROUND
Numbers of outpatient and inpatient cases and deaths were compared between 2008 and the pre-intervention period 1999-2003. Reductions were estimated by segmented log-linear regression, adjusting the effect size for time trends during the pre-intervention period.
METHODS
In 2008, for all age groups combined, malaria deaths had fallen by an estimated 90% (95% confidence interval 55-98%) (p < 0.025), malaria in-patient cases by 78% (48-90%), and parasitologically-confirmed malaria out-patient cases by 99.5% (92-99.9%). Anaemia in-patient cases decreased by 87% (57-96%); anaemia deaths and out-patient cases declined without reaching statistical significance due to small numbers. Reductions were similar for children under-five and older ages. Among under-fives, the proportion of all-cause deaths due to malaria fell from 46% in 1999-2003 to 12% in 2008 (p < 0.01) and that for anaemia from 26% to 4% (p < 0.01). Cases and deaths due to other causes fluctuated or increased over 1999-2008, without consistent difference in the trend before and after 2003.
RESULTS
Scaling up effective malaria interventions reduced the malaria-related burden at health facilities by over 75% within 5 years. In high-malaria settings, intensified malaria control can substantially contribute to reaching the Millennium Development Goal 4 target of reducing under-five mortality by two-thirds between 1990 and 2015.
CONCLUSIONS
[ "Anemia", "Antimalarials", "Artemisinins", "Child", "Child, Preschool", "Communicable Disease Control", "Drug Therapy, Combination", "Health Policy", "Hospitals", "Humans", "Incidence", "Infant", "Infant, Newborn", "Lactones", "Malaria", "Mosquito Control", "Survival Analysis", "Tanzania" ]
3050777
null
null
Methods
[SUBTITLE] Sample of hospitals [SUBSECTION] Out of Zanzibar's 126 total health facilities, seven are in-patient facilities; this analysis covers six of them: Chakechake, Mkoani and Wete district hospitals; and the three large primary health care centres with in-patient services (Kivunge and Makunduchi on Unguja island, and Micheweni centre on Pemba island). Zanzibar's main referral hospital was excluded because surveillance data for the past years were not available. [SUBTITLE] Indicators [SUBSECTION] Monthly records of outpatient cases, inpatient cases and inpatient deaths, stratified by age <5 years old and ≥5 years old, were coded according to recorded diagnosis, i.e. malaria, anaemia and 'other conditions'. Inpatient malaria cases and deaths include those that were parasitologically-confirmed and those admitted based on clinical judgement. For malaria outpatient cases, analyses were done both for cases with a confirmed parasitological diagnosis and for the total number of cases, including those diagnosed presumptively. To facilitate interpretation of trends in outpatient cases, we computed slide positivity rate, by dividing the number of slides positive for malaria by the total number of slides examined. Cases and deaths attributed to anaemia, to which malaria is an important contributor in high-endemic settings [5], were analyzed as a proxy indicator of malaria-related burden. Possible confounding of the interventions' effect by external, non-program related determinants was explored by analysing the time trend in monthly rainfall and temperature, the principal short-term predictor of malaria transmission intensity and burden [6]. [SUBTITLE] Statistical analysis of time trends [SUBSECTION] Impact was evaluated by comparing 2008 indicators with those in the pre-intervention period 1999--2003 using a segmented regressive model of an interrupted time series[7]. Impact is defined as the ratio between the observed and a counterfactual estimated indicator levels in 2008, the latter calculated assuming a (hypothetical) continuation of the pre-intervention time trend throughout 2008, thereby adjusting for time trends occurring independent of the intervention's effect. The 95% confidence intervals (CI) around impact estimates were computed based on CIs around regression coefficient estimates. Changes over time in proportions of cases or deaths due to malaria or other causes were evaluated by Chi-squared test. The exact Fisher test was used where the chi-squared test was not suitable due to small numbers, i.e. where the expected value in any of the cells of a contingency table was below 5.
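The statistical-analysis text above defines impact as the ratio of observed 2008 levels to a counterfactual obtained by extrapolating the 1999-2003 trend, estimated with a segmented log-linear regression of the interrupted monthly series. The sketch below illustrates only the counterfactual-projection idea under simplifying assumptions (ordinary least squares on log counts with a small continuity constant, no seasonality, autocorrelation or level-change terms, and no confidence intervals); it is not the authors' model specification, and the function name and toy data are invented.

```python
import numpy as np
import statsmodels.api as sm

def percent_reduction_vs_counterfactual(month_index, counts, first_intervention_month, eval_months):
    """Fit a log-linear trend to pre-intervention months, extrapolate it as a counterfactual,
    and return the percentage reduction of observed totals in the evaluation months."""
    month_index = np.asarray(month_index, dtype=float)
    counts = np.asarray(counts, dtype=float)
    pre = month_index < first_intervention_month
    # Log-linear fit on the pre-intervention segment (0.5 added to allow zero counts).
    fit = sm.OLS(np.log(counts[pre] + 0.5), sm.add_constant(month_index[pre])).fit()
    # Counterfactual: the pre-intervention trend projected onto the evaluation months.
    counterfactual = np.exp(fit.predict(sm.add_constant(month_index[eval_months]))) - 0.5
    observed = counts[eval_months]
    return 100.0 * (1.0 - observed.sum() / counterfactual.sum())

# Toy example: 120 months (1999-2008), intervention from month 57 (September 2003),
# evaluation over the final 12 months (2008).
months = np.arange(120)
toy_counts = np.where(months < 57, 200 + 0.2 * months, 60.0)
print(round(percent_reduction_vs_counterfactual(months, toy_counts, 57, months >= 108), 1))
```

In the study itself the confidence intervals around the impact estimates come from the confidence intervals of the regression coefficients, which this sketch omits.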
null
null
null
null
[ "Background", "Sample of hospitals", "Indicators", "Statistical analysis of time trends", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "Before scale-up of malaria control activities in 2004-2005, malaria was the leading cause of illness and death in Zanzibar, accounting for more than 50% of in-patient cases and deaths in hospitals and nearly 40% of all out-patient consultations. Endemicity was classified as moderate to high, with transmission occurring for more than 6 months per year [1,2]. In 2003, malaria prevalence in sampled 60 clusters of Shehia (the smallest administrative unit with estimated population of 3,000) ranged between 9% and 50% in children under five years of age and other age groups, with Plasmodium falciparum the predominant malaria species [2]. In the year 2000, chloroquine, the first-line drug then, had a treatment failure of 60% at 14 days follow up [3].\nBetween end-2003 to 2006, in response to the enormous malaria burden and to the inadequacy of chloroquine treatment, the Zanzibar Ministry of Health and Social Welfare and partners initiated a concerted effort to accelerate malaria control activities. With support of the Global Fund to Fight AIDS, Tuberculosis and Malaria (starting in 2005), the USA President's Malaria Initiative, the Italian Cooperation through WHO, and other partners, the government scaled up four key malaria control interventions: deployment of artemisinin-based combination therapy (ACT), distribution of insecticide-treated nets (ITNs), including long-lasting insecticidal nets (LLINs), indoor residual spraying (IRS) and intermittent preventive treatment with sulphadoxine-pyrimethamine for pregnant women (SP-IPTp).\nIn September 2003, artemether-lumefantrine was introduced in all public health facilities, free-of-charge, as the first-line treatment for uncomplicated malaria, regardless of age (Zanzibar Ministry of Health and Social Welfare, unpublished report, 2002), while SP-IPTp was initiated in 2004. From 2004, ITNs were distributed for free, to pregnant women and children under five years of age attending antenatal clinics in public health facilities; LLINs followed by mid-2005. IRS with lambdacyhalothrin was implemented since June-July 2006, covering nearly all households (one round in 2006, two rounds in 2007 and one round in 2008), with the exception of the main town's centre. In May 2007, 74% children under-5 and 73% pregnant women used an ITN. In addition, nearly 95% of houses had been sprayed with an insecticide during the previous six months [4].\nThis study reports on the impact of the interventions' scale up in six out of seven Zanzibar's in-patient health facilities, based on analysis of time trends in malaria-related and overall child and adult morbidity and mortality over a 10-year period (1999-2008). Results are discussed in light of international goals and targets for reducing malaria and improving child health at a population level.", "Out of Zanzibar's 126 total health facilities, seven are in-patient facilities; this analysis covers six of them: Chakechake, Mkoani and Wete district hospitals; and the three large primary health care centres with in-patient services (Kivunge and Makunduchi on Unguja island, and Micheweni centre on Pemba island). Zanzibar's main referral hospital was excluded because surveillance data for the past years were not available.", "Monthly records of outpatient cases, inpatient cases and inpatient deaths, stratified by age <5 years old and ≥5 years old, were coded according to recorded diagnosis, i.e. 
malaria, anaemia and 'other conditions'.\nInpatient malaria cases and deaths include those that were parasitologically-confirmed and those admitted based on clinical judgement. For malaria outpatient cases, analyses were done both for cases with a confirmed parasitological diagnosis and for the total number of cases, including those diagnosed presumptively. To facilitate interpretation of trends in outpatient cases, we computed slide positivity rate, by dividing the number of slides positive for malaria by the total number of slides examined.\nCases and deaths attributed to anaemia, to which malaria is an important contributor in high-endemic settings [5], were analyzed as a proxy indicator of malaria-related burden.\nPossible confounding of the interventions' effect by external, non-program related determinants was explored by analysing the time trend in monthly rainfall and temperature, the principal short-term predictor of malaria transmission intensity and burden [6].", "Impact was evaluated by comparing 2008 indicators with those in the pre-intervention period 1999--2003 using a segmented regressive model of an interrupted time series[7]. Impact is defined as the ratio between the observed and a counterfactual estimated indicator levels in 2008, the latter calculated assuming a (hypothetical) continuation of the pre-intervention time trend throughout 2008, thereby adjusting for time trends occurring independent of the intervention's effect. The 95% confidence intervals (CI) around impact estimates were computed based on CIs around regression coefficient estimates.\nChanges over time in proportions of cases or deaths due to malaria or other causes were evaluated by Chi-squared test. The exact Fisher test was used where the chi-squared test was not suitable due to small numbers, i.e. where the expected value in any of the cells of a contingency table was below 5.", "Malaria in-patient cases were fairly stable between 1999 and 2002 but declined gradually thereafter, both among children under-5 (Figure 1a) and in the older age group (Figure 1b). A similar decline was seen for anaemia cases, although among children under-five this decline started already around the year 2000. In contrast, hospitalizations due to other causes tended to increase throughout the evaluation period. This resulted in a stable number of total hospitalizations in the older age group (Figure 1d). Among children under-five, the number of hospitalizations decreased, while the percentage of malaria cases decreased from 54% in 1999 to 17% in 2008 (Figure 1c).\nIn-patient cases due to malaria, anaemia and other causes in children under 5 years and >5 years old, 6 hospitals in Zanzibar.\nHospital deaths showed similar time patterns across causes for both age groups (Figure 2a and 2b). Among under-fives, among all-cause deaths, the proportion due to malaria fell from 46% in 1999 to 12% in 2008 (p < 0.01)(Figure 2c) and that due to anaemia from 26% to 4% (p < 0.01). Similarly, in the older age group the proportion of malaria deaths decreased from 52% in 1999 to 4% in 2008 (p < 0.01)(Figure 2d) and that due to anaemia from 8% to 4% (p < 0.05). 
Deaths due to other causes did not change or rather increased, with no consistent difference in the time trend before and after 2003.\nIn-patient deaths due to malaria, anaemia and other causes in children under 5 years and >5 years old, 6 hospitals in Zanzibar.\nWithin each of the six individual health facilities, similar declines were observed in the number of malaria in-patient cases (Figure 3a), although the onset of malaria-related reductions varied among facilities. In Chakechake hospital, malaria in-patient cases peaked in 2004 and then decreased 3-fold by 2007--2008 as compared to the 1999 level. Kivunge and Makunduchi showed similar but lower peaks in 2004 with subsequent declines to below the 1999 level. In the other 3 health facilities, no major peaks were observed: malaria cases were stable initially, declining from 2003 onwards in Wete and Micheweni, and declining throughout the 10-year period in Mkoani. Malaria deaths also declined in each of the individual hospital, starting in 2002--2003 and reaching numbers well below the 1999 levels by 2008 (Figure 3b).\nIn-patient (a) cases and (b) deaths due to malaria, children under-5 years, in individual hospitals.\nMalaria out-patient cases totalled 24,000 in 1999, among children under-5, out of which 9,000 were microscopically confirmed (Figure 4a). These numbers declined from 2003 onwards, reaching 5,000 total cases of which only 31 were confirmed in 2008. The decline was more marked for confirmed cases and this was paralleled by a similar decrease in the slide positivity rate (Figure 4c). Numbers of slides examined also fell, in line with the number of suspected cases; both of these indicators decreasing around three-fold between 1999 and 2008 (Figure 4d). A similar pattern was observed for the older age group (Figure 4b). The number of inpatient anaemia cases declined in a similar manner, with about a 2-fold decrease by 2008 compared to 1999-2001 levels.\nOut-patient cases due to malaria, anaemia and other causes, (a) children under-5 and (b) older ages; (c) malaria slide positivity rates and (d) numbers of slides examined for malaria, 6 hospitals in Zanzibar.\nAdjusting for pre-intervention time trends, malaria-attributed in-patient and out-patient cases and deaths decreased by 76% or more by 2008 compared to 1999-2003, both age groups (Table 1; p < 0.025 for each indicator). In-patient anaemia cases decreased by 85% and 90% among under-fives and over-fives, respectively (p < 0.025), with non-significant decrease for anaemia deaths (by 38% in under-fives and 66% for older ages). In contrast, no difference (and sometimes an increase) was observed in the number of cases and deaths due to other causes, without any consistent difference in time trends before and after 2003 (Table 1).\nMalaria and non-malaria cases and deaths in 2008 compared to the pre-intervention period 1999--2003, in six out of seven hospitals in Zanzibar\nNotes:\nPositive values indicate a decline; negative percentages indicate an increase from 1999-2003 to 2008.\n‡ Significant difference. A 95% CI not including the 0% value indicates that the difference from pre-intervention period to 2008 was statistically significant (p < 0.025).\nRainfall peaked in March--May and October--November (Figure 5). Malaria and anaemia inpatient cases varied seasonally with yearly peaks in May-June and November-December. 
From 2006 onwards, as malaria and anaemia cases decreased, the seasonal peaks in these hospital indicators became less pronounced.\nSeasonal patterns in malaria and anaemia in-patient cases, children under-5, and monthly rainfall.", "This rapid impact evaluation shows that scale-up of ACT as first-line treatment of malaria, combined with vector control using ITNs/LLINs and IRS resulted a dramatic decline in the malaria burden. Within four years of intervention scale-up, malaria deaths, hospitalizations, laboratory-confirmed outpatient cases and slide positivity rates fell by 76% or more, both in children under-5 years and older age groups.\nUnconfirmed (suspected) outpatient cases and numbers of slides examined also fell, but to a lesser extent (two- to three-fold), and starting later than laboratory-confirmed indicators. This suggests that the decline in confirmed malaria cases reflect a real reduction in malaria-attributable burden, and not an artefact related to changing laboratory testing patterns. The lesser decline in suspected malaria cases compared to parasitologically-confirmed cases or malaria hospitalizations and deaths confirms the notion that in endemic settings a large proportion of suspected cases is not due to malaria. This supports the WHO's 2010 recommendation for parasitological confirmation using either microscopy or RDT of all cases as a condition for ACT-based treatment, including in children under-five years in high-endemic Africa for whom presumptive treatment was recommended until 2010 - in an effort to contain costs of ACT-based treatment and to preserve ACT efficacy [8].\nHospitalizations and deaths due to anaemia fell in parallel with the malaria-attributed events, confirming the importance of malaria as an underlying cause, especially in children under-five years. These observations are consistent with previous studies that showed steady reductions in childhood anaemia in response to malaria control [5], and with observed correlation between rates of malaria and of blood transfusions in young children [9]. Anaemia represented a much larger proportion of hospital deaths than hospital admissions, e.g. among under-fives in 1999, 26% and 8.5%, respectively. This difference illustrates the poor prognosis of severe anaemia cases in Zanzibar, which is possibly due to varying availability and quality of blood transfusion services and late presentation of patients [10,11].\nThe decline in malaria-related burden started around 2003, when ACT was introduced, although for in-patient cases and deaths in children under five years it started slightly earlier, around 2002, possibly reflecting the increasing use, since 2001, of sulphadoxine-pyrimethamine as second-line treatment. In addition, ongoing socio-economic development and urbanization may have led to a better health over time in including malaria burden, before and during the intensified malaria control [12]. However, impact estimates using segmented log-linear regression (Table 1) were adjusted for such trends during pre-intervention period and the decrease in malaria was observed against an increase in non-malaria attendances - which may be the result of improved health services and access in recent years.\nAcross all 147 out-patient peripheral health facilities in Zanzibar (primary health care units), confirmed malaria cases fell by 73% between 1999 and 2008 while the slide positivity rate fell from 36% to 1.5% (Zanzibar Malaria Control Programme, unpublished, 2010). 
Over the same period, the slide testing rate of all outpatient consultancies in the peripheral facilities increased from 6% to 30% as a result of RDT roll-out.\nIn the weekly case-based surveillance system, which covers approximately one-third of health facilities, slide positivity rate was 3% in peak malaria months (April-June) during 2008 [13]. Reductions in parasitologically-confirmed malaria out-patient cases furthermore fit with results from a nation-wide population survey in May 2007, showing parasite prevalence of 0.4% in children <5 years old and 0.9% in all ages [4].\nA 2007 study in North A district of Unguja using surveillance records in 13 public health facilities found a decline in under-five mortality by 52% in 2006 compared to 2003. Similarly, malaria-related admissions, blood transfusions, and malaria-attributed mortality decreased significantly by 77%, 67% and 75%, respectively, between 2002 and 2005 in children under five. While climatic conditions favourable for malaria transmission persisted throughout the observational period, additional distribution of LLINs in early 2006 resulted in a 10-fold reduction of malaria parasite prevalence [14].\nIn response to its current low endemicity, Zanzibar has since 2008 shifted its malaria surveillance system to weekly reporting of laboratory-based confirmed malaria cases. A Malaria Early Epidemic Detection System (MEEDS) is now operational in 52 health facilities, with the plan to expand it to all facilities by 2011. The next step for enhancing disease surveillance should be reporting and investigation of individual in-patient cases and deaths, which at low transmission levels represent a failure of the health system in adequately treating both uncomplicated and severe malaria cases. Future reporting of individual case records will also enable more precise geographical tracking of remaining transmission foci or \"hotspots\" and resurgences, as well as to identify risk groups and factors.\nDespite Zanzibar's enormous success in reducing malaria, the risk of an explosive resurgence is still very real. This is not the first time that the islands have achieved such dramatic decline in malaria burden. In the 1970s malaria had been reduced to very low levels through IRS with dichlorodiphenyltrichloroethane (DDT), only to resurge again once partner funds decreased and IRS was stopped [15]. Aggressive malaria control activities and adequate funding therefore need to be maintained to keep the risk of malaria resurgence near to zero.\nSubstantial decreases in malaria burden have also been reported by other high-endemic sub-Saharan African countries that achieved high coverage of ACT, LLINs and/or IRS. In Zambia, as artemether-lumefantrine was introduced as first-line treatment, and LLINs were distributed over 2002-2008, hospital admissions and deaths, and to a lesser extent outpatient cases from malaria and anaemia, decreased substantially. Declines were more marked in children under five than among older ages, and time trends were consistent across the indicators. Repeated household surveys demonstrated parallel decreases in parasite prevalence and anaemia among children under five years. In addition, among children under five years, both all-cause mortality from household surveys and hospital-recorded deaths fell by half, in a markedly similar time pattern [16]. 
Importantly, these downward trends were followed by levelling off in 2009 in malaria admissions and deaths - including a major resurgence in two provinces, where parasite prevalence among children again rose according to a 2010 survey [17,18]. This rebound, possibly related to decay of insecticide and physical deterioration of ITNs distributed several years before, underscores the importance also for Zanzibar to maintain malaria control, surveillance and funding to prevent similar resurgence.\nIn Bioko Island, Equatorial Guinea, four years after achieving high intervention coverage, repeated household surveys found a decrease in all-cause under-five mortality of 60% from 2004 to 2008, and reductions of 70% in parasite prevalence, 90% in anaemia and 60% in reported fevers among children under-5 [19]. In Rwanda, within two years of nationwide implementation of LLIN distribution and ACT as first-line treatment, in-patient malaria cases and death in nine hospitals and 10 health centers sampled throughout 10 districts fell by 55% and 67% in children under-five, respectively. Non-malaria cases and deaths remained stable or increased [20]. Similarly, in Ethiopia, in a convenience sample of public facilities with relatively complete data, the in-patient case and death burden decreased by 73% and 62%, respectively, after the first two years of scaled-up LLIN and ACT usage [20]. In Sao Tome and Principe, after three years of intensified interventions with IRS, LLINs, ACT and SP-IPTp, malaria-attributed outpatient consultations, hospitalizations, and deaths decreased by more than 85%, 80%, and 95%, respectively, in all age groups [21]. In The Gambia, a similar retrospective analysis at four sites, showed between 2003 and 2007 a significant decline in malaria slide positivity rate, malaria admissions and malaria-related deaths. The same study also demonstrated a significant increase in mean haemoglobin concentrations for all-cause admissions (12 g/l) and age of paediatric malaria admissions (from 3.9 to 5.6 years) [9].\nWhen interpreting the results presented, the limitations of health facility-based studies should not be underestimated [22]. Importantly, the use of a five-year period (1999-2003) as reference was not necessarily long enough to provide a stable pre-intervention baseline, given the historical resurgence of malaria in Zanzibar in the past [23]. In addition, a longer period should ideally be used as the post-intervention. Therefore, point estimates of impact based on a five-year baseline and one-year post-intervention must, therefore, be regarded as indicative, rather than precise effect sizes.\nIt should be also considered that time trends observed in health facility statistics may not reflect trends in malaria burden at the population level, if the completeness of case or death notifications and/or access to health facilities changes over the evaluation period, and in particular if notification fraction changed differently for malaria versus other causes of attendance. As a result, the decrease in malaria burden may be of lesser magnitude than that observed in the hospitals sampled. 
Of note, the reduction in all-cause under-five mortality may be lower at a population level than shown here from hospitals data, because typically in sub-Saharan African countries with stable malaria around 15-30% of deaths, not 53% as in the Zanzibar hospitals between 1999 and 2003, are due to malaria [24].\nThe child survival Millennium Development Goal is to reduce all-cause under-5 mortality by two-thirds by 2015 compared to 1990 baseline levels [25,26]. The new Roll Back Malaria Partnership goal is the near elimination of malaria deaths by 2015 [27]. Based on mortality estimates from 1990 to 2009, the under-5 mortality rates are now declining in all regions of the world, with declines in sub-Saharan Africa having accelerated in the 2000-2010 decade. However, a further acceleration in these declines will be needed to meet the MDG child survival goal [28].", "Effective malaria control measures can dramatically reduce the burden of malaria and anaemia on the health system. Ensuing reductions in all-cause under-5 mortality as a result of malaria control could play a key role in achieving MDG4 on improving child survival by 2015. Aggressive malaria control should be raised to the highest levels of public health priority in Africa and globally.", "The authors declare that they have no competing interests.", "MA carried out the study design, data collection, analysis and drafting and overall coordination of writing up-of the paper. AWA helped in data collection and editing; and field supervision. ASA helped in overall field coordination, reviewing the paper and programme interventions in Zanzibar. FM and AB contributed to data collection and validation, analysis and review of the paper. RN, MW and SK participated in field coordination, data collection and review of the paper. MO helped in the overall study design, data analysis and drafting of the paper. EK and MH helped in the statistical analysis and review of the paper. RK and DL provided critical review of the paper. UA and MC provided over all technical guide and reviewed the paper. All authors read and approved the final manuscript." ]
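The methods texts above state that changes in proportions were compared with a Chi-squared test, falling back to Fisher's exact test whenever any expected cell count was below 5. The snippet below is a small illustration of that decision rule for a 2x2 table; the counts in the example are hypothetical, not study data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
from scipy.stats.contingency import expected_freq

def compare_2x2(table):
    """Return (test_name, p_value) for a 2x2 table using the rule described above."""
    table = np.asarray(table)
    if (expected_freq(table) < 5).any():
        _, p = fisher_exact(table)
        return "fisher_exact", p
    _, p, _, _ = chi2_contingency(table)
    return "chi2", p

# e.g. malaria deaths vs. deaths from other causes in 1999 and in 2008 (invented numbers)
print(compare_2x2([[46, 54], [3, 22]]))
```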
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Sample of hospitals", "Indicators", "Statistical analysis of time trends", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "Before scale-up of malaria control activities in 2004-2005, malaria was the leading cause of illness and death in Zanzibar, accounting for more than 50% of in-patient cases and deaths in hospitals and nearly 40% of all out-patient consultations. Endemicity was classified as moderate to high, with transmission occurring for more than 6 months per year [1,2]. In 2003, malaria prevalence in sampled 60 clusters of Shehia (the smallest administrative unit with estimated population of 3,000) ranged between 9% and 50% in children under five years of age and other age groups, with Plasmodium falciparum the predominant malaria species [2]. In the year 2000, chloroquine, the first-line drug then, had a treatment failure of 60% at 14 days follow up [3].\nBetween end-2003 to 2006, in response to the enormous malaria burden and to the inadequacy of chloroquine treatment, the Zanzibar Ministry of Health and Social Welfare and partners initiated a concerted effort to accelerate malaria control activities. With support of the Global Fund to Fight AIDS, Tuberculosis and Malaria (starting in 2005), the USA President's Malaria Initiative, the Italian Cooperation through WHO, and other partners, the government scaled up four key malaria control interventions: deployment of artemisinin-based combination therapy (ACT), distribution of insecticide-treated nets (ITNs), including long-lasting insecticidal nets (LLINs), indoor residual spraying (IRS) and intermittent preventive treatment with sulphadoxine-pyrimethamine for pregnant women (SP-IPTp).\nIn September 2003, artemether-lumefantrine was introduced in all public health facilities, free-of-charge, as the first-line treatment for uncomplicated malaria, regardless of age (Zanzibar Ministry of Health and Social Welfare, unpublished report, 2002), while SP-IPTp was initiated in 2004. From 2004, ITNs were distributed for free, to pregnant women and children under five years of age attending antenatal clinics in public health facilities; LLINs followed by mid-2005. IRS with lambdacyhalothrin was implemented since June-July 2006, covering nearly all households (one round in 2006, two rounds in 2007 and one round in 2008), with the exception of the main town's centre. In May 2007, 74% children under-5 and 73% pregnant women used an ITN. In addition, nearly 95% of houses had been sprayed with an insecticide during the previous six months [4].\nThis study reports on the impact of the interventions' scale up in six out of seven Zanzibar's in-patient health facilities, based on analysis of time trends in malaria-related and overall child and adult morbidity and mortality over a 10-year period (1999-2008). Results are discussed in light of international goals and targets for reducing malaria and improving child health at a population level.", "[SUBTITLE] Sample of hospitals [SUBSECTION] Out of Zanzibar's 126 total health facilities, seven are in-patient facilities; this analysis covers six of them: Chakechake, Mkoani and Wete district hospitals; and the three large primary health care centres with in-patient services (Kivunge and Makunduchi on Unguja island, and Micheweni centre on Pemba island). 
Zanzibar's main referral hospital was excluded because surveillance data for the past years were not available.\n[SUBTITLE] Indicators [SUBSECTION] Monthly records of outpatient cases, inpatient cases and inpatient deaths, stratified by age <5 years old and ≥5 years old, were coded according to recorded diagnosis, i.e. malaria, anaemia and 'other conditions'.\nInpatient malaria cases and deaths include those that were parasitologically confirmed and those admitted based on clinical judgement. For malaria outpatient cases, analyses were done both for cases with a confirmed parasitological diagnosis and for the total number of cases, including those diagnosed presumptively. To facilitate interpretation of trends in outpatient cases, we computed the slide positivity rate by dividing the number of slides positive for malaria by the total number of slides examined.\nCases and deaths attributed to anaemia, to which malaria is an important contributor in high-endemic settings [5], were analyzed as a proxy indicator of malaria-related burden.\nPossible confounding of the interventions' effect by external, non-programme-related determinants was explored by analysing the time trend in monthly rainfall and temperature, the principal short-term predictors of malaria transmission intensity and burden [6].\n[SUBTITLE] Statistical analysis of time trends [SUBSECTION] Impact was evaluated by comparing 2008 indicators with those in the pre-intervention period 1999-2003 using a segmented regression model of an interrupted time series [7]. Impact is defined as the ratio between the observed and the counterfactual estimated indicator level in 2008, the latter calculated assuming a (hypothetical) continuation of the pre-intervention time trend throughout 2008, thereby adjusting for time trends occurring independently of the intervention's effect. 
The 95% confidence intervals (CI) around impact estimates were computed based on CIs around regression coefficient estimates.\nChanges over time in proportions of cases or deaths due to malaria or other causes were evaluated by the Chi-squared test. Fisher's exact test was used where the chi-squared test was not suitable due to small numbers, i.e. where the expected value in any of the cells of a contingency table was below 5.", "Out of Zanzibar's 126 total health facilities, seven are in-patient facilities; this analysis covers six of them: Chakechake, Mkoani and Wete district hospitals; and the three large primary health care centres with in-patient services (Kivunge and Makunduchi on Unguja island, and Micheweni centre on Pemba island). Zanzibar's main referral hospital was excluded because surveillance data for the past years were not available.", "Monthly records of outpatient cases, inpatient cases and inpatient deaths, stratified by age <5 years old and ≥5 years old, were coded according to recorded diagnosis, i.e. malaria, anaemia and 'other conditions'.\nInpatient malaria cases and deaths include those that were parasitologically confirmed and those admitted based on clinical judgement. For malaria outpatient cases, analyses were done both for cases with a confirmed parasitological diagnosis and for the total number of cases, including those diagnosed presumptively. To facilitate interpretation of trends in outpatient cases, we computed the slide positivity rate by dividing the number of slides positive for malaria by the total number of slides examined.\nCases and deaths attributed to anaemia, to which malaria is an important contributor in high-endemic settings [5], were analyzed as a proxy indicator of malaria-related burden.\nPossible confounding of the interventions' effect by external, non-programme-related determinants was explored by analysing the time trend in monthly rainfall and temperature, the principal short-term predictors of malaria transmission intensity and burden [6].", "Impact was evaluated by comparing 2008 indicators with those in the pre-intervention period 1999-2003 using a segmented regression model of an interrupted time series [7]. Impact is defined as the ratio between the observed and the counterfactual estimated indicator level in 2008, the latter calculated assuming a (hypothetical) continuation of the pre-intervention time trend throughout 2008, thereby adjusting for time trends occurring independently of the intervention's effect. The 95% confidence intervals (CI) around impact estimates were computed based on CIs around regression coefficient estimates.\nChanges over time in proportions of cases or deaths due to malaria or other causes were evaluated by the Chi-squared test. Fisher's exact test was used where the chi-squared test was not suitable due to small numbers, i.e. where the expected value in any of the cells of a contingency table was below 5.",
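The segmented (interrupted time series) regression and counterfactual-ratio impact measure described above can be illustrated with a short, self-contained sketch. This is not the authors' analysis code: the monthly counts, variable names and exact model form (a log-linear trend fitted to the 1999-2003 baseline and projected through 2008, consistent with the "segmented log-linear regression" mentioned later in the Discussion) are illustrative assumptions only.

```python
# Hedged sketch of a segmented (interrupted time series) log-linear regression,
# following the approach described in the Methods; data and names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Illustrative monthly malaria admission counts, Jan 1999 - Dec 2008 (120 months).
months = pd.date_range("1999-01", periods=120, freq="MS")
t = np.arange(120)                       # time index in months
baseline = t < 60                        # 1999-2003 = pre-intervention period
true_counts = np.where(baseline, 400, 400 * np.exp(-0.03 * (t - 60)))
counts = rng.poisson(true_counts).clip(min=1)

# Fit a log-linear trend to the pre-intervention months only.
X_pre = sm.add_constant(t[baseline])
fit = sm.OLS(np.log(counts[baseline]), X_pre).fit()

# Counterfactual: project the pre-intervention trend through 2008 (last 12 months).
post_2008 = t >= 108
X_2008 = sm.add_constant(t[post_2008])
pred = fit.get_prediction(X_2008)
counterfactual = np.exp(pred.predicted_mean).sum()
cf_lo, cf_hi = np.exp(pred.conf_int()).sum(axis=0)  # crude CI from coefficient uncertainty

observed = counts[post_2008].sum()

# Impact = relative reduction of the observed 2008 level versus the counterfactual.
impact = 1 - observed / counterfactual
impact_lo, impact_hi = 1 - observed / cf_lo, 1 - observed / cf_hi
print(f"estimated impact: {impact:.0%} (approx. CI {impact_lo:.0%} to {impact_hi:.0%})")
```

Fitting the trend on the log scale keeps the projected counterfactual positive and expresses the impact as a relative reduction, which is how the 2008-versus-baseline percentages in Table 1 are framed.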
"Malaria in-patient cases were fairly stable between 1999 and 2002 but declined gradually thereafter, both among children under-5 (Figure 1a) and in the older age group (Figure 1b). A similar decline was seen for anaemia cases, although among children under-five this decline had already started around the year 2000. In contrast, hospitalizations due to other causes tended to increase throughout the evaluation period. This resulted in a stable number of total hospitalizations in the older age group (Figure 1d). Among children under-five, the number of hospitalizations decreased, while the percentage of malaria cases decreased from 54% in 1999 to 17% in 2008 (Figure 1c).\nIn-patient cases due to malaria, anaemia and other causes in children under 5 years and >5 years old, 6 hospitals in Zanzibar.\nHospital deaths showed similar time patterns across causes for both age groups (Figure 2a and 2b). Among under-fives, the proportion of all-cause deaths due to malaria fell from 46% in 1999 to 12% in 2008 (p < 0.01) (Figure 2c) and that due to anaemia from 26% to 4% (p < 0.01). Similarly, in the older age group the proportion of malaria deaths decreased from 52% in 1999 to 4% in 2008 (p < 0.01) (Figure 2d) and that due to anaemia from 8% to 4% (p < 0.05). Deaths due to other causes did not change or even increased, with no consistent difference in the time trend before and after 2003.\nIn-patient deaths due to malaria, anaemia and other causes in children under 5 years and >5 years old, 6 hospitals in Zanzibar.\nWithin each of the six individual health facilities, similar declines were observed in the number of malaria in-patient cases (Figure 3a), although the onset of malaria-related reductions varied among facilities. In Chakechake hospital, malaria in-patient cases peaked in 2004 and then decreased 3-fold by 2007-2008 as compared to the 1999 level. Kivunge and Makunduchi showed similar but lower peaks in 2004 with subsequent declines to below the 1999 level. In the other 3 health facilities, no major peaks were observed: malaria cases were stable initially, declining from 2003 onwards in Wete and Micheweni, and declining throughout the 10-year period in Mkoani. Malaria deaths also declined in each of the individual hospitals, starting in 2002-2003 and reaching numbers well below the 1999 levels by 2008 (Figure 3b).\nIn-patient (a) cases and (b) deaths due to malaria, children under-5 years, in individual hospitals.\nMalaria out-patient cases among children under-5 totalled 24,000 in 1999, of which 9,000 were microscopically confirmed (Figure 4a). These numbers declined from 2003 onwards, reaching 5,000 total cases of which only 31 were confirmed in 2008. The decline was more marked for confirmed cases and this was paralleled by a similar decrease in the slide positivity rate (Figure 4c). Numbers of slides examined also fell, in line with the number of suspected cases; both of these indicators decreased around three-fold between 1999 and 2008 (Figure 4d). A similar pattern was observed for the older age group (Figure 4b). 
The number of inpatient anaemia cases declined in a similar manner, with about a 2-fold decrease by 2008 compared to 1999-2001 levels.\nOut-patient cases due to malaria, anaemia and other causes, (a) children under-5 and (b) older ages; (c) malaria slide positivity rates and (d) numbers of slides examined for malaria, 6 hospitals in Zanzibar.\nAdjusting for pre-intervention time trends, malaria-attributed in-patient and out-patient cases and deaths decreased by 76% or more by 2008 compared to 1999-2003, in both age groups (Table 1; p < 0.025 for each indicator). In-patient anaemia cases decreased by 85% and 90% among under-fives and over-fives, respectively (p < 0.025), with non-significant decreases for anaemia deaths (by 38% in under-fives and 66% for older ages). In contrast, no difference (and sometimes an increase) was observed in the number of cases and deaths due to other causes, without any consistent difference in time trends before and after 2003 (Table 1).\nMalaria and non-malaria cases and deaths in 2008 compared to the pre-intervention period 1999-2003, in six out of seven hospitals in Zanzibar\nNotes:\nPositive values indicate a decline; negative percentages indicate an increase from 1999-2003 to 2008.\n‡ Significant difference. A 95% CI not including the 0% value indicates that the difference from the pre-intervention period to 2008 was statistically significant (p < 0.025).\nRainfall peaked in March-May and October-November (Figure 5). Malaria and anaemia inpatient cases varied seasonally with yearly peaks in May-June and November-December. From 2006 onwards, as malaria and anaemia cases decreased, the seasonal peaks in these hospital indicators became less pronounced.\nSeasonal patterns in malaria and anaemia in-patient cases, children under-5, and monthly rainfall.", "This rapid impact evaluation shows that scale-up of ACT as first-line treatment of malaria, combined with vector control using ITNs/LLINs and IRS, resulted in a dramatic decline in the malaria burden. Within four years of intervention scale-up, malaria deaths, hospitalizations, laboratory-confirmed outpatient cases and slide positivity rates fell by 76% or more, both in children under-5 years and older age groups.\nUnconfirmed (suspected) outpatient cases and numbers of slides examined also fell, but to a lesser extent (two- to three-fold), and starting later than laboratory-confirmed indicators. This suggests that the decline in confirmed malaria cases reflects a real reduction in malaria-attributable burden, and not an artefact related to changing laboratory testing patterns. The lesser decline in suspected malaria cases compared to parasitologically confirmed cases or malaria hospitalizations and deaths confirms the notion that in endemic settings a large proportion of suspected cases is not due to malaria. This supports the WHO's 2010 recommendation for parasitological confirmation of all cases, using either microscopy or RDT, as a condition for ACT-based treatment, including in children under-five years in high-endemic Africa for whom presumptive treatment was recommended until 2010 - in an effort to contain costs of ACT-based treatment and to preserve ACT efficacy [8].\nHospitalizations and deaths due to anaemia fell in parallel with the malaria-attributed events, confirming the importance of malaria as an underlying cause, especially in children under-five years. 
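The shifts in the cause-specific share of deaths reported above (for example, malaria falling from 46% to 12% of under-five hospital deaths between 1999 and 2008) were tested with the chi-squared test, with Fisher's exact test as the fallback when expected cell counts fell below 5. A minimal sketch using scipy is shown below; the 2x2 counts are invented for illustration and are not the study's data.

```python
# Hedged sketch: chi-squared test of a change in the proportion of deaths
# attributed to malaria between two years, with Fisher's exact test as the
# fallback for small expected counts. The counts below are illustrative only.
from scipy.stats import chi2_contingency, fisher_exact

def test_proportion_change(malaria_y1, other_y1, malaria_y2, other_y2):
    table = [[malaria_y1, other_y1],
             [malaria_y2, other_y2]]
    chi2, p, dof, expected = chi2_contingency(table)
    if (expected < 5).any():
        # Expected count below 5 in some cell: use Fisher's exact test instead.
        _, p = fisher_exact(table)
        return "Fisher exact", p
    return "Chi-squared", p

# Example with made-up counts (46% vs 12% of deaths due to malaria).
test_used, p_value = test_proportion_change(malaria_y1=92, other_y1=108,
                                            malaria_y2=24, other_y2=176)
print(test_used, f"p = {p_value:.4f}")
```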
These observations are consistent with previous studies that showed steady reductions in childhood anaemia in response to malaria control [5], and with the observed correlation between rates of malaria and of blood transfusions in young children [9]. Anaemia represented a much larger proportion of hospital deaths than hospital admissions, e.g. among under-fives in 1999, 26% and 8.5%, respectively. This difference illustrates the poor prognosis of severe anaemia cases in Zanzibar, which is possibly due to varying availability and quality of blood transfusion services and late presentation of patients [10,11].\nThe decline in malaria-related burden started around 2003, when ACT was introduced, although for in-patient cases and deaths in children under five years it started slightly earlier, around 2002, possibly reflecting the increasing use, since 2001, of sulphadoxine-pyrimethamine as second-line treatment. In addition, ongoing socio-economic development and urbanization may have led to better health over time, including a lower malaria burden, before and during the intensified malaria control [12]. However, impact estimates using segmented log-linear regression (Table 1) were adjusted for such trends during the pre-intervention period, and the decrease in malaria was observed against an increase in non-malaria attendances - which may be the result of improved health services and access in recent years.\nAcross all 147 out-patient peripheral health facilities in Zanzibar (primary health care units), confirmed malaria cases fell by 73% between 1999 and 2008 while the slide positivity rate fell from 36% to 1.5% (Zanzibar Malaria Control Programme, unpublished, 2010). Over the same period, the slide testing rate of all outpatient consultations in the peripheral facilities increased from 6% to 30% as a result of RDT roll-out.\nIn the weekly case-based surveillance system, which covers approximately one-third of health facilities, the slide positivity rate was 3% in peak malaria months (April-June) during 2008 [13]. Reductions in parasitologically confirmed malaria out-patient cases furthermore fit with results from a nation-wide population survey in May 2007, showing parasite prevalence of 0.4% in children <5 years old and 0.9% in all ages [4].\nA 2007 study in North A district of Unguja using surveillance records in 13 public health facilities found a 52% decline in under-five mortality in 2006 compared to 2003. Similarly, malaria-related admissions, blood transfusions, and malaria-attributed mortality decreased significantly by 77%, 67% and 75%, respectively, between 2002 and 2005 in children under five. While climatic conditions favourable for malaria transmission persisted throughout the observational period, additional distribution of LLINs in early 2006 resulted in a 10-fold reduction of malaria parasite prevalence [14].\nIn response to its current low endemicity, Zanzibar has since 2008 shifted its malaria surveillance system to weekly reporting of laboratory-confirmed malaria cases. A Malaria Early Epidemic Detection System (MEEDS) is now operational in 52 health facilities, with the plan to expand it to all facilities by 2011. The next step for enhancing disease surveillance should be reporting and investigation of individual in-patient cases and deaths, which at low transmission levels represent a failure of the health system in adequately treating both uncomplicated and severe malaria cases. 
Future reporting of individual case records will also enable more precise geographical tracking of remaining transmission foci or \"hotspots\" and resurgences, as well as identification of risk groups and factors.\nDespite Zanzibar's enormous success in reducing malaria, the risk of an explosive resurgence is still very real. This is not the first time that the islands have achieved such a dramatic decline in malaria burden. In the 1970s malaria had been reduced to very low levels through IRS with dichlorodiphenyltrichloroethane (DDT), only to resurge again once partner funds decreased and IRS was stopped [15]. Aggressive malaria control activities and adequate funding therefore need to be maintained to keep the risk of malaria resurgence near to zero.\nSubstantial decreases in malaria burden have also been reported by other high-endemic sub-Saharan African countries that achieved high coverage of ACT, LLINs and/or IRS. In Zambia, as artemether-lumefantrine was introduced as first-line treatment, and LLINs were distributed over 2002-2008, hospital admissions and deaths, and to a lesser extent outpatient cases from malaria and anaemia, decreased substantially. Declines were more marked in children under five than among older ages, and time trends were consistent across the indicators. Repeated household surveys demonstrated parallel decreases in parasite prevalence and anaemia among children under five years. In addition, among children under five years, both all-cause mortality from household surveys and hospital-recorded deaths fell by half, in a markedly similar time pattern [16]. Importantly, these downward trends were followed by a levelling off of malaria admissions and deaths in 2009 - including a major resurgence in two provinces, where parasite prevalence among children rose again according to a 2010 survey [17,18]. This rebound, possibly related to decay of insecticide and physical deterioration of ITNs distributed several years before, underscores the importance, for Zanzibar as well, of maintaining malaria control, surveillance and funding to prevent a similar resurgence.\nIn Bioko Island, Equatorial Guinea, four years after achieving high intervention coverage, repeated household surveys found a decrease in all-cause under-five mortality of 60% from 2004 to 2008, and reductions of 70% in parasite prevalence, 90% in anaemia and 60% in reported fevers among children under-5 [19]. In Rwanda, within two years of nationwide implementation of LLIN distribution and ACT as first-line treatment, in-patient malaria cases and deaths in nine hospitals and 10 health centres sampled throughout 10 districts fell by 55% and 67% in children under-five, respectively. Non-malaria cases and deaths remained stable or increased [20]. Similarly, in Ethiopia, in a convenience sample of public facilities with relatively complete data, the in-patient case and death burden decreased by 73% and 62%, respectively, after the first two years of scaled-up LLIN and ACT usage [20]. In Sao Tome and Principe, after three years of intensified interventions with IRS, LLINs, ACT and SP-IPTp, malaria-attributed outpatient consultations, hospitalizations, and deaths decreased by more than 85%, 80%, and 95%, respectively, in all age groups [21]. In The Gambia, a similar retrospective analysis at four sites showed a significant decline between 2003 and 2007 in the malaria slide positivity rate, malaria admissions and malaria-related deaths. 
The same study also demonstrated a significant increase in mean haemoglobin concentration for all-cause admissions (12 g/l) and in the age of paediatric malaria admissions (from 3.9 to 5.6 years) [9].\nWhen interpreting the results presented, the limitations of health facility-based studies should not be underestimated [22]. Importantly, the use of a five-year period (1999-2003) as the reference period was not necessarily long enough to provide a stable pre-intervention baseline, given the historical resurgences of malaria in Zanzibar [23]. In addition, a longer post-intervention period should ideally be used. Point estimates of impact based on a five-year baseline and a one-year post-intervention period must therefore be regarded as indicative rather than precise effect sizes.\nIt should also be considered that time trends observed in health facility statistics may not reflect trends in malaria burden at the population level, if the completeness of case or death notifications and/or access to health facilities changed over the evaluation period, and in particular if the notification fraction changed differently for malaria versus other causes of attendance. As a result, the decrease in malaria burden may be of lesser magnitude than that observed in the hospitals sampled. Of note, the reduction in all-cause under-five mortality may be lower at a population level than shown here from hospital data, because in sub-Saharan African countries with stable malaria transmission, malaria typically accounts for around 15-30% of deaths, rather than the 53% observed in the Zanzibar hospitals between 1999 and 2003 [24].\nThe child survival Millennium Development Goal is to reduce all-cause under-5 mortality by two-thirds by 2015 compared to 1990 baseline levels [25,26]. The new Roll Back Malaria Partnership goal is the near elimination of malaria deaths by 2015 [27]. Based on mortality estimates from 1990 to 2009, under-5 mortality rates are now declining in all regions of the world, with declines in sub-Saharan Africa having accelerated in the 2000-2010 decade. However, a further acceleration in these declines will be needed to meet the MDG child survival goal [28].", "Effective malaria control measures can dramatically reduce the burden of malaria and anaemia on the health system. Ensuing reductions in all-cause under-5 mortality as a result of malaria control could play a key role in achieving MDG4 on improving child survival by 2015. Aggressive malaria control should be raised to the highest levels of public health priority in Africa and globally.", "The authors declare that they have no competing interests.", "MA carried out the study design, data collection, analysis and drafting, and overall coordination of writing up the paper. AWA helped in data collection, editing and field supervision. ASA helped in overall field coordination, reviewing the paper and programme interventions in Zanzibar. FM and AB contributed to data collection and validation, analysis and review of the paper. RN, MW and SK participated in field coordination, data collection and review of the paper. MO helped in the overall study design, data analysis and drafting of the paper. EK and MH helped in the statistical analysis and review of the paper. RK and DL provided critical review of the paper. UA and MC provided overall technical guidance and reviewed the paper. All authors read and approved the final manuscript." ]
[ null, "methods", null, null, null, null, null, null, null, null ]
[]
Effects of rehabilitative interventions on pain, function and physical impairments in people with hand osteoarthritis: a systematic review.
21332991
Hand osteoarthritis (OA) is associated with pain, reduced grip strength, loss of range of motion and joint stiffness leading to impaired hand function and difficulty with daily activities. The effectiveness of different rehabilitation interventions on specific treatment goals has not yet been fully explored. The objective of this systematic review is to provide evidence based knowledge on the treatment effects of different rehabilitation interventions for specific treatment goals for hand OA.
INTRODUCTION
A computerized literature search of Medline, the Cumulative Index to Nursing and Allied Health Literature (CINAHL), ISI Web of Science, the Physiotherapy Evidence Database (PEDro) and SCOPUS was performed. Studies that had an evidence level of 2b or higher and that compared a rehabilitation intervention with a control group and assessed at least one of the following outcome measures - pain, physical hand function or other measures of hand impairment - were included. The eligibility and methodological quality of trials were systematically assessed by two independent reviewers using the PEDro scale. Treatment effects were calculated using standardized mean difference and 95% confidence intervals.
METHODS
Ten studies, of which six were of higher quality (PEDro score >6), were included. The rehabilitation techniques reviewed included three studies on exercise, two studies each on laser and heat, and one study each on splints, massage and acupuncture. One higher quality trial showed a large positive effect of 12-month use of a night splint on hand pain, function, strength and range of motion. Exercise had no effect on hand pain or function although it may be able to improve hand strength. Low level laser therapy may be useful for improving range of motion. No rehabilitation interventions were found to improve stiffness.
RESULTS
There is emerging high quality evidence to support that rehabilitation interventions can offer significant benefits to individuals with hand OA. A summary of the higher quality evidence is provided to assist with clinical decision making based on current evidence. Further high-quality research is needed concerning the effects of rehabilitation interventions on specific treatment goals for hand OA.
CONCLUSIONS
[ "Hand", "Humans", "Occupational Therapy", "Osteoarthritis", "Pain", "Physical Therapy Modalities", "Recovery of Function" ]
3241372
null
null
null
null
Results
[SUBTITLE] Study selection [SUBSECTION] A flow diagram, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [21], of the results of the study selection procedure is presented in Figure 1. The search strategy yielded 629 articles. After duplications were deleted, 430 articles remained. Of these, 20 studies met the inclusion criteria [22-41]. After the full-text versions of these papers were reviewed, 10 studies were selected for this systematic review [22,24,26,27,30,31,33-35,39]. Reasons for exclusion included lack of a control group (n = 8) [23,25,32,36-38,40,41], language other than English (n = 1) [28], and not RCT or quasi-RCT (n = 1) [29]. Flow diagram of the results of the study selection procedure, which is in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. OA, osteoarthritis; RCT, randomized controlled trial. A flow diagram, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [21], of the results of the study selection procedure is presented in Figure 1. The search strategy yielded 629 articles. After duplications were deleted, 430 articles remained. Of these, 20 studies met the inclusion criteria [22-41]. After the full-text versions of these papers were reviewed, 10 studies were selected for this systematic review [22,24,26,27,30,31,33-35,39]. Reasons for exclusion included lack of a control group (n = 8) [23,25,32,36-38,40,41], language other than English (n = 1) [28], and not RCT or quasi-RCT (n = 1) [29]. Flow diagram of the results of the study selection procedure, which is in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. OA, osteoarthritis; RCT, randomized controlled trial. [SUBTITLE] Study characteristics [SUBSECTION] Details of the 10 eligible studies are presented in Tables 1 and 2. Of these studies, seven were RCTs, two were crossover trials, and one was a quasi-RCT. Five studies involved patients with both CMC joint and interphalangeal (IP) joint OA, one study involved patients with OA of the CMC joint only, while the remainder did not report the specific hand joints involved. Diagnosis of hand OA was based on clinical or radiologic criteria (or both) in five studies and on clinical criteria only in three studies; two studies did not clearly state their method of diagnosing hand OA. The age of participants ranged from 56 to 82 years, which is representative of adults with OA as reported in cohort studies [42,43]. Six different rehabilitation interventions were investigated (Table 2): one study investigated splints [31], two investigated laser therapy [22,24], two investigated heat therapy (using infrared radiation from a lamp or a heated tiled stove) [35,39], three investigated exercise programs [30,33,34], one investigated massage [27], and one investigated acupuncture [26]. Treatment durations ranged from 2 to 52 weeks, with a mean (standard deviation) of 10.9 (15.1) weeks. All studies, except one [39], reported the outcome measures immediately after treatment. Two studies reported a longer-term follow-up, with durations ranging from 2 weeks to 1 year [24,31]. Study design and participant characteristics CMC, carpometacarpal; F, female; IP, interphalageal; LOE, level of evidence (Oxford); M, male; n, number; NS, not stated; OA, osteoarthritis; RCT, randomized controlled trial; SD, standard deviation. 
Description of study interventions and outcome measures ROM refers to active range of motion of carpometacarpal (CMC), metacarpophalangeal (MCP), and interphalangeal (IP) of the thumb and MCP, distal interphalangeal (DIP), and proximal interphalangeal (PIP) joint movements of the 2nd-5th fingers. AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; HAQ, Health Assessment Questionnaire; NSAID, nonsteroidal anti-inflammatory drug; OA, osteoarthritis; VAS, visual analogue scale. Details of the 10 eligible studies are presented in Tables 1 and 2. Of these studies, seven were RCTs, two were crossover trials, and one was a quasi-RCT. Five studies involved patients with both CMC joint and interphalangeal (IP) joint OA, one study involved patients with OA of the CMC joint only, while the remainder did not report the specific hand joints involved. Diagnosis of hand OA was based on clinical or radiologic criteria (or both) in five studies and on clinical criteria only in three studies; two studies did not clearly state their method of diagnosing hand OA. The age of participants ranged from 56 to 82 years, which is representative of adults with OA as reported in cohort studies [42,43]. Six different rehabilitation interventions were investigated (Table 2): one study investigated splints [31], two investigated laser therapy [22,24], two investigated heat therapy (using infrared radiation from a lamp or a heated tiled stove) [35,39], three investigated exercise programs [30,33,34], one investigated massage [27], and one investigated acupuncture [26]. Treatment durations ranged from 2 to 52 weeks, with a mean (standard deviation) of 10.9 (15.1) weeks. All studies, except one [39], reported the outcome measures immediately after treatment. Two studies reported a longer-term follow-up, with durations ranging from 2 weeks to 1 year [24,31]. Study design and participant characteristics CMC, carpometacarpal; F, female; IP, interphalageal; LOE, level of evidence (Oxford); M, male; n, number; NS, not stated; OA, osteoarthritis; RCT, randomized controlled trial; SD, standard deviation. Description of study interventions and outcome measures ROM refers to active range of motion of carpometacarpal (CMC), metacarpophalangeal (MCP), and interphalangeal (IP) of the thumb and MCP, distal interphalangeal (DIP), and proximal interphalangeal (PIP) joint movements of the 2nd-5th fingers. AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; HAQ, Health Assessment Questionnaire; NSAID, nonsteroidal anti-inflammatory drug; OA, osteoarthritis; VAS, visual analogue scale. [SUBTITLE] Methodological quality [SUBSECTION] The methodological quality of included studies (Table 3) ranged from 3 to 10 points out of a maximum of 10 points. Six trials were considered to have relatively high quality [22,24,26,31,34,35] and four trials lower quality [27,30,33,39]. One study, investigating laser therapy [24], met the criteria of blinding therapists and participants. Concealed allocation and the use of an intention-to-treat analysis were other criteria not met in most studies. Quality ratings of included studies according to the PEDro methodology scoring system ITT, intention-to-treat; PEDro, Physiotherapy Evidence Database. The methodological quality of included studies (Table 3) ranged from 3 to 10 points out of a maximum of 10 points. Six trials were considered to have relatively high quality [22,24,26,31,34,35] and four trials lower quality [27,30,33,39]. 
One study, investigating laser therapy [24], met the criteria of blinding therapists and participants. Concealed allocation and the use of an intention-to-treat analysis were other criteria not met in most studies. Quality ratings of included studies according to the PEDro methodology scoring system ITT, intention-to-treat; PEDro, Physiotherapy Evidence Database. [SUBTITLE] Results of studies [SUBSECTION] The treatment effects (SMD with 95% CI) of the six different rehabilitative interventions on the outcomes of pain, self-reported physical function, strength, ROM, and self-reported stiffness, immediately after treatment and at the longest follow-up time point, are presented in Table 4. Treatment effects from the higher-quality studies on each of the outcomes are shown in Figures 2, 3, 4, 5 and 6. Most studies focused on interventions to improve pain and strength. Fewer studies investigated the effects on improving function, which is an important goal in clinical practice. Seven studies reported sufficient data to calculate the SMD with its 95% CI. For the remaining three studies, the author or authors were contacted, resulting in additional information from which to calculate the SMD in one of these three studies. The following sections will outline the treatment effects of rehabilitation strategies for each of the included outcomes. Treatment effects of rehabilitation interventions on study outcomes aSignificant treatment effects. ADL, activities of daily living; AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; Dy, dynamometer(s); HAQ, Health Assessment Questionnaire; KI, Kapandji index (thumb opposition); NA, standardized mean difference not estimable; NS, measurement tool not stated; PEDro, Physiotherapy Evidence Database; S, sphygmomanometer; SMD, standardized mean difference; V, vigorimeter; VAS, visual analogue scale; VITAS, visual analogue scale anchored with five faces. Treatment effects of the higher-quality studies on pain. CI, confidence interval; SMD, standardized mean difference. Treatment effects of the higher-quality studies on function. CI, confidence interval; SMD, standardized mean difference. Treatment effects of the higher-quality studies on strength. CI, confidence interval; SMD, standardized mean difference. Treatment effects of the higher-quality studies on range of motion (ROM). CI, confidence interval; SMD, standardized mean difference. Treatment effects of the higher-quality studies on stiffness. CI, confidence interval; SMD, standardized mean difference. The treatment effects (SMD with 95% CI) of the six different rehabilitative interventions on the outcomes of pain, self-reported physical function, strength, ROM, and self-reported stiffness, immediately after treatment and at the longest follow-up time point, are presented in Table 4. Treatment effects from the higher-quality studies on each of the outcomes are shown in Figures 2, 3, 4, 5 and 6. Most studies focused on interventions to improve pain and strength. Fewer studies investigated the effects on improving function, which is an important goal in clinical practice. Seven studies reported sufficient data to calculate the SMD with its 95% CI. For the remaining three studies, the author or authors were contacted, resulting in additional information from which to calculate the SMD in one of these three studies. 
The following sections will outline the treatment effects of rehabilitation strategies for each of the included outcomes. Treatment effects of rehabilitation interventions on study outcomes aSignificant treatment effects. ADL, activities of daily living; AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; Dy, dynamometer(s); HAQ, Health Assessment Questionnaire; KI, Kapandji index (thumb opposition); NA, standardized mean difference not estimable; NS, measurement tool not stated; PEDro, Physiotherapy Evidence Database; S, sphygmomanometer; SMD, standardized mean difference; V, vigorimeter; VAS, visual analogue scale; VITAS, visual analogue scale anchored with five faces. Treatment effects of the higher-quality studies on pain. CI, confidence interval; SMD, standardized mean difference. Treatment effects of the higher-quality studies on function. CI, confidence interval; SMD, standardized mean difference. Treatment effects of the higher-quality studies on strength. CI, confidence interval; SMD, standardized mean difference. Treatment effects of the higher-quality studies on range of motion (ROM). CI, confidence interval; SMD, standardized mean difference. Treatment effects of the higher-quality studies on stiffness. CI, confidence interval; SMD, standardized mean difference. [SUBTITLE] Pain [SUBSECTION] The effects of all six rehabilitation interventions on pain were reported in eight of the 10 studies (Table 4). From the eight studies, six were graded as higher quality (greater than 6 on the PEDro scale). Of these higher-quality studies, only one study investigating long-term splint use was shown to have a positive treatment effect on improving pain when the visual analogue scale was used to measure outcome (Figure 2). In this study, Rannou and colleagues [31] found that 12 months of continued use of a night splint resulted in large improvements in pain (SMD = 4.24, 95% CI 3.52, 4.97). One lower-quality study demonstrated a smaller treatment effect of massage on improving pain (SMD = 1.18, 95% CI 0.26, 2.10) [29]. Although we could not calculate the SMD, the authors of the one trial of acupuncture reported no short-term pain-relieving effects (P = 1.0) [26]. The effects of all six rehabilitation interventions on pain were reported in eight of the 10 studies (Table 4). From the eight studies, six were graded as higher quality (greater than 6 on the PEDro scale). Of these higher-quality studies, only one study investigating long-term splint use was shown to have a positive treatment effect on improving pain when the visual analogue scale was used to measure outcome (Figure 2). In this study, Rannou and colleagues [31] found that 12 months of continued use of a night splint resulted in large improvements in pain (SMD = 4.24, 95% CI 3.52, 4.97). One lower-quality study demonstrated a smaller treatment effect of massage on improving pain (SMD = 1.18, 95% CI 0.26, 2.10) [29]. Although we could not calculate the SMD, the authors of the one trial of acupuncture reported no short-term pain-relieving effects (P = 1.0) [26]. [SUBTITLE] Self-reported hand function [SUBSECTION] The effects of all interventions, except massage, were investigated on hand function in six of the 10 studies (Table 4). From the six studies, five were graded as higher-quality studies. Of these higher-quality studies, a positive treatment effect could be calculated from one study. 
In this study [31], use of a splint resulted in a large improvement in hand function in both the short and long term as measured by the Cochin hand functional scale (SMD = 1.10 and 3.73, respectively) (Figure 3). Of the two studies from which we were unable to calculate SMD, a significantly higher proportion of patients reported improved function with a 3-month hand ROM exercise program and education about joint protection in comparison with those who received general OA education and use of non-slip matting to open jars (P < 0.05) [34]. However, no functional improvement was shown in another exercise trial that included both ROM and strengthening exercises [33]. Laser therapy [24] and heat treatment [35] had no effect on hand function as measured by the AUSCAN. Similarly, the trial on acupuncture reported no effect on function [26]. The effects of all interventions, except massage, were investigated on hand function in six of the 10 studies (Table 4). From the six studies, five were graded as higher-quality studies. Of these higher-quality studies, a positive treatment effect could be calculated from one study. In this study [31], use of a splint resulted in a large improvement in hand function in both the short and long term as measured by the Cochin hand functional scale (SMD = 1.10 and 3.73, respectively) (Figure 3). Of the two studies from which we were unable to calculate SMD, a significantly higher proportion of patients reported improved function with a 3-month hand ROM exercise program and education about joint protection in comparison with those who received general OA education and use of non-slip matting to open jars (P < 0.05) [34]. However, no functional improvement was shown in another exercise trial that included both ROM and strengthening exercises [33]. Laser therapy [24] and heat treatment [35] had no effect on hand function as measured by the AUSCAN. Similarly, the trial on acupuncture reported no effect on function [26]. [SUBTITLE] Strength [SUBSECTION] The effects of all interventions on hand strength were investigated in all 10 trials (Table 4). Six of these 10 studies were graded as higher-quality studies, and positive treatment effects could be calculated from two of the six studies (Figure 4). Improvements in hand strength, measured by means of an electronic dynamometer, were found in both the short and long term with the use of splinting in one study (SMD = 0.9 and 1.2, respectively) [31]. A large positive treatment effect (SMD = 4.5), measured by means of a vigorimeter, was found with the use of a home ROM exercise program [34]. Effect sizes could not be calculated in three studies [24,26,39]. Of these studies, one study [24] reported significant improvement in grip strength (P = 0.041) when measured with a dynamometer following laser therapy, one trial [39] did not measure between-group strength difference, and the other trial [26] drew no conclusion on the effect of acupuncture on hand strength. The effects of all interventions on hand strength were investigated in all 10 trials (Table 4). Six of these 10 studies were graded as higher-quality studies, and positive treatment effects could be calculated from two of the six studies (Figure 4). Improvements in hand strength, measured by means of an electronic dynamometer, were found in both the short and long term with the use of splinting in one study (SMD = 0.9 and 1.2, respectively) [31]. 
A large positive treatment effect (SMD = 4.5), measured by means of a vigorimeter, was found with the use of a home ROM exercise program [34]. Effect sizes could not be calculated in three studies [24,26,39]. Of these studies, one study [24] reported significant improvement in grip strength (P = 0.041) when measured with a dynamometer following laser therapy, one trial [39] did not measure between-group strength difference, and the other trial [26] drew no conclusion on the effect of acupuncture on hand strength. [SUBTITLE] Range of motion [SUBSECTION] The effects of three interventions (splints, laser, and exercise) on ROM were investigated by four studies (Table 4). Of these, three were graded as higher-quality studies, and treatment effects could be calculated from one of the three studies. A small negative effect (SMD = -0.4) in the short term and a large positive effect (SMD = 3.3) in the long term were found on hand ROM in one trial of splinting [31] (Figure 5). Of the two studies from which we were unable to calculate SMD, a significant improvement in ROM was reported for hand-strengthening exercises [30] whereas no overall improvement was reported for laser therapy [22,24], except CMC opposition (P = 0.011) [24]. The effects of three interventions (splints, laser, and exercise) on ROM were investigated by four studies (Table 4). Of these, three were graded as higher-quality studies, and treatment effects could be calculated from one of the three studies. A small negative effect (SMD = -0.4) in the short term and a large positive effect (SMD = 3.3) in the long term were found on hand ROM in one trial of splinting [31] (Figure 5). Of the two studies from which we were unable to calculate SMD, a significant improvement in ROM was reported for hand-strengthening exercises [30] whereas no overall improvement was reported for laser therapy [22,24], except CMC opposition (P = 0.011) [24]. [SUBTITLE] Stiffness [SUBSECTION] The effects of three interventions (laser, heat, and exercise) on self-reported stiffness using the AUSCAN scale were investigated in three studies, two of which were graded as higher-quality studies (Table 4). None of the interventions had positive treatment effects on hand joint stiffness (Figure 6). However, as stiffness was measured with only one item from the 15-item AUSCAN scale, it is possible that this tool did not capture the full dimension of stiffness. The effects of three interventions (laser, heat, and exercise) on self-reported stiffness using the AUSCAN scale were investigated in three studies, two of which were graded as higher-quality studies (Table 4). None of the interventions had positive treatment effects on hand joint stiffness (Figure 6). However, as stiffness was measured with only one item from the 15-item AUSCAN scale, it is possible that this tool did not capture the full dimension of stiffness. [SUBTITLE] Synthesis of results [SUBSECTION] A summary of current available evidence from higher-quality studies with positive treatment effects of rehabilitative interventions on pain, function, and physical impairments is provided in Table 5. 
Summary of the higher-quality evidence for treating impairments and function in individuals with hand osteoarthritis CHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; CMC, carpometacarpal; Dy, dynamometer(s); G, goniometer (s); HAQ, Health Assessment Questionnaire; IP, interphalangeal; KI, Kapandji index (thumb opposition); LOE, level of evidence (Oxford); NA, standardized mean difference not estimable; PEDro, Physiotherapy Evidence Database; SMD, standardized mean difference; V, vigorimeter VAS, visual analogue scale. A summary of current available evidence from higher-quality studies with positive treatment effects of rehabilitative interventions on pain, function, and physical impairments is provided in Table 5. Summary of the higher-quality evidence for treating impairments and function in individuals with hand osteoarthritis CHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; CMC, carpometacarpal; Dy, dynamometer(s); G, goniometer (s); HAQ, Health Assessment Questionnaire; IP, interphalangeal; KI, Kapandji index (thumb opposition); LOE, level of evidence (Oxford); NA, standardized mean difference not estimable; PEDro, Physiotherapy Evidence Database; SMD, standardized mean difference; V, vigorimeter VAS, visual analogue scale.
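The SMDs and 95% CIs quoted throughout these results (for example, SMD = 4.24, 95% CI 3.52 to 4.97 for pain with night splinting) follow the usual standardized-mean-difference construction: a between-group difference divided by a pooled standard deviation, interpreted against Cohen's thresholds (0.2-0.5 small, 0.5-0.8 moderate, at least 0.8 large). The sketch below is a generic illustration with hypothetical group summaries, not a re-analysis of any included trial.

```python
# Hedged sketch: standardized mean difference (Cohen's d with a pooled SD) and an
# approximate 95% CI, as commonly computed in systematic reviews. Inputs are
# hypothetical group summaries, not data from the trials reviewed above.
import math

def smd_with_ci(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    pooled_sd = math.sqrt(((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2)
                          / (n_tx + n_ctrl - 2))
    d = (mean_tx - mean_ctrl) / pooled_sd
    # Large-sample standard error of d (Hedges & Olkin approximation).
    se = math.sqrt((n_tx + n_ctrl) / (n_tx * n_ctrl) + d**2 / (2 * (n_tx + n_ctrl)))
    return d, (d - 1.96 * se, d + 1.96 * se)

def interpret(d):
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "moderate"
    if d >= 0.2:
        return "small"
    return "negligible"

# Hypothetical example: improvement in pain score (treatment vs control).
d, (lo, hi) = smd_with_ci(mean_tx=3.0, sd_tx=1.1, n_tx=30,
                          mean_ctrl=1.8, sd_ctrl=1.2, n_ctrl=28)
print(f"SMD = {d:.2f} (95% CI {lo:.2f} to {hi:.2f}), {interpret(d)} effect")
```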
Conclusions
This systematic review establishes that there is emerging high-quality evidence to support that certain rehabilitation interventions provide benefits to specific treatment goals in individuals with hand OA. A summary of the higher-quality evidence is provided to assist with clinical decision making based on current evidence. In this review, the evidence suggests the following: (a) long-term use of a night splint offers significant benefits to improve pain, hand function, strength, and ROM for patients with OA; (b) programs of joint protection, advice, and home exercises are effective at improving grip strength and hand function; (c) low-level laser therapy is effective at improving ROM; and (d) no rehabilitation interventions were found to improve stiffness. Though recommended for OA, exercise programs have not yet been shown to reduce pain in this patient group. We concur with previous systematic reviews suggesting that further high-quality research is urgently needed concerning the effects of rehabilitation interventions on specific patient goals for individuals with hand OA. Specifically, the future agenda should include (a) the use of a common set of outcome measures that adequately capture the dimensions of impairments and function; (b) the use of higher-quality, well-powered studies that adhere to the CONSORT (Consolidated Standards of Reporting Trials) guidelines for non-pharmacological treatments [59]; and (c) the role of exercise on specific patient goals for individuals with hand OA with consideration of the optimal frequency and intensity of training.
[ "Introduction", "Eligibility criteria", "Search strategy", "Study selection", "Assessment of study quality", "Date extraction and analysis", "Study selection", "Study characteristics", "Methodological quality", "Results of studies", "Pain", "Self-reported hand function", "Strength", "Range of motion", "Stiffness", "Synthesis of results", "Pain relief and function", "Strength, range of motion, and stiffness", "Other treatment modalities", "Abbreviations", "Competing interests", "Authors' contributions" ]
[ "Hand osteoarthritis (OA) is a common chronic condition involving one or more joints of the thumb and fingers [1]. Estimates of the prevalence of symptomatic hand OA range from 13% to 26% and are greater in women [1]. Hand OA is associated with pain, reduced grip strength, loss of range of motion (ROM), and joint stiffness, leading to impaired hand function and difficulty with daily activities [2].\nAccording to the European League Against Rheumatism (EULAR), the optimal management of hand OA requires both non-pharmacological and pharmacological approaches [1]. Rehabilitative interventions are both non-pharmacological and non-surgical treatments used by therapists in clinical practice to help maintain or regain a person's maximum self-sufficiency and function. They include treatments such as exercise, splints, heat therapy, electrotherapy, acupuncture, and massage and are recommended for relieving pain and improving hand function, although the level of evidence supporting this recommendation is mainly at the level of 'expert opinion' [1].\nCommon goals for the treatment of hand OA are pain relief, improved hand strength and ROM, and reduced stiffness, with an overall goal to improve physical hand function [3]. Evidence-based practice requires knowledge of which interventions will most effectively address treatment goals and which interventions best target prioritized problems [4].\nTo date, there have been five systematic reviews [5-9] investigating conservative interventions for hand OA. The focus of the two earliest reviews was on pharmacological interventions, with little emphasis given to rehabilitative treatments [6,9]. Although Towheed's systematic review [8] and its update [5] reviewed studies of rehabilitative approaches, the main emphasis of these reviews was on methodological quality rather than treatment effects. The effectiveness of different rehabilitation interventions on specific treatment goals has not yet been fully explored. The most recently published systematic review [7] summarized the evidence based on systematic reviews rather than relevant primary studies. Its most striking finding was the paucity of available systematic reviews in this area and limited quality evidence that can be used to guide best practice.\nGiven the prevalence of hand OA and the limited evidence for non-pharmacological conservative treatments, the objectives of this systematic review were (a) to review the current quality of evidence of rehabilitation interventions for hand OA; (b) to explore the treatment effects of these rehabilitation treatments in relation to specific outcome measures of hand pain, strength, ROM, and stiffness and to hand function in adults with hand OA; and (c) to provide evidence-based knowledge on the treatment effects of different rehabilitation interventions for specific treatment goals.\nKnowledge of study quality and the treatment effects of specific rehabilitation techniques will be useful to help guide best clinical practice for individuals with a diagnosis of hand OA. Greater knowledge of which treatments offer the greatest effect on specific treatment goals will aid therapists to select the most effective rehabilitation strategies to improve impairment and function in individuals with hand OA. 
Evidence of treatment effects from higher-quality studies can be used in clinical practice to guide informed decision making and meet patient-specific goals.", "Randomized controlled trials (RCTs), quasi-RCTs, or crossover trials (that is, level of evidence 1b and 2b on Oxford levels of evidence) [10] in English were included for evaluation if they compared some form of rehabilitation with a control for adults whose condition was diagnosed as hand OA. The rehabilitative interventions included those that are used by therapists in clinical practice to treat hand OA, such as exercise, splints, heat therapy, electrotherapy, acupuncture, and massage. The control could be no treatment, usual care, or a placebo intervention. In addition, studies needed to assess at least one of the following outcomes: (a) hand pain including individual joint(s) or overall hand pain, (b) self-reported hand physical function, or (c) other measures of hand impairment, such as grip strength, ROM, or stiffness. Studies evaluating surgical or pharmacological interventions were excluded as were studies reported only in the form of abstracts, conference proceedings, or poster presentations.", "We searched the following electronic databases: MEDLINE (1950 to October 2010), CINAHL (Cumulative Index to Nursing and Allied Health Literature) (1981 to October 2010), ISI Web of Science (1950 to October 2010), SciVerse Scopus (1960 to October 2010), and Physiotherapy Evidence Database (PEDro) (1999). Specific search strategies for each database are provided in Appendix 1 (Additional file 1). We also searched the references of all systematic reviews of hand OA [5-9] and papers from experts in the field.", "We examined the list of titles and abstracts identified by the literature searches for potentially relevant studies. Two reviewers (LY and LK) independently applied the predetermined inclusion criteria to the full text of the identified studies. Any conflicts were resolved through a third independent researcher (KB).", "Two independent raters (LY and LK) assessed the methodological quality of included trials by means of the PEDro scale [11]. Disagreements were resolved by discussion with a third reviewer (KB). The PEDro scale is a validated scale used to assess the quality of randomized controlled rehabilitative studies [12-14] and provides a comprehensive measure of methodological quality [15]. It includes 11 criteria to assess the internal and external validity of clinical trials: criterion 1 measures external validity and is not included in the final score, and criteria 2 to 11 measure internal validity. The scale is scored out of 10, with 10 indicating the highest quality and 0 indicating the poorest quality. The items consist of (1) specification of eligibility criteria, (2) random allocation, (3) concealed allocation, (4) similarity at baseline, (5) blinding of subjects, (6) blinding of operators, (7) blinding of assessors, (8) measures of at least one key outcome obtained from at least 85% of subjects initially allocated to groups, (9) intention-to-treat principle, (10) results of between-group comparison, and (11) point measures and measures of variability reported. 
As it is difficult to blind therapists or participants in most rehabilitation trials, many studies do not meet all criteria; therefore, a trial can be considered to be of relatively high quality if it scores greater than 6 out of 10 on the PEDro scale [16].", "A predefined data extraction form with study design, participant characteristics, diagnosis, affected hand joints, intervention, and duration of interventions was used. To provide a comparison between outcomes reported by the studies, the standardized mean difference (SMD) over time and corresponding 95% confidence interval (CI) were calculated for continuous variables, if possible, immediately after treatment and at the longest follow-up time point by means of the software package RevMan 5 [17]. Although studies may have provided more than one outcome measure under each category of pain, function, strength, ROM, and stiffness, only one measure in each category per study was selected. The measures selected for calculation of the SMD were based on the following hierarchy: (a) for pain, measures of global hand pain took precedence over pain on motion and the Australian/Canadian OA hand index (AUSCAN) pain subscale [18]; (b) for strength, grip strength took precedence over lateral pinch strength and other strength as grip strength is the most commonly used outcome measure in these trials; and (c) for trials measuring outcomes for different hand joints, we extracted data of the joints in the following order: the distal interphalangeal (DIP) joints, the base of the thumb carpometacarpal (CMC) joints, and the proximal interphalangeal (PIP) joints, as the most commonly affected hand joints, in decreasing order, are the DIP joints, thumb CMC joints, and the PIP joints [19]. The effect estimates were interpreted as described by Cohen [20]; that is, an SMD of 0.2 to 0.5 was considered a small effect, 0.5 to 0.8 a moderate effect, and at least 0.8 a large effect of the individual rehabilitative intervention. We had planned to conduct a meta-analysis but this was not possible, owing to the heterogeneity of study interventions and outcome measures, which made pooling of data across trials inappropriate (I2 values of 89% to 99%).", "A flow diagram, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [21], of the results of the study selection procedure is presented in Figure 1. The search strategy yielded 629 articles. After duplications were deleted, 430 articles remained. Of these, 20 studies met the inclusion criteria [22-41]. After the full-text versions of these papers were reviewed, 10 studies were selected for this systematic review [22,24,26,27,30,31,33-35,39]. Reasons for exclusion included lack of a control group (n = 8) [23,25,32,36-38,40,41], language other than English (n = 1) [28], and not RCT or quasi-RCT (n = 1) [29].\nFlow diagram of the results of the study selection procedure, which is in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. OA, osteoarthritis; RCT, randomized controlled trial.", "Details of the 10 eligible studies are presented in Tables 1 and 2. Of these studies, seven were RCTs, two were crossover trials, and one was a quasi-RCT. Five studies involved patients with both CMC joint and interphalangeal (IP) joint OA, one study involved patients with OA of the CMC joint only, while the remainder did not report the specific hand joints involved. 
Diagnosis of hand OA was based on clinical or radiologic criteria (or both) in five studies and on clinical criteria only in three studies; two studies did not clearly state their method of diagnosing hand OA. The age of participants ranged from 56 to 82 years, which is representative of adults with OA as reported in cohort studies [42,43]. Six different rehabilitation interventions were investigated (Table 2): one study investigated splints [31], two investigated laser therapy [22,24], two investigated heat therapy (using infrared radiation from a lamp or a heated tiled stove) [35,39], three investigated exercise programs [30,33,34], one investigated massage [27], and one investigated acupuncture [26]. Treatment durations ranged from 2 to 52 weeks, with a mean (standard deviation) of 10.9 (15.1) weeks. All studies, except one [39], reported the outcome measures immediately after treatment. Two studies reported a longer-term follow-up, with durations ranging from 2 weeks to 1 year [24,31].\nStudy design and participant characteristics\nCMC, carpometacarpal; F, female; IP, interphalangeal; LOE, level of evidence (Oxford); M, male; n, number; NS, not stated; OA, osteoarthritis; RCT, randomized controlled trial; SD, standard deviation.\nDescription of study interventions and outcome measures\nROM refers to active range of motion of carpometacarpal (CMC), metacarpophalangeal (MCP), and interphalangeal (IP) joints of the thumb and MCP, distal interphalangeal (DIP), and proximal interphalangeal (PIP) joint movements of the 2nd-5th fingers. AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; HAQ, Health Assessment Questionnaire; NSAID, nonsteroidal anti-inflammatory drug; OA, osteoarthritis; VAS, visual analogue scale.", "The methodological quality of included studies (Table 3) ranged from 3 to 10 points out of a maximum of 10 points. Six trials were considered to have relatively high quality [22,24,26,31,34,35] and four trials lower quality [27,30,33,39]. One study, investigating laser therapy [24], met the criteria of blinding therapists and participants. Concealed allocation and the use of an intention-to-treat analysis were other criteria not met in most studies.\nQuality ratings of included studies according to the PEDro methodology scoring system\nITT, intention-to-treat; PEDro, Physiotherapy Evidence Database.", "The treatment effects (SMD with 95% CI) of the six different rehabilitative interventions on the outcomes of pain, self-reported physical function, strength, ROM, and self-reported stiffness, immediately after treatment and at the longest follow-up time point, are presented in Table 4. Treatment effects from the higher-quality studies on each of the outcomes are shown in Figures 2, 3, 4, 5 and 6. Most studies focused on interventions to improve pain and strength. Fewer studies investigated the effects on improving function, which is an important goal in clinical practice. Seven studies reported sufficient data to calculate the SMD with its 95% CI. For the remaining three studies, the author or authors were contacted, resulting in additional information from which to calculate the SMD in one of these three studies. The following sections will outline the treatment effects of rehabilitation strategies for each of the included outcomes.\nTreatment effects of rehabilitation interventions on study outcomes\naSignificant treatment effects.
ADL, activities of daily living; AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; Dy, dynamometer(s); HAQ, Health Assessment Questionnaire; KI, Kapandji index (thumb opposition); NA, standardized mean difference not estimable; NS, measurement tool not stated; PEDro, Physiotherapy Evidence Database; S, sphygmomanometer; SMD, standardized mean difference; V, vigorimeter; VAS, visual analogue scale; VITAS, visual analogue scale anchored with five faces.\nTreatment effects of the higher-quality studies on pain. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on function. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on strength. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on range of motion (ROM). CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on stiffness. CI, confidence interval; SMD, standardized mean difference.", "The effects of all six rehabilitation interventions on pain were reported in eight of the 10 studies (Table 4). From the eight studies, six were graded as higher quality (greater than 6 on the PEDro scale). Of these higher-quality studies, only one study investigating long-term splint use was shown to have a positive treatment effect on improving pain when the visual analogue scale was used to measure outcome (Figure 2). In this study, Rannou and colleagues [31] found that 12 months of continued use of a night splint resulted in large improvements in pain (SMD = 4.24, 95% CI 3.52, 4.97). One lower-quality study demonstrated a smaller, though still large, treatment effect of massage on improving pain (SMD = 1.18, 95% CI 0.26, 2.10) [27]. Although we could not calculate the SMD, the authors of the one trial of acupuncture reported no short-term pain-relieving effects (P = 1.0) [26].", "The effects of all interventions, except massage, on hand function were investigated in six of the 10 studies (Table 4). From the six studies, five were graded as higher-quality studies. Of these higher-quality studies, a positive treatment effect could be calculated from one study. In this study [31], use of a splint resulted in a large improvement in hand function in both the short and long term as measured by the Cochin hand functional scale (SMD = 1.10 and 3.73, respectively) (Figure 3). Of the two studies from which we were unable to calculate SMD, a significantly higher proportion of patients reported improved function with a 3-month hand ROM exercise program and education about joint protection in comparison with those who received general OA education and use of non-slip matting to open jars (P < 0.05) [34]. However, no functional improvement was shown in another exercise trial that included both ROM and strengthening exercises [33]. Laser therapy [24] and heat treatment [35] had no effect on hand function as measured by the AUSCAN. Similarly, the trial on acupuncture reported no effect on function [26].", "The effects of all interventions on hand strength were investigated in all 10 trials (Table 4). Six of these 10 studies were graded as higher-quality studies, and positive treatment effects could be calculated from two of the six studies (Figure 4).
Improvements in hand strength, measured by means of an electronic dynamometer, were found in both the short and long term with the use of splinting in one study (SMD = 0.9 and 1.2, respectively) [31]. A large positive treatment effect (SMD = 4.5), measured by means of a vigorimeter, was found with the use of a home ROM exercise program [34]. Effect sizes could not be calculated in three studies [24,26,39]. Of these studies, one study [24] reported significant improvement in grip strength (P = 0.041) when measured with a dynamometer following laser therapy, one trial [39] did not measure between-group strength difference, and the other trial [26] drew no conclusion on the effect of acupuncture on hand strength.", "The effects of three interventions (splints, laser, and exercise) on ROM were investigated by four studies (Table 4). Of these, three were graded as higher-quality studies, and treatment effects could be calculated from one of the three studies. A small negative effect (SMD = -0.4) in the short term and a large positive effect (SMD = 3.3) in the long term were found on hand ROM in one trial of splinting [31] (Figure 5). Of the two studies from which we were unable to calculate SMD, a significant improvement in ROM was reported for hand-strengthening exercises [30] whereas no overall improvement was reported for laser therapy [22,24], except CMC opposition (P = 0.011) [24].", "The effects of three interventions (laser, heat, and exercise) on self-reported stiffness using the AUSCAN scale were investigated in three studies, two of which were graded as higher-quality studies (Table 4). None of the interventions had positive treatment effects on hand joint stiffness (Figure 6). However, as stiffness was measured with only one item from the 15-item AUSCAN scale, it is possible that this tool did not capture the full dimension of stiffness.", "A summary of currently available evidence from higher-quality studies with positive treatment effects of rehabilitative interventions on pain, function, and physical impairments is provided in Table 5.\nSummary of the higher-quality evidence for treating impairments and function in individuals with hand osteoarthritis\nCHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; CMC, carpometacarpal; Dy, dynamometer(s); G, goniometer(s); HAQ, Health Assessment Questionnaire; IP, interphalangeal; KI, Kapandji index (thumb opposition); LOE, level of evidence (Oxford); NA, standardized mean difference not estimable; PEDro, Physiotherapy Evidence Database; SMD, standardized mean difference; V, vigorimeter; VAS, visual analogue scale.", "Pain relief has been reported as the primary treatment goal for hand OA because of its direct correlation with increased hand function [44]. In this review, the use of long-term night splinting was found to be the only effective intervention for both pain reduction and improved physical function [31]. This relative paucity of effect on pain is somewhat surprising given that RCTs for knee and hip OA have reported positive effects on pain from a variety of rehabilitative interventions [45]. However, this discrepancy may reflect the different disease characteristics, such as different risk factors for development and progression, biomechanical features, and physical impairments of hand OA when compared with lower-extremity OA.\nNight splinting of the thumb has particularly been recommended for OA of the hand [46] as CMC joint OA has a greater impact on pain and dysfunction than IP OA does [47].
A 7-year prospective study [48] showed that thumb splinting improved hand function and, importantly, reduced the need for surgery. EULAR [49] also recommends using splints to prevent/correct lateral angulation and flexion deformity at the thumb. Our review found evidence from a higher-quality, adequately powered RCT that a custom-made neoprene night splint led to significant improvements compared with usual care for 12 months, although it did not improve pain or ROM in the short term (1 month) [31]. In the trial by Rannou and colleagues [31], participants were instructed to use the night splint for 12 months. Adherence was good: 86% wore the splint 5 to 7 nights a week [31].\nEvidence from this review did not support the use of laser therapy, heat treatment, exercise, or acupuncture for reducing pain or improving function in hand OA. However, Stamm and colleagues [34] reported a higher proportion of patients with an at least 10% increase in global hand function using exercise. This was the only exercise study to report an improvement in hand function; however, as the exercise was combined with joint protection education, it is difficult to truly isolate the independent effects of exercise [34].\nLow-level laser therapy has been found to regulate chondrocytic proliferation and stimulate collagen synthesis in animals [50,51]. It is thought to have analgesic effects as well as biomodulatory effects on microcirculation [52]. Despite these physiological effects, the two high-quality, well-powered RCTs in our review reported no significant positive clinical effects of laser therapy delivered thrice weekly for 3 to 6 weeks on pain and hand function. This contrasts with findings for laser therapy in the treatment of knee OA, for which there is moderate-quality evidence of beneficial effects, including pain reduction and functional improvement [53,54]. It may be that different devices, method and site of application, wavelength, treatment regime, and measurement tools influence the result.\nMassage therapy was shown to be effective in reducing pain in patients with hand OA; however, owing to the lower quality (3 on the PEDro scale) of the one study on massage [27], it is hard to draw definitive conclusions about massage therapy. The single trial of acupuncture did not support its use for hand OA for pain and function, but no detail was provided about the treatment dosage or the acupuncture points used. This lack of effect of acupuncture is consistent with findings of a recent systematic review of acupuncture for all OA; the review showed that, while there were statistically significant benefits in sham-controlled trials, the benefits were small, did not meet predefined thresholds for clinical relevance, and were possibly due at least partially to placebo effects from incomplete blinding [55].", "Improvements of hand strength and ROM and reduction of stiffness are also common goals of rehabilitation in hand OA [3]. The use of night splints in both the short term and long term was shown to have a treatment effect on strength and ROM but not on stiffness. Interestingly, the use of night splinting produced a small negative treatment effect (SMD = -0.4) in the short term but a large positive effect (SMD = 3.3) in the long term on ROM in one study [31].
This finding is important knowledge for therapists when providing advice on the duration of night splint use where the goal is to improve ROM.\nExercise is considered a mainstay of treatment for OA and yet, in this review, only three trials [30,33,34], mostly of lower quality, investigated the effects of various exercise programs to improve strength, ROM, or stiffness. Surprisingly, the exercise programs that incorporated strengthening exercises failed to find strength gains yet found an effect on ROM [30,33], while a large significant improvement in grip strength was found with a program that involved ROM exercises [34]. These programs all differed in their exercise content and dosage. Precise details on the intensity of the exercise program were limited. It is possible that the intensity of the strengthening exercises was insufficient for change to occur, especially given that increases in strength were not evident. Further studies that address the optimal intensity of strengthening exercises for hand OA are required.\nNo studies found significant positive effects of splints, laser, heat, or exercise on stiffness. Further trials using larger sample sizes and a more rigorous methodology are needed to evaluate different forms of exercise for improving strength and ROM and reducing stiffness in patients with hand OA. Constraining outcome measures to only self-reported methods, such as using the 1-item AUSCAN stiffness subscale to measure stiffness, may reduce the ability to capture the full dimension of the impairment [56]. The additional use of performance-based outcome measures that can complement self-reported measures needs to be considered when assessing outcomes, such as stiffness, to assist in capturing this extent of impairment and function in hand OA.\nThe only other rehabilitation intervention reported to improve strength or ROM was laser therapy [24]. This high-quality, well-powered RCT found a benefit of laser therapy delivered thrice weekly for 3 to 6 weeks on grip strength and CMC opposition. Trials investigating the effect of heat therapy for patients with hand OA did not find improvements in strength or stiffness, whether the heat was provided by a tiled stove [35] or by infrared radiation [39]. No studies on the application of wax or hot packs were included in this review.", "No studies fulfilling our inclusion criteria were found for ultrasound or transcutaneous electrical nerve stimulation (TENS). Ultrasound is recommended by EULAR for the management of OA, yet there is evidence from studies of knee OA that ultrasound offers no benefit over placebo [53]. Given that hand joints are more superficial than the knee joint, ultrasound may have different effects in hand OA and is worthy of investigation. Likewise, the effect of TENS for the management of hand OA should be investigated given that some [53,54] but not all [57] systematic reviews in knee OA show that TENS has significant pain-relieving benefits. One study involving TENS, excluded from our review but included in that of Towheed [8], found that use of a glove electrode was, overall, more effective than use of a carbon electrode when using TENS in individuals with hand OA. Other rehabilitative interventions we excluded from our review involved a yoga program [29], which was reported to be effective in improving pain, tenderness, and ROM, and leech therapy, which was more effective than treatment with the drug diclofenac [58].\nThere are several limitations to this review.
First, the statistical power of most studies was rather low. To detect a medium effect size of 0.5 (with α = 0.05 and power at 80%), the sample size per group needs to be at least 50 [20] (a worked sketch of this power calculation appears below). This is particularly relevant given that many studies reported a lack of treatment effect on the measured outcomes, and this lack of effect may simply reflect inadequate statistical power. Furthermore, despite contacting authors to request additional information where required, we were unable to calculate effect sizes for two trials included in the review. Second, we did not confine our studies to RCTs, given the likely lack of studies in this area, and instead included one quasi-RCT [39] and two crossover trials [33,35] on the assumption that hand OA is a non-curable condition and that carry-over of treatment effect across periods may be less likely. The findings of these studies need to be interpreted cautiously given these study designs. Third, the methodological assessment revealed some threats to the validity of the included trials, with around half the studies rated as being of lower quality. A summary of the evidence was therefore made with higher-quality studies graded by means of the PEDro system. Fourth, there was variable use of outcome measures across the trials, making it difficult to compare and pool results across studies.", "AUSCAN: Australian/Canadian osteoarthritis hand index; CI: confidence interval; CMC: carpometacarpal; DIP: distal interphalangeal; EULAR: European League Against Rheumatism; IP: interphalangeal; OA: osteoarthritis; PEDro: Physiotherapy Evidence Database; PIP: proximal interphalangeal; RCT: randomized controlled trial; ROM: range of motion; SMD: standardized mean difference; TENS: transcutaneous electrical nerve stimulation.", "The authors declare that they have no competing interests.", "LY participated in the study design and in the acquisition, analysis, and interpretation of data and drafted the manuscript. LK participated in the study design and in the acquisition and analysis of the data and helped to draft the manuscript. AS participated in the study design and in the analysis and interpretation of the data and helped to draft the manuscript. FD participated in data acquisition, analysis, and interpretation and drafted the final revisions of the manuscript. KB participated in the study concept and design and in the interpretation of the data and assisted with the drafting of the manuscript. All authors read and approved the final manuscript." ]
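The PEDro rating used above reduces to a simple tally: one point for each internal-validity criterion (items 2 to 11) that a trial meets, with the greater-than-6 cut-off adopted in this review to label a trial as higher quality. A minimal Python sketch of that tally follows; the item labels and example ratings are hypothetical and are not taken from any included trial.

    # Hypothetical illustration of the PEDro tally described in this review.
    # Item names are paraphrased; the ratings below are invented, not from any trial.
    PEDRO_ITEMS = [
        "random allocation", "concealed allocation", "baseline similarity",
        "subject blinding", "therapist blinding", "assessor blinding",
        "outcomes from >85% of allocated subjects", "intention-to-treat analysis",
        "between-group statistical comparison", "point measures and variability",
    ]

    def pedro_score(ratings):
        """ratings: dict mapping each scored item to True (met) or False (not met)."""
        score = sum(bool(ratings.get(item, False)) for item in PEDRO_ITEMS)
        quality = "higher quality" if score > 6 else "lower quality"  # cut-off used in this review
        return score, quality

    example_ratings = {item: True for item in PEDRO_ITEMS[:7]}  # 7 of the 10 criteria met
    print(pedro_score(example_ratings))  # -> (7, 'higher quality')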
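The standardized mean differences reported above were computed in RevMan 5. As a rough illustration of the underlying arithmetic, the sketch below derives a Cohen's-d-style SMD and an approximate 95% CI from group means, standard deviations, and sample sizes, and applies the Cohen thresholds quoted in the methods. This is a simplified sketch: RevMan's default SMD (Hedges' adjusted g) adds a small-sample correction not included here, and the input numbers are invented for illustration rather than drawn from any included trial.

    from math import sqrt

    def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
        """Cohen's-d-style SMD between two groups with an approximate 95% CI."""
        pooled_sd = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
        d = (m1 - m2) / pooled_sd
        se = sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))  # standard approximation
        return d, (d - 1.96 * se, d + 1.96 * se)

    def cohen_label(d):
        """Thresholds quoted in the methods: 0.2-0.5 small, 0.5-0.8 moderate, >=0.8 large."""
        d = abs(d)
        return "large" if d >= 0.8 else "moderate" if d >= 0.5 else "small" if d >= 0.2 else "negligible"

    # Invented summary statistics (e.g. change in a 0-100 pain score), for illustration only:
    d, ci = smd_with_ci(m1=12.0, sd1=8.0, n1=40, m2=18.0, sd2=9.0, n2=40)
    print(round(d, 2), tuple(round(x, 2) for x in ci), cohen_label(d))  # -0.7 (-1.16, -0.25) moderate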
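The power argument in the limitations can be made concrete with the usual normal-approximation formula for a two-group comparison, n per group = 2 * ((z_alpha + z_beta) / d)^2. The roughly 50-per-group figure quoted above corresponds to a one-sided test at alpha = 0.05; a two-sided test at the same alpha requires roughly 63 per group. The sketch below reproduces that arithmetic and is illustrative only.

    from math import ceil
    from statistics import NormalDist

    def n_per_group(d, alpha=0.05, power=0.80, two_sided=True):
        """Normal-approximation sample size per group to detect a standardized effect d."""
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2) if two_sided else z.inv_cdf(1 - alpha)
        z_beta = z.inv_cdf(power)
        return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

    print(n_per_group(0.5, two_sided=False))  # 50 per group, matching the figure cited above
    print(n_per_group(0.5))                   # 63 per group for a two-sided test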
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Eligibility criteria", "Search strategy", "Study selection", "Assessment of study quality", "Date extraction and analysis", "Results", "Study selection", "Study characteristics", "Methodological quality", "Results of studies", "Pain", "Self-reported hand function", "Strength", "Range of motion", "Stiffness", "Synthesis of results", "Discussion", "Pain relief and function", "Strength, range of motion, and stiffness", "Other treatment modalities", "Conclusions", "Abbreviations", "Competing interests", "Authors' contributions", "Supplementary Material" ]
[ "Hand osteoarthritis (OA) is a common chronic condition involving one or more joints of the thumb and fingers [1]. Estimates of the prevalence of symptomatic hand OA range from 13% to 26% and are greater in women [1]. Hand OA is associated with pain, reduced grip strength, loss of range of motion (ROM), and joint stiffness, leading to impaired hand function and difficulty with daily activities [2].\nAccording to the European League Against Rheumatism (EULAR), the optimal management of hand OA requires both non-pharmacological and pharmacological approaches [1]. Rehabilitative interventions are both non-pharmacological and non-surgical treatments used by therapists in clinical practice to help maintain or regain a person's maximum self-sufficiency and function. They include treatments such as exercise, splints, heat therapy, electrotherapy, acupuncture, and massage and are recommended for relieving pain and improving hand function, although the level of evidence supporting this recommendation is mainly at the level of 'expert opinion' [1].\nCommon goals for the treatment of hand OA are pain relief, improved hand strength and ROM, and reduced stiffness, with an overall goal to improve physical hand function [3]. Evidence-based practice requires knowledge of which interventions will most effectively address treatment goals and which interventions best target prioritized problems [4].\nTo date, there have been five systematic reviews [5-9] investigating conservative interventions for hand OA. The focus of the two earliest reviews was on pharmacological interventions, with little emphasis given to rehabilitative treatments [6,9]. Although Towheed's systematic review [8] and its update [5] reviewed studies of rehabilitative approaches, the main emphasis of these reviews was on methodological quality rather than treatment effects. The effectiveness of different rehabilitation interventions on specific treatment goals has not yet been fully explored. The most recently published systematic review [7] summarized the evidence based on systematic reviews rather than relevant primary studies. Its most striking finding was the paucity of available systematic reviews in this area and limited quality evidence that can be used to guide best practice.\nGiven the prevalence of hand OA and the limited evidence for non-pharmacological conservative treatments, the objectives of this systematic review were (a) to review the current quality of evidence of rehabilitation interventions for hand OA; (b) to explore the treatment effects of these rehabilitation treatments in relation to specific outcome measures of hand pain, strength, ROM, and stiffness and to hand function in adults with hand OA; and (c) to provide evidence-based knowledge on the treatment effects of different rehabilitation interventions for specific treatment goals.\nKnowledge of study quality and the treatment effects of specific rehabilitation techniques will be useful to help guide best clinical practice for individuals with a diagnosis of hand OA. Greater knowledge of which treatments offer the greatest effect on specific treatment goals will aid therapists to select the most effective rehabilitation strategies to improve impairment and function in individuals with hand OA. 
Evidence of treatment effects from higher-quality studies can be used in clinical practice to guide informed decision making and meet patient-specific goals.", "[SUBTITLE] Eligibility criteria [SUBSECTION] Randomized controlled trials (RCTs), quasi-RCTs, or crossover trials (that is, level of evidence 1b and 2b on Oxford levels of evidence) [10] in English were included for evaluation if they compared some form of rehabilitation with a control for adults whose condition was diagnosed as hand OA. The rehabilitative interventions included those that are used by therapists in clinical practice to treat hand OA, such as exercise, splints, heat therapy, electrotherapy, acupuncture, and massage. The control could be no treatment, usual care, or a placebo intervention. In addition, studies needed to assess at least one of the following outcomes: (a) hand pain including individual joint(s) or overall hand pain, (b) self-reported hand physical function, or (c) other measures of hand impairment, such as grip strength, ROM, or stiffness. Studies evaluating surgical or pharmacological interventions were excluded as were studies reported only in the form of abstracts, conference proceedings, or poster presentations.\nRandomized controlled trials (RCTs), quasi-RCTs, or crossover trials (that is, level of evidence 1b and 2b on Oxford levels of evidence) [10] in English were included for evaluation if they compared some form of rehabilitation with a control for adults whose condition was diagnosed as hand OA. The rehabilitative interventions included those that are used by therapists in clinical practice to treat hand OA, such as exercise, splints, heat therapy, electrotherapy, acupuncture, and massage. The control could be no treatment, usual care, or a placebo intervention. In addition, studies needed to assess at least one of the following outcomes: (a) hand pain including individual joint(s) or overall hand pain, (b) self-reported hand physical function, or (c) other measures of hand impairment, such as grip strength, ROM, or stiffness. Studies evaluating surgical or pharmacological interventions were excluded as were studies reported only in the form of abstracts, conference proceedings, or poster presentations.\n[SUBTITLE] Search strategy [SUBSECTION] We searched the following electronic databases: MEDLINE (1950 to October 2010), CINAHL (Cumulative Index to Nursing and Allied Health Literature) (1981 to October 2010), ISI Web of Science (1950 to October 2010), SciVerse Scopus (1960 to October 2010), and Physiotherapy Evidence Database (PEDro) (1999). Specific search strategies for each database are provided in Appendix 1 (Additional file 1). We also searched the references of all systematic reviews of hand OA [5-9] and papers from experts in the field.\nWe searched the following electronic databases: MEDLINE (1950 to October 2010), CINAHL (Cumulative Index to Nursing and Allied Health Literature) (1981 to October 2010), ISI Web of Science (1950 to October 2010), SciVerse Scopus (1960 to October 2010), and Physiotherapy Evidence Database (PEDro) (1999). Specific search strategies for each database are provided in Appendix 1 (Additional file 1). We also searched the references of all systematic reviews of hand OA [5-9] and papers from experts in the field.\n[SUBTITLE] Study selection [SUBSECTION] We examined the list of titles and abstracts identified by the literature searches for potentially relevant studies. 
Two reviewers (LY and LK) independently applied the predetermined inclusion criteria to the full text of the identified studies. Any conflicts were resolved through a third independent researcher (KB).\nWe examined the list of titles and abstracts identified by the literature searches for potentially relevant studies. Two reviewers (LY and LK) independently applied the predetermined inclusion criteria to the full text of the identified studies. Any conflicts were resolved through a third independent researcher (KB).\n[SUBTITLE] Assessment of study quality [SUBSECTION] Two independent raters (LY and LK) assessed the methodological quality of included trials by means of the PEDro scale [11]. Disagreements were resolved by discussion with a third reviewer (KB). The PEDro scale is a validated scale used to assess the quality of randomized controlled rehabilitative studies [12-14] and provides a comprehensive measure of methodological quality [15]. It includes 11 criteria to assess the internal and external validity of clinical trials: criterion 1 measures external validity and is not included in the final score, and criteria 2 to 11 measure internal validity. The scale is scored out of 10, with 10 indicating the highest quality and 0 indicating the poorest quality. The items consist of (1) specification of eligibility criteria, (2) random allocation, (3) concealed allocation, (4) similarity at baseline, (5) blinding of subjects, (6) blinding of operators, (7) blinding of assessors, (8) measures of at least one key outcome obtained from at least 85% of subjects initially allocated to groups, (9) intention-to-treat principle, (10) results of between-group comparison, and (11) point measures and measures of variability reported. As it is difficult to blind therapists or participants in most rehabilitation trials, many studies do not meet all criteria; therefore, a trial can be considered to be of relatively high quality if it scores greater than 6 out of 10 on the PEDro scale [16].\nTwo independent raters (LY and LK) assessed the methodological quality of included trials by means of the PEDro scale [11]. Disagreements were resolved by discussion with a third reviewer (KB). The PEDro scale is a validated scale used to assess the quality of randomized controlled rehabilitative studies [12-14] and provides a comprehensive measure of methodological quality [15]. It includes 11 criteria to assess the internal and external validity of clinical trials: criterion 1 measures external validity and is not included in the final score, and criteria 2 to 11 measure internal validity. The scale is scored out of 10, with 10 indicating the highest quality and 0 indicating the poorest quality. The items consist of (1) specification of eligibility criteria, (2) random allocation, (3) concealed allocation, (4) similarity at baseline, (5) blinding of subjects, (6) blinding of operators, (7) blinding of assessors, (8) measures of at least one key outcome obtained from at least 85% of subjects initially allocated to groups, (9) intention-to-treat principle, (10) results of between-group comparison, and (11) point measures and measures of variability reported. 
As it is difficult to blind therapists or participants in most rehabilitation trials, many studies do not meet all criteria; therefore, a trial can be considered to be of relatively high quality if it scores greater than 6 out of 10 on the PEDro scale [16].\n[SUBTITLE] Date extraction and analysis [SUBSECTION] A predefined data extraction form with study design, participant characteristics, diagnosis, affected hand joints, intervention, and duration of interventions was used. To provide a comparison between outcomes reported by the studies, the standardized mean difference (SMD) over time and corresponding 95% confidence interval (CI) were calculated for continuous variables, if possible, immediately after treatment and at the longest follow-up time point by means of the software package RevMan 5 [17]. Although studies may have provided more than one outcome measure under each category of pain, function, strength, ROM, and stiffness, only one measure in each category per study was selected. The measures selected for calculation of the SMD were based on the following hierarchy: (a) for pain, measures of global hand pain took precedence over pain on motion and the Australian/Canadian OA hand index (AUSCAN) pain subscale [18]; (b) for strength, grip strength took precedence over lateral pinch strength and other strength as grip strength is the most commonly used outcome measure in these trials; and (c) for trials measuring outcomes for different hand joints, we extracted data of the joints in the following order: the distal interphalangeal (DIP) joints, the base of the thumb carpometacarpal (CMC) joints, and the proximal interphalangeal (PIP) joints, as the most commonly affected hand joints, in decreasing order, are the DIP joints, thumb CMC joints, and the PIP joints [19]. The effect estimates were interpreted as described by Cohen [20]; that is, an SMD of 0.2 to 0.5 was considered a small effect, 0.5 to 0.8 a moderate effect, and at least 0.8 a large effect of the individual rehabilitative intervention. We had planned to conduct a meta-analysis but this was not possible, owing to the heterogeneity of study interventions and outcome measures, which made pooling of data across trials inappropriate (I2 values of 89% to 99%).\nA predefined data extraction form with study design, participant characteristics, diagnosis, affected hand joints, intervention, and duration of interventions was used. To provide a comparison between outcomes reported by the studies, the standardized mean difference (SMD) over time and corresponding 95% confidence interval (CI) were calculated for continuous variables, if possible, immediately after treatment and at the longest follow-up time point by means of the software package RevMan 5 [17]. Although studies may have provided more than one outcome measure under each category of pain, function, strength, ROM, and stiffness, only one measure in each category per study was selected. 
The measures selected for calculation of the SMD were based on the following hierarchy: (a) for pain, measures of global hand pain took precedence over pain on motion and the Australian/Canadian OA hand index (AUSCAN) pain subscale [18]; (b) for strength, grip strength took precedence over lateral pinch strength and other strength as grip strength is the most commonly used outcome measure in these trials; and (c) for trials measuring outcomes for different hand joints, we extracted data of the joints in the following order: the distal interphalangeal (DIP) joints, the base of the thumb carpometacarpal (CMC) joints, and the proximal interphalangeal (PIP) joints, as the most commonly affected hand joints, in decreasing order, are the DIP joints, thumb CMC joints, and the PIP joints [19]. The effect estimates were interpreted as described by Cohen [20]; that is, an SMD of 0.2 to 0.5 was considered a small effect, 0.5 to 0.8 a moderate effect, and at least 0.8 a large effect of the individual rehabilitative intervention. We had planned to conduct a meta-analysis but this was not possible, owing to the heterogeneity of study interventions and outcome measures, which made pooling of data across trials inappropriate (I2 values of 89% to 99%).", "Randomized controlled trials (RCTs), quasi-RCTs, or crossover trials (that is, level of evidence 1b and 2b on Oxford levels of evidence) [10] in English were included for evaluation if they compared some form of rehabilitation with a control for adults whose condition was diagnosed as hand OA. The rehabilitative interventions included those that are used by therapists in clinical practice to treat hand OA, such as exercise, splints, heat therapy, electrotherapy, acupuncture, and massage. The control could be no treatment, usual care, or a placebo intervention. In addition, studies needed to assess at least one of the following outcomes: (a) hand pain including individual joint(s) or overall hand pain, (b) self-reported hand physical function, or (c) other measures of hand impairment, such as grip strength, ROM, or stiffness. Studies evaluating surgical or pharmacological interventions were excluded as were studies reported only in the form of abstracts, conference proceedings, or poster presentations.", "We searched the following electronic databases: MEDLINE (1950 to October 2010), CINAHL (Cumulative Index to Nursing and Allied Health Literature) (1981 to October 2010), ISI Web of Science (1950 to October 2010), SciVerse Scopus (1960 to October 2010), and Physiotherapy Evidence Database (PEDro) (1999). Specific search strategies for each database are provided in Appendix 1 (Additional file 1). We also searched the references of all systematic reviews of hand OA [5-9] and papers from experts in the field.", "We examined the list of titles and abstracts identified by the literature searches for potentially relevant studies. Two reviewers (LY and LK) independently applied the predetermined inclusion criteria to the full text of the identified studies. Any conflicts were resolved through a third independent researcher (KB).", "Two independent raters (LY and LK) assessed the methodological quality of included trials by means of the PEDro scale [11]. Disagreements were resolved by discussion with a third reviewer (KB). The PEDro scale is a validated scale used to assess the quality of randomized controlled rehabilitative studies [12-14] and provides a comprehensive measure of methodological quality [15]. 
It includes 11 criteria to assess the internal and external validity of clinical trials: criterion 1 measures external validity and is not included in the final score, and criteria 2 to 11 measure internal validity. The scale is scored out of 10, with 10 indicating the highest quality and 0 indicating the poorest quality. The items consist of (1) specification of eligibility criteria, (2) random allocation, (3) concealed allocation, (4) similarity at baseline, (5) blinding of subjects, (6) blinding of operators, (7) blinding of assessors, (8) measures of at least one key outcome obtained from at least 85% of subjects initially allocated to groups, (9) intention-to-treat principle, (10) results of between-group comparison, and (11) point measures and measures of variability reported. As it is difficult to blind therapists or participants in most rehabilitation trials, many studies do not meet all criteria; therefore, a trial can be considered to be of relatively high quality if it scores greater than 6 out of 10 on the PEDro scale [16].", "A predefined data extraction form with study design, participant characteristics, diagnosis, affected hand joints, intervention, and duration of interventions was used. To provide a comparison between outcomes reported by the studies, the standardized mean difference (SMD) over time and corresponding 95% confidence interval (CI) were calculated for continuous variables, if possible, immediately after treatment and at the longest follow-up time point by means of the software package RevMan 5 [17]. Although studies may have provided more than one outcome measure under each category of pain, function, strength, ROM, and stiffness, only one measure in each category per study was selected. The measures selected for calculation of the SMD were based on the following hierarchy: (a) for pain, measures of global hand pain took precedence over pain on motion and the Australian/Canadian OA hand index (AUSCAN) pain subscale [18]; (b) for strength, grip strength took precedence over lateral pinch strength and other strength as grip strength is the most commonly used outcome measure in these trials; and (c) for trials measuring outcomes for different hand joints, we extracted data of the joints in the following order: the distal interphalangeal (DIP) joints, the base of the thumb carpometacarpal (CMC) joints, and the proximal interphalangeal (PIP) joints, as the most commonly affected hand joints, in decreasing order, are the DIP joints, thumb CMC joints, and the PIP joints [19]. The effect estimates were interpreted as described by Cohen [20]; that is, an SMD of 0.2 to 0.5 was considered a small effect, 0.5 to 0.8 a moderate effect, and at least 0.8 a large effect of the individual rehabilitative intervention. We had planned to conduct a meta-analysis but this was not possible, owing to the heterogeneity of study interventions and outcome measures, which made pooling of data across trials inappropriate (I2 values of 89% to 99%).", "[SUBTITLE] Study selection [SUBSECTION] A flow diagram, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [21], of the results of the study selection procedure is presented in Figure 1. The search strategy yielded 629 articles. After duplications were deleted, 430 articles remained. Of these, 20 studies met the inclusion criteria [22-41]. After the full-text versions of these papers were reviewed, 10 studies were selected for this systematic review [22,24,26,27,30,31,33-35,39]. 
Reasons for exclusion included lack of a control group (n = 8) [23,25,32,36-38,40,41], language other than English (n = 1) [28], and not RCT or quasi-RCT (n = 1) [29].\nFlow diagram of the results of the study selection procedure, which is in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. OA, osteoarthritis; RCT, randomized controlled trial.\nA flow diagram, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [21], of the results of the study selection procedure is presented in Figure 1. The search strategy yielded 629 articles. After duplications were deleted, 430 articles remained. Of these, 20 studies met the inclusion criteria [22-41]. After the full-text versions of these papers were reviewed, 10 studies were selected for this systematic review [22,24,26,27,30,31,33-35,39]. Reasons for exclusion included lack of a control group (n = 8) [23,25,32,36-38,40,41], language other than English (n = 1) [28], and not RCT or quasi-RCT (n = 1) [29].\nFlow diagram of the results of the study selection procedure, which is in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. OA, osteoarthritis; RCT, randomized controlled trial.\n[SUBTITLE] Study characteristics [SUBSECTION] Details of the 10 eligible studies are presented in Tables 1 and 2. Of these studies, seven were RCTs, two were crossover trials, and one was a quasi-RCT. Five studies involved patients with both CMC joint and interphalangeal (IP) joint OA, one study involved patients with OA of the CMC joint only, while the remainder did not report the specific hand joints involved. Diagnosis of hand OA was based on clinical or radiologic criteria (or both) in five studies and on clinical criteria only in three studies; two studies did not clearly state their method of diagnosing hand OA. The age of participants ranged from 56 to 82 years, which is representative of adults with OA as reported in cohort studies [42,43]. Six different rehabilitation interventions were investigated (Table 2): one study investigated splints [31], two investigated laser therapy [22,24], two investigated heat therapy (using infrared radiation from a lamp or a heated tiled stove) [35,39], three investigated exercise programs [30,33,34], one investigated massage [27], and one investigated acupuncture [26]. Treatment durations ranged from 2 to 52 weeks, with a mean (standard deviation) of 10.9 (15.1) weeks. All studies, except one [39], reported the outcome measures immediately after treatment. Two studies reported a longer-term follow-up, with durations ranging from 2 weeks to 1 year [24,31].\nStudy design and participant characteristics\nCMC, carpometacarpal; F, female; IP, interphalageal; LOE, level of evidence (Oxford); M, male; n, number; NS, not stated; OA, osteoarthritis; RCT, randomized controlled trial; SD, standard deviation.\nDescription of study interventions and outcome measures\nROM refers to active range of motion of carpometacarpal (CMC), metacarpophalangeal (MCP), and interphalangeal (IP) of the thumb and MCP, distal interphalangeal (DIP), and proximal interphalangeal (PIP) joint movements of the 2nd-5th fingers. 
AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; HAQ, Health Assessment Questionnaire; NSAID, nonsteroidal anti-inflammatory drug; OA, osteoarthritis; VAS, visual analogue scale.\nDetails of the 10 eligible studies are presented in Tables 1 and 2. Of these studies, seven were RCTs, two were crossover trials, and one was a quasi-RCT. Five studies involved patients with both CMC joint and interphalangeal (IP) joint OA, one study involved patients with OA of the CMC joint only, while the remainder did not report the specific hand joints involved. Diagnosis of hand OA was based on clinical or radiologic criteria (or both) in five studies and on clinical criteria only in three studies; two studies did not clearly state their method of diagnosing hand OA. The age of participants ranged from 56 to 82 years, which is representative of adults with OA as reported in cohort studies [42,43]. Six different rehabilitation interventions were investigated (Table 2): one study investigated splints [31], two investigated laser therapy [22,24], two investigated heat therapy (using infrared radiation from a lamp or a heated tiled stove) [35,39], three investigated exercise programs [30,33,34], one investigated massage [27], and one investigated acupuncture [26]. Treatment durations ranged from 2 to 52 weeks, with a mean (standard deviation) of 10.9 (15.1) weeks. All studies, except one [39], reported the outcome measures immediately after treatment. Two studies reported a longer-term follow-up, with durations ranging from 2 weeks to 1 year [24,31].\nStudy design and participant characteristics\nCMC, carpometacarpal; F, female; IP, interphalageal; LOE, level of evidence (Oxford); M, male; n, number; NS, not stated; OA, osteoarthritis; RCT, randomized controlled trial; SD, standard deviation.\nDescription of study interventions and outcome measures\nROM refers to active range of motion of carpometacarpal (CMC), metacarpophalangeal (MCP), and interphalangeal (IP) of the thumb and MCP, distal interphalangeal (DIP), and proximal interphalangeal (PIP) joint movements of the 2nd-5th fingers. AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; HAQ, Health Assessment Questionnaire; NSAID, nonsteroidal anti-inflammatory drug; OA, osteoarthritis; VAS, visual analogue scale.\n[SUBTITLE] Methodological quality [SUBSECTION] The methodological quality of included studies (Table 3) ranged from 3 to 10 points out of a maximum of 10 points. Six trials were considered to have relatively high quality [22,24,26,31,34,35] and four trials lower quality [27,30,33,39]. One study, investigating laser therapy [24], met the criteria of blinding therapists and participants. Concealed allocation and the use of an intention-to-treat analysis were other criteria not met in most studies.\nQuality ratings of included studies according to the PEDro methodology scoring system\nITT, intention-to-treat; PEDro, Physiotherapy Evidence Database.\nThe methodological quality of included studies (Table 3) ranged from 3 to 10 points out of a maximum of 10 points. Six trials were considered to have relatively high quality [22,24,26,31,34,35] and four trials lower quality [27,30,33,39]. One study, investigating laser therapy [24], met the criteria of blinding therapists and participants. 
Concealed allocation and the use of an intention-to-treat analysis were other criteria not met in most studies.\nQuality ratings of included studies according to the PEDro methodology scoring system\nITT, intention-to-treat; PEDro, Physiotherapy Evidence Database.\n[SUBTITLE] Results of studies [SUBSECTION] The treatment effects (SMD with 95% CI) of the six different rehabilitative interventions on the outcomes of pain, self-reported physical function, strength, ROM, and self-reported stiffness, immediately after treatment and at the longest follow-up time point, are presented in Table 4. Treatment effects from the higher-quality studies on each of the outcomes are shown in Figures 2, 3, 4, 5 and 6. Most studies focused on interventions to improve pain and strength. Fewer studies investigated the effects on improving function, which is an important goal in clinical practice. Seven studies reported sufficient data to calculate the SMD with its 95% CI. For the remaining three studies, the author or authors were contacted, resulting in additional information from which to calculate the SMD in one of these three studies. The following sections will outline the treatment effects of rehabilitation strategies for each of the included outcomes.\nTreatment effects of rehabilitation interventions on study outcomes\naSignificant treatment effects. ADL, activities of daily living; AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; Dy, dynamometer(s); HAQ, Health Assessment Questionnaire; KI, Kapandji index (thumb opposition); NA, standardized mean difference not estimable; NS, measurement tool not stated; PEDro, Physiotherapy Evidence Database; S, sphygmomanometer; SMD, standardized mean difference; V, vigorimeter; VAS, visual analogue scale; VITAS, visual analogue scale anchored with five faces.\nTreatment effects of the higher-quality studies on pain. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on function. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on strength. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on range of motion (ROM). CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on stiffness. CI, confidence interval; SMD, standardized mean difference.\nThe treatment effects (SMD with 95% CI) of the six different rehabilitative interventions on the outcomes of pain, self-reported physical function, strength, ROM, and self-reported stiffness, immediately after treatment and at the longest follow-up time point, are presented in Table 4. Treatment effects from the higher-quality studies on each of the outcomes are shown in Figures 2, 3, 4, 5 and 6. Most studies focused on interventions to improve pain and strength. Fewer studies investigated the effects on improving function, which is an important goal in clinical practice. Seven studies reported sufficient data to calculate the SMD with its 95% CI. For the remaining three studies, the author or authors were contacted, resulting in additional information from which to calculate the SMD in one of these three studies. 
The following sections will outline the treatment effects of rehabilitation strategies for each of the included outcomes.\nTreatment effects of rehabilitation interventions on study outcomes\naSignificant treatment effects. ADL, activities of daily living; AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; Dy, dynamometer(s); HAQ, Health Assessment Questionnaire; KI, Kapandji index (thumb opposition); NA, standardized mean difference not estimable; NS, measurement tool not stated; PEDro, Physiotherapy Evidence Database; S, sphygmomanometer; SMD, standardized mean difference; V, vigorimeter; VAS, visual analogue scale; VITAS, visual analogue scale anchored with five faces.\nTreatment effects of the higher-quality studies on pain. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on function. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on strength. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on range of motion (ROM). CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on stiffness. CI, confidence interval; SMD, standardized mean difference.\n[SUBTITLE] Pain [SUBSECTION] The effects of all six rehabilitation interventions on pain were reported in eight of the 10 studies (Table 4). From the eight studies, six were graded as higher quality (greater than 6 on the PEDro scale). Of these higher-quality studies, only one study investigating long-term splint use was shown to have a positive treatment effect on improving pain when the visual analogue scale was used to measure outcome (Figure 2). In this study, Rannou and colleagues [31] found that 12 months of continued use of a night splint resulted in large improvements in pain (SMD = 4.24, 95% CI 3.52, 4.97). One lower-quality study demonstrated a smaller treatment effect of massage on improving pain (SMD = 1.18, 95% CI 0.26, 2.10) [29]. Although we could not calculate the SMD, the authors of the one trial of acupuncture reported no short-term pain-relieving effects (P = 1.0) [26].\nThe effects of all six rehabilitation interventions on pain were reported in eight of the 10 studies (Table 4). From the eight studies, six were graded as higher quality (greater than 6 on the PEDro scale). Of these higher-quality studies, only one study investigating long-term splint use was shown to have a positive treatment effect on improving pain when the visual analogue scale was used to measure outcome (Figure 2). In this study, Rannou and colleagues [31] found that 12 months of continued use of a night splint resulted in large improvements in pain (SMD = 4.24, 95% CI 3.52, 4.97). One lower-quality study demonstrated a smaller treatment effect of massage on improving pain (SMD = 1.18, 95% CI 0.26, 2.10) [29]. Although we could not calculate the SMD, the authors of the one trial of acupuncture reported no short-term pain-relieving effects (P = 1.0) [26].\n[SUBTITLE] Self-reported hand function [SUBSECTION] The effects of all interventions, except massage, were investigated on hand function in six of the 10 studies (Table 4). From the six studies, five were graded as higher-quality studies. Of these higher-quality studies, a positive treatment effect could be calculated from one study. 
In this study [31], use of a splint resulted in a large improvement in hand function in both the short and long term as measured by the Cochin hand functional scale (SMD = 1.10 and 3.73, respectively) (Figure 3). Of the two studies from which we were unable to calculate SMD, a significantly higher proportion of patients reported improved function with a 3-month hand ROM exercise program and education about joint protection in comparison with those who received general OA education and use of non-slip matting to open jars (P < 0.05) [34]. However, no functional improvement was shown in another exercise trial that included both ROM and strengthening exercises [33]. Laser therapy [24] and heat treatment [35] had no effect on hand function as measured by the AUSCAN. Similarly, the trial on acupuncture reported no effect on function [26].\nThe effects of all interventions, except massage, were investigated on hand function in six of the 10 studies (Table 4). From the six studies, five were graded as higher-quality studies. Of these higher-quality studies, a positive treatment effect could be calculated from one study. In this study [31], use of a splint resulted in a large improvement in hand function in both the short and long term as measured by the Cochin hand functional scale (SMD = 1.10 and 3.73, respectively) (Figure 3). Of the two studies from which we were unable to calculate SMD, a significantly higher proportion of patients reported improved function with a 3-month hand ROM exercise program and education about joint protection in comparison with those who received general OA education and use of non-slip matting to open jars (P < 0.05) [34]. However, no functional improvement was shown in another exercise trial that included both ROM and strengthening exercises [33]. Laser therapy [24] and heat treatment [35] had no effect on hand function as measured by the AUSCAN. Similarly, the trial on acupuncture reported no effect on function [26].\n[SUBTITLE] Strength [SUBSECTION] The effects of all interventions on hand strength were investigated in all 10 trials (Table 4). Six of these 10 studies were graded as higher-quality studies, and positive treatment effects could be calculated from two of the six studies (Figure 4). Improvements in hand strength, measured by means of an electronic dynamometer, were found in both the short and long term with the use of splinting in one study (SMD = 0.9 and 1.2, respectively) [31]. A large positive treatment effect (SMD = 4.5), measured by means of a vigorimeter, was found with the use of a home ROM exercise program [34]. Effect sizes could not be calculated in three studies [24,26,39]. Of these studies, one study [24] reported significant improvement in grip strength (P = 0.041) when measured with a dynamometer following laser therapy, one trial [39] did not measure between-group strength difference, and the other trial [26] drew no conclusion on the effect of acupuncture on hand strength.\nThe effects of all interventions on hand strength were investigated in all 10 trials (Table 4). Six of these 10 studies were graded as higher-quality studies, and positive treatment effects could be calculated from two of the six studies (Figure 4). Improvements in hand strength, measured by means of an electronic dynamometer, were found in both the short and long term with the use of splinting in one study (SMD = 0.9 and 1.2, respectively) [31]. 
A large positive treatment effect (SMD = 4.5), measured by means of a vigorimeter, was found with the use of a home ROM exercise program [34]. Effect sizes could not be calculated in three studies [24,26,39]. Of these studies, one study [24] reported significant improvement in grip strength (P = 0.041) when measured with a dynamometer following laser therapy, one trial [39] did not measure between-group strength difference, and the other trial [26] drew no conclusion on the effect of acupuncture on hand strength.\n[SUBTITLE] Range of motion [SUBSECTION] The effects of three interventions (splints, laser, and exercise) on ROM were investigated by four studies (Table 4). Of these, three were graded as higher-quality studies, and treatment effects could be calculated from one of the three studies. A small negative effect (SMD = -0.4) in the short term and a large positive effect (SMD = 3.3) in the long term were found on hand ROM in one trial of splinting [31] (Figure 5). Of the two studies from which we were unable to calculate SMD, a significant improvement in ROM was reported for hand-strengthening exercises [30] whereas no overall improvement was reported for laser therapy [22,24], except CMC opposition (P = 0.011) [24].\nThe effects of three interventions (splints, laser, and exercise) on ROM were investigated by four studies (Table 4). Of these, three were graded as higher-quality studies, and treatment effects could be calculated from one of the three studies. A small negative effect (SMD = -0.4) in the short term and a large positive effect (SMD = 3.3) in the long term were found on hand ROM in one trial of splinting [31] (Figure 5). Of the two studies from which we were unable to calculate SMD, a significant improvement in ROM was reported for hand-strengthening exercises [30] whereas no overall improvement was reported for laser therapy [22,24], except CMC opposition (P = 0.011) [24].\n[SUBTITLE] Stiffness [SUBSECTION] The effects of three interventions (laser, heat, and exercise) on self-reported stiffness using the AUSCAN scale were investigated in three studies, two of which were graded as higher-quality studies (Table 4). None of the interventions had positive treatment effects on hand joint stiffness (Figure 6). However, as stiffness was measured with only one item from the 15-item AUSCAN scale, it is possible that this tool did not capture the full dimension of stiffness.\nThe effects of three interventions (laser, heat, and exercise) on self-reported stiffness using the AUSCAN scale were investigated in three studies, two of which were graded as higher-quality studies (Table 4). None of the interventions had positive treatment effects on hand joint stiffness (Figure 6). 
However, as stiffness was measured with only one item from the 15-item AUSCAN scale, it is possible that this tool did not capture the full dimension of stiffness.\n[SUBTITLE] Synthesis of results [SUBSECTION] A summary of current available evidence from higher-quality studies with positive treatment effects of rehabilitative interventions on pain, function, and physical impairments is provided in Table 5.\nSummary of the higher-quality evidence for treating impairments and function in individuals with hand osteoarthritis\nCHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; CMC, carpometacarpal; Dy, dynamometer(s); G, goniometer (s); HAQ, Health Assessment Questionnaire; IP, interphalangeal; KI, Kapandji index (thumb opposition); LOE, level of evidence (Oxford); NA, standardized mean difference not estimable; PEDro, Physiotherapy Evidence Database; SMD, standardized mean difference; V, vigorimeter VAS, visual analogue scale.\nA summary of current available evidence from higher-quality studies with positive treatment effects of rehabilitative interventions on pain, function, and physical impairments is provided in Table 5.\nSummary of the higher-quality evidence for treating impairments and function in individuals with hand osteoarthritis\nCHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; CMC, carpometacarpal; Dy, dynamometer(s); G, goniometer (s); HAQ, Health Assessment Questionnaire; IP, interphalangeal; KI, Kapandji index (thumb opposition); LOE, level of evidence (Oxford); NA, standardized mean difference not estimable; PEDro, Physiotherapy Evidence Database; SMD, standardized mean difference; V, vigorimeter VAS, visual analogue scale.", "A flow diagram, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [21], of the results of the study selection procedure is presented in Figure 1. The search strategy yielded 629 articles. After duplications were deleted, 430 articles remained. Of these, 20 studies met the inclusion criteria [22-41]. After the full-text versions of these papers were reviewed, 10 studies were selected for this systematic review [22,24,26,27,30,31,33-35,39]. Reasons for exclusion included lack of a control group (n = 8) [23,25,32,36-38,40,41], language other than English (n = 1) [28], and not RCT or quasi-RCT (n = 1) [29].\nFlow diagram of the results of the study selection procedure, which is in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. OA, osteoarthritis; RCT, randomized controlled trial.", "Details of the 10 eligible studies are presented in Tables 1 and 2. Of these studies, seven were RCTs, two were crossover trials, and one was a quasi-RCT. Five studies involved patients with both CMC joint and interphalangeal (IP) joint OA, one study involved patients with OA of the CMC joint only, while the remainder did not report the specific hand joints involved. Diagnosis of hand OA was based on clinical or radiologic criteria (or both) in five studies and on clinical criteria only in three studies; two studies did not clearly state their method of diagnosing hand OA. The age of participants ranged from 56 to 82 years, which is representative of adults with OA as reported in cohort studies [42,43]. 
Six different rehabilitation interventions were investigated (Table 2): one study investigated splints [31], two investigated laser therapy [22,24], two investigated heat therapy (using infrared radiation from a lamp or a heated tiled stove) [35,39], three investigated exercise programs [30,33,34], one investigated massage [27], and one investigated acupuncture [26]. Treatment durations ranged from 2 to 52 weeks, with a mean (standard deviation) of 10.9 (15.1) weeks. All studies, except one [39], reported the outcome measures immediately after treatment. Two studies reported a longer-term follow-up, with durations ranging from 2 weeks to 1 year [24,31].\nStudy design and participant characteristics\nCMC, carpometacarpal; F, female; IP, interphalageal; LOE, level of evidence (Oxford); M, male; n, number; NS, not stated; OA, osteoarthritis; RCT, randomized controlled trial; SD, standard deviation.\nDescription of study interventions and outcome measures\nROM refers to active range of motion of carpometacarpal (CMC), metacarpophalangeal (MCP), and interphalangeal (IP) of the thumb and MCP, distal interphalangeal (DIP), and proximal interphalangeal (PIP) joint movements of the 2nd-5th fingers. AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; HAQ, Health Assessment Questionnaire; NSAID, nonsteroidal anti-inflammatory drug; OA, osteoarthritis; VAS, visual analogue scale.", "The methodological quality of included studies (Table 3) ranged from 3 to 10 points out of a maximum of 10 points. Six trials were considered to have relatively high quality [22,24,26,31,34,35] and four trials lower quality [27,30,33,39]. One study, investigating laser therapy [24], met the criteria of blinding therapists and participants. Concealed allocation and the use of an intention-to-treat analysis were other criteria not met in most studies.\nQuality ratings of included studies according to the PEDro methodology scoring system\nITT, intention-to-treat; PEDro, Physiotherapy Evidence Database.", "The treatment effects (SMD with 95% CI) of the six different rehabilitative interventions on the outcomes of pain, self-reported physical function, strength, ROM, and self-reported stiffness, immediately after treatment and at the longest follow-up time point, are presented in Table 4. Treatment effects from the higher-quality studies on each of the outcomes are shown in Figures 2, 3, 4, 5 and 6. Most studies focused on interventions to improve pain and strength. Fewer studies investigated the effects on improving function, which is an important goal in clinical practice. Seven studies reported sufficient data to calculate the SMD with its 95% CI. For the remaining three studies, the author or authors were contacted, resulting in additional information from which to calculate the SMD in one of these three studies. The following sections will outline the treatment effects of rehabilitation strategies for each of the included outcomes.\nTreatment effects of rehabilitation interventions on study outcomes\naSignificant treatment effects. 
ADL, activities of daily living; AUSCAN, Australian/Canadian osteoarthritis hand index; CHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; Dy, dynamometer(s); HAQ, Health Assessment Questionnaire; KI, Kapandji index (thumb opposition); NA, standardized mean difference not estimable; NS, measurement tool not stated; PEDro, Physiotherapy Evidence Database; S, sphygmomanometer; SMD, standardized mean difference; V, vigorimeter; VAS, visual analogue scale; VITAS, visual analogue scale anchored with five faces.\nTreatment effects of the higher-quality studies on pain. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on function. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on strength. CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on range of motion (ROM). CI, confidence interval; SMD, standardized mean difference.\nTreatment effects of the higher-quality studies on stiffness. CI, confidence interval; SMD, standardized mean difference.", "The effects of all six rehabilitation interventions on pain were reported in eight of the 10 studies (Table 4). From the eight studies, six were graded as higher quality (greater than 6 on the PEDro scale). Of these higher-quality studies, only one study investigating long-term splint use was shown to have a positive treatment effect on improving pain when the visual analogue scale was used to measure outcome (Figure 2). In this study, Rannou and colleagues [31] found that 12 months of continued use of a night splint resulted in large improvements in pain (SMD = 4.24, 95% CI 3.52, 4.97). One lower-quality study demonstrated a smaller treatment effect of massage on improving pain (SMD = 1.18, 95% CI 0.26, 2.10) [29]. Although we could not calculate the SMD, the authors of the one trial of acupuncture reported no short-term pain-relieving effects (P = 1.0) [26].", "The effects of all interventions, except massage, were investigated on hand function in six of the 10 studies (Table 4). From the six studies, five were graded as higher-quality studies. Of these higher-quality studies, a positive treatment effect could be calculated from one study. In this study [31], use of a splint resulted in a large improvement in hand function in both the short and long term as measured by the Cochin hand functional scale (SMD = 1.10 and 3.73, respectively) (Figure 3). Of the two studies from which we were unable to calculate SMD, a significantly higher proportion of patients reported improved function with a 3-month hand ROM exercise program and education about joint protection in comparison with those who received general OA education and use of non-slip matting to open jars (P < 0.05) [34]. However, no functional improvement was shown in another exercise trial that included both ROM and strengthening exercises [33]. Laser therapy [24] and heat treatment [35] had no effect on hand function as measured by the AUSCAN. Similarly, the trial on acupuncture reported no effect on function [26].", "The effects of all interventions on hand strength were investigated in all 10 trials (Table 4). Six of these 10 studies were graded as higher-quality studies, and positive treatment effects could be calculated from two of the six studies (Figure 4). 
Improvements in hand strength, measured by means of an electronic dynamometer, were found in both the short and long term with the use of splinting in one study (SMD = 0.9 and 1.2, respectively) [31]. A large positive treatment effect (SMD = 4.5), measured by means of a vigorimeter, was found with the use of a home ROM exercise program [34]. Effect sizes could not be calculated in three studies [24,26,39]. Of these studies, one study [24] reported significant improvement in grip strength (P = 0.041) when measured with a dynamometer following laser therapy, one trial [39] did not measure between-group strength difference, and the other trial [26] drew no conclusion on the effect of acupuncture on hand strength.", "The effects of three interventions (splints, laser, and exercise) on ROM were investigated by four studies (Table 4). Of these, three were graded as higher-quality studies, and treatment effects could be calculated from one of the three studies. A small negative effect (SMD = -0.4) in the short term and a large positive effect (SMD = 3.3) in the long term were found on hand ROM in one trial of splinting [31] (Figure 5). Of the two studies from which we were unable to calculate SMD, a significant improvement in ROM was reported for hand-strengthening exercises [30] whereas no overall improvement was reported for laser therapy [22,24], except CMC opposition (P = 0.011) [24].", "The effects of three interventions (laser, heat, and exercise) on self-reported stiffness using the AUSCAN scale were investigated in three studies, two of which were graded as higher-quality studies (Table 4). None of the interventions had positive treatment effects on hand joint stiffness (Figure 6). However, as stiffness was measured with only one item from the 15-item AUSCAN scale, it is possible that this tool did not capture the full dimension of stiffness.", "A summary of current available evidence from higher-quality studies with positive treatment effects of rehabilitative interventions on pain, function, and physical impairments is provided in Table 5.\nSummary of the higher-quality evidence for treating impairments and function in individuals with hand osteoarthritis\nCHFS, Cochin hand functional scale; CI, confidence intervals for continuous variables; CMC, carpometacarpal; Dy, dynamometer(s); G, goniometer (s); HAQ, Health Assessment Questionnaire; IP, interphalangeal; KI, Kapandji index (thumb opposition); LOE, level of evidence (Oxford); NA, standardized mean difference not estimable; PEDro, Physiotherapy Evidence Database; SMD, standardized mean difference; V, vigorimeter VAS, visual analogue scale.", "This systematic review revealed very few high-quality clinical trials, particularly given the range of rehabilitative interventions that are available to clinicians for the management of hand OA and that are recommended by international bodies. Given the limited amount and varying quality of evidence, firm conclusions about the benefits of various rehabilitation interventions on specific treatment goals cannot be fully drawn from the results of this review. This review does, however, establish that there is emerging high-quality evidence to support the use of common rehabilitation interventions to treat individuals with hand OA. 
It also suggests which interventions most effectively target specific treatment goals for hand OA.\n[SUBTITLE] Pain relief and function [SUBSECTION] Pain relief has been reported as the primary treatment goal for hand OA because of its direct correlation with increased hand function [44]. In this review, the use of long-term night splinting was found to be the only effective intervention for both pain reduction and improved physical function [24]. This relative paucity of effect on pain is somewhat surprising given that RCTs for knee and hip OA have reported positive effects on pain from a variety of rehabilitative interventions [45]. However, this discrepancy may reflect the different disease characteristics, such as different risk factors for development and progression, biomechanical features, and physical impairments of hand OA when compared with lower-extremity OA.\nNight splinting of the thumb has particularly been recommended for OA of the hand [46] as CMC joint OA has a greater impact on pain and dysfunction than IP OA does [47]. A 7-year prospective study [48] showed that thumb splinting improved hand function and, importantly, reduced the need for surgery. EULAR [49] also recommends using splints to prevent/correct lateral angulation and flexion deformity at the thumb. Our review found evidence from a higher-quality adequately powered RCT that a custom-made neoprene night splint led to significant improvements compared with usual care for 12 months, although it did not improve pain or ROM in the short term (1 month) [31]. In the trial by Rannou and colleagues [31], participants were instructed to use the night splint for 12 months. Adherence was good: 86% wore the splint 5 to 7 nights a week [31].\nEvidence from this review did not support the use of laser therapy, heat treatment, exercise, or acupuncture for reducing both pain and improving function in hand OA. However, Stamm and colleagues [34] reported a higher proportion of patients with an at least 10% increase in global hand function using exercise. This was the only exercise study to report an improvement in hand function; however, as the exercise was combined with joint protection education, it is difficult to truly isolate the independent effects of exercise [34].\nLow-level laser therapy has been found to regulate chondrocytic proliferation and stimulate collagen synthesis in animals [50,51]. It is thought to have analgesic effects as well as biomodulatory effects of microcirculation [52]. Despite these physiological effects, the two high-quality, well-powered RCTs in our review reported no significant positive clinical effects of laser therapy delivered thrice weekly for 3 to 6 weeks on pain and hand function. This contrasts with findings for laser therapy in the treatment of knee OA, for which there is moderate-quality evidence of beneficial effects, including pain reduction and functional improvement [53,54]. It may be that different devices, method and site of application, wavelength, treatment regime, and measurement tools influence the result.\nMassage therapy was shown to be effective in reducing pain in patients with hand OA; however, owing to the lower quality (3 on the PEDro scale) of the one study on massage [27], it is hard to draw definitive conclusions about massage therapy. The single trial of acupuncture did not support its use for hand OA for pain and function, but no detail was provided about the treatment dosage, including the acupuncture points, used. 
This lack of effect of acupuncture is consistent with findings of a recent systematic review of acupuncture for all OA; the review showed that, while there were statistically significant benefits in sham-controlled trials, the benefits were small, did not meet predefined thresholds for clinical relevance, and were possibly due at least partially to placebo effects from incomplete blinding [55].\nPain relief has been reported as the primary treatment goal for hand OA because of its direct correlation with increased hand function [44]. In this review, the use of long-term night splinting was found to be the only effective intervention for both pain reduction and improved physical function [24]. This relative paucity of effect on pain is somewhat surprising given that RCTs for knee and hip OA have reported positive effects on pain from a variety of rehabilitative interventions [45]. However, this discrepancy may reflect the different disease characteristics, such as different risk factors for development and progression, biomechanical features, and physical impairments of hand OA when compared with lower-extremity OA.\nNight splinting of the thumb has particularly been recommended for OA of the hand [46] as CMC joint OA has a greater impact on pain and dysfunction than IP OA does [47]. A 7-year prospective study [48] showed that thumb splinting improved hand function and, importantly, reduced the need for surgery. EULAR [49] also recommends using splints to prevent/correct lateral angulation and flexion deformity at the thumb. Our review found evidence from a higher-quality adequately powered RCT that a custom-made neoprene night splint led to significant improvements compared with usual care for 12 months, although it did not improve pain or ROM in the short term (1 month) [31]. In the trial by Rannou and colleagues [31], participants were instructed to use the night splint for 12 months. Adherence was good: 86% wore the splint 5 to 7 nights a week [31].\nEvidence from this review did not support the use of laser therapy, heat treatment, exercise, or acupuncture for reducing both pain and improving function in hand OA. However, Stamm and colleagues [34] reported a higher proportion of patients with an at least 10% increase in global hand function using exercise. This was the only exercise study to report an improvement in hand function; however, as the exercise was combined with joint protection education, it is difficult to truly isolate the independent effects of exercise [34].\nLow-level laser therapy has been found to regulate chondrocytic proliferation and stimulate collagen synthesis in animals [50,51]. It is thought to have analgesic effects as well as biomodulatory effects of microcirculation [52]. Despite these physiological effects, the two high-quality, well-powered RCTs in our review reported no significant positive clinical effects of laser therapy delivered thrice weekly for 3 to 6 weeks on pain and hand function. This contrasts with findings for laser therapy in the treatment of knee OA, for which there is moderate-quality evidence of beneficial effects, including pain reduction and functional improvement [53,54]. 
It may be that different devices, method and site of application, wavelength, treatment regime, and measurement tools influence the result.\nMassage therapy was shown to be effective in reducing pain in patients with hand OA; however, owing to the lower quality (3 on the PEDro scale) of the one study on massage [27], it is hard to draw definitive conclusions about massage therapy. The single trial of acupuncture did not support its use for hand OA for pain and function, but no detail was provided about the treatment dosage, including the acupuncture points, used. This lack of effect of acupuncture is consistent with findings of a recent systematic review of acupuncture for all OA; the review showed that, while there were statistically significant benefits in sham-controlled trials, the benefits were small, did not meet predefined thresholds for clinical relevance, and were possibly due at least partially to placebo effects from incomplete blinding [55].\n[SUBTITLE] Strength, range of motion, and stiffness [SUBSECTION] Improvements of hand strength and ROM and reduction of stiffness are also common goals of rehabilitation on hand OA [3]. The use of night splints in both the short term and long term was shown to have a treatment effect on strength and ROM but not on stiffness. Interestingly, the use of night splinting produced a small negative treatment effect (SMD = -0.4) in the short term but a large positive effect (SMD = 3.3) in the long term on ROM in one study [24]. This finding is important knowledge for therapists when providing advice on the duration of night splint use when the goal is to improve ROM.\nExercise is considered a mainstay of treatment for OA and yet, in this review, only three RCTs [30,33,34] of lower quality investigated the effects of various exercise programs to improve strength, ROM, or stiffness. Surprisingly, the exercise programs that incorporated strengthening exercises failed to find strength gains yet found an effect on ROM [30,33], while a large significant improvement in grip strength was found with a program that involved ROM exercises [34]. These programs all differed in their exercise content and dosage. Precise details on the intensity of the exercise program were limited. It is possible that the intensity of the strengthening exercises was insufficient for change to occur, especially given that increases in strength were not evident. Further studies that address the optimal intensity of strengthening exercises for hand OA are required.\nNo studies found significant positive effects of splints, laser, heat, or exercise on stiffness. Further trials using larger sample sizes and a more rigorous methodology are needed to evaluate different forms of exercise on improving strength and ROM and reducing stiffness in patients with hand OA. Constraining outcome measures to only self-reported methods, such as using the 1-item AUSCAN stiffness subscale to measure stiffness, may reduce the ability to capture the full dimension of the impairment [56]. The additional use of performance-based outcome measures that can complement self-reported measures needs to be considered when assessing outcomes, such as stiffness, to assist in capturing this extent of impairment and function in hand OA.\nThe only other rehabilitation intervention reported to improve strength or ROM was laser therapy [24]. This high-quality, well-powered RCT found a benefit of laser therapy delivered thrice weekly for 3 to 6 weeks on grip strength and CMC opposition. 
Other treatment modalities investigating the effect of heat therapy for patients with hand OA did not find improvements in strength or stiffness when using either the heat provided by a tiled stove [35] or infrared radiation [39]. No studies on the application of wax or hot packs were included in this review.\nImprovements of hand strength and ROM and reduction of stiffness are also common goals of rehabilitation on hand OA [3]. The use of night splints in both the short term and long term was shown to have a treatment effect on strength and ROM but not on stiffness. Interestingly, the use of night splinting produced a small negative treatment effect (SMD = -0.4) in the short term but a large positive effect (SMD = 3.3) in the long term on ROM in one study [24]. This finding is important knowledge for therapists when providing advice on the duration of night splint use when the goal is to improve ROM.\nExercise is considered a mainstay of treatment for OA and yet, in this review, only three RCTs [30,33,34] of lower quality investigated the effects of various exercise programs to improve strength, ROM, or stiffness. Surprisingly, the exercise programs that incorporated strengthening exercises failed to find strength gains yet found an effect on ROM [30,33], while a large significant improvement in grip strength was found with a program that involved ROM exercises [34]. These programs all differed in their exercise content and dosage. Precise details on the intensity of the exercise program were limited. It is possible that the intensity of the strengthening exercises was insufficient for change to occur, especially given that increases in strength were not evident. Further studies that address the optimal intensity of strengthening exercises for hand OA are required.\nNo studies found significant positive effects of splints, laser, heat, or exercise on stiffness. Further trials using larger sample sizes and a more rigorous methodology are needed to evaluate different forms of exercise on improving strength and ROM and reducing stiffness in patients with hand OA. Constraining outcome measures to only self-reported methods, such as using the 1-item AUSCAN stiffness subscale to measure stiffness, may reduce the ability to capture the full dimension of the impairment [56]. The additional use of performance-based outcome measures that can complement self-reported measures needs to be considered when assessing outcomes, such as stiffness, to assist in capturing this extent of impairment and function in hand OA.\nThe only other rehabilitation intervention reported to improve strength or ROM was laser therapy [24]. This high-quality, well-powered RCT found a benefit of laser therapy delivered thrice weekly for 3 to 6 weeks on grip strength and CMC opposition. Other treatment modalities investigating the effect of heat therapy for patients with hand OA did not find improvements in strength or stiffness when using either the heat provided by a tiled stove [35] or infrared radiation [39]. No studies on the application of wax or hot packs were included in this review.\n[SUBTITLE] Other treatment modalities [SUBSECTION] No studies fulfilling our inclusion criteria were found for ultrasound or transcutaneous electrical nerve stimulation (TENS). Ultrasound is recommended by EULAR for the management of OA, yet there is evidence from studies of knee OA that ultrasound offers no benefit over placebo [53]. 
Given that hand joints are more superficial than the knee joint, ultrasound may have different effects in hand OA and is worthy of investigation. Likewise, the effect of TENS for the management of hand OA should be investigated given that some [53,54] but not all [57] systematic reviews in knee OA show that TENS has significant pain-relieving benefits. One study involving TENS, excluded from our review but included in that of Towheed [8], found that use of a glove electrode was, overall, more effective than use of a carbon electrode when using TENS in individuals with hand OA. Other rehabilitative interventions we excluded from our review involved a yoga program [29], which was reported to be effective in improving pain, tenderness, and ROM, and leech therapy, which was more effective than treatment with the drug diclofenac [58].\nThere are several limitations to this review. First, the statistical power of most studies was rather low. To detect a medium effect size of 0.5 (with α = 0.5 and power at 80%), the sample size per group needs to be at least 50 [20]. This is particularly relevant given that many studies reported a lack of treatment effect on the measured outcomes, and this lack of effect may simply reflect inadequate statistical power. Furthermore, despite contacting authors requesting additional information where required, we were unable to calculate effect sizes for two trials included in the review. Second, we did not confine our studies to RCTs, given the likely lack of studies in this area, and instead included one quasi-RCT [39] and two crossover trials [33,35] on the assumption that hand OA is a non-curable condition and that carry-over of treatment effect across periods may be less likely. The findings of these studies need to be interpreted cautiously given these study designs. Third, the methodological assessment revealed some threats to the validity of the included trials, with around half the studies rated as being of lower quality. A summary of the evidence was therefore made with higher-quality studies graded by means of the PEDro system. Fourth, there was variable use of outcome measures across the trials, making it difficult to compare and pool results across studies.\nNo studies fulfilling our inclusion criteria were found for ultrasound or transcutaneous electrical nerve stimulation (TENS). Ultrasound is recommended by EULAR for the management of OA, yet there is evidence from studies of knee OA that ultrasound offers no benefit over placebo [53]. Given that hand joints are more superficial than the knee joint, ultrasound may have different effects in hand OA and is worthy of investigation. Likewise, the effect of TENS for the management of hand OA should be investigated given that some [53,54] but not all [57] systematic reviews in knee OA show that TENS has significant pain-relieving benefits. One study involving TENS, excluded from our review but included in that of Towheed [8], found that use of a glove electrode was, overall, more effective than use of a carbon electrode when using TENS in individuals with hand OA. Other rehabilitative interventions we excluded from our review involved a yoga program [29], which was reported to be effective in improving pain, tenderness, and ROM, and leech therapy, which was more effective than treatment with the drug diclofenac [58].\nThere are several limitations to this review. First, the statistical power of most studies was rather low. 
To detect a medium effect size of 0.5 (with α = 0.5 and power at 80%), the sample size per group needs to be at least 50 [20]. This is particularly relevant given that many studies reported a lack of treatment effect on the measured outcomes, and this lack of effect may simply reflect inadequate statistical power. Furthermore, despite contacting authors requesting additional information where required, we were unable to calculate effect sizes for two trials included in the review. Second, we did not confine our studies to RCTs, given the likely lack of studies in this area, and instead included one quasi-RCT [39] and two crossover trials [33,35] on the assumption that hand OA is a non-curable condition and that carry-over of treatment effect across periods may be less likely. The findings of these studies need to be interpreted cautiously given these study designs. Third, the methodological assessment revealed some threats to the validity of the included trials, with around half the studies rated as being of lower quality. A summary of the evidence was therefore made with higher-quality studies graded by means of the PEDro system. Fourth, there was variable use of outcome measures across the trials, making it difficult to compare and pool results across studies.", "Pain relief has been reported as the primary treatment goal for hand OA because of its direct correlation with increased hand function [44]. In this review, the use of long-term night splinting was found to be the only effective intervention for both pain reduction and improved physical function [24]. This relative paucity of effect on pain is somewhat surprising given that RCTs for knee and hip OA have reported positive effects on pain from a variety of rehabilitative interventions [45]. However, this discrepancy may reflect the different disease characteristics, such as different risk factors for development and progression, biomechanical features, and physical impairments of hand OA when compared with lower-extremity OA.\nNight splinting of the thumb has particularly been recommended for OA of the hand [46] as CMC joint OA has a greater impact on pain and dysfunction than IP OA does [47]. A 7-year prospective study [48] showed that thumb splinting improved hand function and, importantly, reduced the need for surgery. EULAR [49] also recommends using splints to prevent/correct lateral angulation and flexion deformity at the thumb. Our review found evidence from a higher-quality adequately powered RCT that a custom-made neoprene night splint led to significant improvements compared with usual care for 12 months, although it did not improve pain or ROM in the short term (1 month) [31]. In the trial by Rannou and colleagues [31], participants were instructed to use the night splint for 12 months. Adherence was good: 86% wore the splint 5 to 7 nights a week [31].\nEvidence from this review did not support the use of laser therapy, heat treatment, exercise, or acupuncture for reducing both pain and improving function in hand OA. However, Stamm and colleagues [34] reported a higher proportion of patients with an at least 10% increase in global hand function using exercise. This was the only exercise study to report an improvement in hand function; however, as the exercise was combined with joint protection education, it is difficult to truly isolate the independent effects of exercise [34].\nLow-level laser therapy has been found to regulate chondrocytic proliferation and stimulate collagen synthesis in animals [50,51]. 
It is thought to have analgesic effects as well as biomodulatory effects of microcirculation [52]. Despite these physiological effects, the two high-quality, well-powered RCTs in our review reported no significant positive clinical effects of laser therapy delivered thrice weekly for 3 to 6 weeks on pain and hand function. This contrasts with findings for laser therapy in the treatment of knee OA, for which there is moderate-quality evidence of beneficial effects, including pain reduction and functional improvement [53,54]. It may be that different devices, method and site of application, wavelength, treatment regime, and measurement tools influence the result.\nMassage therapy was shown to be effective in reducing pain in patients with hand OA; however, owing to the lower quality (3 on the PEDro scale) of the one study on massage [27], it is hard to draw definitive conclusions about massage therapy. The single trial of acupuncture did not support its use for hand OA for pain and function, but no detail was provided about the treatment dosage, including the acupuncture points, used. This lack of effect of acupuncture is consistent with findings of a recent systematic review of acupuncture for all OA; the review showed that, while there were statistically significant benefits in sham-controlled trials, the benefits were small, did not meet predefined thresholds for clinical relevance, and were possibly due at least partially to placebo effects from incomplete blinding [55].", "Improvements of hand strength and ROM and reduction of stiffness are also common goals of rehabilitation on hand OA [3]. The use of night splints in both the short term and long term was shown to have a treatment effect on strength and ROM but not on stiffness. Interestingly, the use of night splinting produced a small negative treatment effect (SMD = -0.4) in the short term but a large positive effect (SMD = 3.3) in the long term on ROM in one study [24]. This finding is important knowledge for therapists when providing advice on the duration of night splint use when the goal is to improve ROM.\nExercise is considered a mainstay of treatment for OA and yet, in this review, only three RCTs [30,33,34] of lower quality investigated the effects of various exercise programs to improve strength, ROM, or stiffness. Surprisingly, the exercise programs that incorporated strengthening exercises failed to find strength gains yet found an effect on ROM [30,33], while a large significant improvement in grip strength was found with a program that involved ROM exercises [34]. These programs all differed in their exercise content and dosage. Precise details on the intensity of the exercise program were limited. It is possible that the intensity of the strengthening exercises was insufficient for change to occur, especially given that increases in strength were not evident. Further studies that address the optimal intensity of strengthening exercises for hand OA are required.\nNo studies found significant positive effects of splints, laser, heat, or exercise on stiffness. Further trials using larger sample sizes and a more rigorous methodology are needed to evaluate different forms of exercise on improving strength and ROM and reducing stiffness in patients with hand OA. Constraining outcome measures to only self-reported methods, such as using the 1-item AUSCAN stiffness subscale to measure stiffness, may reduce the ability to capture the full dimension of the impairment [56]. 
The additional use of performance-based outcome measures that can complement self-reported measures needs to be considered when assessing outcomes, such as stiffness, to assist in capturing this extent of impairment and function in hand OA.\nThe only other rehabilitation intervention reported to improve strength or ROM was laser therapy [24]. This high-quality, well-powered RCT found a benefit of laser therapy delivered thrice weekly for 3 to 6 weeks on grip strength and CMC opposition. Other treatment modalities investigating the effect of heat therapy for patients with hand OA did not find improvements in strength or stiffness when using either the heat provided by a tiled stove [35] or infrared radiation [39]. No studies on the application of wax or hot packs were included in this review.", "No studies fulfilling our inclusion criteria were found for ultrasound or transcutaneous electrical nerve stimulation (TENS). Ultrasound is recommended by EULAR for the management of OA, yet there is evidence from studies of knee OA that ultrasound offers no benefit over placebo [53]. Given that hand joints are more superficial than the knee joint, ultrasound may have different effects in hand OA and is worthy of investigation. Likewise, the effect of TENS for the management of hand OA should be investigated given that some [53,54] but not all [57] systematic reviews in knee OA show that TENS has significant pain-relieving benefits. One study involving TENS, excluded from our review but included in that of Towheed [8], found that use of a glove electrode was, overall, more effective than use of a carbon electrode when using TENS in individuals with hand OA. Other rehabilitative interventions we excluded from our review involved a yoga program [29], which was reported to be effective in improving pain, tenderness, and ROM, and leech therapy, which was more effective than treatment with the drug diclofenac [58].\nThere are several limitations to this review. First, the statistical power of most studies was rather low. To detect a medium effect size of 0.5 (with α = 0.5 and power at 80%), the sample size per group needs to be at least 50 [20]. This is particularly relevant given that many studies reported a lack of treatment effect on the measured outcomes, and this lack of effect may simply reflect inadequate statistical power. Furthermore, despite contacting authors requesting additional information where required, we were unable to calculate effect sizes for two trials included in the review. Second, we did not confine our studies to RCTs, given the likely lack of studies in this area, and instead included one quasi-RCT [39] and two crossover trials [33,35] on the assumption that hand OA is a non-curable condition and that carry-over of treatment effect across periods may be less likely. The findings of these studies need to be interpreted cautiously given these study designs. Third, the methodological assessment revealed some threats to the validity of the included trials, with around half the studies rated as being of lower quality. A summary of the evidence was therefore made with higher-quality studies graded by means of the PEDro system. Fourth, there was variable use of outcome measures across the trials, making it difficult to compare and pool results across studies.", "This systematic review establishes that there is emerging high-quality evidence to support that certain rehabilitation interventions provide benefits to specific treatment goals in individuals with hand OA. 
A summary of the higher-quality evidence is provided to assist with clinical decision making based on current evidence. In this review, the evidence suggests the following: (a) long-term use of a night splint offers significant benefits to improve pain, hand function, strength, and ROM for patients with OA; (b) programs of joint protection, advice, and home exercises are effective at improving grip strength and hand function; (c) low-level laser therapy is effective at improving ROM; and (d) no rehabilitation interventions were found to improve stiffness.\nThough recommended for OA, exercise programs have not yet been shown to reduce pain in this patient group. We concur with previous systematic reviews suggesting that further high-quality research is urgently needed concerning the effects of rehabilitation interventions on specific patient goals for individuals with hand OA. Specifically, the future agenda should include (a) the use of a common set of outcome measures that adequately capture the dimensions of impairments and function; (b) the use of higher-quality, well-powered studies that adhere to the CONSORT (Consolidated Standards of Reporting Trials) guidelines for non-pharmacological treatments [59]; and (c) the role of exercise on specific patient goals for individuals with hand OA with consideration of the optimal frequency and intensity of training.", "AUSCAN: Australian/Canadian osteoarthritis hand index; CI: confidence interval; CMC: carpometacarpal; DIP: distal interphalangeal; EULAR: European League Against Rheumatism; IP: interphalangeal; OA: osteoarthritis; PEDro: Physiotherapy Evidence Database; PIP: proximal interphalangeal; RCT: randomized controlled trial; ROM: range of motion; SMD: standardized mean difference; TENS: transcutaneous electrical nerve stimulation.", "The authors declare that they have no competing interests.", "LY participated in the study design and in the acquisition, analysis, and interpretation of data and drafted the manuscript. LK participated in the study design and in the acquisition and analysis of the data and helped to draft the manuscript. AS participated in the study design and in the analysis and interpretation of the data and helped to draft the manuscript. FD participated in data acquisition, analysis, and interpretation and drafted the final revisions of the manuscript. KB participated in the study concept and design and in the interpretation of the data and assisted with the drafting of the manuscript. All authors read and approved the final manuscript.", "Appendix 1: Detailed search strategy is attached as an appendix.\nClick here for file" ]
[ null, "materials|methods", null, null, null, null, null, "results", null, null, null, null, null, null, null, null, null, null, "discussion", null, null, null, "conclusions", null, null, null, "supplementary-material" ]
[]
Development of an erythropoietin prescription simulator to improve abilities for the prescription of erythropoietin stimulating agents: is it feasible?
21332992
The increasing use of erythropoietins with long half-lives and the tendency to lengthen the administration interval to monthly injections call for raising awareness on the pharmacokinetics and risks of new erythropoietin stimulating agents (ESA). Their pharmacodynamic complexity and individual variability limit the possibility of attaining comprehensive clinical experience. In order to help physicians acquiring prescription abilities, we have built a prescription computer model to be used both as a simulator and education tool.
BACKGROUND
The pharmacokinetic computer model was developed using Visual Basic on Excel and tested with 3 different ESA half-lives (24, 48 and 138 hours) and 2 administration intervals (weekly vs. monthly). Two groups of 25 nephrologists were exposed to the six randomised combinations of half-life and administration interval. They were asked to achieve and maintain, as precisely as possible, the haemoglobin target of 11-12 g/dL in a simulated naïve patient. Each simulation was repeated twice, with or without randomly generated bleeding episodes.
METHODS
The simulation using an ESA with a half-life of 138 hours, administered monthly, compared to the other combinations of half-lives and administration intervals, showed an overshooting tendency (percentages of Hb values > 13 g/dL 15.8 ± 18.3 vs. 6.9 ± 12.2; P < 0.01), which was quickly corrected with experience. The prescription ability appeared to be optimal with a 24 hour half-life and weekly administration (ability score indexing values in the target 1.52 ± 0.70 vs. 1.24 ± 0.37; P < 0.05). The monthly prescription interval, as suggested in the literature, was accompanied by less therapeutic adjustments (4.9 ± 2.2 vs. 8.2 ± 4.9; P < 0.001); a direct correlation between haemoglobin variability and number of therapy modifications was found (P < 0.01).
RESULTS
Computer-based simulations can be a useful tool for improving ESA prescription abilities among nephrologists by raising awareness about the pharmacokinetic characteristics of the various ESAs and recognizing the factors that influence haemoglobin variability.
CONCLUSIONS
[ "Computer Simulation", "Erythropoietin", "Feasibility Studies", "Hemoglobins", "Humans", "Models, Biological", "Prescriptions" ]
3055807
null
null
Methods
[SUBTITLE] Simulator characteristics [SUBSECTION] The "epoietin prescription simulator" was developed in Visual Basic on Excel. The version used in the study is annexed to this document (see Additional File 1 named The Epoetin Prescription Simulator). The user's manual as well as the formulae, including a mono-exponential one that relates the erythropoietin half-life selected for the simulation to ESA's concentration and its effect on the production of new red blood cells, are detailed in the Additional File 2 named Appendix 1. The simulator randomly defines, for an ESA naïve patient, the starting haemoglobin (Hb), with 0.1 g/dL increments, in a range between 7.0 and 8.0 g/dL. To better adjust to clinical practice, it automatically includes, for the duration of the simulation, incidental fluctuations in Hb with an absolute magnitude between -0.5 and +0.5 g/dL [24]. The subject's sensitivity to epoetin is randomly assigned, making sure that the population's average weekly need for epoetin in order to reach the pre-established haemoglobin target of 11-12 g/dL [25] is at approximately 6,000 units. The mean red blood cell (RBC) lifespan in the initial configuration is always set at 61.2 days, and will fluctuate during the simulation according to the erythrocyte age distribution. The amount of weekly epoetin needed, the initial RBC lifespan and the pre-erythrocyte kinetics are simulated according to the data of the literature [26-32]. The home page contains a window with the half-life selected for the test (restricted to 24, 48 and 138 hours), another with the current haemoglobin, an active window where the epoetin dosage can be entered (initially weekly or biweekly and, after 8 weeks, weekly or monthly as predetermined), and finally one showing the statistics of the test in progress from the first week. Statistics are automatically updated during the simulation and summarise the following parameters: mean Hb with SD, variability based on the delta Hb (average of the difference between consecutive values), mean RBC lifespan, percentage of Hb values < 11, >12 and >13 g/dL, and a score ("ability score") starting from 1 (meaning that 100% of the values are outside the target range); the score increases with the decrease of Hb values outside the predetermined optimal range of 11-12 g/dL (a haemoglobin value above 13 g/dl is counted as a double error: one point for Hb > 12 and another for Hb > 13 g/dL; see appendix 1 for details). The model also includes the possibility of randomly adding an acute bleeding episode with depletion of blood volume between 0% and 30%. Considering that the software is annexed to the paper, we remind those users who are outside the current study that, taking into account the simplification of the biological process on which the design was based, and the fact that pharmacodynamic data for erythropoietin are incomplete and affected by significant differences among individuals, the model cannot be used to compare erythropoietin products currently on the market or to prescribe erythropoietin in clinical practice. The "epoietin prescription simulator" was developed in Visual Basic on Excel. The version used in the study is annexed to this document (see Additional File 1 named The Epoetin Prescription Simulator). 
[SUBTITLE] Selected population and procedure [SUBSECTION] In order to meet the study's needs, we asked two independent groups of 25 nephrologists (both graduates or still pursuing their degree) selected during a national-level meeting to participate by completing the 6 predetermined simulations in random order (erythropoietin with 24, 48 or 138 hours of half-life combined with weekly or monthly administration interval). Each group's participation was planned to be separated by nine months. 
The first session was scheduled during CERA's premarketing stage and was designed to be carried out with candidates without experience with the new molecule; the second one was the opposite, taking place after CERA had entered the market. In the first session (simulation A), in order to double the number of equilibration events to which each candidate was subjected, and to analyse the learning curve, an acute bleeding episode (0-30% of blood volume) was randomly introduced into the model between weeks 18 and 24. During the second session (simulation B) we exposed each candidate to the six predetermined simulations but without acute bleeding episodes. In each simulation, candidates were asked to enter an erythropoietin dosage, taking into account the erythropoietin half-life and administration interval already entered with the purpose of reaching as fast as possible and with the maximum precision the Hb target (between 11 and 12 g/dL), adapting the dosage in the following weeks/months as if it was a patient on haemodialysis. [SUBTITLE] Statistical analysis [SUBSECTION] Statistical analyses were performed using a statistical software package (SPSS 12.0; SPSS Inc., Chicago, IL, USA). Results were expressed as mean ± SD. Intra-patient [33] haemoglobin variability other than SD was estimated from the average of the absolute value of the differences between consecutive parameters defined in the text "delta Hb". The haemoglobin target was selected with a narrow margin (11-12 g/dL) to improve the likelihood of finding differences between the groups. Comparisons between parameters were carried out with a paired t-test, while Hb profiles as a function of time were compared using a trapezoidal estimation of the area under the curves followed by a Wilcoxon Signed Ranks test. Percentages were compared using a Fisher Exact test. In all cases, a P ≤ 0.05 was considered statistically significant; P was expressed as ns (not significant), 0.05, <0.05, <0.01 and <0.001. 
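The two variability metrics contrasted above, the SD and the "delta Hb", can behave quite differently on the same series. The study itself used SPSS 12.0; the short Python sketch below, on an invented haemoglobin series, only illustrates why a slow drift and a saw-tooth can share the same SD while having very different delta Hb values.

```python
# Minimal sketch (Python here, not the SPSS workflow used in the study) of the two
# variability metrics compared in the paper: the standard deviation of a haemoglobin
# series versus the "delta Hb", i.e. the mean absolute difference between consecutive
# values. Both series below are invented for illustration.
from statistics import stdev

def delta_hb(series):
    """Mean absolute difference between consecutive haemoglobin values (g/dL)."""
    return sum(abs(b - a) for a, b in zip(series, series[1:])) / (len(series) - 1)

# A slow drift and a saw-tooth built from the same values: identical SD, very different delta Hb.
drift = [10.8, 10.9, 11.0, 11.1, 11.2, 11.3, 11.4, 11.5]
sawtooth = [10.8, 11.5, 10.9, 11.4, 11.0, 11.3, 11.1, 11.2]

for name, series in [("drift", drift), ("saw-tooth", sawtooth)]:
    print(f"{name:9s}: SD = {stdev(series):.2f} g/dL, delta Hb = {delta_hb(series):.2f} g/dL")
```

Because the saw-tooth is simply a reordering of the drift values, the SD is identical in both cases, whereas the delta Hb is four times larger for the saw-tooth; this is the behaviour the paper exploits when it calls delta Hb the more sensitive indicator of week-to-week instability.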
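The profile comparison described in the statistical analysis (a per-subject trapezoidal area under the Hb-time curve, followed by a Wilcoxon signed ranks test on the paired areas) can be sketched as follows. numpy/scipy stand in for the SPSS 12.0 procedures actually used, and the paired 32-week profiles are fabricated for the example.

```python
# Hedged sketch of the curve comparison described above: trapezoidal AUC of each
# subject's weekly Hb profile, then a Wilcoxon signed-rank test on the paired AUCs.
# numpy/scipy are stand-ins for the SPSS 12.0 procedures used in the study, and the
# two 32-week profiles per prescriber are fabricated for illustration.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
weeks = np.arange(32)

# fabricated paired profiles, e.g. the same 25 prescribers under two modalities
profiles_a = 11.0 + 0.3 * rng.standard_normal((25, weeks.size))
profiles_b = 10.6 + 0.3 * rng.standard_normal((25, weeks.size))

auc_a = trapezoid(profiles_a, weeks, axis=1)   # one AUC per prescriber (g/dL x weeks)
auc_b = trapezoid(profiles_b, weeks, axis=1)

stat, p = wilcoxon(auc_a, auc_b)               # paired, non-parametric comparison
print(f"Wilcoxon statistic = {stat:.1f}, P = {p:.4f}")
```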
null
null
null
null
[ "Background", "Simulator characteristics", "Selected population and procedure", "Statistical analysis", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The recent development of long-acting erythropoietin stimulating agents (ESA), and the clinical trend to increase ESA administration intervals, have markedly changed the ESA prescription profile in nephrology. Thus, from a half-life of 19.4 ± 2.4 hours for erythropoietin alpha [1] and 24.2 ± 2.6 hours for erythropoietin beta [2] after subcutaneous administration, and a recommendation to administer these agents up to three times per week, we are now using compounds such as darbepoetin [3] -and, recently, CERA- that have a much longer half-life (48.8 ± 5.2 hours and 139 ± 20 hours, [4,5] respectively), with the suggestion to lengthen the administration interval up to once monthly. However, the correct application of the new strategies implies that users are well aware of the pharmacological characteristics and risks of the newer long-acting molecules.\nAlthough these assumptions have probably not been met, prescribers seem to have quickly and spontaneously adapted to the new conditions, but numerous observations document important - and sometimes cyclical - fluctuations in the haemoglobin values of chronically-treated patients. This has led to numerous questions, which the current clinical, pharmacokinetic and pharmacodynamic knowledge has been able to answer only in part [6-18]. With respect to haemoglobin cycling in particular, the prescription strategy, especially considering frequent and sudden adjustments in erythropoietin dosage, has been considered a possible triggering factor [19-21], while so far no direct effect of a longer administration interval on haemoglobin stability has been noted. The debate on the causes of haemoglobin variability would be purely academic if the stability of haemoglobin were not to be within a therapeutic margin that, in these last few years, has become narrower, and if the haemoglobin fluctuations had not been associated with a greater morbidity [22].\nIn order to help physicians acquire prescription ability, and with the hope of reducing haemoglobin variability, we felt it was relevant to build a pharmacokinetic computer model to be used as an ESA prescription simulator. The aim of this simulator is to raise awareness among ESA users about the implications of recent changes in erythropoietin half-life and prescription intervals. For this purpose, we have built a simulator that asks users to prescribe various ESAs for a naïve patient and to adjust the dose as precisely as possible in a 12-week equilibration or balance phase followed by a 20-week maintenance phase (haemoglobin target 11-12 g/dL). The epoetin half-life and the interval of administration will be assigned to the user at the start of the exercise, while the initial haemoglobin and the patient's erythropoietin sensitivity are randomly generated by the software.\nAlthough such a simulator, contrary to other models based, for instance, on Artificial Neural Networks or Bayesian Adaptive Control [23], is not a prediction tool applicable to clinical practice, we feel that it could be a good way to improve the physicians' ability to prescribe ESAs. Moreover, the simulator should enable us to answer the following questions: (1) Is the ability to keep haemoglobin within the target (primary end point) and haemoglobin stability (secondary end point) influenced by the erythropoietin half-life and administration interval and/or by the use of the simulator (learning effect)? 
(2) Does the number of changes in ESA dosing correlate with the fluctuations in haemoglobin values and administration interval (secondary end point)? (3) Is the intra-patient delta haemoglobin a more sensitive indicator of haemoglobin stability compared to the standard deviation (secondary end point)?", "The \"epoietin prescription simulator\" was developed in Visual Basic on Excel. The version used in the study is annexed to this document (see Additional File 1 named The Epoetin Prescription Simulator). The user's manual as well as the formulae, including a mono-exponential one that relates the erythropoietin half-life selected for the simulation to ESA's concentration and its effect on the production of new red blood cells, are detailed in the Additional File 2 named Appendix 1.\nThe simulator randomly defines, for an ESA naïve patient, the starting haemoglobin (Hb), with 0.1 g/dL increments, in a range between 7.0 and 8.0 g/dL. To better adjust to clinical practice, it automatically includes, for the duration of the simulation, incidental fluctuations in Hb with an absolute magnitude between -0.5 and +0.5 g/dL [24]. The subject's sensitivity to epoetin is randomly assigned, making sure that the population's average weekly need for epoetin in order to reach the pre-established haemoglobin target of 11-12 g/dL [25] is at approximately 6,000 units. The mean red blood cell (RBC) lifespan in the initial configuration is always set at 61.2 days, and will fluctuate during the simulation according to the erythrocyte age distribution. The amount of weekly epoetin needed, the initial RBC lifespan and the pre-erythrocyte kinetics are simulated according to the data of the literature [26-32].\nThe home page contains a window with the half-life selected for the test (restricted to 24, 48 and 138 hours), another with the current haemoglobin, an active window where the epoetin dosage can be entered (initially weekly or biweekly and, after 8 weeks, weekly or monthly as predetermined), and finally one showing the statistics of the test in progress from the first week.\nStatistics are automatically updated during the simulation and summarise the following parameters: mean Hb with SD, variability based on the delta Hb (average of the difference between consecutive values), mean RBC lifespan, percentage of Hb values < 11, >12 and >13 g/dL, and a score (\"ability score\") starting from 1 (meaning that 100% of the values are outside the target range); the score increases with the decrease of Hb values outside the predetermined optimal range of 11-12 g/dL (a haemoglobin value above 13 g/dl is counted as a double error: one point for Hb > 12 and another for Hb > 13 g/dL; see appendix 1 for details). 
The model also includes the possibility of randomly adding an acute bleeding episode with depletion of blood volume between 0% and 30%.\nConsidering that the software is annexed to the paper, we remind those users who are outside the current study that, taking into account the simplification of the biological process on which the design was based, and the fact that pharmacodynamic data for erythropoietin are incomplete and affected by significant differences among individuals, the model cannot be used to compare erythropoietin products currently on the market or to prescribe erythropoietin in clinical practice.", "In order to meet the study's needs, we asked two independent groups of 25 nephrologists (both graduates or still pursuing their degree) selected during a national-level meeting to participate by completing the 6 predetermined simulations in random order (erythropoietin with 24, 48 or 138 hours of half-life combined with weekly or monthly administration interval). Each group's participation was planned to be separated by nine months.\nThe first session was scheduled during CERA's premarketing stage and was designed to be carried out with candidates without experience with the new molecule; the second one was the opposite, taking place after CERA had entered the market. In the first session (simulation A), in order to double the number of equilibration events to which each candidate was subjected, and to analyse the learning curve, an acute bleeding episode (0-30% of blood volume) was randomly introduced into the model between weeks 18 and 24. During the second session (simulation B) we exposed each candidate to the six predetermined simulations but without acute bleeding episodes. In each simulation, candidates were asked to enter an erythropoietin dosage, taking into account the erythropoietin half-life and administration interval already entered with the purpose of reaching as fast as possible and with the maximum precision the Hb target (between 11 and 12 g/dL), adapting the dosage in the following weeks/months as if it was a patient on haemodialysis.", "Statistical analyses were performed using a statistical software package (SPSS 12.0; SPSS Inc., Chicago, IL, USA). Results were expressed as mean ± SD. Intra-patient [33] haemoglobin variability other than SD was estimated from the average of the absolute value of the differences between consecutive parameters defined in the text \"delta Hb\". The haemoglobin target was selected with a narrow margin (11-12 g/dL) to improve the likelihood of finding differences between the groups. Comparisons between parameters were carried out with a paired t-test, while Hb profiles as a function of time were compared using a trapezoidal estimation of the area under the curves followed by a Wilcoxon Signed Ranks test. Percentages were compared using a Fisher Exact test. In all cases, a P ≤ 0.05 was considered statistically significant; P was expressed as ns (not significant), 0.05, <0.05, <0.01 and <0.001.", "The Hb course as a function of time (expressed as simulation weeks) and the average Hb in each simulation modality are shown in Figure 1 and Table 1, respectively. In order to facilitate the comparison of the different curves, the bleeding phases are not represented in the graph and the curves have been synchronised. Thus, the second equilibration phase starts on the graph in the same week for each patient and each modality. 
As can be observed, and even if the curves with the exception of 138M were not statistically different, the combination of half-life and administration interval that, in the equilibration phase, best respected the target range was 24 hours with weekly administration (24W). The only curve on Figure 1 that was significantly different from the others (P < 0.01) was associated with overshooting: the 138M (138 h half-life and monthly administration interval). With this combination, and compared with the other simulations, the percentage of Hb values > 13 g/dL was higher (15.8 ± 18.3% vs. 6.9 ± 12.2%; P < 0.01). In support of a favourable learning effect, no significant differences between groups were observed during the equilibration phase following the bleeding episode. In this regard, compared with the first equilibration phase, the overall dispersion of Hb values outside the target range was significantly smaller (average Hb dispersion in the last 8 weeks of the 2 equilibration phases 1.32 ± 0.28 vs. 0.63 ± 0.26 g/dL, P < 0.01; percentage of Hb values outside the 11-12 g/dL target 50.0 ± 30.9 vs. 18.8 ± 28.8%, P = 0.05) (secondary end-point). Haemoglobin variability is detailed in Table 1 using as parameters the SD and the average of the absolute value of the difference between consecutive measurements \"delta Hb\".\nHaemoglobin course; simulation A (before CERA's marketing). Hb as a function of the weeks elapsed since the start of the simulation for the 6 combinations of half-lives and administration intervals (half-life in h: 24, 48 and 138; administration interval, weekly (W) or monthly (M); 24W solid line with black triangles, 48W solid line with black squares, 138W solid line with black diamonds, 24M dotted line with white triangles, 48M dotted line with white squares, 138M dotted line with white diamonds). The randomly-assigned bleeding phase between the two equilibration exercises is not represented; the second equilibration phase is synchronised. N = 25.\nHaemoglobin variability in simulation A\nAverage Hb and variability expressed as standard deviation and delta Hb (average of the absolute value of the difference between consecutive measurements) for the 6 modalities. Half-life in h: 24, 48 and 138; administration interval, weekly (W) or monthly (M). N = 25\nChanges in Hb observed during the second simulation (simulation B), which was carried out after the marketing of CERA, are shown in Figure 2. As shown in the Figure, no curves were characterized by evident overshooting (mean Hb > 13 g/dL).\nHaemoglobin course in simulation B (after CERA's marketing). Hb as a function of the weeks elapsed since the start of the simulation for the 6 combinations of half-lives and administration intervals (half-life in h: 24, 48 and 138; administration interval, weekly (W) or monthly (M); 24W solid line with black triangles, 48W solid line with black squares, 138W solid line with black diamonds, 24M dotted line with white triangles, 48M dotted line with white squares, 138M dotted line with white diamonds). To better evaluate the equilibration phase no bleeding episodes were introduced. N = 25.\nThe comparison between monthly and weekly administration is shown in Figure 3 (where only Hb values that were visible to the simulator user are represented) and Figure 4 (representing the weekly haemoglobin values for the entire simulation).\nMonthly haemoglobin course comparing monthly and weekly administration intervals; simulation B. 
Monthly Hb as a function of the weeks elapsed since the start of the simulation for the 2 administration intervals: monthly M (dotted line with white squares) or weekly W (solid line with black squares). N = 25\nWeekly haemoglobin course comparing monthly and weekly administration intervals; simulation B. Weekly Hb as a function of the weeks elapsed since the start of the simulation for the 2 administration intervals: monthly M (dotted line with black squares) or weekly W (solid line with black squares). The simulator user was aware of the monthly values only. N = 25.\nHaemoglobin variability during the maintenance phase is represented in Figure 5, using the SD and the average of the absolute value of the difference \"delta Hb\" between consecutive measurements. The monthly administration interval compared with the weekly one was associated with a significantly higher delta Hb (0.76 vs. 0.34 g/dL; P < 0.01). The SD was not able to detect the difference (secondary end-point). The same parameters are shown in Table 2 for the entire simulation.\nHaemoglobin variability in simulation B. Hb variability expressed as standard deviation (SD) and delta Hb (absolute value of the difference between consecutive measurements) comparing the 2 administration intervals: monthly M (black columns) and weekly W (white columns). The difference between columns in \"Delta Hb\" is significant; P < 0.01. N = 25.\nHaemoglobin values and ability score; simulation B\nMean haemoglobin with SD and delta Hb, value percentages outside the target (during the 32 weeks of the simulation) and ability score for the 6 modalities, comparing monthly to weekly administration (half-life of 24, 48 and 138 hours; monthly M and weekly W administration) (simulation B). N = 25\nTable 2 shows the percentage of values outside the target during the 32 weeks of the simulation for each modality, comparing monthly to weekly administration. Monthly administration is characterised by a lower percentage of Hb values at the target of 11-12 g/dL (18.4 vs. 26.8%; P < 0.05), but there are also fewer values above 12 or 13 g/dL (11.8 vs. 25.3%; P < 0.01 and 4.3 vs. 13.0; P < 0.05). With respect to the ability score, the only modality that stands out from the rest in a significant way is that associating the short half-life (24 hours) with the weekly administration (1.52 ± 0.70 vs. 1.24 ± 0.37; P < 0.05) (primary end point).\nThe error in determining the maintenance dose was estimated by calculating the difference between the mean weekly dose of erythropoietin used in the equilibration phase and that used in the maintenance phase (6084 ± 3057 vs. 5575 ± 1828 U/Week). The discrepancy, as illustrated in Figure 6, was found to be larger for weekly than for monthly administration (14.5 ± 15.9 vs. 9.1 ± 11.0%; P < 0.05), with differences increasing together with the half-life of the ESA.\nEpoetin dose reduction, equilibration vs. maintenance in simulation B. Error in determining the maintenance dose estimated by calculating the percentage difference between the mean erythropoietin dose used in the equilibration phase and that used in the maintenance phase (half-life of 24, 48 and 138 h; monthly M and weekly W administration). N = 25.\nWhatever the drug half-life, the weekly administration of ESA was associated with a significantly higher number of adjustments of the doses (4.9 ± 2.2 vs. 8.2 ± 4.9; P < 0.001). 
Figure 7 presents the significant direct correlation between the number of therapy modifications and haemoglobin variability, expressed as delta Hb, during the maintenance phase (secondary end-point).\nHaemoglobin variability and epoetin dose adjustments in simulation B. Correlation between the number of therapy modifications and Hb variability (expressed as delta Hb) in the maintenance phase. N = 25.\nInterestingly, both the half-life of erythropoietin and the administration interval appear to affect the age distribution of red blood cells (RBC) within the circulating population, modifying the mean RBC lifespan. The variability of the RBC lifespan is expressed in Figure 8 as SD. The variability decreases with an increase in the half-life of erythropoietin and with a decrease in the administration interval.\nRBC lifespan variability in simulation B. RBC lifespan variability (expressed as standard deviation) as a function of erythropoietin half-life and administration interval (half-life in hours, 24, 48 and 138; administration interval, weekly W or monthly M). N = 25.", "The purpose of this study was to develop a new tool to improve the physicians' ability to prescribe erythropoietin stimulating agents in dialysis patients and hence to raise awareness on the pharmacological consequences resulting from the use of ESAs with a very long half-life over longer administration intervals. Our two simulations (A and B), performed before and after CERA's marketing, enable us to conclude that the simulator is user-friendly and that its use is associated with learning (less dispersion of Hb values and improvement of the ability to respect the predetermined target).\nSimulation modality A was tested with a group of nephrologists who had never used erythropoietins with half-lives longer than 48 hours, showing that the first approach with a prolonged half-life (138 hours) and a monthly administration interval could have overshooting as a consequence. Nevertheless, the risk of overshooting is quickly corrected thanks to the learning, and the prescriber is already able to avoid the administration of an excessive dose in the second equilibration phase of the same simulation.\nInterestingly, in simulation B, performed after CERA's marketing with a group of nephrologists being aware of its long half-life (about 139 hours), overshooting tendencies were not observed. The same simulation, compared with the first one, allowed a more detailed analysis of the maintenance phase following the initial equilibration phase. Surprisingly, the simulation has shown that nephrologists have been more careful and conservative with prescriptions when faced with an administration interval of 4 weeks. The result was that monthly administration has been translated into a significantly lower average Hb value (10.3 vs. 11.1 g/dL; P < 0.01), as well as a lower percentage of Hb values at the target of 11-12 g/dL (18.4 vs. 26.8%; P < 0.05). Accordingly, with the type of ESA and the administration interval mostly used at the time the study was carried out, the only modality that was characterised with a prescription ability (calculated by indexing the percentage of parameter values on target, below target, above target, and above the safety Hb value of 13 g/dL) that was statistically superior compared to the others (primary end point) was the combination of the shortest half-life (24 hours) with weekly administration (ability score 1.52 vs. 1.24; P < 0.05). 
The smaller adjustment of the mean ESA dose needed between the equilibration and maintenance phases with the monthly administration interval (9.1 vs. 14.5%; P < 0.05) was probably due to the lower mean Hb value at the end of the equilibration phase when the monthly interval was used; it should therefore not be interpreted as better prescription ability but as more prudence when faced with long half-lives.\nAs can be observed when comparing Figures 2 and 3, the weekly Hb value shows larger variability when using monthly compared to weekly administration (only the monthly values as shown in Figure 2 were visible to the prescriber). The greatest variability in the model is generated by red blood cell sub-populations of different ages (a fact shown by the curve behaviour in Figure 3 as well as by the significant difference in the fluctuation of red blood cell lifespan during the simulation summarised in Figure 8), caused by the single monthly dose of erythropoietin that has resulted in a non-homogeneous generation and elimination of red blood cells over time.\nHb variability analysis enables a critical evaluation of the meaning of standard deviation (average distance of individual values from the mean) compared to delta Hb (average of the absolute value of the difference between consecutive measurements). In the particular case of the simulation, delta Hb is in fact the parameter that best reflects incidental changes in Hb values.\nUsing a monthly instead of a weekly administration interval, the number of modifications to the erythropoietin dose is significantly lower (4.9 vs. 8.2; P < 0.001). As suggested in the literature [19] and demonstrated by the significant direct correlation between Hb variability and the number of epoetin dose adjustments (Figure 7), this fact could have favourable consequences on Hb stability.", "In conclusion, bearing in mind the limitations of our model, the results obtained with our simulator could be used as a path for further experimental studies. An ESA prescription simulator can be a useful educational tool to raise awareness about the possible consequences of changing the medication's half-life or its administration interval. The first time users were faced with an erythropoietin compound with a half-life of 138 hours and a monthly administration interval, they were exposed to a risk of overshooting, which was corrected by the training on the simulator. With respect to possible consequences, very long half-lives and monthly administration intervals translate into a conservative prescription that maintains Hb values below the desired level. As a consequence, the shortest half-life (24 hours) combined with the weekly administration interval is associated with the best prescription ability. The monthly prescription interval, however, as suggested in the literature, is accompanied by fewer therapeutic adjustments; in this regard a direct correlation between Hb variability and the number of therapy modifications has been demonstrated. In our simulations, the delta Hb has proven to be a better tool for evaluating intra-patient Hb variability than the standard deviation.\nComputer-based simulation tools can be particularly useful for improving prescription patterns and for testing new working hypotheses such as determining the factors that influence Hb variability. Additional knowledge on the pharmacodynamics of ESAs is necessary to fine-tune the models and bring them closer to the level of regular clinical experience. 
However, investigating the consequences that the half-life and administration interval of currently marketed ESAs may have for Hb stability will require specific, targeted clinical trials.", "The authors declare that they have no competing interests.", "LG was involved in the study design, sample collection, analysis and interpretation of the data, in building the simulation model and in writing the report; FN and VF participated in the sample collection, analysis and interpretation of the data and in writing the paper; FR and WN participated in building the simulation model; MB helped formulate the study design and the data analysis strategy and contributed to writing the paper. All authors have read and approved the final version of the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2369/12/11/prepub\n" ]
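The Results and Discussion above report a "significant direct correlation" between the number of epoetin dose adjustments and haemoglobin variability expressed as delta Hb (Figure 7), without naming the coefficient used. A minimal sketch of such an analysis, on fabricated per-prescriber data and assuming a Pearson correlation, would be:

```python
# Sketch of the Figure 7 analysis: number of dose adjustments per simulation versus
# haemoglobin variability (delta Hb) in the maintenance phase. The data below are
# fabricated, and the choice of a Pearson coefficient is an assumption: the paper only
# reports a "significant direct correlation" without naming the statistic.
from scipy.stats import pearsonr

dose_changes = [3, 4, 4, 5, 6, 7, 8, 9, 10, 12]        # adjustments per 32-week simulation
delta_hb     = [0.25, 0.30, 0.28, 0.35, 0.40, 0.45,    # corresponding delta Hb (g/dL)
                0.50, 0.55, 0.60, 0.70]

r, p = pearsonr(dose_changes, delta_hb)
print(f"r = {r:.2f}, P = {p:.4f}")
```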
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Simulator characteristics", "Selected population and procedure", "Statistical analysis", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history", "Supplementary Material" ]
[ "The recent development of long-acting erythropoietin stimulating agents (ESA), and the clinical trend to increase ESA administration intervals, have markedly changed the ESA prescription profile in nephrology. Thus, from a half-life of 19.4 ± 2.4 hours for erythropoietin alpha [1] and 24.2 ± 2.6 hours for erythropoietin beta [2] after subcutaneous administration, and a recommendation to administer these agents up to three times per week, we are now using compounds such as darbepoetin [3] -and, recently, CERA- that have a much longer half-life (48.8 ± 5.2 hours and 139 ± 20 hours, [4,5] respectively), with the suggestion to lengthen the administration interval up to once monthly. However, the correct application of the new strategies implies that users are well aware of the pharmacological characteristics and risks of the newer long-acting molecules.\nAlthough these assumptions have probably not been met, prescribers seem to have quickly and spontaneously adapted to the new conditions, but numerous observations document important - and sometimes cyclical - fluctuations in the haemoglobin values of chronically-treated patients. This has led to numerous questions, which the current clinical, pharmacokinetic and pharmacodynamic knowledge has been able to answer only in part [6-18]. With respect to haemoglobin cycling in particular, the prescription strategy, especially considering frequent and sudden adjustments in erythropoietin dosage, has been considered a possible triggering factor [19-21], while so far no direct effect of a longer administration interval on haemoglobin stability has been noted. The debate on the causes of haemoglobin variability would be purely academic if the stability of haemoglobin were not to be within a therapeutic margin that, in these last few years, has become narrower, and if the haemoglobin fluctuations had not been associated with a greater morbidity [22].\nIn order to help physicians acquire prescription ability, and with the hope of reducing haemoglobin variability, we felt it was relevant to build a pharmacokinetic computer model to be used as an ESA prescription simulator. The aim of this simulator is to raise awareness among ESA users about the implications of recent changes in erythropoietin half-life and prescription intervals. For this purpose, we have built a simulator that asks users to prescribe various ESAs for a naïve patient and to adjust the dose as precisely as possible in a 12-week equilibration or balance phase followed by a 20-week maintenance phase (haemoglobin target 11-12 g/dL). The epoetin half-life and the interval of administration will be assigned to the user at the start of the exercise, while the initial haemoglobin and the patient's erythropoietin sensitivity are randomly generated by the software.\nAlthough such a simulator, contrary to other models based, for instance, on Artificial Neural Networks or Bayesian Adaptive Control [23], is not a prediction tool applicable to clinical practice, we feel that it could be a good way to improve the physicians' ability to prescribe ESAs. Moreover, the simulator should enable us to answer the following questions: (1) Is the ability to keep haemoglobin within the target (primary end point) and haemoglobin stability (secondary end point) influenced by the erythropoietin half-life and administration interval and/or by the use of the simulator (learning effect)? 
(2) Does the number of changes in ESA dosing correlate with the fluctuations in haemoglobin values and administration interval (secondary end point)? (3) Is the intra-patient delta haemoglobin a more sensitive indicator of haemoglobin stability compared to the standard deviation (secondary end point)?", "[SUBTITLE] Simulator characteristics [SUBSECTION] The \"epoietin prescription simulator\" was developed in Visual Basic on Excel. The version used in the study is annexed to this document (see Additional File 1 named The Epoetin Prescription Simulator). The user's manual as well as the formulae, including a mono-exponential one that relates the erythropoietin half-life selected for the simulation to ESA's concentration and its effect on the production of new red blood cells, are detailed in the Additional File 2 named Appendix 1.\nThe simulator randomly defines, for an ESA naïve patient, the starting haemoglobin (Hb), with 0.1 g/dL increments, in a range between 7.0 and 8.0 g/dL. To better adjust to clinical practice, it automatically includes, for the duration of the simulation, incidental fluctuations in Hb with an absolute magnitude between -0.5 and +0.5 g/dL [24]. The subject's sensitivity to epoetin is randomly assigned, making sure that the population's average weekly need for epoetin in order to reach the pre-established haemoglobin target of 11-12 g/dL [25] is at approximately 6,000 units. The mean red blood cell (RBC) lifespan in the initial configuration is always set at 61.2 days, and will fluctuate during the simulation according to the erythrocyte age distribution. The amount of weekly epoetin needed, the initial RBC lifespan and the pre-erythrocyte kinetics are simulated according to the data of the literature [26-32].\nThe home page contains a window with the half-life selected for the test (restricted to 24, 48 and 138 hours), another with the current haemoglobin, an active window where the epoetin dosage can be entered (initially weekly or biweekly and, after 8 weeks, weekly or monthly as predetermined), and finally one showing the statistics of the test in progress from the first week.\nStatistics are automatically updated during the simulation and summarise the following parameters: mean Hb with SD, variability based on the delta Hb (average of the difference between consecutive values), mean RBC lifespan, percentage of Hb values < 11, >12 and >13 g/dL, and a score (\"ability score\") starting from 1 (meaning that 100% of the values are outside the target range); the score increases with the decrease of Hb values outside the predetermined optimal range of 11-12 g/dL (a haemoglobin value above 13 g/dl is counted as a double error: one point for Hb > 12 and another for Hb > 13 g/dL; see appendix 1 for details). The model also includes the possibility of randomly adding an acute bleeding episode with depletion of blood volume between 0% and 30%.\nConsidering that the software is annexed to the paper, we remind those users who are outside the current study that, taking into account the simplification of the biological process on which the design was based, and the fact that pharmacodynamic data for erythropoietin are incomplete and affected by significant differences among individuals, the model cannot be used to compare erythropoietin products currently on the market or to prescribe erythropoietin in clinical practice.
\n[SUBTITLE] Selected population and procedure [SUBSECTION] In order to meet the study's needs, we asked two independent groups of 25 nephrologists (both graduates or still pursuing their degree) selected during a national-level meeting to participate by completing the 6 predetermined simulations in random order (erythropoietin with 24, 48 or 138 hours of half-life combined with weekly or monthly administration interval). 
Each group's participation was planned to be separated by nine months.\nThe first session was scheduled during CERA's premarketing stage and was designed to be carried out with candidates without experience with the new molecule; the second one was the opposite, taking place after CERA had entered the market. In the first session (simulation A), in order to double the number of equilibration events to which each candidate was subjected, and to analyse the learning curve, an acute bleeding episode (0-30% of blood volume) was randomly introduced into the model between weeks 18 and 24. During the second session (simulation B) we exposed each candidate to the six predetermined simulations but without acute bleeding episodes. In each simulation, candidates were asked to enter an erythropoietin dosage, taking into account the erythropoietin half-life and administration interval already entered with the purpose of reaching as fast as possible and with the maximum precision the Hb target (between 11 and 12 g/dL), adapting the dosage in the following weeks/months as if it was a patient on haemodialysis.\n[SUBTITLE] Statistical analysis [SUBSECTION] Statistical analyses were performed using a statistical software package (SPSS 12.0; SPSS Inc., Chicago, IL, USA). Results were expressed as mean ± SD. Intra-patient [33] haemoglobin variability other than SD was estimated from the average of the absolute value of the differences between consecutive parameters defined in the text \"delta Hb\". The haemoglobin target was selected with a narrow margin (11-12 g/dL) to improve the likelihood of finding differences between the groups. Comparisons between parameters were carried out with a paired t-test, while Hb profiles as a function of time were compared using a trapezoidal estimation of the area under the curves followed by a Wilcoxon Signed Ranks test. Percentages were compared using a Fisher Exact test. 
In all cases, a P ≤ 0.05 was considered statistically significant; P was expressed as ns (not significant), 0.05, <0.05, <0.01 and <0.001.", "The \"epoietin prescription simulator\" was developed in Visual Basic on Excel. The version used in the study is annexed to this document (see Additional File 1 named The Epoetin Prescription Simulator). The user's manual as well as the formulae, including a mono-exponential one that relates the erythropoietin half-life selected for the simulation to ESA's concentration and its effect on the production of new red blood cells, are detailed in the Additional File 2 named Appendix 1.\nThe simulator randomly defines, for an ESA naïve patient, the starting haemoglobin (Hb), with 0.1 g/dL increments, in a range between 7.0 and 8.0 g/dL. To better adjust to clinical practice, it automatically includes, for the duration of the simulation, incidental fluctuations in Hb with an absolute magnitude between -0.5 and +0.5 g/dL [24]. The subject's sensitivity to epoetin is randomly assigned, making sure that the population's average weekly need for epoetin in order to reach the pre-established haemoglobin target of 11-12 g/dL [25] is at approximately 6,000 units. The mean red blood cell (RBC) lifespan in the initial configuration is always set at 61.2 days, and will fluctuate during the simulation according to the erythrocyte age distribution. The amount of weekly epoetin needed, the initial RBC lifespan and the pre-erythrocyte kinetics are simulated according to the data of the literature [26-32].\nThe home page contains a window with the half-life selected for the test (restricted to 24, 48 and 138 hours), another with the current haemoglobin, an active window where the epoetin dosage can be entered (initially weekly or biweekly and, after 8 weeks, weekly or monthly as predetermined), and finally one showing the statistics of the test in progress from the first week.\nStatistics are automatically updated during the simulation and summarise the following parameters: mean Hb with SD, variability based on the delta Hb (average of the difference between consecutive values), mean RBC lifespan, percentage of Hb values < 11, >12 and >13 g/dL, and a score (\"ability score\") starting from 1 (meaning that 100% of the values are outside the target range); the score increases with the decrease of Hb values outside the predetermined optimal range of 11-12 g/dL (a haemoglobin value above 13 g/dl is counted as a double error: one point for Hb > 12 and another for Hb > 13 g/dL; see appendix 1 for details). 
The model also includes the possibility of randomly adding an acute bleeding episode with depletion of blood volume between 0% and 30%.\nConsidering that the software is annexed to the paper, we remind those users who are outside the current study that, taking into account the simplification of the biological process on which the design was based, and the fact that pharmacodynamic data for erythropoietin are incomplete and affected by significant differences among individuals, the model cannot be used to compare erythropoietin products currently on the market or to prescribe erythropoietin in clinical practice.", "In order to meet the study's needs, we asked two independent groups of 25 nephrologists (both graduates or still pursuing their degree) selected during a national-level meeting to participate by completing the 6 predetermined simulations in random order (erythropoietin with 24, 48 or 138 hours of half-life combined with weekly or monthly administration interval). Each group's participation was planned to be separated by nine months.\nThe first session was scheduled during CERA's premarketing stage and was designed to be carried out with candidates without experience with the new molecule; the second one was the opposite, taking place after CERA had entered the market. In the first session (simulation A), in order to double the number of equilibration events to which each candidate was subjected, and to analyse the learning curve, an acute bleeding episode (0-30% of blood volume) was randomly introduced into the model between weeks 18 and 24. During the second session (simulation B) we exposed each candidate to the six predetermined simulations but without acute bleeding episodes. In each simulation, candidates were asked to enter an erythropoietin dosage, taking into account the erythropoietin half-life and administration interval already entered with the purpose of reaching as fast as possible and with the maximum precision the Hb target (between 11 and 12 g/dL), adapting the dosage in the following weeks/months as if it was a patient on haemodialysis.", "Statistical analyses were performed using a statistical software package (SPSS 12.0; SPSS Inc., Chicago, IL, USA). Results were expressed as mean ± SD. Intra-patient [33] haemoglobin variability other than SD was estimated from the average of the absolute value of the differences between consecutive parameters defined in the text \"delta Hb\". The haemoglobin target was selected with a narrow margin (11-12 g/dL) to improve the likelihood of finding differences between the groups. Comparisons between parameters were carried out with a paired t-test, while Hb profiles as a function of time were compared using a trapezoidal estimation of the area under the curves followed by a Wilcoxon Signed Ranks test. Percentages were compared using a Fisher Exact test. In all cases, a P ≤ 0.05 was considered statistically significant; P was expressed as ns (not significant), 0.05, <0.05, <0.01 and <0.001.", "The Hb course as a function of time (expressed as simulation weeks) and the average Hb in each simulation modality are shown in Figure 1 and Table 1, respectively. In order to facilitate the comparison of the different curves, the bleeding phases are not represented in the graph and the curves have been synchronised. Thus, the second equilibration phase starts on the graph in the same week for each patient and each modality. 
As can be observed, and even if the curves with the exception of 138M were not statistically different, the combination of half-life and administration interval that, in the equilibration phase, best respected the target range was 24 hours with weekly administration (24W). The only curve on Figure 1 that was significantly different from the others (P < 0.01) was associated with overshooting: the 138M (138 h half-life and monthly administration interval). With this combination, and compared with the other simulations, the percentage of Hb values > 13 g/dL was higher (15.8 ± 18.3% vs. 6.9 ± 12.2%; P < 0.01). In support of a favourable learning effect, no significant differences between groups were observed during the equilibration phase following the bleeding episode. In this regard, compared with the first equilibration phase, the overall dispersion of Hb values outside the target range was significantly smaller (average Hb dispersion in the last 8 weeks of the 2 equilibration phases 1.32 ± 0.28 vs. 0.63 ± 0.26 g/dL, P < 0.01; percentage of Hb values outside the 11-12 g/dL target 50.0 ± 30.9 vs. 18.8 ± 28.8%, P = 0.05) (secondary end-point). Haemoglobin variability is detailed in Table 1 using as parameters the SD and the average of the absolute value of the difference between consecutive measurements \"delta Hb\".\nHaemoglobin course; simulation A (before CERA's marketing). Hb as a function of the weeks elapsed since the start of the simulation for the 6 combinations of half-lives and administration intervals (half-life in h: 24, 48 and 138; administration interval, weekly (W) or monthly (M); 24W solid line with black triangles, 48W solid line with black squares, 138W solid line with black diamonds, 24M dotted line with white triangles, 48M dotted line with white squares, 138M dotted line with white diamonds). The randomly-assigned bleeding phase between the two equilibration exercises is not represented; the second equilibration phase is synchronised. N = 25.\nHaemoglobin variability in simulation A\nAverage Hb and variability expressed as standard deviation and delta Hb (average of the absolute value of the difference between consecutive measurements) for the 6 modalities. Half-life in h: 24, 48 and 138; administration interval, weekly (W) or monthly (M). N = 25\nChanges in Hb observed during the second simulation (simulation B), which was carried out after the marketing of CERA, are shown in Figure 2. As shown in the Figure, no curves were characterized by evident overshooting (mean Hb > 13 g/dL).\nHaemoglobin course in simulation B (after CERA's marketing). Hb as a function of the weeks elapsed since the start of the simulation for the 6 combinations of half-lives and administration intervals (half-life in h: 24, 48 and 138; administration interval, weekly (W) or monthly (M); 24W solid line with black triangles, 48W solid line with black squares, 138W solid line with black diamonds, 24M dotted line with white triangles, 48M dotted line with white squares, 138M dotted line with white diamonds). To better evaluate the equilibration phase no bleeding episodes were introduced. N = 25.\nThe comparison between monthly and weekly administration is shown in Figure 3 (where only Hb values that were visible to the simulator user are represented) and Figure 4 (representing the weekly haemoglobin values for the entire simulation).\nMonthly haemoglobin course comparing monthly and weekly administration intervals; simulation B. 
Monthly Hb as a function of the weeks elapsed since the start of the simulation for the 2 administration intervals: monthly M (dotted line with white squares) or weekly W (solid line with black squares). N = 25\nWeekly haemoglobin course comparing monthly and weekly administration intervals; simulation B. Weekly Hb as a function of the weeks elapsed since the start of the simulation for the 2 administration intervals: monthly M (dotted line with black squares) or weekly W (solid line with black squares). The simulator user was aware of the monthly values only. N = 25.\nHaemoglobin variability during the maintenance phase is represented in Figure 5, using the SD and the average of the absolute value of the difference \"delta Hb\" between consecutive measurements. The monthly administration interval compared with the weekly one was associated with a significantly higher delta Hb (0.76 vs. 0.34 g/dL; P < 0.01). The SD was not able to detect the difference (secondary end-point). The same parameters are shown in Table 2 for the entire simulation.\nHaemoglobin variability in simulation B. Hb variability expressed as standard deviation (SD) and delta Hb (absolute value of the difference between consecutive measurements) comparing the 2 administration intervals: monthly M (black columns) and weekly W (white columns). The difference between columns in \"Delta Hb\" is significant; P < 0.01. N = 25.\nHaemoglobin values and ability score; simulation B\nMean haemoglobin with SD and delta Hb, value percentages outside the target (during the 32 weeks of the simulation) and ability score for the 6 modalities, comparing monthly to weekly administration (half-life of 24, 48 and 138 hours; monthly M and weekly W administration) (simulation B). N = 25\nTable 2 shows the percentage of values outside the target during the 32 weeks of the simulation for each modality, comparing monthly to weekly administration. Monthly administration is characterised by a lower percentage of Hb values at the target of 11-12 g/dL (18.4 vs. 26.8%; P < 0.05), but there are also fewer values above 12 or 13 g/dL (11.8 vs. 25.3%; P < 0.01 and 4.3 vs. 13.0; P < 0.05). With respect to the ability score, the only modality that stands out from the rest in a significant way is that associating the short half-life (24 hours) with the weekly administration (1.52 ± 0.70 vs. 1.24 ± 0.37; P < 0.05) (primary end point).\nThe error in determining the maintenance dose was estimated by calculating the difference between the mean weekly dose of erythropoietin used in the equilibration phase and that used in the maintenance phase (6084 ± 3057 vs. 5575 ± 1828 U/Week). The discrepancy, as illustrated in Figure 6, was found to be larger for weekly than for monthly administration (14.5 ± 15.9 vs. 9.1 ± 11.0%; P < 0.05), with differences increasing together with the half-life of the ESA.\nEpoetin dose reduction, equilibration vs. maintenance in simulation B. Error in determining the maintenance dose estimated by calculating the percentage difference between the mean erythropoietin dose used in the equilibration phase and that used in the maintenance phase (half-life of 24, 48 and 138 h; monthly M and weekly W administration). N = 25.\nWhatever the drug half-life, the weekly administration of ESA was associated with a significantly higher number of adjustments of the doses (4.9 ± 2.2 vs. 8.2 ± 4.9; P < 0.001). 
Figure 7 presents the significant direct correlation between the number of therapy modifications and haemoglobin variability, expressed as delta Hb, during the maintenance phase (secondary end-point).\nHaemoglobin variability and epoetin dose adjustments in simulation B. Correlation between the number of therapy modifications and Hb variability (expressed as delta Hb) in the maintenance phase. N = 25.\nInterestingly, both the half-life of erythropoietin and the administration interval appear to affect the age distribution of red blood cells (RBC) within the circulating population, modifying the mean RBC lifespan. The variability of the RBC lifespan is expressed in Figure 8 as SD. The variability decreases with an increase in the half-life of erythropoietin and with a decrease in the administration interval.\nRBC lifespan variability in simulation B. RBC lifespan variability (expressed as standard deviation) as a function of erythropoietin half-life and administration interval (half-life in hours, 24, 48 and 138; administration interval, weekly W or monthly M). N = 25.", "The purpose of this study was to develop a new tool to improve physicians' ability to prescribe erythropoietin stimulating agents in dialysis patients and hence to raise awareness of the pharmacological consequences resulting from the use of ESAs with a very long half-life over longer administration intervals. Our two simulations (A and B), performed before and after CERA's marketing, enable us to conclude that the simulator is user-friendly and that its use is associated with learning (less dispersion of Hb values and an improvement in the ability to respect the predetermined target).\nSimulation modality A was tested with a group of nephrologists who had never used erythropoietins with half-lives longer than 48 hours, showing that the first approach with a prolonged half-life (138 hours) and a monthly administration interval could result in overshooting.\nNevertheless, the risk of overshooting is quickly corrected through learning, and the prescribers were already able to avoid administering an excessive dose in the second equilibration phase of the same simulation.\nInterestingly, in simulation B, performed after CERA's marketing with a group of nephrologists who were aware of its long half-life (about 139 hours), no overshooting tendency was observed. The same simulation, compared with the first one, allowed a more detailed analysis of the maintenance phase following the initial equilibration phase.\nSurprisingly, the simulation showed that nephrologists were more careful and conservative with their prescriptions when faced with an administration interval of 4 weeks. As a result, monthly administration translated into a significantly lower average Hb value (10.3 vs. 11.1 g/dL; P < 0.01), as well as a lower percentage of Hb values at the 11-12 g/dL target (18.4 vs. 26.8%; P < 0.05). Accordingly, with the type of ESA and the administration interval most widely used at the time the study was carried out, the only modality characterised by a statistically superior prescription ability (an index built from the percentages of parameter values on target, below target, above target, and above the safety Hb value of 13 g/dL) compared with the others (primary end point) was the combination of the shortest half-life (24 hours) with weekly administration (ability score 1.52 vs. 1.24; P < 0.05). 
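The ability score discussed above is described only as an index built from the percentages of Hb values on target, below target, above target and above the 13 g/dL safety limit; its exact weighting is not given here. The sketch below is therefore a hypothetical reconstruction: the function name and the weights are invented for illustration and do not reproduce the authors' actual scoring rule.

```python
# Hypothetical sketch of an "ability score"; the weights are invented,
# only the four categories of Hb values come from the text above.
def ability_score(hb_values, target=(11.0, 12.0), safety_limit=13.0,
                  w_on=2.0, w_below=1.0, w_above=0.5, w_unsafe=0.0):
    n = len(hb_values)
    on_target = sum(target[0] <= h <= target[1] for h in hb_values) / n
    below = sum(h < target[0] for h in hb_values) / n
    above = sum(target[1] < h <= safety_limit for h in hb_values) / n
    unsafe = sum(h > safety_limit for h in hb_values) / n
    return w_on * on_target + w_below * below + w_above * above + w_unsafe * unsafe

# Invented example series: more time inside the 11-12 g/dL band scores higher.
print(ability_score([10.5, 11.2, 11.8, 12.4, 11.5, 13.2]))
```

With weights of this kind, a prescription pattern that keeps more values inside the target band and fewer above the safety limit receives a higher score, which is the qualitative behaviour the comparison above relies on.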
The smaller adjustment of the mean ESA dose needed between the equilibration and maintenance phases with the monthly administration interval (9.1 vs. 14.5%; P < 0.05) was probably due to the lower mean Hb value at the end of the equilibration phase when the monthly interval was used; it should therefore not be interpreted as better prescription ability but as greater prudence when faced with long half-lives.\nAs can be observed when comparing Figures 3 and 4, the weekly Hb value shows larger variability when using monthly compared to weekly administration (only the monthly values, as shown in Figure 3, were visible to the prescriber). The greatest variability in the model is generated by red blood cell sub-populations of different ages (a fact shown by the curve behaviour in Figure 4 as well as by the significant difference in the fluctuation of red blood cell lifespan during the simulation, summarised in Figure 8), caused by the single monthly dose of erythropoietin, which resulted in a non-homogeneous generation and elimination of red blood cells over time.\nThe Hb variability analysis enables a critical evaluation of the meaning of the standard deviation (average distance of individual values from the mean) compared to delta Hb (average of the absolute value of the difference between consecutive measurements). In the particular case of the simulation, delta Hb is in fact the parameter that best captures incidental changes in Hb values.\nUsing a monthly instead of a weekly administration interval, the number of modifications to the erythropoietin dose is significantly lower (4.9 vs. 8.2; P < 0.001). As suggested in the literature [19] and demonstrated by the significant direct correlation between Hb variability and the number of epoetin dose adjustments (Figure 7), this could have favourable consequences for Hb stability.", "In conclusion, bearing in mind the limitations of our model, the results obtained with our simulator could be used as a starting point for further experimental studies. An ESA prescription simulator can be a useful educational tool to raise awareness about the possible consequences of changing the medication's half-life or its administration interval. The first time users were faced with an erythropoietin compound with a half-life of 138 hours and a monthly administration interval, they were exposed to a risk of overshooting, which was corrected by training on the simulator. With respect to possible consequences, very long half-lives and monthly administration intervals translate into a conservative prescription that maintains Hb values below the desired level. As a consequence, the shortest half-life (24 hours) with the weekly administration interval was associated with the best prescription ability. The monthly prescription interval, however, as suggested in the literature, is accompanied by fewer therapeutic adjustments; in this regard, a direct correlation between Hb variability and the number of therapy modifications has been demonstrated. In our simulations, delta Hb has proven to be a better tool than the standard deviation for evaluating intra-patient Hb variability.\nComputer-based simulation tools can be particularly useful for improving prescription patterns and for testing new working hypotheses, such as determining the factors that influence Hb variability. Additional knowledge on the pharmacodynamics of ESAs is necessary to fine-tune the models and bring them closer to the level of regular clinical experience. 
However, investigating the possible consequences of the half-life and administration interval of currently marketed ESAs for Hb stability will require specific, targeted clinical trials.", "The authors declare that they have no competing interests.", "LG was involved in the study design, sample collection, analysis and interpretation of the data, in building the simulation model and in writing the report; FN and VF participated in the sample collection, analysis and interpretation of the data and in writing the paper; FR and WN participated in building the simulation model; MB helped formulate the study design and the data analysis strategy and contributed to writing the paper. All authors have read and approved the final version of the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2369/12/11/prepub\n", "The epoetin prescription simulator. Visual Basic version on Excel of the pharmacokinetic simulation tool used in the study.\nAppendix 1. Characteristics of the \"Epoetin Prescription Simulator\" tool and user's manual." ]
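As a side note to the record above, the interplay between ESA half-life and administration interval that drives the overshooting and variability findings can be illustrated with a deliberately minimal pharmacokinetic sketch. This is not the simulator's model (which also represents erythropoiesis, red-cell lifespan and the Hb response); it only superposes first-order eliminations of repeated doses to show how a 138-hour half-life dosed monthly yields a very different exposure profile from a 24-hour half-life dosed weekly. All parameter values are arbitrary.

```python
# Minimal, illustrative PK sketch: superposition of first-order eliminations.
import math

def esa_profile(dose, half_life_h, interval_weeks, weeks=32, step_h=6.0):
    """Return (times_h, concentrations) for repeated dosing with exponential decay."""
    k = math.log(2) / half_life_h              # elimination rate constant (1/h)
    interval_h = interval_weeks * 7 * 24
    total_h = weeks * 7 * 24
    dose_times = list(range(0, total_h, interval_h))
    times, conc = [], []
    t = 0.0
    while t <= total_h:
        c = sum(dose * math.exp(-k * (t - td)) for td in dose_times if td <= t)
        times.append(t)
        conc.append(c)
        t += step_h
    return times, conc

for half_life, interval in [(24, 1), (138, 1), (138, 4)]:
    _, c = esa_profile(dose=1.0, half_life_h=half_life, interval_weeks=interval)
    tail = c[len(c) // 2:]                     # roughly the steady-state portion
    print(f"t1/2 {half_life:>3} h, every {interval} week(s): "
          f"peak/trough ratio ~ {max(tail) / min(tail):.1f}")
```

The short half-life dosed weekly produces large peak-to-trough swings in drug exposure, while the long half-life dosed weekly accumulates towards a flat profile; the long half-life dosed monthly sits in between, which is consistent with the non-homogeneous red-cell generation discussed in the record.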
[ null, "methods", null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Trends in self-reported prevalence and management of hypertension, hypercholesterolemia and diabetes in Swiss adults, 1997-2007.
21332996
Switzerland has a low mortality rate from cardiovascular diseases, but little is known regarding prevalence and management of cardiovascular risk factors (CV RFs: hypertension, hypercholesterolemia and diabetes) in the general population. In this study, we assessed 10-year trends in self-reported prevalence and management of cardiovascular risk factors in Switzerland.
BACKGROUND
data from three national health interview surveys conducted between 1997 and 2007 in representative samples of the Swiss adult population (49,261 subjects overall). Self-reported CV RFs prevalence, treatment and control levels were computed. The sample was weighted to match the sex - and age distribution, geographical location and nationality of the entire adult population of Switzerland.
METHODS
self-reported prevalence of hypertension, hypercholesterolemia and diabetes increased from 22.1%, 11.9% and 3.3% in 1997 to 24.1%, 17.4% and 4.8% in 2007, respectively. Prevalence of self-reported treatment among subjects with CV RFs also increased from 52.1%, 18.5% and 50.0% in 1997 to 60.4%, 38.8% and 53.3% in 2007 for hypertension, hypercholesterolemia and diabetes, respectively. Self-reported control levels increased from 56.4%, 52.9% and 50.0% in 1997 to 80.6%, 75.1% and 53.3% in 2007 for hypertension, hypercholesterolemia and diabetes, respectively. Finally, screening during the last 12 months increased from 84.5%, 86.5% and 87.4% in 1997 to 94.0%, 94.6% and 94.1% in 2007 for hypertension, hypercholesterolemia and diabetes, respectively.
RESULTS
in Switzerland, the prevalences of self-reported hypertension, hypercholesterolemia and diabetes have increased between 1997 and 2007. Management and screening have improved, but further improvements can still be achieved as over one third of subjects with reported CV RFs are not treated.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Cross-Sectional Studies", "Diabetes Mellitus", "Female", "Humans", "Hypercholesterolemia", "Hypertension", "Male", "Mass Screening", "Middle Aged", "Self Care", "Switzerland", "Young Adult" ]
3051907
null
null
Methods
[SUBTITLE] Swiss Health Survey [SUBSECTION] Data from the Swiss Health Surveys (SHS) were obtained from the Swiss Federal Statistical Office (http://www.bfs.admin.ch). The SHS is a cross-sectional, nationwide, population-based telephone survey conducted every 5 years since 1992 (1992, 1997, 2002 and 2007) [5]. The SHS aims to track public health trends in a representative sample of the resident population of Switzerland aged 15 and over. The study population was chosen by stratified random sampling of a database of all private Swiss households with fixed line telephones. It is currently estimated that over 90% of Swiss households have a fixed telephone. The first sampling stratum consisted of the seven main regions: West "Léman", West-Central "Mittelland", Northwest, Zurich, North-Eastern, Central and South. The second stratum consisted of the cantons, and the number of households drawn was proportional to the population of the canton. In some cantons, households were oversampled to obtain accurate cantonal estimates. Extra strata were used for the two large cantons of Zurich and Bern. Within these strata, households were randomly drawn and, within each household, one member was randomly selected among all members aged 15 years and over. A letter inviting this household member to participate in the survey was sent; the member was then contacted by phone and interviewed using computer-assisted software managing both dialling and data collection. The interviews were carried out in German, French or Italian, as appropriate. People who did not speak any of these three languages were excluded from the survey. Other criteria for exclusion were: asylum seeker status, households without a fixed line telephone, very poor health status and living in a nursing home [6]. Four sampling waves were performed (Winter, Spring, Summer and Autumn). The participation rate was 71% in 1992, 85% in 1997, 64% in 2002, and 66% in 2007. More details are available at http://www.bfs.admin.ch/bfs/portal/fr/index/infothek/erhebungen__quellen/blank/blank/ess/04.html. As too many data were missing in 1992 (no information on hypertension and diabetes), only data from the three last surveys (1997, 2002 and 2007) were used. [SUBTITLE] Data collected [SUBSECTION] Three age categories were considered: 18 to 44, 45 to 64, and ≥ 65 years. Education was categorized as follows: 1) no education completed, 2) first level (primary school), 3) lower secondary level, 4) upper secondary level and 5) tertiary level, which included university and other forms of education after the secondary level. We defined "low education" (categories 1 and 2), "middle education" (categories 3 and 4), and "high education" (category 5) groups. Self-reported height and weight allowed the calculation of Body Mass Index (BMI). Three BMI categories were considered: normal (< 25 kg/m2), overweight (≥ 25 to < 30 kg/m2) and obese (≥ 30 kg/m2). Citizenship was defined as Swiss (having a Swiss passport) or foreigner. The self-reported prevalence of hypertension, hypercholesterolemia or diabetes was assessed by the questions: "Did a doctor or a health professional tell you that you have high blood pressure/a high cholesterol level/diabetes?", respectively. Subjects were considered as treated for hypertension, hypercholesterolemia or diabetes if they answered positively to the questions "Are you treated for blood pressure/to decrease your cholesterol levels/for diabetes?", respectively. The self-reported prevalence of antihypertensive, hypolipidaemic or antidiabetic treatment was calculated as the ratio of the number of subjects reporting being treated to the number of subjects reporting the disease (i.e. the number of subjects reporting being treated for hypertension divided by the number of subjects reporting being hypertensive). A further question on doctor-prescribed medicines was asked. All subjects being treated were considered irrespective of the answer to the latter question. Adequate treatment of hypertension, hypercholesterolemia or diabetes was considered if the subjects answered "normal or too low" to the questions: "Currently, how is your blood pressure/cholesterol level/glycaemia?", respectively. The self-reported prevalence of adequate CV RF management was calculated as the ratio of subjects reporting being treated and answering "normal or too low" divided by the overall number of subjects reporting being treated. Missing answers were considered as negative (i.e. high levels). As the questionnaires changed slightly between surveys, some questions were missing, e.g. the question on control of hypertension was not asked in 2002. All subjects, irrespective of their status, were asked when they last had their blood pressure, cholesterol or glucose levels measured. Adequate screening was considered if the measurement had been performed during the last 12 months. [SUBTITLE] Statistical analysis [SUBSECTION] Statistical analysis was conducted using Stata version 10 (Statacorp, College Station, TX, USA) and SAS Enterprise Guide version 4.1 (SAS Inc, Cary, NC, USA). Results were expressed as number of subjects (percentage) or mean ± standard deviation. Comparisons were performed using chi-square tests for categorical data or analysis of variance (ANOVA) for continuous data. A first analysis was conducted using the original data. A second analysis was conducted after probability weighting each subject according to the formula w_ih = Hi · Nh / nh^n, where Nh is the average number of telephone numbers in stratum h (h = 29), Hi is the household size, i.e. the number of subjects aged 15 and over living in household i, and nh is the number of telephone numbers in the sample Sh corresponding to stratum h, raised to the power n (n = sample size in stratum h). Weights were further corrected taking into account the percentage of nonresponders by raking ratio estimation [7]. Weighting partly allowed the correction for bias, i.e. subjects with given characteristics who are under-represented in the original sample were attributed a higher weight [8]. The sum of weights thus corresponds to the Swiss adult population for the period considered. For simplicity, the weighted results will be presented and commented on, as the conclusions arising from the unweighted data are similar (see Additional file 1). A third analysis using multivariate logistic regression adjusting for age group, sex, nationality, education and BMI classes was conducted to assess trends during the study period, using either the original (see Additional file 1) or the weighted data (presented here). The results were expressed as odds ratios and [95% confidence intervals]. Statistical significance was considered for p < 0.05.
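For readers who want to see the weighting step as computation, the sketch below applies the formula to a few invented records. The extracted text leaves the denominator's exponent ambiguous, so the sketch uses the plain inverse-selection-probability reading w_ih = Hi · Nh / nh (one person drawn per household, households drawn from the stratum's telephone frame); the strata, frame sizes and households are made up, and the subsequent raking adjustment for non-response described in the text is not reproduced.

```python
# Hedged sketch only: invented strata and households, simplified weight formula.
records = [
    # (stratum, household size H_i)
    ("Zurich", 3), ("Zurich", 1), ("Leman", 2), ("Leman", 4),
]
N_h = {"Zurich": 600_000, "Leman": 450_000}  # telephone numbers in the stratum frame
n_h = {"Zurich": 2_500, "Leman": 1_800}      # telephone numbers drawn in the sample

# w_ih = H_i * N_h / n_h: larger households and under-sampled strata get larger weights.
weights = {i: H_i * N_h[s] / n_h[s] for i, (s, H_i) in enumerate(records)}
print(weights)
```

After the non-response correction, the sum of such weights is calibrated so that it matches the Swiss adult population, as stated above.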
null
null
null
null
[ "Background", "Swiss Health Survey", "Data collected", "Statistical analysis", "Results", "Characteristics of the subjects", "Hypertension", "Hypercholesterolemia", "Diabetes", "Discussion", "Hypertension", "Hypercholesterolemia", "Diabetes", "Limitations", "Conclusion", "Conflict of Interest Statement", "Authors' contributions", "Pre-publication history" ]
[ "Cardiovascular disease is the main cause of premature death in industrialized countries, and its incidence is increasing worldwide [1]. In Switzerland, between 1970 and 2004, mortality rates from ischemic heart and cerebrovascular disease have decreased by circa 50% in men, and by a third by women [2]. Whether those decreases are due to a decrease in cardiovascular risk factors prevalence and/or management is currently unknown.\nThere are few data regarding trends of cardiovascular risk factors in the Swiss population. The MONICA study showed an increase between 1984 and 1993 in the prevalence of hypertension in men and a decrease in women. For the same time period, a decrease in the prevalence of hypercholesterolemia (defined as a total cholesterol level > 6.5 mmol/L) was also reported for both genders [3]. More recently, data from Geneva showed a decrease in the prevalence of hypertension for both genders between 1993 and 2000. For the same period, an increase in the prevalence of hypercholesterolemia was reported [4]. Still, it is not known if the results of this study also apply to the whole country. Thus, we used the data from the National Health Surveys conducted in representative samples of the Swiss population to assess the trends in self-reported prevalence, treatment and control of hypertension, hypercholesterolemia and diabetes in Switzerland, as well as to identify the groups at higher risk.", "Data from the Swiss Health Surveys (SHS) were obtained from the Swiss Federal Statistical Office (http://www.bfs.admin.ch). The SHS is a cross-sectional, nationwide, population-based telephone survey conducted every 5 years since 1992 (1992, 1997, 2002 and 2007) [5]. The SHS aims to track public health trends in a representative sample of the resident population of Switzerland aged 15 and over.\nThe study population was chosen by stratified random sampling of a database of all private Swiss households with fixed line telephones. It is currently estimated that over 90% of the Swiss households have fixed telephones. The first sampling stratum consisted of the seven main regions: West \"Léman\", West-Central \"Mittelland\", Northwest, Zurich, North-Eastern, Central and South. The second stratum consisted of the cantons, and the number of households drawn was proportional to the population of the canton. In some cantons, oversampling of the households was made to obtain accurate cantonal estimates. Extra strata were used for two large cantons of Zurich and Bern. Within these strata, households were randomly drawn and, within the household, one member was randomly selected within all members aged 15 years and over. A letter inviting this household member to participate in the survey was sent, then contacted by phone and interviewed using computer-assisted software managing both dialling and data collection. The interviews were carried out in German, French or Italian, as appropriate. People who did not speak any of these three languages were excluded from the survey. Other criteria for exclusion were: asylum seeker status, households without a fixed line telephone, very poor health status and living in a nursing home [6]. Four sampling waves were performed (Winter, Spring, Summer and Autumn). Participation rate was 71% in 1992, 85% in 1997, 64% in 2002, and 66% in 2007. More details available at http://www.bfs.admin.ch/bfs/portal/fr/index/infothek/erhebungen__quellen/blank/blank/ess/04.html. 
As too many data were missing in 1992 (no information on hypertension and diabetes), only data for the three last surveys (1997, 2002 and 2007) was used.", "Three age categories were considered: 18 to 44, 45 to 64, and ≥ 65 years. Education was categorized as follows: 1) no education completed, 2) first level (primary school), 3) lower secondary level, 4) upper secondary level and 5) tertiary level, which included university and other forms of education after the secondary level. We defined \"low education\" (categories 1 and 2), \"middle education\" (categories 3 and 4), and \"high education\" (category 5) groups. Self reported height and weight allowed the calculation of Body Mass Index (BMI). Three BMI categories were considered: normal (< 25 kg/m2), overweight (≥ 25 to < 30 kg/m2) and obese (≥ 30 kg/m2). Citizenship was defined as Swiss (having a Swiss passport) or foreigner.\nThe self-reported prevalence of hypertension, hypercholesterolemia or diabetes was assessed by the questions: \"Did a doctor or a health professional tell you that you have high blood pressure/a high cholesterol level/diabetes?\", respectively. Subjects were considered as treated for hypertension, hypercholesterolemia or diabetes if they answered positively to the questions \"Are you treated for blood pressure/to decrease your cholesterol levels/for diabetes?\" respectively. Self-reported prevalence of antihypertensive, hypolipidaemic or antidiabetic treatment was calculated as the ratio of subjects reporting being treated by the number of subjects reporting the disease (i.e. number of subjects reported being treated for hypertension divided by the number of subjects reporting being hypertensive). A further question on doctor-prescribed medicines was asked. All subjects being treated were considered irrespective of the answer to the latter question.\nAdequate treatment of hypertension, hypercholesterolemia or diabetes was considered if the subjects answered \"normal or too low\" to the questions: \"Currently, how is your blood pressure/cholesterol level/glycaemia?\" respectively. Self-reported prevalence of adequate CV RF management was calculated as the ratio of subjects reporting being treated and answering \"normal or too low\" divided by the overall number of subjects reporting being treated. Missing answers were considered as negative (i.e. high levels). As the questionnaires changed slightly between surveys, some questions were missing, i.e., the question on control of hypertension was not asked in 2002.\nAll subjects, irrespective of their status, were asked when they last had their blood pressure, cholesterol or glucose levels measured. Adequate screening was considered if the measurement had been performed during the last 12 months.", "Statistical analysis was conducted using Stata version 10 (Statacorp, College Station, TX, USA) and SAS Enterprise Guide version 4.1 (SAS Inc, Cary, NC; USA). Results were expressed as number of subjects and (percentage) or mean ± standard deviation. Comparisons were performed using chi-square for categorical data or analysis of variance (ANOVA) for continuous data. A first analysis was conducted using the original data. A second analysis was conducted after probability weighting each subject according to the formula w_ih = Hi · Nh / nh^n, where Nh is the average number of telephone numbers in stratum h (h = 29), Hi is the household size, i.e. 
the number of subjects aged 15 and over living in household i, and nhn is the number of telephone numbers in the sample Sh corresponding to stratum h to the power n (n = sample size in stratum h). Weights were further corrected taking into account the percentage of nonresponders by raking ratio estimation [7]. Weighting partly allowed the correction for bias, i.e. subjects with given characteristics who are under-represented in the original sample were attributed a higher weight [8]. The sum of weights thus corresponds to the Swiss adult population for the period considered. For simplicity, the weighted results will be presented and commented, as the conclusions arising from the unweighted data are similar (see Additional file 1). A third analysis using multivariate logistic regression adjusting for age group, sex, nationality, education and BMI classes was conducted to assess trends during the study period, using either the original (see Additional file 1) or the weighted data (presented here). The results were expressed as Odds ratio and [95% confidence interval]. Statistical significance was considered for p < 0.05.", "[SUBTITLE] Characteristics of the subjects [SUBSECTION] The characteristics of subjects according to survey are summarized in table 1. Between 1997 and 2007 mean age increased and the percentage of subjects with low or middle education decreased while the percentage of subjects with high education increased.\ncharacteristics of the samples\nResults are expressed as weighted percentage and average ± standard deviation. § no education completed + first level (primary school). §§ lower + upper secondary level. §§§ tertiary level + other education after secondary level.\nThe characteristics of subjects according to survey are summarized in table 1. Between 1997 and 2007 mean age increased and the percentage of subjects with low or middle education decreased while the percentage of subjects with high education increased.\ncharacteristics of the samples\nResults are expressed as weighted percentage and average ± standard deviation. § no education completed + first level (primary school). §§ lower + upper secondary level. §§§ tertiary level + other education after secondary level.\n[SUBTITLE] Hypertension [SUBSECTION] The trends in self-reported prevalence of hypertension are shown in table 2. Between 1997 and 2007, self-reported hypertension in the Swiss general population increased, and this was further confirmed after multivariate adjustment (table 3). Subjects aged over 65 years or obese had a higher odds ratio, while subjects with university level or foreigners had a lower odds ratio of reporting being hypertensive (table 3). Self-reported treatment increased (table 2); on multivariate analysis, subjects aged over 45 or obese had a higher odds ratio, while women and foreigners had a lower odds ratio of reporting being treated (table 3). Self-reported prevalence of treatment prescribed by the doctor was 96.0%, 99.4% and 99.6% while the daily taking of an antihypertensive drug was 89.6%, 95.3% and 97.1% in 1997, 2002 and 2007, respectively. The self-reported prevalence of controlled hypertension increased and the self-reported prevalence of uncontrolled and untreated hypertension decreased (table 2); on multivariate adjustment, subjects over 65 presented a higher odds ratio of reporting being controlled (table 3). 
Hypertension screening also increased (table 2), and on multivariate analysis, men, foreigners, subjects aged over 45, overweight or obese had a higher odds ratio of being screened (table 3).\ntrends in self-reported prevalence and management of hypertension in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being hypertensive; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of hypertension in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported hypertension; **, among treated subjects. -, data not available.\nThe trends in self-reported prevalence of hypertension are shown in table 2. Between 1997 and 2007, self-reported hypertension in the Swiss general population increased, and this was further confirmed after multivariate adjustment (table 3). Subjects aged over 65 years or obese had a higher odds ratio, while subjects with university level or foreigners had a lower odds ratio of reporting being hypertensive (table 3). Self-reported treatment increased (table 2); on multivariate analysis, subjects aged over 45 or obese had a higher odds ratio, while women and foreigners had a lower odds ratio of reporting being treated (table 3). Self-reported prevalence of treatment prescribed by the doctor was 96.0%, 99.4% and 99.6% while the daily taking of an antihypertensive drug was 89.6%, 95.3% and 97.1% in 1997, 2002 and 2007, respectively. The self-reported prevalence of controlled hypertension increased and the self-reported prevalence of uncontrolled and untreated hypertension decreased (table 2); on multivariate adjustment, subjects over 65 presented a higher odds ratio of reporting being controlled (table 3). Hypertension screening also increased (table 2), and on multivariate analysis, men, foreigners, subjects aged over 45, overweight or obese had a higher odds ratio of being screened (table 3).\ntrends in self-reported prevalence and management of hypertension in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being hypertensive; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of hypertension in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported hypertension; **, among treated subjects. -, data not available.\n[SUBTITLE] Hypercholesterolemia [SUBSECTION] Self-reported prevalence of hypercholesterolemia increased considerably between 1997 and 2007 (table 4) and this increase was further confirmed by multivariate analysis (table 5). Women, subjects over 45 years, with higher education or presenting with overweight or obesity had higher odds of reporting being hypercholesterolemic (table 5).\ntrends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being hypercholesterolemic; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. 
*, among subjects with reported hypercholesterolemia; **, among treated subjects.-, data not available.\nSelf-reported hypolipidemic drug treatment increased between 1997 and 2007 (table 4); multivariate analysis showed women, older subjects, subjects with a higher education or presenting with overweight or obesity to have higher odds of being treated (table 5). In 2007, 99.1% of hypolipidemic drug treatment was prescribed by the doctor and daily medication use was reported by 94.8% of treated subjects. The self-reported prevalence of controlled hypercholesterolemia increased (table 4); on multivariate analysis, women, subjects over 45 years, subjects with a medium and high education had a higher odds ratio, while foreigners had a lower odds ratio of reporting being adequately controlled (table 5). Conversely, the self-reported prevalence of uncontrolled and untreated hypercholesterolemia remained stable (table 4). Hypercholesterolemia screening increased (table 4); on multivariate analysis, a higher odds ratio of being screened was found for foreigners, subjects aged over 45, and in overweight or obese subjects, while women, subjects with a medium and a high education had a lower odds ratio of being screened (table 5).\nSelf-reported prevalence of hypercholesterolemia increased considerably between 1997 and 2007 (table 4) and this increase was further confirmed by multivariate analysis (table 5). Women, subjects over 45 years, with higher education or presenting with overweight or obesity had higher odds of reporting being hypercholesterolemic (table 5).\ntrends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being hypercholesterolemic; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported hypercholesterolemia; **, among treated subjects.-, data not available.\nSelf-reported hypolipidemic drug treatment increased between 1997 and 2007 (table 4); multivariate analysis showed women, older subjects, subjects with a higher education or presenting with overweight or obesity to have higher odds of being treated (table 5). In 2007, 99.1% of hypolipidemic drug treatment was prescribed by the doctor and daily medication use was reported by 94.8% of treated subjects. The self-reported prevalence of controlled hypercholesterolemia increased (table 4); on multivariate analysis, women, subjects over 45 years, subjects with a medium and high education had a higher odds ratio, while foreigners had a lower odds ratio of reporting being adequately controlled (table 5). Conversely, the self-reported prevalence of uncontrolled and untreated hypercholesterolemia remained stable (table 4). 
Hypercholesterolemia screening increased (table 4); on multivariate analysis, a higher odds ratio of being screened was found for foreigners, subjects aged over 45, and in overweight or obese subjects, while women, subjects with a medium and a high education had a lower odds ratio of being screened (table 5).\n[SUBTITLE] Diabetes [SUBSECTION] Self-reported prevalence of diabetes increased between 1997 and 2007 (table 6), a finding confirmed by multivariate analysis (table 7) which also showed men and subjects with increasing age or BMI to have a higher odds ratio, while subjects with middle or high education had a lower odds ratio of reporting being diabetic. Self-reported prevalence of diabetes treatment increased (table 6); multivariate analysis showed men, subjects aged over 45 or presenting with overweight or obesity to have a higher odds ratio, while foreigners had a lower odds ratio of being treated (table 7). Self-reported diabetes control also increased and the self-reported prevalence of uncontrolled and untreated diabetes decreased (table 6); multivariate analysis showed subjects aged 45-64 years, presenting with overweight or obesity or foreigners to have a lower odds ratio, while high educated subjects had a higher odds ratio of being controlled (table 7). Finally, diabetes screening increased during the study period (table 6) and multivariate analysis showed foreigners, subjects aged over 45, overweight or obese to have a higher odds ratio, while men and subjects with medium or high education to have a lower odds ratio of being screened (table 7).\ntrends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being diabetic; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported diabetes; **, among treated subjects. -, data not available.\nSelf-reported prevalence of diabetes increased between 1997 and 2007 (table 6), a finding confirmed by multivariate analysis (table 7) which also showed men and subjects with increasing age or BMI to have a higher odds ratio, while subjects with middle or high education had a lower odds ratio of reporting being diabetic. Self-reported prevalence of diabetes treatment increased (table 6); multivariate analysis showed men, subjects aged over 45 or presenting with overweight or obesity to have a higher odds ratio, while foreigners had a lower odds ratio of being treated (table 7). Self-reported diabetes control also increased and the self-reported prevalence of uncontrolled and untreated diabetes decreased (table 6); multivariate analysis showed subjects aged 45-64 years, presenting with overweight or obesity or foreigners to have a lower odds ratio, while high educated subjects had a higher odds ratio of being controlled (table 7). Finally, diabetes screening increased during the study period (table 6) and multivariate analysis showed foreigners, subjects aged over 45, overweight or obese to have a higher odds ratio, while men and subjects with medium or high education to have a lower odds ratio of being screened (table 7).\ntrends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. 
*, among subjects reporting being diabetic; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported diabetes; **, among treated subjects. -, data not available.", "The characteristics of subjects according to survey are summarized in table 1. Between 1997 and 2007 mean age increased and the percentage of subjects with low or middle education decreased while the percentage of subjects with high education increased.\ncharacteristics of the samples\nResults are expressed as weighted percentage and average ± standard deviation. § no education completed + first level (primary school). §§ lower + upper secondary level. §§§ tertiary level + other education after secondary level.", "The trends in self-reported prevalence of hypertension are shown in table 2. Between 1997 and 2007, self-reported hypertension in the Swiss general population increased, and this was further confirmed after multivariate adjustment (table 3). Subjects aged over 65 years or obese had a higher odds ratio, while subjects with university level or foreigners had a lower odds ratio of reporting being hypertensive (table 3). Self-reported treatment increased (table 2); on multivariate analysis, subjects aged over 45 or obese had a higher odds ratio, while women and foreigners had a lower odds ratio of reporting being treated (table 3). Self-reported prevalence of treatment prescribed by the doctor was 96.0%, 99.4% and 99.6% while the daily taking of an antihypertensive drug was 89.6%, 95.3% and 97.1% in 1997, 2002 and 2007, respectively. The self-reported prevalence of controlled hypertension increased and the self-reported prevalence of uncontrolled and untreated hypertension decreased (table 2); on multivariate adjustment, subjects over 65 presented a higher odds ratio of reporting being controlled (table 3). Hypertension screening also increased (table 2), and on multivariate analysis, men, foreigners, subjects aged over 45, overweight or obese had a higher odds ratio of being screened (table 3).\ntrends in self-reported prevalence and management of hypertension in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being hypertensive; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of hypertension in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported hypertension; **, among treated subjects. -, data not available.", "Self-reported prevalence of hypercholesterolemia increased considerably between 1997 and 2007 (table 4) and this increase was further confirmed by multivariate analysis (table 5). Women, subjects over 45 years, with higher education or presenting with overweight or obesity had higher odds of reporting being hypercholesterolemic (table 5).\ntrends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being hypercholesterolemic; **, among treated subjects. 
-, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported hypercholesterolemia; **, among treated subjects.-, data not available.\nSelf-reported hypolipidemic drug treatment increased between 1997 and 2007 (table 4); multivariate analysis showed women, older subjects, subjects with a higher education or presenting with overweight or obesity to have higher odds of being treated (table 5). In 2007, 99.1% of hypolipidemic drug treatment was prescribed by the doctor and daily medication use was reported by 94.8% of treated subjects. The self-reported prevalence of controlled hypercholesterolemia increased (table 4); on multivariate analysis, women, subjects over 45 years, subjects with a medium and high education had a higher odds ratio, while foreigners had a lower odds ratio of reporting being adequately controlled (table 5). Conversely, the self-reported prevalence of uncontrolled and untreated hypercholesterolemia remained stable (table 4). Hypercholesterolemia screening increased (table 4); on multivariate analysis, a higher odds ratio of being screened was found for foreigners, subjects aged over 45, and in overweight or obese subjects, while women, subjects with a medium and a high education had a lower odds ratio of being screened (table 5).", "Self-reported prevalence of diabetes increased between 1997 and 2007 (table 6), a finding confirmed by multivariate analysis (table 7) which also showed men and subjects with increasing age or BMI to have a higher odds ratio, while subjects with middle or high education had a lower odds ratio of reporting being diabetic. Self-reported prevalence of diabetes treatment increased (table 6); multivariate analysis showed men, subjects aged over 45 or presenting with overweight or obesity to have a higher odds ratio, while foreigners had a lower odds ratio of being treated (table 7). Self-reported diabetes control also increased and the self-reported prevalence of uncontrolled and untreated diabetes decreased (table 6); multivariate analysis showed subjects aged 45-64 years, presenting with overweight or obesity or foreigners to have a lower odds ratio, while high educated subjects had a higher odds ratio of being controlled (table 7). Finally, diabetes screening increased during the study period (table 6) and multivariate analysis showed foreigners, subjects aged over 45, overweight or obese to have a higher odds ratio, while men and subjects with medium or high education to have a lower odds ratio of being screened (table 7).\ntrends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being diabetic; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported diabetes; **, among treated subjects. -, data not available.", "Since the MONICA study in the nineties [3] and the Bus Santé study in Geneva [4], there has been little information on trends of hypertension, hypercholesterolemia and diabetes in Switzerland. 
The data from the Swiss National Health Surveys thus provide important information regarding the self-reported prevalence and management of those cardiovascular risk factors in the Swiss population. As the sampling frame covers about 90% of Swiss households and the participation rate was relatively high for all studies, this study is a good reflect of the Swiss situation. The fact that the weighted and unweighted results were quite similar also suggests the absence of important bias.\n[SUBTITLE] Hypertension [SUBSECTION] Prevalence of self-reported hypertension increased between 1997 and 2007 and was comparable to those reported using measured data by US [9] and German [10] studies and with other studies using self-reported data (table 8). This increase could be due either to an increase in the true prevalence of hypertension, to a more widespread screening, or both. The second hypothesis might be more likely, as the prevalence of subjects reporting having their blood pressure measured during the previous 12 months also increased during this period, a finding already reported in the literature [11]. Another likely determinant is decrease in the thresholds to define hypertension from ≥ 160/95 mmHg in 1993 [3] to ≥ 140/90 mmHg afterwards. Still, self-reported prevalence rates are probably underestimated, as a recent study conducted in Lausanne has shown that less than two thirds of hypertensive subjects are actually aware of their condition [12].\ntrends in self-reported prevalence of cardiovascular risk factors in Switzerland and in other countries\nReferences: Switzerland 1, current study; Spain, [21]; Greece, [28]; USA, [29]; France, [18].\nA higher prevalence of reported hypertension was found among subjects aged over 45 years or presenting with overweight or obesity. Those findings are in agreement with the literature [9,13,14] and might be due to an increased screening with age or because of the presence of other risk factors [15]. Conversely, foreigners had a lower self-reported prevalence of hypertension, and this could not be attributed to a lower screening frequency or to differences in age or BMI status. Possible explanations include differences in dietary or genetic background, but further studies are needed to better assess this point. The self-reported prevalence of hypertension was also inversely related with educational level, a finding in agreement with the literature [16]. This finding might be related to a better lifestyle, namely regarding dietary salt intake, although data from the Geneva study showed no improvement in salt intake in the general population [17].\nSelf-reported treatment of hypertension increased during the study period, suggesting an improvement in the management of this risk factor. Still, in 2007, only six out of ten hypertensive subjects indicated they were on antihypertensive treatment. Although the remaining 40% might be under nonpharmacological antihypertensive measures such as diet or specific lifestyle modifications, our findings suggest that there is still room for improvement regarding pharmacological management of hypertension, a finding reported previously [12].\nIn agreement with objectively measured data from the US [9,13] and France [18], an increase in self-reported control of hypertension was found for the period 1997-2007. This increase might be related to an improvement in antihypertensive treatment, namely the appearance of more potent and new antihypertensive drugs, and/or an improvement of subject's compliance. 
Still, our results are probably overestimated because some treated subjects might report being controlled just because they are taking antihypertensive drugs. Indeed, a previous study conducted in Lausanne showed that a consistent fraction of treated hypertensive subjects actually presented with high blood pressure levels [12]. Hence, it is likely that the true prevalence of controlled hypertension in Switzerland might actually be lower. Nevertheless, the fact that the self-reported prevalence of uncontrolled and untreated hypertension also decreased suggests that the overall management of hypertension in the Swiss population is improving.\nPrevalence of self-reported hypertension increased between 1997 and 2007 and was comparable to those reported using measured data by US [9] and German [10] studies and with other studies using self-reported data (table 8). This increase could be due either to an increase in the true prevalence of hypertension, to a more widespread screening, or both. The second hypothesis might be more likely, as the prevalence of subjects reporting having their blood pressure measured during the previous 12 months also increased during this period, a finding already reported in the literature [11]. Another likely determinant is decrease in the thresholds to define hypertension from ≥ 160/95 mmHg in 1993 [3] to ≥ 140/90 mmHg afterwards. Still, self-reported prevalence rates are probably underestimated, as a recent study conducted in Lausanne has shown that less than two thirds of hypertensive subjects are actually aware of their condition [12].\ntrends in self-reported prevalence of cardiovascular risk factors in Switzerland and in other countries\nReferences: Switzerland 1, current study; Spain, [21]; Greece, [28]; USA, [29]; France, [18].\nA higher prevalence of reported hypertension was found among subjects aged over 45 years or presenting with overweight or obesity. Those findings are in agreement with the literature [9,13,14] and might be due to an increased screening with age or because of the presence of other risk factors [15]. Conversely, foreigners had a lower self-reported prevalence of hypertension, and this could not be attributed to a lower screening frequency or to differences in age or BMI status. Possible explanations include differences in dietary or genetic background, but further studies are needed to better assess this point. The self-reported prevalence of hypertension was also inversely related with educational level, a finding in agreement with the literature [16]. This finding might be related to a better lifestyle, namely regarding dietary salt intake, although data from the Geneva study showed no improvement in salt intake in the general population [17].\nSelf-reported treatment of hypertension increased during the study period, suggesting an improvement in the management of this risk factor. Still, in 2007, only six out of ten hypertensive subjects indicated they were on antihypertensive treatment. Although the remaining 40% might be under nonpharmacological antihypertensive measures such as diet or specific lifestyle modifications, our findings suggest that there is still room for improvement regarding pharmacological management of hypertension, a finding reported previously [12].\nIn agreement with objectively measured data from the US [9,13] and France [18], an increase in self-reported control of hypertension was found for the period 1997-2007. 
Hypercholesterolemia

Self-reported prevalence of hypercholesterolemia was within the range of values published for other countries using self-reported data (table 8), but lower than the values obtained in a smaller Swiss population-based study using objectively measured data (table 9). Still, and in agreement with previous Swiss [4], French [18] and German [10] studies based on objectively measured data and with studies using self-reported data, the self-reported prevalence of hypercholesterolemia increased between 1997 and 2007. As for hypertension, possible explanations include a true increase in the prevalence of hypercholesterolemia, an increase in screening, a decrease in the threshold values used to define hypercholesterolemia [19], or a combination of these. Interestingly, cholesterol screening increased considerably during the study period, and the prevalence of subjects reporting having had their blood cholesterol levels assessed during the previous 12 months was actually higher than in other studies [11]. Still, in 2007, the self-reported prevalence of hypercholesterolemia in Switzerland was lower than in the USA [11] or France [18]. Two explanations are possible: the prevalence of hypercholesterolemia may indeed be lower in Switzerland, or screening by Swiss GPs may be less frequent. Indeed, it has been shown that only 75% of Swiss physicians consider that screening for high cholesterol is very important, versus 93% for blood pressure [20]. These differences could partly explain the lower percentage of self-reported hypercholesterolemia relative to hypertension.

comparison of prevalences of hypertension and hypercholesterolemia based on self-reported and measured data for subjects aged 35-75, Switzerland
Results are expressed as percentage. *, among subjects with the selected risk factor; **, among treated subjects. CoLaus data from [12] for hypertension and from [23] for hypercholesterolemia.

A higher self-reported prevalence of hypercholesterolemia was found among subjects aged over 45 years, with high education or presenting with overweight or obesity, in agreement with some studies [16,18] but not with others [21]. Still, our results suggest that, contrary to hypertension, a higher education is related to a higher self-reported prevalence of hypercholesterolemia. This higher self-reported prevalence is not due to higher screening rates among highly educated subjects, as their odds of being screened were significantly lower (table 4).
A possible explanation is that highly educated subjects are better informed about their medical situation [22], but again further studies are needed to better assess this point.

Self-reported hypolipidemic treatment doubled during the study period, in line with French [18] and German [10] studies. Nevertheless, in 2007, only four out of ten Swiss patients who had been told they presented with hypercholesterolemia reported being treated, a value similar to the one reported in the CoLaus study [23] (table 9). Although diet has been shown to lower cholesterol levels [24], it is unlikely that 60% of patients diagnosed with hypercholesterolemia are on a diet alone. Hence, and as for hypertension, our findings suggest that there is room for improvement regarding the pharmacological management of hypercholesterolemia.

An increase in self-reported control of hypercholesterolemia was found, as also reported in other countries [18,25]. Two hypotheses are possible: an improvement in hypolipidemic drugs and/or in subjects' compliance. Again, these results are certainly overestimated, either because subjects believed they were controlled just because they were treated, or because their GP considered them as treated despite borderline high values [26].
Diabetes

The self-reported prevalence of diabetes increased during the period 1997-2007. Still, in 2007, the self-reported prevalence was lower than reported for France [18] or the US [27], probably due to the self-reported (instead of objectively measured) diabetic status. Still, comparing our data with self-reported data from other countries [18,21,28,29] led to similar conclusions (table 8). Possible explanations include the relatively low prevalence of obesity in Switzerland [30,31], although other factors might be at play. Interestingly, the increase in the self-reported prevalence of diabetes persisted after adjustment for overweight and obesity, suggesting that other factors might intervene [32], namely better screening. Indeed, the prevalence of subjects reporting having had their blood glucose assessed during the previous 12 months increased between 1997 and 2007, a finding in agreement with other studies [33,34].

Also in agreement with the literature [32], a higher self-reported prevalence of diabetes was found among men, subjects aged over 45 years and subjects presenting with overweight or obesity. Similarly, and as reported previously [16,32], a lower prevalence of self-reported diabetes was found among subjects with a high educational level. Highly educated subjects may have greater financial means to adapt their lifestyle, e.g., to buy higher quality food [35] or to exercise more, thus preventing the occurrence of diabetes. They might also know their health status better despite being screened less often, but further studies are needed to better assess this point.

Self-reported antidiabetic treatment increased, a trend also reported for France [18] and Italy [36]. Again, in 2007, only half of the subjects diagnosed with diabetes reported being treated, and, as for hypercholesterolemia, it is rather unlikely that the remaining half was only on diet.
Overall, our data indicate that, in Switzerland, many diabetic subjects are probably undertreated, and that further efforts should be made to implement (non)pharmacological treatment.

The increase in self-reported diabetic control found in this study has also been reported elsewhere [25]. This improvement is probably due to a change in therapies and/or an improvement in subjects' compliance. Still, in 2007, one third of treated diabetic subjects reported having high glycaemia, and this figure is certainly underestimated because many treated subjects believe they are controlled simply because they receive a drug. Nevertheless, the fact that the prevalence of uncontrolled and untreated diabetes also decreased suggests that the overall management of diabetes in the Swiss population is improving.
Limitations

First, and as indicated previously, the self-reporting of cardiovascular risk factors might underestimate the real prevalence in the population, as can be inferred from the results of table 9. Still, it represents the result of the screening done by doctors and health professionals and has been used in other studies for the assessment of trends [21,28,29,37]. Further, it has been shown that self-reported data on cardiovascular risk factors are valid and can be used to assess prevalence rates in most cases [38,39]. Second, the increases occurred mainly between 2002 and 2007, when the sample became markedly more educated, raising the issue of a possible selection bias, as more educated participants tend to respond more readily. The presence of other unmeasured confounders, such as changes in dietary intake, could also influence the results. Since other unmeasured predictors of disease treatment and control were likely to change along with education, the trends in treatment and control are therefore likely to be biased away from the null. Still, in the absence of another nationally representative sample, this study provides the best estimates regarding the self-reported prevalence and management of cardiovascular risk factors. Third, the fact that the unweighted and weighted estimates are similar does not remove the potential for response bias. Still, the weighting procedure gives strata that are less represented in the sample (e.g. young males) a higher weight, thus partially reducing this bias. It should be noted that some studies only standardized on age [29] or even made no adjustment [28], while in this study weighting included gender, age, geographical location and nationality [8]. Fourth, although several studies conducted in the USA [40,41] indicate a high level of undiagnosed hypertension and hypercholesterolemia among uninsured subjects, this is rather unlikely to occur in Switzerland, as all subjects living in Switzerland have health insurance (federal law 832.10 of March 18, 1994, available at http://www.admin.ch/ch/f/rs/c832_10.html). Still, as a nontrivial percentage of subjects with hypertension, dyslipidemia or diabetes might be unaware of their status [12,42], our prevalence estimates might be underestimated. Finally, no information was available regarding nonpharmacological treatment of cardiovascular risk factors, so it was not possible to assess the percentage of subjects not treated with drugs but with other nonpharmacological measures.
Conclusion

In Switzerland, the self-reported prevalences of hypertension, hypercholesterolemia and diabetes increased between 1997 and 2007. Management and screening have improved, but further improvements can still be achieved, as over one third of subjects with reported cardiovascular risk factors are not treated.

Conflict of Interest Statement

The authors declare that they have no conflict of interest.

Authors' contributions

DE performed the statistical analysis and wrote most of the manuscript. PMV designed the study (data analysis), obtained the data, performed some statistical analyses and wrote part of the manuscript. FP and PV participated in the study design and coordination and revised the manuscript. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/11/114/prepub
Sections: Background; Methods (Swiss Health Survey, Data collected, Statistical analysis); Results (Characteristics of the subjects, Hypertension, Hypercholesterolemia, Diabetes); Discussion (Hypertension, Hypercholesterolemia, Diabetes, Limitations); Conclusion; Conflict of Interest Statement; Authors' contributions; Pre-publication history; Supplementary Material.
Background

Cardiovascular disease is the main cause of premature death in industrialized countries, and its incidence is increasing worldwide [1]. In Switzerland, between 1970 and 2004, mortality rates from ischemic heart disease and cerebrovascular disease decreased by about 50% in men and by a third in women [2]. Whether these decreases are due to a decrease in the prevalence of cardiovascular risk factors, to improved management, or both is currently unknown.

There are few data regarding trends in cardiovascular risk factors in the Swiss population. The MONICA study showed an increase in the prevalence of hypertension in men and a decrease in women between 1984 and 1993. For the same time period, a decrease in the prevalence of hypercholesterolemia (defined as a total cholesterol level > 6.5 mmol/L) was also reported for both genders [3]. More recently, data from Geneva showed a decrease in the prevalence of hypertension for both genders between 1993 and 2000. For the same period, an increase in the prevalence of hypercholesterolemia was reported [4]. Still, it is not known whether the results of this study also apply to the whole country. Thus, we used the data from the National Health Surveys conducted in representative samples of the Swiss population to assess the trends in self-reported prevalence, treatment and control of hypertension, hypercholesterolemia and diabetes in Switzerland, as well as to identify the groups at higher risk.

Methods

Swiss Health Survey

Data from the Swiss Health Surveys (SHS) were obtained from the Swiss Federal Statistical Office (http://www.bfs.admin.ch). The SHS is a cross-sectional, nationwide, population-based telephone survey conducted every 5 years since 1992 (1992, 1997, 2002 and 2007) [5]. The SHS aims to track public health trends in a representative sample of the resident population of Switzerland aged 15 and over.

The study population was chosen by stratified random sampling of a database of all private Swiss households with fixed-line telephones. It is currently estimated that over 90% of Swiss households have a fixed-line telephone. The first sampling stratum consisted of the seven main regions: West "Léman", West-Central "Mittelland", Northwest, Zurich, North-Eastern, Central and South. The second stratum consisted of the cantons, and the number of households drawn was proportional to the population of the canton. In some cantons, oversampling of the households was performed to obtain accurate cantonal estimates. Extra strata were used for the two large cantons of Zurich and Bern. Within these strata, households were randomly drawn and, within each household, one member was randomly selected among all members aged 15 years and over. A letter inviting this household member to participate in the survey was sent; the member was then contacted by phone and interviewed using computer-assisted software managing both dialling and data collection. The interviews were carried out in German, French or Italian, as appropriate. People who did not speak any of these three languages were excluded from the survey. Other criteria for exclusion were: asylum seeker status, households without a fixed-line telephone, very poor health status and living in a nursing home [6]. Four sampling waves were performed (winter, spring, summer and autumn). The participation rate was 71% in 1992, 85% in 1997, 64% in 2002, and 66% in 2007. More details are available at http://www.bfs.admin.ch/bfs/portal/fr/index/infothek/erhebungen__quellen/blank/blank/ess/04.html. As too many data were missing in 1992 (no information on hypertension and diabetes), only data from the three last surveys (1997, 2002 and 2007) were used.

Data collected

Three age categories were considered: 18 to 44, 45 to 64, and ≥ 65 years. Education was categorized as follows: 1) no education completed, 2) first level (primary school), 3) lower secondary level, 4) upper secondary level and 5) tertiary level, which included university and other forms of education after the secondary level. We defined "low education" (categories 1 and 2), "middle education" (categories 3 and 4), and "high education" (category 5) groups. Self-reported height and weight allowed the calculation of body mass index (BMI). Three BMI categories were considered: normal (< 25 kg/m2), overweight (≥ 25 to < 30 kg/m2) and obese (≥ 30 kg/m2). Citizenship was defined as Swiss (having a Swiss passport) or foreigner.

The self-reported prevalence of hypertension, hypercholesterolemia or diabetes was assessed by the questions: "Did a doctor or a health professional tell you that you have high blood pressure/a high cholesterol level/diabetes?", respectively. Subjects were considered as treated for hypertension, hypercholesterolemia or diabetes if they answered positively to the questions "Are you treated for blood pressure/to decrease your cholesterol levels/for diabetes?", respectively. The self-reported prevalence of antihypertensive, hypolipidaemic or antidiabetic treatment was calculated as the ratio of the number of subjects reporting being treated to the number of subjects reporting the disease (e.g. the number of subjects reporting being treated for hypertension divided by the number of subjects reporting being hypertensive). A further question on doctor-prescribed medicines was asked. All treated subjects were considered, irrespective of the answer to the latter question.

Adequate treatment of hypertension, hypercholesterolemia or diabetes was considered if the subjects answered "normal or too low" to the questions: "Currently, how is your blood pressure/cholesterol level/glycaemia?", respectively. The self-reported prevalence of adequate cardiovascular risk factor (CV RF) management was calculated as the ratio of the number of subjects reporting being treated and answering "normal or too low" to the overall number of subjects reporting being treated. Missing answers were considered as negative (i.e. high levels). As the questionnaires changed slightly between surveys, some questions were missing; for instance, the question on control of hypertension was not asked in 2002.

All subjects, irrespective of their status, were asked when they last had their blood pressure, cholesterol or glucose levels measured. Adequate screening was considered if the measurement had been performed during the previous 12 months.
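To make the nesting of these self-reported indicators concrete (prevalence among all respondents, treatment among subjects reporting the condition, control among treated subjects), the sketch below shows how they could be derived for hypertension from individual survey answers. It is only an illustration: the survey itself was analysed in Stata and SAS, and the column names used here (bp_told, bp_treated, bp_status, height_cm, weight_kg) are hypothetical, not the SHS variable names.

    # Illustrative sketch only (the actual SHS analysis was run in Stata/SAS);
    # column names such as bp_told, bp_treated, bp_status, height_cm and
    # weight_kg are hypothetical.
    import pandas as pd

    def bmi_category(height_cm: float, weight_kg: float) -> str:
        """Classify self-reported BMI into the three categories used in the study."""
        bmi = weight_kg / (height_cm / 100.0) ** 2
        if bmi < 25:
            return "normal"
        elif bmi < 30:
            return "overweight"
        return "obese"

    def hypertension_indicators(df: pd.DataFrame) -> pd.Series:
        """Nested self-reported indicators: prevalence among all respondents,
        treatment among subjects reporting hypertension, control among treated."""
        told = df["bp_told"].eq("yes")            # told by a doctor/health professional
        treated = told & df["bp_treated"].eq("yes")
        # missing answers on the current level are counted as negative (i.e. high)
        controlled = treated & df["bp_status"].fillna("high").isin(["normal", "too low"])
        return pd.Series({
            "self_reported_prevalence": told.mean(),
            "treated_among_reported": treated.sum() / told.sum(),
            "controlled_among_treated": controlled.sum() / treated.sum(),
        })

Computing the three ratios on progressively smaller denominators mirrors the definitions above, which is why the treated and controlled percentages in the tables refer to subsets of respondents rather than to the whole sample.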
Statistical analysis

Statistical analysis was conducted using Stata version 10 (Statacorp, College Station, TX, USA) and SAS Enterprise Guide version 4.1 (SAS Inc, Cary, NC, USA). Results were expressed as number of subjects (percentage) or mean ± standard deviation. Comparisons were performed using the chi-square test for categorical data or analysis of variance (ANOVA) for continuous data. A first analysis was conducted using the original data. A second analysis was conducted after probability weighting each subject according to the formula

w_ih = H_i · (N_h / n_h^n)

where N_h is the average number of telephone numbers in stratum h (h = 29), H_i is the household size, i.e. the number of subjects aged 15 and over living in household i, and n_h^n is the number of telephone numbers in the sample s_h corresponding to stratum h, to the power n (n = sample size in stratum h). Weights were further corrected for the percentage of nonresponders by raking ratio estimation [7]. Weighting partly allowed the correction of bias: subjects with characteristics that are under-represented in the original sample were attributed a higher weight [8]. The sum of weights thus corresponds to the Swiss adult population for the period considered. For simplicity, the weighted results are presented and commented on, as the conclusions arising from the unweighted data are similar (see Additional file 1). A third analysis using multivariate logistic regression adjusting for age group, sex, nationality, education and BMI classes was conducted to assess trends during the study period, using either the original (see Additional file 1) or the weighted data (presented here). The results were expressed as odds ratios and [95% confidence intervals]. Statistical significance was considered for p < 0.05.
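As a rough illustration of how this weighting and trend analysis could be reproduced outside Stata/SAS, the sketch below uses Python with pandas and statsmodels. It is a simplified reading of the design weight (household size times the stratum population-to-sample ratio of telephone numbers) that omits the power term and the raking correction for nonresponse applied in the actual SHS weights; the column names it assumes (survey_year, sex, age_group, nationality, education, bmi_class, reported_hypertension, weight) are hypothetical.

    # Simplified sketch of the weighting and trend analysis described above.
    # The weight below is an assumed simplification of the SHS design weight;
    # all data-frame column names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def design_weight(household_size: float, N_h: float, n_h: float) -> float:
        """Base weight: household size times the stratum population/sample ratio."""
        return household_size * (N_h / n_h)

    def weighted_prevalence(flag: pd.Series, weight: pd.Series) -> float:
        """Weighted share of respondents reporting a given risk factor."""
        return float(np.average(flag.astype(float), weights=weight))

    def trend_odds_ratios(df: pd.DataFrame) -> pd.Series:
        """Weighted logistic regression of a reported risk factor on survey year,
        adjusted for age group, sex, nationality, education and BMI class."""
        covariates = ["survey_year", "sex", "age_group",
                      "nationality", "education", "bmi_class"]
        X = pd.get_dummies(df[covariates], columns=covariates,
                           drop_first=True).astype(float)
        X = sm.add_constant(X)
        fit = sm.GLM(df["reported_hypertension"].astype(float), X,
                     family=sm.families.Binomial(),
                     freq_weights=df["weight"].to_numpy()).fit()
        return np.exp(fit.params)   # odds ratios; CIs via np.exp(fit.conf_int())

The published estimates come from the weighted survey procedures in Stata and SAS; the sketch is only meant to make the two-step logic explicit, i.e. a design weight per respondent followed by a covariate-adjusted logistic regression on survey year.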
Results

Characteristics of the subjects

The characteristics of the subjects according to survey are summarized in table 1. Between 1997 and 2007, mean age increased and the percentage of subjects with low or middle education decreased, while the percentage of subjects with high education increased.

characteristics of the samples
Results are expressed as weighted percentage and average ± standard deviation. §, no education completed + first level (primary school). §§, lower + upper secondary level. §§§, tertiary level + other education after secondary level.

Hypertension

The trends in the self-reported prevalence of hypertension are shown in table 2. Between 1997 and 2007, self-reported hypertension in the Swiss general population increased, and this was further confirmed after multivariate adjustment (table 3). Subjects aged over 65 years or obese subjects had a higher odds ratio, while subjects with university-level education or foreigners had a lower odds ratio of reporting being hypertensive (table 3). Self-reported treatment increased (table 2); on multivariate analysis, subjects aged over 45 or obese subjects had a higher odds ratio, while women and foreigners had a lower odds ratio of reporting being treated (table 3). The self-reported prevalence of doctor-prescribed treatment was 96.0%, 99.4% and 99.6%, while daily intake of an antihypertensive drug was reported by 89.6%, 95.3% and 97.1% in 1997, 2002 and 2007, respectively. The self-reported prevalence of controlled hypertension increased and the self-reported prevalence of uncontrolled and untreated hypertension decreased (table 2); on multivariate adjustment, subjects over 65 presented a higher odds ratio of reporting being controlled (table 3).
Hypertension screening also increased (table 2), and on multivariate analysis, men, foreigners, and subjects aged over 45, overweight or obese had a higher odds ratio of being screened (table 3).

trends in self-reported prevalence and management of hypertension in the Swiss population, 1997 - 2007
Results are expressed as weighted percentage. *, among subjects reporting being hypertensive; **, among treated subjects. -, data not available.

multivariate analysis of the trends in self-reported prevalence and management of hypertension in the Swiss population, 1997 - 2007
Results are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported hypertension; **, among treated subjects. -, data not available.

Hypercholesterolemia

Self-reported prevalence of hypercholesterolemia increased considerably between 1997 and 2007 (table 4), and this increase was further confirmed by multivariate analysis (table 5). Women, subjects over 45 years, subjects with higher education and subjects presenting with overweight or obesity had higher odds of reporting being hypercholesterolemic (table 5).

trends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007
Results are expressed as weighted percentage. *, among subjects reporting being hypercholesterolemic; **, among treated subjects. -, data not available.

multivariate analysis of the trends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007
Results are expressed as multivariate-adjusted odds ratio and [95% confidence interval].
*, among subjects with reported hypercholesterolemia; **, among treated subjects.-, data not available.\nSelf-reported hypolipidemic drug treatment increased between 1997 and 2007 (table 4); multivariate analysis showed women, older subjects, subjects with a higher education or presenting with overweight or obesity to have higher odds of being treated (table 5). In 2007, 99.1% of hypolipidemic drug treatment was prescribed by the doctor and daily medication use was reported by 94.8% of treated subjects. The self-reported prevalence of controlled hypercholesterolemia increased (table 4); on multivariate analysis, women, subjects over 45 years, subjects with a medium and high education had a higher odds ratio, while foreigners had a lower odds ratio of reporting being adequately controlled (table 5). Conversely, the self-reported prevalence of uncontrolled and untreated hypercholesterolemia remained stable (table 4). Hypercholesterolemia screening increased (table 4); on multivariate analysis, a higher odds ratio of being screened was found for foreigners, subjects aged over 45, and in overweight or obese subjects, while women, subjects with a medium and a high education had a lower odds ratio of being screened (table 5).\nSelf-reported prevalence of hypercholesterolemia increased considerably between 1997 and 2007 (table 4) and this increase was further confirmed by multivariate analysis (table 5). Women, subjects over 45 years, with higher education or presenting with overweight or obesity had higher odds of reporting being hypercholesterolemic (table 5).\ntrends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being hypercholesterolemic; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported hypercholesterolemia; **, among treated subjects.-, data not available.\nSelf-reported hypolipidemic drug treatment increased between 1997 and 2007 (table 4); multivariate analysis showed women, older subjects, subjects with a higher education or presenting with overweight or obesity to have higher odds of being treated (table 5). In 2007, 99.1% of hypolipidemic drug treatment was prescribed by the doctor and daily medication use was reported by 94.8% of treated subjects. The self-reported prevalence of controlled hypercholesterolemia increased (table 4); on multivariate analysis, women, subjects over 45 years, subjects with a medium and high education had a higher odds ratio, while foreigners had a lower odds ratio of reporting being adequately controlled (table 5). Conversely, the self-reported prevalence of uncontrolled and untreated hypercholesterolemia remained stable (table 4). 
Hypercholesterolemia screening increased (table 4); on multivariate analysis, a higher odds ratio of being screened was found for foreigners, subjects aged over 45, and in overweight or obese subjects, while women, subjects with a medium and a high education had a lower odds ratio of being screened (table 5).\n[SUBTITLE] Diabetes [SUBSECTION] Self-reported prevalence of diabetes increased between 1997 and 2007 (table 6), a finding confirmed by multivariate analysis (table 7) which also showed men and subjects with increasing age or BMI to have a higher odds ratio, while subjects with middle or high education had a lower odds ratio of reporting being diabetic. Self-reported prevalence of diabetes treatment increased (table 6); multivariate analysis showed men, subjects aged over 45 or presenting with overweight or obesity to have a higher odds ratio, while foreigners had a lower odds ratio of being treated (table 7). Self-reported diabetes control also increased and the self-reported prevalence of uncontrolled and untreated diabetes decreased (table 6); multivariate analysis showed subjects aged 45-64 years, presenting with overweight or obesity or foreigners to have a lower odds ratio, while high educated subjects had a higher odds ratio of being controlled (table 7). Finally, diabetes screening increased during the study period (table 6) and multivariate analysis showed foreigners, subjects aged over 45, overweight or obese to have a higher odds ratio, while men and subjects with medium or high education to have a lower odds ratio of being screened (table 7).\ntrends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being diabetic; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported diabetes; **, among treated subjects. -, data not available.\nSelf-reported prevalence of diabetes increased between 1997 and 2007 (table 6), a finding confirmed by multivariate analysis (table 7) which also showed men and subjects with increasing age or BMI to have a higher odds ratio, while subjects with middle or high education had a lower odds ratio of reporting being diabetic. Self-reported prevalence of diabetes treatment increased (table 6); multivariate analysis showed men, subjects aged over 45 or presenting with overweight or obesity to have a higher odds ratio, while foreigners had a lower odds ratio of being treated (table 7). Self-reported diabetes control also increased and the self-reported prevalence of uncontrolled and untreated diabetes decreased (table 6); multivariate analysis showed subjects aged 45-64 years, presenting with overweight or obesity or foreigners to have a lower odds ratio, while high educated subjects had a higher odds ratio of being controlled (table 7). Finally, diabetes screening increased during the study period (table 6) and multivariate analysis showed foreigners, subjects aged over 45, overweight or obese to have a higher odds ratio, while men and subjects with medium or high education to have a lower odds ratio of being screened (table 7).\ntrends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. 
*, among subjects reporting being diabetic; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported diabetes; **, among treated subjects. -, data not available.", "The characteristics of subjects according to survey are summarized in table 1. Between 1997 and 2007 mean age increased and the percentage of subjects with low or middle education decreased while the percentage of subjects with high education increased.\ncharacteristics of the samples\nResults are expressed as weighted percentage and average ± standard deviation. § no education completed + first level (primary school). §§ lower + upper secondary level. §§§ tertiary level + other education after secondary level.", "The trends in self-reported prevalence of hypertension are shown in table 2. Between 1997 and 2007, self-reported hypertension in the Swiss general population increased, and this was further confirmed after multivariate adjustment (table 3). Subjects aged over 65 years or obese had a higher odds ratio, while subjects with university level or foreigners had a lower odds ratio of reporting being hypertensive (table 3). Self-reported treatment increased (table 2); on multivariate analysis, subjects aged over 45 or obese had a higher odds ratio, while women and foreigners had a lower odds ratio of reporting being treated (table 3). Self-reported prevalence of treatment prescribed by the doctor was 96.0%, 99.4% and 99.6% while the daily taking of an antihypertensive drug was 89.6%, 95.3% and 97.1% in 1997, 2002 and 2007, respectively. The self-reported prevalence of controlled hypertension increased and the self-reported prevalence of uncontrolled and untreated hypertension decreased (table 2); on multivariate adjustment, subjects over 65 presented a higher odds ratio of reporting being controlled (table 3). Hypertension screening also increased (table 2), and on multivariate analysis, men, foreigners, subjects aged over 45, overweight or obese had a higher odds ratio of being screened (table 3).\ntrends in self-reported prevalence and management of hypertension in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being hypertensive; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of hypertension in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported hypertension; **, among treated subjects. -, data not available.", "Self-reported prevalence of hypercholesterolemia increased considerably between 1997 and 2007 (table 4) and this increase was further confirmed by multivariate analysis (table 5). Women, subjects over 45 years, with higher education or presenting with overweight or obesity had higher odds of reporting being hypercholesterolemic (table 5).\ntrends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being hypercholesterolemic; **, among treated subjects. 
-, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of hypercholesterolemia in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported hypercholesterolemia; **, among treated subjects.-, data not available.\nSelf-reported hypolipidemic drug treatment increased between 1997 and 2007 (table 4); multivariate analysis showed women, older subjects, subjects with a higher education or presenting with overweight or obesity to have higher odds of being treated (table 5). In 2007, 99.1% of hypolipidemic drug treatment was prescribed by the doctor and daily medication use was reported by 94.8% of treated subjects. The self-reported prevalence of controlled hypercholesterolemia increased (table 4); on multivariate analysis, women, subjects over 45 years, subjects with a medium and high education had a higher odds ratio, while foreigners had a lower odds ratio of reporting being adequately controlled (table 5). Conversely, the self-reported prevalence of uncontrolled and untreated hypercholesterolemia remained stable (table 4). Hypercholesterolemia screening increased (table 4); on multivariate analysis, a higher odds ratio of being screened was found for foreigners, subjects aged over 45, and in overweight or obese subjects, while women, subjects with a medium and a high education had a lower odds ratio of being screened (table 5).", "Self-reported prevalence of diabetes increased between 1997 and 2007 (table 6), a finding confirmed by multivariate analysis (table 7) which also showed men and subjects with increasing age or BMI to have a higher odds ratio, while subjects with middle or high education had a lower odds ratio of reporting being diabetic. Self-reported prevalence of diabetes treatment increased (table 6); multivariate analysis showed men, subjects aged over 45 or presenting with overweight or obesity to have a higher odds ratio, while foreigners had a lower odds ratio of being treated (table 7). Self-reported diabetes control also increased and the self-reported prevalence of uncontrolled and untreated diabetes decreased (table 6); multivariate analysis showed subjects aged 45-64 years, presenting with overweight or obesity or foreigners to have a lower odds ratio, while high educated subjects had a higher odds ratio of being controlled (table 7). Finally, diabetes screening increased during the study period (table 6) and multivariate analysis showed foreigners, subjects aged over 45, overweight or obese to have a higher odds ratio, while men and subjects with medium or high education to have a lower odds ratio of being screened (table 7).\ntrends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as weighted percentage. *, among subjects reporting being diabetic; **, among treated subjects. -, data not available.\nmultivariate analysis of the trends in self-reported prevalence and management of diabetes in the Swiss population, 1997 - 2007\nResults are expressed as multivariate-adjusted odds ratio and [95% confidence interval]. *, among subjects with reported diabetes; **, among treated subjects. -, data not available.", "Since the MONICA study in the nineties [3] and the Bus Santé study in Geneva [4], there has been little information on trends of hypertension, hypercholesterolemia and diabetes in Switzerland. 
The data from the Swiss National Health Surveys thus provide important information regarding the self-reported prevalence and management of those cardiovascular risk factors in the Swiss population. As the sampling frame covers about 90% of Swiss households and the participation rate was relatively high for all studies, this study is a good reflect of the Swiss situation. The fact that the weighted and unweighted results were quite similar also suggests the absence of important bias.\n[SUBTITLE] Hypertension [SUBSECTION] Prevalence of self-reported hypertension increased between 1997 and 2007 and was comparable to those reported using measured data by US [9] and German [10] studies and with other studies using self-reported data (table 8). This increase could be due either to an increase in the true prevalence of hypertension, to a more widespread screening, or both. The second hypothesis might be more likely, as the prevalence of subjects reporting having their blood pressure measured during the previous 12 months also increased during this period, a finding already reported in the literature [11]. Another likely determinant is decrease in the thresholds to define hypertension from ≥ 160/95 mmHg in 1993 [3] to ≥ 140/90 mmHg afterwards. Still, self-reported prevalence rates are probably underestimated, as a recent study conducted in Lausanne has shown that less than two thirds of hypertensive subjects are actually aware of their condition [12].\ntrends in self-reported prevalence of cardiovascular risk factors in Switzerland and in other countries\nReferences: Switzerland 1, current study; Spain, [21]; Greece, [28]; USA, [29]; France, [18].\nA higher prevalence of reported hypertension was found among subjects aged over 45 years or presenting with overweight or obesity. Those findings are in agreement with the literature [9,13,14] and might be due to an increased screening with age or because of the presence of other risk factors [15]. Conversely, foreigners had a lower self-reported prevalence of hypertension, and this could not be attributed to a lower screening frequency or to differences in age or BMI status. Possible explanations include differences in dietary or genetic background, but further studies are needed to better assess this point. The self-reported prevalence of hypertension was also inversely related with educational level, a finding in agreement with the literature [16]. This finding might be related to a better lifestyle, namely regarding dietary salt intake, although data from the Geneva study showed no improvement in salt intake in the general population [17].\nSelf-reported treatment of hypertension increased during the study period, suggesting an improvement in the management of this risk factor. Still, in 2007, only six out of ten hypertensive subjects indicated they were on antihypertensive treatment. Although the remaining 40% might be under nonpharmacological antihypertensive measures such as diet or specific lifestyle modifications, our findings suggest that there is still room for improvement regarding pharmacological management of hypertension, a finding reported previously [12].\nIn agreement with objectively measured data from the US [9,13] and France [18], an increase in self-reported control of hypertension was found for the period 1997-2007. This increase might be related to an improvement in antihypertensive treatment, namely the appearance of more potent and new antihypertensive drugs, and/or an improvement of subject's compliance. 
Still, our results are probably overestimated because some treated subjects might report being controlled just because they are taking antihypertensive drugs. Indeed, a previous study conducted in Lausanne showed that a consistent fraction of treated hypertensive subjects actually presented with high blood pressure levels [12]. Hence, it is likely that the true prevalence of controlled hypertension in Switzerland might actually be lower. Nevertheless, the fact that the self-reported prevalence of uncontrolled and untreated hypertension also decreased suggests that the overall management of hypertension in the Swiss population is improving.\nPrevalence of self-reported hypertension increased between 1997 and 2007 and was comparable to those reported using measured data by US [9] and German [10] studies and with other studies using self-reported data (table 8). This increase could be due either to an increase in the true prevalence of hypertension, to a more widespread screening, or both. The second hypothesis might be more likely, as the prevalence of subjects reporting having their blood pressure measured during the previous 12 months also increased during this period, a finding already reported in the literature [11]. Another likely determinant is decrease in the thresholds to define hypertension from ≥ 160/95 mmHg in 1993 [3] to ≥ 140/90 mmHg afterwards. Still, self-reported prevalence rates are probably underestimated, as a recent study conducted in Lausanne has shown that less than two thirds of hypertensive subjects are actually aware of their condition [12].\ntrends in self-reported prevalence of cardiovascular risk factors in Switzerland and in other countries\nReferences: Switzerland 1, current study; Spain, [21]; Greece, [28]; USA, [29]; France, [18].\nA higher prevalence of reported hypertension was found among subjects aged over 45 years or presenting with overweight or obesity. Those findings are in agreement with the literature [9,13,14] and might be due to an increased screening with age or because of the presence of other risk factors [15]. Conversely, foreigners had a lower self-reported prevalence of hypertension, and this could not be attributed to a lower screening frequency or to differences in age or BMI status. Possible explanations include differences in dietary or genetic background, but further studies are needed to better assess this point. The self-reported prevalence of hypertension was also inversely related with educational level, a finding in agreement with the literature [16]. This finding might be related to a better lifestyle, namely regarding dietary salt intake, although data from the Geneva study showed no improvement in salt intake in the general population [17].\nSelf-reported treatment of hypertension increased during the study period, suggesting an improvement in the management of this risk factor. Still, in 2007, only six out of ten hypertensive subjects indicated they were on antihypertensive treatment. Although the remaining 40% might be under nonpharmacological antihypertensive measures such as diet or specific lifestyle modifications, our findings suggest that there is still room for improvement regarding pharmacological management of hypertension, a finding reported previously [12].\nIn agreement with objectively measured data from the US [9,13] and France [18], an increase in self-reported control of hypertension was found for the period 1997-2007. 
This increase might be related to an improvement in antihypertensive treatment, namely the appearance of more potent and new antihypertensive drugs, and/or an improvement of subject's compliance. Still, our results are probably overestimated because some treated subjects might report being controlled just because they are taking antihypertensive drugs. Indeed, a previous study conducted in Lausanne showed that a consistent fraction of treated hypertensive subjects actually presented with high blood pressure levels [12]. Hence, it is likely that the true prevalence of controlled hypertension in Switzerland might actually be lower. Nevertheless, the fact that the self-reported prevalence of uncontrolled and untreated hypertension also decreased suggests that the overall management of hypertension in the Swiss population is improving.\n[SUBTITLE] Hypercholesterolemia [SUBSECTION] Self-reported prevalence of hypercholesterolemia was within values published for other countries which used self-reported data (table 8), but lower than the values obtained in a smaller Swiss population-based study using objectively measured data (table 9). Still, and in agreement with previous Swiss [4], French [18] and German [10] studies based on objectively measured data and with studies using self-reported data, the self-reported prevalence of hypercholesterolemia increased between 1997 and 2007. As for hypertension, possible explanations include a true increase in the prevalence of hypercholesterolemia, an increase in screening, a decrease in the threshold values to define hypercholesterolemia [19] or a mixture of them. Interestingly, cholesterol screening increased considerably during the study period, and the prevalence of subjects reporting having their blood cholesterol levels assessed during the previous 12 months was actually higher than other studies [11]. Still, in 2007, the self-reported prevalence of hypercholesterolemia in Switzerland was lower than the USA [11] or France [18]. Two explanations are possible, i.e. the prevalence of hypercholesterolemia being indeed lower in Switzerland, or a lower screening by Swiss GPs. Indeed, it has been shown that only 75% of Swiss physicians consider that screening for high cholesterol is very important, versus 93% for blood pressure [20]. Those differences could partly explain the lower percentage of self-reported hypercholesterolemia relative to hypertension.\ncomparison of prevalences or hypertension and hypercholesterolaemia based on self-reported and measured data for subjects aged 35-75, Switzerland\nResults are expressed as percentage. *, among subjects with the selected risk factor; **, among treated subjects. CoLaus data from [12] for hypertension and from [23] for hypercholesterolemia.\nA higher self-reported prevalence of hypercholesterolemia was found among subjects aged over 45 years, with high education or presenting with overweight or obesity in agreement with other studies [16,18] but not with others [21]. Still, our results suggest that, contrary to hypertension, a higher education is related to a higher self-reported prevalence of hypercholesterolemia. This higher self-reported prevalence is not due to higher screening rates among highly educated subjects, as their odds of being screened were significantly lower (table 4). 
A possible explanation is the fact that highly educated subjects know better their medical situation [22], but again further studies are needed to better assess this point.\nThe self reported hypolipidemic treatment doubled during the study period, in line with other French [18] and German [10] studies. Nevertheless, in 2007, only four out of ten Swiss patients who had been told they presented with hypercholesterolemia reported being treated, a value similar to the one reported in the CoLaus study [23] (table 9). Although diet has been shown to lower cholesterol levels [24], it is unlikely that 60% of patients diagnosed with hypercholesterolemia are on a diet alone. Hence, and as for hypertension, our findings suggest that there is room for improvement regarding pharmacological management of hypercholesterolemia.\nAn increase in self-reported control of hypercholesterolemia was found, a finding also found in other countries [18,25]. Two hypotheses are possible, i.e. an improvement in hypolipidemic drugs and/or subject's compliance. Again, these results are certainly overestimated, either because the subjects believed they were controlled just because they were treated, or because their GP considered them as treated despite borderline high values [26].\nSelf-reported prevalence of hypercholesterolemia was within values published for other countries which used self-reported data (table 8), but lower than the values obtained in a smaller Swiss population-based study using objectively measured data (table 9). Still, and in agreement with previous Swiss [4], French [18] and German [10] studies based on objectively measured data and with studies using self-reported data, the self-reported prevalence of hypercholesterolemia increased between 1997 and 2007. As for hypertension, possible explanations include a true increase in the prevalence of hypercholesterolemia, an increase in screening, a decrease in the threshold values to define hypercholesterolemia [19] or a mixture of them. Interestingly, cholesterol screening increased considerably during the study period, and the prevalence of subjects reporting having their blood cholesterol levels assessed during the previous 12 months was actually higher than other studies [11]. Still, in 2007, the self-reported prevalence of hypercholesterolemia in Switzerland was lower than the USA [11] or France [18]. Two explanations are possible, i.e. the prevalence of hypercholesterolemia being indeed lower in Switzerland, or a lower screening by Swiss GPs. Indeed, it has been shown that only 75% of Swiss physicians consider that screening for high cholesterol is very important, versus 93% for blood pressure [20]. Those differences could partly explain the lower percentage of self-reported hypercholesterolemia relative to hypertension.\ncomparison of prevalences or hypertension and hypercholesterolaemia based on self-reported and measured data for subjects aged 35-75, Switzerland\nResults are expressed as percentage. *, among subjects with the selected risk factor; **, among treated subjects. CoLaus data from [12] for hypertension and from [23] for hypercholesterolemia.\nA higher self-reported prevalence of hypercholesterolemia was found among subjects aged over 45 years, with high education or presenting with overweight or obesity in agreement with other studies [16,18] but not with others [21]. Still, our results suggest that, contrary to hypertension, a higher education is related to a higher self-reported prevalence of hypercholesterolemia. 
This higher self-reported prevalence is not due to higher screening rates among highly educated subjects, as their odds of being screened were significantly lower (table 4). A possible explanation is the fact that highly educated subjects know better their medical situation [22], but again further studies are needed to better assess this point.\nThe self reported hypolipidemic treatment doubled during the study period, in line with other French [18] and German [10] studies. Nevertheless, in 2007, only four out of ten Swiss patients who had been told they presented with hypercholesterolemia reported being treated, a value similar to the one reported in the CoLaus study [23] (table 9). Although diet has been shown to lower cholesterol levels [24], it is unlikely that 60% of patients diagnosed with hypercholesterolemia are on a diet alone. Hence, and as for hypertension, our findings suggest that there is room for improvement regarding pharmacological management of hypercholesterolemia.\nAn increase in self-reported control of hypercholesterolemia was found, a finding also found in other countries [18,25]. Two hypotheses are possible, i.e. an improvement in hypolipidemic drugs and/or subject's compliance. Again, these results are certainly overestimated, either because the subjects believed they were controlled just because they were treated, or because their GP considered them as treated despite borderline high values [26].\n[SUBTITLE] Diabetes [SUBSECTION] The self-reported prevalence of diabetes increased during period 1997-2007. Still, in 2007, the self-reported prevalence was lower than reported for France [18] or the US [27], probably due to the self-reported (instead of objectively measured) diabetic status. Still, comparing our data with self-reported data from other countries [18,21,28,29] led to similar conclusions (table 8). Possible explanations include the relatively low prevalence of obesity in Switzerland [30,31] albeit other factors might be at play. Interestingly, the increase in the self-reported prevalence of diabetes persisted after adjustment for overweight and obesity, suggesting that other factors might intervene [32], namely a better screening. Indeed, the prevalence of subjects reporting having their blood glucose assessed the previous 12 months increased between 1997 and 2007, a finding in agreement with other studies [33,34].\nAlso in agreement with the literature [32], a higher self-reported prevalence of diabetes was found among men, subjects aged over 45 years or presenting with overweight or obesity. Similarly, and as reported previously [16,32], a lower prevalence of self-reported diabetes was found among subjects with high educational level. High educated subjects could have more financial means to adapt their lifestyle, e.g., to buy higher quality food [35] or exercise more, thus preventing the occurrence of diabetes. They could better know their health state despite less screened, but further studies are needed to better assess this point.\nSelf reported antidiabetic treatment increased, a trend also reported for France [18] and Italy [36]. Still and again, in 2007, only half of the subjects diagnosed with diabetes reported being treated, and, as for hypercholesterolemia, it is rather unlikely that the remaining half was only on diet. 
Overall, our data indicate that, in Switzerland, many diabetic subjects are probably undertreated, and that further efforts should be made to implement (non) pharmacological treatment.\nThe increase in self-reported diabetic control found in this study has also been reported elsewhere [25]. This improvement is probably due to a change in therapies and/or an improvement of the subject's compliance. Still, in 2007, one third of treated diabetic subjects reported having high glycaemia, and again this figure is certainly underestimated because many treated subjects believed they are controlled simply due to the fact they receive a drug. Nevertheless, the fact that the prevalence of uncontrolled and untreated diabetes also decreased suggests that the overall management of diabetes in the Swiss population is improving.\nThe self-reported prevalence of diabetes increased during period 1997-2007. Still, in 2007, the self-reported prevalence was lower than reported for France [18] or the US [27], probably due to the self-reported (instead of objectively measured) diabetic status. Still, comparing our data with self-reported data from other countries [18,21,28,29] led to similar conclusions (table 8). Possible explanations include the relatively low prevalence of obesity in Switzerland [30,31] albeit other factors might be at play. Interestingly, the increase in the self-reported prevalence of diabetes persisted after adjustment for overweight and obesity, suggesting that other factors might intervene [32], namely a better screening. Indeed, the prevalence of subjects reporting having their blood glucose assessed the previous 12 months increased between 1997 and 2007, a finding in agreement with other studies [33,34].\nAlso in agreement with the literature [32], a higher self-reported prevalence of diabetes was found among men, subjects aged over 45 years or presenting with overweight or obesity. Similarly, and as reported previously [16,32], a lower prevalence of self-reported diabetes was found among subjects with high educational level. High educated subjects could have more financial means to adapt their lifestyle, e.g., to buy higher quality food [35] or exercise more, thus preventing the occurrence of diabetes. They could better know their health state despite less screened, but further studies are needed to better assess this point.\nSelf reported antidiabetic treatment increased, a trend also reported for France [18] and Italy [36]. Still and again, in 2007, only half of the subjects diagnosed with diabetes reported being treated, and, as for hypercholesterolemia, it is rather unlikely that the remaining half was only on diet. Overall, our data indicate that, in Switzerland, many diabetic subjects are probably undertreated, and that further efforts should be made to implement (non) pharmacological treatment.\nThe increase in self-reported diabetic control found in this study has also been reported elsewhere [25]. This improvement is probably due to a change in therapies and/or an improvement of the subject's compliance. Still, in 2007, one third of treated diabetic subjects reported having high glycaemia, and again this figure is certainly underestimated because many treated subjects believed they are controlled simply due to the fact they receive a drug. 
Nevertheless, the fact that the prevalence of uncontrolled and untreated diabetes also decreased suggests that the overall management of diabetes in the Swiss population is improving.\n[SUBTITLE] Limitations [SUBSECTION] First, and as indicated previously, the self-reporting of the cardiovascular risk factors might underestimate the real prevalence in the population, as it can be inferred from the results of table 9. Still, it represents the result of the screening done by doctors and health professionals and has been used in other studies for the assessment of trends [21,28,29,37]. Further, it has been shown that self-reported data on cardiovascular risk factors is valid and can be used to assess prevalence rates in most cases [38,39]. Second, increasing rates occurred mainly between 2002 and 2007, when the sample becomes much more educated, raising the issue of a possible selection bias, more educated participants tending to respond more easily. The presence of other unmeasured confounders such as changes in dietary intake could also influence results. Since other unmeasured predictors of disease treatment and control were likely to change along with education, the trends in treatment and control are thus likely to be biased away from the null. Still, in the absence of another nationally representative sample, this study provides the best estimates regarding self-reported prevalence and management of cardiovascular risk factors. Third, the fact that the unweighted and weighted estimates are similar does not remove the potential for response bias. Still, the weighting procedure gives some strata which are less represented in the sample (i.e. young males) a higher weight, thus partially reducing this bias. It should be noted that some studies only standardized on age [29] or even made no adjustment [28], while in this study weighting included gender, age, geographical location and nationality [8]. Forth, although several studies conducted in the USA [40,41] indicate a high level of undiagnosed hypertension and hypercholesterolemia among uninsured subjects, this is rather unlikely to occur in Switzerland as all subjects living in Switzerland have a health insurance (federal law 832.10 of march 18th, 1994, available at http://www.admin.ch/ch/f/rs/c832_10.html). Still, as a nontrivial percentage of subjects with hypertension, dyslipidemia or diabetes might be unaware of their status [12,42], our prevalence estimates might be underestimated. Finally, no information was available regarding nonpharmacological treatment of cardiovascular risk factors, so it was not possible to assess the percentage of subjects not treated with drugs but with other nonpharmacological measures.\nFirst, and as indicated previously, the self-reporting of the cardiovascular risk factors might underestimate the real prevalence in the population, as it can be inferred from the results of table 9. Still, it represents the result of the screening done by doctors and health professionals and has been used in other studies for the assessment of trends [21,28,29,37]. Further, it has been shown that self-reported data on cardiovascular risk factors is valid and can be used to assess prevalence rates in most cases [38,39]. Second, increasing rates occurred mainly between 2002 and 2007, when the sample becomes much more educated, raising the issue of a possible selection bias, more educated participants tending to respond more easily. 
The presence of other unmeasured confounders such as changes in dietary intake could also influence results. Since other unmeasured predictors of disease treatment and control were likely to change along with education, the trends in treatment and control are thus likely to be biased away from the null. Still, in the absence of another nationally representative sample, this study provides the best estimates regarding self-reported prevalence and management of cardiovascular risk factors. Third, the fact that the unweighted and weighted estimates are similar does not remove the potential for response bias. Still, the weighting procedure gives some strata which are less represented in the sample (i.e. young males) a higher weight, thus partially reducing this bias. It should be noted that some studies only standardized on age [29] or even made no adjustment [28], while in this study weighting included gender, age, geographical location and nationality [8]. Forth, although several studies conducted in the USA [40,41] indicate a high level of undiagnosed hypertension and hypercholesterolemia among uninsured subjects, this is rather unlikely to occur in Switzerland as all subjects living in Switzerland have a health insurance (federal law 832.10 of march 18th, 1994, available at http://www.admin.ch/ch/f/rs/c832_10.html). Still, as a nontrivial percentage of subjects with hypertension, dyslipidemia or diabetes might be unaware of their status [12,42], our prevalence estimates might be underestimated. Finally, no information was available regarding nonpharmacological treatment of cardiovascular risk factors, so it was not possible to assess the percentage of subjects not treated with drugs but with other nonpharmacological measures.", "Prevalence of self-reported hypertension increased between 1997 and 2007 and was comparable to those reported using measured data by US [9] and German [10] studies and with other studies using self-reported data (table 8). This increase could be due either to an increase in the true prevalence of hypertension, to a more widespread screening, or both. The second hypothesis might be more likely, as the prevalence of subjects reporting having their blood pressure measured during the previous 12 months also increased during this period, a finding already reported in the literature [11]. Another likely determinant is decrease in the thresholds to define hypertension from ≥ 160/95 mmHg in 1993 [3] to ≥ 140/90 mmHg afterwards. Still, self-reported prevalence rates are probably underestimated, as a recent study conducted in Lausanne has shown that less than two thirds of hypertensive subjects are actually aware of their condition [12].\ntrends in self-reported prevalence of cardiovascular risk factors in Switzerland and in other countries\nReferences: Switzerland 1, current study; Spain, [21]; Greece, [28]; USA, [29]; France, [18].\nA higher prevalence of reported hypertension was found among subjects aged over 45 years or presenting with overweight or obesity. Those findings are in agreement with the literature [9,13,14] and might be due to an increased screening with age or because of the presence of other risk factors [15]. Conversely, foreigners had a lower self-reported prevalence of hypertension, and this could not be attributed to a lower screening frequency or to differences in age or BMI status. Possible explanations include differences in dietary or genetic background, but further studies are needed to better assess this point. 
The self-reported prevalence of hypertension was also inversely related with educational level, a finding in agreement with the literature [16]. This finding might be related to a better lifestyle, namely regarding dietary salt intake, although data from the Geneva study showed no improvement in salt intake in the general population [17].\nSelf-reported treatment of hypertension increased during the study period, suggesting an improvement in the management of this risk factor. Still, in 2007, only six out of ten hypertensive subjects indicated they were on antihypertensive treatment. Although the remaining 40% might be under nonpharmacological antihypertensive measures such as diet or specific lifestyle modifications, our findings suggest that there is still room for improvement regarding pharmacological management of hypertension, a finding reported previously [12].\nIn agreement with objectively measured data from the US [9,13] and France [18], an increase in self-reported control of hypertension was found for the period 1997-2007. This increase might be related to an improvement in antihypertensive treatment, namely the appearance of more potent and new antihypertensive drugs, and/or an improvement of subject's compliance. Still, our results are probably overestimated because some treated subjects might report being controlled just because they are taking antihypertensive drugs. Indeed, a previous study conducted in Lausanne showed that a consistent fraction of treated hypertensive subjects actually presented with high blood pressure levels [12]. Hence, it is likely that the true prevalence of controlled hypertension in Switzerland might actually be lower. Nevertheless, the fact that the self-reported prevalence of uncontrolled and untreated hypertension also decreased suggests that the overall management of hypertension in the Swiss population is improving.", "Self-reported prevalence of hypercholesterolemia was within values published for other countries which used self-reported data (table 8), but lower than the values obtained in a smaller Swiss population-based study using objectively measured data (table 9). Still, and in agreement with previous Swiss [4], French [18] and German [10] studies based on objectively measured data and with studies using self-reported data, the self-reported prevalence of hypercholesterolemia increased between 1997 and 2007. As for hypertension, possible explanations include a true increase in the prevalence of hypercholesterolemia, an increase in screening, a decrease in the threshold values to define hypercholesterolemia [19] or a mixture of them. Interestingly, cholesterol screening increased considerably during the study period, and the prevalence of subjects reporting having their blood cholesterol levels assessed during the previous 12 months was actually higher than other studies [11]. Still, in 2007, the self-reported prevalence of hypercholesterolemia in Switzerland was lower than the USA [11] or France [18]. Two explanations are possible, i.e. the prevalence of hypercholesterolemia being indeed lower in Switzerland, or a lower screening by Swiss GPs. Indeed, it has been shown that only 75% of Swiss physicians consider that screening for high cholesterol is very important, versus 93% for blood pressure [20]. 
Those differences could partly explain the lower percentage of self-reported hypercholesterolemia relative to hypertension.\ncomparison of prevalences or hypertension and hypercholesterolaemia based on self-reported and measured data for subjects aged 35-75, Switzerland\nResults are expressed as percentage. *, among subjects with the selected risk factor; **, among treated subjects. CoLaus data from [12] for hypertension and from [23] for hypercholesterolemia.\nA higher self-reported prevalence of hypercholesterolemia was found among subjects aged over 45 years, with high education or presenting with overweight or obesity in agreement with other studies [16,18] but not with others [21]. Still, our results suggest that, contrary to hypertension, a higher education is related to a higher self-reported prevalence of hypercholesterolemia. This higher self-reported prevalence is not due to higher screening rates among highly educated subjects, as their odds of being screened were significantly lower (table 4). A possible explanation is the fact that highly educated subjects know better their medical situation [22], but again further studies are needed to better assess this point.\nThe self reported hypolipidemic treatment doubled during the study period, in line with other French [18] and German [10] studies. Nevertheless, in 2007, only four out of ten Swiss patients who had been told they presented with hypercholesterolemia reported being treated, a value similar to the one reported in the CoLaus study [23] (table 9). Although diet has been shown to lower cholesterol levels [24], it is unlikely that 60% of patients diagnosed with hypercholesterolemia are on a diet alone. Hence, and as for hypertension, our findings suggest that there is room for improvement regarding pharmacological management of hypercholesterolemia.\nAn increase in self-reported control of hypercholesterolemia was found, a finding also found in other countries [18,25]. Two hypotheses are possible, i.e. an improvement in hypolipidemic drugs and/or subject's compliance. Again, these results are certainly overestimated, either because the subjects believed they were controlled just because they were treated, or because their GP considered them as treated despite borderline high values [26].", "The self-reported prevalence of diabetes increased during period 1997-2007. Still, in 2007, the self-reported prevalence was lower than reported for France [18] or the US [27], probably due to the self-reported (instead of objectively measured) diabetic status. Still, comparing our data with self-reported data from other countries [18,21,28,29] led to similar conclusions (table 8). Possible explanations include the relatively low prevalence of obesity in Switzerland [30,31] albeit other factors might be at play. Interestingly, the increase in the self-reported prevalence of diabetes persisted after adjustment for overweight and obesity, suggesting that other factors might intervene [32], namely a better screening. Indeed, the prevalence of subjects reporting having their blood glucose assessed the previous 12 months increased between 1997 and 2007, a finding in agreement with other studies [33,34].\nAlso in agreement with the literature [32], a higher self-reported prevalence of diabetes was found among men, subjects aged over 45 years or presenting with overweight or obesity. Similarly, and as reported previously [16,32], a lower prevalence of self-reported diabetes was found among subjects with high educational level. 
High educated subjects could have more financial means to adapt their lifestyle, e.g., to buy higher quality food [35] or exercise more, thus preventing the occurrence of diabetes. They could better know their health state despite less screened, but further studies are needed to better assess this point.\nSelf reported antidiabetic treatment increased, a trend also reported for France [18] and Italy [36]. Still and again, in 2007, only half of the subjects diagnosed with diabetes reported being treated, and, as for hypercholesterolemia, it is rather unlikely that the remaining half was only on diet. Overall, our data indicate that, in Switzerland, many diabetic subjects are probably undertreated, and that further efforts should be made to implement (non) pharmacological treatment.\nThe increase in self-reported diabetic control found in this study has also been reported elsewhere [25]. This improvement is probably due to a change in therapies and/or an improvement of the subject's compliance. Still, in 2007, one third of treated diabetic subjects reported having high glycaemia, and again this figure is certainly underestimated because many treated subjects believed they are controlled simply due to the fact they receive a drug. Nevertheless, the fact that the prevalence of uncontrolled and untreated diabetes also decreased suggests that the overall management of diabetes in the Swiss population is improving.", "First, and as indicated previously, the self-reporting of the cardiovascular risk factors might underestimate the real prevalence in the population, as it can be inferred from the results of table 9. Still, it represents the result of the screening done by doctors and health professionals and has been used in other studies for the assessment of trends [21,28,29,37]. Further, it has been shown that self-reported data on cardiovascular risk factors is valid and can be used to assess prevalence rates in most cases [38,39]. Second, increasing rates occurred mainly between 2002 and 2007, when the sample becomes much more educated, raising the issue of a possible selection bias, more educated participants tending to respond more easily. The presence of other unmeasured confounders such as changes in dietary intake could also influence results. Since other unmeasured predictors of disease treatment and control were likely to change along with education, the trends in treatment and control are thus likely to be biased away from the null. Still, in the absence of another nationally representative sample, this study provides the best estimates regarding self-reported prevalence and management of cardiovascular risk factors. Third, the fact that the unweighted and weighted estimates are similar does not remove the potential for response bias. Still, the weighting procedure gives some strata which are less represented in the sample (i.e. young males) a higher weight, thus partially reducing this bias. It should be noted that some studies only standardized on age [29] or even made no adjustment [28], while in this study weighting included gender, age, geographical location and nationality [8]. Forth, although several studies conducted in the USA [40,41] indicate a high level of undiagnosed hypertension and hypercholesterolemia among uninsured subjects, this is rather unlikely to occur in Switzerland as all subjects living in Switzerland have a health insurance (federal law 832.10 of march 18th, 1994, available at http://www.admin.ch/ch/f/rs/c832_10.html). 
Still, as a nontrivial percentage of subjects with hypertension, dyslipidemia or diabetes might be unaware of their status [12,42], our prevalence estimates might be underestimated. Finally, no information was available regarding nonpharmacological treatment of cardiovascular risk factors, so it was not possible to assess the percentage of subjects not treated with drugs but with other nonpharmacological measures.", "In Switzerland, self-reported prevalence of hypertension, hypercholesterolemia and diabetes have increased between 1997 and 2007. Management and screening have improved, but further improvements can still be achieved as over one third of subjects with reported CV RFs are not treated.", "The authors hereby indicate no conflict of interest", "DE performed the statistical analysis and wrote most of the manuscript. PMV designed the study (data analysis), obtained the data, performed some statistical analyses and wrote part of the manuscript. FP and PV participated in the study design and coordination and revised the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/114/prepub\n", "Supplementary tables.\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
The relationship of air pollution and surrogate markers of endothelial dysfunction in a population-based sample of children.
21332998
This study aimed to assess the relationship between air pollution and plasma surrogate markers of endothelial dysfunction in the pediatric age group.
BACKGROUND
This cross-sectional study was conducted in 2009-2010 among 125 participants aged 10-18 years. They were randomly selected from different areas of Isfahan, the second-largest and one of the most air-polluted cities in Iran. The association of air pollutant levels with serum thrombomodulin (TM) and tissue factor (TF) was determined after adjustment for age, gender, anthropometric measures, and dietary and physical activity habits.
METHODS
Data from 118 participants were complete and were analyzed. The mean age was 12.79 (2.35) years. The mean Pollutant Standards Index (PSI) value was at a moderate level, and the mean concentration of particulate matter of up to 10 μm (PM10) was more than twice the normal level. Multiple linear regression analysis showed that TF had a significant relationship with all air pollutants except carbon monoxide, and TM had a significant inverse relationship with ozone. The odds ratio of elevated TF was significantly higher in the upper vs. the lowest quartiles of PM10, ozone and PSI. The corresponding figures were in the opposite direction for TM.
RESULTS
The relationship of air pollutants with endothelial dysfunction and a pro-coagulant state may be an important factor in the development of atherosclerosis from early life. This finding should be confirmed in future longitudinal studies. Concerns about the harmful effects of air pollution on children's health should be considered a top priority for public health policy and should be underscored in the primordial and primary prevention of chronic diseases.
CONCLUSIONS
[ "Adolescent", "Air Pollutants", "Anthropometry", "Biomarkers", "Child", "Cross-Sectional Studies", "Endothelium, Vascular", "Environmental Exposure", "Humans", "Iran", "Linear Models", "Odds Ratio", "Particle Size", "Surveys and Questionnaires", "Thrombomodulin", "Thromboplastin", "Vascular Diseases" ]
3061912
null
null
Methods
This cross-sectional study was conducted from November 2009 to February 2010 in Isfahan, the second-largest and one of the most air-polluted cities in Iran. The study was approved by the Research Council and Ethics Committee of the School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran, and was conducted according to the Declaration of Helsinki. After providing detailed oral information, we obtained written informed consent from the parents and oral assent from the participants. [SUBTITLE] Participants [SUBSECTION] Children and adolescents were eligible if they were aged 10 to 18 years, had lived for at least 6 months in areas of the city with air pollution measurement stations, and had homes and schools located less than 1 kilometer from these stations. Individuals with a history of active or passive smoking, chronic disease, or long-term medication use, or with acute infectious disease in the past two weeks, were not included in the study. Assuming a power of 95% and a statistical significance level of 5%, the sample size was calculated as 110; because of possible attrition of participants, the study was conducted on 125 students (see the illustrative sketch below). To avoid socioeconomic bias, participants were selected by multistage random cluster sampling, taking into account the proportion of the different types of schools (public/private). [SUBTITLE] Study area [SUBSECTION] Isfahan is an industrial city with a population of approximately 1,894,382, located in the center of the Iranian plateau at an average altitude of 1500 m above sea level and bounded by a NW-SE mountain range of 3000 m. The average monthly temperature is 16°C, with a maximum of 29°C in July and a minimum of 3°C in December, and mild winds from the west and south. The air of Isfahan is predominantly affected by industrial emissions and motor traffic, which can lead to a buildup of elevated pollutant concentrations during stagnant conditions [24,25].
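The Participants subsection above reports a calculated sample size of 110 at 95% power and a 5% significance level, but it does not state the assumed effect size or the underlying test. The following is only a minimal sketch of how such a figure could be obtained, assuming a two-sided test of a Pearson correlation between exposure and biomarker levels with an illustrative target correlation of about 0.33; the effect size, test choice, and function name are assumptions, not taken from the study.

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.95) -> int:
    """Approximate sample size needed to detect a Pearson correlation r
    with a two-sided test, using Fisher's z transformation."""
    inv = NormalDist().inv_cdf
    z_alpha = inv(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = inv(power)            # about 1.645 for power = 0.95
    c = atanh(r)                   # Fisher z of the target correlation
    return ceil(((z_alpha + z_beta) / c) ** 2 + 3)

# With the assumed (hypothetical) effect size of r = 0.33, the result lands
# in the same range as the 110 participants reported in the text.
print(n_for_correlation(0.33))    # -> 114
```

Under these assumptions the calculated figure is close to, but not exactly, the reported 110; a slightly larger assumed correlation (about 0.335) would reproduce it, which illustrates how sensitive such calculations are to the chosen effect size.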
[SUBTITLE] Clinical study and laboratory methods [SUBSECTION] After inviting the selected students to a health center, a trained nurse completed a questionnaire on demographic data, and physical examination was done by the same trained general practitioner and under the supervision of the same pediatrician. Subcutaneous fat of the biceps and triceps muscles were measured with a skinfold caliper (Mojtahedi, Iran), the percent body fat was determined by bio-electrical impedence using a Body Fat Monitor (Omron HBF-300, Japan). For assessment of dietary habits, the Healthy Eating Index (HEI) was computed as described before [10]. Physical activity level was assessed by the international Physical Activity Questionnaire for Children [26], previously validated in Iranian children [27]. While one of parents accompanied his or her child, blood samples were taken from the ante-cubital vein. Serums were obtained after centrifuging blood samples, and were kept frozen at -70°C until assayed and analyzed in the laboratory of the Applied Physiology Research Center affiliated to Isfahan University of Medical Sciences. Enzyme-linked immunosorbent assay (ELISA) kits from Abcam Company (UK) with code Ab46508 were used for measurement of TM, and TF ELISA kits R&D Company (UK) with code DCF300 for laboratory examinations. After inviting the selected students to a health center, a trained nurse completed a questionnaire on demographic data, and physical examination was done by the same trained general practitioner and under the supervision of the same pediatrician. Subcutaneous fat of the biceps and triceps muscles were measured with a skinfold caliper (Mojtahedi, Iran), the percent body fat was determined by bio-electrical impedence using a Body Fat Monitor (Omron HBF-300, Japan). For assessment of dietary habits, the Healthy Eating Index (HEI) was computed as described before [10]. Physical activity level was assessed by the international Physical Activity Questionnaire for Children [26], previously validated in Iranian children [27]. While one of parents accompanied his or her child, blood samples were taken from the ante-cubital vein. Serums were obtained after centrifuging blood samples, and were kept frozen at -70°C until assayed and analyzed in the laboratory of the Applied Physiology Research Center affiliated to Isfahan University of Medical Sciences. Enzyme-linked immunosorbent assay (ELISA) kits from Abcam Company (UK) with code Ab46508 were used for measurement of TM, and TF ELISA kits R&D Company (UK) with code DCF300 for laboratory examinations. [SUBTITLE] Air pollution data [SUBSECTION] The mean daily temperature, sunlight duration, humidity and wind speed were recorded. Data from 5 air pollution measurement stations in Isfahan city were recorded daily for the 7 days prior to blood sampling. Daily data pertaining to main air pollutants, i.e. sulfur dioxide (SO2), Ozone (O3), PM10, Nitrogen dioxide (NO2) and carbon monoxide (CO) as well as the Pollutant Standards Index (PSI) were recorded. The mean values of seven 24-hour means of air pollutants and PSI were considered for statistical analysis. The mean daily temperature, sunlight duration, humidity and wind speed were recorded. Data from 5 air pollution measurement stations in Isfahan city were recorded daily for the 7 days prior to blood sampling. Daily data pertaining to main air pollutants, i.e. sulfur dioxide (SO2), Ozone (O3), PM10, Nitrogen dioxide (NO2) and carbon monoxide (CO) as well as the Pollutant Standards Index (PSI) were recorded. 
The mean values of seven 24-hour means of air pollutants and PSI were considered for statistical analysis. [SUBTITLE] Statistical analysis [SUBSECTION] SPSS for Windows (version 15:0, SPSS Inc., Chicago, IL) was used for data analysis. Analyses were initially stratified by gender, but as the differences were not significant, results are presented for girls and boys combined. We used log-transformed concentrations of variables to achieve normal distributions. The relationships of air pollutants with serum TM and TF were determined by Pearson correlation test. The associations between air pollutants and markers of endothelial dysfunction were assessed by multiple linear regression after adjustment for age, gender, body mass index, waist circumference, healthy eating index and physical activity level. The concentrations of biomarkers and air pollutants were categorized to quartiles, and the upper quartile was considered as elevated value. As there is no cutoff value to determine the high and low levels of TM and TF in the pediatric age group, we considered their lowest quartiles as low, and their highest quartiles as high levels. Then, we examined the association of the dichotomized concentrations of these biomarkers (upper quartile vs. lower quartile) across the quartiles of air pollutants by using logistic regression analysis after adjustment for the abovementioned confounders. The significance level was set at p < 0.05. SPSS for Windows (version 15:0, SPSS Inc., Chicago, IL) was used for data analysis. Analyses were initially stratified by gender, but as the differences were not significant, results are presented for girls and boys combined. We used log-transformed concentrations of variables to achieve normal distributions. The relationships of air pollutants with serum TM and TF were determined by Pearson correlation test. The associations between air pollutants and markers of endothelial dysfunction were assessed by multiple linear regression after adjustment for age, gender, body mass index, waist circumference, healthy eating index and physical activity level. The concentrations of biomarkers and air pollutants were categorized to quartiles, and the upper quartile was considered as elevated value. As there is no cutoff value to determine the high and low levels of TM and TF in the pediatric age group, we considered their lowest quartiles as low, and their highest quartiles as high levels. Then, we examined the association of the dichotomized concentrations of these biomarkers (upper quartile vs. lower quartile) across the quartiles of air pollutants by using logistic regression analysis after adjustment for the abovementioned confounders. The significance level was set at p < 0.05.
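The exposure-averaging and adjusted-regression steps described above can be sketched as follows. This is only an illustration under stated assumptions: the original analysis was performed in SPSS 15.0, and the column names and simulated numbers below are hypothetical placeholders, not the study data.

```python
# Minimal sketch of the exposure averaging, log-transformation, correlation and
# confounder-adjusted linear regression described in the Methods above.
# Assumption: all variable names and simulated values are hypothetical placeholders.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# One exposure value per pollutant: the mean of the seven 24-hour means recorded
# during the 7 days before blood sampling (illustrative numbers only).
daily = pd.DataFrame({
    "pm10": [110, 125, 118, 130, 122, 115, 121],  # μg/m3
    "psi":  [95, 102, 98, 110, 101, 97, 99],
})
exposure = daily.mean()

# Participant-level data (n = 118) with simulated biomarker and covariate values.
df = pd.DataFrame({
    "tf": rng.lognormal(4.1, 0.25, 118),            # tissue factor, pg/mL
    "pm10": rng.normal(exposure["pm10"], 10, 118),  # assigned exposure
    "age": rng.integers(10, 19, 118),
    "sex": rng.integers(0, 2, 118),
    "bmi": rng.normal(19, 3, 118),
    "waist": rng.normal(68, 8, 118),
    "hei": rng.normal(70, 10, 118),
    "activity": rng.normal(2.5, 0.7, 118),
})

# Log-transform the skewed biomarker concentration to approximate normality.
df["log_tf"] = np.log(df["tf"])

# Pearson correlation between a pollutant and the log-transformed biomarker.
r, p = stats.pearsonr(df["pm10"], df["log_tf"])

# Multiple linear regression adjusted for age, sex, BMI, waist circumference,
# Healthy Eating Index and physical activity, mirroring the text.
fit = smf.ols("log_tf ~ pm10 + age + sex + bmi + waist + hei + activity", data=df).fit()
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
print("adjusted PM10 coefficient:", fit.params["pm10"], "SE:", fit.bse["pm10"])
```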
null
null
null
null
[ "Background", "Participants", "Study area", "Clinical study and laboratory methods", "Air pollution data", "Statistical analysis", "Results", "Discussion", "Study limitations& strengths", "Conclusion", "Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Today, air pollution is one of the major health threats both in developing and developed countries [1]. Children are more sensitive than adults to health effects of air pollutants; this might be because of the developmental changes in their respiratory system and their higher number of breaths per minute than adults. Moreover, compared to adults they may spend more time outdoors, and their activity level is higher [2,3].\nThe harmful effects of air pollution on cardiovascular system are well-documented, but the underlying mechanisms remain to be determined. A recent experimental study showed for the first time that pulmonary exposure to the particulate matter (PM) within diesel exhaust enhances atherogenesis [4]. The human blood vessel endothelium is a sensitive target for air pollutants [5]. The interactions of the inflammation and coagulation systems are of the main mechanisms involved in impairment of endothelial function and eventually cardiovascular diseases [6].\nThe effect of air pollution on inflammation, oxidative stress and cardiovascular risk factors has been demonstrated not only in older adults [7,8], but also in young adults [9] as well as in children and adolescents [10,11].\nThe inflammation process stimulates the coagulation system, and result in increased secretion of tissue factor (TF). Endothelial function has key roles in anticoagulant and fibrinolytic systems. In vitro studies have demonstrated significant decrease in endogenous anticoagulation activity, thrombomodulin (TM), endothelial protein C receptor antigen and culture of endothelial cells during the inflammation process [12-14].\nA growing body of evidence suggests that the effects of air pollution on the inflammation and the coagulation systems may have a role in endothelial dysfunction and in turn in the progression of cardiovascular diseases [15-17]. Findings of experimental studies suggest that exposure to air pollution may result in increase in TF and decrease in TM [18,19].\nAtherogenesis starts from the fetal life through interrelations of traditional risk factors with inflammatory, immune, and endothelial biomarkers [20]. Air pollution has various harmful effects on this process from early life [21-23]. Studying the effects of environmental factors on early stages of atherosclerosis in early life can help identify the underlying mechanisms.\nTo the best of our knowledge, no previous human study had determined the association of air pollutants with TF and TM in the pediatric age group. In the current study, we aimed to determine the relationship of air pollution with TS and TM, as surrogate markers of endothelial dysfunction, in a population-based sample of children and adolescents.", "Those children and adolescents were eligible who were aged 10 to 18 years, lived for at least 6 months in areas of the city which had air pollution measurement stations, and their homes and schools were located less than 1 kilometer far from these stations. Those individuals who had a history of active or passive smoking, chronic disease, long-term medication use, or a history of acute infectious diseases in the past two weeks were not included in the study.\nBy considering a power of 95% and the statistical significance of 5%, the sample size was calculated as 110, but because of possible attrition of participants, the study was conducted on 125 students. 
To avoid socioeconomic bias, they were selected by multistage-random cluster sampling, with consideration to the proportion of the different types of schools (public/private).", "Isfahan is an industrial city with a population of near 1894382, located in the center of Iranian plateau, with an average altitude of 1500 m from the sea level bounded by NW-SE mountain range of 3000 m. The average monthly temperature is 16°C with maximum 29°C in July and minimum 3°C in December with mild winds from west and south. Also the air of the city of Isfahan is predominantly affected by industrial emissions and motor traffic which can lead to a buildup of elevated concentrations during stagnant conditions [24,25].", "After inviting the selected students to a health center, a trained nurse completed a questionnaire on demographic data, and physical examination was done by the same trained general practitioner and under the supervision of the same pediatrician. Subcutaneous fat of the biceps and triceps muscles were measured with a skinfold caliper (Mojtahedi, Iran), the percent body fat was determined by bio-electrical impedence using a Body Fat Monitor (Omron HBF-300, Japan).\nFor assessment of dietary habits, the Healthy Eating Index (HEI) was computed as described before [10]. Physical activity level was assessed by the international Physical Activity Questionnaire for Children [26], previously validated in Iranian children [27].\nWhile one of parents accompanied his or her child, blood samples were taken from the ante-cubital vein. Serums were obtained after centrifuging blood samples, and were kept frozen at -70°C until assayed and analyzed in the laboratory of the Applied Physiology Research Center affiliated to Isfahan University of Medical Sciences. Enzyme-linked immunosorbent assay (ELISA) kits from Abcam Company (UK) with code Ab46508 were used for measurement of TM, and TF ELISA kits R&D Company (UK) with code DCF300 for laboratory examinations.", "The mean daily temperature, sunlight duration, humidity and wind speed were recorded. Data from 5 air pollution measurement stations in Isfahan city were recorded daily for the 7 days prior to blood sampling. Daily data pertaining to main air pollutants, i.e. sulfur dioxide (SO2), Ozone (O3), PM10, Nitrogen dioxide (NO2) and carbon monoxide (CO) as well as the Pollutant Standards Index (PSI) were recorded. The mean values of seven 24-hour means of air pollutants and PSI were considered for statistical analysis.", "SPSS for Windows (version 15:0, SPSS Inc., Chicago, IL) was used for data analysis. Analyses were initially stratified by gender, but as the differences were not significant, results are presented for girls and boys combined. We used log-transformed concentrations of variables to achieve normal distributions. The relationships of air pollutants with serum TM and TF were determined by Pearson correlation test. The associations between air pollutants and markers of endothelial dysfunction were assessed by multiple linear regression after adjustment for age, gender, body mass index, waist circumference, healthy eating index and physical activity level. The concentrations of biomarkers and air pollutants were categorized to quartiles, and the upper quartile was considered as elevated value. As there is no cutoff value to determine the high and low levels of TM and TF in the pediatric age group, we considered their lowest quartiles as low, and their highest quartiles as high levels. 
Then, we examined the association of the dichotomized concentrations of these biomarkers (upper quartile vs. lower quartile) across the quartiles of air pollutants by using logistic regression analysis after adjustment for the abovementioned confounders. The significance level was set at p < 0.05 (a brief code sketch of this quartile-based step is given after this block of section texts).", "Data of 118 out of the 125 students studied were complete and included in the statistical analysis. The study participants consisted of 57 (48.3%) boys and 61 (51.7%) girls with a mean age of 12.79 ± 2.35 years.\nThe mean (SD) of the anthropometric and biochemical variables of the study participants is presented in Table 1. TF had a mean level of 64.77 ± 17.35 pg/mL, with the following quartile (Q) ranges: Q1: 27.11-63.79; Q2: 63.80-78.84; Q3: 78.85-89.17; Q4: 89.18-102.74. The corresponding figures for TM were 5.64 ± 3.27 ng/mL, Q1: 2.18-5.14; Q2: 5.15-9.27; Q3: 9.28-14.25; Q4: 14.26-15.71, respectively.\nCharacteristics of study participants\nSD: standard deviation\nThe environmental characteristics are presented in Table 2. It shows a moderate mean PSI, i.e. a level inappropriate for sensitive groups. Mean levels of O3, NO2 and SO2 were higher than acceptable values; the mean PM10 level was remarkably high, reaching more than twice the normal level (120.48 vs. 50 μg/m3).\nEnvironmental characteristics during the study period\nSD: standard deviation; PM10: particulate matter 10 (acceptable level: 50 μg/m3); CO: carbon monoxide (acceptable level: 9 ppm); SO2: sulfur dioxide (acceptable level: 0.03 ppb)\nNO2: nitrogen dioxide (acceptable level: 0.05 ppb); O3: ozone (acceptable level: 0.08 ppb)\nPSI: Pollutant Standards Index (0-50: Good; 51-100: Moderate; 101-199: Unhealthful; 200-299: Very unhealthful; ≥ 300: Hazardous)\nResults of the Pearson correlation analyses of air pollutant levels with serum markers showed that PSI and PM10 had a significant correlation with TF (r = 0.3, p = 0.001) and a non-significant inverse correlation with TM. CO had a weak but significant correlation with TM and TF (r = 0.25, p = 0.01).\nMultiple linear regression analysis showed that, after adjustment for confounding factors, serum TF level had a significant relationship with PSI. This relationship also held, to varying degrees, for the individual air pollutants, notably PM10 (Table 3).\nRegression coefficients* for the relation of air pollutants and Pollutant Standards Index with serum concentrations of biomarkers\n*: Adjusted for age, gender, anthropometric measures, as well as dietary and physical activity habits\nSE: standard error; PM10: particulate matter 10 (μg/m3); CO: carbon monoxide (ppm); SO2: sulfur dioxide (ppb); NO2: nitrogen dioxide (ppb); O3: ozone (ppb); PSI: Pollutant Standards Index\nThe odds ratio (OR) of elevated TF increased as the quartiles of PM10, O3 and PSI increased; however, these associations reached a significant level only in the highest quartiles of PM10 and PSI. The corresponding figures for TM were in the opposite direction, i.e. the OR was lower in the highest vs. lowest quartiles of PM10, O3 and PSI (Table 4).\nAssociation* of the quartiles of air pollutants and Pollutant Standards Index with the upper quartile of tissue factor and thrombomodulin\n*: Values represent odds ratios (95% CI) adjusted for age, gender, anthropometric measures, and dietary and physical activity habits\nTF: tissue factor (pg/mL); TM: thrombomodulin (μmol/L); PM10: particulate matter 10 (μg/m3); CO: carbon monoxide (ppm); SO2: sulfur dioxide (ppb); NO2: nitrogen dioxide (ppb); O3: ozone (ppb); PSI: Pollutant Standards Index", "This study, which to the best of our knowledge is the first of its kind in the pediatric age group, revealed a significant association of air pollutants (notably PM10 and O3) and PSI with surrogate markers of endothelial dysfunction in early life. Increased levels of PM10, O3 and PSI increased the OR of elevated TF and of reduced TM.\nFindings of different studies, mostly experimental, have linked air pollution with various predisposing factors of cardiovascular diseases, i.e. the progression of atherosclerosis, endothelial dysfunction, vasoconstriction, high blood pressure, changes in the coagulation system, inflammation, oxidative stress, autonomic imbalance and arrhythmias [28]. The latest statement of the American Heart Association provides a growing body of evidence for a causal relationship between ultrafine PM (PM2.5) exposure and cardiovascular morbidity and mortality [29].\nOur findings provide confirmatory evidence for the health effects of PM; however, similar to our previous study in the pediatric age group [10], we found associations between larger particulate matter (PM10) and factors associated with atherosclerotic cardiovascular disease. This might suggest a higher susceptibility of the cardiovascular system of children than of adults to the threats of air pollutants.\nWhatever the underlying pathophysiological mechanisms, acute and chronic exposures to air pollutants, notably PM, have been linked to a wide spectrum of cardiovascular disorders characterized by endothelial dysfunction [30]. Episodic exposure to PM2.5 induces vascular injury, reflected in part by depletion of circulating endothelial progenitor cells [31]. Adverse vascular effects of diesel exhaust inhalation occur over different running conditions [32]; e.g. PM2.5 may be an important environmental risk factor for increased arterial blood pressure [33]. A double-blind study of fifteen healthy men exposed to diesel exhaust (PM concentration 300 μg/m3) showed a selective and persistent impairment of endothelium-dependent vasodilatation that occurs in the presence of mild systemic inflammation [34]. A study among seniors showed that increases in black carbon and PM2.5 were associated with increases in blood pressure, heart rate, endothelin-1, vascular endothelial growth factor and oxidative stress, along with a decrease in brachial artery diameter [35]. The vascular effects of air pollution have also been confirmed in young adults [36] and in children [11].\nAlthough the interaction of coagulation, inflammation and endothelial dysfunction is well documented, different effects of air pollutants on these systems have been reported, and the mechanisms underlying the associations of air pollutants with endothelial function are unclear. The interaction between the inflammation and coagulation systems is reciprocal, with protein C and TF playing a key role in this process [37,38]. 
We found a significant association between PSI and TF, and among the air pollutants the highest correlation was documented for PM10 and TF. This finding is consistent with various experimental studies showing that exposure to ultrafine PM increases TF expression in atherosclerotic lesions [39-41]. A study on cultured human smooth muscle cells and monocytes documented dose-dependent increases in TF in response to in vivo and in vitro exposure to ambient PM2.5 [42]. Some studies support potential pro-coagulant and thrombotic effects of PM [40], not only of its ultrafine particles but also of PM10 [43].\nFindings of a study on 37 workers in a steel production plant revealed that short-term PM exposure is weakly associated with TM, but it did not show any effect on the coagulation system [44]. On the other hand, an experimental study suggested that exposure to the soluble organic fraction of PM and diesel exhaust induced oxidative stress and reduced the PAI-1 production of endothelial cells, but it did not affect TM production [45]. It is suggested that PM activates circulating monocytes directly or indirectly; it might stimulate other cells, such as pulmonary endothelial cells, and might induce pulmonary and/or systemic inflammation [46]. However, in an experimental study, ultrafine PM was associated with prothrombotic changes on the endothelial surface but did not trigger an inflammatory reaction and did not induce microvascular tissue injury [47].\nA longitudinal study in the US found a significant association between black carbon and markers of endothelial function and inflammation, with a larger effect in obese individuals [48]. In the current study, similar to our previous experience among children and adolescents [10], the association of air pollutants with biochemical markers remained significant even after adjustment for anthropometric measures. This difference might be because of the cross-sectional nature of our studies and the higher susceptibility of the pediatric age group in our study than the older age group in the abovementioned study. Moreover, genetic differences related to oxidative defense and stress response gene expression [41,48] should be considered as well.\nHarmful effects of air pollution are mostly attributed to its PM content [29]; however, exposure to other air pollutants might have various health consequences [49]. Different air pollutants can have independent and possibly synergistic or opposing effects with each other and with PM, and the health impact of exposure to combinations of air pollutants remains to be determined [29].\nIn an experimental study, asbestos and mineral fibers affected TM levels in human umbilical vein endothelial cells [50]. Furthermore, it is documented that a single O3 exposure might induce a significant biological response in TM level and inflammatory and pro-coagulant reactions in the lungs of mice [18]. Consistent with this study, we found that O3 level was significantly associated with TF and TM. Although the only significant relationships of TM with air pollutants were its inverse associations with O3 followed by CO, in the higher quartiles of PSI, PM10 and O3 the OR of elevated TM decreased, and the opposite was documented for TF. This finding may have implications for understanding the systemic effects and possible pro-coagulant state induced by air pollutants; additional studies in this regard seem warranted.\nIsfahan is the second most polluted industrial city in Iran, where the number of factories, cars and motorcycles is rapidly increasing [24,25]. 
Although during the time period of the current study the urban air had a moderate PSI level in general, air pollutants had an independent association with surrogate markers of endothelial dysfunction. This association might be because of the vulnerability of children to environmental threats, and/or due to the considerably high level of PM10, which was more than twice the standard. Furthermore, this association might be the result of the long-term exposure of the children studied to improper air quality year-round. In addition to the effects of air pollution on coronary artery disease and mortality in the elderly [51,52], which are the main focus of many statements, the impact of air pollutants on children's health should also be considered a public health priority.\n[SUBTITLE] Study limitations & strengths [SUBSECTION] The findings of the current study should be considered in light of its limitations. As with all ecological studies, this study is limited by the lack of precise exposure estimates and by the fact that ambient pollution concentrations may not adequately reflect the exposures of individual subjects. Because of the cross-sectional nature of the study, cause-effect relations cannot be inferred. We compared our findings across areas with different levels of air pollution; a repeated-measures design, particularly one capturing episodes of mild and severe pollution, might provide more information. It is noteworthy that the existing equipment was unable to measure finer particles such as PM2.5; although we found a significant association of the larger particles (PM10) with the biomarkers studied, studying ultrafine particles might result in stronger associations. Moreover, we measured systemic biomarkers; a more localized investigation, e.g. assessment of the lung tissue inflammatory response in broncho-alveolar lavage, may yield better results.\nThe strengths of this study are mainly its novelty in the pediatric age group and its assessment of potential confounding factors, which were controlled in our analysis of the independent association of surrogate markers of endothelial dysfunction with air pollutants in a representative population-based sample of healthy children.", "The findings of the current study should be considered in light of its limitations. As with all ecological studies, this study is limited by the lack of precise exposure estimates and by the fact that ambient pollution concentrations may not adequately reflect the exposures of individual subjects. Because of the cross-sectional nature of the study, cause-effect relations cannot be inferred. We compared our findings across areas with different levels of air pollution; a repeated-measures design, particularly one capturing episodes of mild and severe pollution, might provide more information. It is noteworthy that the existing equipment was unable to measure finer particles such as PM2.5; although we found a significant association of the larger particles (PM10) with the biomarkers studied, studying ultrafine particles might result in stronger associations. Moreover, we measured systemic biomarkers; a more localized investigation, e.g. assessment of the lung tissue inflammatory response in broncho-alveolar lavage, may yield better results.\nThe strengths of this study are mainly its novelty in the pediatric age group and its assessment of potential confounding factors, which were controlled in our analysis of the independent association of surrogate markers of endothelial dysfunction with air pollutants in a representative population-based sample of healthy children.", "This study underscores the independent association of air pollutants with surrogate markers of endothelial dysfunction and a possible pro-coagulant state. The presence of these associations with PM10, which is larger than the PM2.5 usually considered harmful, and under moderate air quality, which is commonly considered to have few or no health effects for the general population, highlights the need to re-examine environmental health policies and standards for the pediatric age group. Further studies on the effects of air pollution on the first stages of atherosclerosis in early life are needed. Concerns about the harmful effects of air pollution on children's health should be considered a top priority for public health policy; this should be underscored in the primordial and primary prevention of chronic diseases.", "PSI: Pollutant Standards Index; PM: Particulate matter; TF: Tissue factor; TM: Thrombomodulin; SO2: Sulfur dioxide; O3: Ozone; NO2: Nitrogen dioxide; CO: Carbon monoxide; ELISA: Enzyme-linked immunosorbent assay; SD: Standard deviation; OR: Odds ratio", "The authors declare that they have no competing interests.", "PP participated in the design and conducting the study and helped to draft the manuscript. RK participated in the design and conducting the study and helped to draft and edit the manuscript. AL helped in conducting the study. MM participated in the design of the study and its conduction. SHJ participated in the design of the study and its conduction. RA participated in the design and conducting the study. MMA participated in the design of the study and helped to revise the manuscript. FM participated in the design of the study and helped to revise the manuscript. AA helped in conducting the study. 
BS participated in conducting the study. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/115/prepub\n" ]
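The quartile-based dichotomization and adjusted logistic regression referenced in the Statistical analysis text above (and summarized in Table 4) can be sketched as follows. Again, this is only an illustration: the variable names and simulated values are hypothetical placeholders, only a subset of the paper's covariates is included, and the published odds ratios come from the authors' SPSS analysis rather than from this code.

```python
# Minimal sketch of the upper-vs-lower-quartile outcome coding and the adjusted
# logistic regression across pollutant quartiles described in the text above.
# Assumption: all column names and simulated numbers are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "tf": rng.normal(65, 17, 118),      # tissue factor, pg/mL
    "pm10": rng.normal(120, 15, 118),   # μg/m3
    "age": rng.integers(10, 19, 118),
    "sex": rng.integers(0, 2, 118),
    "bmi": rng.normal(19, 3, 118),
})

# Dichotomize the biomarker: upper quartile = elevated (1), lower quartile = reference (0);
# the middle half is excluded, as implied by the upper-vs-lower-quartile comparison.
q1, q3 = df["tf"].quantile([0.25, 0.75])
df["tf_high"] = np.select([df["tf"] >= q3, df["tf"] <= q1], [1.0, 0.0], default=np.nan)

# Categorize the pollutant into quartiles; Q1 is the reference category.
df["pm10_q"] = pd.qcut(df["pm10"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Adjusted logistic regression (only a subset of the paper's covariates shown);
# exponentiated coefficients give the odds ratio for each quartile vs. Q1.
sub = df.dropna(subset=["tf_high"])
fit = smf.logit("tf_high ~ C(pm10_q) + age + sex + bmi", data=sub).fit(disp=False)
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```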
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Participants", "Study area", "Clinical study and laboratory methods", "Air pollution data", "Statistical analysis", "Results", "Discussion", "Study limitations& strengths", "Conclusion", "Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Today, air pollution is one of the major health threats both in developing and developed countries [1]. Children are more sensitive than adults to health effects of air pollutants; this might be because of the developmental changes in their respiratory system and their higher number of breaths per minute than adults. Moreover, compared to adults they may spend more time outdoors, and their activity level is higher [2,3].\nThe harmful effects of air pollution on cardiovascular system are well-documented, but the underlying mechanisms remain to be determined. A recent experimental study showed for the first time that pulmonary exposure to the particulate matter (PM) within diesel exhaust enhances atherogenesis [4]. The human blood vessel endothelium is a sensitive target for air pollutants [5]. The interactions of the inflammation and coagulation systems are of the main mechanisms involved in impairment of endothelial function and eventually cardiovascular diseases [6].\nThe effect of air pollution on inflammation, oxidative stress and cardiovascular risk factors has been demonstrated not only in older adults [7,8], but also in young adults [9] as well as in children and adolescents [10,11].\nThe inflammation process stimulates the coagulation system, and result in increased secretion of tissue factor (TF). Endothelial function has key roles in anticoagulant and fibrinolytic systems. In vitro studies have demonstrated significant decrease in endogenous anticoagulation activity, thrombomodulin (TM), endothelial protein C receptor antigen and culture of endothelial cells during the inflammation process [12-14].\nA growing body of evidence suggests that the effects of air pollution on the inflammation and the coagulation systems may have a role in endothelial dysfunction and in turn in the progression of cardiovascular diseases [15-17]. Findings of experimental studies suggest that exposure to air pollution may result in increase in TF and decrease in TM [18,19].\nAtherogenesis starts from the fetal life through interrelations of traditional risk factors with inflammatory, immune, and endothelial biomarkers [20]. Air pollution has various harmful effects on this process from early life [21-23]. Studying the effects of environmental factors on early stages of atherosclerosis in early life can help identify the underlying mechanisms.\nTo the best of our knowledge, no previous human study had determined the association of air pollutants with TF and TM in the pediatric age group. In the current study, we aimed to determine the relationship of air pollution with TS and TM, as surrogate markers of endothelial dysfunction, in a population-based sample of children and adolescents.", "This cross-sectional study was conducted from November 2009 to February 2010 in Isfahan, which is the second large and air-polluted city in Iran. The study was approved in the Research Council &Ethics Committee of the School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran. It was conducted according to the Declaration of Helsinki. After providing detailed oral information, we obtained written informed consent from the parents and oral assent from participants.\n[SUBTITLE] Participants [SUBSECTION] Those children and adolescents were eligible who were aged 10 to 18 years, lived for at least 6 months in areas of the city which had air pollution measurement stations, and their homes and schools were located less than 1 kilometer far from these stations. 
Those individuals who had a history of active or passive smoking, chronic disease, long-term medication use, or a history of acute infectious diseases in the past two weeks were not included in the study.\nBy considering a power of 95% and the statistical significance of 5%, the sample size was calculated as 110, but because of possible attrition of participants, the study was conducted on 125 students. To avoid socioeconomic bias, they were selected by multistage-random cluster sampling, with consideration to the proportion of the different types of schools (public/private).\nThose children and adolescents were eligible who were aged 10 to 18 years, lived for at least 6 months in areas of the city which had air pollution measurement stations, and their homes and schools were located less than 1 kilometer far from these stations. Those individuals who had a history of active or passive smoking, chronic disease, long-term medication use, or a history of acute infectious diseases in the past two weeks were not included in the study.\nBy considering a power of 95% and the statistical significance of 5%, the sample size was calculated as 110, but because of possible attrition of participants, the study was conducted on 125 students. To avoid socioeconomic bias, they were selected by multistage-random cluster sampling, with consideration to the proportion of the different types of schools (public/private).\n[SUBTITLE] Study area [SUBSECTION] Isfahan is an industrial city with a population of near 1894382, located in the center of Iranian plateau, with an average altitude of 1500 m from the sea level bounded by NW-SE mountain range of 3000 m. The average monthly temperature is 16°C with maximum 29°C in July and minimum 3°C in December with mild winds from west and south. Also the air of the city of Isfahan is predominantly affected by industrial emissions and motor traffic which can lead to a buildup of elevated concentrations during stagnant conditions [24,25].\nIsfahan is an industrial city with a population of near 1894382, located in the center of Iranian plateau, with an average altitude of 1500 m from the sea level bounded by NW-SE mountain range of 3000 m. The average monthly temperature is 16°C with maximum 29°C in July and minimum 3°C in December with mild winds from west and south. Also the air of the city of Isfahan is predominantly affected by industrial emissions and motor traffic which can lead to a buildup of elevated concentrations during stagnant conditions [24,25].\n[SUBTITLE] Clinical study and laboratory methods [SUBSECTION] After inviting the selected students to a health center, a trained nurse completed a questionnaire on demographic data, and physical examination was done by the same trained general practitioner and under the supervision of the same pediatrician. Subcutaneous fat of the biceps and triceps muscles were measured with a skinfold caliper (Mojtahedi, Iran), the percent body fat was determined by bio-electrical impedence using a Body Fat Monitor (Omron HBF-300, Japan).\nFor assessment of dietary habits, the Healthy Eating Index (HEI) was computed as described before [10]. Physical activity level was assessed by the international Physical Activity Questionnaire for Children [26], previously validated in Iranian children [27].\nWhile one of parents accompanied his or her child, blood samples were taken from the ante-cubital vein. 
Serums were obtained after centrifuging blood samples, and were kept frozen at -70°C until assayed and analyzed in the laboratory of the Applied Physiology Research Center affiliated to Isfahan University of Medical Sciences. Enzyme-linked immunosorbent assay (ELISA) kits from Abcam Company (UK) with code Ab46508 were used for measurement of TM, and TF ELISA kits R&D Company (UK) with code DCF300 for laboratory examinations.\nAfter inviting the selected students to a health center, a trained nurse completed a questionnaire on demographic data, and physical examination was done by the same trained general practitioner and under the supervision of the same pediatrician. Subcutaneous fat of the biceps and triceps muscles were measured with a skinfold caliper (Mojtahedi, Iran), the percent body fat was determined by bio-electrical impedence using a Body Fat Monitor (Omron HBF-300, Japan).\nFor assessment of dietary habits, the Healthy Eating Index (HEI) was computed as described before [10]. Physical activity level was assessed by the international Physical Activity Questionnaire for Children [26], previously validated in Iranian children [27].\nWhile one of parents accompanied his or her child, blood samples were taken from the ante-cubital vein. Serums were obtained after centrifuging blood samples, and were kept frozen at -70°C until assayed and analyzed in the laboratory of the Applied Physiology Research Center affiliated to Isfahan University of Medical Sciences. Enzyme-linked immunosorbent assay (ELISA) kits from Abcam Company (UK) with code Ab46508 were used for measurement of TM, and TF ELISA kits R&D Company (UK) with code DCF300 for laboratory examinations.\n[SUBTITLE] Air pollution data [SUBSECTION] The mean daily temperature, sunlight duration, humidity and wind speed were recorded. Data from 5 air pollution measurement stations in Isfahan city were recorded daily for the 7 days prior to blood sampling. Daily data pertaining to main air pollutants, i.e. sulfur dioxide (SO2), Ozone (O3), PM10, Nitrogen dioxide (NO2) and carbon monoxide (CO) as well as the Pollutant Standards Index (PSI) were recorded. The mean values of seven 24-hour means of air pollutants and PSI were considered for statistical analysis.\nThe mean daily temperature, sunlight duration, humidity and wind speed were recorded. Data from 5 air pollution measurement stations in Isfahan city were recorded daily for the 7 days prior to blood sampling. Daily data pertaining to main air pollutants, i.e. sulfur dioxide (SO2), Ozone (O3), PM10, Nitrogen dioxide (NO2) and carbon monoxide (CO) as well as the Pollutant Standards Index (PSI) were recorded. The mean values of seven 24-hour means of air pollutants and PSI were considered for statistical analysis.\n[SUBTITLE] Statistical analysis [SUBSECTION] SPSS for Windows (version 15:0, SPSS Inc., Chicago, IL) was used for data analysis. Analyses were initially stratified by gender, but as the differences were not significant, results are presented for girls and boys combined. We used log-transformed concentrations of variables to achieve normal distributions. The relationships of air pollutants with serum TM and TF were determined by Pearson correlation test. The associations between air pollutants and markers of endothelial dysfunction were assessed by multiple linear regression after adjustment for age, gender, body mass index, waist circumference, healthy eating index and physical activity level. 
The concentrations of biomarkers and air pollutants were categorized to quartiles, and the upper quartile was considered as elevated value. As there is no cutoff value to determine the high and low levels of TM and TF in the pediatric age group, we considered their lowest quartiles as low, and their highest quartiles as high levels. Then, we examined the association of the dichotomized concentrations of these biomarkers (upper quartile vs. lower quartile) across the quartiles of air pollutants by using logistic regression analysis after adjustment for the abovementioned confounders. The significance level was set at p < 0.05.\nSPSS for Windows (version 15:0, SPSS Inc., Chicago, IL) was used for data analysis. Analyses were initially stratified by gender, but as the differences were not significant, results are presented for girls and boys combined. We used log-transformed concentrations of variables to achieve normal distributions. The relationships of air pollutants with serum TM and TF were determined by Pearson correlation test. The associations between air pollutants and markers of endothelial dysfunction were assessed by multiple linear regression after adjustment for age, gender, body mass index, waist circumference, healthy eating index and physical activity level. The concentrations of biomarkers and air pollutants were categorized to quartiles, and the upper quartile was considered as elevated value. As there is no cutoff value to determine the high and low levels of TM and TF in the pediatric age group, we considered their lowest quartiles as low, and their highest quartiles as high levels. Then, we examined the association of the dichotomized concentrations of these biomarkers (upper quartile vs. lower quartile) across the quartiles of air pollutants by using logistic regression analysis after adjustment for the abovementioned confounders. The significance level was set at p < 0.05.", "Those children and adolescents were eligible who were aged 10 to 18 years, lived for at least 6 months in areas of the city which had air pollution measurement stations, and their homes and schools were located less than 1 kilometer far from these stations. Those individuals who had a history of active or passive smoking, chronic disease, long-term medication use, or a history of acute infectious diseases in the past two weeks were not included in the study.\nBy considering a power of 95% and the statistical significance of 5%, the sample size was calculated as 110, but because of possible attrition of participants, the study was conducted on 125 students. To avoid socioeconomic bias, they were selected by multistage-random cluster sampling, with consideration to the proportion of the different types of schools (public/private).", "Isfahan is an industrial city with a population of near 1894382, located in the center of Iranian plateau, with an average altitude of 1500 m from the sea level bounded by NW-SE mountain range of 3000 m. The average monthly temperature is 16°C with maximum 29°C in July and minimum 3°C in December with mild winds from west and south. Also the air of the city of Isfahan is predominantly affected by industrial emissions and motor traffic which can lead to a buildup of elevated concentrations during stagnant conditions [24,25].", "After inviting the selected students to a health center, a trained nurse completed a questionnaire on demographic data, and physical examination was done by the same trained general practitioner and under the supervision of the same pediatrician. 
Subcutaneous fat of the biceps and triceps muscles were measured with a skinfold caliper (Mojtahedi, Iran), the percent body fat was determined by bio-electrical impedence using a Body Fat Monitor (Omron HBF-300, Japan).\nFor assessment of dietary habits, the Healthy Eating Index (HEI) was computed as described before [10]. Physical activity level was assessed by the international Physical Activity Questionnaire for Children [26], previously validated in Iranian children [27].\nWhile one of parents accompanied his or her child, blood samples were taken from the ante-cubital vein. Serums were obtained after centrifuging blood samples, and were kept frozen at -70°C until assayed and analyzed in the laboratory of the Applied Physiology Research Center affiliated to Isfahan University of Medical Sciences. Enzyme-linked immunosorbent assay (ELISA) kits from Abcam Company (UK) with code Ab46508 were used for measurement of TM, and TF ELISA kits R&D Company (UK) with code DCF300 for laboratory examinations.", "The mean daily temperature, sunlight duration, humidity and wind speed were recorded. Data from 5 air pollution measurement stations in Isfahan city were recorded daily for the 7 days prior to blood sampling. Daily data pertaining to main air pollutants, i.e. sulfur dioxide (SO2), Ozone (O3), PM10, Nitrogen dioxide (NO2) and carbon monoxide (CO) as well as the Pollutant Standards Index (PSI) were recorded. The mean values of seven 24-hour means of air pollutants and PSI were considered for statistical analysis.", "SPSS for Windows (version 15:0, SPSS Inc., Chicago, IL) was used for data analysis. Analyses were initially stratified by gender, but as the differences were not significant, results are presented for girls and boys combined. We used log-transformed concentrations of variables to achieve normal distributions. The relationships of air pollutants with serum TM and TF were determined by Pearson correlation test. The associations between air pollutants and markers of endothelial dysfunction were assessed by multiple linear regression after adjustment for age, gender, body mass index, waist circumference, healthy eating index and physical activity level. The concentrations of biomarkers and air pollutants were categorized to quartiles, and the upper quartile was considered as elevated value. As there is no cutoff value to determine the high and low levels of TM and TF in the pediatric age group, we considered their lowest quartiles as low, and their highest quartiles as high levels. Then, we examined the association of the dichotomized concentrations of these biomarkers (upper quartile vs. lower quartile) across the quartiles of air pollutants by using logistic regression analysis after adjustment for the abovementioned confounders. The significance level was set at p < 0.05.", "Data of 118 out of the125 students studied was complete and included in the statistical analysis. The study participants consisted of 57 (48.3%) boys and 61 (51.7%) girls with a mean age of 12.79 ± 2.35 years.\nThe mean (SD) of anthropometric and biochemical variables of the study participants is presented in Table 1. TF had a mean level of 64.77 ± 17.35 pg/mL, with the following quartile (Q) ranges: Q1: 27.11-63.79; Q2:63.80-78.84; Q3:78.85-89.17; Q4: 89.18-102.74. 
The corresponding figures for TM were 5.64 ± 3.27 ng/mL, Q1: 2.18-5.14; Q2: 5.15-9.27; Q3: 9.28-14.25; Q4:14.26-15.71, respectively.\nCharacteristics of study participants\nSD: standard deviation\nThe environmental characteristics are presented in Table 2. It shows moderate levels of mean PSI, i.e. an inappropriate level for sensitive groups. Mean levels of O3, NO2 and SO2 were higher than acceptable values; the mean PM10 level was remarkably high, reaching more than twice the normal level (120.48 vs. 50 μg/m3).\nEnvironmental characteristics during the study period\nSD: standard deviation; PM10: particular matter 10 (acceptable level: 50 μg/m3); CO: carbon monoxide (acceptable level: 9 ppm); SO2: sulfur dioxide (acceptable level: 0.03 ppb)\nNO2: Nitrogen dioxide (acceptable level: 0.05 ppb); O3: ozone (acceptable level: 0.08 ppb)\nPSI; Pollution Standards Index (0-50: Good; 51-100: Moderate; 101-199: Unhealthful; 200-299: Very unhealthful; ≥ 300: Hazardous)\nResults of the Pearson correlation analyses of air pollutants level with serum markers showed that PSI and PM10 had significant correlation with TF (r = 0.3, p = 0.001) and non-significant inverse correlation with TM. CO had weak but significant correlation with TM and TF (r = 0.25, p = 0.01).\nMultiple linear regression analysis showed that after adjustment for confounding factors, serum TF level had significant relationship with PSI. This relationship existed more or less with air pollutants, notably PM10 (Table 3).\nRegression coefficients* for the relation of air pollutants and Pollutant Standards Index with serum concentrations of biomarkers\n*: Adjusted for age, gender, anthropometric measures, as well as dietary and physical activity habits\nSE: standard error; PM10: particular matter 10 ((μg/m3); CO: carbon monoxide(ppm); SO2: sulfur dioxide (ppb); NO2: Nitrogen dioxide (ppb); O3: ozone(ppb); PSI; Pollution Standards Index\nThe odds ratio (OR) of elevated TF increased as the quartiles of PM10, O3 and PSI increased; however these associations reached to significant level only in the highest quartile of PM10 and PSI. The corresponding figures for TM were in opposite direction, i.e. the OR was lower in the highest vs. lowest quartiles of PM10, O3 and PSI (Table 4).\nAssociatio n* of the quartiles of air pollutants and Pollutant Standards Index with upper quartile of tissue factor and thrombomodulin\n*: Values represent odds ratio (95%CI) adjusted for age, gender, anthropometric measures, dietary and physical activity habits\nTF: Tissue Factor(Pg/mL); TM: Thrombomodulin(μmol/L); PM10: particular matter 10((μg/m3); CO: carbon monoxide(ppm); SO2: sulfur dioxide(ppb); NO2: Nitrogen dioxide(ppb); O3: ozone(ppb); PSI; Pollution Standards Index", "This study, which to the best of our knowledge is the first of its kind in the pediatric age group, revealed significant association of air pollutants (notably PM10 and O3 ) and PSI with surrogate markers of endothelial dysfunction in early life. Increased levels of PM10, O3 and PSI increased the OR of elevated TF and reduced TM.\nFindings of different studies, mostly of experimental type, have linked air pollution with various predisposing factors of cardiovascular diseases, i.e. 
the progression of atherosclerosis, endothelial dysfunction, vasoconstriction, high blood pressure, changes in coagulation system, inflammation, oxidative stress, autonomic imbalance and arrhythmias [28].The latest statement of the American Heart Association provides growing body of evidence about a causal relationship between ultrafine PM (PM2.5) exposure and cardiovascular morbidity and mortality [29].\nOur findings provide confirmatory evidence for the health effects of PM, however similar to our previous study in the pediatric age group [10]; we found associations between larger particulate matter (PM10) and factors associated with atherosclerotic cardiovascular disease. This might suggest the higher susceptibility to the threats of air pollutants in the cardiovascular system of children than adults.\nWhatever the underlying pathophysiological mechanisms, acute and chronic exposures to air pollutants, notably PM, have been linked to a wide spectrum of cardiovascular disorders characterized by endothelial dysfunction [30]. Episodic exposure to PM 2.5 induces vascular injury, reflected in part by depletion of circulating endothelial progenitor cell [31]. Adverse vascular effects of diesel exhaust inhalation occur over different running conditions [32], e.g. PM 2.5 may be an important environmental risk factor for increased arterial blood pressure [33]. A double-blind study on fifteen healthy men exposed to diesel exhaust (PM concentration 300 μg/m3) showed a selective and persistent impairment of endothelium-dependent vasodilatation that occurs in the presence of mild systemic inflammation [34]. A study among seniors showed that increases in black carbon and PM2.5 were associated with increases in blood pressure, heart rate, endothelin-1, vascular endothelial growth factor and oxidative stress, along with a decrease in brachial artery diameter [35]. The vascular effects of air pollution are also confirmed in young adults [36] and in children [11] as well.\nAlthough the interaction of coagulation, inflammation and endothelial dysfunction is well documented, but different effects of air pollutants have been documented on these systems, and the mechanisms underlying the associations of air pollutants with endothelial function are unclear. The interaction between the inflammation and coagulation systems is reciprocal, with protein C and TF playing a key role in this process [37,38]. We found significant association between PSI and TF, and among air pollutants the highest correlation was documented for PM10 and TF. This finding is consistent with various experimental studies that showed exposure to ultrafine PM increase TF expression in atherosclerotic lesions [39-41]. Study on cultured human smooth muscle cells and monocytes documented dose-dependent increases in TF in response to in vivo and in vitro exposure to ambient PM2.5 [42]. Some studies support potential pro-coagulant and thrombotic effects of PM [40], not only its ultrafine particles but also PM10 [43].\nFindings of a study on 37 workers in a steel production plant revealed that short-term PM exposure is weakly associated with TM, but it did not show any effect on coagulation system [44]. On the other hand, an experimental study suggested that exposure to the soluble organic fraction of PM and diesel exhaust induced oxidative stress and reduced the PAI-1 production of endothelial cells, but it did not affect the TM production [45]. 
It is suggested that PM activates circulating monocytes directly or indirectly; it might stimulate other cells as pulmonary endothelial cells and might induce pulmonary and/or systemic inflammation [46]. However in an experimental study, ultrafine PM was associated with prothrombotic changes on the endothelial surface but did not trigger an inflammatory reaction and did not induce microvascular tissue injury [47].\nA longitudinal study in the US found significant association between black carbon and markers of endothelial function and inflammation, with larger effect in obese individuals [48]. In the current study, similar to our previous experience among children adolescents [10], the association of air pollutants with biochemical markers remained significant even after adjustment for anthropometric measures. This difference might be because of the cross-sectional nature of our studies and the higher susceptibility of the pediatric age group in our study than older age group in the abovementioned study. Moreover, the genetic differences related to oxidative defense and stress response gene expression [41,48] should be considered as well.\nHarmful effects of air pollution are mostly attributed to its PM content [29], however exposure to other air pollutants might have various health consequences [49]. Different air pollutants can have independent and possibly synergistic or opposed effects with each other and with PM; and the health impact of exposure to combinations of air pollutants remains to be determined [29].\nIn an experimental study, asbestos and mineral fibers affected TM level in human umbilical vein endothelial cells [50]. Furthermore, it is documented that a single O3 exposure might induce significant biological response in TM level, inflammatory and pro-coagulant reactions in the lungs of mice [18]. Consistent with this study, we found that O3 level was significantly associated with TF and TM. Although the only significant relationships of TM with air pollutants were its inverse associations with O3 followed by CO, but in higher quartiles of PSI, PM10 and O3, the OR of elevated TM decreased, and vice versa was documented for TF. This finding may have implications for understanding the systemic effects and possible pro-coagulant state induced by air pollutants; additional studies in this regard seem warranted.\nIsfahan is the second most polluted industrial city in Iran, where the number of factories, cars and motorcycles is rapidly increasing [24,25]. Although during the time period of the current study, the urban air had a moderate level of PSI in general, but air pollutants had independent association with surrogate markers of endothelial dysfunction. This association might be because of the vulnerability of children to environmental threats, and/or due to considerably high level of PM10, being more than twice as high as standard. Furthermore, this association might be the result of the long-term exposure of the children studied to improper air quality year-round. In addition to the effects of air pollution on coronary artery diseases in the elderly and mortality [51,52] as the main focus of many statements, the impact of air pollutants on children's health should be also considered as a public health priority.\n[SUBTITLE] Study limitations& strengths [SUBSECTION] The findings of the current study should be considered with its limitations. 
[SUBTITLE] Study limitations & strengths [SUBSECTION] The findings of the current study should be considered in light of its limitations. As with all ecological studies, this study is limited by the lack of precise exposure estimates, and the ambient pollution concentrations may not adequately reflect the exposures of individual subjects. Because of the cross-sectional nature of the study, cause-effect relations cannot be inferred. We compared our findings across areas with different levels of air pollution; a repeated-measures design, particularly one capturing episodes of mild and severe pollution, might provide more information. It is noteworthy that the existing equipment was unable to measure finer particles such as PM2.5; although we found significant associations of the larger particles (PM10) with the biomarkers studied, studying finer particles might yield stronger associations. Moreover, we measured systemic biomarkers; more localized investigation, e.g. assessment of the lung tissue inflammatory response in broncho-alveolar lavage, may yield further insight.\nThe strengths of this study are mainly its novelty in the pediatric age group and its assessment of, and adjustment for, potential confounding factors when studying the independent association of surrogate markers of endothelial dysfunction with air pollutants in a representative population-based sample of healthy children.", "The independent association of air pollutants with surrogate markers of endothelial dysfunction and a possible pro-coagulant state is underscored. The presence of these associations with PM10, a larger fraction than the PM2.5 usually considered harmful, and under moderate air quality, which is commonly considered to have few or no health effects for the general population, highlights the need to re-examine environmental health policies and standards for the pediatric age group. Further studies on the effects of air pollution on the first stages of atherosclerosis in early life are needed. Concerns about the harmful effects of air pollution on children's health should be considered a top priority for public health policy; they should be underscored in the primordial and primary prevention of chronic diseases.", "PSI: Pollutant Standards Index; PM: Particulate matter; TF: Tissue factor; TM: Thrombomodulin; SO2: Sulfur dioxide; O3: Ozone; NO2: Nitrogen dioxide; CO: Carbon monoxide; ELISA: Enzyme-linked immunosorbent assay; SD: Standard deviation; OR: Odds ratio", "The authors declare that they have no competing interests.", "PP participated in the design and conduct of the study and helped to draft the manuscript. RK participated in the design and conduct of the study and helped to draft and edit the manuscript. AL helped in conducting the study. MM participated in the design of the study and its conduct. SHJ participated in the design of the study and its conduct. RA participated in the design and conduct of the study. MMA participated in the design of the study and helped to revise the manuscript. FM participated in the design of the study and helped to revise the manuscript. AA helped in conducting the study. BS participated in conducting the study. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/115/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Hormone replacement therapy in morphine-induced hypogonadic male chronic pain patients.
21332999
In male patients suffering from chronic pain, opioid administration induces severe hypogonadism, leading to impaired physical and psychological conditions such as fatigue, anaemia and depression. Hormone replacement therapy is rarely considered for these hypogonadic patients, notwithstanding the various pharmacological solutions available.
BACKGROUND
To treat hypogonadism and to evaluate the consequent endocrine, physical and psychological changes in male chronic pain patients treated with morphine (epidural route), we tested the administration of testosterone via a gel formulation for one year. Hormonal (total testosterone, estradiol, free testosterone, DHT, cortisol), pain (VAS and other pain questionnaires), andrological (Ageing Males' Symptoms Scale-AMS) and psychological (POMS, CES-D and SF-36) parameters were evaluated at baseline (T0) and after 3, 6 and 12 months (T3, T6, T12 respectively).
METHODS
The daily administration of testosterone increased total and free testosterone and DHT at T3, and the levels remained high until T12. Pain rating indexes (QUID) progressively improved from T3 to T12 while the other pain parameters (VAS, Area%) remained unchanged. The AMS sexual dimension and SF-36 Mental Index displayed a significant improvement over time.
RESULTS
In conclusion, our results suggest that a constant, long-term supply of testosterone can induce a general improvement of the male chronic pain patient's quality of life, an important clinical aspect of pain management.
CONCLUSIONS
[ "Adult", "Aged", "Chronic Disease", "Hormone Replacement Therapy", "Humans", "Hypogonadism", "Male", "Middle Aged", "Morphine", "Pain", "Prospective Studies", "Quality of Life", "Testosterone" ]
3049183
null
null
Methods
[SUBTITLE] Subjects [SUBSECTION] Male chronic pain patients referred to two Italian pain centres (Forlì, Cagliari) were asked to take part in this study. To obtain the most homogeneous group of patients possible, only male subjects suffering from non-oncological chronic pain and undergoing epidural morphine were considered; morphine treatment had to have been carried out for at least six months; hypogonadism was diagnosed when testosterone values were below 2-3 ng/mL in at least two determinations in the previous 3-4 months. Other inclusion criteria were the presence of clear symptoms indicative of hypogonadism (fatigue, depressed mood, decreased libido) and the absence of urological problems. Experimental procedures were carried out in agreement with the Code of Ethics of the World Medical Association (Helsinki Declaration) and were approved by the Local Ethics Committee. Once included in the study, each subject met with the experimental team once a month for one year during the morphine pump refilling session and underwent clinical, psychological and hormonal evaluations. In particular, the following procedures were performed during the monthly meeting:\n- Blood collection,\n- Complete clinical evaluation by the pain therapist, including the administration of specific pain questionnaires for pain assessment (VAS, QUID, Area%),\n- Psychological evaluation, including the administration of specific questionnaires for anxiety, depression and quality of life (POMS, CES-D, SF-36),\n- Andrological evaluation, including manual exploration of the prostate and transrectal prostate echography when necessary. The patient was asked to complete a specific andrological questionnaire (AMS). The subject was also examined for any adverse effects that might be related to the gel, including local skin irritation. At each visit, the subject was given a one-month supply of testosterone gel, a hydroalcoholic compound containing 50 mg testosterone in 5 g gel in each sachet. So as not to introduce other variables, the dose of testosterone gel was kept constant regardless of the blood testosterone levels shown by the patient. The patient was asked to apply the gel on the skin of the upper arm/shoulder or abdomen each morning at the same time, alternating the site of application each day and not taking a shower until 5 hours after application. He received verbal instructions to record in a diary any change that may have occurred in his daily activities and physical and mental performance. All procedures took about 2 hours per patient per month. [SUBTITLE] Questionnaires [SUBSECTION] All questionnaires were self-administered, but supported by the presence of the expert clinician. [SUBTITLE] Pain assessment [SUBSECTION] A Visual Analogue Scale, VAS (0-100), was used [16] to estimate the current intensity of pain and the peak intensity during the previous week. VAS is a 10 cm horizontal line, anchored at the extremes by "no pain" and "worst pain possible". The quality and intensity of the current pain experience was evaluated by the Italian Pain Questionnaire, QUID [17]. The QUID is a reconstructed Italian version of the McGill Pain Questionnaire (MPQ). It is a semantic interval scale consisting of 42 descriptors divided into four main classes: sensory, affective, evaluative, miscellaneous. The Pain Rating Index rank value (PRIr) for each dimension and for the whole experience (Pain Rating Index rank-Total [PRIr-T], given by the sum of all the rank values) indicates the quality and intensity of pain. The percentage of body surface area in pain (Area%) was measured according to the Margolis method [18]. It consists of a human front-back body map in which the patient marks the area in pain. Thus, the drawing gives the pain distribution and the percentage of body surface area affected by pain.
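As an illustration of how the pain indices described above are assembled, the following sketch computes a PRIr-T (the sum of the four QUID rank values) and a Margolis Area% (the summed surface weight of the marked body regions). It is a minimal example: the region weights and the data layout are hypothetical placeholders, not the published scoring tables.

```python
# Illustrative sketch only: assembling the pain indices described above.
# The QUID subscale rank values and the Margolis body-region weights used in
# the example are hypothetical placeholders, not the published scoring tables.

def prir_total(sensory: float, affective: float, evaluative: float, miscellaneous: float) -> float:
    """PRIr-T: the sum of the rank values of the four QUID dimensions."""
    return sensory + affective + evaluative + miscellaneous

# Margolis method: each region of the front-back body map covers a known
# percentage of the body surface; Area% is the summed weight of marked regions.
REGION_WEIGHTS = {  # hypothetical example weights (% of total body surface)
    "low_back": 6.0,
    "right_leg_posterior": 9.0,
    "left_leg_posterior": 9.0,
    "abdomen": 7.0,
}

def margolis_area_percent(marked_regions: list[str]) -> float:
    return sum(REGION_WEIGHTS[region] for region in marked_regions)

if __name__ == "__main__":
    print("PRIr-T:", prir_total(sensory=18, affective=6, evaluative=3, miscellaneous=5))
    print("Area%:", margolis_area_percent(["low_back", "right_leg_posterior"]))
```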
[SUBTITLE] Andrological and urological evaluation [SUBSECTION] The impact of treatment on the androgen-related quality of life was measured by the Ageing Males' Symptoms scale (AMS) questionnaire [19]. The AMS is composed of three subscales, measuring respectively psychological, somatic and sexual symptoms. Items (n = 17) are scored on a 1 (absent) to 5 (very severe) Likert scale. The sum-score for each subscale is obtained by adding up the ratings of the items. The total score is the sum of the three subscale ratings. The higher the value, the worse the condition of the subject. The severity categories are: mild (27-36), moderate (37-49), severe (≥50). [SUBTITLE] Psychological evaluation [SUBSECTION] To study the psychological characteristics of the patients and their time course, the following questionnaires were administered:\nProfile of Mood State (POMS). The POMS measures the current psychological state of the participant [20] and consists of 58 feelings rated on a 5-point scale. It comprises six subscales: Tension-Anxiety (T-A), Depression-Dejection (D-D), Anger-Hostility (A-H), Vigor-Activity (V-A), Fatigue-Inertia (F-I) and Confusion-Bewilderment (C-B). In each subscale, values higher (T-A, D-D, A-H, F-I and C-B) or lower (V-A) than 55 were considered significantly altered with respect to the normal population [21].\nCentre for Epidemiological Studies Depression Scale (CES-D). The Italian version of the CES-D [22] was used to determine the level of depression symptoms for research purposes [23]. The CES-D is a 20-item self-reported measure of symptoms, scored on 0 to 3 frequency ratings ("less than 1 day" to "most or all (5-7) days") for symptoms within the past week. Previous studies have verified that a score of 16 or above on the CES-D indicates clinically significant depressive symptoms [23].\nItalian version of the SF-36 [24] questionnaire. The SF-36 is a 36-item questionnaire measuring the patient's level of performance in eight health domains: Physical Functioning, Role Physical (role limitations caused by physical problems), Social Functioning, Bodily Pain, Mental Health, Role Emotional (role limitations caused by emotional problems), Vitality and General Perception of Health. Individual items are scored on a 0-100 standardized Likert scale. For each domain, including Bodily Pain, a higher score indicates a better quality of life. Two indices (Physical and Mental) based on pertinent subscale clustering summarize the respective functioning. Due to the critical conditions shown by most of the patients at the beginning of the observations, some of the SF-36 data related to the physical evaluation were lost. Thus only the Mental Index (MI) was taken into consideration.
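The scoring rules quoted above (AMS subscale sums and severity bands, the CES-D cut-off of 16, the POMS threshold of 55) translate directly into code. The sketch below is illustrative only; the split of the 17 AMS items across subscales in the example call and all function names are assumptions.

```python
# Illustrative sketch of the scoring rules quoted in the text; function names,
# data layout and the item split in the example call are assumptions.

def ams_total_and_severity(psychological, somatic, sexual):
    """AMS: 17 items rated 1-5; subscale sums are added into the total score."""
    total = sum(psychological) + sum(somatic) + sum(sexual)
    if total >= 50:
        severity = "severe"
    elif total >= 37:
        severity = "moderate"
    elif total >= 27:
        severity = "mild"
    else:
        severity = "below the mild range"
    return total, severity

def cesd_clinically_significant(items):
    """CES-D: 20 items rated 0-3; a total of 16 or above flags clinically significant symptoms."""
    assert len(items) == 20 and all(0 <= score <= 3 for score in items)
    return sum(items) >= 16

def poms_altered(t_score, subscale):
    """POMS: values above 55 are considered altered, except Vigor-Activity, where below 55 is altered."""
    return t_score < 55 if subscale == "V-A" else t_score > 55

if __name__ == "__main__":
    # Example: 5 psychological, 7 somatic and 5 sexual items (17 in total, assumed split).
    total, severity = ams_total_and_severity([3] * 5, [2] * 7, [4] * 5)
    print(total, severity)                          # 49 moderate
    print(cesd_clinically_significant([1] * 20))    # True (20 >= 16)
    print(poms_altered(60, "D-D"), poms_altered(40, "V-A"))  # True True
```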
[SUBTITLE] Hormones [SUBSECTION] Blood was collected between 8.00 and 10.00 am in one EDTA-added tube (5 ml to obtain plasma) and one empty tube (10 ml to obtain serum). In addition to general laboratory exams performed locally (including albumin levels), serum and plasma samples were taken to the Stress and Pain Neurophysiology Laboratory of the University of Siena, where the following hormones were determined at baseline (T0), 3 (T3), 6 (T6) and 12 (T12) months: total testosterone (TT), free testosterone (fT), dihydrotestosterone (DHT), estradiol (E2), sex hormone-binding globulin (SHBG), cortisol (C). Prostate-specific antigen (PSA) was determined each month. These values were used to calculate bioavailable testosterone (BioT) through the website http://www.issam.ch/freetesto.htm, according to Vermeulen's formula [25].\nTotal testosterone (TT) was measured by RIA using a kit from RADIM (Pomezia, Italy). The cross reactivity of the antiserum coated in the tubes was 5.6% for DHT, 1.6% for androstenedione and lower than 0.1% for androstenediol, SHBG, estrone, DHEAS, estradiol. The lower limit of quantitation of TT measured by this assay was 0.017 ng/mL. The intra- and inter-assay coefficients were 1.5% and 7.8%, respectively, at the normal adult male range: 3.5-8.5 ng/mL in our laboratory.\nFree testosterone (fT) was measured by RIA using a kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 0.35% for 19-nor testosterone, 0.21% for 17 alpha-methyltestosterone, 0.13% for 11-oxo-testosterone and non-detectable reactivity for DHT, DHEA, DHEA-S, progesterone, estradiol, corticosterone and other androgens. The lower limit of quantitation of fT measured by this assay was 0.18 pg/mL. The intra- and inter-assay coefficients were 4.5% and 7.9%, respectively, at the normal adult male range: 14.7-32.7 pg/mL in our laboratory.\nSex hormone-binding globulin (SHBG) was measured by IRMA using a kit from Diagnostic Systems Laboratories (DSL, Webster, Texas, USA). Concerning the specificity, no human serum protein is known to cross react with the antibodies employed in the DSL SHBG IRMA system. The lower limit of quantitation of SHBG measured by this assay was 3 nmol/L. The intra- and inter-assay coefficients were 2.7% and 10.2%, respectively, at the normal adult male range: 28-94 nmol/L in our laboratory.\nDihydrotestosterone (DHT) was measured by RIA using a kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 3.3% for androstandiol glucuronide, 0.6% for testosterone, 0.03% for androstandiol and no reactivity for androstenedione, estradiol, androsterone glucuronide, dehydroepiandrosterone, cortisol, deoxycortisol, 17 alpha-OH progesterone, progesterone. The lower limit of quantitation of DHT measured by this assay was 4 pg/mL. The intra- and inter-assay coefficients were 5.5% and 9.5%, respectively, at the normal adult male range: 250-750 pg/mL in our laboratory.\nEstradiol (E2) was measured by RIA using an ultra-sensitive kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 2.4% for estrone, 0.21% for 17 alpha-estradiol and 16 keto-estradiol, 0.64% for estriol. The lower limit of quantitation of E2 measured by this assay was 2.2 pg/mL. The intra- and inter-assay coefficients were 6.5% and 9.3%, respectively, at the normal adult male range: 10.0-25.1 pg/mL in our laboratory.\nCortisol (C) was measured by RIA using a kit from RADIM (Pomezia, Italy). The present method has not shown cross reaction with the following steroids: estradiol, testosterone, prednisone, cortisone, corticosterone, deoxycorticosterone and 11-deoxycortisol. The lower limit of quantitation of serum C measured by this assay was 0.9 microg/L. The intra- and inter-assay coefficients were 4.9% and 7.9%, respectively, at the normal adult male range: 50-250 microg/L in our laboratory.\nProstate-specific antigen (PSA) was measured by IRMA using a kit from Diagnostic Systems Laboratories (DSL, Webster, Texas, USA). Ferritin, hCG, prolactin, atropine, flutamide, diethylstilbesterol, acetylsalicylic acid, caffeine, ibuprofen, indomethacin do not interfere with the measurement of PSA in the DSL-9700 ACTIVE® PSA coated-Tube IRMA assay. The lower limit of quantitation of serum PSA measured by this assay was 0.013 ng/mL. The intra- and inter-assay coefficients were 4.6% and 9.8%, respectively, at the normal adult male range: 0.42-2.20 ng/mL in our laboratory.
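For readers who want to reproduce the bioavailable-testosterone step, the following sketch re-implements the type of calculation performed by the issam.ch calculator according to Vermeulen's formula. The association constants, the default albumin value and the unit conversions are the commonly cited ones and are stated here as assumptions rather than taken from this paper.

```python
# Illustrative re-implementation of the Vermeulen-type calculation behind the
# issam.ch calculator. The association constants, default albumin level and
# unit conversions are the commonly cited values, stated here as assumptions.
import math

KT = 1.0e9   # L/mol, assumed association constant of SHBG for testosterone
KA = 3.6e4   # L/mol, assumed association constant of albumin for testosterone

def free_and_bioavailable_t(tt_ng_ml, shbg_nmol_l, albumin_g_l=43.0):
    """Return (free T in pg/mL, bioavailable T in ng/mL) from total T, SHBG and albumin."""
    tt = tt_ng_ml * 3.467e-9      # ng/mL -> mol/L (MW of testosterone ~288.4 g/mol)
    shbg = shbg_nmol_l * 1e-9     # nmol/L -> mol/L
    alb = albumin_g_l / 69000.0   # g/L -> mol/L (MW of albumin ~69 kDa)
    n = 1.0 + KA * alb            # scales free T to free + albumin-bound T
    # Mass balance TT = n*FT + KT*FT*SHBG/(1 + KT*FT) gives a quadratic in FT:
    #   n*KT*FT^2 + (n + KT*(SHBG - TT))*FT - TT = 0
    b = n + KT * (shbg - tt)
    ft = (-b + math.sqrt(b * b + 4.0 * n * KT * tt)) / (2.0 * n * KT)  # mol/L
    bio_t = n * ft                # free + albumin-bound testosterone, mol/L
    return ft * 288.4 * 1e9, bio_t * 288.4 * 1e6   # mol/L -> pg/mL and ng/mL

if __name__ == "__main__":
    ft_pg_ml, bio_ng_ml = free_and_bioavailable_t(tt_ng_ml=6.6, shbg_nmol_l=40.0)
    # Roughly 2% of total T free and ~45% bioavailable, in line with typical calculator output.
    print(f"free T ~ {ft_pg_ml:.0f} pg/mL, bioavailable T ~ {bio_ng_ml:.2f} ng/mL")
```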
[SUBTITLE] Statistical analysis [SUBSECTION] Data are expressed as mean ± SEM. Changes of the data across time were analysed by Friedman analysis of variance (ANOVA) and Kendall's coefficient of concordance. To better define any possible drug effect, we compared the values at each time of observation (T3, T6 and T12) with the basal level (T0) using the non-parametric Wilcoxon test. All analyses were performed with Statistica software. A level of P ≤ 0.05 was considered statistically significant.
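The statistical plan described above (Friedman test across the four time points, Kendall's concordance as an effect size, and Wilcoxon comparisons of each follow-up with baseline) can be illustrated with SciPy, although the study itself used Statistica. The data below are simulated and the variable names are assumptions.

```python
# Illustrative sketch with SciPy (the study itself used Statistica): Friedman
# test across T0/T3/T6/T12, Kendall's W derived from the Friedman statistic,
# and Wilcoxon tests of each follow-up against baseline. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_patients = 14
t0 = rng.normal(1.8, 0.5, n_patients)               # hypothetical TT (ng/mL) at baseline
t3 = t0 + rng.normal(2.5, 0.8, n_patients)          # after 3 months of testosterone gel
t6 = t0 + rng.normal(2.3, 0.8, n_patients)
t12 = t0 + rng.normal(2.4, 0.8, n_patients)

chi2, p_friedman = stats.friedmanchisquare(t0, t3, t6, t12)
k = 4                                               # number of repeated measurements
kendalls_w = chi2 / (n_patients * (k - 1))          # concordance (effect size) from the Friedman chi-square
print(f"Friedman chi2 = {chi2:.2f}, p = {p_friedman:.4f}, Kendall's W = {kendalls_w:.2f}")

for label, follow_up in [("T3", t3), ("T6", t6), ("T12", t12)]:
    w_stat, p = stats.wilcoxon(follow_up, t0)       # each time point vs basal level, as in the text
    print(f"{label} vs T0: Wilcoxon statistic = {w_stat:.1f}, p = {p:.4f}")
```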
null
null
null
null
[ "Background", "Subjects", "Questionnaires", "Pain assessment", "Andrological and urological evaluation", "Psychological evaluation", "Hormones", "Statistical analysis", "Results", "Clinical observations", "Morphine dose", "Pain assessment", "VAS", "Margolis", "QUID", "Andrological evaluation", "AMS scale", "Psychological outcomes", "POMS, CES-D and SF-36", "Hormonal parameters", "Total testosterone (TT), free testosterone (fT), bioavailable testosterone (BioT) and SHBG", "Serum levels and ratios of the testosterone metabolites", "Discussion", "Andrology", "Pain", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Awareness of the need to adequately treat pain, particularly chronic pain, is slowly increasing among patients and pain therapists. Opioids are among the most prescribed analgesic drugs but present several side effects such as nausea, itching, constipation and hypogonadism [1,2]. For hypogonadism, there is clear evidence in the literature for both humans [3-7] and experimental animals [8,9]. Symptoms and signs related to gonadal hypofunction and consequent testosterone depletion include peripheral effects (muscle hypotrophy, bone loss, anaemia, etc.) and severe and disabling central effects (decreased attention, disappearance of libido, depressive state, etc.). Moreover, there is much evidence for a close relationship between testosterone and neuroprotective processes. For instance, it was reported that older men with low levels of free circulating testosterone appear to be at higher risk of developing Alzheimer Disease than men with higher levels of this hormone [10], and gonadal and adrenal androgen levels were found to be lower in men and women suffering from rheumatoid arthritis than in healthy controls [11]. In experimental animals, testosterone administration in inflammatory painful conditions, such as formalin- or adjuvant-induced pain, was found to decrease pain-induced responses [12-14].\nOpioid-induced hypogonadism in males (Opioid-Induced Androgen Deficiency, OPIAD) is usually ignored by pain physicians and rarely considered for treatment [1] despite its high frequency (almost 100%) and persistence. Daniell [1] described the beneficial effects of testosterone replacement in opioid-treated patients for six months.\nThe aim of the present study was to describe the time course/interactions between opioids and testosterone replacement therapy in a clinical sample of male chronic pain patients diagnosed with OPIAD and receiving morphine via the epidural route. This kind of patient was chosen to reduce variables due to the high numbers of opiates available, their dosage and combinations. Moreover, all these patients had been suffering from chronic pain for many years and their pain condition was considered severe and stable. To obtain a broad picture of the effects induced by this treatment, we considered different aspects using a multidisciplinary approach. A team of experts, namely an andrologist, a psychologist, the project co-ordinator and a local pain therapist, met with each patient monthly for one year to check the clinical and laboratory outcomes. Testosterone was replaced via a gel formulation able to restore and maintain testosterone levels [15] by virtue of the fact that its application on the skin prevents first liver clearance and allows steroids to be directly absorbed and stored in the stratum corneum. From this reservoir, testosterone or its metabolites (17-beta estradiol and DHT) are slowly released into the circulation over several hours. Testosterone and its related hormones (estradiol, DHT) were measured in the blood, together with cortisol as an index of adrenal activity. Pain intensity and its features were studied with the VAS, the Italian version of the McGill Questionnaire (QUID) and the Margolis method to determine the percentage of the body in pain. The andrological condition was studied with the dedicated questionnaire Ageing Males' Symptoms Scale (AMS), and different aspects of the psychological characteristics were evaluated through questionnaires able to study anxiety, depression and quality of life (POMS, CES-D and SF-36). 
The study was carried out for 12 months. The results suggest beneficial effects on the patients' quality of life.", "Male chronic pain patients referred to two Italian pain centres (Forlì, Cagliari) were asked to take part in this study. To obtain the most homogeneous group of patients possible, only male subjects suffering from non-oncological chronic pain and undergoing epidural morphine were considered; morphine treatment had to have been carried out for at least six months; hypogonadism was diagnosed when testosterone values were below 2-3 ng/mL in at least two determinations in the previous 3-4 months. Other inclusion criteria were the presence of clear symptoms indicative of hypogonadism (fatigue, depressed mood, decreased libido) and the absence of urological problems. Experimental procedures were carried out in agreement with the Code of Ethics of the World Medical Association (Helsinki Declaration) and were approved by the Local Ethics Committee.\nOnce included in the study, each subject met with the experimental team once a month for one year during the morphine pump refilling session and underwent clinical, psychological and hormonal evaluations. In particular, the following procedures were performed during the monthly meeting:\n- Blood collection,\n- Complete clinical evaluation by the pain therapist, including the administration of specific pain questionnaires for pain assessment (VAS, QUID, Area%),\n- Psychological evaluation, including the administration of specific questionnaires for anxiety, depression and quality of life (POMS, CES-D, SF-36),\n- Andrological evaluation, including manual exploration of the prostate and transrectal prostate echography when necessary. The patient was asked to complete a specific andrological questionnaire (AMS). The subject was also examined for any adverse effects that might be related to the gel, including local skin irritation. At each visit, the subject was given a one-month supply of testosterone gel, a hydroalcoholic compound containing 50 mg testosterone in 5 g gel in each sachet. So as not to introduce other variables, the dose of testosterone gel was kept constant regardless of the blood testosterone levels shown by the patient. The patient was asked to apply the gel on the skin of the upper arm/shoulder or abdomen each morning at the same time, alternating the site of application each day and not taking a shower until 5 hours after application. He received verbal instructions to record in a diary any change that may have occurred in his daily activities and physical and mental performance.\nAll procedures took about 2 hours per patient per month.\n[SUBTITLE] Questionnaires [SUBSECTION] All questionnaires were self-administered, but supported by the presence of the expert clinician.", "All questionnaires were self-administered, but supported by the presence of the expert clinician.", "A Visual Analogue Scale, VAS (0-100), was used [16] to estimate the current intensity of pain and the peak intensity during the previous week. VAS is a 10 cm horizontal line, anchored at the extremes by "no pain" and "worst pain possible".\nThe quality and intensity of the current pain experience was evaluated by the Italian Pain Questionnaire, QUID [17]. The QUID is a reconstructed Italian version of the McGill Pain Questionnaire (MPQ).
It is a semantic interval scale consisting of 42 descriptors divided into four main classes: sensory, affective, evaluative, miscellaneous. The Pain Rating Index rank value (PRIr) for each dimension and for the whole experience (Pain Rating Index rank-Total [PRIr-T], given by the sum of all the rank values) indicates the quality and intensity of pain.\nThe percentage of body surface area in pain (Area%) was measured according to the Margolis method [18]. It consists of a human front-back body map in which the patient marks the area in pain. Thus, the drawing gives the pain distribution and the percentage of body surface area affected by pain.", "The impact of treatment on the androgen-related quality of life was measured by the Ageing Males' Symptoms scale (AMS) questionnaire [19]. The AMS is composed of three subscales, measuring respectively psychological, somatic and sexual symptoms. Items (n = 17) are scored on a 1 (absent) to 5 (very severe) Likert scale. The sum-score for each subscale is obtained by adding up the ratings of the items. The total score is the sum of the three subscale ratings. The higher the value, the worse the condition of the subject. The severity categories are: mild (27-36), moderate (37-49), severe (≥50).", "To study the psychological characteristics of the patients and their time course, the following questionnaires were administered:\nProfile of Mood State (POMS). The POMS measures the current psychological state of the participant [20] and consists of 58 feelings rated on a 5-point scale. It comprises six subscales: Tension-Anxiety (T-A), Depression-Dejection (D-D), Anger-Hostility (A-H), Vigor-Activity (V-A), Fatigue-Inertia (F-I) and Confusion-Bewilderment (C-B). In each subscale, values higher (T-A, D-D, A-H, F-I and C-B) or lower (V-A) than 55 were considered significantly altered with respect to the normal population [21].\nCentre for Epidemiological Studies Depression Scale (CES-D). The Italian version of the CES-D [22] was used to determine the level of depression symptoms for research purposes [23]. The CES-D is a 20-item self-reported measure of symptoms, scored on 0 to 3 frequency ratings ("less than 1 day" to "most or all (5-7) days") for symptoms within the past week. Previous studies have verified that a score of 16 or above on the CES-D indicates clinically significant depressive symptoms [23].\nItalian version of the SF-36 [24] questionnaire. The SF-36 is a 36-item questionnaire measuring the patient's level of performance in eight health domains: Physical Functioning, Role Physical (role limitations caused by physical problems), Social Functioning, Bodily Pain, Mental Health, Role Emotional (role limitations caused by emotional problems), Vitality and General Perception of Health. Individual items are scored on a 0-100 standardized Likert scale. For each domain, including Bodily Pain, a higher score indicates a better quality of life. Two indices (Physical and Mental) based on pertinent subscale clustering summarize the respective functioning. Due to the critical conditions shown by most of the patients at the beginning of the observations, some of the SF-36 data related to the physical evaluation were lost. Thus only the Mental Index (MI) was taken into consideration.
In addition to general laboratory exams performed locally (including albumin levels), serum and plasma samples were taken to the Stress and Pain Neurophysiology Laboratory of the University of Siena, where the following hormones were determined at baseline (T0), 3 (T3), 6 (T6) and 12 (T 12) months: total testosterone (TT), free testosterone (fT), dihydrotestosterone (DHT), estradiol (E2), sex hormone-binding globulin (SHBG), cortisol (C). Prostate-specific antigen (PSA) was determined each month. These values were used to calculate bioavailable testosterone (BioT) through the website http://www.issam.ch/freetesto.htm, according to Vermeulen's formula [25].\nTotal testosterone (TT) was measured by RIA using a kit from RADIM (Pomezia, Italy). The cross reactivity of the antiserum coated in the tubes was 5.6% for DHT, 1.6% for androstenedione and lower than 0.1% for androstenediol, SHBG, estrone, DHEAS, estradiol. The lower limit of quantitation of TT measured by this assay was 0.017 ng/mL. The intra- and inter-assay coefficients were 1.5% and 7.8%, respectively, at the normal adult male range: 3.5-8.5 ng/mL in our laboratory.\nFree testosterone (fT) was measured by RIA using a kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 0.35% for 19-nor testosterone, 0.21% for 17 alpha-methyltestosterone, 0.13% for 11-oxo-testosterone and non-detectable reactivity for DHT, DHEA, DHEA-S, progesterone, estradiol, corticosterone and other androgens. The lower limit of quantitation of fT measured by this assay was 0.18 pg/mL. The intra- and inter-assay coefficients were 4.5% and 7.9%, respectively, at the normal adult male range: 14.7-32.7 pg/mL in our laboratory.\nSex hormone-binding globulin (SHBG) was measured by IRMA using a kit from Diagnostic Systems Laboratories (DSL, Webster, Texas, USA). Concerning the specificity, no human serum protein is known to cross react with the antibodies employed in the DSL SHBG IRMA system. The lower limit of quantitation of SHBG measured by this assay was 3 nmol/L. The intra- and inter-assay coefficients were 2.7% and 10.2%, respectively, at the normal adult male range: 28-94 nmol/L in our laboratory.\nDihydrotestosterone (DHT) was measured by RIA using a kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 3.3% for androstandiol glucuronide, 0.6% for testosterone, 0.03% for androstandiol and no reactivity for androstenedione, estradiol, androsterone glucuronide, dehydroepiandrosterone, cortisol, deoxycortisol, 17 alpha-OH progesterone, progesterone. The lower limit of quantitation of DHT measured by this assay was 4 pg/mL. The intra- and inter-assay coefficients were 5.5% and 9.5%, respectively, at the normal adult male range: 250-750 pg/mL in our laboratory.\nEstradiol (E2) was measured by RIA using an ultra-sensitive kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 2.4% for estrone, 0.21% for 17 alpha-estradiol and 16 keto-estradiol, 0.64% for estriol. The lower limit of quantitation of E2 measured by this assay was 2.2 pg/mL. The intra- and inter-assay coefficients were 6.5% and 9.3%, respectively, at the normal adult male range: 10.0-25.1 pg/mL in our laboratory.\nCortisol (C) was measured by RIA using a kit from RADIM (Pomezia, Italy). 
Total testosterone (TT) was measured by RIA using a kit from RADIM (Pomezia, Italy). The cross reactivity of the antiserum coated in the tubes was 5.6% for DHT, 1.6% for androstenedione and lower than 0.1% for androstenediol, SHBG, estrone, DHEAS and estradiol. The lower limit of quantitation of TT measured by this assay was 0.017 ng/mL. The intra- and inter-assay coefficients were 1.5% and 7.8%, respectively, at the normal adult male range (3.5-8.5 ng/mL in our laboratory).

Free testosterone (fT) was measured by RIA using a kit from Diagnostic Systems Laboratories (DSL, Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 0.35% for 19-nortestosterone, 0.21% for 17 alpha-methyltestosterone, 0.13% for 11-oxo-testosterone, with non-detectable reactivity for DHT, DHEA, DHEA-S, progesterone, estradiol, corticosterone and other androgens. The lower limit of quantitation of fT measured by this assay was 0.18 pg/mL. The intra- and inter-assay coefficients were 4.5% and 7.9%, respectively, at the normal adult male range (14.7-32.7 pg/mL in our laboratory).

Sex hormone-binding globulin (SHBG) was measured by IRMA using a kit from Diagnostic Systems Laboratories (DSL, Webster, Texas, USA). Regarding specificity, no human serum protein is known to cross react with the antibodies employed in the DSL SHBG IRMA system. The lower limit of quantitation of SHBG measured by this assay was 3 nmol/L. The intra- and inter-assay coefficients were 2.7% and 10.2%, respectively, at the normal adult male range (28-94 nmol/L in our laboratory).

Dihydrotestosterone (DHT) was measured by RIA using a kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 3.3% for androstanediol glucuronide, 0.6% for testosterone, 0.03% for androstanediol, with no reactivity for androstenedione, estradiol, androsterone glucuronide, dehydroepiandrosterone, cortisol, deoxycortisol, 17 alpha-OH progesterone or progesterone. The lower limit of quantitation of DHT measured by this assay was 4 pg/mL. The intra- and inter-assay coefficients were 5.5% and 9.5%, respectively, at the normal adult male range (250-750 pg/mL in our laboratory).

Estradiol (E2) was measured by RIA using an ultra-sensitive kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 2.4% for estrone, 0.21% for 17 alpha-estradiol and 16-keto-estradiol, and 0.64% for estriol. The lower limit of quantitation of E2 measured by this assay was 2.2 pg/mL. The intra- and inter-assay coefficients were 6.5% and 9.3%, respectively, at the normal adult male range (10.0-25.1 pg/mL in our laboratory).

Cortisol (C) was measured by RIA using a kit from RADIM (Pomezia, Italy). The method showed no cross reaction with estradiol, testosterone, prednisone, cortisone, corticosterone, deoxycorticosterone or 11-deoxycortisol. The lower limit of quantitation of serum C measured by this assay was 0.9 microg/L. The intra- and inter-assay coefficients were 4.9% and 7.9%, respectively, at the normal adult male range (50-250 microg/L in our laboratory).

Prostate-specific antigen (PSA) was measured by IRMA using a kit from Diagnostic Systems Laboratories (DSL, Webster, Texas, USA). Ferritin, hCG, prolactin, atropine, flutamide, diethylstilbestrol, acetylsalicylic acid, caffeine, ibuprofen and indomethacin do not interfere with the measurement of PSA in the DSL-9700 ACTIVE® PSA coated-tube IRMA assay. The lower limit of quantitation of serum PSA measured by this assay was 0.013 ng/mL. The intra- and inter-assay coefficients were 4.6% and 9.8%, respectively, at the normal adult male range (0.42-2.20 ng/mL in our laboratory).
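The intra- and inter-assay coefficients quoted above are coefficients of variation (CV = SD/mean × 100%) computed from replicate measurements of control samples. A purely illustrative sketch follows; the replicate values are hypothetical, not data from this study.

```python
import statistics

def assay_cv(replicates):
    """Coefficient of variation (%) of replicate measurements of the same control sample."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100.0

# Hypothetical within-run replicates of a mid-range testosterone control (ng/mL)
within_run = [5.02, 4.95, 5.10, 4.98, 5.05]
# Hypothetical means of the same control measured in separate runs (ng/mL)
between_run = [5.01, 4.82, 5.20, 4.90, 5.15]

print(f"intra-assay CV ~{assay_cv(within_run):.1f}%")
print(f"inter-assay CV ~{assay_cv(between_run):.1f}%")
```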
Statistical analysis

Data are expressed as mean ± SEM. Changes across time were analysed by Friedman analysis of variance (ANOVA) and Kendall's coefficient of concordance. To better define any possible drug effect, we compared the values at each time of observation (T3, T6 and T12) with the baseline level (T0) using the non-parametric Wilcoxon signed-rank test. All analyses were performed with Statistica software. A level of P ≤ 0.05 was considered statistically significant.
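As a rough illustration of this analysis pipeline (the paper used Statistica; the sketch below uses Python with SciPy on hypothetical questionnaire scores, and derives Kendall's W by rescaling the Friedman statistic):

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical repeated-measures scores: one row per patient, columns = T0, T3, T6, T12.
scores = np.array([
    [52, 48, 45, 41],
    [44, 43, 40, 38],
    [47, 41, 42, 39],
    [55, 50, 49, 47],
    [40, 39, 37, 36],
    [49, 44, 43, 40],
    [51, 47, 46, 42],
    [43, 42, 41, 37],
    [46, 45, 40, 39],
])

# Friedman test (non-parametric repeated-measures ANOVA) across the four time points
chi2, p = friedmanchisquare(*scores.T)
n, k = scores.shape
kendall_w = chi2 / (n * (k - 1))   # Kendall's coefficient of concordance
print(f"Friedman chi2 = {chi2:.2f}, p = {p:.4f}, Kendall's W = {kendall_w:.2f}")

# Wilcoxon signed-rank tests of each follow-up against baseline (T0)
for col, label in zip(range(1, k), ["T3", "T6", "T12"]):
    stat, p = wilcoxon(scores[:, 0], scores[:, col])
    print(f"{label} vs T0: W = {stat}, p = {p:.4f}")
```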
Results

In the six-month period devoted to patient selection, 25 outpatients referred to the pain clinics involved in the study and meeting the inclusion criteria were asked to participate; 17 of them met the enrolment criteria and signed a written informed consent form. The study design was a prospective, pre-post analysis. All patients were affected by intractable non-cancer chronic pain treated for many years with different pharmacological regimens until the present, when they received morphine via the epidural route. In some instances, as in tetraplegic patients, morphine was associated with an adjuvant (baclofen, see Table 1). Neuropathic pain was the most common pain type, while pain from chronic pancreatitis was present in three subjects.

Table 1. Personal and clinical data of the selected male patients. Total testosterone values are reported to document the hypogonadic condition; reference range of total testosterone: 3.5-8.5 ng/mL.

Of the 17 men included in the study (Table 1), three abandoned it within the first month and two after the third month for reasons independent of the study. In two subjects the immediate appearance of headache, and in one subject the occurrence of urinary block, obliged them to leave the study. The remaining 9 subjects (age 59.0 ± 4.4 years, range 38-74) completed the 12 months of observation (Table 1). In these patients, it was not necessary to add any other treatment (e.g. physiotherapy, counselling) during the one-year study.

Clinical observations

In all patients, treatment appeared to be immediately effective, as confirmed verbally during the interviews by the patients and their relatives. Indeed, starting from the first month, patients reported general changes such as faster beard growth and increased appetite. Manual prostate exploration never revealed clinically relevant features and the PSA levels always remained within the physiological range (<3 ng/mL, mean 0.8 ng/mL).

Morphine dose

The morphine dose was changed according to patient requirements (Figure 1). The dose remained stable in three patients, was decreased in two and was increased in four. The latter group comprised patients whose improved general condition allowed the morphine dose to be raised in an attempt to complete the pain relief; this was made possible by the smaller number of side effects reported.

Figure 1. Morphine dose assessment. Time course of the morphine dose at baseline (T0) and after 3, 6 and 12 months of testosterone replacement therapy (T3, T6 and T12, respectively) in patients participating in the entire study. The lines represent the time course of the individual patients numbered as in Table 1.
Pain assessment

VAS

Both VAS values (current and last-week peak) were high and were not modified by treatment; ANOVA did not show any significant variation (χ2 = 3.98, p = 0.26 and χ2 = 3.86, p = 0.27, respectively) (Table 2).

Table 2. Pain assessment. VAS, Area% and QUID values recorded before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.01 vs T0; # p < 0.02 vs T0; § p < 0.04 vs T0.

Margolis

The percentage of body surface area in pain remained stable across time (ANOVA: χ2 = 1.74, p = 0.63) (Table 2).

QUID

In all patients, the QUID values at T0 were quite high in both total and partial ratings. Over time, there was a significant decrease in the PRIr-Total (χ2 = 13.87, p < 0.003) and in the sensory (χ2 = 8.29, p < 0.04), affective (χ2 = 8.06, p < 0.04) and evaluative PRIr (χ2 = 11.08, p < 0.01). The PRIr-sensory and PRIr-evaluative were already lower at T3 (p < 0.018 for both), the miscellaneous at T6 (p < 0.046) and the PRIr-affective at T12 (p < 0.046) (Table 2).

Andrological evaluation

AMS scale

The mean baseline AMS Total score was 44.1 ± 4.1, a value indicating 'moderate' andrological impairment. Detailed analysis of the subscale ratings revealed a significant decrease, i.e. an improvement, in the 'sexual' dimension (ANOVA: χ2 = 8.77, p < 0.03) (Table 3).

Table 3. Andrological assessment. AMS scale scores recorded before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.03 vs T0.
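For context on the 'moderate' label, the sketch below classifies an AMS total score using severity bands commonly cited for the Aging Males' Symptoms scale (no/little ≤26, mild 27-36, moderate 37-49, severe ≥50). These cut-offs are an assumption drawn from the general AMS literature, not values stated in this paper.

```python
def ams_severity(total_score):
    """Map an AMS total score (range 17-85) to a severity band (assumed, commonly cited cut-offs)."""
    if total_score <= 26:
        return "no/little impairment"
    if total_score <= 36:
        return "mild"
    if total_score <= 49:
        return "moderate"
    return "severe"

print(ams_severity(44.1))  # baseline mean in this cohort -> "moderate"
```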
Psychological outcomes

POMS, CES-D and SF-36

All the baseline values obtained with these questionnaires were outside the 'normal' ranges. In particular, POMS was higher than 55 in all subscales except Vigor-Activity (lower than normal), while CES-D was always higher than 16 and SF-36 lower than 50. None of the POMS subscale scores or CES-D ratings showed significant changes, while the SF-36 Mental Index displayed a significant improvement over time (ANOVA: χ2 = 11.35, p = 0.009); in particular, the score increased progressively to become significantly higher at T12 than at T0 (p < 0.04) (Table 4).

Table 4. Other questionnaires. Scores of the POMS, SF-36 and CES-D questionnaires administered before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.04 vs T0.

Hormonal parameters

Testosterone administration via gel is known to increase total testosterone serum levels immediately, reaching a steady state in 1-2 weeks [26].
Thus, the first result is that, although all our patients exhibited visible changes in body features (increased beard and hair growth) and in habits closely dependent on androgens (increased food intake), the first significant increase in blood testosterone levels only occurred after 2 months of treatment, as shown in the insert of Figure 2, which depicts the monthly time course recorded in the first 4 patients monitored to evaluate the efficacy of treatment.

Total testosterone (TT), free testosterone (fT), bioavailable testosterone (BioT) and SHBG

Total (TT), free (fT) and bioavailable testosterone (BioT) showed very low levels at T0 (TT 1.16 ± 0.28 ng/mL; fT 4.33 ± 0.89 pg/mL; BioT 0.34 ± 0.1 ng/dL) with respect to the normal range, while SHBG was close to the upper limit of the normal range (77.77 ± 12.12 nmol/L) (Figure 2). Daily administration of testosterone significantly increased the TT serum levels at T3, and they remained at approximately the same level until T12; the results were similar for fT, which became significant at T12 (p < 0.028). BioT was significantly increased at T3 (p < 0.028) and remained significantly higher at T6 (p < 0.05) and at T12 (p < 0.018).

Figure 2. Serum total testosterone. Serum total testosterone (TT) values at baseline (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. The insert reports the TT levels at baseline (T0) and 1, 2 and 3 months from the beginning of testosterone replacement therapy (T1, T2 and T3, respectively) in the first four patients, used to evaluate the time course of tissue androgen permeation. Data are mean ± SEM. * p < 0.05 vs T0.

To evaluate the ratio between the unbound fraction (fT) and TT, we calculated a percentage (%fT). The %fT tended to decrease at T3 but did not reach significance. SHBG did not change significantly (Figure 3, Table 5).

Figure 3. Hormonal parameters evaluation. Serum dihydrotestosterone (DHT, A), estradiol (E2, B), DHT/T ratio (C), E2/T ratio (D), bioavailable testosterone (BioT, E) and free testosterone (fT, F) values at baseline (T0) and after 3, 6 and 12 months of testosterone replacement (T3, T6 and T12, respectively). Data are mean ± SEM. * p < 0.05 vs T0.

Table 5. Hormonal parameters. TT: total testosterone; fT: free testosterone; BioT: bioavailable testosterone; SHBG: sex hormone-binding globulin; %fT: percentage of free testosterone; DHT: dihydrotestosterone; E2: estradiol; C: cortisol. Hormone levels determined before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.05 vs T0; ** p < 0.01 vs T0.
Serum levels and ratios of the testosterone metabolites

The two metabolites of testosterone, DHT and E2, changed differently: DHT increased progressively, reaching significance at T12 (p < 0.02), whereas E2 did not vary. The DHT/TT ratio did not change, while the E2/TT ratio was already significantly decreased at T3 and remained so until T12 (p < 0.015, p < 0.021 and p < 0.011 at T3, T6 and T12, respectively).

Cortisol showed a slow, non-significant reduction during the one-year therapy (Table 5).
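Because the reported hormone values mix units (TT in ng/mL; fT, DHT and E2 in pg/mL), the derived quantities %fT, DHT/TT and E2/TT are easiest to keep straight with explicit conversions. The minimal sketch below uses this cohort's baseline TT and fT means purely as example inputs; the DHT and E2 values are placeholders, and the unit conventions behind Table 5 and Figure 3 are not spelled out in the text, so treat these choices as assumptions.

```python
def derived_indices(tt_ng_ml, ft_pg_ml, dht_pg_ml, e2_pg_ml):
    """Return %fT plus the DHT/TT and E2/TT ratios, with all hormones first converted to pg/mL."""
    tt_pg_ml = tt_ng_ml * 1000.0          # 1 ng/mL = 1000 pg/mL
    pct_ft = ft_pg_ml / tt_pg_ml * 100.0  # unbound fraction as a percentage of total T
    return {
        "%fT": pct_ft,
        "DHT/TT": dht_pg_ml / tt_pg_ml,
        "E2/TT": e2_pg_ml / tt_pg_ml,
    }

# Example with baseline means from this cohort; DHT and E2 inputs are hypothetical placeholders.
print(derived_indices(tt_ng_ml=1.16, ft_pg_ml=4.33, dht_pg_ml=400.0, e2_pg_ml=15.0))
```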
Thus, the first result is that, although all our patients exhibited visible changes in body features (beard and hair increase) and in some habits closely dependent on androgens (increased food intake), the first significant increase in blood levels only occurred after 2 months of treatment, as shown in the insert in Figure 1, which depicts the monthly time course recorded in the first 4 patients monitored to evaluate the efficacy of treatment.

Total testosterone (TT), free testosterone (fT), bioavailable testosterone (BioT) and SHBG

Total (TT), free (fT) and bioavailable testosterone (BioT) showed very low levels at T0 (TT 1.16 ± 0.28 ng/mL; fT 4.33 ± 0.89 pg/mL; BioT 0.34 ± 0.1 ng/dL) with respect to the normal range, while SHBG was close to the upper limit of the normal range (77.77 ± 12.12 nmol/L) (Figure 2). Daily administration of testosterone significantly increased the TT serum levels at T3, remaining at approximately the same level until T12; there were similar results for fT, which became significant at T12 (p < 0.028). BioT was significantly increased at T3 (p < 0.028) and remained significantly higher at T6 (p < 0.05) and at T12 (p < 0.018).

Serum total testosterone. Serum total testosterone (TT) values at basal time (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. The insert reports the TT levels at basal time (T0) and 1, 2 and 3 months from the beginning of testosterone replacement therapy (T1, T2 and T3, respectively) of the first four patients used to evaluate the time course of tissue androgen permeation. Data are mean ± SEM. * p < 0.05 vs T0.

To evaluate the ratio between the unbound fraction (fT) and TT, we calculated a percentage (%fT). The %fT tended to decrease at T3, but did not reach significance. SHBG did not change significantly (Figure 3, Table 5).

Hormonal parameters evaluation. Serum dihydrotestosterone (DHT, A), estradiol (E2, B), DHT/T ratio (C), E/T ratio (D), bioavailable testosterone (BioT, E) and free testosterone (fT, F) values at basal level (T0) and after 3, 6 and 12 months of testosterone replacement (T3, T6 and T12, respectively). Data are mean ± SEM. * p < 0.05 vs T0.

Hormonal parameters
TT: total testosterone; fT: free testosterone; BioT: bioavailable testosterone; SHBG: sex hormone-binding globulin; %fT: percentage of free testosterone; DHT: dihydrotestosterone; E2: estradiol; C: cortisol. Hormone levels determined before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.05 vs T0; ** p < 0.01 vs T0.

Serum levels and ratios of the testosterone metabolites

The two metabolites of testosterone, DHT and E2, changed differently. DHT increased progressively, reaching significance at T12 (p < 0.02), whereas E2 did not vary. The DHT/TT ratio did not change, while the E2/TT ratio was already significantly decreased at T3 and remained so until T12 (p < 0.015, p < 0.021 and p < 0.011 at T3, T6 and T12, respectively).

Cortisol showed a slow, non-significant reduction during the one-year therapy (Table 5).

Discussion

Evidence from this study suggests that one year of testosterone replacement therapy in male patients suffering from a severe form of chronic pain and diagnosed with morphine-induced hypogonadism (OPIAD) is able to positively change the hormonal and behavioural 'indicators' chosen to measure different aspects of their condition.

Although pain research is continuously coming up with new products to treat chronic pain, opioids remain the reference therapy.
Unfortunately, this approach involves several side effects, whose increasing morbidity may lead to discontinuation of therapy [27]. With continued opioid use, many side effects diminish or resolve; conversely, others such as immune alteration and hypogonadism persist and are even more apparent after long-term therapy [3,4,28]. The subjects considered in this study, all long-term opioid users, were also clearly suffering from hypogonadism, as revealed by the low plasma testosterone levels and/or clinical symptoms indicative of this condition.

Testosterone replacement therapy is a common approach in ageing males with partial androgen deficit or in young men with hypogonadism of traumatic or surgical origin; in these subjects, testosterone administration via gel is known to increase total testosterone serum levels immediately, reaching a steady state in 1-2 weeks [26]. Thus, it must be underlined that, although all our patients exhibited visible changes in body features (beard and hair increase) and in some habits closely dependent on androgens (increased food intake), the first significant increase in blood levels only occurred after 2 months of treatment. Further research is needed to better study the possible interaction among pain, opioids and androgen metabolism.

Several hypotheses have been advanced to explain OPIAD [29]. One suggests that the hypogonadism is due to opioid-induced inhibition of gonadotropin release [30]; another suggests that the inhibitory action is also exerted in the gonads. Indeed, opioid receptors have been described in the pituitary as well as in the gonads, and opioids have been found to up-regulate their own receptors. Another mechanism involves testosterone metabolism. It is known that morphine increases 5-alpha reductase activity [9], and we have shown an excitatory effect of morphine on aromatase activity in vitro [31] and on both enzymes in ex-vivo tissues [32]; these enzymes, present in the liver but also widespread in body tissues, are involved in the transformation of testosterone into its metabolites dihydrotestosterone (DHT) and estradiol.

Finally, we cannot exclude either a direct effect of pain on the HPG axis, since observations have indicated a depressant effect of experimental pain on testosterone blood and brain levels in rats through the stress system, or the quite common use of other drugs able, like opioids, to inhibit the HPA axis [32,33].

Andrology

While the psychological and somatic aspects of the painful condition are usually taken into consideration by physicians, the sexual aspect of the patient's life is generally disregarded. In the present study, as in a previous one by Daniell's group [1], the andrological questionnaires administered at the beginning of the observations confirmed the presence of altered conditions in all patients, probably related not only to pain but also to the altered hormone levels. Indeed, the AMS revealed 'moderate' andrological disturbances, indicating the presence of clinically relevant alterations. As expected, testosterone replacement progressively improved the sexual aspect, as confirmed by the AMS's sexual dimension.
Pain

We provide preliminary data suggesting that the pain experience, as assessed by QUID (a validated Italian pain questionnaire), is affected by testosterone replacement. In fact, QUID dimensions improved by the third month, reaching the best values at the sixth month. This was a very substantial result, present in all subjects. QUID was more sensitive than the other pain measurement tools administered (VAS, %Area), probably because the questionnaire is composed of a series of descriptors requiring patients to carefully analyse and define the ongoing pain experience. Furthermore, since this instrument studies the different dimensions of the pain experience (i.e. sensory, evaluative, affective), we were able to verify that all these components benefited from the testosterone therapy, albeit following different time courses.

The morphine dose matched the change in pain shown by QUID. Although three subjects asked for escalating doses to complete the pain relief (also by virtue of the few side effects they experienced), the others maintained or reduced the morphine amount, thus implicitly expressing pain amelioration. The results of the SF-36 Mental Index agreed with these findings, as they indicated an improvement in the respective domains, i.e. daily activities, emotional competence and social relations. Of course, the improvement in these aspects may have positively influenced the pain perception and vice versa, since a virtuous circle was established.

The extension and distribution of the body surface area in pain remained unchanged, as did the VAS scores. From the beginning of the epidural morphine administration, all the patients complained that the last week before the refill was the worst of the month, even though the dose of morphine was repeatedly verified to be constant in the last days of the cycle. Our interpretation of the high VAS scores involves two mechanisms, one involving patient attitudes and the other the instrument itself. Very long-lasting and unresponsive pain is not easily consciously modified, since a feeling of helplessness is unavoidable in these subjects, impeding their realization that any change has occurred, especially when it is positive. VAS does not counteract this attitude. Due to its oversimplified nature, it does not require the patient to analyse and weigh his pain, and it facilitates recollection bias. In the end, the patient under-evaluates the measurement and over-emphasizes the pain he is rating. The findings on QUID, which requires the patient to focus on his experience, match perfectly with this explanation, making this instrument more reliable in the setting described herein.
Interestingly, these results are very similar to those reported by Daniell and colleagues (2006) [1], in which the general improvement in living conditions provided by patch testosterone replacement was not followed by a clear decrease in pain measures.

The affective state of chronic pain is characterized by deep depression and high anxiety; sometimes such an affective condition depends not only on the pain severity but also on the underlying disease, especially when it is highly vexing and disabling. Our patients were affected by intense long-lasting pain and a serious illness. Thus, they were anxious and depressed, as shown by the POMS and CES-D scores. These ratings did not vary throughout the observational period, indicating that testosterone therapy did not influence the patients' affective state. It is likely that the sense of uncontrollability of their condition was so deeply rooted in their minds that it was not easily attenuated. On the other hand, the improvement of pain was incomplete and too recent with respect to its history for the attitudes toward it to be changed. Furthermore, the patients knew that their disease was incurable. Interestingly, the persistently high levels of anxiety and depression indicate that the decrease of pain shown by QUID cannot be attributed to the psychological component of the pain experience, as may occur in purposely treated chronic pain patients.
Conclusions

The present results, although obtained in a small number of subjects and with a constant treatment dose, underline the possibility of successful testosterone replacement therapy in chronic pain patients treated with morphine. Hypogonadism is a usual consequence of opioid treatment, but it is rarely taken into consideration. Our results strongly suggest that this therapy can positively modulate the dimensions of pain. This effect allows us to propose the use of testosterone in clinics as an adjuvant, in combination with opioid therapy.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

AMA, IC, VB and GP conceived and supervised the project and edited the manuscript. IC, GdP, MC, AS, GS, SM, VP, LR and GP participated in the experimental process and data analysis. All authors contributed to data interpretation. All authors read and approved the final manuscript.
[ "Background", "Methods", "Subjects", "Questionnaires", "Pain assessment", "Andrological and urological evaluation", "Psychological evaluation", "Hormones", "Statistical analysis", "Results", "Clinical observations", "Morphine dose", "Pain assessment", "VAS", "Margolis", "QUID", "Andrological evaluation", "AMS scale", "Psychological outcomes", "POMS, CES-D and SF-36", "Hormonal parameters", "Total testosterone (TT), free testosterone (fT), bioavailable testosterone (BioT) and SHBG", "Serum levels and ratios of the testosterone metabolites", "Discussion", "Andrology", "Pain", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Awareness of the need to adequately treat pain, particularly chronic pain, is slowly increasing among patients and pain therapists. Opioids are among the most prescribed analgesic drugs but present several side effects such as nausea, itching, constipation and hypogonadism [1,2]. For hypogonadism, there is clear evidence in the literature for both humans [3-7] and experimental animals [8,9]. Symptoms and signs related to gonadal hypofunction and consequent testosterone depletion include peripheral effects (muscle hypotrophy, bone loss, anaemia, etc.) and severe and disabling central effects (decreased attention, disappearance of libido, depressive state, etc.). Moreover, there is much evidence for a close relationship between testosterone and neuroprotective processes. For instance, it was reported that older men with low levels of free circulating testosterone appear to be at higher risk of developing Alzheimer Disease than men with higher levels of this hormone [10], and gonadal and adrenal androgen levels were found to be lower in men and women suffering from rheumatoid arthritis than in healthy controls [11]. In experimental animals, testosterone administration in inflammatory painful conditions, such as formalin- or adjuvant-induced pain, was found to decrease pain-induced responses [12-14].\nOpioid-induced hypogonadism in males (Opioid-Induced Androgen Deficiency, OPIAD) is usually ignored by pain physicians and rarely considered for treatment [1] despite its high frequency (almost 100%) and persistence. Daniell [1] described the beneficial effects of testosterone replacement in opioid-treated patients for six months.\nThe aim of the present study was to describe the time course/interactions between opioids and testosterone replacement therapy in a clinical sample of male chronic pain patients diagnosed with OPIAD and receiving morphine via the epidural route. This kind of patient was chosen to reduce variables due to the high numbers of opiates available, their dosage and combinations. Moreover, all these patients had been suffering from chronic pain for many years and their pain condition was considered severe and stable. To obtain a broad picture of the effects induced by this treatment, we considered different aspects using a multidisciplinary approach. A team of experts, namely an andrologist, a psychologist, the project co-ordinator and a local pain therapist, met with each patient monthly for one year to check the clinical and laboratory outcomes. Testosterone was replaced via a gel formulation able to restore and maintain testosterone levels [15] by virtue of the fact that its application on the skin prevents first liver clearance and allows steroids to be directly absorbed and stored in the stratum corneum. From this reservoir, testosterone or its metabolites (17-beta estradiol and DHT) are slowly released into the circulation over several hours. Testosterone and its related hormones (estradiol, DHT) were measured in the blood, together with cortisol as an index of adrenal activity. Pain intensity and its features were studied with the VAS, the Italian version of the McGill Questionnaire (QUID) and the Margolis method to determine the percentage of the body in pain. The andrological condition was studied with the dedicated questionnaire Ageing Males' Symptoms Scale (AMS), and different aspects of the psychological characteristics were evaluated through questionnaires able to study anxiety, depression and quality of life (POMS, CES-D and SF-36). 
Methods

Subjects

Male chronic pain patients referred to two Italian pain centres (Forlì, Cagliari) were asked to take part in this study. To obtain the most homogeneous group of patients possible, only male subjects suffering from non-oncological chronic pain and undergoing epidural morphine were considered; morphine treatment had to have been carried out for at least six months; hypogonadism was diagnosed for testosterone values less than 2-3 ng/mL in at least two determinations in the previous 3-4 months. Other inclusion criteria were the presence of clear symptoms indicative of hypogonadism (fatigue, depressed mood, decreased libido) and the absence of urological problems. Experimental procedures were carried out in agreement with the Code of Ethics of the World Medical Association (Helsinki Declaration) and were approved by the Local Ethics Committee.

Once included in the study, each subject met with the experimental team once a month for one year during the morphine pump refilling session and underwent clinical, psychological and hormonal evaluations. In particular, the following procedures were performed during the monthly meeting:

- Blood collection,
- Complete clinical evaluation by the pain therapist, including the administration of specific pain questionnaires for pain assessment (VAS, QUID, Area%),
- Psychological evaluation, including the administration of specific questionnaires for anxiety, depression and quality of life (POMS, CES-D, SF-36),
- Andrological evaluation, including manual exploration of the prostate and transrectal prostate echography when necessary. The patient was asked to complete a specific andrological questionnaire (AMS). The subject was also examined for any adverse effects that might be related to the gel, including local skin irritation.

At each visit, the subject was given a one-month supply of testosterone gel, a hydroalcoholic compound containing 50 mg testosterone in 5 g gel in each sachet. So as not to introduce other variables, the dose of testosterone gel was maintained constant independently of the blood testosterone levels shown by the patient. The patient was asked to apply the gel on the skin of the upper arm/shoulder or abdomen each morning at the same time, alternating the site of application each day and not taking a shower until 5 hours after application. He received verbal instructions to record in a diary any change that may have occurred in his daily activities and physical and mental performances.

All procedures took about 2 hours per patient per month.

Questionnaires

All questionnaires were self-administered, but supported by the presence of the expert clinician.

Pain assessment

A Visual Analogue Scale, VAS (0-100), was used [16] to estimate the current intensity of pain and the peak intensity during the previous week. The VAS is a 10 cm horizontal line, anchored at the extremes by "no pain" and "worst pain possible".

The quality and intensity of the current pain experience was evaluated by the Italian Pain Questionnaire, QUID [17]. The QUID is a reconstructed Italian version of the McGill Pain Questionnaire (MPQ). It is a semantic interval scale consisting of 42 descriptors divided into four main classes: sensory, affective, evaluative and miscellaneous. The Pain Rating Index rank value (PRIr) for each dimension and for the whole experience (Pain Rating Index rank-Total [PRIr-T], given by the sum of all the rank values) indicates the quality and intensity of pain.
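As a concrete illustration of the PRIr scoring rule just described, the sketch below sums the rank values of the descriptors a patient selects into the four subscale PRIr scores and the PRIr-Total. The descriptor names and rank values are invented placeholders, not the actual QUID items; only the aggregation logic is intended to match the description above.

```python
# Hypothetical sketch of PRIr scoring for a QUID-like questionnaire.
# 'chosen' maps each class to the descriptors the patient selected and their rank values;
# descriptor names and ranks are illustrative, not the real QUID item weights.

chosen = {
    "sensory": {"throbbing": 3, "burning": 4},
    "affective": {"exhausting": 2},
    "evaluative": {"unbearable": 5},
    "miscellaneous": {"radiating": 2},
}

def prir_scores(chosen):
    """Return the PRIr for each dimension plus the PRIr-Total (sum of all rank values)."""
    scores = {dimension: sum(ranks.values()) for dimension, ranks in chosen.items()}
    scores["PRIr-Total"] = sum(scores.values())
    return scores

print(prir_scores(chosen))
# {'sensory': 7, 'affective': 2, 'evaluative': 5, 'miscellaneous': 2, 'PRIr-Total': 16}
```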
The percentage of body surface area in pain (Area%) was measured according to the Margolis method [18]. It consists of a human front-back body map in which the patient marks the area in pain. Thus, the drawing gives the pain distribution and the percentage of body surface area affected by pain.

Andrological and urological evaluation

The impact of treatment on the androgen-related quality of life was measured by the Ageing Males' Symptoms scale (AMS) questionnaire [19]. The AMS is composed of three subscales, measuring respectively psychological, somatic and sexual symptoms. Items (n = 17) are scored on a 1 (absent) to 5 (very severe) Likert scale. The sum-score for each subscale is obtained by adding up the ratings of the items. The total score is the sum of the three subscale ratings. The higher the value, the worse the condition of the subject. The severity categories are: mild (27-36), moderate (37-49), severe (≥50).
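To make the AMS scoring rule concrete, here is a small sketch that adds up the 1-5 item ratings into the three subscale scores and the total, and maps the total onto the severity categories listed above. The item counts per subscale and the example ratings are assumptions for illustration; the official AMS scoring key should be used in practice.

```python
# Illustrative AMS scoring sketch; the subscale item groupings and example ratings
# are placeholders, not the official AMS key.

def ams_scores(psychological, somatic, sexual):
    """Each argument is a list of 1-5 Likert ratings; returns subscale scores, total and severity."""
    subscales = {
        "psychological": sum(psychological),
        "somatic": sum(somatic),
        "sexual": sum(sexual),
    }
    total = sum(subscales.values())
    if total >= 50:
        severity = "severe"
    elif total >= 37:
        severity = "moderate"
    elif total >= 27:
        severity = "mild"
    else:
        severity = "no/little impairment"
    return subscales, total, severity

# Example with a total close to the baseline mean reported in the Results (~44, 'moderate')
print(ams_scores(psychological=[3, 2, 2, 3, 2],
                 somatic=[3, 2, 2, 2, 3, 2, 2],
                 sexual=[4, 3, 3, 3, 3]))
```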
Psychological evaluation

To study the psychological characteristics of the patients and their time course, the following questionnaires were administered:

Profile of Mood States (POMS). The POMS measures the current psychological state of the participant [20] and consists of 58 feelings rated on a 5-point scale. It comprises six subscales: Tension-Anxiety (T-A), Depression-Dejection (D-D), Anger-Hostility (A-H), Vigor-Activity (V-A), Fatigue-Inertia (F-I) and Confusion-Bewilderment (C-B). In each subscale, values higher (T-A, D-D, A-H, F-I and C-B) or lower (V-A) than 55 were considered significantly altered with respect to the normal population [21].

Centre for Epidemiological Studies Depression Scale (CES-D). The Italian version of the CES-D [22] was used to determine the level of depressive symptoms for research purposes [23]. The CES-D is a 20-item self-reported measure of symptoms, scored on 0 to 3 frequency ratings ("less than 1 day" to "most or all (5-7) days") for symptoms within the past week. Previous studies have verified that a score of 16 or above on the CES-D indicates clinically significant depressive symptoms [23].

Italian version of the SF-36 questionnaire [24]. The SF-36 is a 36-item questionnaire measuring the patient's level of performance in eight health domains: Physical Functioning, Role Physical (role limitations caused by physical problems), Social Functioning, Bodily Pain, Mental Health, Role Emotional (role limitations caused by emotional problems), Vitality and General Perception of Health. Individual items are scored on a 0-100 standardized Likert scale. For each domain, including Bodily Pain, a higher score indicates a better quality of life. Two indices (Physical and Mental), based on pertinent subscale clustering, summarize the respective functioning. Due to the critical conditions shown by most of the patients at the beginning of the observations, some of the SF-36 data related to the physical evaluation were lost. Thus, only the Mental Index (MI) was taken into consideration.

Hormones

Blood was collected between 8.00 and 10.00 am in one EDTA-added tube (5 ml to obtain plasma) and one empty tube (10 ml to obtain serum). In addition to general laboratory exams performed locally (including albumin levels), serum and plasma samples were taken to the Stress and Pain Neurophysiology Laboratory of the University of Siena, where the following hormones were determined at baseline (T0) and after 3 (T3), 6 (T6) and 12 (T12) months: total testosterone (TT), free testosterone (fT), dihydrotestosterone (DHT), estradiol (E2), sex hormone-binding globulin (SHBG) and cortisol (C). Prostate-specific antigen (PSA) was determined each month. These values were used to calculate bioavailable testosterone (BioT) through the website http://www.issam.ch/freetesto.htm, according to Vermeulen's formula [25].
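The Vermeulen calculation referenced above derives free testosterone from total testosterone, SHBG and albumin by solving the mass-action binding equations, with bioavailable testosterone taken as the free plus albumin-bound fraction. The sketch below uses the association constants commonly quoted for this method; the ISSAM online calculator may use slightly different constants or defaults, so this is an illustrative approximation rather than the exact tool used in the study.

```python
# Illustrative Vermeulen-style calculation of free and bioavailable testosterone.
# Association constants are the commonly quoted values (assumed here):
#   SHBG-testosterone ~ 1e9 L/mol, albumin-testosterone ~ 3.6e4 L/mol.
import math

K_SHBG = 1.0e9     # L/mol
K_ALB = 3.6e4      # L/mol
T_MW = 288.4       # g/mol, testosterone
ALB_MW = 69000.0   # g/mol, albumin (approximate)

def free_and_bioavailable_t(total_t_ng_ml, shbg_nmol_l, albumin_g_l=43.0):
    """Return (free T in pg/mL, bioavailable T in ng/mL)."""
    tt = total_t_ng_ml * 1e-6 / T_MW          # ng/mL -> mol/L
    shbg = shbg_nmol_l * 1e-9                 # nmol/L -> mol/L
    n = 1.0 + K_ALB * (albumin_g_l / ALB_MW)  # albumin binding factor
    b = n + K_SHBG * (shbg - tt)
    free_t = (-b + math.sqrt(b * b + 4.0 * n * K_SHBG * tt)) / (2.0 * n * K_SHBG)  # mol/L
    bio_t = n * free_t                        # free + albumin-bound, mol/L
    return free_t * T_MW * 1e9, bio_t * T_MW * 1e6

ft_pg_ml, biot_ng_ml = free_and_bioavailable_t(total_t_ng_ml=4.5, shbg_nmol_l=40.0)
print(round(ft_pg_ml, 1), round(biot_ng_ml, 2))  # roughly 82-83 pg/mL free, ~1.9 ng/mL bioavailable
```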
Total testosterone (TT) was measured by RIA using a kit from RADIM (Pomezia, Italy). The cross reactivity of the antiserum coated in the tubes was 5.6% for DHT, 1.6% for androstenedione and lower than 0.1% for androstenediol, SHBG, estrone, DHEAS and estradiol. The lower limit of quantitation of TT measured by this assay was 0.017 ng/mL. The intra- and inter-assay coefficients were 1.5% and 7.8%, respectively, at the normal adult male range: 3.5-8.5 ng/mL in our laboratory.

Free testosterone (fT) was measured by RIA using a kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 0.35% for 19-nor testosterone, 0.21% for 17 alpha-methyltestosterone, 0.13% for 11-oxo-testosterone and non-detectable for DHT, DHEA, DHEA-S, progesterone, estradiol, corticosterone and other androgens. The lower limit of quantitation of fT measured by this assay was 0.18 pg/mL. The intra- and inter-assay coefficients were 4.5% and 7.9%, respectively, at the normal adult male range: 14.7-32.7 pg/mL in our laboratory.

Sex hormone-binding globulin (SHBG) was measured by IRMA using a kit from Diagnostic Systems Laboratories (DSL, Webster, Texas, USA). Concerning specificity, no human serum protein is known to cross-react with the antibodies employed in the DSL SHBG IRMA system. The lower limit of quantitation of SHBG measured by this assay was 3 nmol/L. The intra- and inter-assay coefficients were 2.7% and 10.2%, respectively, at the normal adult male range: 28-94 nmol/L in our laboratory.

Dihydrotestosterone (DHT) was measured by RIA using a kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 3.3% for androstandiol glucuronide, 0.6% for testosterone, 0.03% for androstandiol and no reactivity for androstenedione, estradiol, androsterone glucuronide, dehydroepiandrosterone, cortisol, deoxycortisol, 17 alpha-OH progesterone and progesterone. The lower limit of quantitation of DHT measured by this assay was 4 pg/mL. The intra- and inter-assay coefficients were 5.5% and 9.5%, respectively, at the normal adult male range: 250-750 pg/mL in our laboratory.

Estradiol (E2) was measured by RIA using an ultra-sensitive kit from Diagnostic Systems Laboratories (Webster, Texas, USA). The cross reactivity of the antiserum coated in the tubes was 2.4% for estrone, 0.21% for 17 alpha-estradiol and 16-keto-estradiol, and 0.64% for estriol. The lower limit of quantitation of E2 measured by this assay was 2.2 pg/mL. The intra- and inter-assay coefficients were 6.5% and 9.3%, respectively, at the normal adult male range: 10.0-25.1 pg/mL in our laboratory.

Cortisol (C) was measured by RIA using a kit from RADIM (Pomezia, Italy). The method has not shown cross reaction with the following steroids: estradiol, testosterone, prednisone, cortisone, corticosterone, deoxycorticosterone and 11-deoxycortisol. The lower limit of quantitation of serum C measured by this assay was 0.9 microg/L. The intra- and inter-assay coefficients were 4.9% and 7.9%, respectively, at the normal adult male range: 50-250 microg/L in our laboratory.

Prostate-specific antigen (PSA) was measured by IRMA using a kit from Diagnostic Systems Laboratories (DSL, Webster, Texas, USA). Ferritin, hCG, prolactin, atropine, flutamide, diethylstilbesterol, acetylsalicylic acid, caffeine, ibuprofen and indomethacin do not interfere with the measurement of PSA in the DSL-9700 ACTIVE® PSA coated-Tube IRMA assay. The lower limit of quantitation of serum PSA measured by this assay was 0.013 ng/mL. The intra- and inter-assay coefficients were 4.6% and 9.8%, respectively, at the normal adult male range: 0.42-2.20 ng/mL in our laboratory.
Statistical analysis

Data are expressed as mean ± SEM. Changes in the data across time were analysed by Friedman analysis of variance (ANOVA) and Kendall concordance. To better define any possible drug effect, we compared the values at each time of observation (T3, T6 and T12) with the basal level (T0) using the non-parametric Wilcoxon test. All analyses were performed with Statistica software. A level of P ≤ 0.05 was considered statistically significant.
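A minimal sketch of this analysis pipeline in Python/SciPy, assuming the questionnaire scores are arranged as one row per patient and one column per time point: a Friedman test across T0-T12 followed by Wilcoxon signed-rank comparisons of each follow-up against baseline. The data and variable names are invented for illustration, and applying no correction for multiple comparisons is an assumption; the original analyses were run in Statistica, not Python.

```python
# Minimal sketch of the repeated-measures analysis: Friedman test over T0-T12,
# then Wilcoxon signed-rank tests of each follow-up vs baseline (illustrative data).
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# rows = patients (n = 9), columns = T0, T3, T6, T12 (invented example scores)
scores = np.array([
    [52, 44, 40, 38],
    [47, 45, 41, 40],
    [55, 50, 44, 42],
    [49, 43, 42, 41],
    [51, 46, 40, 39],
    [46, 44, 43, 40],
    [53, 47, 42, 41],
    [50, 45, 44, 42],
    [48, 42, 41, 40],
])

chi2, p = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2], scores[:, 3])
print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.4f}")

for label, col in zip(["T3", "T6", "T12"], [1, 2, 3]):
    stat, p_w = wilcoxon(scores[:, 0], scores[:, col])
    print(f"Wilcoxon T0 vs {label}: W = {stat:.1f}, p = {p_w:.4f}")
```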
The patient was asked to apply the gel to the skin of the upper arm/shoulder or abdomen each morning at the same time, alternating the site of application each day and not showering until 5 hours after application. He received verbal instructions to record in a diary any change that might occur in his daily activities or in his physical and mental performance.

All procedures took about 2 hours per patient per month.

[SUBTITLE] Questionnaires [SUBSECTION] All questionnaires were self-administered, but supported by the presence of the expert clinician.

A Visual Analogue Scale, VAS (0-100), was used [16] to estimate the current intensity of pain and the peak intensity during the previous week. The VAS is a 10 cm horizontal line anchored at the extremes by "no pain" and "worst pain possible".

The quality and intensity of the current pain experience were evaluated with the Italian Pain Questionnaire, QUID [17]. The QUID is a reconstructed Italian version of the McGill Pain Questionnaire (MPQ). It is a semantic interval scale consisting of 42 descriptors divided into four main classes: sensory, affective, evaluative and miscellaneous. The Pain Rating Index rank value (PRIr) for each dimension, and for the whole experience (Pain Rating Index rank-Total [PRIr-T], given by the sum of all the rank values), indicates the quality and intensity of pain.

The percentage of body surface area in pain (Area%) was measured according to the Margolis method [18]. It consists of a front-back human body map on which the patient marks the areas in pain; the drawing thus gives the pain distribution and the percentage of body surface area affected by pain.

The impact of treatment on androgen-related quality of life was measured with the Ageing Males' Symptoms (AMS) scale [19]. The AMS is composed of three subscales measuring, respectively, psychological, somatic and sexual symptoms. Items (n = 17) are scored on a 1 (absent) to 5 (very severe) Likert scale. The sum-score for each subscale is obtained by adding up the ratings of its items, and the total score is the sum of the three subscale scores. The higher the value, the worse the condition of the subject. The severity categories are: mild (27-36), moderate (37-49) and severe (≥50).
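As a reading aid, the short sketch below turns the AMS scoring rules just described into code. The assignment of individual items to subscales is not given here, so the three item lists are assumed inputs; only the summing and the severity cut-offs follow the description above.

# Minimal sketch of AMS scoring, assuming the 17 item ratings (1-5) are already
# grouped into the three subscales; the grouping itself is not specified above.
def ams_score(psychological, somatic, sexual):
    subscales = {
        "psychological": sum(psychological),
        "somatic": sum(somatic),
        "sexual": sum(sexual),
    }
    total = sum(subscales.values())      # total score = sum of the three subscales
    if total >= 50:
        severity = "severe"
    elif total >= 37:
        severity = "moderate"
    elif total >= 27:
        severity = "mild"
    else:
        severity = "little or no impairment"
    return subscales, total, severity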
To study the psychological characteristics of the patients and their time course, the following questionnaires were administered:

Profile of Mood States (POMS). The POMS measures the current psychological state of the participant [20] and consists of 58 feelings rated on a 5-point scale. It comprises six subscales: Tension-Anxiety (T-A), Depression-Dejection (D-D), Anger-Hostility (A-H), Vigor-Activity (V-A), Fatigue-Inertia (F-I) and Confusion-Bewilderment (C-B). In each subscale, values higher (T-A, D-D, A-H, F-I, C-B) or lower (V-A) than 55 were considered significantly altered with respect to the normal population [21].

Centre for Epidemiological Studies Depression Scale (CES-D). The Italian version of the CES-D [22] was used to determine the level of depressive symptoms for research purposes [23]. The CES-D is a 20-item self-report measure of symptoms, scored on 0 to 3 frequency ratings ("less than 1 day" to "most or all (5-7) days") for symptoms within the past week. Previous studies have verified that a score of 16 or above on the CES-D indicates clinically significant depressive symptoms [23].

Italian version of the SF-36 questionnaire [24]. The SF-36 is a 36-item questionnaire measuring the patient's level of performance in eight health domains: Physical Functioning, Role Physical (role limitations caused by physical problems), Social Functioning, Bodily Pain, Mental Health, Role Emotional (role limitations caused by emotional problems), Vitality and General Perception of Health. Individual items are scored on a 0-100 standardized Likert scale. For each domain, including Bodily Pain, a higher score indicates a better quality of life. Two indices (Physical and Mental), based on pertinent subscale clustering, summarize the respective functioning. Because of the critical conditions of most of the patients at the beginning of the observations, some of the SF-36 data related to the physical evaluation were lost; thus only the Mental Index (MI) was taken into consideration.

[SUBTITLE] Hormones [SUBSECTION] Blood was collected between 8.00 and 10.00 am in one EDTA-added tube (5 mL, to obtain plasma) and one empty tube (10 mL, to obtain serum). In addition to the general laboratory exams performed locally (including albumin levels), serum and plasma samples were taken to the Stress and Pain Neurophysiology Laboratory of the University of Siena, where the following hormones were determined at baseline (T0) and after 3 (T3), 6 (T6) and 12 (T12) months: total testosterone (TT), free testosterone (fT), dihydrotestosterone (DHT), estradiol (E2), sex hormone-binding globulin (SHBG) and cortisol (C). Prostate-specific antigen (PSA) was determined each month. Bioavailable testosterone (BioT) was calculated from TT, SHBG and albumin through the website http://www.issam.ch/freetesto.htm, according to Vermeulen's formula [25]; the individual assays are described above.
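For readers who want to reproduce the bioavailable testosterone calculation offline rather than through the website, the following is a minimal sketch of a Vermeulen-type calculation. The binding constants, the default albumin value and the unit conversions are assumptions commonly used with this method and are not taken from the present paper.

import math

def vermeulen_testosterone(tt_ng_ml, shbg_nmol_l, albumin_g_l=43.0):
    """Sketch of a Vermeulen-type free/bioavailable testosterone calculation.

    tt_ng_ml    : total testosterone in ng/mL
    shbg_nmol_l : SHBG in nmol/L
    albumin_g_l : serum albumin in g/L (assumed default 43 g/L)
    Returns (free T in pg/mL, bioavailable T in ng/mL).
    """
    K_ALB, K_SHBG = 3.6e4, 1.0e9          # assumed association constants (L/mol)
    tt = tt_ng_ml * 1e-6 / 288.4          # ng/mL -> mol/L (MW testosterone ~288.4)
    shbg = shbg_nmol_l * 1e-9             # nmol/L -> mol/L
    alb = albumin_g_l / 69000.0           # g/L -> mol/L (MW albumin ~69 kDa)

    n = 1.0 + K_ALB * alb                 # free plus albumin-bound partition factor
    # Mass balance gives a quadratic in free testosterone (FT):
    #   n*K_SHBG*FT^2 + (n + K_SHBG*(SHBG - TT))*FT - TT = 0
    a = n * K_SHBG
    b = n + K_SHBG * (shbg - tt)
    ft = (-b + math.sqrt(b * b + 4.0 * a * tt)) / (2.0 * a)   # free T, mol/L
    bio_t = n * ft                        # free + albumin-bound (bioavailable) T

    return ft * 288.4 * 1e9, bio_t * 288.4 * 1e6   # pg/mL, ng/mL

# Example with baseline-like values reported later in the paper:
# vermeulen_testosterone(1.16, 77.8) gives roughly (12 pg/mL, 0.28 ng/mL)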
[SUBTITLE] Statistical analysis [SUBSECTION] Data are expressed as mean ± SEM. Changes in the data across time were analysed by the Friedman analysis of variance (ANOVA) with Kendall's coefficient of concordance. To better define any possible drug effect, the values at each time of observation (T3, T6 and T12) were also compared with the basal level (T0) using the non-parametric Wilcoxon test. All analyses were performed with Statistica software. A level of P ≤ 0.05 was considered statistically significant.
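As an illustration only, the same analysis pipeline can be written in a few lines of Python with scipy (the paper used Statistica); the data array below is a random placeholder standing in for any of the repeated measures, and Kendall's W is derived from the Friedman statistic.

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Rows = the patients who completed the study, columns = T0, T3, T6, T12.
scores = np.random.default_rng(0).normal(size=(9, 4))   # illustrative data only

# Overall change across the four time points (Friedman test).
chi2, p_overall = friedmanchisquare(*(scores[:, j] for j in range(4)))
n, k = scores.shape
kendall_w = chi2 / (n * (k - 1))          # Kendall's coefficient of concordance
print(f"Friedman chi2 = {chi2:.2f}, p = {p_overall:.3f}, Kendall's W = {kendall_w:.2f}")

# Each follow-up compared with baseline (Wilcoxon signed-rank test).
for j, label in zip(range(1, 4), ("T3", "T6", "T12")):
    stat, p = wilcoxon(scores[:, 0], scores[:, j])
    print(f"{label} vs T0: Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")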
In the six-month period devoted to patient selection, 25 outpatients referred to the pain clinics involved in the study and apparently meeting the inclusion criteria were asked to participate; 17 of them fully met the enrolment criteria and signed a written informed consent form. The study design was a prospective pre-post analysis. All patients were affected by intractable non-cancer chronic pain that had been treated for many years with different pharmacological regimens before the current treatment with epidural morphine. In some instances, as in the tetraplegic patients, morphine was associated with an adjuvant (baclofen, see Table 1). Neuropathic pain was the most common pain type, while pain from chronic pancreatitis was present in three subjects.

Personal and clinical data of the selected male patients. Total testosterone values are reported to document the hypogonadal condition. Reference range of total testosterone: 3.5-8.5 ng/mL.

Of the 17 men included in the study (Table 1), three abandoned it within the first month and two after the third month for reasons independent of the study; in two further subjects the immediate appearance of headache, and in one the occurrence of urinary block, obliged them to leave the study. The remaining 9 subjects (age 59.0 ± 4.4 years, range 38-74) completed the 12 months of observation (Table 1). In these patients it was not necessary to add any other treatment (e.g. physiotherapy or counselling) during the one-year study.
[SUBTITLE] Clinical observations [SUBSECTION] In all patients, treatment appeared to be immediately effective, as verbally confirmed during the interviews by the patients and their relatives. Indeed, starting from the first month, patients reported general changes such as faster beard growth and increased appetite. Manual prostate exploration never revealed clinically relevant findings, and PSA levels always remained within the physiological range (<3 ng/mL, mean 0.8 ng/mL).

[SUBTITLE] Morphine dose [SUBSECTION] The morphine dose was changed according to patient requirements (Figure 1). The dose remained stable in three patients, was decreased in two and was increased in four. The latter group included patients whose improved general condition allowed the morphine dose to be increased in an attempt to achieve complete pain relief; this was made possible by the smaller number of side effects reported.

Morphine dose assessment. Time course of the morphine dose at baseline (T0) and after 3, 6 and 12 months of testosterone replacement therapy (T3, T6 and T12, respectively) in the patients who completed the study. The lines represent the time courses of the individual patients, numbered as in Table 1.

[SUBTITLE] Pain assessment [SUBSECTION] [SUBTITLE] VAS [SUBSECTION] Both VAS values (current pain and peak pain during the last week) were high and were not modified by treatment; the ANOVA did not show any significant variation (χ2 = 3.98, p = 0.26 and χ2 = 3.86, p = 0.27, respectively) (Table 2).

Pain assessment
VAS, Area% and QUID values recorded before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.01 vs T0; # p < 0.02 vs T0; § p < 0.04 vs T0.

[SUBTITLE] Margolis [SUBSECTION] The percentage of body surface area in pain remained stable across time (ANOVA: χ2 = 1.74, p = 0.63) (Table 2).

[SUBTITLE] QUID [SUBSECTION] In all patients, the QUID values at T0 were quite high, both in the total and in the partial ratings. Over time there was a significant decrease in the PRIr-Total (χ2 = 13.87, p < 0.003) and in the sensory (χ2 = 8.29, p < 0.04), affective (χ2 = 8.06, p < 0.04) and evaluative (χ2 = 11.08, p < 0.01) PRIr. The PRIr-sensory and PRIr-evaluative were already lower at T3 (p < 0.018 for both), the miscellaneous at T6 (p < 0.046) and the PRIr-affective at T12 (p < 0.046) (Table 2).
[SUBTITLE] Andrological evaluation [SUBSECTION] [SUBTITLE] AMS scale [SUBSECTION] The mean baseline AMS total score was 44.1 ± 4.1, a value indicating "moderate" andrological impairment. Detailed analysis of the subscale ratings revealed a significant decrease, i.e. an improvement, of the 'sexual' dimension (ANOVA: χ2 = 8.77, p < 0.03) (Table 3).

Andrological assessment
AMS scale scores recorded before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.03 vs T0.

[SUBTITLE] Psychological outcomes [SUBSECTION] [SUBTITLE] POMS, CES-D and SF-36 [SUBSECTION] All the basal values obtained with these questionnaires were outside the 'normal' ranges: POMS was higher than 55 in all subscales except Vigor-Activity (which was lower than normal), CES-D was always higher than 16 and SF-36 was lower than 50. None of the POMS subscale scores or CES-D ratings showed significant changes, while the SF-36 Mental Index displayed a significant improvement over time (ANOVA: χ2 = 11.35, p = 0.009); in particular, the score increased progressively, becoming significantly higher at T12 than at T0 (p < 0.04) (Table 4).

Other questionnaires
Scores of the POMS, SF-36 and CES-D questionnaires administered before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.04 vs T0.
[SUBTITLE] Hormonal parameters [SUBSECTION] Testosterone administration via gel is known to increase total testosterone serum levels immediately, reaching a steady state in 1-2 weeks [26]. The first notable result is therefore that, although all our patients exhibited visible changes in body features (increased beard and hair growth) and in androgen-dependent habits (increased food intake), the first significant increase in blood testosterone levels occurred only after 2 months of treatment, as shown in the insert of Figure 2, which depicts the monthly time course recorded in the first four patients monitored to evaluate the efficacy of treatment.
[SUBTITLE] Total testosterone (TT), free testosterone (fT), bioavailable testosterone (BioT) and SHBG [SUBSECTION] Total (TT), free (fT) and bioavailable (BioT) testosterone levels at T0 were very low with respect to the normal ranges (TT 1.16 ± 0.28 ng/mL; fT 4.33 ± 0.89 pg/mL; BioT 0.34 ± 0.1 ng/dL), while SHBG was close to the upper limit of the normal range (77.77 ± 12.12 nmol/L) (Figure 2). Daily administration of testosterone significantly increased TT serum levels at T3, and they remained at approximately the same level until T12; the results were similar for fT, which became significantly higher at T12 (p < 0.028). BioT was significantly increased at T3 (p < 0.028) and remained significantly higher at T6 (p < 0.05) and at T12 (p < 0.018).

Serum total testosterone. Serum total testosterone (TT) values at baseline (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. The insert reports the TT levels at baseline (T0) and at 1, 2 and 3 months from the beginning of testosterone replacement therapy (T1, T2 and T3, respectively) in the first four patients, used to evaluate the time course of tissue androgen permeation. Data are mean ± SEM. * p < 0.05 vs T0.

To evaluate the ratio between the unbound fraction (fT) and TT, we calculated a percentage (%fT). The %fT tended to decrease at T3 but did not reach significance. SHBG did not change significantly (Figure 3, Table 5).

Hormonal parameters evaluation. Serum dihydrotestosterone (DHT, A), estradiol (E2, B), DHT/TT ratio (C), E2/TT ratio (D), bioavailable testosterone (BioT, E) and free testosterone (fT, F) values at baseline (T0) and after 3, 6 and 12 months of testosterone replacement (T3, T6 and T12, respectively). Data are mean ± SEM. * p < 0.05 vs T0.

Hormonal parameters
TT: total testosterone; fT: free testosterone; BioT: bioavailable testosterone; SHBG: sex hormone-binding globulin; %fT: percentage of free testosterone; DHT: dihydrotestosterone; E2: estradiol; C: cortisol. Hormone levels determined before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.05 vs T0; ** p < 0.01 vs T0.
[SUBTITLE] Serum levels and ratios of the testosterone metabolites [SUBSECTION] The two metabolites of testosterone, DHT and E2, behaved differently. DHT increased progressively, reaching significance at T12 (p < 0.02), whereas E2 did not vary. The DHT/TT ratio did not change, while the E2/TT ratio was already significantly decreased at T3 and remained stable until T12 (p < 0.015, p < 0.021 and p < 0.011 at T3, T6 and T12, respectively). Cortisol showed a slow, non-significant reduction during the one-year therapy (Table 5).
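As a purely illustrative aid, the following sketch shows how the derived indices reported in Table 5 and Figure 3 (%fT and the metabolite-to-testosterone ratios) can be computed from the measured values. The unit handling is an assumption based on the units reported above (fT, DHT and E2 in pg/mL; TT in ng/mL), and the DHT and E2 inputs in the example are placeholders, since their baseline means are shown only graphically.

def derived_hormone_indices(tt_ng_ml, ft_pg_ml, dht_pg_ml, e2_pg_ml):
    """Sketch of the derived indices; unit conversions are assumptions."""
    tt_pg_ml = tt_ng_ml * 1000.0          # ng/mL -> pg/mL so all values share units
    return {
        "%fT": 100.0 * ft_pg_ml / tt_pg_ml,     # free T as a percentage of total T
        "DHT/TT": dht_pg_ml / tt_pg_ml,
        "E2/TT": e2_pg_ml / tt_pg_ml,
    }

# Baseline TT and fT means are quoted in the text; the DHT and E2 values here are
# placeholders standing in for the measured baseline means.
print(derived_hormone_indices(tt_ng_ml=1.16, ft_pg_ml=4.33, dht_pg_ml=250.0, e2_pg_ml=15.0))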
The PRIr-sensory and PRIr-evaluative were already lower at T3 (p < 0.018 for both), the miscellaneous at T6 (p < 0.046) and the PRIr-affective at T12 (p < 0.046) (Table 2).", "Both VAS values (current and last week peak) were high and not modified by treatment; ANOVA did not show any significant variations (χ 2 = 3.98, p = 0.26 and χ 2 = 3.86, p = 0.27, respectively) (Table 2).\nPain assessment\nVAS, Area% and QUID, values recorded before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.01 vs T0; # p < 0.02 vs T0; § p < 0.04 vs T0", "The percentage of body surface area in pain remained stable across time (ANOVA: χ2 = 1.74, p = 0.63) (Table 2).", "In all patients, the QUID values at T0 were quite high in total and partial ratings. Over time, there was a significant decrease in the PRIr-Total (χ 2 = 13.87, p < 0.003) and in the sensory (χ 2 = 8.29, p < 0.04), affective (χ 2 = 8.06, p < 0.04) and evaluative PRIr (χ 2 = 11.08, p < 0.01). The PRIr-sensory and PRIr-evaluative were already lower at T3 (p < 0.018 for both), the miscellaneous at T6 (p < 0.046) and the PRIr-affective at T12 (p < 0.046) (Table 2).", "[SUBTITLE] AMS scale [SUBSECTION] The mean baseline score of AMS Total was 44.1 ± 4.1. This value indicates \"moderate\" andrological impairment. Detailed analysis of the subscale ratings revealed that there was a significant decrease, i.e. improvement, of the 'sexual' dimension (ANOVA: χ 2 = 8.77, p < 0.03) (Table 3).\nAndrological assessment\nAMS scale scores recorded before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.03 vs T0.\nThe mean baseline score of AMS Total was 44.1 ± 4.1. This value indicates \"moderate\" andrological impairment. Detailed analysis of the subscale ratings revealed that there was a significant decrease, i.e. improvement, of the 'sexual' dimension (ANOVA: χ 2 = 8.77, p < 0.03) (Table 3).\nAndrological assessment\nAMS scale scores recorded before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.03 vs T0.", "The mean baseline score of AMS Total was 44.1 ± 4.1. This value indicates \"moderate\" andrological impairment. Detailed analysis of the subscale ratings revealed that there was a significant decrease, i.e. improvement, of the 'sexual' dimension (ANOVA: χ 2 = 8.77, p < 0.03) (Table 3).\nAndrological assessment\nAMS scale scores recorded before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.03 vs T0.", "[SUBTITLE] POMS, CES-D and SF-36 [SUBSECTION] All the basal values obtained in these questionnaires were outside the 'normal' ranges. In particular, POMS was higher than 55 in all subscales except Vigor-Activity (lower than normal), while CES-D was always higher than 16 and SF-36 lower than 50. None of the POMS subscale scores or CES-D ratings showed significant changes, while the SF-36 Mental Index displayed a significant improvement over time (ANOVA: χ 2 = 11.35, p = 0.009); in particular, the score increased progressively to become significantly higher at T12 than at T0 (p < 0.04) (Table 4).\nOther questionnaires\nScores of POMS, SF-36 and CESD questionnaires administered before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. 
Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.04 vs T0\nAll the basal values obtained in these questionnaires were outside the 'normal' ranges. In particular, POMS was higher than 55 in all subscales except Vigor-Activity (lower than normal), while CES-D was always higher than 16 and SF-36 lower than 50. None of the POMS subscale scores or CES-D ratings showed significant changes, while the SF-36 Mental Index displayed a significant improvement over time (ANOVA: χ 2 = 11.35, p = 0.009); in particular, the score increased progressively to become significantly higher at T12 than at T0 (p < 0.04) (Table 4).\nOther questionnaires\nScores of POMS, SF-36 and CESD questionnaires administered before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.04 vs T0", "All the basal values obtained in these questionnaires were outside the 'normal' ranges. In particular, POMS was higher than 55 in all subscales except Vigor-Activity (lower than normal), while CES-D was always higher than 16 and SF-36 lower than 50. None of the POMS subscale scores or CES-D ratings showed significant changes, while the SF-36 Mental Index displayed a significant improvement over time (ANOVA: χ 2 = 11.35, p = 0.009); in particular, the score increased progressively to become significantly higher at T12 than at T0 (p < 0.04) (Table 4).\nOther questionnaires\nScores of POMS, SF-36 and CESD questionnaires administered before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.04 vs T0", "Testosterone administration via gel is known to increase total testosterone serum levels immediately, reaching a steady state in 1-2 weeks [26]. Thus, the first result is that, although all our patients exhibited visible changes in body features (beard and hair increase) and in some habits closely dependent on androgens (increased food intake), the first significant increase in blood levels only occurred after 2 months of treatment as shown in the insert in Figure 1 where is depicted the time monthly time course recorded in the first 4 patients monitored to evaluate the efficacy of treatment.", "Total (TT), free (fT) and bioavailable testosterone (BioT) showed very low levels at T0 (TT 1.16 ± 0.28 ng/mL; fT 4.33 ± 0.89 pg/mL, BioT 0.34 ± 0.1 ng/dL) with respect to the normal range, while SHBG was close to the upper limit of the normal range (77.77 ± 12.12 nmol/L) (Figure 2). Daily administration of testosterone significantly increased the TT serum levels at T3, remaining at approximately the same level until T12; there were similar results for fT, which became significant at T12 (p < 0.028). BioT was significantly increased at T3 (p < 0.028) and remained significantly higher at T6 (p < 0.05) and at T12 (p < 0.018).\nSerum total testosterone. Serum total testosterone (TT) values at basal time (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. The insert reports the TT levels at basal time (T0) and 1, 2 and 3 months from the beginning of testosterone replacement therapy (T1, T2 and T3, respectively) of the first four patients used to evaluate the time course of tissue androgen permeation. Data are mean ± SEM. * p < 0.05 vs T0.\nTo evaluate the ratio between the unbound fraction (fT) and TT, we calculated a percentage (%fT). The %fT tended to decrease at T3, but did not reach significance. 
SHBG did not change significantly (Figure 3, Table 5).\nHormonal parameters evaluation. Serum dihydrotestosterone (DHT, A), estradiol (E2, B), DHT/T ratio (C), E/T ratio (D), bioavailable testosterone (BioT, E) and free testosterone (fT, F) values at basal level (T0) and after 3, 6 and 12 months of testosterone replacement (T3, T6 and T12, respectively). Data are mean ± SEM. * p < 0.05 vs T0.\nHormonal parameters\nTT: total testosterone; fT: free testosterone; BioT: bioavailable testosterone; SHBG: sex hormone-binding globulin; %fT: percentage of free testosterone; DHT: dihydrotestosterone; E2: estradiol; C: cortisol. Hormone levels determined before (T0) and after 3 (T3), 6 (T6) and 12 (T12) months of testosterone replacement therapy. Values are mean ± SEM (n = 9). Wilcoxon test: * p < 0.05 vs T0; ** p < 0.01 vs T0.", "The two metabolites of testosterone, DHT and E2, changed differently. DHT increased progressively, reaching significance at T12 (p < 0.02), whereas E2 did not vary. The DHT/TT ratio did not change, while the E2/TT ratio was already significantly decreased at T3 and remained stable until T12 (p < 0.015, p < 0.021 and p < 0.011 at T3, T6 and T12, respectively).\nCortisol showed a slow, non-significant reduction during the one-year therapy (Table 5).", "Evidence from this study suggests that one year of testosterone replacement therapy in male patients suffering from a severe form of chronic pain and diagnosed with morphine-induced hypogonadism (OPIAD) is able to positively change the hormonal and behavioural 'indicators' chosen to measure different aspects of their condition.\nAlthough pain research is continuously coming up with new products to treat chronic pain, opioids remain the reference therapy. Unfortunately, this approach involves several side effects, whose increasing morbidity may induce discontinuation of therapy [27]. With continued opioid use, many side effects diminish or resolve; conversely, others such as immune alteration and hypogonadism persist and are even more apparent after long-term therapy [3,4,28]. The subjects considered in this study, all long-term opioid users, were also clearly suffering from hypogonadism, as revealed by the low plasma testosterone levels and/or clinical symptoms indicative of this condition.\nTestosterone replacement therapy is a common approach in ageing males with partial androgen deficit or in young men hypogonadic due to traumatic or surgical causes; in these subjects, testosterone administration via gel is known to increase total testosterone serum levels immediately, reaching a steady state in 1-2 weeks [26]. Thus, it must be underlined that, although all our patients exhibited visible changes in body features (increased beard and hair growth) and in some habits closely dependent on androgens (increased food intake), the first significant increase in blood levels only occurred after 2 months of treatment. Further research is needed to better study the possible interaction among pain, opioids and androgen metabolism.\nSeveral hypotheses have been advanced to explain OPIAD [29]. One suggests that the hypogonadism is due to opioid-induced inhibition of gonadotropin release [30]; however, another hypothesis suggests that the inhibitory action is also exerted in the gonads. Indeed, opioid receptors have been described in the pituitary as well as in the gonads, and opioids have been found to up-regulate their own receptors. Another mechanism involves testosterone metabolism.
It is known that morphine increases 5-alpha reductase activity [9], and we have shown an excitatory effect of morphine on aromatase activity in vitro [31] and on both enzymes in ex-vivo tissues [32]; these enzymes, present in the liver but also widespread in body tissues, are involved in the transformation of testosterone into its metabolites dihydrotestosterone (DHT) and estradiol.\nFinally, we cannot exclude either a direct effect of pain on the HPG axis, since experimental pain has been observed to depress testosterone blood and brain levels in rats through the stress system, or the quite common use of other drugs that, like opioids, can inhibit the HPA axis [32,33].\n[SUBTITLE] Andrology [SUBSECTION] While the psychological and somatic aspects of the painful condition are usually taken into consideration by physicians, the sexual aspect of the patient's life is generally disregarded. In the present study, as in a previous one by Daniell's group [1], the andrological questionnaires administered at the beginning of the observations confirmed the presence of altered conditions in all patients, probably related not only to pain but also to the altered hormone levels. Indeed, in the present study as well, the AMS revealed 'moderate' andrological disturbances, indicating the presence of clinically relevant alterations. As expected, testosterone replacement progressively improved the sexual aspect, as confirmed by the AMS's sexual dimension.\n[SUBTITLE] Pain [SUBSECTION] We provide preliminary data suggesting that the pain experience, as assessed by QUID (a validated Italian pain questionnaire), is affected by testosterone replacement. In fact, QUID dimensions improved by the third month, reaching the best values at the sixth month. This was a very substantial result, present in all subjects. QUID was more sensitive than the other pain measurement tools administered (VAS, %Area), probably because the questionnaire is composed of a series of descriptors requiring patients to carefully analyse and define the ongoing pain experience. Furthermore, since this instrument studies the different dimensions of the pain experience (i.e. sensory, evaluative, affective), we were able to verify that all these components benefited from the testosterone therapy, albeit following different time courses.\nThe morphine dose matched the change in pain shown by QUID. Although three subjects asked for escalating doses to complete the pain relief (also by virtue of the low side effects they experienced), the others maintained or reduced the morphine amount, thus implicitly expressing pain amelioration. The results of the SF-36 Mental Index agreed with these findings, as they indicated an improvement in the respective domains, i.e. daily activities, emotional competence and social relations. Of course, the improvement in these aspects may have positively influenced the pain perception and vice versa, since a virtuous circle was established.\nThe extension and distribution of the body surface area in pain remained unchanged, as did the VAS scores. From the beginning of the epidural morphine administration, all the patients complained that the last week before the refill was the worst of the month, even though the dose of morphine was repeatedly verified to be constant in the last days of the cycle. Our interpretation of the high VAS scores involves two mechanisms, one involving patient attitudes and the other the instrument itself. Very long-lasting and unresponsive pain is not easily consciously modified, since a feeling of helplessness is unavoidable in these subjects, impeding their realization that any change has occurred, especially when it is positive. The VAS does not counteract this attitude. Due to its oversimplified nature, it does not require the patient to analyse and weigh his pain, and it facilitates recollection bias. In the end, the patient under-evaluates the measurement and over-emphasizes the pain he is rating. The findings on QUID, which requires the patient to focus on his experience, match perfectly with this explanation, making this instrument more reliable in the setting described herein. Interestingly, these results are very similar to those reported by Daniell and colleagues (2006) [1], in which the general improvement of living conditions provided by patch testosterone replacement was not followed by a clear decrease in pain measures.\nThe affective state of chronic pain is characterized by deep depression and high anxiety; sometimes such an affective condition depends not only on the pain severity but also on the underlying disease, especially when it is highly vexing and disabling. Our patients were affected by intense long-lasting pain and a serious illness. Thus, they were anxious and depressed, as shown by the POMS and CES-D scores. These ratings did not vary throughout the observational period, indicating that testosterone therapy did not influence the patients' affective state. It is likely that the sense of uncontrollability of their condition was so deeply rooted in their mind that it was not easily attenuated. On the other hand, the improvement of pain was incomplete and too recent with respect to its history for the attitudes toward it to be changed. Furthermore, the patients knew that their disease was incurable. Interestingly, the persistently high levels of anxiety and depression indicate that the decrease in pain shown by QUID cannot be attributed to the psychological component of the pain experience, as may occur in purposely treated chronic pain patients.", "The present results, although obtained in a small number of subjects and with a constant treatment dose, underline the possibility of successful testosterone replacement therapy in chronic pain patients treated with morphine. Hypogonadism is a usual consequence of opioid treatment, but it is rarely taken into consideration. Our results strongly suggest that this therapy can positively modulate the dimensions of pain. This effect allows us to propose the use of testosterone in clinics as an adjuvant, in combination with opioid therapy.", "The authors declare that they have no competing interests.", "AMA, IC, VB and GP conceived and supervised the project and edited the manuscript.\nIC, GdP, MC, AS, GS, SM, VP, LR, GP participated in the experimental process and data analysis. All authors contributed to data interpretation. All authors read and approved the final manuscript." ]
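The time-course statistics quoted throughout this record (a Friedman ANOVA over T0, T3, T6 and T12, followed by Wilcoxon tests of each follow-up visit against T0) correspond to a standard non-parametric repeated-measures scheme. The sketch below reproduces that scheme with scipy; the per-patient score matrix is a hypothetical placeholder (the record reports only group-level χ2 and p values), and the authors' exact post-hoc handling is not stated.

```python
# Minimal sketch of the non-parametric longitudinal analysis described above:
# Friedman test across the four visits, then Wilcoxon signed-rank tests vs baseline (T0).
# All numbers below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(1)
n_patients = 9                                      # as in the study
baseline = rng.normal(40.0, 4.0, size=n_patients)   # hypothetical T0 scores (e.g. QUID PRIr-Total)
trend = np.array([0.0, -6.0, -10.0, -11.0])         # hypothetical mean change at T0, T3, T6, T12
scores = baseline[:, None] + trend + rng.normal(0.0, 2.0, size=(n_patients, 4))

# Omnibus test: does the score change across the four time points?
chi2, p_friedman = friedmanchisquare(*scores.T)
print(f"Friedman: chi2 = {chi2:.2f}, p = {p_friedman:.4f}")

# Post-hoc comparisons of each follow-up visit against T0, as reported for Table 2.
for label, column in zip(["T3", "T6", "T12"], scores[:, 1:].T):
    w, p = wilcoxon(scores[:, 0], column)
    print(f"Wilcoxon {label} vs T0: W = {w:.1f}, p = {p:.4f}")
```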
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
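A small side note on the hormone arithmetic in the record above: the free-testosterone percentage (%fT) is only meaningful once fT (reported in pg/mL) and TT (reported in ng/mL) are expressed in the same unit. The record does not state the exact formula it used, so the lines below are an assumed, minimal illustration based on the baseline means quoted above.

```python
# Unit-aligned calculation of %fT from the baseline means reported in the record above.
# Assumption: %fT = 100 * fT / TT, with TT first converted from ng/mL to pg/mL.
tt_ng_per_ml = 1.16     # mean total testosterone at T0
ft_pg_per_ml = 4.33     # mean free testosterone at T0

tt_pg_per_ml = tt_ng_per_ml * 1000.0              # 1 ng/mL = 1000 pg/mL
percent_free = 100.0 * ft_pg_per_ml / tt_pg_per_ml
print(f"%fT at T0 ~ {percent_free:.2f}%")          # ~0.37%, a low fraction consistent with the elevated SHBG
```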
Perceptions of anti-smoking messages amongst high school students in Pakistan.
21333006
Surveys have provided evidence that tobacco use is widely prevalent amongst the youth in Pakistan. Several reviews have evaluated the effectiveness of various tobacco control programs; however, few have taken into account the perceptions of students themselves regarding these measures. The aim of this study was to determine the most effective anti-smoking messages that can be delivered to high-school students in Pakistan, based on their self-rated perceptions. It also aimed to assess the impact of pictorial/multi-media messages compared with written health warnings and to discover differences between the perceptions of smokers and those of non-smokers regarding health warning messages.
BACKGROUND
This study was carried out in five major cities of Pakistan in private English-medium schools. A presentation was delivered at each school that highlighted the well-established health consequences of smoking using both written health warnings and pictorial/multi-media health messages. Following the presentation, the participants filled out a graded questionnaire form, using which they rated the risk-factors and messages that they thought were most effective in stopping or preventing them from smoking. The Friedman test was used to rank responses to each of the questions in the form. The Wilcoxon Signed Rank test was used to analyze the impact of pictorial/multi-media messages over written statements. The Mann Whitney U test was used to compare responses of smokers with those of non-smokers.
METHODS
A picture of an oral cavity cancer and videos of a cancer patient using an electronic voice box and of a patient on a ventilator were perceived by students to be the most effective anti-smoking messages. Addiction, harming others through passive smoking and the impact of smoking on disposable incomes were perceived to be less effective messages. Pictorial/multi-media messages were perceived to be more effective than written health warnings. Health warnings were perceived as less effective amongst smokers compared to non-smokers.
RESULTS
Graphic pictorial/multi-media health warnings that depict cosmetic and functional distortions were perceived as effective anti-smoking messages by English-medium high school students in Pakistan. Smokers demonstrated greater resistance to health promotion messages compared with non-smokers. Targeted interventions for high school students may be beneficial.
CONCLUSION
[ "Adolescent", "Advertising", "Data Collection", "Female", "Humans", "Male", "Pakistan", "Persuasive Communication", "Risk Factors", "Smoking Prevention", "Students" ]
3051908
null
null
Methods
[SUBTITLE] Study Setting [SUBSECTION] This study was carried out in five cities of Pakistan, including Islamabad, Rawalpindi, Lahore, Faisalabad and Karachi, from January to February, 2010. These cities represent the major urban centers of Pakistan where the youth has access to tobacco products and is influenced by advertising. A minimum of two high schools from each city were identified for carrying out this study. These schools allowed convenient access to adolescents and were an appropriate setting to conduct the study. [SUBTITLE] Study Design and Procedure [SUBSECTION] A Microsoft PowerPoint presentation was developed that highlighted some of the most important and well-established health consequences of smoking. These were derived and adapted from the US Surgeon General's report published in collaboration with the Centre for Disease Control[11]. The presentation consisted of slides that provided details and written warnings of each tobacco-related illness. Following the health warnings of each specific illness, a slide utilizing different pictorial/multi-media aids was used to show the health outcome of the disease. These included a picture of smokers' lungs, pictures of oral cancer, a video of a person using an electronic voice box, a video of a patient on a ventilator and a video of a person paralyzed due to stroke. The pictures and multi-media aids were obtained from open-access websites such as YouTube. In addition, other harmful effects of tobacco such as addiction, social implications and dangers posed by secondhand smoke exposure were also highlighted in the presentation. The presentation was delivered at each of the schools selected for the study. At the end of the presentation, the students were asked to fill out a graded questionnaire form, using which they rated the risk-factors and messages that they thought were most effective in stopping or preventing them from smoking (Table 1). The questionnaire form consisted of a total of 20 questions related to the anti-smoking messages, with responses ranging from 1 to 5 (1 = Not at all, 2 = Unlikely, 3 = Unsure, 4 = Likely, 5 = Definitely). The demographics of the participants, including age and gender, were noted along with smoking status. Tobacco usage greater than once in the month preceding the administration of the questionnaire was taken as positive for both cigarette smoking and water-pipe smoking. This cut-off was adapted from the criteria used in the GYTS[4]. Prior ethical clearance was sought at the Aga Khan University. Mean rank scores of responses using the Friedman test Friedman's p value < 0.001 [SUBTITLE] Sample [SUBSECTION] All of the schools enrolled in the study were private schools where the medium of instruction was English. This was due to certain logistical and financial constraints of conducting the study in government-run schools, such as the availability of multi-media projectors and back-up generators in case of power failures. A recent survey showed that a third of Pakistanis are educated in English-medium private schools and a further 15% are in English-medium government schools[12]. This sample may therefore not be representative of the remaining students belonging to a lower socio-economic group who are currently enrolled in other government schools or madrassahs. However, efforts are being made to convert these into English-medium facilities in the future[13]. The presentation was delivered to students in small groups consisting of approximately 40 students each. Both male and female students as well as current smokers and non-smokers were included. Approval from the faculty and the administration of the schools where the study was conducted was sought before delivering the presentation. The responsible faculty members were approached, briefed on the purpose of the study and shown the details of the presentation for their approval. Student groups were then arranged by the schools' administration prior to the delivery of the presentation. The students were asked to sign a consent form that was included with the questionnaire for participating in the study. [SUBTITLE] Data Analysis [SUBSECTION] The data was analyzed using the Statistical Package for Social Sciences (SPSS v16.0). To compare responses to questions across the data set, the Friedman test for non-parametric data was utilized. This test was used to generate ranks between individual questions in the dataset. These were utilized to show which risk factors and multi-media aids adolescents considered as the most effective anti-smoking messages. To assess the impact of pictorial/multi-media health warnings, five questions pertaining to these were each paired with questions on their associated written text warnings (Table 2). The Wilcoxon Signed Ranks Test was utilized to assess whether there was any statistically significant difference in the responses to the questions within these pairs. The Mann Whitney U test was utilized to compare the responses of smokers to those of non-smokers. A p value of < 0.05 was taken as significant for each of the tests. Comparison of text warnings with multi-media warnings as deterrents from smoking
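The Data Analysis subsection above describes an SPSS workflow: a Friedman test to rank the 20 questionnaire items, Wilcoxon Signed Rank tests for the five text-versus-multimedia pairs, and Mann-Whitney U tests comparing smokers with non-smokers. An equivalent sketch using scipy is shown below; the response matrix, the item indices chosen for the paired comparison, and the smoker flag are hypothetical placeholders rather than the study's data.

```python
# Illustrative scipy equivalent of the SPSS analyses described above.
# rows = students, columns = the 20 questionnaire items scored 1 (Not at all) to 5 (Definitely).
import numpy as np
from scipy.stats import friedmanchisquare, mannwhitneyu, rankdata, wilcoxon

rng = np.random.default_rng(0)
n_students, n_items = 388, 20
responses = rng.integers(1, 6, size=(n_students, n_items))  # placeholder responses
is_smoker = rng.random(n_students) < 0.25                   # placeholder smoking status

# 1) Friedman test across all 20 items; per-item mean ranks order the messages (as in Table 1).
chi2, p_friedman = friedmanchisquare(*responses.T)
mean_ranks = rankdata(responses, axis=1).mean(axis=0)
print(f"Friedman chi2 = {chi2:.1f}, p = {p_friedman:.3g}; top-ranked item = Q{mean_ranks.argmax() + 1}")

# 2) Wilcoxon Signed Rank test for one written-warning vs multimedia-warning pair (as in Table 2).
text_item, media_item = 0, 1   # hypothetical column indices for one of the five pairs
stat, p_pair = wilcoxon(responses[:, text_item], responses[:, media_item])
print(f"Text vs multimedia pair: p = {p_pair:.3g}")

# 3) Mann-Whitney U test comparing smokers with non-smokers on a single item.
u, p_group = mannwhitneyu(responses[is_smoker, 2], responses[~is_smoker, 2], alternative="two-sided")
print(f"Smokers vs non-smokers: U = {u:.0f}, p = {p_group:.3g}")
```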
null
null
null
null
[ "Background", "Study Setting", "Study Design and Procedure", "Sample", "Data Analysis", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Evidence indicates that most smokers initiate the habit before attaining adulthood[1]. In addition, young smokers and adolescents are more likely to develop nicotine dependence[2]. This acquires particular significance in developing countries with demographic patterns similar to Pakistan's, where the median age of the population is only 21 years[3]. Here, future trends of tobacco attributable mortality are likely to be influenced by current tobacco consumption and perceptions amongst the youth. This school-going age group comprises the largest segment of the population and is also the most susceptible towards experimentation with tobacco. Therefore, achieving tobacco control amongst this age group is critical for mitigating the public health consequences of this emerging epidemic.\nVarious surveys have provided evidence that tobacco use is widely prevalent amongst students and adolescents in the urban areas of Pakistan. The results of the Global Youth Tobacco Survey (GYTS) showed that tobacco use prevalence was 12.8% in males and 8.0% in females aged 13-15[4]. A study in Karachi evaluating smoking in males showed that prevalence increases to 19.2% in ages 15-17, 26.5% in ages 18-20 and reaches 65% in 21 years and above[5]. A survey in Karachi targeting adolescent female smoking showed a prevalence of 16.3% in the above 15 age group[6]. In addition shisha use is becoming increasingly popular in the student age group[7].\nWhilst several reviews have evaluated the effectiveness of various tobacco control programs, few have taken into account the perceptions of students themselves regarding these measures. It is important to discover the factors that the youth considers as significant in either encouraging them to cease the habit or from initiating smoking in the first place. The current cigarettes pack warnings in the country that consist only of text, for example, may be less effective than pictorial warnings, as has been demonstrated elsewhere[8]. In addition, students that have already initiated smoking may be more resistant to anti-smoking messages. Such data is essential for review before effective health promotion advertisements, curricular material in textbooks and appropriate legislation can be introduced. Although a majority of anti-tobacco modalities are not specifically designed for the youth, there is evidence to suggest that such targeted interventions are highly effective ways of curtailing the tobacco epidemic [9,10].\nTherefore, the aim of this study was to determine the most effective anti-smoking messages that can be delivered in order to reduce tobacco consumption amongst high-school students in Pakistan based on their own, self-rated perceptions and to highlight which risk-factors related to tobacco consumption did the students consider most significant in deterring them from smoking. We also aimed to test the hypothesis that pictorial/multi-media warnings will be more effective than text-only warnings and to discover whether there was any difference in the perceptions of smokers to those of non-smokers to these health messages.", "This study was carried out in five cities of Pakistan, including Islamabad, Rawalpindi, Lahore, Faisalabad and Karachi, from January to February, 2010. These cities represent the major urban centers of Pakistan where the youth has access to tobacco products and is influenced by advertising. A minimum of two high schools from each city were identified for carrying out this study. 
These schools allowed convenient access to adolescents and were an appropriate setting to conduct the study.", "A Microsoft PowerPoint presentation was developed that highlighted some of the most important and well-established health consequences of smoking. These were derived and adapted from the US Surgeon General's report published in collaboration with the Centre for Disease Control[11]. The presentation consisted of slides that provided details and written warnings of each tobacco-related illness. Following the health warnings of each specific illness, a slide utilizing different pictorial/multi-media aids was used to show the health outcome of the disease. These included a picture of smokers' lungs, pictures of oral cancer, a video of a person using an electronic voice box, a video of a patient on a ventilator and a video of a person paralyzed due to stroke. The pictures and multi-media aids were obtained from open-access websites such as YouTube. In addition, other harmful effects of tobacco such as addiction, social implications and dangers posed by secondhand smoke exposure were also highlighted in the presentation.\nThe presentation was delivered at each of the schools selected for the study. At the end of the presentation, the students were asked to fill out a graded questionnaire form, using which they rated the risk-factors and messages that they thought were most effective in stopping or preventing them from smoking (Table 1). The questionnaire form consisted of a total of 20 questions related to the anti-smoking messages, with responses ranging from 1 to 5 (1 = Not at all, 2 = Unlikely, 3 = Unsure, 4 = Likely, 5 = Definitely). The demographics of the participants, including age and gender, were noted along with smoking status. Tobacco usage greater than once in the month preceding the administration of the questionnaire was taken as positive for both cigarette smoking and water-pipe smoking. This cut-off was adapted from the criteria used in the GYTS[4]. Prior ethical clearance was sought at the Aga Khan University.\nMean rank scores of responses using the Friedman test\nFriedman's p value < 0.001", "All of the schools enrolled in the study were private schools where the medium of instruction was English. This was due to certain logistical and financial constraints of conducting the study in government-run schools, such as the availability of multi-media projectors and back-up generators in case of power failures. A recent survey showed that a third of Pakistanis are educated in English-medium private schools and a further 15% are in English-medium government schools[12]. This sample may therefore not be representative of the remaining students belonging to a lower socio-economic group who are currently enrolled in other government schools or madrassahs. However, efforts are being made to convert these into English-medium facilities in the future[13]. The presentation was delivered to students in small groups consisting of approximately 40 students each. Both male and female students as well as current smokers and non-smokers were included. Approval from the faculty and the administration of the schools where the study was conducted was sought before delivering the presentation. The responsible faculty members were approached, briefed on the purpose of the study and shown the details of the presentation for their approval. Student groups were then arranged by the schools' administration prior to the delivery of the presentation. The students were asked to sign a consent form that was included with the questionnaire for participating in the study.", "The data was analyzed using the Statistical Package for Social Sciences (SPSS v16.0). To compare responses to questions across the data set, the Friedman test for non-parametric data was utilized. This test was used to generate ranks between individual questions in the dataset. These were utilized to show which risk factors and multi-media aids adolescents considered as the most effective anti-smoking messages. To assess the impact of pictorial/multi-media health warnings, five questions pertaining to these were each paired with questions on their associated written text warnings (Table 2). The Wilcoxon Signed Ranks Test was utilized to assess whether there was any statistically significant difference in the responses to the questions within these pairs. The Mann Whitney U test was utilized to compare the responses of smokers to those of non-smokers. A p value of < 0.05 was taken as significant for each of the tests.\nComparison of text warnings with multi-media warnings as deterrents from smoking", "A total of 388 high school students were included in the study, of whom 245 were males and 142 were females. The mean age of the sample population was 17 with a standard deviation of 1.51. Out of the sample, a total of 97 (25.5%) identified themselves as smokers, of whom 70 were males (28.5% of males) and 27 were females (19% of females). A total of 150 (38.7%) participants answered positively for shisha smoking, of whom 104 were males (42.5% of males) and 46 were females (32.4% of females).\nTable 1 shows the mean rank scores generated using the Friedman test for the responses to each of the questions. \"Did watching a picture of an oral cavity cancer convince you of the harmful effects of smoking,\" had the highest rank. \"Smoking addiction adversely affects disposable incomes-Does knowing this risk stop you from smoking,\" ranked the lowest. The Friedman p-value was < 0.001.\nTable 2 shows the comparison of responses to questions regarding written health warnings with their associated multi-media messages. Responses were significantly greater for the pictorial/multi-media messages in each of the pairs except for \"Video of a person recovering from stroke,\" which was not significantly different from the written statement.\nThe comparison of responses given by smokers with those of non-smokers yielded significantly lower scores (p < 0.01) for the former group across the question set.\nOverall, an encouraging response was received from the faculty and the students to both the presentation and the study in the schools that were visited. All of the students attending the presentations consented to be a part of the study. However, one of the schools approached, which was a girls-only school, did not consent to the documentation of the students' smoking prevalence. The survey was not carried out at this school and only the presentation was delivered. An alternative school was subsequently selected for inclusion in the study.", "Pakistan has taken a number of tangible steps towards reducing adolescent tobacco consumption in the country, such as enforcing bans on tobacco advertising and underage sales. A recent decision by the Ministry of Health to introduce pictorial warnings on cigarette packs could also have a major impact[14].
However, for comprehensive enforcement of the Framework Convention for Tobacco Control (FCTC), the government will need to ensure that the warnings are rotated, are of appropriate size and are present on all packaging and labeling[15]. In addition, current tobacco control legislation is not directed against shisha smoking, which is acquiring increasing popularity amongst the youth[16].\nOur results suggest that the effectiveness of the health messages could also be determined by the type of warning that is delivered. Graphic visual images, such as pictures of oral cavity cancers, were perceived to have the greatest impact in deterring students from smoking. Multi-media aids that conveyed messages students could relate to, both anatomically and functionally, ranked higher than the more commonly used pictures of a 'smoker's lungs,' which may not convey the health warning with a similar impact. These multi-media aids also included videos of a patient using an electronic larynx and a patient on a ventilator. These findings suggest that such multi-media aids may be effective advertisements for health promotion campaigns.\nOur findings give further support to the use of pictorial and multi-media health warnings instead of warnings consisting only of text, which were perceived to be less effective. This is particularly pertinent in countries with poor literacy rates such as Pakistan. In addition, cigarette pack warnings in the country are often in English, which is understood by a limited segment of the population, hence obscuring the necessary health promotion messages. Multi-media anti-smoking messages could therefore improve awareness of the health consequences of smoking amongst the youth in Pakistan. Modifying label packaging to include graphic health warnings has been demonstrated as an effective means of reducing tobacco consumption and improving awareness of the health consequences of smoking in other countries within this age group[10,17-19].\nThe participants did not perceive the current ban on smoking in indoor public areas to be an impediment to smoking. This suggests that they are either unaware of the relevant legislation or that they do not believe the laws will be enforced and violations dealt with. They also did not perceive harming others through second hand smoke to be a major deterring factor. These findings suggest that there is a substantial lack of awareness regarding the hazards of second hand smoke amongst adolescents. In addition, the low scores for responses to questions relating to addiction and to cigarettes as a 'gateway drug' also suggest a lack of awareness of the severity of these conditions. This is of significance in the school-going age group as addiction is cited as the commonest reason for failure of smoking cessation during adulthood[20].\nFinally, those who identified themselves as smokers gave significantly lower responses than non-smokers across the question set. This suggests that susceptibility to anti-smoking messages may decrease substantially once the habit has been initiated on a regular basis. Such early demonstration of intransigence to health promotion messages does not portend well for future smoking cessation during adulthood. 
This suggests that early, directed interventions aimed at students and adolescents may be beneficial, as appropriate messages are delivered before the habit is initiated.\nThe study was limited by the fact that it was carried out in private schools where the medium of instruction is English and the students belonged to a relatively higher socio-economic group. This could explain why the impact of smoking on disposable incomes was not cited as a major deterring factor. This could, however, be of greater relevance for adolescents belonging to a lower socio-economic group. Based on these findings, a follow-up study is now being carried out in public schools where Urdu is the medium of instruction.", "Graphic visual images and multi-media aids that vividly depict cosmetic distortions and loss of normal organ function as outcomes of the diseases associated with smoking are perceived by high school students as the most effective modalities in deterring them from smoking. These aids, in the form of health warnings, health promotion campaigns and material in school curricula, may be useful as effective tobacco control modalities in developing countries with young populations. Students who have already initiated the habit may be more resistant to tobacco control messages, hence early intervention may prove to be beneficial. In addition, the lack of awareness of other hazardous effects of smoking, such as addiction and secondhand smoke exposure, needs to be addressed in Pakistan.", "The authors declare that they have no competing interests.", "SMAZ participated in the study design, manuscript writing, data collection and analysis. ALB participated in data analysis and manuscript writing. AS participated in data collection and analysis. SHI participated in data collection. JAK participated in study design and coordination and provided technical supervision.\nAll authors have read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/117/prepub\n" ]
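A minimal illustration of the analysis workflow described in the Data Analysis passage above (a Friedman test to rank the 20 questionnaire items, Wilcoxon signed-rank tests for the five text-versus-multimedia warning pairs, and a Mann-Whitney U test comparing smokers with non-smokers) is sketched below. This is a hypothetical example using pandas and scipy: the column names (q1-q20, the smoker flag) and the example pairings are assumptions for illustration only, not the study's actual variable names, data or code.

```python
# Hypothetical sketch of the non-parametric analyses described above.
# Assumes a per-student table with columns q1..q20 (ratings 1-5) and
# 'smoker' (1 = current smoker, 0 = non-smoker); these names are illustrative.
import pandas as pd
from scipy import stats

def run_analyses(df: pd.DataFrame):
    question_cols = [f"q{i}" for i in range(1, 21)]

    # Friedman test across the 20 related ratings (repeated measures per student).
    friedman_stat, friedman_p = stats.friedmanchisquare(*[df[c] for c in question_cols])

    # Mean ranks per question (ranked within each student, then averaged),
    # analogous to the mean rank scores reported alongside the Friedman test.
    mean_ranks = df[question_cols].rank(axis=1).mean().sort_values(ascending=False)

    # Wilcoxon signed-rank test for each text-warning/multimedia-warning pair;
    # the pairings here are placeholders standing in for the five pairs of Table 2.
    pairs = [("q1", "q2"), ("q3", "q4"), ("q5", "q6"), ("q7", "q8"), ("q9", "q10")]
    wilcoxon_results = {pair: stats.wilcoxon(df[pair[0]], df[pair[1]]) for pair in pairs}

    # Mann-Whitney U test comparing smokers with non-smokers on each item.
    smokers, non_smokers = df[df["smoker"] == 1], df[df["smoker"] == 0]
    mannwhitney_results = {
        c: stats.mannwhitneyu(smokers[c], non_smokers[c], alternative="two-sided")
        for c in question_cols
    }
    return friedman_stat, friedman_p, mean_ranks, wilcoxon_results, mannwhitney_results
```

Each result would then be read against the 0.05 significance threshold stated in the Data Analysis section, mirroring the comparisons reported in Tables 1 and 2.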
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study Setting", "Study Design and Procedure", "Sample", "Data Analysis", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Evidence indicates that most smokers initiate the habit before attaining adulthood[1]. In addition, young smokers and adolescents are more likely to develop nicotine dependence[2]. This acquires particular significance in developing countries with demographic patterns similar to Pakistan's, where the median age of the population is only 21 years[3]. Here, future trends of tobacco attributable mortality are likely to be influenced by current tobacco consumption and perceptions amongst the youth. This school-going age group comprises the largest segment of the population and is also the most susceptible towards experimentation with tobacco. Therefore, achieving tobacco control amongst this age group is critical for mitigating the public health consequences of this emerging epidemic.\nVarious surveys have provided evidence that tobacco use is widely prevalent amongst students and adolescents in the urban areas of Pakistan. The results of the Global Youth Tobacco Survey (GYTS) showed that tobacco use prevalence was 12.8% in males and 8.0% in females aged 13-15[4]. A study in Karachi evaluating smoking in males showed that prevalence increases to 19.2% in ages 15-17, 26.5% in ages 18-20 and reaches 65% in 21 years and above[5]. A survey in Karachi targeting adolescent female smoking showed a prevalence of 16.3% in the above 15 age group[6]. In addition shisha use is becoming increasingly popular in the student age group[7].\nWhilst several reviews have evaluated the effectiveness of various tobacco control programs, few have taken into account the perceptions of students themselves regarding these measures. It is important to discover the factors that the youth considers as significant in either encouraging them to cease the habit or from initiating smoking in the first place. The current cigarettes pack warnings in the country that consist only of text, for example, may be less effective than pictorial warnings, as has been demonstrated elsewhere[8]. In addition, students that have already initiated smoking may be more resistant to anti-smoking messages. Such data is essential for review before effective health promotion advertisements, curricular material in textbooks and appropriate legislation can be introduced. Although a majority of anti-tobacco modalities are not specifically designed for the youth, there is evidence to suggest that such targeted interventions are highly effective ways of curtailing the tobacco epidemic [9,10].\nTherefore, the aim of this study was to determine the most effective anti-smoking messages that can be delivered in order to reduce tobacco consumption amongst high-school students in Pakistan based on their own, self-rated perceptions and to highlight which risk-factors related to tobacco consumption did the students consider most significant in deterring them from smoking. We also aimed to test the hypothesis that pictorial/multi-media warnings will be more effective than text-only warnings and to discover whether there was any difference in the perceptions of smokers to those of non-smokers to these health messages.", "[SUBTITLE] Study Setting [SUBSECTION] This study was carried out in five cities of Pakistan, including Islamabad, Rawalpindi, Lahore, Faisalabad and Karachi, from January to February, 2010. These cities represent the major urban centers of Pakistan where the youth has access to tobacco products and is influenced by advertising. A minimum of two high schools from each city were identified for carrying out this study. 
These schools allowed convenient access to adolescents and were an appropriate setting to conduct the study.\nThis study was carried out in five cities of Pakistan, including Islamabad, Rawalpindi, Lahore, Faisalabad and Karachi, from January to February, 2010. These cities represent the major urban centers of Pakistan where the youth has access to tobacco products and is influenced by advertising. A minimum of two high schools from each city were identified for carrying out this study. These schools allowed convenient access to adolescents and were an appropriate setting to conduct the study.\n[SUBTITLE] Study Design and Procedure [SUBSECTION] A Microsoft PowerPoint presentation was developed that highlighted some of the most important and well-established health consequences of smoking. These were derived and adapted from the US Surgeon General's report published in collaboration with the Centre for Disease Control[11]. The presentation consisted of slides that provided details and written warnings of each tobacco related illness. Following the health warnings of each specific illness, a slide utilizing different pictorial/multi-media aids was used to show the health outcome of the disease. These included a picture of smokers' lungs, pictures of oral cancer, a video of a person using an electronic voice box, a video of a patient on a ventilator and a video of a person paralyzed due to stroke. The pictures and multi-media aids were obtained from open-access websites such as YouTube. In addition, other harmful effects of tobacco such as addiction, social implications and dangers posed by secondhand smoke exposure were also highlighted in the presentation.\nThe presentation was delivered at each of the schools selected for the study. At the end of the presentation, the students were asked to fill out a graded questionnaire form, using which they rated the risk-factors and messages that they thought were most effective in stopping or preventing them from smoking (Table 1). The questionnaire form consisted of a total of 20 questions related to the anti-smoking messages with responses ranging from 1 to 5 (1 = Not at all, 2 = Unlikely, 3 = Unsure, 4 = Likely, 5 = Definitely). The demographics of the participants including age and gender were noted along with smoking status. Tobacco usage greater than once in the month preceding the administration of the questionnaire was taken as positive for both cigarette smoking and water-pipe smoking. This figure was adapted from the criteria used in the GYTS[4]. Prior ethical clearance was sought at the Aga Khan University.\nMean rank scores of responses using the Friedman test\nFriedman's p value < 0.001\nA Microsoft PowerPoint presentation was developed that highlighted some of the most important and well-established health consequences of smoking. These were derived and adapted from the US Surgeon General's report published in collaboration with the Centre for Disease Control[11]. The presentation consisted of slides that provided details and written warnings of each tobacco related illness. Following the health warnings of each specific illness, a slide utilizing different pictorial/multi-media aids was used to show the health outcome of the disease. These included a picture of smokers' lungs, pictures of oral cancer, a video of a person using an electronic voice box, a video of a patient on a ventilator and a video of a person paralyzed due to stroke. The pictures and multi-media aids were obtained from open-access websites such as YouTube. 
In addition, other harmful effects of tobacco such as addiction, social implications and dangers posed by secondhand smoke exposure were also highlighted in the presentation.\nThe presentation was delivered at each of the schools selected for the study. At the end of the presentation, the students were asked to fill out a graded questionnaire form, using which they rated the risk-factors and messages that they thought were most effective in stopping or preventing them from smoking (Table 1). The questionnaire form consisted of a total of 20 questions related to the anti-smoking messages with responses ranging from 1 to 5 (1 = Not at all, 2 = Unlikely, 3 = Unsure, 4 = Likely, 5 = Definitely). The demographics of the participants including age and gender were noted along with smoking status. Tobacco usage greater than once in the month preceding the administration of the questionnaire was taken as positive for both cigarette smoking and water-pipe smoking. This figure was adapted from the criteria used in the GYTS[4]. Prior ethical clearance was sought at the Aga Khan University.\nMean rank scores of responses using the Friedman test\nFriedman's p value < 0.001\n[SUBTITLE] Sample [SUBSECTION] All of the schools enrolled in the study were private schools where the medium of instruction was in English. This was due to certain logistical and financial constraints of conducting the study in government-run schools such as, availability of multi-media projectors and back-up generators in case of power failures. A recent survey showed that a third of Pakistanis are educated in English-medium private schools and a further 15% are in English-medium government schools[12]. This sample may therefore not be representative of the remaining students belonging from a lower socio-economic group that are currently enrolled in other government schools or madrassahs. However, efforts are being made to convert these into English-medium facilities in the future[13]. The presentation was delivered to students in small groups consisting of approximately 40 students each. Both male and female students as well as current smokers and non-smokers were included. Approval from the faculty and the administration of the schools where the study was conducted was sought before delivering the presentation. The responsible faculty members were approached, briefed on the purpose of the study and were shown the details of the presentation for their approval. Student groups were then arranged by the schools' administration prior to the delivery of the presentation. The students were asked to sign a consent form that was included with the questionnaire for participating in the study.\nAll of the schools enrolled in the study were private schools where the medium of instruction was in English. This was due to certain logistical and financial constraints of conducting the study in government-run schools such as, availability of multi-media projectors and back-up generators in case of power failures. A recent survey showed that a third of Pakistanis are educated in English-medium private schools and a further 15% are in English-medium government schools[12]. This sample may therefore not be representative of the remaining students belonging from a lower socio-economic group that are currently enrolled in other government schools or madrassahs. However, efforts are being made to convert these into English-medium facilities in the future[13]. 
The presentation was delivered to students in small groups consisting of approximately 40 students each. Both male and female students as well as current smokers and non-smokers were included. Approval from the faculty and the administration of the schools where the study was conducted was sought before delivering the presentation. The responsible faculty members were approached, briefed on the purpose of the study and were shown the details of the presentation for their approval. Student groups were then arranged by the schools' administration prior to the delivery of the presentation. The students were asked to sign a consent form that was included with the questionnaire for participating in the study.\n[SUBTITLE] Data Analysis [SUBSECTION] The data was analyzed using Statistical Package for Social Sciences (SPSS v16.0). To compare responses to questions across the data set, the Friedman test for non-parametric data was utilized. This test was used to generate ranks between individual questions in the dataset. These were utilized to show which risk factors and multi-media aids adolescents considered as the most effective anti-smoking messages. To assess the impact of pictorial/multi-media health warnings, five questions pertaining to these were each paired with questions of their associated written text warnings (Table 2). The Wilcoxon Signed Ranks Test was utilized to assess whether there was any statistically significant difference in the responses to the questions within these pairs. The Mann Whitney U test was utilized to compare the responses of smokers to those of non-smokers. A p value of < 0.05 was taken as significant for each of the tests.\nComparison of text warnings with multi-media warnings as deterrents from smoking\nThe data was analyzed using Statistical Package for Social Sciences (SPSS v16.0). To compare responses to questions across the data set, the Friedman test for non-parametric data was utilized. This test was used to generate ranks between individual questions in the dataset. These were utilized to show which risk factors and multi-media aids adolescents considered as the most effective anti-smoking messages. To assess the impact of pictorial/multi-media health warnings, five questions pertaining to these were each paired with questions of their associated written text warnings (Table 2). The Wilcoxon Signed Ranks Test was utilized to assess whether there was any statistically significant difference in the responses to the questions within these pairs. The Mann Whitney U test was utilized to compare the responses of smokers to those of non-smokers. A p value of < 0.05 was taken as significant for each of the tests.\nComparison of text warnings with multi-media warnings as deterrents from smoking", "This study was carried out in five cities of Pakistan, including Islamabad, Rawalpindi, Lahore, Faisalabad and Karachi, from January to February, 2010. These cities represent the major urban centers of Pakistan where the youth has access to tobacco products and is influenced by advertising. A minimum of two high schools from each city were identified for carrying out this study. These schools allowed convenient access to adolescents and were an appropriate setting to conduct the study.", "A Microsoft PowerPoint presentation was developed that highlighted some of the most important and well-established health consequences of smoking. These were derived and adapted from the US Surgeon General's report published in collaboration with the Centre for Disease Control[11]. 
The presentation consisted of slides that provided details and written warnings of each tobacco related illness. Following the health warnings of each specific illness, a slide utilizing different pictorial/multi-media aids was used to show the health outcome of the disease. These included a picture of smokers' lungs, pictures of oral cancer, a video of a person using an electronic voice box, a video of a patient on a ventilator and a video of a person paralyzed due to stroke. The pictures and multi-media aids were obtained from open-access websites such as YouTube. In addition, other harmful effects of tobacco such as addiction, social implications and dangers posed by secondhand smoke exposure were also highlighted in the presentation.\nThe presentation was delivered at each of the schools selected for the study. At the end of the presentation, the students were asked to fill out a graded questionnaire form, using which they rated the risk-factors and messages that they thought were most effective in stopping or preventing them from smoking (Table 1). The questionnaire form consisted of a total of 20 questions related to the anti-smoking messages with responses ranging from 1 to 5 (1 = Not at all, 2 = Unlikely, 3 = Unsure, 4 = Likely, 5 = Definitely). The demographics of the participants including age and gender were noted along with smoking status. Tobacco usage greater than once in the month preceding the administration of the questionnaire was taken as positive for both cigarette smoking and water-pipe smoking. This figure was adapted from the criteria used in the GYTS[4]. Prior ethical clearance was sought at the Aga Khan University.\nMean rank scores of responses using the Friedman test\nFriedman's p value < 0.001", "All of the schools enrolled in the study were private schools where the medium of instruction was in English. This was due to certain logistical and financial constraints of conducting the study in government-run schools such as, availability of multi-media projectors and back-up generators in case of power failures. A recent survey showed that a third of Pakistanis are educated in English-medium private schools and a further 15% are in English-medium government schools[12]. This sample may therefore not be representative of the remaining students belonging from a lower socio-economic group that are currently enrolled in other government schools or madrassahs. However, efforts are being made to convert these into English-medium facilities in the future[13]. The presentation was delivered to students in small groups consisting of approximately 40 students each. Both male and female students as well as current smokers and non-smokers were included. Approval from the faculty and the administration of the schools where the study was conducted was sought before delivering the presentation. The responsible faculty members were approached, briefed on the purpose of the study and were shown the details of the presentation for their approval. Student groups were then arranged by the schools' administration prior to the delivery of the presentation. The students were asked to sign a consent form that was included with the questionnaire for participating in the study.", "The data was analyzed using Statistical Package for Social Sciences (SPSS v16.0). To compare responses to questions across the data set, the Friedman test for non-parametric data was utilized. This test was used to generate ranks between individual questions in the dataset. 
These were utilized to show which risk factors and multi-media aids adolescents considered as the most effective anti-smoking messages. To assess the impact of pictorial/multi-media health warnings, five questions pertaining to these were each paired with questions of their associated written text warnings (Table 2). The Wilcoxon Signed Ranks Test was utilized to assess whether there was any statistically significant difference in the responses to the questions within these pairs. The Mann Whitney U test was utilized to compare the responses of smokers to those of non-smokers. A p value of < 0.05 was taken as significant for each of the tests.\nComparison of text warnings with multi-media warnings as deterrents from smoking", "A total of 388 high school students were included in the study out which 245 were males and 142 were females. The mean age of the sample population was 17 with a standard deviation of 1.51. Out of the sample, a total of 97 (25.5%) identified themselves to be smokers out of which 70 were males (28.5% of males) and 27 were females (19% of females). A total of 150 (38.7%) participants answered positively for shisha smoking out of which 104 were males (42.5% of males) and 46 were females (32.4% of females).\nTable 1 shows the mean rank scores generated using the Friedman test for the responses of each of the questions. \"Did watching a picture of an oral cavity cancer convince you of the harmful effects of smoking,\" had the highest rank. \"Smoking addiction adversely affects disposable incomes-Does knowing this risk stop you from smoking,\" ranked the lowest. The Friedman's p-value was < 0.001.\nTable 2 shows the comparison of responses to questions regarding written health warnings with their associated multi-media messages. Responses were significantly greater for the pictorial/multi-media messages in each of the pairs except for \"Video of a person recovering from stroke,\" which was not significantly different from the written statement.\nThe comparison of responses given by smokers to those of non-smokers yielded significantly lower scores (p < 0.01) by the former group across the question set.\nOverall an encouraging response was received from the faculty and from the students to both the presentation and to the study in the schools that were visited. All of the students that were attending the presentations consented to be a part of the study. One of the schools approached, which was only for girls however, did not consent to the documentation of smoking prevalence of the students. The survey was not carried out at this school and only the presentation was delivered. An alternative school was subsequently selected for inclusion in the study.", "Pakistan has taken a number of tangible steps towards reducing adolescent tobacco consumption in the country such as enforcing bans on tobacco advertising and underage sales. A recent decision by the Ministry of Health to introduce pictorial warnings on cigarette packs could also have a major impact[14]. However, for comprehensive enforcement of the Framework Convention for Tobacco Control (FCTC), the government will need to ensure that the warnings are rotated, are of appropriate size and are present on all packaging and labeling[15]. In addition, current tobacco control legislation is not directed against shisha smoking that is acquiring increasingly popularity amongst the youth[16].\nOur results suggest that the effectiveness of the health messages could also be determined by the type of warning that is delivered. 
Graphic visual images, such as, pictures of oral cavity cancers were perceived to have the greatest impact in deterring students from smoking. Multi-media aids that conveyed messages students could relate to, both anatomically and functionally, ranked higher than the more commonly used pictures of a 'smoker's lungs,' that could perhaps not convey the health warning with a similar impact. Amongst these multi-media aids also included videos of a patient using an electronic larynx and a patient on a ventilator. These findings suggest that such multi-media aids may be effective advertisements for health promotion campaigns.\nOur findings give further support to the use of pictorial and multi-media health warnings instead of warnings consisting only of text that were perceived to be less effective. This is particularly pertinent in countries with poor literacy rates such as Pakistan. In addition, cigarette pack warnings in the country are often in English, which is understood by a limited segment of the population, hence, obfuscating the necessary health promotion messages. Multi-media anti-smoking messages could therefore may improve awareness of the health consequences of smoking amongst the youth in Pakistan. Modifying label packaging to include graphic health warnings has been demonstrated as an effective means of reducing tobacco consumption and improving awareness of the health consequences of smoking in other countries within this age group[10,17-19].\nThe participants did not perceive the current ban on smoking in indoor public areas to be an impediment to smoking. This suggests that they are either unaware of the relevant legislation or that they do not believe the laws will be enforced and any violations will be dealt with. They also did not perceive harming others through second hand smoke to be a major deterring factor. These findings suggest that there is a substantial lack of awareness regarding the hazards of second hand smoke amongst adolescents. In addition, the low scores for responses to questions relating to addiction and to cigarettes as a 'gateway drug' also suggest a lack of awareness of the severity of these conditions. This is of significance in the school-going age group as addiction is cited as the commonest reason for failure of smoking cessation during adulthood[20].\nFinally, those who identified themselves as smokers gave significantly lower responses to those of non-smokers across the question set. This suggests that the susceptibility to anti-smoking messages may decrease substantially once the habit has been initiated on a regular basis. Such early demonstration of intransigence to health promotion messages does not portend well for future smoking cessation during adulthood. This suggests that early, directed interventions aimed at students and adolescents may be beneficial as appropriate messages are delivered before the habit is initiated.\nThe study was limited by the fact it was carried out in private schools where the medium of instruction is English and the students belonged to relatively higher socio-economic group. This could explain why the impact of smoking on disposable incomes was not cited as a major deterring factor. This could however, be of greater relevance for adolescents belonging to a lower socio-economic group. 
Based on these findings, a follow-up study is now being carried in public schools where Urdu is the medium of instruction.", "Graphic visual images and multi-media aids that vividly depict cosmetic distortions and loss of normal organ function as outcomes of the diseases associated with smoking are perceived by high school students as the most effective modalities in deterring them from smoking. These aids, in the form of health warnings, health promotion campaigns and material in school curricula, may be useful as effective tobacco control modalities in developing countries with young populations. Students that have already initiated the habit may be more resistant to tobacco control messages, hence, early intervention may prove to be beneficial. In addition, lack of awareness of other hazardous effects of smoking such as addiction and secondhand smoke exposure needs to be addressed in Pakistan.", "The authors declare that they have no competing interests.", "SMAZ participated in the study design, manuscript writing, data collection and analysis. ALB participated in data analysis and manuscript writing. AS participated in data collection and analysis. SHI participated in data collection. JAK participated in study design, coordination and provided technical supervision.\nAll authors have read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/117/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null ]
[]
Prevalence and perceived health effect of alcohol use among male undergraduate students in Owerri, South-East Nigeria: a descriptive cross-sectional study.
21333007
Alcohol use during adolescence and young adulthood remains a prominent public health problem. Despite growing problems of global alcohol abuse, accurate information on the prevalence and pattern of use in Nigeria remain sparse. This study examines the prevalence and perceived health effects of alcohol use among undergraduate students in Owerri, Nigeria.
BACKGROUND
The prevalence and perceived health effects of alcohol use were estimated for 482 male undergraduates of four higher institutions in Owerri, South-East Nigeria, between October 2008 and March 2009. Information was obtained using a semi-structured, self-administered questionnaire.
METHOD
The mean age of the students was 24.7 years. The majority of respondents confirmed they were current users of alcohol, giving a prevalence of 78.4%, with twenty-seven percent of them being heavy drinkers (≥ 4 drinks per day). Reasons given by respondents for drinking alcohol include: it makes them feel high (24.4%); it makes them belong to the group of "most happening guys" on campus (6.6%); it makes them feel relaxed (52.6%); while 16.4% drink it because their best friends do. Perceived health impacts of alcohol use among current users include: it enhances pleasure during moments of sex (51.1%), causes drowsiness and weakness (63.8%), may precipitate defective memory and impaired perception (64.3%) and serves as a risk factor for most chronic diseases (68.5%).
RESULT
A high prevalence of alcohol use was established among the study group. Evaluation of full-scale community-level interventions, including community mobilisation and media advocacy aimed at supporting changes in policies on drinking and on access and sales of alcohol to young people, could be helpful in reducing the trend.
CONCLUSION
[ "Adult", "Alcohol Drinking", "Cross-Sectional Studies", "Health Knowledge, Attitudes, Practice", "Humans", "Male", "Nigeria", "Prevalence", "Students", "Universities", "Young Adult" ]
3049753
null
null
Methods
From October 2008 to March 2009, a descriptive cross-sectional survey was conducted among undergraduate students from four tertiary institutions in Owerri. These institutions include Federal University of Technology (FUTO), Imo State University (IMSU), Alvan Ikoku Federal College of Education (AIFCE) and Federal Polytechnic Nekede (FPN), all in Owerri South-east, Nigeria. Owerri is a city in South-Eastern Nigeria. It is the capital of Imo State and is set in the heart of the Igbo land. Owerri consists of three LGAs (Owerri Municipal, Owerri North and Owerri West). It currently has a population of about 400,000 and is approximately 40 square miles (100 km2) in area. It is bordered by the Otamiri River to the east and the Nworie River to the south. It occupies the area lying between coordinates 5.484°N and 7.035°E. A multi-stage sampling design was used to select participants. Interviews using structured questionnaires were conducted with randomly selected respondents in six faculties from the four institutions. The questions focused on various sub-themes such as socio-demographic information, prevalence of alcohol use, reasons for alcohol use, number of bottles drunk per day and perceived impact of alcohol use among the respondents. A copy of the questionnaire containing the variables of interest is contained in additional file 1. The questionnaire was prepared in English and was self-administered. It was administered after explaining the purpose of the study and the criteria used in selecting each respondent. Permission to conduct the survey was requested and obtained from the university ethical review board. Informed verbal and written consent was obtained from participants. Confidentiality of information was maintained throughout the study. The data collected was manually sorted, edited and coded. It was thereafter entered into the computer for analysis using the SPSS version 15.0 statistical package. Frequency tables were generated for demographic characteristics of the respondents. Qualitative variables were summarized by proportions. Students who drank four bottles or more per day were classified as heavy drinkers while those drinking less were classified as non-heavy drinkers. Statistical significance for association was tested using chi-square, with a p-value less than 0.05 considered statistically significant. The commonest alcoholic drink among the study population was 'Star'. A bottle of Star is 60 centiliters in volume with an alcohol content of 5.1% by volume.
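The analysis described above (classifying students who drank four or more bottles per day as heavy drinkers, chi-square tests of association judged at p < 0.05, and a bottle of 'Star' being 60 centiliters at 5.1% alcohol by volume) can be sketched as follows. This is a hypothetical illustration only: the DataFrame column names and the ethanol density used to convert volume to grams of pure alcohol are assumptions, not details given in the Methods.

```python
# Hypothetical sketch of the classification and chi-square analysis described above.
# The column names ('bottles_per_day', 'age_group', 'drinks_alcohol') are assumed.
import pandas as pd
from scipy.stats import chi2_contingency

ETHANOL_DENSITY_G_PER_ML = 0.789  # assumption; not stated in the Methods


def grams_of_alcohol_per_bottle(volume_cl: float = 60.0, abv: float = 0.051) -> float:
    """Approximate pure alcohol in one bottle of 'Star' (60 cl at 5.1% by volume)."""
    volume_ml = volume_cl * 10.0
    return volume_ml * abv * ETHANOL_DENSITY_G_PER_ML  # about 24 g per bottle


def classify_and_test(df: pd.DataFrame):
    df = df.copy()
    # Heavy drinker: four or more bottles per day, as defined in the Methods.
    df["heavy_drinker"] = df["bottles_per_day"] >= 4

    # Chi-square test of association between a demographic variable and current
    # alcohol use, with significance taken at p < 0.05 as in the Methods.
    table = pd.crosstab(df["age_group"], df["drinks_alcohol"])
    chi2, p_value, dof, expected = chi2_contingency(table)
    return df, chi2, p_value
```

On these assumptions, one bottle corresponds to roughly 24 g of pure alcohol, so the four-bottle cutoff for heavy drinking is on the order of 96 g of pure alcohol per day.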
null
null
null
null
[ "Background", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Authors Information", "Pre-publication history" ]
[ "Alcohol consumption has occurred for thousands of years. In many parts of the world, drinking alcoholic beverages is a common feature of social gatherings [1]. While alcohol use is deeply embedded in many societies, recent years have seen changes in drinking patterns across the globe: rates of consumption, drinking to excess among the general population and heavy episodic drinking among young people are on the rise in many countries. Nevertheless, the consumption of alcohol carries a risk of adverse health and social consequences related to its intoxicating, toxic and dependence-producing properties [1].\nAlcohol consumption has health and social consequences via intoxication (drunkenness), dependence (habitual, compulsive and long-term drinking), and other biochemical effects. In addition to chronic diseases that may affect drinkers after many years of heavy use, alcohol contributes to traumatic outcomes that kill or disable one at a relatively young age, resulting in the loss of many years of life to death or disability. Alcohol is estimated to cause about 20-30% worldwide disease of oesophageal cancer, liver cancer, cirrhosis of the liver, homicide, epilepsy, and motor vehicle accidents [1].\nGlobally, alcohol consumption has increased in recent decades, with all or most of that increase in developing countries. World-wide, five percent of all deaths of people between the ages of 5 and 29 in 1990 were attributable to alcohol use. The Global Burden of Disease Study found that alcohol was responsible in 1990 for 3.5 percent of all disability-adjusted life years (more than tobacco or illegal drugs). While adverse health outcomes from long-term chronic alcohol use may not cause death or disability until late in life, acute health consequences of alcohol use, including intentional and unintentional injuries, are far more common among younger people [2].\nIn Nigeria, psychoactive substance misuse especially alcohol has for many years been an issue of increasing health and social importance. This is especially so for the critical adolescent period marked by several changes including the psychological phenomenon of experimentation. Studies carried out in the last decade in Nigeria have identified adolescents as a major group involved in the use of alcohol. This study describes the prevalence of alcohol use among male undergraduate students in Owerri, South-East, Nigeria. It also documents the perceived health effects of alcohol use among the study group.", "A total of 482 records of male tertiary undergraduates from four higher institutions in Owerri were available and analysed. Majority of the respondents, 42.5% compared to 16.6% who were in the 16 - 20 years age group. Most of the respondents (79.0%) were unmarried whereas only 21.0% were married as shown in Table 1. The sample studied includes 24.5% of IMSU students, 23.9% of FUTO students, 26.6% of FPN students and 25.1% of AIFCE students. The percentage of individuals who were Christians is higher than those that were Muslims.\nDemographic Characteristics of Respondents\nThe prevalence of alcohol consumption among respondents vis-a-vis their demographic information is shown in Table 2. Prevalence of alcohol use is high with 78.4% among all respondents and 92.2% among students aged 26 years and above (p-value < 0.001). 
The percentage of single students who use alcoholic drinks was significantly higher than that of those who are married (p-value < 0.001).\nPrevalence of Alcohol Use among Respondents\nTable 3 presents reasons why respondents use alcohol and other important parameters. About 24.4% of the respondents said it makes them feel high (on top of the world); 6.6% claimed that it makes them belong to the group of \"most happening guys\" on campus; 52.6% said it makes them feel relaxed and helps in cooling off stress; while the remaining 16.4% of respondents said they indulged in using alcohol because their best friends drink it.\nDistribution of Reasons for Alcohol Use and related parameters among Respondents\nTable 4 documents the impact/effects of alcohol use on the respondents. The table shows that 45.5% of the 378 alcohol users admitted that it makes them feel bad, while 55.5% said it gives them a good feeling. The majority said it enhances pleasure during moments of sex; 46.3% usually have a residual depressive feeling of remorse hours after use; while 63.8% reported that it causes drowsiness, weakness, hangovers and dangerous driving speeds and may lead to accidents.\nWhen the respondents were classified according to the number of bottles of alcohol drunk per day, 101 (26.7%) of the 378 respondents who had ever used alcohol were heavy drinkers.\nImpacts/Effects of Alcohol use Among Respondents", "Valid information on the prevalence of alcohol use is an important input for public health policy. The results of this investigation indicate that alcohol consumption has a prevalence of 78.4% among the respondents. Survey and anecdotal data from countries around the globe suggest that a culture of sporadic heavy or \"binge\" drinking among young people may be spreading from the developed to the developing countries [3].\nA number of school and college surveys in Nigeria have found alcohol use to be common among students, with many drinking students having had their first drink in family settings [4]. In June 1988 a questionnaire survey of 636 undergraduate students at the University of Ilorin in Kwara State found that 77% reported lifetime alcohol use (81 per cent of men and 68 per cent of women) [5]. In response to a 1988 survey of 1,041 senior secondary school students in Ilorin, 12% reported current use of alcohol [6].\nFindings from this study show that the majority of respondents were initiated into the use of alcohol at a tender age of 16 to 20 years. Previous studies have shown that age of initiation of alcohol use is important. Research in the US has found that the earlier the age at which people begin drinking, the more likely they are to become alcohol dependent later in life [7]. Those who begin drinking in their teenage years are also more likely to experience alcohol-related unintentional injuries (such as motor vehicle injuries, falls, burns, drowning) than those who begin drinking at a later age [8]. Adverse effects of early onset of drinking may be shorter term as well: prospective research has found a younger age of initiation to be strongly related to a higher level of alcohol misuse at ages 17 and 18 [9].\nMoreover, certain individuals (26.7%) vividly reported that they were introduced to alcohol by their family members. This is consistent with a collaborative report from WHO's European Regional Office which estimated that 4.5 million young people lived in families adversely affected by alcohol [10]. 
Problems for the young people in such homes may include instability or collapse of marriages and family structures, increased risk of physical or sexual abuse, neglect, and strain on family finances. Such family problems may in turn put young people at greater risk of developing anti-social behaviours, emotional problems and problems in the school environment [11].\nDocumentation of the quantity of alcohol consumed revealed that 26.7% were heavy drinkers and at risk of most of the health complications associated with alcohol consumption. Harmful use of alcohol encompasses several aspects of drinking; one is the volume drunk over time. The strongest drinking-related predictor of many chronic illnesses is the cumulated amount of alcohol consumed over a period of a year. Also, the risks of intentional and unintentional injuries and of transmission of certain infectious diseases are predicted by the pattern of drinking, occasional or regular drinking, drinking to intoxication and the drinking context.\nThe range of adverse physical consequences stemming from heavy use of alcohol on a single occasion is well documented. The most obvious of these is alcohol poisoning, which although relatively rare is often emblematic of young drinkers' inexperience with alcohol. Alcohol may have a more immediate and severe effect on young people because their muscle mass is smaller than that of adults (WHO, 2009).\nWhile evidence is inconclusive regarding the direct impact of alcohol use on the physical development of young people, there are indications that heavy alcohol use at a young age is predictive of a range of psychological and physical problems. Protracted and continuous abuse of alcohol may be predictive of more severe health problems in general for boys and girls [12]. Alcohol may cause physical harm to children, although the evidence remains preliminary. Studies in laboratory animals have found that high doses of alcohol may delay the onset of puberty, retard bone growth and result in weaker bones [13-15].\nSurveys of young people in European countries have looked at a wide range of behavioural consequences of alcohol use. These include individual problems, defined by self-reports on young people's reduced performance at school or at work, damage to objects or clothing, loss of money or other valuable items, and accident or injury as a result of alcohol use. Relationship problems cover self-reported quarrels or arguments, and problems in relationships with friends, teachers or parents as a result of drinking alcohol. Young people also reported on whether they had engaged in unwanted sexual experiences or unprotected sex. Finally, delinquency problems included self-reports of alcohol-related scuffles or fights, victimisation by robbery, or trouble with the police, as well as driving a motorcycle or a car under the influence of alcohol.\nWhen related to their health status, 68.5% accepted that it is a high risk factor for most health problems such as lung cancer, liver disease, sexually transmitted diseases, HIV/AIDS, low birth weight in women, stroke and sudden death.\nEven though the majority (83.4%) accepted the establishment of awareness programmes on alcohol abuse in their institutions, with seminars, workshops, crusades and conferences held at their school premises, these actions were still not enough to combat the use of alcohol in the institutions. 
More effort should be put to strengthen these existing programmes and new line of action developed that can help combat this epidemic.\nAmong the shortcomings of this study is the fact that there were no previous validated measure for alcohol consumption and alcohol related problems, thus making comparison difficult.", "In conclusion, the prevalence of alcohol consumption (78.4%) is quite high among undergraduates in Owerri and 26.7% of them are at risk of health complications of alcohol consumption. A lot of factors such as peer group pressure, influence of family members, role model, and wrong perception portrayed in advertisement contribute immensely to its use. These factors if checked on time will reduce the prevalence of alcohol consumption among undergraduates and the country at large.\nHealth implications have proved to be a good reason for non-alcohol use. This means that health education in tertiary institutions will to a reasonable extent reduce the prevalence. Also, since our media stations (especially foreign stations) remain the major media for alcohol advertisement, these media can serve to campaign against their use in the society.\nThough, alcohol use is voluntary and is the right of an individual to take alcohol or not, it is equally important and advisable for the government to enforce ban on the sale of alcohol to adolescence so as to protect the health of the future work force of the nation. The government whose ultimate aim is to see to the health of its citizens should allocate adequate resources for campaigns against alcohol use, just as HIV/AIDS awareness has received adequate attention on the part of government.\nNumerous evaluation research studies have found that changing certain public policies results in significant effects both on young people's behaviour and on negative outcomes of alcohol consumption. Evaluation of full-scale community-level intervention, including community mobilisation and media advocacy aimed at supporting changes in policies on drinking and driving, access and sales of alcohol to young people, have shown very promising results.", "The authors declare that they have no competing interests.", "CICE conceived the study, designed the questionnaire, performed the statistical analysis, and also contributed in drafting of the manuscript.\nMOM participated in drafting and critical review of the manuscript.", "C. I. C. E. PhD Epidemiology and Disease Control Technology (In-view), M.Sc Epidemiology and Medical Statistics. Lecturer, Public Health Technology Department, Federal University of Technology Owerri, Nigeria.\nM. O. M. PhD Environmental Management & Toxicology (In-view), MPH Environmental Health. Lecturer, Public Health Technology Department, Federal University of Technology Owerri, Nigeria.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/118/prepub\n" ]
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Authors Information", "Pre-publication history", "Supplementary Material" ]
[ "Alcohol consumption has occurred for thousands of years. In many parts of the world, drinking alcoholic beverages is a common feature of social gatherings [1]. While alcohol use is deeply embedded in many societies, recent years have seen changes in drinking patterns across the globe: rates of consumption, drinking to excess among the general population and heavy episodic drinking among young people are on the rise in many countries. Nevertheless, the consumption of alcohol carries a risk of adverse health and social consequences related to its intoxicating, toxic and dependence-producing properties [1].\nAlcohol consumption has health and social consequences via intoxication (drunkenness), dependence (habitual, compulsive and long-term drinking), and other biochemical effects. In addition to chronic diseases that may affect drinkers after many years of heavy use, alcohol contributes to traumatic outcomes that kill or disable one at a relatively young age, resulting in the loss of many years of life to death or disability. Alcohol is estimated to cause about 20-30% worldwide disease of oesophageal cancer, liver cancer, cirrhosis of the liver, homicide, epilepsy, and motor vehicle accidents [1].\nGlobally, alcohol consumption has increased in recent decades, with all or most of that increase in developing countries. World-wide, five percent of all deaths of people between the ages of 5 and 29 in 1990 were attributable to alcohol use. The Global Burden of Disease Study found that alcohol was responsible in 1990 for 3.5 percent of all disability-adjusted life years (more than tobacco or illegal drugs). While adverse health outcomes from long-term chronic alcohol use may not cause death or disability until late in life, acute health consequences of alcohol use, including intentional and unintentional injuries, are far more common among younger people [2].\nIn Nigeria, psychoactive substance misuse especially alcohol has for many years been an issue of increasing health and social importance. This is especially so for the critical adolescent period marked by several changes including the psychological phenomenon of experimentation. Studies carried out in the last decade in Nigeria have identified adolescents as a major group involved in the use of alcohol. This study describes the prevalence of alcohol use among male undergraduate students in Owerri, South-East, Nigeria. It also documents the perceived health effects of alcohol use among the study group.", "From October 2008 to March 2009 a descriptive cross-sectional survey was conducted among Undergraduate students from four tertiary institutions in Owerri. These institutions include Federal University of Technology (FUTO), Imo State University (IMSU), Alvan Ikoku Federal College of Education (AIFCE) and Federal Polytechnic Nekede (FPN), all in Owerri South-east, Nigeria.\nOwerri is a city in South-Eastern Nigeria. It is the capital of Imo State and is set in the heart of the Igbo land. Owerri consists of three LGAs (Owerri Municipal, Owerri North and Owerri West). It currently has a population of about 400,000 and is approximately 40 square miles (100 km2) in area. It is bordered by the Otamiri River to the east and the Nworie River to the south. It occupies the area lying between coordinates 5.484°N and 7.035°E.\nA multi stage sampling design was used to select participants. Interviews using structured questionnaires were conducted with randomly selected respondents in six faculties from four institutions. 
The questions focused on various sub-themes like socio-demographic information, prevalence of alcohol use, reasons for alcohol use, number of bottles drank per day and perceived impact of alcohol use among the respondents. A copy of the questionnaire containing the variables of interest is contained in additional file 1.\nQuestionnaire was prepared in English and was self-administered. It was administered after explaining the purpose of the study and criteria used in selecting each respondent. Permission to conduct the survey was requested and obtained from the university ethical review board. Informed verbal and written consent was obtained from participants. Confidentiality of information was maintained throughout the study.\nThe data collected was manually sorted out, edited and coded. It was thereafter imputed into the computer for analysis using SPSS version 15.0 statistical package. Frequency tables were generated for demographic characteristics of the respondents. Qualitative variables were summarized by proportions. Students who drank four bottles or more per day were classified as heavy drinkers while those dinking less were classified as non-heavy drinkers. Statistical significance for association was tested using chi-square, with p-value less than 0.05 considered statistically significant. The commonest alcoholic drink among the study population was 'Star'. A bottle of Star is 60centiliters in volume with an alcohol content of 5.1% per volume.", "A total of 482 records of male tertiary undergraduates from four higher institutions in Owerri were available and analysed. Majority of the respondents, 42.5% compared to 16.6% who were in the 16 - 20 years age group. Most of the respondents (79.0%) were unmarried whereas only 21.0% were married as shown in Table 1. The sample studied includes 24.5% of IMSU students, 23.9% of FUTO students, 26.6% of FPN students and 25.1% of AIFCE students. The percentage of individuals who were Christians is higher than those that were Muslims.\nDemographic Characteristics of Respondents\nThe prevalence of alcohol consumption among respondents vis-a-vis their demographic information is shown in Table 2. Prevalence of alcohol use is high with 78.4% among all respondents and 92.2% among students aged 26 years and above (p-value < 0.001). The percentage of single students that use alcoholic drink was significantly higher than those that are married (p-value < 0.001).\nPrevalence of Alcohol Use among Respondents\nTable 3 present reasons why respondents use alcohol and other important parameter. About 24.4% of the respondents said it makes them feel high (on top of the world); 6.6% claimed that it makes them belong to the group of \"most happening guys\" on campus; 52.6% said it makes them feel relaxed, helps in cooling off stress; while the remaining 16.4% respondents said they indulged in the act of using alcohol because their best friends drink it.\nDistribution of Reasons for Alcohol Use and related parameters among Respondents\nTable 4 documents the impact/effects of alcohol use on the respondents. The table shows that 45.5% of the 378 alcohol users admitted that it makes them feel bad, while 55.5% said it gives them good feeling. Majority said it enhances pleasure during moment of sex; 46.3% usually have residual depressive feeling of remorse hours after use; while 63.8% reported it causes drowsiness, weakness, hangovers, dangerous driving speed and may lead to accident. 
The respondents when classified according to the number of bottles of alcohol drunk per day revealed that 101 (26.7%) of 378 respondents that have ever used alcohol were heavy drinkers.\nImpacts/Effects of Alcohol use Among Respondents", "Valid information on prevalence of alcohol is important input for public health policy. The results of this investigation indicate that alcohol consumption has a prevalence of 78.4% among the respondents. Survey and anecdotal data from countries around the globe suggest that a culture of sporadic heavy or \"binge\" drinking among young people may be spreading from the developed to the developing countries [3].\nA number of school and college surveys in Nigeria have found alcohol use to be common among students, with many drinking students having had their first drink in family settings [4]. In June 1988 a questionnaire survey of 636 undergraduate students at the University of Ilorin in Kwara State found that 77% reported lifetime alcohol use (81 per cent of men and 68 per cent of women) [5]. In response to a 1988 survey of 1,041 senior secondary school students in Ilorin, 12% reported current use of alcohol [6].\nFindings from this study show that majority of the respondents were initiated into the use of alcohol at a tender age of 16 to 20 years. Previous studies have shown that age of initiation of alcohol use is important. Research in the US has found that the earlier the age at which people begin drinking, the more likely they are to become alcohol dependent later in life [7]. Those who begin drinking in their teenage years are also more likely to experience alcohol-related unintentional injuries (such as motor vehicle injuries, falls, burns, drowning) than those who begin drinking at a later age [8]. Adverse effects of early onset of drinking may be shorter term as well: prospective research has found a younger age of initiation to be strongly related to a higher level of alcohol misuse at ages 17 and 18 [9].\nMoreover, certain individuals (26.7%) vividly reported that they were introduced to alcohol by their family members. This is consistent with a collaborative report from WHO's European Regional Office which estimated that 4.5 million young people lived in families adversely affected by alcohol [10]. Problems for the young people in such homes may include instability or collapse of marriages and family structures, increased risk of physical or sexual abuse, neglect, and strain on family finances. Such family problems may in turn put young people at greater risk of developing anti-social behaviours, emotional problems and problems in the school environment [11].\nDocumentation of quantity of alcohol consumed revealed that 26.7% were heavy drinkers and at risk of most of the health complication associated with alcohol consumption. Harmful use of alcohol encompasses several aspects of drinking; one is the volume drunk over time. The strongest drinking-related predictor of many chronic illnesses is the cumulated amount of alcohol consumed over a period of a year. Also, the risks of intentional and unintentional injuries and of transmission of certain infectious diseases are predicted by the pattern of drinking, occasional or regular drinking, drinking to intoxication and the drink context.\nThe range of adverse physical consequences stemming from heavy use of alcohol on a single occasion is well documented. The most obvious of these is alcohol poisoning, which although relatively rare is often emblematic of young drinkers' inexperience with alcohol. 
Alcohol may have a more immediate and severe effect on young people because their muscle mass is smaller than that of adults (WHO, 2009).\nWhile evidence is inconclusive regarding the direct impact of alcohol use on the physical development of young people, there are indications that heavy alcohol use at a young age is predictive of a range of psychological and physical problems. Protracted and continuous abuse of alcohol may be predictive of more severe health problems in general for boys and girls [12]. Alcohol may cause physical harm to children, although the evidence remains preliminary. Studies in laboratory animals have found that high doses of alcohol may delay the onset of puberty, retard bone growth and result in weaker bones [13-15].\nSurveys of young people in European countries have looked at a wide range of behavioural consequences of alcohol use. These include individual problems, defined by self-reports on young people's reduced performance at school or at work, damage to objects or clothing, loss of money or other valuable items, and accident or injury as a result of alcohol use. Relationship problems cover self-reported quarrels or arguments, and problems in relationships with friends, teachers or parents as a result of drinking alcohol. Young people also reported on whether they had engaged in unwanted sexual experiences or unprotected sex. Finally, delinquency problems included self-reports of alcohol-related scuffles or fights, victimisation by robbery, or trouble with the police, as well as driving a motorcycle or a car under the influence of alcohol.\nWhen asked to relate alcohol use to their health status, 68.5% accepted that it is a major risk factor for health problems such as lung cancer, liver disease, sexually transmitted diseases, HIV/AIDS, low birth weight in women, stroke and sudden death.\nEven though the majority (83.4%) acknowledged the establishment of awareness programmes on alcohol abuse in their institution, with seminars, workshops, crusades and conferences held on the school premises, these actions were still not enough to combat alcohol use in the institutions. More effort should be made to strengthen these existing programmes, and new lines of action should be developed to help combat this epidemic.\nAmong the shortcomings of this study is the fact that there were no previously validated measures for alcohol consumption and alcohol-related problems, thus making comparisons difficult.", "In conclusion, the prevalence of alcohol consumption (78.4%) is quite high among undergraduates in Owerri, and 26.7% of them are at risk of the health complications of alcohol consumption. Factors such as peer group pressure, the influence of family members, role models, and misleading perceptions portrayed in advertisements contribute substantially to its use. If these factors are addressed in time, the prevalence of alcohol consumption among undergraduates, and in the country at large, will fall.\nHealth implications have proved to be a good reason for abstaining from alcohol, which suggests that health education in tertiary institutions will reduce the prevalence to a reasonable extent. Also, since media stations (especially foreign stations) remain the major channels for alcohol advertising, these media can also serve to campaign against alcohol use in society.\nAlthough alcohol use is voluntary and it is an individual's right to drink or not, it is equally important and advisable for the government to enforce a ban on the sale of alcohol to adolescents so as to protect the health of the nation's future workforce. 
The government, whose ultimate aim is to safeguard the health of its citizens, should allocate adequate resources for campaigns against alcohol use, just as HIV/AIDS awareness has received adequate government attention.\nNumerous evaluation studies have found that changing certain public policies has significant effects both on young people's behaviour and on the negative outcomes of alcohol consumption. Evaluations of full-scale community-level interventions, including community mobilisation and media advocacy aimed at supporting changes in policies on drinking and driving and on young people's access to and purchase of alcohol, have shown very promising results.", "The authors declare that they have no competing interests.", "CICE conceived the study, designed the questionnaire, performed the statistical analysis, and also contributed to drafting the manuscript.\nMOM participated in drafting and critically reviewing the manuscript.", "C. I. C. E. PhD Epidemiology and Disease Control Technology (In-view), M.Sc Epidemiology and Medical Statistics. Lecturer, Public Health Technology Department, Federal University of Technology Owerri, Nigeria.\nM. O. M. PhD Environmental Management & Toxicology (In-view), MPH Environmental Health. Lecturer, Public Health Technology Department, Federal University of Technology Owerri, Nigeria.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/118/prepub\n", "Questionnaire on Prevalence and Perceived Health Effect of Alcohol Use among Male Undergraduate Students in Owerri, South-East Nigeria. It is a semi-structured questionnaire containing questions on the following sections: demographic characteristics, prevalence of alcohol use, knowledge of the health effects of alcohol abuse, and perceived health effects of alcohol abuse.\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, "supplementary-material" ]
[]
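As a quick plausibility check of the proportions quoted in the record above, the short Python sketch below redoes the arithmetic. It is purely illustrative and not part of the original study; the implied total number of respondents is an inference from the reported 78.4% prevalence and the 378 ever-users, not a figure stated in the paper.

```python
# Illustrative re-check of the proportions reported in the alcohol-use record above.
# The implied total sample size is an assumption derived from the stated prevalence,
# not a number taken from the paper.

ever_used = 378        # respondents who had ever used alcohol
heavy_drinkers = 101   # of those, classified as heavy drinkers
prevalence = 0.784     # reported prevalence of alcohol consumption

heavy_share = heavy_drinkers / ever_used   # ~0.267, i.e. the 26.7% quoted in the text
implied_total = ever_used / prevalence     # ~482 respondents (inferred, not stated)

print(f"Heavy drinkers among ever-users: {heavy_share:.1%}")
print(f"Implied total respondents (assumption): {implied_total:.0f}")
```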
Food for thought: an exploratory study of how physicians experience poor workplace nutrition.
21333008
Nutrition is often a casualty of the busy work day for physicians. We aimed to explore physicians' views of their nutrition in the workplace including their perceptions of the impact of inadequate nutrition upon their personal wellness and their professional performance.
BACKGROUND
This is a qualitative study of a sample of 20 physicians practicing in a large urban teaching hospital. Semi-structured open ended interviews were conducted to explore physicians' views of workplace nutrition. The same physicians had agreed to participate in a related nutrition based wellness intervention study that compared nutritional intake and cognitive function during a day of usual nutrition patterns against another day with scheduled nutrition breaks. A second set of interviews was conducted after the intervention study to explore how participation in the intervention impacted these views. Detailed interview content notes were transcribed and analyzed independently with differences reconciled by discussion.
METHODS
At initial interview, participants reported difficulty accessing adequate nutrition at work, linking this deficit with emotional (irritable and frustrated), physical (tired and hungry), and cognitive (difficulty concentrating and poor decision making) symptoms. In addition to identifying practical barriers such as lack of time to stop and eat, inconvenient access to food and poor food choices, the physicians described how their sense of professionalism and work ethic also hinder their work nutrition practices. After participating in the intervention, most physicians reported heightened awareness of their nutrition patterns and intentions to improve their workplace nutrition.
RESULTS
Physicians report that inadequate workplace nutrition has a significant negative impact on their personal wellness and professional performance. Given this threat to health care delivery, health care organizations and the medical profession need to address both the practical and professional barriers identified.
CONCLUSIONS
[ "Adult", "Cognition", "Delivery of Health Care", "Energy Intake", "Feeding Behavior", "Female", "Food", "Humans", "Interviews as Topic", "Male", "Middle Aged", "Nutritional Status", "Physicians", "Professional Competence", "Surveys and Questionnaires", "Workplace" ]
3068081
null
null
Methods
During May and June of 2008, we interviewed a volunteer sample of 20 staff physicians from various medical specialties who were recruited from the doctors' lounge of a large urban teaching hospital. The average age of participants was 47 (range = 36 to 64 years), and 3 were women. Their average BMI was 25 kg/m2 (range = 20 - 38), all were non-smokers and most (75%) exercised at moderate to high intensity for 30 minutes or longer for at least 2 to 4 days per week. They represented a variety of medical practice areas where 10 participants were from a medical specialty (e.g., General Internal Medicine, Neurology, Intensive Care Medicine), 8 from a surgical specialty (e.g., General Surgery, Plastic Surgery, Otolaryngology), and 2 were from primary care (e.g., Family Medicine, Hospitalist). All had also agreed to participate in a related study involving a nutrition based wellness intervention [15]. The physicians were first interviewed, prior to participating in the intervention study, to explore their views of how they experience poor nutrition in the workplace, and again, after their participation in the intervention, to explore how their participation impacted these views. No physicians dropped out of the study. The related intervention study [15] assessed hospital physicians' nutritional intake, cognitive function and hypoglycemic symptoms during two similar work days: one where they followed their usual eating and drinking habits and another where they were provided with nutritious food and drink at enforced scheduled intervals. The intervention was designed to ensure that physicians consumed nutrients and fluids at regular intervals throughout their work day and had four key elements: providing healthy nutrition choices; enforcing nutrition breaks; maximizing ease of accessibility; and offering cost free nutrition. Nutrition provided during the study period on the intervention day was based on the recommendations of Canada's Food Guide. On average, food and beverages were provided in six small meals. This varied according to the number of hours worked by each participant and their individual nutritional choices. At each scheduled nutrition break, the research team contacted participants through hospital paging. Ready-to-consume and cost free nutrition was either waiting for physicians at the centrally located doctors' lounge or was brought to their practice location. On both of the study days, cognitive function, nutrient/fluid intake, urinary output, frequency of hypoglycemic symptoms and capillary blood glucose were measured by the research coordinator every two hours. Analysis of these data showed a higher nutrient intake and improved hydration for the study participants on intervention versus baseline days. The intervention was associated with significantly superior scores on cognitive function testing, and lower and less variable blood glucose levels. Although not statistically significant, there was also a trend toward the reporting of fewer hypoglycemic symptoms. The participants were blinded to these results during the intervention study and also at the time of the second interview exploring the impact of the intervention on their views of workplace nutrition. The interviews were conducted by the first author, a female internal medicine consultant, clinical professor of medicine, and colleague of the participants. 
The interviewer also holds an official appointment as Vice Chair, Physician Wellness and Vitality, Department of Medicine and has prior multidisciplinary research experience. The interviews took place at the hospital, in a private setting and lasted between 15 and 45 minutes. The interview questions analyzed in this paper are outlined in the results section below and were embedded in a more detailed set of questions that served as the entry and exit interviews for the nutrition based wellness intervention studying the association between workplace nutrition and cognition. The questions were guided by previous data from local research where both qualitative and quantitative data repeatedly identified poor workplace nutrition as a wellness concern for physicians [13,14]. In general, the initial questions required a yes/no response in order to determine what proportion of the physicians reported certain nutritional experiences. These were followed by open ended probes intended to elicit specific examples of how the physicians experience poor workplace nutrition. In addition, prompts were used in order to further explore the question, again followed by open ended probes framed to elicit the physicians' personal experiences. The questions were not pilot tested, and no interviews were repeated for the same content. The detailed interview notes were transcribed into a typed summary and independently reviewed by the two lead co-investigators using an inductive strategy through open and selective coding to derive the predominant themes. Given the exploratory nature of the research, the interviews were completed with all 20 participants even if data saturation occurred on occasion with some of the major themes. This strategy also produced a richness of illustrative examples. Interview transcripts were not returned to participants for comments or corrections. All, however, were sent a summary report of the analyzed interview data and asked for comments, and no negative feedback related to accuracy or interpretation was received. Ethics approval for this study was obtained from the Conjoint Health Ethics Review Board of the University of Calgary.
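To make the two-day, within-physician design described in the methods above concrete, the sketch below shows one way repeated measures from a baseline (usual nutrition) day and an intervention (scheduled nutrition) day could be compared for the same participants. It is a minimal illustration only: the scores are invented, the variable names are hypothetical, and the paired t-test is one plausible analysis choice rather than the authors' actual analysis code.

```python
# Minimal sketch of a paired baseline-day vs intervention-day comparison.
# All values are invented for illustration; they are not data from the study.
from scipy import stats

baseline_scores = [52, 48, 55, 50, 47, 53, 49, 51]      # e.g. cognitive test score, usual-nutrition day
intervention_scores = [56, 50, 58, 54, 49, 57, 52, 55]   # same physicians, scheduled-nutrition day

# Each physician serves as his or her own control, so a paired test is appropriate.
t_stat, p_value = stats.ttest_rel(intervention_scores, baseline_scores)
mean_gain = sum(i - b for i, b in zip(intervention_scores, baseline_scores)) / len(baseline_scores)

print(f"Mean within-physician change: {mean_gain:+.1f} points")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

A paired design of this kind controls for stable between-physician differences, which is why each participant completed both a usual-nutrition day and a scheduled-nutrition day.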
null
null
null
null
[ "Background", "Results", "Impact of Inadequate Nutrition", "Barriers to Adequate Nutrition", "Impact of Participating in a Nutrition Based Wellness Initiative", "Discussion", "Conclusion", "Authors' contributions", "Conflict of interest statement" ]
[ "There is growing empirical evidence of an association between physicians' wellness and their ability to deliver quality health care [1-9] and yet physicians are often unable to tend to their wellness [10]. Physicians report that they frequently cannot eat and drink properly or at all during work hours [11-14]. Research has also shown an association between workplace nutrition and cognition for physicians [15]. The primary objective of our study was to explore hospital based physicians' views of how they experience poor nutrition in the workplace. Specifically, we asked about whether or not they have difficulty meeting their nutritional needs at work, their perceptions of the personal and professional impact of suboptimal nutrition, and the contributing barriers. The secondary objective was to further explore the same physicians' views and attitudes towards workplace nutrition after they had participated in a related study involving a nutrition-based intervention [15].", "During the first interview, the physicians were asked questions aimed at exploring their perceptions of workplace nutrition. We inquired as to how they experience inadequate nutrition in the workplace in terms of their personal wellness and professional performance, and whether or not they perceived a link between their nutrition and cognitive performance. They were also asked about what they viewed as the potential barriers to nutrition in the workplace. After completion of the intervention study described above, the participants were interviewed again to explore whether or not their participation had influenced their awareness of their work nutrition patterns and their intention to modify their future nutrition practices. The following themes emerged (Table 1).\nPhysicians' views of their workplace nutrition\n[SUBTITLE] Impact of Inadequate Nutrition [SUBSECTION] We asked the physicians \"Do you sometimes have difficulty eating and drinking during work hours?\" and all answered yes. We then asked \"Think back to a busy work day where you maybe did not have time to eat and drink properly. Do you think it had an impact on you? In what way? How did you feel physically, psychologically, cognitively? Do you think it had an impact on your work? In what way? Do you think it had an impact on how you treated your colleagues, other health care professionals? Did it impact your ability to complete your work?\" All of the participants reported that they thought that inadequate nutrition had some impact on them and almost all reported experiencing emotional symptoms including irritability, frustration, decreased patience and feeling emotionally drained. Most reported they have felt physical symptoms such as fatigue, hypoglycemic symptoms (shaky, tremors, sweats, headache, lightheaded, hunger, nausea), and general malaise. Many also described a perceived impact on their cognition, citing symptoms such as difficulty concentrating, lack of focus, inability to think clearly, poor or slow decision making, and generalized inefficiency. Many study participants admitted that they felt the impact of inadequate nutrition at times led to an inability to complete their work, and they offered examples such as decreased efficiency, lack of focus, less willingness to discuss patient care issues with colleagues and having to be much more deliberate and methodical when hungry. Examples of negative interpersonal interactions included being less patient and less gregarious. 
The following quotations illustrate some examples of the impact of inadequate nutrition experienced by physicians.\nI think you can be distracted, irritable, have poor interactions if hungry enough. Your agenda changes and you just want to get the hell out of here. Hopefully, you don't give in to it. People get irritated and emotionally tapped out and don't want to think through the problem. (#20)\nIf I eat breakfast, I need lunch or I feel diaphoretic and shaky. So I cut out breakfast years ago because I can't get regular lunch. So if I miss meals irregularly it becomes an issue. I'm not hungry at lunch, I have no cravings, I have trained my body to miss lunch. If I deviate from my routine, yes, otherwise, no impact [of poor nutrition]. I feel shaky, get sweats, have decreased concentration, lack of focus, things don't seem to go right. I need fine motor skills during surgery and I will note a little bit of a tremor that almost makes the case impossible to do. Then I get frustrated and it makes things worse. (#12)\nMy attention is poor and I think about food and drink, which is a distraction. I get headaches, feel fatigue, less motivated and irritable. I'm not aware that it impacts my interactions with colleagues and my ability to complete work, but overall, it [hunger] may decrease efficiency. (#9)\n[Impact of poor nutrition on work] Probably, but I would like to think not! [Impact on how he treats colleagues] I don't know, it's hard to say, hard to compare to what you could do best on that day. Maybe it takes you longer and you have to focus more. [Ability to complete work] Probably. At night, when you get older, it's harder. I have to focus on the real meat of it [work] and feel massive fatigue, especially if I haven't eaten. (#14)\nIn order to further explore the link between nutrition and cognition, we next asked the participants \"Do you think there is an association between a physician's nutritional intake during work hours and their cognitive function? Why or why not? What do you think the important consequences might be?\" Most participants felt strongly that there was an association between their nutrition and cognition. The physicians who either did not think there was an association or were not sure, often went on to qualify their opinion by adding comments suggesting they recognize within themselves the potential for lack of acknowledgment of this linkage. The following quotations illustrate some examples of the physicians' perceptions of the link between nutrition and cognition, and the conflict voiced by the physicians.\nAbsolutely [link between nutrition and cognition]. I know that myself. When I feel hungry, I can't focus. I get slower around 4 am when I am hungry and tired and not drinking enough water, not peeing. Consequences for patient care? For sure! (#1)\nAbsolutely! If I grab food on the run it doesn't feel as well as a rest and a read of the newspaper or a visit or a curbside consult [with colleagues]. Rest and food also equal socialization. Consequences [of good nutrition] are performance and less stress. I am thinking clearly, and am more efficient and organized. (#2)\nI don't believe so [association between nutrition and cognition]. You have to be excessively hypoglycemic before cognitive impairment, or I hope that is true! I do feel sleepy after the mid day meal on the Monday. At times I am short tempered with staff and surly and sometimes they will bring me food and it helps. Does it affect how I treat colleagues? 
Yes, from an emotional and interpersonal way, but heaven forbid it would impact patient care. [Ability to complete work]I don't think so. I would notice it and eat. (#8)\nI don't notice it in myself [association between nutrition and cognition]. I feel hungry but I don't think I \"bonk\". But perhaps I am lacking insight. I feel I perform well all the time although I do notice if I get on my bike to ride home and haven't eaten well, then I will bonk. (#11)\nSuboptimal nutrition and restfulness should impact on cognition. Maybe it is surgical arrogance that makes me think not. To me, nutrition and sleep deprivation are linked. I am more likely to do a procedure not as slickly if it is routine, run of the mill, by rote, but when it is super technical, it is way easier to get switched on. For example, last week, the surgery took 6 hours, went fine, no issues, went well. When I was writing the discharge prescription at the end of the case, I made a prescription error, when the hard part was over. There is also a narrow view of \"standard of care.\" How about our role of communicator, and educator? What if the patient has a ton of questions, and I put them aside because of fatigue and hunger? (#17)\n[SUBTITLE] Barriers to Adequate Nutrition [SUBSECTION] We asked the physicians \"Do you think that you and other doctors are not always able to ensure adequate nutrition during work hours? If so, what barriers make it difficult for you to eat and drink in a healthy way during work hours?\" The major theme that emerged was that of lack of time. Doctors stated they are just too busy to even think about eating, that there is no time during their work day to stop and eat, that their workload prevents them from taking time to eat, and that their work schedules (e.g. business meetings or meetings with patients' families over the lunch hour, operating room cases or clinics running through lunch) make it difficult to access nutrition on a regular basis. The second major theme was limited or inconvenient access to nutrition during the physicians' work day. They described how the physical spaces within the hospital hinder their access to food, including having to walk long distances from the unit where they work to the food areas, waiting for elevators, and waiting in long line ups once they get to the food stations. In addition, they noted that current limited after-hours access to food is impractical given that the hospital staff work 24 hours per day. The participants also described inadequate food storage areas for items they bring from home. Many participants identified food choices as a major barrier, describing poor quality food, restricted healthy choices, and limited variety, while very few identified cost of healthy food as a barrier.\nIn addition to these practical access issues, the physicians also identified issues that reflect the culture of medicine and how physicians' professionalism deters them from taking care of their workplace nutrition needs. For example, many physicians reported that work and their patients come first, that they feel compelled to \"just get the work done\", and that taking time for nutrition delays caring for ill patients. Several participants also described profession related barriers, including the medical profession's low prioritization of workplace nutrition for physicians in general and a feeling of awkwardness or unprofessionalism when either carrying food around in the workplace or eating in patient care settings. The following quotes illustrate examples of some of the practical and professionalism barriers that participants identified.\nI'm too busy to give it [nutrition] the thought it needs. It's hard to carry food around and I think I lack interest in nutrition. (#11)\nIt's not an access issue, the quality [of food] sucks. I would pay more if there was better food. I like to sit down to eat. Eating interrupts the work day and lengthens it. I am stubborn. Work stuff comes first. I cannot be non productive or inefficient. (#8)\nI can't bear the food. The doctors' lounge [food] is OK. Bringing in three meals is too much hassle and choices for healthy foods are limited. 
I can't stand in line for food. There is no time with urgency of cases and the workload. (#4)\nBarriers: Time is number one; too many pressing things that need to be done right now. There are awkward social pressures also, I feel awkward eating in the clinic. There is no microwave, I don't want to carry forks and knives, and I don't want to walk past a bank of patients with food, I feel self conscious. So I am trapped in the [clinic] eating cookies and chocolate and coffee and creamers and juice, no access to healthy foods, and technically am eating patient food. Club soda and orange juice is restorative because it reminds me of being home on my porch watching the dog drinking soda lime. (#19)\nBarriers are time, the pressure of getting things done during the day that you can't put off, and poor availability [of food] especially in the evening and nights, when not a lot is available. If I am in the hospital all night, I try to eat early otherwise I can't get proper food. What is available? Chicken fingers and fries. Now at least the sandwich bar is open later and occasionally I can get a veggie burger or salmon burger. During the day reasonable food is available but I can't get to it. (#14)\nThe operating room is a food wasteland. There used to be food on the 7th floor [operating room area] but now there are vending machines. It makes it difficult to get hydration and caloric intake. Same for the anesthesiologists. There is a lot of pressure in the OR. We need to be conscientious about intake. And there is no readily available drinking water. The social trend that I abhor is the lack of clean drinking water. No drinking fountains and need to buy a three dollar bottle of [brand] water that is an environmental nightmare. They removed juices from the operating room. In a pinch, the nurses will try to give you a drink and they might get the straw up your nose! (#13)\n[SUBTITLE] Impact of Participating in a Nutrition Based Wellness Initiative [SUBSECTION] We asked the physicians, \"Has your participation increased your awareness of your nutrition patterns during your work day? If yes, how so?\" Almost all of the physicians responded that their participation in the initiative has increased their awareness of their own nutrition patterns, describing how they became more aware of their irregular or poor eating and drinking habits, and/or noted increased awareness through reinforcement and improved knowledge about nutrition. Some physicians reported an increased awareness of the link between their nutrition patterns and their mood and performance. The physicians who did not think that participation in the initiative had increased their awareness of their nutrition patterns reported that they already knew they had good or bad nutritional habits.\nWe also asked the physicians, \"Has your participation influenced how you will eat and drink during your work day? If yes, how so?\" The majority of the physicians reported that their future nutrition patterns will be influenced, commenting that they will snack and drink more regularly while at work, make an effort to prepare snacks to bring to the hospital, and be more careful about their eating habits such as having breakfast daily and not skipping other meals. Several participants felt they will not change their nutrition patterns. Reasons stated included affirmation that they will continue their current patterns despite knowing they should change them for the better or that their current eating habits are effective. Some physicians were ambivalent, and suggested that they might change their nutrition patterns if food was more readily available and accessible, or if they had more information on healthy nutrition. The following quotes illustrate examples of some of the physicians' views after participation in the nutrition based wellness intervention study.\nFirst, it [the intervention] outlined the importance of breakfast. It makes you feel better. Second, the whole notion of spacing meals, a little here and a little there, fewer calories at a time, so there is no catch-up phenomenon. It adds up to fewer calories than my usual smorgasbord late in the day. (#7)\nFor sure! I just became really aware of how hungry I really was and that hunger would add to my sarcasm, I would get glib [with patients]. The irritability, getting spaced out without food is improved. I bought a water bottle and cut back on coffee. I still bring food from home and feeling better on the second day of the study added to my work happiness. (#19)\nI recognize that when I get yawny or tired, it is because of low blood sugar. Then I like to run up and get something to eat or drink. Being in the study has reinforced the snack thing. Now I bring 2 fruit and trail mix. I don't get as many symptoms and am not as tired when I eat snacks. I come regularly up to the doctors' lounge or elsewhere to get snacks. Before, it was intermittent. I pace myself now to assure regular eating. I come for lunch regularly now, before it was intermittent. The nutrition issues were brought to my attention by the study, otherwise I would not have changed my habits. (#5)\nI think I will be more careful about eating regularly. Funnily enough, I have been more careful the last couple of years because I can't do it like I used to. I don't have the same stamina or the same focus for hours on end. It's the same with food; if I do miss it it's harder. 
(#14)\nBefore I tended to not remember when I had eaten. Now maybe I can better correlate mood and performance. Because of the hypoglycemic questionnaire I started to realize I had symptoms of that even before I knew it. I am trying not to skip meals, and trying to snack, and to prepare food for work ahead of time. (#9)\nI've started to hunt down trail mix so I can graze in between meals. (#4)\nBeing in the study did not have a high impact. I already knew they [eating habits] were crappy and suboptimal. Maybe it will cause somewhat of a change, but not a big change. I already knew that if I wasn't getting food I would get irritable and hungry. It [initiative participation] has influenced how I would like to eat and drink but due to workplace and personal issues, I just can't make that happen. (#20)", "We asked the physicians \"Do you sometimes have difficulty eating and drinking during work hours?\" and all answered yes. We then asked \"Think back to a busy work day where you maybe did not have time to eat and drink properly. Do you think it had an impact on you? In what way? How did you feel physically, psychologically, cognitively? Do you think it had an impact on your work? In what way? Do you think it had an impact on how you treated your colleagues, other health care professionals? Did it impact your ability to complete your work?\" All of the participants reported that they thought that inadequate nutrition had some impact on them and almost all reported experiencing emotional symptoms including irritability, frustration, decreased patience and feeling emotionally drained. Most reported they have felt physical symptoms such as fatigue, hypoglycemic symptoms (shaky, tremors, sweats, headache, lightheaded, hunger, nausea), and general malaise. Many also described a perceived impact on their cognition, citing symptoms such as difficulty concentrating, lack of focus, inability to think clearly, poor or slow decision making, and generalized inefficiency. Many study participants admitted that they felt the impact of inadequate nutrition at times led to an inability to complete their work, and they offered examples such as decreased efficiency, lack of focus, less willingness to discuss patient care issues with colleagues and having to be much more deliberate and methodical when hungry. Examples of negative interpersonal interactions included being less patient and less gregarious. The following quotations illustrate some examples of the impact of inadequate nutrition experienced by physicians.\nI think you can be distracted, irritable, have poor interactions if hungry enough. Your agenda changes and you just want to get the hell out of here. Hopefully, you don't give in to it. People get irritated and emotionally tapped out and don't want to think through the problem. 
(#20)\nIf I eat breakfast, I need lunch or I feel diaphoretic and shaky. So I cut out breakfast years ago because I can't get regular lunch. So if I miss meals irregularly it becomes an issue. I'm not hungry at lunch, I have no cravings, I have trained my body to miss lunch. If I deviate from my routine, yes, otherwise, no impact [of poor nutrition]. I feel shaky, get sweats, have decreased concentration, lack of focus, things don't seem to go right. I need fine motor skills during surgery and I will note a little bit of a tremor that almost makes the case impossible to do. Then I get frustrated and it makes things worse. (#12)\nMy attention is poor and I think about food and drink, which is a distraction. I get headaches, feel fatigue, less motivated and irritable. I'm not aware that it impacts my interactions with colleagues and my ability to complete work, but overall, it [hunger] may decrease efficiency. (#9)\n[Impact of poor nutrition on work] Probably, but I would like to think not! [Impact on how he treats colleagues] I don't know, it's hard to say, hard to compare to what you could do best on that day. Maybe it takes you longer and you have to focus more. [Ability to complete work] Probably. At night, when you get older, it's harder. I have to focus on the real meat of it [work] and feel massive fatigue, especially if I haven't eaten. (#14)\nIn order to further explore the link between nutrition and cognition, we next asked the participants \"Do you think there is an association between a physician's nutritional intake during work hours and their cognitive function? Why or why not? What do you think the important consequences might be?\" Most participants felt strongly that there was an association between their nutrition and cognition. The physicians who either did not think there was an association or were not sure, often went on to qualify their opinion by adding comments suggesting they recognize within themselves the potential for lack of acknowledgment of this linkage. The following quotations illustrate some examples of the physicians' perceptions of the link between nutrition and cognition, and the conflict voiced by the physicians.\nAbsolutely [link between nutrition and cognition]. I know that myself. When I feel hungry, I can't focus. I get slower around 4 am when I am hungry and tired and not drinking enough water, not peeing. Consequences for patient care? For sure! (#1)\nAbsolutely! If I grab food on the run it doesn't feel as well as a rest and a read of the newspaper or a visit or a curbside consult [with colleagues]. Rest and food also equal socialization. Consequences [of good nutrition] are performance and less stress. I am thinking clearly, and am more efficient and organized. (#2)\nI don't believe so [association between nutrition and cognition]. You have to be excessively hypoglycemic before cognitive impairment, or I hope that is true! I do feel sleepy after the mid day meal on the Monday. At times I am short tempered with staff and surly and sometimes they will bring me food and it helps. Does it affect how I treat colleagues? Yes, from an emotional and interpersonal way, but heaven forbid it would impact patient care. [Ability to complete work]I don't think so. I would notice it and eat. (#8)\nI don't notice it in myself [association between nutrition and cognition]. I feel hungry but I don't think I \"bonk\". But perhaps I am lacking insight. 
I feel I perform well all the time although I do notice if I get on my bike to ride home and haven't eaten well, then I will bonk. (#11)\nSuboptimal nutrition and restfulness should impact on cognition. Maybe it is surgical arrogance that makes me think not. To me, nutrition and sleep deprivation are linked. I am more likely to do a procedure not as slickly if it is routine, run of the mill, by rote, but when it is super technical, it is way easier to get switched on. For example, last week, the surgery took 6 hours, went fine, no issues, went well. When I was writing the discharge prescription at the end of the case, I made a prescription error, when the hard part was over. There is also a narrow view of \"standard of care.\" How about our role of communicator, and educator? What if the patient has a ton of questions, and I put them aside because of fatigue and hunger? (#17)", "We asked the physicians \"Do you think that you and other doctors are not always able to ensure adequate nutrition during work hours? If so, what barriers make it difficult for you to eat and drink in a healthy way during work hours?\" The major theme that emerged was that of lack of time. Doctors stated they are just too busy to even think about eating, that there is no time during their work day to stop and eat, that their workload prevents them from taking time to eat, and that their work schedules (e.g. business meetings or meetings with patients' families over the lunch hour, operating room cases or clinics running through lunch) make it difficult to access nutrition on a regular basis. The second major theme was limited or inconvenient access to nutrition during the physicians' work day. They described how the physical spaces within the hospital hinder their access to food, including having to walk long distances from the unit where they work to the food areas, waiting for elevators, and waiting in long line ups once they get to the food stations. In addition, they noted that current limited after-hours access to food is impractical given that the hospital staff work 24 hours per day. The participants also described inadequate food storage areas for items they bring from home. Many participants identified food choices as a major barrier, describing poor quality food, restricted healthy choices, and limited variety, while very few identified cost of healthy food as a barrier.\nIn addition to these practical access issues, the physicians also identified issues that reflect the culture of medicine and how physicians' professionalism deters them from taking care of their workplace nutrition needs. For example, many physicians reported that work and their patients come first, that they feel compelled to \"just get the work done\", and that taking time for nutrition delays caring for ill patients. Several participants also described profession related barriers, including the medical profession's low prioritization of workplace nutrition for physicians in general and a feeling of awkwardness or unprofessionalism when either carrying food around in the workplace or eating in patient care settings. The following quotes illustrate examples of some of the practical and professionalism barriers that participants identified.\nI'm too busy to give it [nutrition] the thought it needs. It's hard to carry food around and I think I lack interest in nutrition. (#11)\nIt's not an access issue, the quality [of food] sucks. I would pay more if there was better food. I like to sit down to eat. 
Eating interrupts the work day and lengthens it. I am stubborn. Work stuff comes first. I cannot be non productive or inefficient. (#8)\nI can't bear the food. The doctors' lounge [food] is OK. Bringing in three meals is too much hassle and choices for healthy foods are limited. I can't stand in line for food. There is no time with urgency of cases and the workload. (#4)\nBarriers: Time is number one; too many pressing things that need to be done right now. There are awkward social pressures also, I feel awkward eating in the clinic. There is no microwave, I don't want to carry forks and knives, and I don't want to walk past a bank of patients with food, I feel self conscious. So I am trapped in the [clinic] eating cookies and chocolate and coffee and creamers and juice, no access to healthy foods, and technically am eating patient food. Club soda and orange juice is restorative because it reminds me of being home on my porch watching the dog drinking soda lime. (#19)\nBarriers are time, the pressure of getting things done during the day that you can't put off, and poor availability [of food] especially in the evening and nights, when not a lot is available. If I am in the hospital all night, I try to eat early otherwise I can't get proper food. What is available? Chicken fingers and fries. Now at least the sandwich bar is open later and occasionally I can get a veggie burger or salmon burger. During the day reasonable food is available but I can't get to it. (#14)\nThe operating room is a food wasteland. There used to be food on the 7th floor [operating room area] but now there are vending machines. It makes it difficult to get hydration and caloric intake. Same for the anesthesiologists. There is a lot of pressure in the OR. We need to be conscientious about intake. And there is no readily available drinking water. The social trend that I abhor is the lack of clean drinking water. No drinking fountains and need to buy a three dollar bottle of [brand] water that is an environmental nightmare. They removed juices from the operating room. In a pinch, the nurses will try to give you a drink and they might get the straw up your nose! (#13)", "We asked the physicians, \"Has your participation increased your awareness of your nutrition patterns during your work day? If yes, how so?\" Almost all of the physicians responded that their participation in the initiative has increased their awareness of their own nutrition patterns, describing how they became more aware of their irregular or poor eating and drinking habits, and/or noted increased awareness through reinforcement and improved knowledge about nutrition. Some physicians reported an increased awareness of the link between their nutrition patterns and their mood and performance. The physicians who did not think that participation in the initiative had increased their awareness of their nutrition patterns reported that they already knew they had good or bad nutritional habits.\nWe also asked the physicians, \"Has your participation influenced how you will eat and drink during your work day? If yes, how so?\" The majority of the physicians reported that their future nutrition patterns will be influenced, commenting that they will snack and drink more regularly while at work, make an effort to prepare snacks to bring to the hospital, and be more careful about their eating habits such as having breakfast daily and not skipping other meals. Several participants felt they will not change their nutrition patterns. 
Reasons stated included affirmation that they will continue their current patterns despite knowing they should change them for the better or that their current eating habits are effective. Some physicians were ambivalent, and suggested that they might change their nutrition patterns if food was more readily available and accessible, or if they had more information on healthy nutrition. The following quotes illustrate examples of some of the physicians' views after participation in the nutrition based wellness intervention study.
First, it [the intervention] outlined the importance of breakfast. It makes you feel better. Second, the whole notion of spacing meals, a little here and a little there, fewer calories at a time, so there is no catch-up phenomenon. It adds up to fewer calories than my usual smorgasbord late in the day. (#7)
For sure! I just became really aware of how hungry I really was and that hunger would add to my sarcasm, I would get glib [with patients]. The irritability, getting spaced out without food is improved. I bought a water bottle and cut back on coffee. I still bring food from home and feeling better on the second day of the study added to my work happiness. (#19)
I recognize that when I get yawny or tired, it is because of low blood sugar. Then I like to run up and get something to eat or drink. Being in the study has reinforced the snack thing. Now I bring 2 fruit and trail mix. I don't get as many symptoms and am not as tired when I eat snacks. I come regularly up to the doctors' lounge or elsewhere to get snacks. Before, it was intermittent. I pace myself now to assure regular eating. I come for lunch regularly now, before it was intermittent. The nutrition issues were brought to my attention by the study, otherwise I would not have changed my habits. (#5)
I think I will be more careful about eating regularly. Funnily enough, I have been more careful the last couple of years because I can't do it like I used to. I don't have the same stamina or the same focus for hours on end. It's the same with food; if I do miss it it's harder. (#14)
Before I tended to not remember when I had eaten. Now maybe I can better correlate mood and performance. Because of the hypoglycemic questionnaire I started to realize I had symptoms of that even before I knew it. I am trying not to skip meals, and trying to snack, and to prepare food for work ahead of time. (#9)
I've started to hunt down trail mix so I can graze in between meals. (#4)
Being in the study did not have a high impact. I already knew they [eating habits] were crappy and suboptimal. Maybe it will cause somewhat of a change, but not a big change. I already knew that if I wasn't getting food I would get irritable and hungry. It [initiative participation] has influenced how I would like to eat and drink but due to workplace and personal issues, I just can't make that happen. (#20)

Discussion

Although there is a wide belief that physicians may not always attain appropriate nutrition during their work day, few empirical studies are available that help us to understand how physicians perceive the impact of inadequate nutrition upon their personal wellness and professional performance, and the barriers to achieving adequate nutrition in the workplace. Most physicians in our study believe that through improved nutrition at work they would feel better and benefit from enhanced performance, and they identified the challenges in achieving this goal.
These beliefs are supported more broadly by previous research showing that physicians who suffer other workplace-related strains such as stress, burnout, and sleep deprivation pose a risk both to themselves [16,17] and to the delivery of quality health care [1-9].
Nutrition is a basic wellness necessity, and although research reports some physician healthy eating behaviors [18], there is mounting evidence that physicians have difficulty accessing proper nutrition during their work day. For example, a recent cross-sectional survey study by Winston found that less than half the doctors who replied reported taking regular meal breaks and that limited canteen opening times, lack of selection and lack of breaks were the most commonly perceived barriers to healthy eating while at work [11]. Examples of the importance of physicians' nutrition include a recent study demonstrating that skipping breakfast and eating meals irregularly was associated with fatigue in medical students [19] and a study linking medical students' personal nutritional habits to their patient counseling practices [20]. The physicians interviewed in our study also participated in a related nutrition-based intervention study where they were fed nutritious meals and snacks at scheduled intervals during their work day. Recently published results of this research show significantly improved cognition, higher caloric intake, improved hydration, and a trend toward fewer hypoglycemic symptoms for the physicians on the intervention day compared to the day when they followed their usual workplace eating habits [15].
Our study adds to the existing literature in that the physicians interviewed not only conceded that they do not always have access to appropriate nutrition, they also aptly described their perceptions of the impact of inadequate nutrition. They portrayed the negative impact upon both their personal wellness and their ability to complete their work, recognizing that when they are undernourished, they are less likely to function at their best. In addition to identifying the practical barriers, the physicians recounted how their sense of professionalism and work ethic also hinder their work nutrition practices, a finding not previously reported to our knowledge. Lastly, the physicians in our study described how participation in a physician wellness intervention raised their awareness and altered their views of nutrition in the workplace. Unfortunately, many of the doctors admitted that the practical and profession-related barriers to ensuring adequate nutrition in the workplace would still present challenges for them.
Study limitations include the relatively small sample of predominantly male physicians from a single acute care hospital setting. Also, the participants may have volunteered for the study because of their personal interest in nutrition and the opportunity to explore a potential association between nutrition and cognition during the wellness intervention. Future research might explore whether similar consequences of inadequate nutrition, and similar difficulties in accessing proper nutrition, are experienced by physicians in other work settings. The interviews conducted in our study were linked to a related nutrition-based wellness intervention study exploring the association between physician nutrition and cognition. The questions posed were guided by both the local knowledge of prior reporting of poor workplace nutrition and the underlying research questions: is there an association between physician nutrition and cognition, are physicians aware of this link, and are there barriers to achieving nutrition in the workplace? It is thus possible that the way in which the questions were framed may have influenced the participants' responses. However, this is not likely the case, as the participants followed up their yes/no answers and/or the subsequent probes by offering, in their own words, many and varied examples of how they personally experienced poor workplace nutrition.

Conclusion

Physicians report that inadequate workplace nutrition has a significant negative impact on their personal wellness and professional performance. Given this threat to health care delivery, health care organizations and the medical profession need to address both the practical and professional barriers that impede physicians' basic self-care issues such as nutrition, and implement physician wellness workplace initiatives that may positively influence health care organization policy, patient care, and physician behavior. Future work should include quantitative studies to determine if the major themes identified from the interviews represent those of a broader sample of physicians.

Authors' contributions

JL contributed to the study conception and design, acquisition of data, analysis and interpretation of data, drafting and critical revision of the manuscript, obtaining funding, and administrative support. JW contributed to the study conception and design, analysis and interpretation of data, critical revision of the manuscript, and obtaining funding. KD contributed to the study design, critical revision of the manuscript, and technical support. DR contributed to the conception and design of the study, critical revision of the manuscript, and technical and material support. JL had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis, and had final responsibility for the decision to submit the manuscript for publication. All authors read and approved the final manuscript.

Conflict of interest statement

The authors declare that they have no competing interests.
[ "Background", "Methods", "Results", "Impact of Inadequate Nutrition", "Barriers to Adequate Nutrition", "Impact of Participating in a Nutrition Based Wellness Initiative", "Discussion", "Conclusion", "Authors' contributions", "Conflict of interest statement" ]
[ "There is growing empirical evidence of an association between physicians' wellness and their ability to deliver quality health care [1-9] and yet physicians are often unable to tend to their wellness [10]. Physicians report that they frequently cannot eat and drink properly or at all during work hours [11-14]. Research has also shown an association between workplace nutrition and cognition for physicians [15]. The primary objective of our study was to explore hospital based physicians' views of how they experience poor nutrition in the workplace. Specifically, we asked about whether or not they have difficulty meeting their nutritional needs at work, their perceptions of the personal and professional impact of suboptimal nutrition, and the contributing barriers. The secondary objective was to further explore the same physicians' views and attitudes towards workplace nutrition after they had participated in a related study involving a nutrition-based intervention [15].", "During May and June of 2008, we interviewed a volunteer sample of 20 staff physicians from various medical specialties who were recruited from the doctors' lounge of a large urban teaching hospital. The average age of participants was 47 (range = 36 to 64 years), and 3 were women. Their average BMI was 25 kg/m2 (range = 20 - 38), all were non-smokers and most (75%) exercised at moderate to high intensity for 30 minutes or longer for at least 2 to 4 days per week. They represented a variety of medical practice areas where 10 participants were from a medical specialty (e.g., General Internal Medicine, Neurology, Intensive Care Medicine), 8 from a surgical specialty (e.g., General Surgery, Plastic Surgery, Otolaryngology), and 2 were from primary care (e.g., Family Medicine, Hospitalist). All had also agreed to participate in a related study involving a nutrition based wellness intervention [15]. The physicians were first interviewed, prior to participating in the intervention study, to explore their views of how they experience poor nutrition in the workplace, and again, after their participation in the intervention, to explore how their participation impacted these views. No physicians dropped out of the study.\nThe related intervention study [15] assessed hospital physicians' nutritional intake, cognitive function and hypoglycemic symptoms during two similar work days: one where they followed their usual eating and drinking habits and another where they were provided with nutritious food and drink at enforced scheduled intervals. The intervention was designed to ensure that physicians consumed nutrients and fluids at regular intervals throughout their work day and had four key elements: providing healthy nutrition choices; enforcing nutrition breaks; maximizing ease of accessibility; and offering cost free nutrition. Nutrition provided during the study period on the intervention day was based on the recommendations of Canada's Food Guide. On average, food and beverages were provided in six small meals. This varied according to the number of hours worked by each participant and their individual nutritional choices. At each scheduled nutrition break, the research team contacted participants through hospital paging. Ready-to-consume and cost free nutrition was either waiting for physicians at the centrally located doctors' lounge or was brought to their practice location. 
On both of the study days, cognitive function, nutrient/fluid intake, urinary output, frequency of hypoglycemic symptoms and capillary blood glucose were measured by the research coordinator every two hours. Analysis of these data showed a higher nutrient intake and improved hydration for the study participants on intervention versus baseline days. The intervention was associated with significantly superior scores on cognitive function testing, and lower and less variable blood glucose levels. Although not statistically significant, there was also a trend toward the reporting of fewer hypoglycemic symptoms. The participants were blinded to these results during the intervention study and also at the time of the second interview exploring the impact of the intervention on their views of workplace nutrition.\nThe interviews were conducted by the first author, a female internal medicine consultant, clinical professor of medicine, and colleague of the participants. The interviewer also holds an official appointment as Vice Chair, Physician Wellness and Vitality, Department of Medicine and has prior multidisciplinary research experience. The interviews took place at the hospital, in a private setting and lasted between 15 and 45 minutes. The interview questions analyzed in this paper are outlined in the results section below and were embedded in a more detailed set of questions that served as the entry and exit interviews for the nutrition based wellness intervention studying the association between workplace nutrition and cognition. The questions were guided by previous data from local research where both qualitative and quantitative data repeatedly identified poor workplace nutrition as a wellness concern for physicians [13,14]. In general, the initial questions required a yes/no response in order to determine what proportion of the physicians reported certain nutritional experiences. These were followed by open ended probes intended to elicit specific examples of how the physicians experience poor workplace nutrition. In addition, prompts were used in order to further explore the question, again followed by open ended probes framed to elicit the physicians' personal experiences.\nThe questions were not pilot tested, and no interviews were repeated for the same content. The detailed interview notes were transcribed into a typed summary and independently reviewed by the two lead co-investigators using an inductive strategy through open and selective coding to derive the predominant themes. Given the exploratory nature of the research, the interviews were completed with all 20 participants even if data saturation occurred on occasion with some of the major themes. This strategy also produced a richness of illustrative examples. Interview transcripts were not returned to participants for comments or corrections. All, however, were sent a summary report of the analyzed interview data and asked for comments, and no negative feedback related to accuracy or interpretation was received. Ethics approval for this study was obtained from the Conjoint Health Ethics Review Board of the University of Calgary.", "During the first interview, the physicians were asked questions aimed at exploring their perceptions of workplace nutrition. We inquired as to how they experience inadequate nutrition in the workplace in terms of their personal wellness and professional performance, and whether or not they perceived a link between their nutrition and cognitive performance. 
They were also asked about what they viewed as the potential barriers to nutrition in the workplace. After completion of the intervention study described above, the participants were interviewed again to explore whether or not their participation had influenced their awareness of their work nutrition patterns and their intention to modify their future nutrition practices. The following themes emerged (Table 1).\nPhysicians' views of their workplace nutrition\n[SUBTITLE] Impact of Inadequate Nutrition [SUBSECTION] We asked the physicians \"Do you sometimes have difficulty eating and drinking during work hours?\" and all answered yes. We then asked \"Think back to a busy work day where you maybe did not have time to eat and drink properly. Do you think it had an impact on you? In what way? How did you feel physically, psychologically, cognitively? Do you think it had an impact on your work? In what way? Do you think it had an impact on how you treated your colleagues, other health care professionals? Did it impact your ability to complete your work?\" All of the participants reported that they thought that inadequate nutrition had some impact on them and almost all reported experiencing emotional symptoms including irritability, frustration, decreased patience and feeling emotionally drained. Most reported they have felt physical symptoms such as fatigue, hypoglycemic symptoms (shaky, tremors, sweats, headache, lightheaded, hunger, nausea), and general malaise. Many also described a perceived impact on their cognition, citing symptoms such as difficulty concentrating, lack of focus, inability to think clearly, poor or slow decision making, and generalized inefficiency. Many study participants admitted that they felt the impact of inadequate nutrition at times led to an inability to complete their work, and they offered examples such as decreased efficiency, lack of focus, less willingness to discuss patient care issues with colleagues and having to be much more deliberate and methodical when hungry. Examples of negative interpersonal interactions included being less patient and less gregarious. The following quotations illustrate some examples of the impact of inadequate nutrition experienced by physicians.\nI think you can be distracted, irritable, have poor interactions if hungry enough. Your agenda changes and you just want to get the hell out of here. Hopefully, you don't give in to it. People get irritated and emotionally tapped out and don't want to think through the problem. (#20)\nIf I eat breakfast, I need lunch or I feel diaphoretic and shaky. So I cut out breakfast years ago because I can't get regular lunch. So if I miss meals irregularly it becomes an issue. I'm not hungry at lunch, I have no cravings, I have trained my body to miss lunch. If I deviate from my routine, yes, otherwise, no impact [of poor nutrition]. I feel shaky, get sweats, have decreased concentration, lack of focus, things don't seem to go right. I need fine motor skills during surgery and I will note a little bit of a tremor that almost makes the case impossible to do. Then I get frustrated and it makes things worse. (#12)\nMy attention is poor and I think about food and drink, which is a distraction. I get headaches, feel fatigue, less motivated and irritable. I'm not aware that it impacts my interactions with colleagues and my ability to complete work, but overall, it [hunger] may decrease efficiency. (#9)\n[Impact of poor nutrition on work] Probably, but I would like to think not! 
[Impact on how he treats colleagues] I don't know, it's hard to say, hard to compare to what you could do best on that day. Maybe it takes you longer and you have to focus more. [Ability to complete work] Probably. At night, when you get older, it's harder. I have to focus on the real meat of it [work] and feel massive fatigue, especially if I haven't eaten. (#14)\nIn order to further explore the link between nutrition and cognition, we next asked the participants \"Do you think there is an association between a physician's nutritional intake during work hours and their cognitive function? Why or why not? What do you think the important consequences might be?\" Most participants felt strongly that there was an association between their nutrition and cognition. The physicians who either did not think there was an association or were not sure, often went on to qualify their opinion by adding comments suggesting they recognize within themselves the potential for lack of acknowledgment of this linkage. The following quotations illustrate some examples of the physicians' perceptions of the link between nutrition and cognition, and the conflict voiced by the physicians.\nAbsolutely [link between nutrition and cognition]. I know that myself. When I feel hungry, I can't focus. I get slower around 4 am when I am hungry and tired and not drinking enough water, not peeing. Consequences for patient care? For sure! (#1)\nAbsolutely! If I grab food on the run it doesn't feel as well as a rest and a read of the newspaper or a visit or a curbside consult [with colleagues]. Rest and food also equal socialization. Consequences [of good nutrition] are performance and less stress. I am thinking clearly, and am more efficient and organized. (#2)\nI don't believe so [association between nutrition and cognition]. You have to be excessively hypoglycemic before cognitive impairment, or I hope that is true! I do feel sleepy after the mid day meal on the Monday. At times I am short tempered with staff and surly and sometimes they will bring me food and it helps. Does it affect how I treat colleagues? Yes, from an emotional and interpersonal way, but heaven forbid it would impact patient care. [Ability to complete work]I don't think so. I would notice it and eat. (#8)\nI don't notice it in myself [association between nutrition and cognition]. I feel hungry but I don't think I \"bonk\". But perhaps I am lacking insight. I feel I perform well all the time although I do notice if I get on my bike to ride home and haven't eaten well, then I will bonk. (#11)\nSuboptimal nutrition and restfulness should impact on cognition. Maybe it is surgical arrogance that makes me think not. To me, nutrition and sleep deprivation are linked. I am more likely to do a procedure not as slickly if it is routine, run of the mill, by rote, but when it is super technical, it is way easier to get switched on. For example, last week, the surgery took 6 hours, went fine, no issues, went well. When I was writing the discharge prescription at the end of the case, I made a prescription error, when the hard part was over. There is also a narrow view of \"standard of care.\" How about our role of communicator, and educator? What if the patient has a ton of questions, and I put them aside because of fatigue and hunger? (#17)\nWe asked the physicians \"Do you sometimes have difficulty eating and drinking during work hours?\" and all answered yes. We then asked \"Think back to a busy work day where you maybe did not have time to eat and drink properly. 
The nutrition issues were brought to my attention by the study, otherwise I would not have changed my habits. (#5)\nI think I will be more careful about eating regularly. Funnily enough, I have been more careful the last couple of years because I can't do it like I used to. I don't have the same stamina or the same focus for hours on end. It's the same with food; if I do miss it it's harder. (#14)\nBefore I tended to not remember when I had eaten. Now maybe I can better correlate mood and performance. Because of the hypoglycemic questionnaire I started to realize I had symptoms of that even before I knew it. I am trying not to skip meals, and trying to snack, and to prepare food for work ahead of time. (#9)\nI've started to hunt down trail mix so I can graze in between meals. (#4)\nBeing in the study did not have a high impact. I already knew they [eating habits] were crappy and suboptimal. Maybe it will cause somewhat of a change, but not a big change. I already knew that if I wasn't getting food I would get irritable and hungry. It [initiative participation] has influenced how I would like to eat and drink but due to workplace and personal issues, I just can't make that happen. (#20)", "Although there is a wide belief that physicians may not always attain appropriate nutrition during their work day, few empirical studies are available that help us to understand how physicians perceive the impact of inadequate nutrition upon their personal wellness and professional performance, and the barriers to achieving adequate nutrition in the workplace. Most physicians in our study believe that through improved nutrition at work they would feel better and benefit from enhanced performance, and they identified the challenges in achieving this goal. These beliefs are supported more broadly by previous research showing that physicians who suffer other workplace related strains such as stress, burnout, and sleep deprivation pose a risk both to themselves [16,17] and to the delivery of quality health care [1-9].\nNutrition is a basic wellness necessity, and although research reports some physician healthy eating behaviors [18], there is mounting evidence that physicians have difficulty accessing proper nutrition during their work day. For example, a recent cross sectional survey study by Winston found that less than half the doctors who replied reported taking regular meal breaks and that limited canteen opening times, lack of selection and lack of breaks were the most commonly perceived barriers to healthy eating while at work [11]. Examples of the importance of physicians' nutrition include a recent study demonstrating that skipping breakfast and eating meals irregularly was associated with fatigue in medical students [19] and a study linking medical students' personal nutritional habits to their patient counseling practices [20]. The physicians interviewed in our study also participated in a related nutrition based intervention study where they were fed nutritious meals and snacks at scheduled intervals during their work day. 
Recently published results of this research show significantly improved cognition, higher caloric intake, improved hydration, and a trend toward fewer hypoglycemic symptoms for the physicians on the intervention day compared to the day when they followed their usual workplace eating habits [15].\nOur study adds to the existing literature in that the physicians interviewed not only conceded that they do not always have access to appropriate nutrition, but also aptly described their perceptions of the impact of inadequate nutrition. They portrayed the negative impact upon both their personal wellness and their ability to complete their work, recognizing that when they are undernourished, they are less likely to function at their best. In addition to identifying the practical barriers, the physicians recounted how their sense of professionalism and work ethic also hinder their work nutrition practices, a finding not previously reported to our knowledge. Lastly, the physicians in our study described how participation in a physician wellness intervention raised their awareness and altered their views of nutrition in the workplace. Unfortunately, many of the doctors admitted that the practical and profession-related barriers to ensuring adequate nutrition in the workplace would still present challenges for them.\nStudy limitations include the relatively small sample of predominantly male physicians from a single hospital acute care setting. Also, participants may have volunteered for the study because of their personal interest in nutrition and the opportunity to explore a potential association between nutrition and cognition during the wellness intervention. Future research might explore whether similar consequences of inadequate nutrition, and similar difficulties in accessing proper nutrition, are experienced by physicians in other work settings. The interviews conducted in our study were linked to a related nutrition based wellness intervention study exploring the association between physician nutrition and cognition. The questions posed were guided by both the local knowledge of prior reporting of poor workplace nutrition and the underlying research questions: is there an association between physician nutrition and cognition, are physicians aware of this link, and are there barriers to achieving nutrition in the workplace? It is thus possible that the way in which the questions were framed may have influenced the participants' responses. However, this is not likely the case, as the participants followed up their yes/no answers and/or the subsequent probes by offering, in their own words, many and varied examples of how they personally experienced poor workplace nutrition.", "Physicians report that inadequate workplace nutrition has a significant negative impact on their personal wellness and professional performance. Given this threat to health care delivery, health care organizations and the medical profession need to address both the practical and professional barriers that impede physicians' basic self-care needs such as nutrition, and implement physician wellness workplace initiatives that may positively influence health care organization policy, patient care, and physician behavior. 
Future work should include quantitative studies to determine if the major themes identified from the interviews represent those of a broader sample of physicians.", "JL contributed to the study conception and design, acquisition of data, analysis and interpretation of data, drafting and critical revision of the manuscript, obtaining funding, and administrative support. JW contributed to the study conception and design, analysis and interpretation of data, critical revision of the manuscript, and obtaining funding. KD contributed to the study design, critical revision of the manuscript and technical support. DR contributed to the conception and design of the study, critical revision of the manuscript, and technical and material support. JL had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis, and had final responsibility for the decision to submit the manuscript for publication. All authors read and approved the final manuscript.", "The authors declare that they have no competing interests." ]
[ null, "methods", null, null, null, null, null, null, null, null ]
[]
Levator anguli oris muscle based flaps for nasal reconstruction following resection of nasal skin tumours.
21333010
Surgical excision remains the best tool for management of skin tumours affecting the nasal skin; however, many surgical techniques have been used for reconstruction of the nasal defects caused by excisional surgery. The aim of this work is to evaluate the feasibility and outcome of levator anguli oris muscle based flaps.
BACKGROUND
Ninety patients with malignant nasal skin tumours were included in this study. Age ranged from four to 78 years. For small unilateral defects affecting only one ala nasi, a levator anguli oris myocutaneous (LAOMC) flap was used in 45 patients. For unilateral compound loss of skin and mucous membrane, a levator anguli oris myocutaneous mucosal (LAOMCM) flap was used in 23 patients. For very large defects, bilateral LAOMC or LAOMCM flaps combined with forehead glabellar flaps were used to reconstruct the defect in 22 patients.
METHODS
Wound dehiscence was the commonest complication. Minor complications, in the form of haematoma and minor flap loss, were managed conservatively. Partial flap loss was encountered in 6 patients with relatively larger tumours or diabetic co-morbidity, three of whom required operative re-intervention in the form of debridement and flap refashioning, whereas total flap loss did not occur at all.
RESULTS
Immediate nasal reconstruction for nasal skin and mucosal tumours with levator anguli oris muscle based flaps (LAOMC, LAOMCM) is feasible and spares the patient the psychic trauma due to organ loss.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Child", "Child, Preschool", "Facial Muscles", "Female", "Humans", "Male", "Middle Aged", "Neoplasm Recurrence, Local", "Nose", "Nose Neoplasms", "Prognosis", "Plastic Surgery Procedures", "Skin Neoplasms", "Surgical Flaps", "Young Adult" ]
3046908
null
null
Patients and Methods
Over the period between July 2007 and July 2010, ninety patients with malignant skin tumours located in the nasal skin were enrolled in this study at the surgical oncology unit, Mansoura University. They included 63 patients with primary lesions and 27 with recurrent tumours. There were 51 males and 39 females. Age of the patients ranged from four to 78 years. The young patients were those with xeroderma pigmentosum (9 patients). BCC was present in 56 patients, squamous cell carcinoma in 33 patients, and one patient had melanoma (Table 1). Patients with advanced age or extensive comorbidity were excluded from this study. Patient characteristics: Wide local excision with three-dimensional safety margins was carried out, guided by intraoperative frozen section in all patients, prior to starting the reconstructive procedure. Any infiltrated margin was dealt with immediately by re-excision. [SUBTITLE] Nasal Reconstructive Technique [SUBSECTION] A skin paddle contoured to the size and shape of the nasal defect was outlined in the nasolabial fold (Figures 2a & 3a); it was positioned along the nasolabial fold to permit transposition of the flap to reconstruct the nasal defect without tension. The defect location and the infraorbital rim determined its position; it is about 1 cm below the orbital rim, which is the pivot point of the pedicle. Illustration of the operative technique of levator anguli oris muscle based flaps: 2a: design and planning. 2b: surgical elevation of the flap. 2c: closure of defects. Operative illustration of the LAOMC flap. 3a: design and planning. 3b: closure of defects. An incision was made through the skin around the borders of the outlined skin paddle, and then, from the uppermost border of the skin paddle, another incision was made upwards for 3-5 cm (Figure 2b). Skin flaps were raised widely in the subdermal fat to develop the levator anguli oris myocutaneous (LAOMC) flap. When mucous membrane was required for compound nasal loss of skin and mucous membrane, the incision around the skin paddle was deepened to the oral mucous membrane, with a piece of gauze inside the mouth cavity, until a part of mucous membrane equal to that of the defect was included in the levator anguli oris myocutaneous mucosal (LAOMCM) flap. The dissection was continued upwards below the levator anguli oris, taking care not to injure the infraorbital artery, which lies between the levator anguli oris above and the levator labii superioris below as it emerges from the infraorbital foramen; the dissection was therefore continued to 1 cm below the orbital margin, where the vascular pedicle can be seen and preserved. The skin bridge between the flap and the defect was either elevated to create a subcutaneous tunnel through which the skin paddle was delivered into the defect, or the flap was pedicled above the skin and later transected after 10 to 15 days. The mucous membrane of the levator anguli oris myocutaneous mucosal (LAOMCM) flap was first sutured to the mucous membrane of the nose, and then the skin of the flap was sutured to the nasal skin (Figures 2c & 3b). The donor-site defect is hidden in the nasolabial fold and is closed easily and primarily using polyglactin 3/0 followed by subcuticular closure of the skin; when a part of the oral cavity mucous membrane is also transferred, it is closed first. 
Types of nasal reconstruction were as follows: 1- For small unilateral defects affecting only one ala nasi, a levator anguli oris myocutaneous (LAOMC) flap that includes both skin and muscle was used in 45 patients (Figure 4), 16 of them with a tunneled flap (Figures 5 & 6). A 47-year-old woman presented with BCC on the left side of the nasal skin; a pedicled LAOMC flap was used to reconstruct the nasal defect. 4a: preoperative. 4b: two weeks postoperative. 4c: two months postoperative. A 26-year-old female presented with melanoma; a tunneled LAOMC flap was used to reconstruct the nasal defect. 5a: preoperative. 5b: one month postoperative. A 17-year-old XDP boy presented with BCC at the dorsum of the nasal skin, treated with a tunneled LAOMC flap. 6a: preoperative. 6b: three weeks postoperative. 2- For unilateral compound loss of skin and mucous membrane, a levator anguli oris myocutaneous mucosal (LAOMCM) flap (Figure 7), ± other advancement flaps depending on the site and size of the defect, was used in 23 patients. A 62-year-old woman presented with BCC at the tip of the nose encroaching on the left side, with unilateral compound loss of skin and mucous membrane, treated with a pedicled LAOMCM flap. 7a: preoperative. 7b: six weeks postoperative. 3- For very large defects, bilateral LAOMC or LAOMCM flaps combined with forehead glabellar flaps (Figure 8) were used to reconstruct the defect in 22 patients. A 50-year-old man presented with a large advanced tumour with bilateral compound loss of skin and mucous membrane, treated with combined bilateral LAOMCM flaps and a supraorbital glabellar flap. 8a: preoperative. 8b: three weeks postoperative. 8c: three months postoperative. Routine immediate and late follow-up was undertaken for evaluation of the viability of the cover method, the degree of success of coverage, recipient- and donor-site morbidity, operative time, hospital stay, immediate and late overall morbidity and mortality, and finally tumour recurrence within the follow-up period.
null
null
null
null
[ "Introduction", "Nasal Reconstructive Technique", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "The skin is the most common site of cancer development in humans. More than one million new skin cancer cases are diagnosed in the United States annually, compared with about 1.3 million cases of all other types of cancer combined. Therefore, skin cancers constituted fully one-half of all cancers diagnosed [1].\nThe nose, being exposed to sun light, is a common site for skin malignancy. Surgical excision remains the best tool for management of skin tumors affecting nasal skin, reconstruction of defects caused by excisional surgery have been done using many techniques including median and paramedian forehead flaps [2], Rhombic bilobed flap, and other advancement flaps [3].\nThe modern era of nasal reconstruction has brought significant advancements and offers unparalleled opportunities for reconstructive surgeons to maximize functional and aesthetic outcomes [4]. The forehead flap has been used for many centuries and remains a workhorse flap for major nasal resurfacing [4].\nThe scalping forehead flap, with the aim of using it in total nasal reconstruction, has a rich net of arterial and venous vessels that constitute the basic pattern of its blood supply through three principal pedicles: superficial temporal, supraorbital, and supratrochlear. It was described for nasal reconstruction, but due to its characteristics, such as colour of the frontal skin, texture, hairless skin, and reliable perfusion, it can be used in the reconstruction of other facial areas [5]\nBurget is correctly credited with bringing the science of major nasal reconstruction to a new level. He developed a method of nasal reconstruction emphasizing the use of thin but highly vascular local lining and cover flaps to allow successful primary placement of delicate cartilage grafts. The cartilage fabrication provides projection in space, airway patency, and, when visible through conforming skin cover, the delicate contour of the normal nose. Because tissue is replaced in kind and quantity, the need for multiple revisions to sculpt and debulk is decreased [6].\nWhen performing aesthetic restoration of the nose, the reconstructive surgeon must take into account the concept of nasal aesthetic subunits. The nose is made of alternating concave and convex surfaces, or subunits, which are separated from one another by depressions and elevations of the surrounding nasal skin. When a large portion of a given subunit has been lost, replacing the entire subunit rather than simply patching the defect often produces a superior aesthetic result. This approach places the scars of flaps and grafts within the normal depressions and elevations of the nose where they are best camouflaged.\nLevator anguli oris muscle raises the angle of the mouth and assists in producing the nasolabial furrow; it arises from the canine fossa, just below the infraorbital fossa, and is inserted in the angle of the mouth (Figure 1), intermingling with the fibers of the zygomaticus major, depressor anguli oris, and orbicularis oris.\nLevator anguli oris muscle among other facial muscles.\nLevator anguli oris muscle based flaps are new flaps that we are the first authors who defined it. 
The aim of this work is to evaluate the feasibility and outcome of levator anguli oris muscle based flaps (LAOMC, LAOMCM), in combination with other flaps when needed, in the nasal reconstruction after excision of malignant tumors.", "A skin paddle countered to the size and shape of the nasal defect was outlined in the nasolabial fold (Figure 2a &3a), it was positioned along the nasolabial fold to permit the transposition of the flap to reconstruct the nasal defect without tension. The defect location and the infraorbital rim determined its position; it is about 1 cm below the orbital rim, which is the pivot point of the pedicle.\nIllustration of the operative technique of levator anguli oris muscle based flaps: 2a: design and planning. 2b: surgical elevation of the flap. 2c: closure of defects.\nOperative illustration of the LAOMC flap. 3a: design and planning. 3b: closure of defects.\nAn incision is made through the skin around the borders of the outlined skin paddle, and then from uppermost border of the skin paddle another incision was made upwards for 3-5 cm (Figure 2b). Skin flaps were raised widely in the subdermal fat to get the levator anguli oris myocautaneous (LAOMC) flap.\nWhen the mucous membrane was desired for compound nasal loss of skin and mucous membrane, the incision around the skin paddle was deepened to oral mucous membrane with a piece of gauze inside the mouth cavity till a part of mucous membrane equal to that of the defect was included in the levator anguli oris myocautaneous mucosal (LAOMCM) flap.\nThe dissection was continued upwards below the levator anguli oris taking care not to injure the infraorbital artery which lies between the levator anguli oris above and the levator labii superioris below, as it emerges from the infraorbital foramen, so the dissection continued to 1 cm below the orbital margin where the vascular pedicle can be seen and preserved.\nThe skin bridge between the flap and the defect was elevated to create a subcutaneous tunnel and deliver the skin paddle into the defect through it or was pedicled above the skin, and later on transected after 10 to 15 days.\nThe mucous membrane of the levator anguli oris myocautaneous mucosal (LAOMCM) flap was first sutured to the mucous membrane of the nose and then the skin of the flap was sutured to the nasal skin (Figure 2c &3b).\nThe defect of the donor site is hidden in the nasolabial fold that is closed easily and primarily using polyglactin 3/0 and then subcuticular closure of the skin, but when a part of the mucus membrane of the oral cavity mucosa is also being transferred, it is closed first.\nTypes of nasal reconstruction were as follows:\n1- For small unilateral defects affecting only one side ala nasi, levator anguli oris myocautaneous (LAOMC) flap that includes both skin and muscle was used in 45 patients (Figure 4), of them 16 patients with tunneled flap (Figure 5 &6).\nA 47 years woman presented with BCC on the left side of the nasal skin, to whom a pedicled LAOMC flap was used to reconstruct the nasal defect. 4a: preoperative. 4b: two weeks postoperative. 4c: two months postoperative.\nA 26 years old female presented with melanoma to whom a tunneled LAOMC flap was used to reconstruct the nasal defect. 5a: preoperative. 5b: one month postoperative.\nA 17 years XDP boy presented with BCC at the dorsum of nasal skin treated with a tunneled LAOMC flap. 6a: preoperative. 
6b: three weeks postoperative.\n2- For unilateral compound loss of skin and mucus membrane, levator anguli oris myocautaneous mucosal (LAOMCM) flap (Figure 7) ± other advancement flaps depending on the site and size of the defect, was used in 23 patients.\nA 62 years woman presented with BCC at tip of the nose encroaching on the left side with unilateral compound loss of skin and mucus membrane and treated with a pedicled LAOMCM flap. 7a: preoperative. 7b: six weeks postoperative.\n3- Very large defects; bilateral either LAOMC or LAOMCM flaps combined with forehead glabellar flaps (Figure 8) were used to reconstruct the defect in 22 patients.\nA 50 years man presented with large advanced tumor with bilateral compound loss of skin and mucus membrane that treated with combined bilateral LAOMCM flap with supraorbital glabellar flap. 8a: preoperative. 8b: three weeks postoperative. 8c: three months postoperative.\nRoutine immediate and late follow up was undertaken for evaluation of the viability of the cover method, the degree of success of coverage, recipient and donor sites morbidity, operative time, hospital stay, immediate and late overall morbidity and mortality, and finally tumour recurrence within the follow up period.", "Ninety patients with pathologically proven malignant nasal skin tumours were enrolled in this study. Patients' age ranged from four to 78 years (median, 40.5). Pathologic types were: 56 patients of BCC, 33 patients of squamous cell carcinoma and one patient had melanoma (Table 1). Average operating time was 1.5 - 2.5 hours and the average hospital stay was 6-8 days.\nComplications are summarized in (Table 2). Wound dehiscence was the commonest complication, it accounts for 7.7% of all complications and only 2 out of 7 patients were liable to wound re-suturing. Minor complications, in the form of haematoma (3 patients) and minor flap loss (5 patients) were managed conservatively. Partial flap loss was encountered in 6 patients with relatively larger tumours or diabetic co-morbidity, three of whom were required operative re-intervention in the form of debridement and flap refashioning, while total flap loss was not occurred at all.\nThe complications:\nSubjective patient satisfaction was excellent in 50, good in 28, fair in ten and poor in two cases. Patients were followed for a median of 22.4 (range; 6-36) months. During this period, no episode of local recurrence was observed.", "The nose is not only the centrepiece of focus of the face for aesthetic reasons, but it is also critical in maintaining an adequate airway for breathing.\nAdvanced nasal skin tumours are not uncommon and can be cured with aggressive wide excision [7]. Intraoperative frozen section evaluation of safety margins is important before starting reconstruction to ensure complete tumour resection and decrease local recurrence rate.\nThe position of the nose as the focal point of the face makes its reconstruction a procedure requiring acute attention to detail and to preservation of the nasal three-dimensional integrity. Reconstructive procedures on the nose range from a straight forward direct linear closure to a complex multistage procedure requiring reconstruction of the internal lining and the cartilage support of the nose, as well as the external covering.\nThe scalping flap thus has several advantages over other options for nasal reconstruction. 
For all but the largest defects, skin for the permanent defect can be taken from the upper and lateral portion of the forehead, thus minimizing the visible scar. The donor defect can be covered with a full thickness skin graft from the retroauricular or supraclavicular region, which gives a good colour and texture match. Most of the incision is behind the hairline, and once the pedicle of the flap is divided at the second stage, the hair-bearing scalp skin is returned, leaving scars, however it needs at least two-stage procedure but the final result can be acceptable. It can be used when other flaps are contraindicated and in case of advanced lesions either alone or combining it with other techniques [8].\nForehead flap either median or paramedian provides ample skin, which matches the missing skin in both texture and thickness, it is relatively simple in concept. However, we found the only disadvantage is that it needs at least two stage procedure and sometimes require a touch up surgery to provide the possible cosmetic outcome [9], this flap provides adequate tissue bulk as the there is no need to replace missing cartilage, this was also found by Burget et al 1994 [10].\nAs the need to replace a whole missing aesthetic nasal unit, this was dependant on patient type and the type of the defect as well, in general especially in patients with Xeroderma pigmentosa only limited surgery provides better surgery which also may be applied for some other patients [9].\nNaoshige described a method to repair full thickness defects of the nose using a glabellar flap as the lining of the nasal cavity and an expanded forehead flap for external closure. He considered his method useful in the reconstruction of a nose with a full thickness defect for which the flap donor site is limited. In our series, Glabellar flap gave good aesthetic results and it has a large available donor area that makes its use very important in case of large defects resulted from excision of locally advanced tumors [11].\nThe nasal lining and cartilage support is another issue of challenge in the field of nasal reconstruction. Cartilage grafts of septal, auricular or costal origins could be used, which are easy to shape and resistant both to infection, and to resorption. Moreover, the auricular cartilage is a source of grafts for reconstruction of all the cartilaginous structures of the nasal pyramid [12-14].\nWhen the alar defect is large, the composite free flap from the root of the helix provides cover, framework and lining reconstruction of the ala and the columella as well [15-18].\nThe aim of cartilage grafts is to be shaped in order to emulate the external form of each subunit and to prevent sidewall collapse as well as soft tissue retraction. In many cases in our study, we preserve a part of central nasal cartilage, which is considered as a natural barrier, and remove only the infiltrated parts. However, in some cases, we could not preserve this cartilage and subsequently this affects their cosmetic result outcome for somewhat.\nIn case of small superficial lesions, we preferred the use of either full thickness skin grafts or other small local flaps. In more complex situations, the use of more than one flap is used as we can combine the use of forehead flap with cheek advancement flaps in reconstructing a defect resulted from excision of a lesion in the nasal ala that extends to the cheek by this combination. 
The angle between the nose and cheek is preserved and the cosmetic outcome is much better.\nThe use of LAOMCM flap can combine the reconstruction of the nose from both the mucosal surface and nasal skin in only one flap, which minimizes the risk of suboptimal reconstruction and makes the reconstruction much easier. It gives the best cosmetic result in case of small lesions, which require full thickness reconstruction. Another advantage of this technique that it can be easily used either unilaterally or bilaterally for larger and central defects. It gives very acceptable donor site scar result.\nThe only disadvantage noted with the use of LAOMCM flap is the loss of the angle between the cheek and the nose which is straightened in contradiction to forehead flaps which can preserve this angle.\nDepending upon the patient's anxieties and self-image, the nose can be the most difficult area of the face to repair to a patient's satisfaction. Often the most difficult cases are those involving patients with small defects of the nasal tip. These patients expect little or no scar to result from their reconstructive surgery. Defects as small as 4-5 mm may represent the reconstructive surgeon's greatest challenge in terms of meeting patient expectations. Neither the degree of surgery required in a forehead flap nor the skin mismatch that can often result from grafting techniques is easily understood by the patient with a relatively small lesion.", "Nasal reconstruction at the time of surgery for nasal skin tumors is feasible by using levator anguli oris muscle based flaps (LAOMC, LAOMCM), and spares the patient the psychic trauma due to organ loss; it is oncologically safe after frozen section examination of the resected tumor.", "The authors declare that they have no competing interests.", "AD carried out the surgical techniques, conceived of the study and drafted the manuscript. OF participated in the design of the study, drafted the manuscript and assisted in surgical techniques. TF participated in the design of the study, drafted the manuscript and assisted in surgical technique. FS performed the statistical analysis, and participated in its coordination. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null ]
[ "Introduction", "Patients and Methods", "Nasal Reconstructive Technique", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "The skin is the most common site of cancer development in humans. More than one million new skin cancer cases are diagnosed in the United States annually, compared with about 1.3 million cases of all other types of cancer combined. Therefore, skin cancers constituted fully one-half of all cancers diagnosed [1].\nThe nose, being exposed to sun light, is a common site for skin malignancy. Surgical excision remains the best tool for management of skin tumors affecting nasal skin, reconstruction of defects caused by excisional surgery have been done using many techniques including median and paramedian forehead flaps [2], Rhombic bilobed flap, and other advancement flaps [3].\nThe modern era of nasal reconstruction has brought significant advancements and offers unparalleled opportunities for reconstructive surgeons to maximize functional and aesthetic outcomes [4]. The forehead flap has been used for many centuries and remains a workhorse flap for major nasal resurfacing [4].\nThe scalping forehead flap, with the aim of using it in total nasal reconstruction, has a rich net of arterial and venous vessels that constitute the basic pattern of its blood supply through three principal pedicles: superficial temporal, supraorbital, and supratrochlear. It was described for nasal reconstruction, but due to its characteristics, such as colour of the frontal skin, texture, hairless skin, and reliable perfusion, it can be used in the reconstruction of other facial areas [5]\nBurget is correctly credited with bringing the science of major nasal reconstruction to a new level. He developed a method of nasal reconstruction emphasizing the use of thin but highly vascular local lining and cover flaps to allow successful primary placement of delicate cartilage grafts. The cartilage fabrication provides projection in space, airway patency, and, when visible through conforming skin cover, the delicate contour of the normal nose. Because tissue is replaced in kind and quantity, the need for multiple revisions to sculpt and debulk is decreased [6].\nWhen performing aesthetic restoration of the nose, the reconstructive surgeon must take into account the concept of nasal aesthetic subunits. The nose is made of alternating concave and convex surfaces, or subunits, which are separated from one another by depressions and elevations of the surrounding nasal skin. When a large portion of a given subunit has been lost, replacing the entire subunit rather than simply patching the defect often produces a superior aesthetic result. This approach places the scars of flaps and grafts within the normal depressions and elevations of the nose where they are best camouflaged.\nLevator anguli oris muscle raises the angle of the mouth and assists in producing the nasolabial furrow; it arises from the canine fossa, just below the infraorbital fossa, and is inserted in the angle of the mouth (Figure 1), intermingling with the fibers of the zygomaticus major, depressor anguli oris, and orbicularis oris.\nLevator anguli oris muscle among other facial muscles.\nLevator anguli oris muscle based flaps are new flaps that we are the first authors who defined it. 
The aim of this work is to evaluate the feasibility and outcome of levator anguli oris muscle based flaps (LAOMC, LAOMCM), in combination with other flaps when needed, in the nasal reconstruction after excision of malignant tumors.", "Over the period between July 2007 and July 2010, ninety patients of malignant skin tumours located in the nasal skin were enrolled in this study at surgical oncology unit, Mansoura University. They included 63 patients with primary lesions and 27 with recurrent tumors. There were 51 males and 39 females. Age of the patients was ranged from four to 78 years. Young aged patient were those having Xeroderma pigmentosa (9 patients). BCC was presented in 56 patients, while squamous cell carcinoma presented in 33 patients and one patient had melanoma (Table 1). Patients with advanced age or extensive comorbidity were excluded from this study.\nPatients characteristics:\nWide local excision with three dimensional safety margins was carried out and was guided by intraoperative frozen section in all patients prior starting the reconstructive procedure. Any infiltrated margin was dealt immediately by re-excision.\n[SUBTITLE] Nasal Reconstructive Technique [SUBSECTION] A skin paddle countered to the size and shape of the nasal defect was outlined in the nasolabial fold (Figure 2a &3a), it was positioned along the nasolabial fold to permit the transposition of the flap to reconstruct the nasal defect without tension. The defect location and the infraorbital rim determined its position; it is about 1 cm below the orbital rim, which is the pivot point of the pedicle.\nIllustration of the operative technique of levator anguli oris muscle based flaps: 2a: design and planning. 2b: surgical elevation of the flap. 2c: closure of defects.\nOperative illustration of the LAOMC flap. 3a: design and planning. 3b: closure of defects.\nAn incision is made through the skin around the borders of the outlined skin paddle, and then from uppermost border of the skin paddle another incision was made upwards for 3-5 cm (Figure 2b). 
Skin flaps were raised widely in the subdermal fat to get the levator anguli oris myocautaneous (LAOMC) flap.\nWhen the mucous membrane was desired for compound nasal loss of skin and mucous membrane, the incision around the skin paddle was deepened to oral mucous membrane with a piece of gauze inside the mouth cavity till a part of mucous membrane equal to that of the defect was included in the levator anguli oris myocautaneous mucosal (LAOMCM) flap.\nThe dissection was continued upwards below the levator anguli oris taking care not to injure the infraorbital artery which lies between the levator anguli oris above and the levator labii superioris below, as it emerges from the infraorbital foramen, so the dissection continued to 1 cm below the orbital margin where the vascular pedicle can be seen and preserved.\nThe skin bridge between the flap and the defect was elevated to create a subcutaneous tunnel and deliver the skin paddle into the defect through it or was pedicled above the skin, and later on transected after 10 to 15 days.\nThe mucous membrane of the levator anguli oris myocautaneous mucosal (LAOMCM) flap was first sutured to the mucous membrane of the nose and then the skin of the flap was sutured to the nasal skin (Figure 2c &3b).\nThe defect of the donor site is hidden in the nasolabial fold that is closed easily and primarily using polyglactin 3/0 and then subcuticular closure of the skin, but when a part of the mucus membrane of the oral cavity mucosa is also being transferred, it is closed first.\nTypes of nasal reconstruction were as follows:\n1- For small unilateral defects affecting only one side ala nasi, levator anguli oris myocautaneous (LAOMC) flap that includes both skin and muscle was used in 45 patients (Figure 4), of them 16 patients with tunneled flap (Figure 5 &6).\nA 47 years woman presented with BCC on the left side of the nasal skin, to whom a pedicled LAOMC flap was used to reconstruct the nasal defect. 4a: preoperative. 4b: two weeks postoperative. 4c: two months postoperative.\nA 26 years old female presented with melanoma to whom a tunneled LAOMC flap was used to reconstruct the nasal defect. 5a: preoperative. 5b: one month postoperative.\nA 17 years XDP boy presented with BCC at the dorsum of nasal skin treated with a tunneled LAOMC flap. 6a: preoperative. 6b: three weeks postoperative.\n2- For unilateral compound loss of skin and mucus membrane, levator anguli oris myocautaneous mucosal (LAOMCM) flap (Figure 7) ± other advancement flaps depending on the site and size of the defect, was used in 23 patients.\nA 62 years woman presented with BCC at tip of the nose encroaching on the left side with unilateral compound loss of skin and mucus membrane and treated with a pedicled LAOMCM flap. 7a: preoperative. 7b: six weeks postoperative.\n3- Very large defects; bilateral either LAOMC or LAOMCM flaps combined with forehead glabellar flaps (Figure 8) were used to reconstruct the defect in 22 patients.\nA 50 years man presented with large advanced tumor with bilateral compound loss of skin and mucus membrane that treated with combined bilateral LAOMCM flap with supraorbital glabellar flap. 8a: preoperative. 8b: three weeks postoperative. 
8c: three months postoperative.\nRoutine immediate and late follow up was undertaken for evaluation of the viability of the cover method, the degree of success of coverage, recipient and donor sites morbidity, operative time, hospital stay, immediate and late overall morbidity and mortality, and finally tumour recurrence within the follow up period.", "A skin paddle countered to the size and shape of the nasal defect was outlined in the nasolabial fold (Figure 2a &3a), it was positioned along the nasolabial fold to permit the transposition of the flap to reconstruct the nasal defect without tension. The defect location and the infraorbital rim determined its position; it is about 1 cm below the orbital rim, which is the pivot point of the pedicle.\nIllustration of the operative technique of levator anguli oris muscle based flaps: 2a: design and planning. 2b: surgical elevation of the flap. 2c: closure of defects.\nOperative illustration of the LAOMC flap. 3a: design and planning. 3b: closure of defects.\nAn incision is made through the skin around the borders of the outlined skin paddle, and then from uppermost border of the skin paddle another incision was made upwards for 3-5 cm (Figure 2b). 
Skin flaps were raised widely in the subdermal fat to get the levator anguli oris myocautaneous (LAOMC) flap.\nWhen the mucous membrane was desired for compound nasal loss of skin and mucous membrane, the incision around the skin paddle was deepened to oral mucous membrane with a piece of gauze inside the mouth cavity till a part of mucous membrane equal to that of the defect was included in the levator anguli oris myocautaneous mucosal (LAOMCM) flap.\nThe dissection was continued upwards below the levator anguli oris taking care not to injure the infraorbital artery which lies between the levator anguli oris above and the levator labii superioris below, as it emerges from the infraorbital foramen, so the dissection continued to 1 cm below the orbital margin where the vascular pedicle can be seen and preserved.\nThe skin bridge between the flap and the defect was elevated to create a subcutaneous tunnel and deliver the skin paddle into the defect through it or was pedicled above the skin, and later on transected after 10 to 15 days.\nThe mucous membrane of the levator anguli oris myocautaneous mucosal (LAOMCM) flap was first sutured to the mucous membrane of the nose and then the skin of the flap was sutured to the nasal skin (Figure 2c &3b).\nThe defect of the donor site is hidden in the nasolabial fold that is closed easily and primarily using polyglactin 3/0 and then subcuticular closure of the skin, but when a part of the mucus membrane of the oral cavity mucosa is also being transferred, it is closed first.\nTypes of nasal reconstruction were as follows:\n1- For small unilateral defects affecting only one side ala nasi, levator anguli oris myocautaneous (LAOMC) flap that includes both skin and muscle was used in 45 patients (Figure 4), of them 16 patients with tunneled flap (Figure 5 &6).\nA 47 years woman presented with BCC on the left side of the nasal skin, to whom a pedicled LAOMC flap was used to reconstruct the nasal defect. 4a: preoperative. 4b: two weeks postoperative. 4c: two months postoperative.\nA 26 years old female presented with melanoma to whom a tunneled LAOMC flap was used to reconstruct the nasal defect. 5a: preoperative. 5b: one month postoperative.\nA 17 years XDP boy presented with BCC at the dorsum of nasal skin treated with a tunneled LAOMC flap. 6a: preoperative. 6b: three weeks postoperative.\n2- For unilateral compound loss of skin and mucus membrane, levator anguli oris myocautaneous mucosal (LAOMCM) flap (Figure 7) ± other advancement flaps depending on the site and size of the defect, was used in 23 patients.\nA 62 years woman presented with BCC at tip of the nose encroaching on the left side with unilateral compound loss of skin and mucus membrane and treated with a pedicled LAOMCM flap. 7a: preoperative. 7b: six weeks postoperative.\n3- Very large defects; bilateral either LAOMC or LAOMCM flaps combined with forehead glabellar flaps (Figure 8) were used to reconstruct the defect in 22 patients.\nA 50 years man presented with large advanced tumor with bilateral compound loss of skin and mucus membrane that treated with combined bilateral LAOMCM flap with supraorbital glabellar flap. 8a: preoperative. 8b: three weeks postoperative. 
8c: three months postoperative.\nRoutine immediate and late follow up was undertaken for evaluation of the viability of the cover method, the degree of success of coverage, recipient and donor sites morbidity, operative time, hospital stay, immediate and late overall morbidity and mortality, and finally tumour recurrence within the follow up period.", "Ninety patients with pathologically proven malignant nasal skin tumours were enrolled in this study. Patients' age ranged from four to 78 years (median, 40.5). Pathologic types were: 56 patients of BCC, 33 patients of squamous cell carcinoma and one patient had melanoma (Table 1). Average operating time was 1.5 - 2.5 hours and the average hospital stay was 6-8 days.\nComplications are summarized in (Table 2). Wound dehiscence was the commonest complication, it accounts for 7.7% of all complications and only 2 out of 7 patients were liable to wound re-suturing. Minor complications, in the form of haematoma (3 patients) and minor flap loss (5 patients) were managed conservatively. Partial flap loss was encountered in 6 patients with relatively larger tumours or diabetic co-morbidity, three of whom were required operative re-intervention in the form of debridement and flap refashioning, while total flap loss was not occurred at all.\nThe complications:\nSubjective patient satisfaction was excellent in 50, good in 28, fair in ten and poor in two cases. Patients were followed for a median of 22.4 (range; 6-36) months. During this period, no episode of local recurrence was observed.", "The nose is not only the centrepiece of focus of the face for aesthetic reasons, but it is also critical in maintaining an adequate airway for breathing.\nAdvanced nasal skin tumours are not uncommon and can be cured with aggressive wide excision [7]. Intraoperative frozen section evaluation of safety margins is important before starting reconstruction to ensure complete tumour resection and decrease local recurrence rate.\nThe position of the nose as the focal point of the face makes its reconstruction a procedure requiring acute attention to detail and to preservation of the nasal three-dimensional integrity. Reconstructive procedures on the nose range from a straight forward direct linear closure to a complex multistage procedure requiring reconstruction of the internal lining and the cartilage support of the nose, as well as the external covering.\nThe scalping flap thus has several advantages over other options for nasal reconstruction. For all but the largest defects, skin for the permanent defect can be taken from the upper and lateral portion of the forehead, thus minimizing the visible scar. The donor defect can be covered with a full thickness skin graft from the retroauricular or supraclavicular region, which gives a good colour and texture match. Most of the incision is behind the hairline, and once the pedicle of the flap is divided at the second stage, the hair-bearing scalp skin is returned, leaving scars, however it needs at least two-stage procedure but the final result can be acceptable. It can be used when other flaps are contraindicated and in case of advanced lesions either alone or combining it with other techniques [8].\nForehead flap either median or paramedian provides ample skin, which matches the missing skin in both texture and thickness, it is relatively simple in concept. 
However, we found its only disadvantage to be that it needs at least a two-stage procedure and sometimes requires touch-up surgery to achieve the best possible cosmetic outcome [9]; this flap provides adequate tissue bulk, as there is no need to replace missing cartilage, which was also found by Burget et al in 1994 [10].\nThe need to replace a whole missing aesthetic nasal unit depended on the patient as well as the type of defect; in general, and especially in patients with xeroderma pigmentosum, limited surgery provides a better result, an approach that may also be applied to some other patients [9].\nNaoshige described a method to repair full-thickness defects of the nose using a glabellar flap as the lining of the nasal cavity and an expanded forehead flap for external closure. He considered his method useful in the reconstruction of a nose with a full-thickness defect for which the flap donor site is limited. In our series, the glabellar flap gave good aesthetic results, and its large available donor area makes it very useful for large defects resulting from excision of locally advanced tumours [11].\nThe nasal lining and cartilage support are another challenge in the field of nasal reconstruction. Cartilage grafts of septal, auricular or costal origin can be used; they are easy to shape and resistant to both infection and resorption. Moreover, the auricular cartilage is a source of grafts for reconstruction of all the cartilaginous structures of the nasal pyramid [12-14].\nWhen the alar defect is large, the composite free flap from the root of the helix provides cover, framework and lining reconstruction of the ala and the columella as well [15-18].\nCartilage grafts are shaped to emulate the external form of each subunit and to prevent sidewall collapse as well as soft tissue retraction. In many cases in our study, we preserved a part of the central nasal cartilage, which is considered a natural barrier, and removed only the infiltrated parts. However, in some cases we could not preserve this cartilage, and this somewhat affected the cosmetic outcome.\nFor small superficial lesions, we preferred either full-thickness skin grafts or other small local flaps. In more complex situations, more than one flap may be used; for example, a forehead flap can be combined with cheek advancement flaps to reconstruct a defect resulting from excision of a lesion of the nasal ala that extends onto the cheek. With this combination, the angle between the nose and cheek is preserved and the cosmetic outcome is much better.\nThe LAOMCM flap can combine reconstruction of the nose from both the mucosal surface and the nasal skin in a single flap, which minimizes the risk of suboptimal reconstruction and makes the reconstruction much easier. It gives the best cosmetic result for small lesions that require full-thickness reconstruction. Another advantage of this technique is that it can easily be used either unilaterally or bilaterally for larger and central defects. It gives a very acceptable donor-site scar.\nThe only disadvantage noted with the LAOMCM flap is the loss of the angle between the cheek and the nose, which is straightened, in contrast to forehead flaps, which can preserve this angle.\nDepending upon the patient's anxieties and self-image, the nose can be the most difficult area of the face to repair to a patient's satisfaction. 
Often the most difficult cases are those involving patients with small defects of the nasal tip. These patients expect little or no scar to result from their reconstructive surgery. Defects as small as 4-5 mm may represent the reconstructive surgeon's greatest challenge in terms of meeting patient expectations. Neither the degree of surgery required in a forehead flap nor the skin mismatch that can often result from grafting techniques is easily understood by the patient with a relatively small lesion.", "Nasal reconstruction at the time of surgery for nasal skin tumours is feasible using levator anguli oris muscle-based flaps (LAOMC, LAOMCM) and spares the patient the psychological trauma of organ loss; it is oncologically safe after frozen section examination of the resected tumour.", "The authors declare that they have no competing interests.", "AD carried out the surgical techniques, conceived of the study and drafted the manuscript. OF participated in the design of the study, drafted the manuscript and assisted in the surgical techniques. TF participated in the design of the study, drafted the manuscript and assisted in the surgical techniques. FS performed the statistical analysis and participated in its coordination. All authors read and approved the final manuscript." ]
[ null, "methods", null, null, null, null, null, null ]
[]
Systematic review of reviews of intervention components associated with increased effectiveness in dietary and physical activity interventions.
21333011
To develop more efficient programmes for promoting dietary and/or physical activity change (in order to prevent type 2 diabetes) it is critical to ensure that the intervention components and characteristics most strongly associated with effectiveness are included. The aim of this systematic review of reviews was to identify intervention components that are associated with increased change in diet and/or physical activity in individuals at risk of type 2 diabetes.
BACKGROUND
MEDLINE, EMBASE, CINAHL, PsycInfo, and the Cochrane Library were searched for systematic reviews of interventions targeting diet and/or physical activity in adults at risk of developing type 2 diabetes from 1998 to 2008. Two reviewers independently selected reviews and rated methodological quality. Individual analyses from reviews relating effectiveness to intervention components were extracted, graded for evidence quality and summarised.
METHODS
Of 3856 identified articles, 30 met the inclusion criteria and 129 analyses related intervention components to effectiveness. These included causal analyses (based on randomisation of participants to different intervention conditions) and associative analyses (e.g. meta-regression). Overall, interventions produced clinically meaningful weight loss (3-5 kg at 12 months; 2-3 kg at 36 months) and increased physical activity (30-60 mins/week of moderate activity at 12-18 months). Based on causal analyses, intervention effectiveness was increased by engaging social support, targeting both diet and physical activity, and using well-defined/established behaviour change techniques. Increased effectiveness was also associated with increased contact frequency and using a specific cluster of "self-regulatory" behaviour change techniques (e.g. goal-setting, self-monitoring). No clear relationships were found between effectiveness and intervention setting, delivery mode, study population or delivery provider. Evidence on long-term effectiveness suggested the need for greater consideration of behaviour maintenance strategies.
RESULTS
This comprehensive review of reviews identifies specific components which are associated with increased effectiveness in interventions to promote change in diet and/or physical activity. To maximise the efficiency of programmes for diabetes prevention, practitioners and commissioning organisations should consider including these components.
CONCLUSIONS
[ "Diabetes Mellitus, Type 2", "Diet", "Exercise", "Health Promotion", "Humans", "Motivation", "Risk Reduction Behavior" ]
3048531
null
null
Methods
[SUBTITLE] Data Sources and Search Strategy [SUBSECTION] One author (KS) searched MEDLINE, EMBASE, CINAHL, PsycInfo, and the Cochrane Library for systematic reviews in the English language, published between January 1998 and May 2008 (the search terms were reviewed by several authors (CG, CA, WH) and are provided in Additional file 1 Table S1). Reference lists of selected reviews and relevant clinical guidelines were also searched and experts in the area were contacted in order to identify unpublished reviews. [SUBTITLE] Review selection [SUBSECTION] Two reviewers (KS, CG) independently examined titles and abstracts. Relevant review articles were obtained in full, and assessed against the inclusion and study quality criteria described below. Inter-reviewer agreement on inclusion was assessed using kappa statistics and any disagreements were resolved through discussion. [SUBTITLE] Inclusion criteria [SUBSECTION] 1) Type of study: Systematic reviews and meta-analyses including RCTs, observational studies, case-controlled or other quasi-experimental studies. Comparison groups could include usual care, no intervention or other interventions. 2) Type of intervention: Interventions promoting physical activity and/or dietary change at the individual-level (i.e. interventions delivered to individuals either singly or in group sessions, but not whole-community or whole-population level interventions such as media campaigns or changes in the local environment). 3) Study populations: Adults (18 years and over) at risk of developing type 2 diabetes, selected because they were obese, overweight, sedentary, had hypertension, impaired fasting glucose, impaired glucose tolerance, hyperlipidaemia, metabolic syndrome, polycystic ovarian syndrome, gestational diabetes, a family history of type 2 diabetes or cardiovascular disease, or had been identified as having a high cardiovascular disease risk score (e.g. using a validated risk score such as Q-RISK or Framingham). [SUBTITLE] Exclusion criteria [SUBSECTION] 1) Reviews not meeting pre-defined criteria for methodological quality (Additional file 1 Table S2). 2) Reviews which focused on people with existing diabetes, cardiovascular disease, or solely on healthy adults, or which were confined to groups with significant co-morbidities (e.g. arthritis, mental health). [SUBTITLE] Outcomes [SUBSECTION] We selected reviews where the primary outcome measure was weight, weight loss (kg or Body Mass Index (BMI), proportions of people achieving a target weight loss), changes in physical activity (e.g. frequency, met-hrs per week) or dietary behaviour. Behaviours could be measured objectively (e.g. with accelerometers) or by self-report (e.g. dietary intake questionnaires). Cardio-respiratory fitness was considered as a proxy for change in physical activity. As self-report increases the risk of measurement bias,[26,27] we have highlighted findings based on self-report in the data tables (Additional file 2 Tables S7-S14). We also examined papers for other outcomes which might be of interest in relation to change in weight, diet, or physical activity behaviour or in relation to the progression to type 2 diabetes. [SUBTITLE] Study quality assessment [SUBSECTION] Review quality was rated independently by two authors (KS, CG) for a sub-sample (35 out of 107) of the articles identified as potentially relevant, using the Overview Quality Assessment Questionnaire (OQAQ;[28] Additional file 1 Table S2). Thereafter, review quality was rated by one researcher (KS) and verified by another (CG). Reviews were included if their OQAQ score was 14 or more (possible range 0-18) and if they scored at least one point for either of the two OQAQ criteria about assessing quality/taking quality into account in analyses (this was intended to maximise the likely quality of evidence underlying the review-level analyses). A percentage score was calculated for inter-rater agreement (defined as ≤1 point of variation on OQAQ scores) and any disagreements were resolved by discussion. [SUBTITLE] Data extraction [SUBSECTION] We extracted data on the effectiveness of interventions and on the relationship of effectiveness to seven pre-defined intervention components. These were: Theoretical basis (i.e. we extracted analyses relating effectiveness to the use of any stated theory of behaviour or behaviour change); Behaviour change techniques used (e.g. the use of specific techniques such as goal-setting, problem-solving or the planned use of some clearly defined set of behaviour change techniques: See Table 1 for examples); Mode of delivery (e.g. group-based, individual, self-delivery, mixed-mode); Intervention provider (e.g. general practitioner, counsellor); Intensity (e.g. number of sessions, total contact time); Characteristics of the target population (e.g. age, ethnicity, risk state); and Setting (e.g. primary care, workplace). Data were extracted against a data extraction template by one author (KS) and checked by another (CG) with reference to the full text of the article. Extracted data also included inclusion and exclusion criteria, reported analyses and analysis type. Definitions of 'established behaviour change techniques' [SUBTITLE] Grading of evidence [SUBSECTION] An evidence grade was given to each reported analysis, based on the Scottish Intercollegiate Guidelines Network (SIGN) evidence grading system[29]. This system grades the risk of bias associated with a particular piece of evidence on a hierarchy from meta-analysis and RCT evidence (grade 1) down to expert opinion (grade 4), with additional indicators (++, + or -) to indicate methodological quality. The SIGN system was modified, as our review aimed to identify the relative effectiveness of intervention components, rather than effectiveness per se (see Additional file 1 Table S3 for full details). Although the SIGN evidence grading uses an alpha-numeric system (1++, 1+, 1-, 2++, 2+, 2-), for ease of reading we have converted this to a text-based format. For each analysis the quality of the evidence (the degree of confidence that the risk of bias is low) is described as either "high (++), medium (+) or low (-)". Each analysis is also categorised as being either "causal" evidence (SIGN grade 1; evidence from meta-analyses or summaries of RCTs where the component or characteristic of interest was experimentally manipulated) or "associative" evidence (SIGN grade 2; evidence from correlational or observational analyses). We also applied a category of "very low quality" for analyses with very low apparent power (total N < 100). The reporting that follows excludes this very low quality evidence, although it is included in the supplementary data tables for completeness. [SUBTITLE] Analysis [SUBSECTION] No statistical analyses or meta-analyses were conducted. Instead, the existing analyses reported in the articles reviewed were extracted and reported in a systematic format (Additional file 2 Tables S7 to S14). Each analysis was graded using the adapted SIGN criteria as described above and a narrative synthesis is presented below, indicating both the quality of the evidence (low, medium, high) and whether it is causal or associative in nature. In accordance with reporting guidelines for systematic reviews, a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist is available for this review (Additional file 3).
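As a concrete illustration of the agreement and screening steps described in this methods text (inter-reviewer agreement assessed with kappa statistics, and review inclusion based on an OQAQ total of 14 or more plus at least one point on either quality-handling item), the following minimal Python sketch shows one way such calculations might be expressed. This is an editorial illustration only, not code from the review; all function names and example data are invented.

```python
# Illustrative sketch only: screening agreement (Cohen's kappa) and the
# OQAQ-based inclusion rule described in the methods text above.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two reviewers' include/exclude decisions on the same articles."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

def oqaq_includes(total_score, quality_item_scores):
    """Inclusion rule: OQAQ total >= 14 (possible range 0-18) and at least one
    point on either of the two items about assessing/using study quality."""
    return total_score >= 14 and any(score >= 1 for score in quality_item_scores)

# Example: two reviewers screening five abstracts (1 = include, 0 = exclude).
print(round(cohens_kappa([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]), 2))  # 0.62
print(oqaq_includes(15, [1, 0]))   # True
print(oqaq_includes(16, [0, 0]))   # False
```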
null
null
null
null
[ "Background", "Data Sources and Search Strategy", "Review selection", "Inclusion criteria", "Exclusion criteria", "Outcomes", "Study quality assessment", "Data extraction", "Grading of evidence", "Analysis", "Results", "Review characteristics", "Study quality", "Evidence synthesis", "Overall effectiveness (Additional file 2, Table S7)", "Weight Loss", "Physical Activity", "Dietary Intake", "Other Outcomes", "Theoretical basis (Additional file 2, Table S8)", "Behaviour change techniques (Additional file 2, Table S9)", "Use of specific behaviour change techniques", "Motivational interviewing", "Targeting multiple behaviours", "Mode of delivery (Additional file 2, Table S10)", "Intervention provider (Additional file 2, Table S11)", "Intervention intensity (Additional file 2, Table S12)", "Weight Loss", "Dietary Change", "Physical Activity", "Characteristics of the target population (Additional file 2, Table S13)", "Gender", "Ethnicity", "Age", "At risk populations", "Diabetes", "Weight", "Setting (Additional file 2, Table S14)", "Discussion", "Strengths and limitations", "Implications for practice and policy", "Directions for future research", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The development of type 2 diabetes is strongly associated with being overweight, obese or physically inactive[1,2]. Large randomised controlled trials (RCTs) have shown that relatively modest changes in lifestyle (increasing fibre (≥15 g/1000 kcal), reducing total fat (< 30% of energy consumed) and saturated fat (< 10% of energy consumed), engaging in moderate physical activity (≥30 mins/day), weight reduction (5%)) can reduce the risk of progression to type 2 diabetes in adults with impaired glucose regulation (also known as pre-diabetes) by around 50%[3-7]. In one study, achieving four or more of the above targets led to zero incidence of type 2 diabetes up to seven years later[8]. Consequently, promoting changes in physical activity and dietary intake is now recommended in national and international guidelines as a first line therapy for preventing type 2 diabetes[9-12].\nA number of diabetes prevention programmes have been developed internationally (e.g. in Finland,[13] Germany,[14,15] the US,[16,17] Australia[18] and China[19]). However, national diabetes prevention strategies are still lacking in many countries. The cost-effectiveness of lifestyle intervention approaches for diabetes prevention is already well established and is favourable in comparison to pharmacological approaches[20-22]. However, most interventions used to date in a research setting are considered to be too intensive for widespread implementation in health services[23]. For example, the US Diabetes Prevention Programme[4] involved 16 individual counselling sessions plus individual coaching and a maintenance programme with further individual and group sessions. A major challenge for healthcare providers therefore is how to achieve the lifestyle changes needed to prevent type 2 diabetes (and its associated cardiovascular risk) without overstretching existing budgets and available resources[24,25].\nIn translating the research evidence into practical programmes it is critical to ensure that the intervention components (i.e. behaviour change techniques and strategies) and characteristics (e.g. setting, delivery mode, intervention provider) most strongly associated with effectiveness are included.\nWe therefore aimed to systematically review existing systematic reviews to summarise the evidence relating the content of interventions for promoting dietary and/or physical activity change to their effectiveness in producing weight and behaviour change. The review focused on evidence relating to individuals at risk of type 2 diabetes due to lifestyle (e.g. inactivity) or clinical risk factors (e.g. overweight, elevated blood pressure).", "One author (KS) searched MEDLINE, EMBASE, CINAHL, PsycInfo, and the Cochrane Library for systematic reviews in the English language, published between January 1998 and May 2008 (the search terms were reviewed by several authors (CG, CA, WH) and are provided in Additional file 1 Table S1). Reference lists of selected reviews and relevant clinical guidelines were also searched and experts in the area were contacted in order to identify unpublished reviews.", "Two reviewers (KS, CG) independently examined titles and abstracts. Relevant review articles were obtained in full, and assessed against the inclusion and study quality criteria described below. 
Inter-reviewer agreement on inclusion was assessed using kappa statistics and any disagreements were resolved through discussion.\n[SUBTITLE] Inclusion criteria [SUBSECTION] 1) Type of study: Systematic reviews and meta-analyses including RCTs, observational studies, case-controlled or other quasi-experimental studies. Comparison groups could include usual care, no intervention or other interventions. 2) Type of intervention: Interventions promoting physical activity and/or dietary change at the individual-level (i.e. interventions delivered to individuals either singly or in group sessions, but not whole-community or whole-population level interventions such as media campaigns or changes in the local environment). 3) Study populations: Adults (18 years and over) at risk of developing type 2 diabetes, selected because they were obese, overweight, sedentary, had hypertension, impaired fasting glucose, impaired glucose tolerance, hyperlipidaemia, metabolic syndrome, polycystic ovarian syndrome, gestational diabetes, a family history of type 2 diabetes or cardiovascular disease, or had been identified as having a high cardiovascular disease risk score (e.g. using a validated risk score such as Q-RISK or Framingham).\n[SUBTITLE] Exclusion criteria [SUBSECTION] 1) Reviews not meeting pre-defined criteria for methodological quality (Additional file 1 Table S2). 2) Reviews which focused on people with existing diabetes, cardiovascular disease, or solely on healthy adults, or which were confined to groups with significant co-morbidities (e.g. arthritis, mental health).\n[SUBTITLE] Outcomes [SUBSECTION] We selected reviews where the primary outcome measure was weight, weight loss (kg or Body Mass Index (BMI), proportions of people achieving a target weight loss), changes in physical activity (e.g. frequency, met-hrs per week) or dietary behaviour. Behaviours could be measured objectively (e.g. with accelerometers) or by self-report (e.g. dietary intake questionnaires). Cardio-respiratory fitness was considered as a proxy for change in physical activity. As self-report increases the risk of measurement bias,[26,27] we have highlighted findings based on self-report in the data tables (Additional file 2 Tables S7-S14). We also examined papers for other outcomes which might be of interest in relation to change in weight, diet, or physical activity behaviour or in relation to the progression to type 2 diabetes.", "1) Type of study: Systematic reviews and meta-analyses including RCTs, observational studies, case-controlled or other quasi-experimental studies. Comparison groups could include usual care, no intervention or other interventions. 2) Type of intervention: Interventions promoting physical activity and/or dietary change at the individual-level (i.e. interventions delivered to individuals either singly or in group sessions, but not whole-community or whole-population level interventions such as media campaigns or changes in the local environment). 3) Study populations: Adults (18 years and over) at risk of developing type 2 diabetes, selected because they were obese, overweight, sedentary, had hypertension, impaired fasting glucose, impaired glucose tolerance, hyperlipidaemia, metabolic syndrome, polycystic ovarian syndrome, gestational diabetes, a family history of type 2 diabetes or cardiovascular disease, or had been identified as having a high cardiovascular disease risk score (e.g. using a validated risk score such as Q-RISK or Framingham).", "1) Reviews not meeting pre-defined criteria for methodological quality (Additional file 1 Table S2). 2) Reviews which focused on people with existing diabetes, cardiovascular disease, or solely on healthy adults, or which were confined to groups with significant co-morbidities (e.g. arthritis, mental health).", "We selected reviews where the primary outcome measure was weight, weight loss (kg or Body Mass Index (BMI), proportions of people achieving a target weight loss), changes in physical activity (e.g. frequency, met-hrs per week) or dietary behaviour. Behaviours could be measured objectively (e.g. with accelerometers) or by self-report (e.g. dietary intake questionnaires). Cardio-respiratory fitness was considered as a proxy for change in physical activity. As self-report increases the risk of measurement bias,[26,27] we have highlighted findings based on self-report in the data tables (Additional file 2 Tables S7-S14). 
We also examined papers for other outcomes which might be of interest in relation to change in weight, diet, or physical activity behaviour or in relation to the progression to type 2 diabetes.", "Review quality was rated independently by two authors (KS, CG) for a sub-sample (35 out of 107) of the articles identified as potentially relevant, using the Overview Quality Assessment Questionnaire (OQAQ;[28] Additional file 1 Table S2). Thereafter, review quality was rated by one researcher (KS) and verified by another (CG). Reviews were included if their OQAQ score was 14 or more (possible range 0-18) and if they scored at least one point for either of the two OQAQ criteria about assessing quality/taking quality into account in analyses (this was intended to maximise the likely quality of evidence underlying the review-level analyses). A percentage score was calculated for inter-rater agreement (defined as ≤1 point of variation on OQAQ scores) and any disagreements were resolved by discussion.", "We extracted data on the effectiveness of interventions and on the relationship of effectiveness to seven pre-defined intervention components. These were: Theoretical basis (i.e. we extracted analyses relating effectiveness to the use of any stated theory of behaviour or behaviour change); Behaviour change techniques used (e.g. the use of specific techniques such as goal-setting, problem-solving or the planned use of some clearly defined set of behaviour change techniques: See Table 1 for examples); Mode of delivery (e.g. group-based, individual, self-delivery, mixed-mode); Intervention provider (e.g. general practitioner, counsellor); Intensity (e.g. number of sessions, total contact time); Characteristics of the target population (e.g. age, ethnicity, risk state); and Setting (e.g. primary care, workplace). Data were extracted against a data extraction template by one author (KS) and checked by another (CG) with reference to the full text of the article. Extracted data also included inclusion and exclusion criteria, reported analyses and analysis type.\nDefinitions of 'established behaviour change techniques'", "An evidence grade was given to each reported analysis, based on the Scottish Intercollegiate Guidelines Network (SIGN) evidence grading system[29]. This system grades the risk of bias associated with a particular piece of evidence on a hierarchy from meta-analysis and RCT evidence (grade 1) down to expert opinion (grade 4), with additional indicators (++, + or -) to indicate methodological quality. The SIGN system was modified, as our review aimed to identify the relative effectiveness of intervention components, rather than effectiveness per se (see Additional file 1 Table S3 for full details). Although the SIGN evidence grading uses an alpha-numeric system (1++, 1+, 1-, 2++, 2+, 2-), for ease of reading we have converted this to a text-based format. For each analysis the quality of the evidence (the degree of confidence that the risk of bias is low) is described as either \"high (++), medium (+) or low (-)\". Each analysis is also categorised as being either \"causal\" evidence (SIGN grade 1; evidence from meta-analyses or summaries of RCTs where the component or characteristic of interest was experimentally manipulated) or \"associative\" evidence (SIGN grade 2; evidence from correlational or observational analyses). We also applied a category of \"very low quality\" for analyses with very low apparent power (total N < 100). 
The reporting that follows excludes this very low quality evidence, although it is included in the supplementary data tables for completeness.", "No statistical analyses or meta-analyses were conducted. Instead, the existing analyses reported in the articles reviewed were extracted and reported in a systematic format (Additional file 2 Tables S7 to S14). Each analysis was graded using the adapted SIGN criteria as described above and a narrative synthesis is presented below, indicating both the quality of the evidence (low, medium, high) and whether it is causal or associative in nature.\nIn accordance with reporting guidelines for systematic reviews, a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist is available for this review (Additional file 3).",
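The modified SIGN grading used in this review maps each analysis onto a causal/associative category and a high/medium/low quality label, with an extra "very low quality" category for analyses with total N below 100. The minimal Python sketch below (an editorial illustration, not the authors' code; names are invented) shows one way that conversion from the alpha-numeric grades to the text-based labels could be expressed.

```python
# Illustrative sketch of the text-based rendering of modified SIGN grades
# described in the grading-of-evidence text above.
SIGN_TYPE = {"1": "causal", "2": "associative"}
SIGN_QUALITY = {"++": "high", "+": "medium", "-": "low"}

def describe_sign_grade(grade, total_n=None):
    """e.g. '1++' -> 'causal, high quality'; any analysis with total N < 100 -> 'very low quality'."""
    if total_n is not None and total_n < 100:
        return "very low quality"
    level, quality = grade[0], grade[1:]
    return f"{SIGN_TYPE[level]}, {SIGN_QUALITY[quality]} quality"

print(describe_sign_grade("1++"))               # causal, high quality
print(describe_sign_grade("2+"))                # associative, medium quality
print(describe_sign_grade("2++", total_n=80))   # very low quality
```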
"Searches identified 3856 potentially relevant articles. Following review of titles and abstracts, 96 articles were retrieved and quality-assessed. An additional 11 articles were identified through reference lists and grey literature. Of these 107 articles, 30 met both the selection and quality criteria (Figure 1) and these are identified by an asterisk in the reference list[30-59]. The inter-rater reliability (Kappa) for applying review selection criteria was 0.71 (95%CI: 0.61 to 0.80), and the proportion for inter-reviewer agreement on review quality was 0.70 (95%CI: 0.55 to 0.85).\nFlow diagram of study selection.\n[SUBTITLE] Review characteristics [SUBSECTION] The characteristics of the included and excluded reviews are summarised in Additional file 1 Tables S4 and S5. Ten reviews examined physical activity interventions, three examined dietary interventions and seventeen examined both. Reviews included data from a range of populations (e.g. sedentary, overweight, obese, impaired glucose tolerance) and delivery settings (e.g. home based, leisure centre based, primary care, workplace) and used a variety of descriptive, meta-analytic and meta-regression analyses to investigate the association of intervention components with effectiveness. We identified 129 analyses of relationships between intervention components and effectiveness, and 55 analyses of intervention effectiveness (Additional file 2 Tables S7 to S14). The dates of published studies included in the reviews examined ranged from 1966 to 2008.\n[SUBTITLE] Study quality [SUBSECTION] The methodological quality of included reviews (Additional file 1 Tables S4, S6) was generally good (median OQAQ score = 15.6). The most common methodological weaknesses were the lack of use of study quality data to inform analyses (e.g. by sensitivity analysis, or by constructing separate analyses which excluded low quality trials) and potential bias in the selection of articles (e.g. not using independent assessors).\n[SUBTITLE] Evidence synthesis [SUBSECTION] The extracted analyses and evidence grades for each analysis are presented in Additional file 2 Tables S7 to S14. The findings can be summarised as follows:-\n[SUBTITLE] Overall effectiveness (Additional file 2, Table S7) [SUBSECTION] [SUBTITLE] Weight Loss [SUBSECTION] High quality causal evidence (grade 1++) from eight meta-analyses of RCTs from four reviews showed that interventions to promote changes in diet (or both diet and physical activity) produced moderate and clinically meaningful effects on weight loss (typically 3-5 kg at 12 months, 2-3 kg at 36 months)[37,38,42,50]. The effectiveness of such interventions (as well as physical activity only interventions) in producing weight loss was further supported by medium and low quality causal evidence (grade 1+ and 1-) from 14 meta-analyses and summaries of RCTs from six reviews (eight medium, six low quality analyses)[31,39,49,54,57,59].\n[SUBTITLE] Physical Activity [SUBSECTION] High quality causal evidence was found from four meta-analyses of RCTs in two reviews that physical activity interventions can produce moderate changes in self-reported physical activity (standardised mean difference around 0.3; Odds Ratio for achieving healthy activity targets around 1.2 to 1.3) and cardio-respiratory fitness (standardised mean difference around 0.5) at a minimum 6 months of follow up[41,59]. This was supported by lower quality causal evidence from six meta-analyses of RCTs and summaries of RCTs and other studies (three medium and three low quality analyses) from three systematic reviews that interventions to increase physical activity increased self-reported physical activity (typically equivalent to 30-60 minutes of walking per week) at a median of 6 weeks to 19 months of follow up[38,40,51]. However, it is worth noting that there were few examples of trials with successful outcomes at more than 12 months.\n[SUBTITLE] Dietary Intake [SUBSECTION] Medium and lower quality causal evidence from meta-analyses and descriptive summaries of RCTs (nine analyses from three separate reviews: six medium, three low) found positive changes in self-reported diet (calorie, fat, fibre, fruit and vegetable intake) at 6 to 19 months of follow up for dietary interventions[38,34,44].\n[SUBTITLE] Other Outcomes [SUBSECTION] High quality causal evidence (grade 1++) from one meta-analysis of RCTs[43] showed that interventions to promote changes in diet or physical activity (or both) produced moderate and clinically meaningful effects on the risk of progression to type 2 diabetes (relative risk reduction of 49% at 3.4 years) in people with impaired glucose regulation.\nOne review which examined variations in effectiveness over time[37] showed that weight loss tended to reverse once interventions ceased or moved from an active to a maintenance phase (net weight loss during active phase 0.08 BMI units per month; net weight gain during maintenance phase 0.03 BMI units per month).
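The effect estimates quoted above are reported as standardised mean differences (around 0.3 for self-reported physical activity and around 0.5 for cardio-respiratory fitness). As a hedged illustration of how such a figure is derived from group summary statistics, the sketch below computes a pooled-SD standardised mean difference (Cohen's d); the numbers are invented examples, not data from the included reviews.

```python
# Illustrative sketch: standardised mean difference (Cohen's d) with a pooled SD.
import math

def standardised_mean_difference(mean_int, sd_int, n_int, mean_ctl, sd_ctl, n_ctl):
    """Difference between intervention and control means divided by the pooled SD."""
    pooled_sd = math.sqrt(((n_int - 1) * sd_int**2 + (n_ctl - 1) * sd_ctl**2)
                          / (n_int + n_ctl - 2))
    return (mean_int - mean_ctl) / pooled_sd

# e.g. intervention arm 150 min/week of walking (SD 90, n 120) vs control 120 (SD 90, n 115)
print(round(standardised_mean_difference(150, 90, 120, 120, 90, 115), 2))  # ~0.33
```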
[SUBTITLE] Theoretical basis (Additional file 2, Table S8) [SUBSECTION] One meta-regression analysis provided medium quality associative evidence (grade 2+) suggesting that interventions with an explicitly stated theoretical basis (e.g. Social Cognitive Theory,[60] Theory of Planned Behaviour[61]) were no more effective in producing changes in either weight or in combined dietary and physical activity outcomes than interventions with no stated theoretical basis[38]. However, four meta-regression analyses (all medium quality associative analyses) in two reviews[38,48] did find an association between the use of a theoretically specified cluster of 'self-regulatory' intervention techniques (specific goal-setting, prompting self-monitoring, providing feedback on performance, goal review) and increased effectiveness in terms of a) weight loss, b) change in dietary outcomes, c) change in physical activity and d) combined (standardised mean difference for either dietary change or physical activity) outcomes.
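Several of the analyses summarised here are meta-regressions relating an intervention component to study-level effect sizes. The sketch below illustrates, under simplifying assumptions (a fixed-effect, inverse-variance-weighted regression with no modelling of between-study heterogeneity, which a full random-effects meta-regression would add), how such an associative analysis can be set up; the data and the component indicator are invented for illustration and do not come from the reviews.

```python
# Illustrative sketch of an inverse-variance-weighted meta-regression relating a
# 0/1 intervention-component indicator (e.g. use of self-regulatory techniques)
# to per-study standardised mean differences. All numbers are invented.
import numpy as np

effect_sizes = np.array([0.20, 0.45, 0.15, 0.50, 0.35])   # per-study SMDs
variances    = np.array([0.02, 0.03, 0.04, 0.02, 0.05])   # their sampling variances
uses_selfreg = np.array([0, 1, 0, 1, 1])                  # component indicator

weights = 1.0 / variances
X = np.column_stack([np.ones_like(uses_selfreg, dtype=float), uses_selfreg])

# Weighted least squares: solve (X' W X) b = X' W y
W = np.diag(weights)
coefs = np.linalg.solve(X.T @ W @ X, X.T @ W @ effect_sizes)
print(f"intercept={coefs[0]:.2f}, difference associated with component={coefs[1]:.2f}")
```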
Theoretical basis (Additional file 2, Table S8)

One meta-regression analysis provided medium quality associative evidence (grade 2+) suggesting that interventions with an explicitly stated theoretical basis (e.g. Social Cognitive Theory,[60] Theory of Planned Behaviour[61]) were no more effective in producing changes in either weight or in combined dietary and physical activity outcomes than interventions with no stated theoretical basis[38]. However, four meta-regression analyses (all medium quality associative analyses) in two reviews[38,48] did find an association between the use of a theoretically specified cluster of 'self-regulatory' intervention techniques (specific goal-setting, prompting self-monitoring, providing feedback on performance, goal review) and increased effectiveness in terms of a) weight loss, b) change in dietary outcomes, c) change in physical activity and d) combined (standardised mean difference for either dietary change or physical activity) outcomes.

Behaviour change techniques (Additional file 2, Table S9)

Categorisation of interventions varied greatly between reviews, with categories often conceptually overlapping and vaguely defined (e.g. diet vs. exercise vs. behavioural intervention). Despite this, we have summarised evidence on the use of what we have called "established, well defined behaviour change techniques", based on those reviews where clear and specific definitions were provided (see Table 1 for definitions). Further definition of the specific behaviour change techniques cited in Table 1 and those mentioned in the text below can be found in a recent taxonomy of behaviour change techniques[62].

Causal evidence from one medium quality meta-analysis indicated that change in weight was greater when established, well defined behaviour change techniques were added to interventions (e.g. when dietary advice plus a well-defined behavioural intervention using established behaviour change techniques was compared with dietary advice alone). The weight loss achieved by adding established behaviour change techniques to interventions was 4.5 kg at a median 6 months of follow up[54]. This was supported by two associative analyses (one medium and one low quality) which compared the results of different groups of studies in which the interventions either did or did not use established, well-defined behaviour change techniques. Using established behaviour change techniques was associated with increased weight loss (2.5 to 5.5 kg) compared with non-behavioural interventions (0.1 to 0.9 kg)[46,47].

Five low to medium quality associative analyses in two reviews attempted to relate the number of behaviour change techniques used to effectiveness in terms of weight loss or changes in diet or physical activity. The evidence was equivocal, with the pattern of data suggesting a possible association, but only one analysis approached significance[38,48].

Use of specific behaviour change techniques

High quality causal evidence was found that adding social support to interventions (usually from family members) provided an additional weight loss of 3.0 kg at up to 12 months (compared with the same intervention with no social support element)[31].
Medium to low quality associative evidence (from three meta-regression analyses and two associative analyses in three reviews) suggested that effectiveness for initial behaviour change (i.e. change in weight, diet or physical activity) was associated with using the following techniques (NB: definitions of these can be found in a recent taxonomy of behaviour change techniques[62]): 1) For dietary change: providing instruction, establishing self-monitoring of behaviour, use of relapse prevention techniques[38,48]. 2) For physical activity change: prompting practice, establishing self-monitoring of behaviour, individual tailoring (e.g. of information or counselling content)[38,40,48]. One review also provided medium quality causal evidence (a descriptive summary of individual RCT findings) that brief advice, which usually included goal-setting, led to an increase in walking activity (27 mins/week walking at 12 months of follow up)[51]. Goal-setting alongside the use of pedometers was also associated with increased walking (see below).

Further medium quality associative evidence suggested that increased maintenance of behaviour change was associated with the use of time management techniques (for physical activity) and encouraging self-talk (for both dietary change and physical activity)[38].

Three reviews examined interventions that used pedometers (i.e. self-monitoring of physical activity) to promote walking. Medium quality causal evidence (two analyses from two reviews) supported the effectiveness of pedometer based interventions for increasing walking activity[33,51] (mean increase of 2004 steps per day at a median 11 weeks; median increase in time walking of +54 min per week at a median 13 weeks). It must be noted that the vast majority of the interventions included in these meta-analyses included either step-goals or step diaries (or both) alongside the use of pedometers, so the evidence does not support the use of pedometers in isolation from these additional techniques. Indeed, associative analyses from one review[33] suggested that the use of a) a step diary (one low quality analysis) and b) goal-setting (one low and one medium quality analysis) in combination with use of a pedometer was associated with increased walking. Medium to high quality associative evidence (based on meta-analysis of only the intervention arms of studies) from two reviews[33,52] suggested that small changes in weight might also be achievable with pedometer based interventions (e.g. change in BMI of 0.38 kg/m2 at 11 weeks).
Motivational interviewing

Motivational interviewing is a distinct combination of behaviour change techniques (including decisional balance and relapse prevention techniques) delivered in a specific style (using patient centred empathy building techniques, such as rolling with resistance, affirmation and reflective listening)[63]. High quality causal evidence from one meta-analysis of RCTs[53] found that motivational interviewing was significantly more effective than traditional advice-giving for initiating changes in weight (producing a net difference of 0.72 BMI units compared with traditional advice-giving) at 3 to 24 months of follow up (mostly under 6 months). A further meta-analysis of RCTs[35] provided medium quality causal evidence of the effectiveness of motivational interviewing for a combined physical activity and dietary outcome, at up to 4 months of follow up (Standardised Mean Difference 0.53).
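Several of the outcomes above are reported in BMI units rather than kilograms. As a purely illustrative conversion (the example height is an assumption and is not taken from the reviews themselves), a change in BMI can be translated back into body weight using the definition of BMI:

\[ \mathrm{BMI} \;=\; \frac{\text{weight (kg)}}{\text{height (m)}^{2}} \quad\Longrightarrow\quad \Delta\text{weight} \;=\; \Delta\mathrm{BMI} \times \text{height}^{2} \]

For an adult 1.70 m tall, the net difference of 0.72 BMI units reported for motivational interviewing would correspond to roughly 0.72 x 1.70^2 ≈ 2.1 kg.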
Targeting multiple behaviours

Causal evidence from nine analyses in four reviews (one high, four medium and four low quality) showed that interventions which targeted both physical activity and diet rather than only one of these behaviours produced higher weight change (additional weight loss around 2-3 kg at up to 12 months)[31,36,37,54].
Mode of delivery (Additional file 2, Table S10)

The evidence from five reviews of dietary and/or physical activity interventions was mixed. Five associative analyses (three medium and two low quality) from four reviews failed to find a clear association between effectiveness and mode of intervention delivery for weight loss, dietary change or physical activity change[38,46,48,51]. One review found medium quality associative evidence that 'mixed mode' (individual and group) delivery was significantly related to greater effectiveness, compared with individual delivery, for initial weight loss (up to 6 months), but not for weight loss maintenance (at a mean 19 months)[38]. However, it is worth noting that there is evidence from individual high quality RCTs (based on data in the evidence tables of the included reviews) that individual, group, and mixed mode interventions can all be effective in changing diet and/or physical activity[31,38,51].

Intervention provider (Additional file 2, Table S11)

There was a lack of high quality evidence in this area for comparisons between specific types of intervention provider. Four associative analyses (two medium, two low) from four reviews found no consistent or significant relationship between intervention provider and weight, physical activity or dietary outcomes at up to 12 months of follow up[38,40,48,51].
However, strong evidence from individual RCTs (based on data in the evidence tables of the included reviews) showed that a wide range of providers (with appropriate training), including doctors, nurses, dieticians/nutritionists, exercise specialists and lay people, can deliver effective interventions for changing diet and/or physical activity[38,40,43,48,51,52].

Intervention intensity (Additional file 2, Table S12)

Definitions of intervention intensity reported in the reviews varied considerably, incorporating frequency and total number of contacts, total contact time, duration of the intervention and the number of behaviour change techniques used. The frequency and duration of clinical contact varied widely, ranging from 1 to around 80 sessions, delivered daily to monthly and lasting anything from 15 to 150 minutes, over periods ranging from 1 day to 2 years. For instance, one review of 17 weight loss interventions that compared different intervention intensities reported that the median contact frequency was weekly, the median session duration 60 minutes, and the median delivery period 10 weeks[54]. Physical activity interventions are often much more intensive due to a focus on practising the target behaviour (e.g. Shaw et al.[55] report interventions lasting 3 to 12 months with 3 to 5 sessions per week lasting a median 45 minutes each).

Weight Loss

Overall, 7 out of 9 analyses of intervention intensity favoured higher intensity interventions. One meta-analysis of ten small RCTs (N = 306) comparing different intervention intensities[54] found medium quality causal evidence that more intensive interventions (those including more behaviour change techniques, more contact time or a longer duration of intervention) generated significantly more weight loss than less intensive interventions (an additional 2.3 kg at a median seven months of follow up). This was supported by a medium quality associative analysis from the same review. However, it was not possible to deduce from the available data which component of intensity drives this relationship.

Medium to low quality evidence from three analyses in three reviews (one medium quality, two low quality) showed a positive association between the total number of contacts and weight loss at 12 to 38 months[46,50,57]. Associative evidence from two analyses in two reviews (one high quality, one low quality) found a relationship between increased frequency of contacts and weight loss at 6 to 15 months of follow up[37,47]. However, two associative analyses (one high and one medium quality) in two reviews[37,38] found no such relationship at 6 to 60 months.
Two medium quality associative analyses found mixed evidence (one positive, one negative) on the association between intervention duration and weight loss.

Dietary Change

Two low quality associative analyses within the same review found a positive relationship between number of contacts and self-reported dietary change at 12 months of follow up[34].

Physical Activity

There was a lack of evidence on the relationship between intervention intensity and physical activity outcomes. Two low quality associative analyses in two reviews[33,40] found no clear relationship between intervention intensity (duration) and physical activity outcomes.
Characteristics of the target population (Additional file 2, Table S13)

Gender

Eight associative analyses (three medium quality, five low quality) from six reviews found no consistent association between gender and changes in weight or physical activity at 10 weeks to 16 months of follow up[33,38,41,48,55,58].

Ethnicity

Although there is evidence (within some of the component trials in the reviews examined) that interventions can be effective for a number of ethnic groups[4], there was very little review-level evidence on the relationship between ethnicity and intervention effectiveness. One associative analysis (low quality) suggested that intervention studies with a higher percentage of white Caucasian participants achieved larger decreases in BMI at a median of 12 weeks of follow up[33]. Another (low quality) associative analysis in the same review reported no association between ethnicity and increased walking.

Age

Associative analyses (one medium quality, one low quality) from two reviews[33,55] suggested that older people lost more weight than younger people at 10.5 to 16 weeks of follow up[33]. Two further (low quality) analyses from two reviews found no relationship between age and physical activity at 3 and 6 months of follow up[33,41].
At risk populations

A range of evidence, including strong causal evidence from two meta-analyses of sub-groups of studies and associative evidence from meta-regression analyses from several further reviews, found that changes in weight and (at least short-term) physical activity are possible in high risk as well as lower risk populations, including high and low weight, high cardiovascular risk groups and sedentary and non-sedentary groups, at between 3 and 36 months of follow up[33,37,38,41-43,48,51]. Five analyses from four reviews provided mixed evidence as to whether targeting of interventions at people who are more sedentary was associated with larger increases in the amount of physical activity (two medium analyses (one positive, one negative), three low quality analyses (two negative, one trend))[33,41,48,51].

Diabetes

In two associative analyses (one high quality, one medium quality), effectiveness for weight loss (at 3 to 60 months) was found to be considerably lower for people with type 2 diabetes than for people without type 2 diabetes[37,38].

Weight

Four analyses in four reviews[33,41,42,48] provided mixed associative evidence (two medium (one positive, one negative), two low quality analyses (one positive, one negative)) as to whether targeting more overweight people was associated with larger increases in the amount of weight loss achieved. However, one high quality associative analysis showed that people with a higher starting weight achieve better health improvements at 2 to 4.6 years, in terms of a reduced incidence of type 2 diabetes[43].
Setting (Additional file 2, Table S14)

Examples were found (based on data in the evidence tables of included reviews) of effective interventions delivered in a wide range of settings, including healthcare settings, the workplace, the home, and in the community[30,34]. Few reviews formally examined the impact of intervention setting on effectiveness. However, one medium quality associative analysis revealed no significant differences in outcomes (either dietary or physical activity change) at six months between interventions in primary care, community and workplace settings[48].
The characteristics of the included and excluded reviews are summarised in Additional file 1, Tables S4 and S5. Ten reviews examined physical activity interventions, three examined dietary interventions and seventeen examined both. Reviews included data from a range of populations (e.g. sedentary, overweight, obese, impaired glucose tolerance) and delivery settings (e.g. home based, leisure centre based, primary care, workplace) and used a variety of descriptive, meta-analytic and meta-regression analyses to investigate the association of intervention components with effectiveness. We identified 129 analyses of relationships between intervention components and effectiveness, and 55 analyses of intervention effectiveness (Additional file 2, Tables S7 to S14). The dates of published studies included in the reviews examined ranged from 1966 to 2008.

The methodological quality of included reviews (Additional file 1, Tables S4 and S6) was generally good (median OQAQ score = 15.6). The most common methodological weaknesses were the lack of use of study quality data to inform analyses (e.g. by sensitivity analysis, or by constructing separate analyses which excluded low quality trials) and potential bias in the selection of articles (e.g. not using independent assessors).

The extracted analyses and evidence grades for each analysis are presented in Additional file 2, Tables S7 to S14.
This was supported by lower quality causal evidence from six meta-analyses of RCTs and summaries of RCTs and other studies (three medium and three low quality analyses) from three systematic reviews that interventions to increase physical activity increased self-reported physical activity (typically equivalent to 30-60 minutes of walking per week) at a median of 6 weeks to 19 months of follow up[38,40,51]. However, it is worth noting that there were few examples of trials with successful outcomes at more than 12 months.\nHigh quality causal evidence was found from four meta-analyses of RCTs in two reviews that physical activity interventions can produce moderate changes in self-reported physical activity (standardised mean difference around 0.3; Odds Ratio for achieving healthy activity targets around 1.2 to 1.3) and cardio-respiratory fitness (standardised mean difference around 0.5) at a minimum 6 months of follow up[41,59]. This was supported by lower quality causal evidence from six meta-analyses of RCTs and summaries of RCTs and other studies (three medium and three low quality analyses) from three systematic reviews that interventions to increase physical activity increased self-reported physical activity (typically equivalent to 30-60 minutes of walking per week) at a median of 6 weeks to 19 months of follow up[38,40,51]. However, it is worth noting that there were few examples of trials with successful outcomes at more than 12 months.\n[SUBTITLE] Dietary Intake [SUBSECTION] Medium and lower quality causal evidence from meta-analyses and descriptive summaries of RCTs (nine analyses from three separate reviews: six medium, three low) that found positive changes in self-reported diet (calorie, fat, fibre, fruit and vegetable intake) at 6 to 19 months of follow up for dietary interventions[38,34,44].\nMedium and lower quality causal evidence from meta-analyses and descriptive summaries of RCTs (nine analyses from three separate reviews: six medium, three low) that found positive changes in self-reported diet (calorie, fat, fibre, fruit and vegetable intake) at 6 to 19 months of follow up for dietary interventions[38,34,44].\n[SUBTITLE] Other Outcomes [SUBSECTION] High quality causal evidence (grade 1++) from one meta-analysis of RCTs[43] showed that interventions to promote changes in diet or physical activity (or both) produced moderate and clinically meaningful effects on the risk of progression to type 2 diabetes (relative risk reduction of 49% at 3.4 years) in people with impaired glucose regulation.\nOne review which examined variations in effectiveness over time[37] showed that weight loss tended to reverse once interventions ceased or moved from an active to a maintenance phase (net weight loss during active phase 0.08 BMI units per month; net weight gain during maintenance phase 0.03 BMI units per month).\nHigh quality causal evidence (grade 1++) from one meta-analysis of RCTs[43] showed that interventions to promote changes in diet or physical activity (or both) produced moderate and clinically meaningful effects on the risk of progression to type 2 diabetes (relative risk reduction of 49% at 3.4 years) in people with impaired glucose regulation.\nOne review which examined variations in effectiveness over time[37] showed that weight loss tended to reverse once interventions ceased or moved from an active to a maintenance phase (net weight loss during active phase 0.08 BMI units per month; net weight gain during maintenance phase 0.03 BMI units per month).", "High quality 
causal evidence (grade 1++) from eight meta-analyses of RCTs from four reviews showed that interventions to promote changes in diet (or both diet and physical activity) produced moderate and clinically meaningful effects on weight loss (typically 3-5 kg at 12 months, 2-3 kg at 36 months)[37,38,42,50]. The effectiveness of such interventions (as well as physical activity only interventions) in producing weight loss was further supported by medium and low quality causal evidence (grade 1+ and 1-) from 14 meta-analyses and summaries of RCTs from six reviews (eight medium, six low quality analyses)[31,39,49,54,57,59].", "High quality causal evidence was found from four meta-analyses of RCTs in two reviews that physical activity interventions can produce moderate changes in self-reported physical activity (standardised mean difference around 0.3; Odds Ratio for achieving healthy activity targets around 1.2 to 1.3) and cardio-respiratory fitness (standardised mean difference around 0.5) at a minimum 6 months of follow up[41,59]. This was supported by lower quality causal evidence from six meta-analyses of RCTs and summaries of RCTs and other studies (three medium and three low quality analyses) from three systematic reviews that interventions to increase physical activity increased self-reported physical activity (typically equivalent to 30-60 minutes of walking per week) at a median of 6 weeks to 19 months of follow up[38,40,51]. However, it is worth noting that there were few examples of trials with successful outcomes at more than 12 months.", "Medium and lower quality causal evidence from meta-analyses and descriptive summaries of RCTs (nine analyses from three separate reviews: six medium, three low) that found positive changes in self-reported diet (calorie, fat, fibre, fruit and vegetable intake) at 6 to 19 months of follow up for dietary interventions[38,34,44].", "High quality causal evidence (grade 1++) from one meta-analysis of RCTs[43] showed that interventions to promote changes in diet or physical activity (or both) produced moderate and clinically meaningful effects on the risk of progression to type 2 diabetes (relative risk reduction of 49% at 3.4 years) in people with impaired glucose regulation.\nOne review which examined variations in effectiveness over time[37] showed that weight loss tended to reverse once interventions ceased or moved from an active to a maintenance phase (net weight loss during active phase 0.08 BMI units per month; net weight gain during maintenance phase 0.03 BMI units per month).", "One meta-regression analysis provided medium quality associative evidence (grade 2+) suggesting that interventions with an explicitly stated theoretical basis (e.g. Social Cognitive Theory,[60] Theory of Planned Behaviour[61]) were no more effective in producing changes in either weight or in combined dietary and physical activity outcomes than interventions with no stated theoretical basis[38]. 
However, four meta-regression analyses (all medium quality associative analyses) in two reviews[38,48] did find an association between the use of a theoretically specified cluster of 'self-regulatory' intervention techniques (specific goal-setting, prompting self-monitoring, providing feedback on performance, goal review) and increased effectiveness in terms of a) weight loss, b) change in dietary outcomes, c) change in physical activity and d) combined (standardised mean difference for either dietary change or physical activity) outcomes.

Behaviour change techniques (Additional file 2, Table S9)
Categorisation of interventions varied greatly between reviews, with categories often conceptually overlapping and vaguely defined (e.g. diet vs. exercise vs. behavioural intervention). Despite this, we have summarised evidence on the use of what we have called "established, well defined behaviour change techniques", based on those reviews where clear and specific definitions were provided (see Table 1 for definitions). Further definition of the specific behaviour change techniques cited in Table 1 and those mentioned in the text below can be found in a recent taxonomy of behaviour change techniques[62].
Causal evidence from one medium quality meta-analysis indicated that change in weight was greater when established, well defined behaviour change techniques were added to interventions (e.g. when dietary advice plus a well-defined behavioural intervention using established behaviour change techniques was compared with dietary advice alone). The weight loss achieved by adding established behaviour change techniques to interventions was 4.5 kg at a median 6 months of follow up[54]. This was supported by two associative analyses (one medium and one low quality) which compared the results of different groups of studies in which the interventions either did or did not use established, well-defined behaviour change techniques. Using established behaviour change techniques was associated with increased weight loss (2.5 to 5.5 kg) compared with non-behavioural interventions (0.1 to 0.9 kg)[46,47].
Evidence from five low to medium quality associative analyses in two reviews attempted to relate the number of behaviour change techniques used to effectiveness in terms of weight loss or changes in diet or physical activity. The evidence was equivocal, with the pattern of data suggesting a possible association, but only one analysis approached significance[38,48].

Use of specific behaviour change techniques
High quality causal evidence was found that adding social support to interventions (usually from family members) provided an additional weight loss of 3.0 kg at up to 12 months (compared with the same intervention with no social support element)[31].
Medium to low quality associative evidence (from three meta-regression analyses and two associative analyses in three reviews) suggested that effectiveness for initial behaviour change (i.e. change in weight, diet or physical activity) was associated with using the following techniques (definitions of these can be found in a recent taxonomy of behaviour change techniques[62]): 1) for dietary change: providing instruction, establishing self-monitoring of behaviour, and use of relapse prevention techniques[38,48]; 2) for physical activity change: prompting practice, establishing self-monitoring of behaviour, and individual tailoring (e.g. of information or counselling content)[38,40,48].
One review also provided medium quality causal evidence (a descriptive summary of individual RCT findings) that brief advice, which usually included goal-setting, led to an increase in walking activity (27 mins/week walking at 12 months of follow up)[51]. Goal-setting alongside the use of pedometers was also associated with increased walking (see below).
Further medium quality associative evidence suggested that increased maintenance of behaviour change was associated with the use of time management techniques (for physical activity) and encouraging self-talk (for both dietary change and physical activity)[38].
Three reviews examined interventions that used pedometers (i.e. self-monitoring of physical activity) to promote walking. Medium quality causal evidence (two analyses from two reviews) supported the effectiveness of pedometer based interventions for increasing walking activity[33,51] (mean increase of 2004 steps per day at a median 11 weeks; median increase in time walking of +54 min per week at a median 13 weeks). It must be noted that the vast majority of the interventions included in these meta-analyses included either step-goals or step diaries (or both) alongside the use of pedometers, so the evidence does not support the use of pedometers in isolation from these additional techniques. Indeed, associative analyses from one review[33] suggested that the use of a) a step diary (one low quality analysis) and b) goal-setting (one low and one medium quality analysis) in combination with use of a pedometer was associated with increased walking. Medium to high quality associative evidence (based on meta-analysis of only the intervention arms of studies) from two reviews[33,52] suggested that small changes in weight might also be achievable with pedometer based interventions (e.g. change in BMI of 0.38 kg/m2 at 11 weeks).

Motivational interviewing
Motivational interviewing is a distinct combination of behaviour change techniques (including decisional balance and relapse prevention techniques) delivered in a specific style (using patient centred empathy building techniques, such as rolling with resistance, affirmation and reflective listening)[63]. High quality causal evidence from one meta-analysis of RCTs[53] found that motivational interviewing was significantly more effective than traditional advice-giving for initiating changes in weight (producing a net difference of 0.72 BMI units compared with traditional advice-giving) at 3 to 24 months of follow up (mostly under 6 months). A further meta-analysis of RCTs[35] provided medium quality causal evidence of the effectiveness of motivational interviewing for a combined physical activity and dietary outcome, at up to 4 months of follow up (standardised mean difference 0.53).

Targeting multiple behaviours
Causal evidence from nine analyses in four reviews (one high, four medium and four low quality) showed that interventions which targeted both physical activity and diet rather than only one of these behaviours produced higher weight change (additional weight loss of around 2-3 kg at up to 12 months)[31,36,37,54].

Mode of delivery (Additional file 2, Table S10)
The evidence from five reviews of dietary and/or physical activity interventions was mixed. Five associative analyses (three medium and two low quality) from four reviews failed to find a clear association between effectiveness and mode of intervention delivery for weight loss, dietary change or physical activity change[38,46,48,51]. One review found medium quality associative evidence that 'mixed mode' (individual and group) delivery was significantly related to greater effectiveness, compared with individual delivery, for initial weight loss (up to 6 months), but not for weight loss maintenance (at a mean 19 months)[38]. However, it is worth noting that there is evidence from individual high quality RCTs (based on data in the evidence tables of the included reviews) that individual, group, and mixed mode interventions can all be effective in changing diet and/or physical activity[31,38,51].

Intervention provider (Additional file 2, Table S11)
There was a lack of high quality evidence in this area for comparisons between specific types of intervention provider. Four associative analyses (two medium, two low) from four reviews found no consistent or significant relationship between intervention provider and weight, physical activity or dietary outcomes at up to 12 months of follow up[38,40,48,51]. However, strong evidence from individual RCTs (based on data in the evidence tables of the included reviews) showed that a wide range of providers (with appropriate training), including doctors, nurses, dieticians/nutritionists, exercise specialists and lay people, can deliver effective interventions for changing diet and/or physical activity[38,40,43,48,51,52].

Intervention intensity (Additional file 2, Table S12)
Definitions of intervention intensity reported in the reviews varied considerably, incorporating frequency and total number of contacts, total contact time, duration of the intervention and the number of behaviour change techniques used.
The frequency and duration of clinical contact varied widely, ranging from 1 to around 80 sessions, delivered daily to monthly and lasting anything from 15 to 150 minutes, over periods ranging from 1 day to 2 years. For instance, one review of 17 weight loss interventions that compared different intervention intensities reported that the median contact frequency was weekly, the median session duration 60 minutes, and the median delivery period 10 weeks[54]. Physical activity interventions are often much more intensive due to a focus on practising the target behaviour (e.g. Shaw et al.[55] report interventions lasting 3 to 12 months with 3 to 5 sessions per week, each lasting a median of 45 minutes).

Weight Loss
Overall, 7 out of 9 analyses of intervention intensity favoured higher intensity interventions. One meta-analysis of ten small RCTs (N = 306) comparing different intervention intensities[54] found medium quality causal evidence that more intensive interventions (those including more behaviour change techniques, more contact time or a longer duration of intervention) generated significantly more weight loss than less intensive interventions (an additional 2.3 kg at a median seven months of follow up). This was supported by a medium quality associative analysis from the same review. However, it was not possible to deduce from the available data which component of intensity drives this relationship.
Medium to low quality evidence from three analyses in three reviews (one medium quality, two low quality) showed a positive association between the total number of contacts and weight loss at 12 to 38 months[46,50,57]. Associative evidence from two analyses in two reviews (one high quality, one low quality) found a relationship between increased frequency of contacts and weight loss at 6 to 15 months of follow up[37,47]. However, two associative analyses (one high and one medium quality) in two reviews[37,38] found no such relationship at 6 to 60 months. Two medium quality associative analyses found mixed evidence (one positive, one negative) on the association between intervention duration and weight loss.

Dietary Change
Two low quality associative analyses within the same review found a positive relationship between number of contacts and self-reported dietary change at 12 months of follow up[34].

Physical Activity
There was a lack of evidence on the relationship between intervention intensity and physical activity outcomes. Two low quality associative analyses in two reviews[33,40] found no clear relationship between intervention intensity (duration) and physical activity outcomes.

Characteristics of the target population (Additional file 2, Table S13)

Gender
Eight associative analyses (three medium quality, five low quality) from six reviews found no consistent association between gender and changes in weight or physical activity at 10 weeks to 16 months of follow up[33,38,41,48,55,58].

Ethnicity
Although there is evidence (within some of the component trials in the reviews examined) that interventions can be effective for a number of ethnic groups,[4] there was very little review-level evidence on the relationship between ethnicity and intervention effectiveness. One associative analysis (low quality) suggested that intervention studies with a higher percentage of white Caucasian participants achieved larger decreases in BMI at a median of 12 weeks of follow up[33]. Another (low quality) associative analysis in the same review reported no association between ethnicity and increased walking.

Age
Associative analyses (one medium quality, one low quality) from two reviews[33,55] suggested that older people lost more weight than younger people at 10.5 to 16 weeks of follow up[33]. Two further (low quality) analyses from two reviews found no relationship between age and physical activity at 3 and 6 months of follow up[33,41].

At risk populations
A range of evidence, including strong causal evidence from two meta-analyses of sub-groups of studies and associative evidence from meta-regression analyses from several further reviews, found that changes in weight and (at least short-term) physical activity are possible in high risk as well as lower risk populations, including high and low weight groups, high cardiovascular risk groups, and sedentary and non-sedentary groups, at between 3 and 36 months of follow up[33,37,38,41-43,48,51]. Five analyses from four reviews provided mixed evidence as to whether targeting of interventions at people who are more sedentary was associated with larger increases in the amount of physical activity (two medium quality analyses (one positive, one negative), three low quality analyses (two negative, one trend))[33,41,48,51].

Diabetes
In two associative analyses (one high quality, one medium quality), effectiveness for weight loss (at 3 to 60 months) was found to be considerably lower for people with type 2 diabetes than for people without type 2 diabetes[37,38].

Weight
Four analyses in four reviews[33,41,42,48] provided mixed associative evidence (two medium quality (one positive, one negative), two low quality analyses (one positive, one negative)) as to whether targeting more overweight people was associated with larger increases in the amount of weight loss achieved. However, one high quality associative analysis showed that people with a higher starting weight achieve better health improvements at 2 to 4.6 years, in terms of a reduced incidence of type 2 diabetes[43].

Setting (Additional file 2, Table S14)
Examples were found (based on data in the evidence tables of included reviews) of effective interventions delivered in a wide range of settings, including healthcare settings, the workplace, the home, and the community[30,34]. Few reviews formally examined the impact of intervention setting on effectiveness. However, one medium quality associative analysis revealed no significant differences in outcomes (either dietary or physical activity change) at six months between interventions in primary care, community and workplace settings[48].

Discussion
This review has, for the first time, systematically identified, synthesised and graded a wide range of evidence about the relationship of intervention content to effectiveness in individual-level interventions for promoting changes in diet and/or physical activity in adults at risk of type 2 diabetes.
Interventions produced significant and clinically meaningful changes in physical activity (typically equivalent to 30-60 minutes of walking per week, for up to 18 months) and in weight (typically 3-5 kg at 12 months, 2-3 kg at 36 months). Greater effectiveness of interventions was causally linked (in meta-analyses and randomised trials which experimentally manipulated the use of these elements) with targeting both diet and physical activity, mobilising social support and the use of well-described/established behaviour change techniques.
Greater effectiveness was also associated (in correlational analyses and non-randomised comparisons) with using a cluster of self-regulatory techniques (goal-setting, prompting self-monitoring, providing feedback on performance, goal review[62,64]), and with providing a higher contact time or frequency of contacts. However, with regard to intensity, the amount of clinical contact in interventions varied widely (see the ranges reported above) and the evidence did not support the recommendation of any particular minimum threshold. The evidence on patterns of effectiveness over time[37] also suggested that there is a need for an increased focus on the use of techniques to support behaviour maintenance.
There were no clear associations between provider, setting, delivery mode, ethnicity or age of the target group and effectiveness. This (and evidence from a range of individual RCTs cited in the reviews examined) suggests that interventions can be delivered successfully by a wide range of providers in a wide range of settings, in group, individual or combined modes, and can be effective for a wide range of ethnic and age groups.
While the use of "established, well-defined behaviour change techniques" was associated with increased effectiveness, it is worth emphasising that individual techniques are rarely applied in isolation and should form part of a coherent intervention model. Therefore, a planned approach to intervention design is recommended, such as "intervention mapping"[65] or other systematic intervention development processes[66] which select intervention techniques to address targeted behaviour change processes (and that are tailored for the target population and setting).
Taken together, the findings suggest a number of recommendations for optimising practice in the development and delivery of interventions to promote changes in diet and/or physical activity, and these are outlined in Table 2. It is hoped that applying these findings will help to meet the growing need for less costly, but nonetheless effective, type 2 diabetes prevention programmes.

Table 2. Recommendations for practice.
Key to grades of recommendations:
A: At least one meta-analysis, systematic review, or RCT rated as 1++ and directly applicable to the target population; or a body of evidence consisting principally of studies rated as 1+, directly applicable to the target population, and demonstrating overall consistency of results.
B: A body of evidence including studies rated as 2++, directly applicable to the target population, and demonstrating overall consistency of results; or extrapolated evidence from studies rated as 1++ or 1+.
C: A body of evidence including studies rated as 2+, directly applicable to the target population and demonstrating overall consistency of results; or extrapolated evidence from studies rated as 2++.
D: Evidence level 3 or 4 (non-analytic studies or expert opinion); or extrapolated evidence from studies rated as 2+.

Although providing a greater degree of depth with regard to intervention components, these findings are consistent with UK guidance for the prevention and treatment of obesity (which recommends engaging social (especially family based) support, and targeting both diet and exercise)[67]. The findings are also consistent with recent guidance from the American Heart Association[68] on the prevention of heart disease in adults aged over 18, which recommends the use of motivational interviewing as well as goal-setting, self-monitoring and a high contact frequency. Recent evidence-based guidance from the US Association of Diabetes Educators also recommends goal-setting, problem-solving (relapse prevention) and self-monitoring of plans (self-regulation) for supporting healthy eating and increased physical activity in people with type 2 diabetes[69]. Our findings may also be more widely generalisable to adults with diagnosed chronic disease (e.g. type 2 diabetes, heart disease) or to apparently healthy adults.

Strengths and limitations
Our review focused only on higher quality systematic reviews. We identified a substantial number of reviews which synthesised data from a large number of RCTs and other studies, in a wide range of age groups, clinical/risk groups and settings. Drawing together these findings in one place has generated a comprehensive, evidence-based overview of which intervention components are most likely to facilitate effectiveness.
However, several challenges affecting the synthesis and interpretation of the available evidence were encountered. One of the limitations most commonly cited by review authors was an inadequate description of behavioural interventions in the individual study reports. This causes difficulties for the reviewer in categorising intervention content and conducting subsequent analyses to relate content to effectiveness. We therefore suggest that future intervention study reports (and reviews of individual studies) use an appropriate taxonomy to describe (and categorise) behaviour change techniques[62]. A major limitation in assessing the utility of specific theories and techniques underpinning interventions is that techniques may not be implemented rigorously or may not faithfully represent the specified theories[62,70]. Notably, none of the 30 reviews that we examined took intervention fidelity into account. Hence, the lack of an association between the use of a stated theory and effectiveness may reflect a lack of good theories or it may reflect poor implementation of theories. Other potentially important sources of bias include measurement issues (especially in relation to the use of self-report data); self-selection of intervention participants; and a failure to consider potential biases due to study quality in some reviews. Furthermore, it is worth noting that with associative evidence, covariates other than those analysed may account for the stated relationships (e.g. the association between intensity and effectiveness might be explained to some extent by lower quality of intervention being associated with lower intensity).
A further potential source of bias which no review accounted for was the low sample size contributing to some of the analyses examined. In particular, it is worth noting that, whilst our recommendation (Table 2) on the usefulness of social support technically merits a grade A (as it is based on level 1+ evidence from a meta-analysis of randomised controlled trials), the total number of participants contributing to the meta-analysis was only 127. If the grading system had taken sample size into account, we might have given this recommendation a lower grade. In interpreting the above information, it should be noted that the analyses considered were in many cases based on overlapping sets of trials (and other studies). It should also be noted that, as this is a review of reviews, we were not able to synthesise or meta-analyse data from individual studies, which may have yielded valuable evidence. It is also worth noting that at the time of the literature search there were no high quality reviews on the use of internet-based interventions, so no evidence is presented in this area.

Implications for practice and policy
Our review has generated clear recommendations on how interventions for promoting lifestyle change within diabetes prevention programmes could be developed or refined to maximise effectiveness (Table 2). Our recommendations go considerably beyond the data on basic effectiveness presented in trials and systematic reviews of diabetes prevention programmes to date[3-8]. They can be useful, for example, in guiding the translation of effective, high-intensity/high resource-use interventions in research contexts into lower-cost (yet still effective) interventions for implementation in clinical practice.

Directions for future research
More rigorous evaluations of the effectiveness and cost-effectiveness of specific intervention components and clusters of techniques for promoting and maintaining change in diet and physical activity are needed. This will require experimental and theoretically driven manipulation of intervention components in well-powered and high-quality trials. Intervention studies need to provide careful descriptions of the hypothesised causal processes for achieving behaviour change and the specific techniques used to modify these processes. Trials should include process analyses to establish the validity or otherwise of the causal models proposed. Research is urgently needed to compare the cost-effectiveness of interventions with different providers, intervention modes and intensities (using clear and consistent conceptualisations of intensity and attempting to disentangle the different elements of intensity, such as contact time, number of contacts and contact frequency). This should include the evaluation of remotely delivered and/or self-delivered (e.g. internet-based) approaches and other approaches that might provide high effectiveness at lower cost. Research is also needed to establish the impact of the intervention setting on effectiveness; to optimise intervention procedures for different ethnic, age and gender groups; to establish effective techniques for improving recruitment to interventions (and to address gender imbalances); and to assess the possible adverse effects of dietary and physical activity interventions.

Conclusions
Interventions to promote changes in diet and/or physical activity in adults with increased risk of diabetes or cardiovascular disease are more likely to be effective if they a) target both diet and physical activity, b) involve the planned use of established behaviour change techniques, c) mobilise social support, and d) have a clear plan for supporting maintenance of behaviour change. They may also benefit from providing a higher frequency or total number of contacts.
To maximise the effectiveness of intervention programmes to promote changes in diet and/or physical activity for diabetes prevention, practitioners and commissioning organisations should carefully consider the inclusion of the above components.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
CG conceived and coordinated the study. KS and CG conducted literature searches, data extraction, review selection, quality rating and evidence grading and drafted the manuscript. CA, WH, MR, PE and PS contributed to the design of the study and interpretation of the results. All authors read and approved the final manuscript.

Pre-publication history
The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/11/119/prepub
[ "Background", "Methods", "Data Sources and Search Strategy", "Review selection", "Inclusion criteria", "Exclusion criteria", "Outcomes", "Study quality assessment", "Data extraction", "Grading of evidence", "Analysis", "Results", "Review characteristics", "Study quality", "Evidence synthesis", "Overall effectiveness (Additional file 2, Table S7)", "Weight Loss", "Physical Activity", "Dietary Intake", "Other Outcomes", "Theoretical basis (Additional file 2, Table S8)", "Behaviour change techniques (Additional file 2, Table S9)", "Use of specific behaviour change techniques", "Motivational interviewing", "Targeting multiple behaviours", "Mode of delivery (Additional file 2, Table S10)", "Intervention provider (Additional file 2, Table S11)", "Intervention intensity (Additional file 2, Table S12)", "Weight Loss", "Dietary Change", "Physical Activity", "Characteristics of the target population (Additional file 2, Table S13)", "Gender", "Ethnicity", "Age", "At risk populations", "Diabetes", "Weight", "Setting (Additional file 2, Table S14)", "Discussion", "Strengths and limitations", "Implications for practice and policy", "Directions for future research", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history", "Supplementary Material" ]
[ "The development of type 2 diabetes is strongly associated with being overweight, obese or physically inactive[1,2]. Large randomised controlled trials (RCTs) have shown that relatively modest changes in lifestyle (increasing fibre (≥15 g/1000 kcal), reducing total fat (< 30% of energy consumed) and saturated fat (< 10% of energy consumed), engaging in moderate physical activity (≥30 mins/day), weight reduction (5%)) can reduce the risk of progression to type 2 diabetes in adults with impaired glucose regulation (also known as pre-diabetes) by around 50%[3-7]. In one study, achieving four or more of the above targets led to zero incidence of type 2 diabetes up to seven years later[8]. Consequently, promoting changes in physical activity and dietary intake is now recommended in national and international guidelines as a first line therapy for preventing type 2 diabetes[9-12].\nA number of diabetes prevention programmes have been developed internationally (e.g. in Finland,[13] Germany,[14,15] the US,[16,17] Australia[18] and China[19]). However, national diabetes prevention strategies are still lacking in many countries. The cost-effectiveness of lifestyle intervention approaches for diabetes prevention is already well established and is favourable in comparison to pharmacological approaches[20-22]. However, most interventions used to date in a research setting are considered to be too intensive for widespread implementation in health services[23]. For example, the US Diabetes Prevention Programme[4] involved 16 individual counselling sessions plus individual coaching and a maintenance programme with further individual and group sessions. A major challenge for healthcare providers therefore is how to achieve the lifestyle changes needed to prevent type 2 diabetes (and its associated cardiovascular risk) without overstretching existing budgets and available resources[24,25].\nIn translating the research evidence into practical programmes it is critical to ensure that the intervention components (i.e. behaviour change techniques and strategies) and characteristics (e.g. setting, delivery mode, intervention provider) most strongly associated with effectiveness are included.\nWe therefore aimed to systematically review existing systematic reviews to summarise the evidence relating the content of interventions for promoting dietary and/or physical activity change to their effectiveness in producing weight and behaviour change. The review focused on evidence relating to individuals at risk of type 2 diabetes due to lifestyle (e.g. inactivity) or clinical risk factors (e.g. overweight, elevated blood pressure).", "[SUBTITLE] Data Sources and Search Strategy [SUBSECTION] One author (KS) searched MEDLINE, EMBASE, CINAHL, PsycInfo, and the Cochrane Library for systematic reviews in the English language, published between January 1998 and May 2008 (the search terms were reviewed by several authors (CG, CA, WH) and are provided in Additional file 1 Table S1). Reference lists of selected reviews and relevant clinical guidelines were also searched and experts in the area were contacted in order to identify unpublished reviews.\nOne author (KS) searched MEDLINE, EMBASE, CINAHL, PsycInfo, and the Cochrane Library for systematic reviews in the English language, published between January 1998 and May 2008 (the search terms were reviewed by several authors (CG, CA, WH) and are provided in Additional file 1 Table S1). 
Reference lists of selected reviews and relevant clinical guidelines were also searched and experts in the area were contacted in order to identify unpublished reviews.\n[SUBTITLE] Review selection [SUBSECTION] Two reviewers (KS, CG) independently examined titles and abstracts. Relevant review articles were obtained in full, and assessed against the inclusion and study quality criteria described below. Inter-reviewer agreement on inclusion was assessed using kappa statistics and any disagreements were resolved through discussion.\n[SUBTITLE] Inclusion criteria [SUBSECTION] 1) Type of study: Systematic reviews and meta-analyses including RCTs, observational studies, case-controlled or other quasi-experimental studies. Comparison groups could include usual care, no intervention or other interventions. 2) Type of intervention: Interventions promoting physical activity and/or dietary change at the individual-level (i.e. interventions delivered to individuals either singly or in group sessions, but not whole-community or whole-population level interventions such as media campaigns or changes in the local environment). 3) Study populations: Adults (18 years and over) at risk of developing type 2 diabetes, selected because they were obese, overweight, sedentary, had hypertension, impaired fasting glucose, impaired glucose tolerance, hyperlipidaemia, metabolic syndrome, polycystic ovarian syndrome, gestational diabetes, a family history of type 2 diabetes or cardiovascular disease, or had been identified as having a high cardiovascular disease risk score (e.g. using a validated risk score such as Q-RISK or Framingham).\n1) Type of study: Systematic reviews and meta-analyses including RCTs, observational studies, case-controlled or other quasi-experimental studies. Comparison groups could include usual care, no intervention or other interventions. 2) Type of intervention: Interventions promoting physical activity and/or dietary change at the individual-level (i.e. interventions delivered to individuals either singly or in group sessions, but not whole-community or whole-population level interventions such as media campaigns or changes in the local environment). 3) Study populations: Adults (18 years and over) at risk of developing type 2 diabetes, selected because they were obese, overweight, sedentary, had hypertension, impaired fasting glucose, impaired glucose tolerance, hyperlipidaemia, metabolic syndrome, polycystic ovarian syndrome, gestational diabetes, a family history of type 2 diabetes or cardiovascular disease, or had been identified as having a high cardiovascular disease risk score (e.g. using a validated risk score such as Q-RISK or Framingham).\n[SUBTITLE] Exclusion criteria [SUBSECTION] 1) Reviews not meeting pre-defined criteria for methodological quality (Additional file 1 Table S2). 2) Reviews which focused on people with existing diabetes, cardiovascular disease, or solely on healthy adults, or which were confined to groups with significant co-morbidities (e.g. arthritis, mental health).\n1) Reviews not meeting pre-defined criteria for methodological quality (Additional file 1 Table S2). 2) Reviews which focused on people with existing diabetes, cardiovascular disease, or solely on healthy adults, or which were confined to groups with significant co-morbidities (e.g. 
Inclusion criteria

1) Type of study: systematic reviews and meta-analyses including RCTs, observational studies, case-controlled or other quasi-experimental studies. Comparison groups could include usual care, no intervention or other interventions.
2) Type of intervention: interventions promoting physical activity and/or dietary change at the individual level (i.e. interventions delivered to individuals either singly or in group sessions, but not whole-community or whole-population level interventions such as media campaigns or changes in the local environment).
3) Study populations: adults (18 years and over) at risk of developing type 2 diabetes, selected because they were obese, overweight or sedentary, had hypertension, impaired fasting glucose, impaired glucose tolerance, hyperlipidaemia, metabolic syndrome, polycystic ovarian syndrome, gestational diabetes, a family history of type 2 diabetes or cardiovascular disease, or had been identified as having a high cardiovascular disease risk score (e.g. using a validated risk score such as Q-RISK or Framingham).

Exclusion criteria

1) Reviews not meeting pre-defined criteria for methodological quality (Additional file 1 Table S2).
2) Reviews which focused on people with existing diabetes or cardiovascular disease, or solely on healthy adults, or which were confined to groups with significant co-morbidities (e.g. arthritis, mental health conditions).

Outcomes

We selected reviews where the primary outcome measure was weight or weight loss (kg or Body Mass Index (BMI), or proportions of people achieving a target weight loss), change in physical activity (e.g. frequency, MET-hours per week) or change in dietary behaviour. Behaviours could be measured objectively (e.g. with accelerometers) or by self-report (e.g. dietary intake questionnaires). Cardio-respiratory fitness was considered a proxy for change in physical activity. As self-report increases the risk of measurement bias,[26,27] we have highlighted findings based on self-report in the data tables (Additional file 2 Tables S7-S14). We also examined papers for other outcomes of interest in relation to change in weight, diet or physical activity behaviour, or in relation to progression to type 2 diabetes.
Study quality assessment

Review quality was rated independently by two authors (KS, CG) for a sub-sample (35 out of 107) of the articles identified as potentially relevant, using the Overview Quality Assessment Questionnaire (OQAQ;[28] Additional file 1 Table S2). Thereafter, review quality was rated by one researcher (KS) and verified by another (CG). Reviews were included if their OQAQ score was 14 or more (possible range 0-18) and if they scored at least one point on either of the two OQAQ criteria concerning the assessment of study quality and its use in the analyses (this was intended to maximise the likely quality of evidence underlying the review-level analyses). A percentage score was calculated for inter-rater agreement (defined as ≤1 point of variation in OQAQ scores) and any disagreements were resolved by discussion.
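The inclusion rule just described can be read as a two-part check (total OQAQ score plus the two quality-use items). The sketch below is our own encoding of that rule; only the numeric thresholds are taken from the text.

```python
# Illustrative encoding of the review-inclusion rule described above.
# Field names are ours; only the thresholds come from the text.

def include_review(oqaq_total, quality_assessed_score, quality_used_score):
    """Return True if a review passes the quality threshold for inclusion.

    oqaq_total: total OQAQ score (possible range 0-18)
    quality_assessed_score / quality_used_score: points on the two OQAQ items
    about assessing study quality and taking it into account in analyses.
    """
    meets_total = oqaq_total >= 14
    meets_quality_items = quality_assessed_score >= 1 or quality_used_score >= 1
    return meets_total and meets_quality_items

def raters_agree(score_a, score_b):
    """Inter-rater 'agreement' as defined above: scores within 1 point."""
    return abs(score_a - score_b) <= 1

if __name__ == "__main__":
    print(include_review(15, 1, 0))  # True
    print(include_review(16, 0, 0))  # False: no points on the quality items
    print(raters_agree(15, 14))      # True
```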
Data extraction

We extracted data on the effectiveness of interventions and on the relationship of effectiveness to seven pre-defined intervention components. These were: theoretical basis (i.e. we extracted analyses relating effectiveness to the use of any stated theory of behaviour or behaviour change); behaviour change techniques used (e.g. the use of specific techniques such as goal-setting or problem-solving, or the planned use of some clearly defined set of behaviour change techniques; see Table 1 for examples); mode of delivery (e.g. group-based, individual, self-delivery, mixed-mode); intervention provider (e.g. general practitioner, counsellor); intensity (e.g. number of sessions, total contact time); characteristics of the target population (e.g. age, ethnicity, risk state); and setting (e.g. primary care, workplace). Data were extracted against a data extraction template by one author (KS) and checked by another (CG) with reference to the full text of the article. Extracted data also included inclusion and exclusion criteria, reported analyses and analysis type.

Table 1. Definitions of 'established behaviour change techniques'.
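The seven pre-defined intervention components lend themselves to a simple extraction template. The sketch below shows one possible representation; the field names and the example record are ours, not the template actually used in the review.

```python
# One possible data-extraction template for the seven pre-defined
# intervention components (field names and the example record are ours).
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterventionRecord:
    theoretical_basis: str = ""                                   # e.g. "Social Cognitive Theory" or "none stated"
    behaviour_change_techniques: List[str] = field(default_factory=list)
    mode_of_delivery: str = ""                                    # e.g. "group", "individual", "mixed"
    provider: str = ""                                            # e.g. "dietitian", "nurse", "counsellor"
    intensity_sessions: int = 0                                   # number of contact sessions
    intensity_contact_hours: float = 0.0                          # total contact time
    target_population: str = ""                                   # e.g. "overweight adults"
    setting: str = ""                                             # e.g. "primary care", "workplace"

example = InterventionRecord(
    theoretical_basis="none stated",
    behaviour_change_techniques=["goal-setting", "self-monitoring"],
    mode_of_delivery="group",
    provider="dietitian",
    intensity_sessions=6,
    intensity_contact_hours=9.0,
    target_population="overweight adults",
    setting="primary care",
)
```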
Grading of evidence

An evidence grade was given to each reported analysis, based on the Scottish Intercollegiate Guidelines Network (SIGN) evidence grading system[29]. This system grades the risk of bias associated with a particular piece of evidence on a hierarchy from meta-analysis and RCT evidence (grade 1) down to expert opinion (grade 4), with additional indicators (++, + or -) to denote methodological quality. The SIGN system was modified, as our review aimed to identify the relative effectiveness of intervention components, rather than effectiveness per se (see Additional file 1 Table S3 for full details). Although the SIGN evidence grading uses an alpha-numeric system (1++, 1+, 1-, 2++, 2+, 2-), for ease of reading we have converted this to a text-based format. For each analysis the quality of the evidence (the degree of confidence that the risk of bias is low) is described as "high (++)", "medium (+)" or "low (-)". Each analysis is also categorised as either "causal" evidence (SIGN grade 1; evidence from meta-analyses or summaries of RCTs where the component or characteristic of interest was experimentally manipulated) or "associative" evidence (SIGN grade 2; evidence from correlational or observational analyses). We also applied a category of "very low quality" for analyses with very low apparent power (total N < 100). The reporting that follows excludes this very low quality evidence, although it is included in the supplementary data tables for completeness.
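The conversion from SIGN's alpha-numeric codes to the text format used in this synthesis can be expressed as a simple lookup, including the additional "very low quality" rule for analyses with total N < 100. The function below is an illustrative encoding of that description, not part of the SIGN system itself.

```python
# Mapping from SIGN alpha-numeric grades to the text format used in the
# narrative synthesis, plus the additional "very low quality" rule.

QUALITY = {"++": "high", "+": "medium", "-": "low"}
DESIGN = {"1": "causal", "2": "associative"}

def grade_to_text(sign_grade, total_n):
    """e.g. grade_to_text('1++', 450) -> 'high quality, causal'."""
    if total_n < 100:
        return "very low quality"
    design = DESIGN[sign_grade[0]]
    quality = QUALITY[sign_grade[1:]]
    return f"{quality} quality, {design}"

if __name__ == "__main__":
    print(grade_to_text("1++", 450))  # high quality, causal
    print(grade_to_text("2+", 300))   # medium quality, associative
    print(grade_to_text("1-", 80))    # very low quality (N < 100)
```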
Analysis

No statistical analyses or meta-analyses were conducted. Instead, the existing analyses reported in the articles reviewed were extracted and reported in a systematic format (Additional file 2 Tables S7 to S14). Each analysis was graded using the adapted SIGN criteria described above, and a narrative synthesis is presented below, indicating both the quality of the evidence (low, medium, high) and whether it is causal or associative in nature.

In accordance with reporting guidelines for systematic reviews, a PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist is available for this review (Additional file 3).
Results

Searches identified 3856 potentially relevant articles. Following review of titles and abstracts, 96 articles were retrieved and quality-assessed. An additional 11 articles were identified through reference lists and grey literature. Of these 107 articles, 30 met both the selection and quality criteria (Figure 1); these are identified by an asterisk in the reference list[30-59]. The inter-rater reliability (kappa) for applying the review selection criteria was 0.71 (95% CI: 0.61 to 0.80), and the proportion of inter-reviewer agreement on review quality was 0.70 (95% CI: 0.55 to 0.85).

Figure 1. Flow diagram of study selection.

Review characteristics

The characteristics of the included and excluded reviews are summarised in Additional file 1 Tables S4 and S5. Ten reviews examined physical activity interventions, three examined dietary interventions and seventeen examined both. Reviews included data from a range of populations (e.g. sedentary, overweight, obese, impaired glucose tolerance) and delivery settings (e.g. home based, leisure centre based, primary care, workplace), and used a variety of descriptive, meta-analytic and meta-regression analyses to investigate the association of intervention components with effectiveness. We identified 129 analyses of relationships between intervention components and effectiveness, and 55 analyses of intervention effectiveness (Additional file 2 Tables S7 to S14). The dates of the published studies included in the reviews ranged from 1966 to 2008.
Study quality

The methodological quality of the included reviews (Additional file 1 Tables S4, S6) was generally good (median OQAQ score = 15.6). The most common methodological weaknesses were failure to use study quality data to inform the analyses (e.g. by sensitivity analysis, or by constructing separate analyses excluding low quality trials) and potential bias in the selection of articles (e.g. not using independent assessors).

Evidence synthesis

The extracted analyses and evidence grades for each analysis are presented in Additional file 2 Tables S7 to S14. The findings can be summarised as follows:

Overall effectiveness (Additional file 2, Table S7)

Weight Loss

High quality causal evidence (grade 1++) from eight meta-analyses of RCTs from four reviews showed that interventions to promote changes in diet (or both diet and physical activity) produced moderate and clinically meaningful effects on weight loss (typically 3-5 kg at 12 months, 2-3 kg at 36 months)[37,38,42,50]. The effectiveness of such interventions (as well as physical activity only interventions) in producing weight loss was further supported by medium and low quality causal evidence (grades 1+ and 1-) from 14 meta-analyses and summaries of RCTs from six reviews (eight medium, six low quality analyses)[31,39,49,54,57,59].
Physical Activity

High quality causal evidence was found from four meta-analyses of RCTs in two reviews that physical activity interventions can produce moderate changes in self-reported physical activity (standardised mean difference around 0.3; odds ratio for achieving healthy activity targets around 1.2 to 1.3) and cardio-respiratory fitness (standardised mean difference around 0.5) at a minimum of 6 months of follow up[41,59]. This was supported by lower quality causal evidence, from six meta-analyses of RCTs and summaries of RCTs and other studies (three medium and three low quality analyses) in three systematic reviews, that interventions to increase physical activity increased self-reported physical activity (typically equivalent to 30-60 minutes of walking per week) at a median of 6 weeks to 19 months of follow up[38,40,51]. However, it is worth noting that there were few examples of trials with successful outcomes at more than 12 months.
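The standardised mean differences quoted above can be read as Cohen's d computed with a pooled standard deviation. A minimal worked example is shown below; the group summaries are invented solely to illustrate the calculation.

```python
# Standardised mean difference (Cohen's d with a pooled SD), as commonly
# used for the effect sizes quoted above. Group summaries are invented.
import math

def standardised_mean_difference(mean_tx, sd_tx, n_tx, mean_ctrl, sd_ctrl, n_ctrl):
    pooled_sd = math.sqrt(
        ((n_tx - 1) * sd_tx ** 2 + (n_ctrl - 1) * sd_ctrl ** 2) / (n_tx + n_ctrl - 2)
    )
    return (mean_tx - mean_ctrl) / pooled_sd

if __name__ == "__main__":
    # e.g. self-reported activity (MET-hours/week) in intervention vs control
    d = standardised_mean_difference(12.0, 8.0, 120, 9.5, 8.5, 118)
    print(f"SMD = {d:.2f}")  # roughly 0.3 for these invented figures
```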
Dietary Intake

Medium and lower quality causal evidence from meta-analyses and descriptive summaries of RCTs (nine analyses from three separate reviews: six medium, three low) found positive changes in self-reported diet (calorie, fat, fibre, fruit and vegetable intake) at 6 to 19 months of follow up for dietary interventions[38,34,44].

Other Outcomes

High quality causal evidence (grade 1++) from one meta-analysis of RCTs[43] showed that interventions to promote changes in diet or physical activity (or both) produced moderate and clinically meaningful effects on the risk of progression to type 2 diabetes (relative risk reduction of 49% at 3.4 years) in people with impaired glucose regulation.

One review which examined variations in effectiveness over time[37] showed that weight loss tended to reverse once interventions ceased or moved from an active to a maintenance phase (net weight loss during the active phase of 0.08 BMI units per month; net weight gain during the maintenance phase of 0.03 BMI units per month).
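The 49% relative risk reduction quoted above corresponds to one minus the risk ratio. The sketch below shows the arithmetic; the incidence figures are invented and are not taken from the cited meta-analysis.

```python
# Relative risk reduction (RRR = 1 - RR) from cumulative incidences.
# The incidence figures below are invented purely to show the arithmetic.

def relative_risk_reduction(risk_intervention, risk_control):
    relative_risk = risk_intervention / risk_control
    return 1.0 - relative_risk

if __name__ == "__main__":
    # e.g. 10% progressed to type 2 diabetes with intervention vs 20% without
    rrr = relative_risk_reduction(0.10, 0.20)
    print(f"Relative risk reduction = {rrr:.0%}")  # 50%
```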
Theoretical basis (Additional file 2, Table S8)

One meta-regression analysis provided medium quality associative evidence (grade 2+) suggesting that interventions with an explicitly stated theoretical basis (e.g. Social Cognitive Theory,[60] Theory of Planned Behaviour[61]) were no more effective in producing changes in either weight or in combined dietary and physical activity outcomes than interventions with no stated theoretical basis[38]. However, four meta-regression analyses (all medium quality associative analyses) in two reviews[38,48] did find an association between the use of a theoretically specified cluster of 'self-regulatory' intervention techniques (specific goal-setting, prompting self-monitoring, providing feedback on performance, goal review) and increased effectiveness in terms of a) weight loss, b) change in dietary outcomes, c) change in physical activity and d) combined (standardised mean difference for either dietary change or physical activity) outcomes.
Behaviour change techniques (Additional file 2, Table S9)

Categorisation of interventions varied greatly between reviews, with categories often conceptually overlapping and vaguely defined (e.g. diet vs. exercise vs. behavioural intervention). Despite this, we have summarised evidence on the use of what we have called "established, well defined behaviour change techniques", based on those reviews where clear and specific definitions were provided (see Table 1 for definitions). Further definition of the specific behaviour change techniques cited in Table 1, and of those mentioned in the text below, can be found in a recent taxonomy of behaviour change techniques[62].

Causal evidence from one medium quality meta-analysis indicated that change in weight was greater when established, well defined behaviour change techniques were added to interventions (e.g. when dietary advice plus a well-defined behavioural intervention using established behaviour change techniques was compared with dietary advice alone). The weight loss achieved by adding established behaviour change techniques to interventions was 4.5 kg at a median of 6 months of follow up[54]. This was supported by two associative analyses (one medium and one low quality) which compared the results of different groups of studies in which the interventions either did or did not use established, well-defined behaviour change techniques: using established behaviour change techniques was associated with increased weight loss (2.5 to 5.5 kg) compared with non-behavioural interventions (0.1 to 0.9 kg)[46,47].

Five low to medium quality associative analyses in two reviews attempted to relate the number of behaviour change techniques used to effectiveness in terms of weight loss or changes in diet or physical activity. The evidence was equivocal: the pattern of data suggested a possible association, but only one analysis approached significance[38,48].
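Analyses relating the number of behaviour change techniques to effectiveness presuppose that each intervention has first been coded against a common taxonomy. The sketch below illustrates that coding-and-counting step; the technique labels and trial names are invented examples, not data from the reviews.

```python
# Minimal sketch of coding interventions against a shared list of behaviour
# change techniques and counting techniques per intervention. Labels are
# illustrative; they loosely follow the kind of taxonomy cited above.

TAXONOMY = {
    "goal-setting", "self-monitoring", "feedback on performance",
    "goal review", "social support", "relapse prevention",
    "prompting practice", "individual tailoring",
}

interventions = {
    "Trial A": {"goal-setting", "self-monitoring", "feedback on performance"},
    "Trial B": {"social support"},
    "Trial C": {"goal-setting", "self-monitoring", "goal review", "relapse prevention"},
}

for name, techniques in interventions.items():
    unknown = techniques - TAXONOMY
    if unknown:
        raise ValueError(f"{name}: techniques not in taxonomy: {unknown}")
    print(f"{name}: {len(techniques)} technique(s) coded")
```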
Use of specific behaviour change techniques

High quality causal evidence was found that adding social support to interventions (usually from family members) provided an additional weight loss of 3.0 kg at up to 12 months (compared with the same intervention with no social support element)[31].

Medium to low quality associative evidence (from three meta-regression analyses and two associative analyses in three reviews) suggested that effectiveness for initial behaviour change (i.e. change in weight, diet or physical activity) was associated with using the following techniques (definitions can be found in a recent taxonomy of behaviour change techniques[62]):
1) For dietary change: providing instruction, establishing self-monitoring of behaviour, and use of relapse prevention techniques[38,48].
2) For physical activity change: prompting practice, establishing self-monitoring of behaviour, and individual tailoring (e.g. of information or counselling content)[38,40,48].
One review also provided medium quality causal evidence (a descriptive summary of individual RCT findings) that brief advice, which usually included goal-setting, led to an increase in walking activity (27 mins/week of walking at 12 months of follow up)[51]. Goal-setting alongside the use of pedometers was also associated with increased walking (see below).

Further medium quality associative evidence suggested that increased maintenance of behaviour change was associated with the use of time management techniques (for physical activity) and encouraging self-talk (for both dietary change and physical activity)[38].

Three reviews examined interventions that used pedometers (i.e. self-monitoring of physical activity) to promote walking. Medium quality causal evidence (two analyses from two reviews) supported the effectiveness of pedometer based interventions for increasing walking activity[33,51] (mean increase of 2004 steps per day at a median of 11 weeks; median increase in time spent walking of 54 min per week at a median of 13 weeks). It must be noted that the vast majority of the interventions included in these meta-analyses used either step-goals or step diaries (or both) alongside the pedometer, so the evidence does not support the use of pedometers in isolation from these additional techniques. Indeed, associative analyses from one review[33] suggested that the use of a) a step diary (one low quality analysis) and b) goal-setting (one low and one medium quality analysis) in combination with a pedometer was associated with increased walking. Medium to high quality associative evidence (based on meta-analysis of only the intervention arms of studies) from two reviews[33,52] suggested that small changes in weight might also be achievable with pedometer based interventions (e.g. change in BMI of 0.38 kg/m2 at 11 weeks).
Motivational interviewing

Motivational interviewing is a distinct combination of behaviour change techniques (including decisional balance and relapse prevention techniques) delivered in a specific style (using patient-centred empathy building techniques, such as rolling with resistance, affirmation and reflective listening)[63]. High quality causal evidence from one meta-analysis of RCTs[53] found that motivational interviewing was significantly more effective than traditional advice-giving for initiating changes in weight (producing a net difference of 0.72 BMI units compared with traditional advice-giving) at 3 to 24 months of follow up (mostly under 6 months). A further meta-analysis of RCTs[35] provided medium quality causal evidence of the effectiveness of motivational interviewing for a combined physical activity and dietary outcome at up to 4 months of follow up (standardised mean difference 0.53).
Targeting multiple behaviours

Causal evidence from nine analyses in four reviews (one high, four medium and four low quality) showed that interventions which targeted both physical activity and diet, rather than only one of these behaviours, produced greater weight change (additional weight loss of around 2-3 kg at up to 12 months)[31,36,37,54].
change in weight, diet or physical activity was associated with using the following techniques (NB: definitions of these can be found in a recent taxonomy of behaviour change techniques[62]): 1) For dietary change: providing instruction, establishing self-monitoring of behaviour, use of relapse prevention techniques[38,48]. 2) For physical activity change: prompting practice, establishing self-monitoring of behaviour, individual tailoring (e.g. of information or counselling content)[38,40,48]. One review also provided medium quality causal evidence (a descriptive summary of individual RCT findings) that brief advice, which usually included goal-setting, led to an increase in walking activity (27 mins/week walking at 12 months of follow up)[51]. Goal-setting alongside the use of pedometers was also associated with increased walking (see below).\nFurther medium quality associative evidence suggested that increased maintenance of behaviour change was associated with the use of time management techniques (for physical activity) and encouraging self-talk (for both dietary change and physical activity)[38].\nThree reviews examined interventions that used pedometers (i.e. self-monitoring of physical activity) to promote walking: Medium quality causal evidence (two analyses from two reviews) supported the effectiveness of pedometer based interventions for increasing walking activity[33,51] (mean increase of 2004 steps per day at a median 11 weeks; median increase in time walking of +54 min per week at a median 13 weeks). It must be noted that the vast majority of the interventions included in these meta-analyses included either step-goals or step diaries (or both) alongside the use of pedometers, so the evidence does not support the use of pedometers in isolation from these additional techniques. Indeed, associative analyses from one review[33] suggested that the use of a) a step diary (one low quality analysis) and b) goal-setting (one low and one medium quality analysis) in combination with use of a pedometer was associated with increased walking. Medium to high quality associative evidence (based on meta-analysis of only the intervention arms of studies) from two reviews[33,52] suggested that small changes in weight might also be achievable with pedometer based interventions (e.g. change in BMI of 0.38 kg/m2 at 11 weeks).\nHigh quality causal evidence was found that adding social support to interventions (usually from family members) provided an additional weight loss of 3.0 kg at up to 12 months (compared with the same intervention with no social support element)[31].\nMedium to low quality associative evidence (from three meta-regression analyses and two associative analyses in three reviews) suggested that effectiveness for initial behaviour change (i.e. change in weight, diet or physical activity was associated with using the following techniques (NB: definitions of these can be found in a recent taxonomy of behaviour change techniques[62]): 1) For dietary change: providing instruction, establishing self-monitoring of behaviour, use of relapse prevention techniques[38,48]. 2) For physical activity change: prompting practice, establishing self-monitoring of behaviour, individual tailoring (e.g. of information or counselling content)[38,40,48]. 
One review also provided medium quality causal evidence (a descriptive summary of individual RCT findings) that brief advice, which usually included goal-setting, led to an increase in walking activity (27 mins/week walking at 12 months of follow up)[51]. Goal-setting alongside the use of pedometers was also associated with increased walking (see below).\nFurther medium quality associative evidence suggested that increased maintenance of behaviour change was associated with the use of time management techniques (for physical activity) and encouraging self-talk (for both dietary change and physical activity)[38].\nThree reviews examined interventions that used pedometers (i.e. self-monitoring of physical activity) to promote walking: Medium quality causal evidence (two analyses from two reviews) supported the effectiveness of pedometer based interventions for increasing walking activity[33,51] (mean increase of 2004 steps per day at a median 11 weeks; median increase in time walking of +54 min per week at a median 13 weeks). It must be noted that the vast majority of the interventions included in these meta-analyses included either step-goals or step diaries (or both) alongside the use of pedometers, so the evidence does not support the use of pedometers in isolation from these additional techniques. Indeed, associative analyses from one review[33] suggested that the use of a) a step diary (one low quality analysis) and b) goal-setting (one low and one medium quality analysis) in combination with use of a pedometer was associated with increased walking. Medium to high quality associative evidence (based on meta-analysis of only the intervention arms of studies) from two reviews[33,52] suggested that small changes in weight might also be achievable with pedometer based interventions (e.g. change in BMI of 0.38 kg/m2 at 11 weeks).\n[SUBTITLE] Motivational interviewing [SUBSECTION] Motivational interviewing is a distinct combination of behaviour change techniques (including decisional balance and relapse prevention techniques) delivered in a specific style (using patient centred empathy building techniques, such as rolling with resistance; affirmation and reflective listening)[63]. High quality causal evidence from one meta-analysis of RCTs[53] found that motivational interviewing was significantly more effective than traditional advice-giving for initiating changes in weight (producing a net difference of 0.72 BMI units compared with traditional advice-giving) at 3 to 24 months of follow up (mostly under 6 months). A further meta-analysis of RCTs[35] provided medium quality causal evidence of the effectiveness of motivational interviewing for a combined physical activity and dietary outcome, at up to 4 months of follow up (Standardised Mean Difference 0.53).\nMotivational interviewing is a distinct combination of behaviour change techniques (including decisional balance and relapse prevention techniques) delivered in a specific style (using patient centred empathy building techniques, such as rolling with resistance; affirmation and reflective listening)[63]. High quality causal evidence from one meta-analysis of RCTs[53] found that motivational interviewing was significantly more effective than traditional advice-giving for initiating changes in weight (producing a net difference of 0.72 BMI units compared with traditional advice-giving) at 3 to 24 months of follow up (mostly under 6 months). 
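To translate BMI-denominated results such as the 0.38 kg/m2 figure above into weight, the BMI change can be multiplied by height squared. A minimal worked example, assuming an illustrative height of 1.70 m (this height is an assumption for illustration only and is not a value reported in the reviews):

\[ \Delta \text{weight} = \Delta \text{BMI} \times h^{2} = 0.38\ \mathrm{kg/m^{2}} \times (1.70\ \mathrm{m})^{2} \approx 1.1\ \mathrm{kg} \]

so the pedometer-associated BMI change corresponds to a weight change of roughly 1 kg for an adult of around average height.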
Mode of delivery (Additional file 2, Table S10)

The evidence from five reviews of dietary and/or physical activity interventions was mixed. Five associative analyses (three medium and two low quality) from four reviews failed to find a clear association between effectiveness and mode of intervention delivery for weight loss, dietary change or physical activity change[38,46,48,51]. One review found medium quality associative evidence that 'mixed mode' (individual and group) delivery was significantly related to greater effectiveness, compared with individual delivery, for initial weight loss (up to 6 months), but not for weight loss maintenance (at a mean 19 months)[38]. However, it is worth noting that there is evidence from individual high quality RCTs (based on data in the evidence tables of the included reviews) that individual, group and mixed mode interventions can all be effective in changing diet and/or physical activity[31,38,51].

Intervention provider (Additional file 2, Table S11)

There was a lack of high quality evidence for comparisons between specific types of intervention provider. Four associative analyses (two medium, two low quality) from four reviews found no consistent or significant relationship between intervention provider and weight, physical activity or dietary outcomes at up to 12 months of follow up[38,40,48,51]. However, strong evidence from individual RCTs (based on data in the evidence tables of the included reviews) showed that a wide range of providers with appropriate training, including doctors, nurses, dieticians/nutritionists, exercise specialists and lay people, can deliver effective interventions for changing diet and/or physical activity[38,40,43,48,51,52].

Intervention intensity (Additional file 2, Table S12)

Definitions of intervention intensity reported in the reviews varied considerably, incorporating the frequency and total number of contacts, total contact time, duration of the intervention and the number of behaviour change techniques used. The frequency and duration of clinical contact varied widely, ranging from 1 to around 80 sessions, delivered daily to monthly, lasting anything from 15 to 150 minutes, over periods ranging from 1 day to 2 years. For instance, one review of 17 weight loss interventions that compared different intervention intensities reported a median contact frequency of weekly, a median session duration of 60 minutes and a median delivery period of 10 weeks[54]. Physical activity interventions are often much more intensive because of their focus on practising the target behaviour (e.g. Shaw et al.[55] report interventions lasting 3 to 12 months with 3 to 5 sessions per week, each lasting a median of 45 minutes).

Weight Loss

Overall, 7 out of 9 analyses of intervention intensity favoured higher intensity interventions. One meta-analysis of ten small RCTs (N = 306) comparing different intervention intensities[54] found medium quality causal evidence that more intensive interventions (those including more behaviour change techniques, more contact time or a longer duration of intervention) generated significantly more weight loss than less intensive interventions (an additional 2.3 kg at a median of seven months of follow up). This was supported by a medium quality associative analysis from the same review. However, it was not possible to deduce from the available data which component of intensity drives this relationship.

Medium to low quality evidence from three analyses in three reviews (one medium quality, two low quality) showed a positive association between the total number of contacts and weight loss at 12 to 38 months[46,50,57]. Associative evidence from two analyses in two reviews (one high quality, one low quality) found a relationship between increased frequency of contacts and weight loss at 6 to 15 months of follow up[37,47]. However, two associative analyses (one high and one medium quality) in two reviews[37,38] found no such relationship at 6 to 60 months. Two medium quality associative analyses found mixed evidence (one positive, one negative) on the association between intervention duration and weight loss.

Dietary Change

Two low quality associative analyses within the same review found a positive relationship between the number of contacts and self-reported dietary change at 12 months of follow up[34].

Physical Activity

There was a lack of evidence on the relationship between intervention intensity and physical activity outcomes. Two low quality associative analyses in two reviews[33,40] found no clear relationship between intervention intensity (duration) and physical activity outcomes.
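As a rough indication of what these medians imply about total contact time (an illustrative calculation using only the median values from the review cited above[54]; individual interventions varied widely around these figures):

\[ \text{total contact time} \approx 10\ \text{weeks} \times 1\ \tfrac{\text{session}}{\text{week}} \times 60\ \tfrac{\text{min}}{\text{session}} = 600\ \text{min} \approx 10\ \text{hours} \]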
Characteristics of the target population (Additional file 2, Table S13)

Gender

Eight associative analyses (three medium quality, five low quality) from six reviews found no consistent association between gender and changes in weight or physical activity at 10 weeks to 16 months of follow up[33,38,41,48,55,58].

Ethnicity

Although there is evidence (within some of the component trials in the reviews examined) that interventions can be effective for a number of ethnic groups[4], there was very little review-level evidence on the relationship between ethnicity and intervention effectiveness. One associative analysis (low quality) suggested that intervention studies with a higher percentage of white Caucasian participants achieved larger decreases in BMI at a median of 12 weeks of follow up[33]. Another (low quality) associative analysis in the same review reported no association between ethnicity and increased walking.

Age

Associative analyses (one medium quality, one low quality) from two reviews[33,55] suggested that older people lost more weight than younger people at 10.5 to 16 weeks of follow up[33]. Two further (low quality) analyses from two reviews found no relationship between age and physical activity at 3 and 6 months of follow up[33,41].

At risk populations

A range of evidence, including strong causal evidence from two meta-analyses of sub-groups of studies and associative evidence from meta-regression analyses in several further reviews, indicated that changes in weight and (at least short-term) physical activity are possible in high risk as well as lower risk populations, including high and low weight groups, high cardiovascular risk groups, and sedentary and non-sedentary groups, at between 3 and 36 months of follow up[33,37,38,41-43,48,51]. Five analyses from four reviews provided mixed evidence as to whether targeting interventions at people who are more sedentary was associated with larger increases in the amount of physical activity (two medium quality analyses: one positive, one negative; three low quality analyses: two negative, one trend)[33,41,48,51].

Diabetes

In two associative analyses (one high quality, one medium quality), effectiveness for weight loss (at 3 to 60 months) was found to be considerably lower for people with type 2 diabetes than for people without type 2 diabetes[37,38].

Weight

Four analyses in four reviews[33,41,42,48] provided mixed associative evidence (two medium quality analyses: one positive, one negative; two low quality analyses: one positive, one negative) as to whether targeting more overweight people was associated with greater weight loss. However, one high quality associative analysis showed that people with a higher starting weight achieve better health improvements at 2 to 4.6 years, in terms of a reduced incidence of type 2 diabetes[43].

Setting (Additional file 2, Table S14)

Examples were found (based on data in the evidence tables of included reviews) of effective interventions delivered in a wide range of settings, including healthcare settings, the workplace, the home and the community[30,34]. Few reviews formally examined the impact of intervention setting on effectiveness. However, one medium quality associative analysis revealed no significant differences in outcomes (either dietary or physical activity change) at six months between interventions in primary care, community and workplace settings[48].
The characteristics of the included and excluded reviews are summarised in Additional file 1 Tables S4 and S5. Ten reviews examined physical activity interventions, three examined dietary interventions and seventeen examined both. The reviews included data from a range of populations (e.g. sedentary, overweight, obese, impaired glucose tolerance) and delivery settings (e.g. home based, leisure centre based, primary care, workplace) and used a variety of descriptive, meta-analytic and meta-regression analyses to investigate the association of intervention components with effectiveness. We identified 129 analyses of relationships between intervention components and effectiveness, and 55 analyses of intervention effectiveness (Additional file 2 Tables S7 to S14). The dates of the published studies included in the reviews ranged from 1966 to 2008.

The methodological quality of the included reviews (Additional file 1 Tables S4, S6) was generally good (median OQAQ score = 15.6). The most common methodological weaknesses were failure to use study quality data to inform analyses (e.g. by sensitivity analysis, or by constructing separate analyses which excluded low quality trials) and potential bias in the selection of articles (e.g. not using independent assessors).

The extracted analyses and evidence grades for each analysis are presented in Additional file 2 Tables S7 to S14. The findings can be summarised as follows:

Weight Loss

High quality causal evidence (grade 1++) from eight meta-analyses of RCTs in four reviews showed that interventions to promote changes in diet (or both diet and physical activity) produced moderate and clinically meaningful effects on weight loss (typically 3-5 kg at 12 months and 2-3 kg at 36 months)[37,38,42,50]. The effectiveness of such interventions (as well as physical activity only interventions) in producing weight loss was further supported by medium and low quality causal evidence (grades 1+ and 1-) from 14 meta-analyses and summaries of RCTs in six reviews (eight medium and six low quality analyses)[31,39,49,54,57,59].

Physical Activity

High quality causal evidence was found from four meta-analyses of RCTs in two reviews that physical activity interventions can produce moderate changes in self-reported physical activity (standardised mean difference around 0.3; odds ratio for achieving healthy activity targets around 1.2 to 1.3) and cardio-respiratory fitness (standardised mean difference around 0.5) at a minimum of 6 months of follow up[41,59]. This was supported by lower quality causal evidence from six meta-analyses of RCTs and summaries of RCTs and other studies (three medium and three low quality analyses) in three systematic reviews, showing that interventions to increase physical activity increased self-reported physical activity (typically equivalent to 30-60 minutes of walking per week) at a median of 6 weeks to 19 months of follow up[38,40,51]. However, it is worth noting that there were few examples of trials with successful outcomes at more than 12 months.

Dietary Intake

Medium and low quality causal evidence from meta-analyses and descriptive summaries of RCTs (nine analyses from three separate reviews: six medium, three low quality) showed positive changes in self-reported diet (calorie, fat, fibre, and fruit and vegetable intake) at 6 to 19 months of follow up for dietary interventions[34,38,44].

Other Outcomes

High quality causal evidence (grade 1++) from one meta-analysis of RCTs[43] showed that interventions to promote changes in diet or physical activity (or both) produced moderate and clinically meaningful effects on the risk of progression to type 2 diabetes (relative risk reduction of 49% at 3.4 years) in people with impaired glucose regulation.

One review which examined variations in effectiveness over time[37] showed that weight loss tended to reverse once interventions ceased or moved from an active to a maintenance phase (net weight loss during the active phase of 0.08 BMI units per month; net weight gain during the maintenance phase of 0.03 BMI units per month).
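For readers less familiar with the effect measures quoted above, the two most commonly used summary statistics can be written as follows (these are the standard definitions, not calculations specific to any one review):

\[ \text{SMD} = \frac{\bar{x}_{\text{intervention}} - \bar{x}_{\text{control}}}{SD_{\text{pooled}}}, \qquad \text{RRR} = 1 - \frac{\text{risk}_{\text{intervention}}}{\text{risk}_{\text{control}}} \]

On these scales, a standardised mean difference of around 0.3 to 0.5 is conventionally interpreted as a small to moderate effect, and a relative risk reduction of 49% means that the risk of progression to type 2 diabetes in the intervention groups was roughly half that in the control groups over the follow-up period.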
One meta-regression analysis provided medium quality associative evidence (grade 2+) suggesting that interventions with an explicitly stated theoretical basis (e.g. Social Cognitive Theory[60] or the Theory of Planned Behaviour[61]) were no more effective in producing changes in either weight or in combined dietary and physical activity outcomes than interventions with no stated theoretical basis[38]. However, four meta-regression analyses (all medium quality associative analyses) in two reviews[38,48] did find an association between the use of a theoretically specified cluster of 'self-regulatory' intervention techniques (specific goal-setting, prompting self-monitoring, providing feedback on performance, and goal review) and increased effectiveness in terms of a) weight loss, b) change in dietary outcomes, c) change in physical activity and d) combined (standardised mean difference for either dietary change or physical activity) outcomes.
One review also provided medium quality causal evidence (a descriptive summary of individual RCT findings) that brief advice, which usually included goal-setting, led to an increase in walking activity (27 mins/week walking at 12 months of follow up)[51]. Goal-setting alongside the use of pedometers was also associated with increased walking (see below).\nFurther medium quality associative evidence suggested that increased maintenance of behaviour change was associated with the use of time management techniques (for physical activity) and encouraging self-talk (for both dietary change and physical activity)[38].\nThree reviews examined interventions that used pedometers (i.e. self-monitoring of physical activity) to promote walking: Medium quality causal evidence (two analyses from two reviews) supported the effectiveness of pedometer based interventions for increasing walking activity[33,51] (mean increase of 2004 steps per day at a median 11 weeks; median increase in time walking of +54 min per week at a median 13 weeks). It must be noted that the vast majority of the interventions included in these meta-analyses included either step-goals or step diaries (or both) alongside the use of pedometers, so the evidence does not support the use of pedometers in isolation from these additional techniques. Indeed, associative analyses from one review[33] suggested that the use of a) a step diary (one low quality analysis) and b) goal-setting (one low and one medium quality analysis) in combination with use of a pedometer was associated with increased walking. Medium to high quality associative evidence (based on meta-analysis of only the intervention arms of studies) from two reviews[33,52] suggested that small changes in weight might also be achievable with pedometer based interventions (e.g. change in BMI of 0.38 kg/m2 at 11 weeks).\nHigh quality causal evidence was found that adding social support to interventions (usually from family members) provided an additional weight loss of 3.0 kg at up to 12 months (compared with the same intervention with no social support element)[31].\nMedium to low quality associative evidence (from three meta-regression analyses and two associative analyses in three reviews) suggested that effectiveness for initial behaviour change (i.e. change in weight, diet or physical activity was associated with using the following techniques (NB: definitions of these can be found in a recent taxonomy of behaviour change techniques[62]): 1) For dietary change: providing instruction, establishing self-monitoring of behaviour, use of relapse prevention techniques[38,48]. 2) For physical activity change: prompting practice, establishing self-monitoring of behaviour, individual tailoring (e.g. of information or counselling content)[38,40,48]. One review also provided medium quality causal evidence (a descriptive summary of individual RCT findings) that brief advice, which usually included goal-setting, led to an increase in walking activity (27 mins/week walking at 12 months of follow up)[51]. Goal-setting alongside the use of pedometers was also associated with increased walking (see below).\nFurther medium quality associative evidence suggested that increased maintenance of behaviour change was associated with the use of time management techniques (for physical activity) and encouraging self-talk (for both dietary change and physical activity)[38].\nThree reviews examined interventions that used pedometers (i.e. 
self-monitoring of physical activity) to promote walking: Medium quality causal evidence (two analyses from two reviews) supported the effectiveness of pedometer based interventions for increasing walking activity[33,51] (mean increase of 2004 steps per day at a median 11 weeks; median increase in time walking of +54 min per week at a median 13 weeks). It must be noted that the vast majority of the interventions included in these meta-analyses included either step-goals or step diaries (or both) alongside the use of pedometers, so the evidence does not support the use of pedometers in isolation from these additional techniques. Indeed, associative analyses from one review[33] suggested that the use of a) a step diary (one low quality analysis) and b) goal-setting (one low and one medium quality analysis) in combination with use of a pedometer was associated with increased walking. Medium to high quality associative evidence (based on meta-analysis of only the intervention arms of studies) from two reviews[33,52] suggested that small changes in weight might also be achievable with pedometer based interventions (e.g. change in BMI of 0.38 kg/m2 at 11 weeks).\n[SUBTITLE] Motivational interviewing [SUBSECTION] Motivational interviewing is a distinct combination of behaviour change techniques (including decisional balance and relapse prevention techniques) delivered in a specific style (using patient centred empathy building techniques, such as rolling with resistance; affirmation and reflective listening)[63]. High quality causal evidence from one meta-analysis of RCTs[53] found that motivational interviewing was significantly more effective than traditional advice-giving for initiating changes in weight (producing a net difference of 0.72 BMI units compared with traditional advice-giving) at 3 to 24 months of follow up (mostly under 6 months). A further meta-analysis of RCTs[35] provided medium quality causal evidence of the effectiveness of motivational interviewing for a combined physical activity and dietary outcome, at up to 4 months of follow up (Standardised Mean Difference 0.53).\nMotivational interviewing is a distinct combination of behaviour change techniques (including decisional balance and relapse prevention techniques) delivered in a specific style (using patient centred empathy building techniques, such as rolling with resistance; affirmation and reflective listening)[63]. High quality causal evidence from one meta-analysis of RCTs[53] found that motivational interviewing was significantly more effective than traditional advice-giving for initiating changes in weight (producing a net difference of 0.72 BMI units compared with traditional advice-giving) at 3 to 24 months of follow up (mostly under 6 months). 
A further meta-analysis of RCTs[35] provided medium quality causal evidence of the effectiveness of motivational interviewing for a combined physical activity and dietary outcome, at up to 4 months of follow up (Standardised Mean Difference 0.53).\n[SUBTITLE] Targeting multiple behaviours [SUBSECTION] Causal evidence from nine analyses in four reviews (one high, four medium and four low quality) showed that interventions which targeted both physical activity and diet rather than only one of these behaviours produced higher weight change (additional weight loss around 2-3 kg at up to 12 months)[31,36,37,54].\nCausal evidence from nine analyses in four reviews (one high, four medium and four low quality) showed that interventions which targeted both physical activity and diet rather than only one of these behaviours produced higher weight change (additional weight loss around 2-3 kg at up to 12 months)[31,36,37,54].", "High quality causal evidence was found that adding social support to interventions (usually from family members) provided an additional weight loss of 3.0 kg at up to 12 months (compared with the same intervention with no social support element)[31].\nMedium to low quality associative evidence (from three meta-regression analyses and two associative analyses in three reviews) suggested that effectiveness for initial behaviour change (i.e. change in weight, diet or physical activity was associated with using the following techniques (NB: definitions of these can be found in a recent taxonomy of behaviour change techniques[62]): 1) For dietary change: providing instruction, establishing self-monitoring of behaviour, use of relapse prevention techniques[38,48]. 2) For physical activity change: prompting practice, establishing self-monitoring of behaviour, individual tailoring (e.g. of information or counselling content)[38,40,48]. One review also provided medium quality causal evidence (a descriptive summary of individual RCT findings) that brief advice, which usually included goal-setting, led to an increase in walking activity (27 mins/week walking at 12 months of follow up)[51]. Goal-setting alongside the use of pedometers was also associated with increased walking (see below).\nFurther medium quality associative evidence suggested that increased maintenance of behaviour change was associated with the use of time management techniques (for physical activity) and encouraging self-talk (for both dietary change and physical activity)[38].\nThree reviews examined interventions that used pedometers (i.e. self-monitoring of physical activity) to promote walking: Medium quality causal evidence (two analyses from two reviews) supported the effectiveness of pedometer based interventions for increasing walking activity[33,51] (mean increase of 2004 steps per day at a median 11 weeks; median increase in time walking of +54 min per week at a median 13 weeks). It must be noted that the vast majority of the interventions included in these meta-analyses included either step-goals or step diaries (or both) alongside the use of pedometers, so the evidence does not support the use of pedometers in isolation from these additional techniques. Indeed, associative analyses from one review[33] suggested that the use of a) a step diary (one low quality analysis) and b) goal-setting (one low and one medium quality analysis) in combination with use of a pedometer was associated with increased walking. 
Medium to high quality associative evidence (based on meta-analysis of only the intervention arms of studies) from two reviews[33,52] suggested that small changes in weight might also be achievable with pedometer based interventions (e.g. change in BMI of 0.38 kg/m2 at 11 weeks).", "Motivational interviewing is a distinct combination of behaviour change techniques (including decisional balance and relapse prevention techniques) delivered in a specific style (using patient centred empathy building techniques, such as rolling with resistance; affirmation and reflective listening)[63]. High quality causal evidence from one meta-analysis of RCTs[53] found that motivational interviewing was significantly more effective than traditional advice-giving for initiating changes in weight (producing a net difference of 0.72 BMI units compared with traditional advice-giving) at 3 to 24 months of follow up (mostly under 6 months). A further meta-analysis of RCTs[35] provided medium quality causal evidence of the effectiveness of motivational interviewing for a combined physical activity and dietary outcome, at up to 4 months of follow up (Standardised Mean Difference 0.53).", "Causal evidence from nine analyses in four reviews (one high, four medium and four low quality) showed that interventions which targeted both physical activity and diet rather than only one of these behaviours produced higher weight change (additional weight loss around 2-3 kg at up to 12 months)[31,36,37,54].", "The evidence from five reviews of dietary and/or physical activity intervention was mixed. Five associative analyses (three medium and two low quality) from four reviews failed to find a clear association between effectiveness and mode of intervention delivery for weight loss, dietary change or physical activity change[38,46,48,51]. One review found medium quality associative evidence that 'mixed mode' (individual and group) delivery was significantly related to greater effectiveness, compared with individual delivery, for initial weight loss (up to 6 months), but not for weight loss maintenance (at a mean 19 months)[38]. However, it is worth noting that there is evidence from individual high quality RCTs (based on data in the evidence tables of the included reviews) that individual, group, and mixed mode interventions can all be effective in changing diet and/or physical activity[31,38,51].", "There was a lack of high quality evidence in this area for comparisons between specific types of intervention provider. Four associative analyses (two medium, two low) from four reviews provided no consistent or significant relationship between intervention provider and weight, physical activity or dietary outcomes at up to 12 months of follow up[38,40,48,51]. However, strong evidence from individual RCTs (based on data in the evidence tables of the included reviews) showed that a wide range of providers (with appropriate training) including doctors, nurses, dieticians/nutritionists, exercise specialists and lay people, can deliver effective interventions for changing diet and/or physical activity[38,40,43,48,51,52].", "Definitions of intervention intensity reported in the reviews varied considerably, incorporating frequency and total number of contacts, total contact time, duration of the intervention and the number of behaviour change techniques used. 
The frequency and duration of clinical contact varied widely, ranging from 1 to around 80 sessions, delivered daily to monthly and lasting anything from 15 to 150 minutes, over periods ranging from 1 day to 2 years. For instance, one review of 17 weight loss interventions that compared different intervention intensities, reported that the median contact frequency was weekly, the median session duration 60 minutes, and the median delivery period 10 weeks[54]. Physical activity interventions are often much more intensive due to a focus on practising the target behaviour (e.g. Shaw et al.[55] report interventions lasting 3 to 12 months with 3 to 5 sessions per week lasting a median 45 minutes each).\n[SUBTITLE] Weight Loss [SUBSECTION] Overall, 7 out of 9 analyses of intervention intensity favoured higher intensity interventions. One meta-analysis of ten small RCTs (N = 306) comparing different intervention intensities[54] found medium quality causal evidence that more intensive interventions (those including more behaviour change techniques, more contact time or a longer duration of intervention) generated significantly more weight loss than less intensive interventions (an additional 2.3 kg at a median seven months follow up). This was supported by a medium quality associative analysis from the same review. However, it was not possible to deduce from the available data which component of intensity drives this relationship.\nMedium to low quality evidence from three analyses in three reviews (one medium quality, two low quality) showed a positive association between the total number of contacts and weight loss at 12 to 38 months[46,50,57]. Associative evidence from two analyses in two reviews (one high quality, one low quality) found a relationship between increased frequency of contacts and weight loss at 6 to 15 months of follow up[37,47]. However, two associative analyses (one high and one medium quality) in two reviews[37,38] found no such relationship at 6 to 60 months. Two medium quality associative analyses found mixed evidence (one positive one negative) on the association between intervention duration and weight loss.\n[SUBTITLE] Dietary Change [SUBSECTION] Two low quality associative analyses within the same review found a positive relationship between number of contacts and self-reported dietary change at 12 months of follow up[34].\n[SUBTITLE] Physical Activity [SUBSECTION] There was a lack of evidence on the relationship between intervention intensity and physical activity outcomes. Two low quality associative analyses in two reviews[33,40] found no clear relationship between intervention intensity (duration) and physical activity outcomes.", "Overall, 7 out of 9 analyses of intervention intensity favoured higher intensity interventions. One meta-analysis of ten small RCTs (N = 306) comparing different intervention intensities[54] found medium quality causal evidence that more intensive interventions (those including more behaviour change techniques, more contact time or a longer duration of intervention) generated significantly more weight loss than less intensive interventions (an additional 2.3 kg at a median seven months follow up). This was supported by a medium quality associative analysis from the same review. However, it was not possible to deduce from the available data which component of intensity drives this relationship.\nMedium to low quality evidence from three analyses in three reviews (one medium quality, two low quality) showed a positive association between the total number of contacts and weight loss at 12 to 38 months[46,50,57]. Associative evidence from two analyses in two reviews (one high quality, one low quality) found a relationship between increased frequency of contacts and weight loss at 6 to 15 months of follow up[37,47]. However, two associative analyses (one high and one medium quality) in two reviews[37,38] found no such relationship at 6 to 60 months. 
Two low quality associative analyses in two reviews[33,40] found no clear relationship between intervention intensity (duration) and physical activity outcomes.", "[SUBTITLE] Gender [SUBSECTION] Eight associative analyses (three medium quality, five low quality) from six reviews found no consistent association between gender and changes in weight or physical activity at 10 weeks to 16 months of follow up[33,38,41,48,55,58].\n[SUBTITLE] Ethnicity [SUBSECTION] Although there is evidence (within some of the component trials in the reviews examined) that interventions can be effective for a number of ethnic groups[4] there was very little review-level evidence on the relationship between ethnicity and intervention effectiveness. One associative analysis (low quality) suggested that intervention studies with a higher percentage of white Caucasian participants achieved larger decreases in BMI at a median of 12 weeks of follow up[33]. Another (low quality) associative analysis in the same review reported no association between ethnicity and increased walking.\n[SUBTITLE] Age [SUBSECTION] Associative analyses (one medium quality, one low quality) from two reviews[33,55] suggested that older people lost more weight than younger people at 10.5 to 16 weeks of follow up[33]. Two further (low quality) analyses from two reviews found no relationship between age and physical activity at 3 and 6 months of follow up[33,41].\n[SUBTITLE] At risk populations [SUBSECTION] A range of evidence, including strong causal evidence from two meta-analyses of sub-groups of studies and associative evidence from meta-regression analyses from several further reviews found that changes in weight and (at least short-term) physical activity are possible in high risk as well as lower risk populations, including high and low weight, high cardiovascular risk groups and sedentary and non-sedentary groups, at between 3 and 36 months of follow up[33,37,38,41-43,48,51]. Five analyses from four reviews provided mixed evidence as to whether targeting of interventions at people who are more sedentary was associated with larger increases in the amount of physical activity (two medium analyses (one positive, one negative), three low quality analyses (two negative, one trend))[33,41,48,51].\n[SUBTITLE] Diabetes [SUBSECTION] In two associative analyses (one high quality, one medium quality), effectiveness for weight loss (at 3 to 60 months) was found to be considerably lower for people with type 2 diabetes than for people without type 2 diabetes[37,38].\n[SUBTITLE] Weight [SUBSECTION] Four analyses in four reviews[33,41,42,48] provided mixed associative evidence (two medium (one positive, one negative), two low quality analyses (one positive, one negative)) as to whether targeting more overweight people was associated with larger increases in the amount of weight loss achieved. However, one high quality associative analysis showed that people with a higher starting weight achieve better health improvements at 2 to 4.6 years, in terms of a reduced incidence of type 2 diabetes[43].", "Eight associative analyses (three medium quality, five low quality) from six reviews found no consistent association between gender and changes in weight or physical activity at 10 weeks to 16 months of follow up[33,38,41,48,55,58].", "Although there is evidence (within some of the component trials in the reviews examined) that interventions can be effective for a number of ethnic groups[4] there was very little review-level evidence on the relationship between ethnicity and intervention effectiveness. One associative analysis (low quality) suggested that intervention studies with a higher percentage of white Caucasian participants achieved larger decreases in BMI at a median of 12 weeks of follow up[33]. 
Another (low quality) associative analysis in the same review reported no association between ethnicity and increased walking.", "Associative analyses (one medium quality, one low quality) from two reviews[33,55] suggested that older people lost more weight than younger people at 10.5 to 16 weeks of follow up[33]. Two further (low quality) analyses from two reviews found no relationship between age and physical activity at 3 and 6 months of follow up[33,41].", "A range of evidence, including strong causal evidence from two meta-analyses of sub-groups of studies and associative evidence from meta-regression analyses from several further reviews found that changes in weight and (at least short-term) physical activity are possible in high risk as well as lower risk populations, including high and low weight, high cardiovascular risk groups and sedentary and non-sedentary groups, at between 3 and 36 months of follow up[33,37,38,41-43,48,51]. Five analyses from four reviews provided mixed evidence as to whether targeting of interventions at people who are more sedentary was associated with larger increases in the amount of physical activity (two medium analyses (one positive, one negative), three low quality analyses (two negative, one trend)[33,41,48,51].", "In two associative analyses (one high quality, one medium quality), effectiveness for weight loss (at 3 to 60 months) was found to be considerably lower for people with type 2 diabetes than for people without type 2 diabetes[37,38].", "Four analyses in four reviews[33,41,42,48] provided mixed associative evidence (two medium (one positive, one negative), two low quality analyses (one positive, one negative)) as to whether targeting more overweight people was associated with larger increases in the amount of weight loss achieved. However, one high quality associative analysis showed that people with a higher starting weight achieve better health improvements at 2 to 4.6 years, in terms of a reduced incidence of type 2 diabetes[43].", "Examples were found (based on data in the evidence tables of included reviews) of effective interventions delivered in a wide range of settings, including healthcare settings, the workplace, the home, and in the community[30,34]. Few reviews formally examined the impact of intervention setting on effectiveness. However, one medium quality associative analysis revealed no significant differences in outcomes (either dietary or physical activity change) at six months between interventions in primary care, community and workplace settings[48].", "This review has, for the first time, systematically identified, synthesised and graded a wide range of evidence about the relationship of intervention content to effectiveness in individual-level interventions for promoting changes in diet and/or physical activity in adults at risk of type 2 diabetes.\nInterventions produced significant and clinically meaningful changes in physical activity (typically equivalent to 30-60 minutes of walking per week, for up to 18 months) and in weight (typically 3-5 kg at 12 months, 2-3 kg at 36 months). Greater effectiveness of interventions was causally linked (in meta-analyses and randomised trials which experimentally manipulated the use of these elements) with targeting both diet and physical activity, mobilising social support and the use of well-described/established behaviour change techniques. 
Greater effectiveness was also associated (in correlational analyses and non-randomised comparisons) with using a cluster of self-regulatory techniques (goal-setting, prompting self-monitoring, providing feedback on performance, goal review[62,64]), and providing a higher contact time or frequency of contacts. However, with regard to intensity, the amount of clinical contact in interventions varied widely (see ranges reported above) and the evidence did not support the recommendation of any particular minimum threshold. The evidence on patterns of effectiveness over time[37] also suggested that there is a need for an increased focus on the use of techniques to support behaviour maintenance.\nThere were no clear associations between provider, setting, delivery mode, ethnicity and age of the target group and effectiveness. This (and evidence from a range of individual RCTs cited in the reviews examined) suggests that interventions can be delivered successfully by a wide range of providers in a wide range of settings, in group or individual or combined modes, and can be effective for a wide range of ethnic and age groups.\nWhile the use of \"established, well-defined behaviour change techniques\" was associated with increased effectiveness, it is worth emphasising that individual techniques are rarely applied in isolation and should form part of a coherent intervention model. Therefore, a planned approach to intervention design is recommended, such as \"intervention mapping\",[65] or other systematic intervention development processes[66] which select intervention techniques to address targeted behaviour change processes (and that are tailored for the target population and setting).\nTaken together, the findings suggest a number of recommendations for optimising practice in the development and delivery of interventions to promote changes in diet and/or physical activity and these are outlined in Table 2. It is hoped that applying these findings will help to meet the growing need for less costly, but nonetheless effective, type 2 diabetes prevention programmes.\nRecommendations for practice\n1Key to grades of recommendations:\nA: At least one meta-analysis, systematic review, or RCT rated as 1++ and directly applicable to the target population; or A body of evidence\nconsisting principally of studies rated as 1+, directly applicable to the target population, and demonstrating overall consistency of results\nB: A body of evidence including studies rated as 2++, directly applicable to the target population, and demonstrating overall consistency of\nresults; or Extrapolated evidence from studies rated as 1++ or 1+\nC: A body of evidence including studies rated as 2+, directly applicable to the target population and demonstrating overall consistency of results;\nor Extrapolated evidence from studies rated as 2++\nD: Evidence level 3 or 4 (non-analytic studies or expert opinion); or Extrapolated evidence from studies rated as 2+\nAlthough providing a greater degree of depth with regard to intervention components, these findings are consistent with UK guidance for the prevention and treatment of obesity (which recommends engaging social (especially family based) support, and targeting both diet and exercise)[67]. The findings are also consistent with recent guidance from the American Heart Association[68] on the prevention of heart disease in adults aged over 18, which recommend the use of motivational interviewing as well as goal-setting, self-monitoring and a high contact frequency. 
Recent evidence-based guidance from the US Association of Diabetes Educators also recommends goal-setting, problem-solving (relapse prevention) and self-monitoring of plans (self-regulation) for supporting healthy eating and increased physical activity in people with type 2 diabetes[69]. Our findings may also be more widely generalisable to adults with diagnosed chronic disease (e.g. type 2 diabetes, heart disease) or to apparently healthy adults.\n[SUBTITLE] Strengths and limitations [SUBSECTION] Our review focused only on higher quality systematic reviews. We identified a substantial number of reviews which synthesised data from a large number of RCTs and other studies, in a wide range of age groups, clinical/risk groups and settings. Drawing together these findings in one place has generated a comprehensive, evidence-based overview of which intervention components are most likely to facilitate effectiveness.\nHowever, several challenges affecting the synthesis and interpretation of the available evidence were encountered. One of the limitations most commonly cited by review authors was an inadequate description of behavioural interventions in the individual study reports. This causes difficulties for the reviewer in categorising intervention content and conducting subsequent analyses to relate content to effectiveness. We therefore suggest that future intervention study reports (and reviews of individual studies) use an appropriate taxonomy to describe (and categorise) behaviour change techniques[62]. A major limitation in assessing the utility of specific theories and techniques underpinning interventions is that techniques may not be implemented rigorously or may not faithfully represent the specified theories[62,70]. Notably, none of the 30 reviews that we examined took intervention fidelity into account. Hence, the lack of an association between the use of a stated theory and effectiveness may reflect a lack of good theories or it may reflect poor implementation of theories. Other potentially important sources of bias include measurement issues (especially in relation to the use of self-report data); self-selection of intervention participants; and a failure to consider potential biases due to study quality in some reviews. Furthermore, it is worth noting that with associative evidence, other covariates than those analysed may account for the stated relationships (e.g. the association between intensity and effectiveness might be explained to some extent by lower quality of intervention being associated with lower intensity).\nA further potential source of bias which no review accounted for was the low sample size contributing to some of the analyses examined. In particular, it is worth noting that, whilst our recommendation (Table 2) on the usefulness of social support technically merits a grade A (as it is based on level 1+ evidence from a meta-analysis of randomised controlled trials), the total number of participants contributing to the meta-analysis was only 127. If the grading system had taken sample size into account, we may have given this recommendation a lower grade. In interpreting the above information, it should be noted that the analyses considered were in many cases based on overlapping sets of trials (and other studies). It should also be noted, as this is a review of reviews we were not able to synthesise or meta-analyse data from individual studies, which may have yielded valuable evidence. 
It is also worth noting that at the time of the literature search there were no high quality reviews on the use of internet-based interventions, so no evidence is presented in this area.\n[SUBTITLE] Implications for practice and policy [SUBSECTION] Our review has generated clear recommendations on how interventions for promoting lifestyle change within diabetes prevention programmes could be developed or refined to maximise effectiveness (Table 2). Our recommendations go considerably beyond the data on basic effectiveness presented in trials and systematic reviews of diabetes prevention programmes to date[3-8]. They can be useful, for example, in guiding the translation of effective, high-intensity/high resource-use interventions in research contexts into lower-cost (yet still effective) interventions for implementation in clinical practice.\n[SUBTITLE] Directions for future research [SUBSECTION] More rigorous evaluations of the effectiveness and cost-effectiveness of specific intervention components and clusters of techniques for promoting and maintaining change in diet and physical activity are needed. This will require experimental and theoretically driven manipulation of intervention components in well-powered and high-quality trials. Intervention studies need to provide careful descriptions of the hypothesised causal processes for achieving behaviour change and the specific techniques used to modify these processes. Trials should include process analyses to establish the validity or otherwise of the causal models proposed. Research is urgently needed to compare the cost-effectiveness of interventions with different providers, intervention modes and intensities (using clear and consistent conceptualisations of intensity and attempting to disentangle the different elements of intensity such as contact time, number of contacts and contact frequency). This should include the evaluation of remotely delivered and/or self-delivered (e.g. internet-based) approaches and other approaches that might provide high effectiveness for lower cost. Research is also needed to establish the impact of the intervention setting on effectiveness; to optimise intervention procedures for different ethnic, age and gender groups; to establish effective techniques for improving recruitment to interventions (and to address gender imbalances); and to assess the possible adverse effects of dietary and physical activity interventions.", "Our review focused only on higher quality systematic reviews. We identified a substantial number of reviews which synthesised data from a large number of RCTs and other studies, in a wide range of age groups, clinical/risk groups and settings. Drawing together these findings in one place has generated a comprehensive, evidence-based overview of which intervention components are most likely to facilitate effectiveness.\nHowever, several challenges affecting the synthesis and interpretation of the available evidence were encountered. One of the limitations most commonly cited by review authors was an inadequate description of behavioural interventions in the individual study reports. This causes difficulties for the reviewer in categorising intervention content and conducting subsequent analyses to relate content to effectiveness. We therefore suggest that future intervention study reports (and reviews of individual studies) use an appropriate taxonomy to describe (and categorise) behaviour change techniques[62]. A major limitation in assessing the utility of specific theories and techniques underpinning interventions is that techniques may not be implemented rigorously or may not faithfully represent the specified theories[62,70]. Notably, none of the 30 reviews that we examined took intervention fidelity into account. Hence, the lack of an association between the use of a stated theory and effectiveness may reflect a lack of good theories or it may reflect poor implementation of theories. Other potentially important sources of bias include measurement issues (especially in relation to the use of self-report data); self-selection of intervention participants; and a failure to consider potential biases due to study quality in some reviews. Furthermore, it is worth noting that with associative evidence, other covariates than those analysed may account for the stated relationships (e.g. the association between intensity and effectiveness might be explained to some extent by lower quality of intervention being associated with lower intensity).\nA further potential source of bias which no review accounted for was the low sample size contributing to some of the analyses examined. 
In particular, it is worth noting that, whilst our recommendation (Table 2) on the usefulness of social support technically merits a grade A (as it is based on level 1+ evidence from a meta-analysis of randomised controlled trials), the total number of participants contributing to the meta-analysis was only 127. If the grading system had taken sample size into account, we may have given this recommendation a lower grade. In interpreting the above information, it should be noted that the analyses considered were in many cases based on overlapping sets of trials (and other studies). It should also be noted, as this is a review of reviews we were not able to synthesise or meta-analyse data from individual studies, which may have yielded valuable evidence. It is also worth noting that at the time of the literature search there were no high quality reviews on the use of internet-based interventions, so no evidence is presented in this area.", "Our review has generated clear recommendations on how interventions for promoting lifestyle change within diabetes prevention programmes could be developed or refined to maximise effectiveness (Table 2). Our recommendations go considerably beyond the data on basic effectiveness presented in trials and systematic reviews of diabetes prevention programmes to date[3-8]. They can be useful, for example, in guiding the translation of effective, high-intensity/high resource-use interventions in research contexts into lower-cost (yet still effective) interventions for implementation in clinical practice.", "More rigorous evaluations of the effectiveness and cost-effectiveness of specific intervention components and clusters of techniques for promoting and maintaining change in diet and physical activity are needed. This will require experimental and theoretically driven manipulation of intervention components in well-powered and high-quality trials. Intervention studies need to provide careful descriptions of the hypothesised causal processes for achieving behaviour change and the specific techniques used to modify these processes. Trials should include process analyses to establish the validity or otherwise of the causal models proposed. Research is urgently needed to compare the cost-effectiveness of interventions with different providers, intervention modes and intensities (using clear and consistent conceptualisations of intensity and attempting to disentangle the different elements of intensity such as contact time, number of contacts and contact frequency). This should include the evaluation of remotely delivered and/or self-delivered (e.g. internet-based) approaches and other approaches that might provide high effectiveness for lower cost. Research is also needed to establish the impact of the intervention setting on effectiveness; to optimise intervention procedures for different ethnic, age and gender groups; to establish effective techniques for improving recruitment to interventions (and to address gender imbalances); and to assess the possible adverse effects of dietary and physical activity interventions.", "Interventions to promote changes in diet and/or physical activity in adults with increased risk of diabetes or cardiovascular disease are more likely to be effective if they a) target both diet and physical activity, b) involve the planned use of established behaviour change techniques, c) mobilise social support, and d) have a clear plan for supporting maintenance of behaviour change. 
They may also benefit from providing a higher frequency or total number of contacts.\nTo maximise the effectiveness of intervention programmes to promote changes in diet and/or physical activity for diabetes prevention, practitioners and commissioning organisations should carefully consider the inclusion of the above components.", "The authors declare that they have no competing interests.", "CG conceived and coordinated the study. KS and CG conducted literature searches, data extraction, review selection, quality rating and evidence grading and drafted the manuscript. CA, WH, MR, PE and PS contributed to the design of the study and interpretation of the results. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/119/prepub\n", "Table S1: Search Strategy. Table S2 (and explanatory text): OQAQ: Quality assessment tool for systematic reviews and meta-analyses. Table S3 (and explanatory text): Evidence Grading System. Table S4: Characteristics of Included Reviews. Table S5: Excluded papers. Table S6: OQAQ scores.\nTables S7-14: Data from analyses of: S7) Intervention Effectiveness; S8) Theoretical basis; S9) Behaviour change techniques; S10) Mode of delivery; S11) Intervention provider; S12) Intervention intensity; S13) Intervention population; S14) Intervention setting.\nPRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2009 Checklist." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Genome-wide expression patterns associated with oncogenesis and sarcomatous transdifferentiation of cholangiocarcinoma.
21333016
The molecular mechanisms of CC (cholangiocarcinoma) oncogenesis and progression are poorly understood. This study aimed to determine the genome-wide expression of genes related to CC oncogenesis and sarcomatous transdifferentiation.
BACKGROUND
Genes that were differentially expressed between CC cell lines or tissues and cultured normal biliary epithelial (NBE) cells were identified using DNA microarray technology. Expressions were validated in human CC tissues and cells.
METHODS
Using unsupervised hierarchical clustering analysis of the cell line and tissue samples, we identified a set of 342 commonly regulated (>2-fold change) genes. Of these, 53, including tumor-related genes, were upregulated, and 289, including tumor suppressor genes, were downregulated (<0.5 fold change). Expression of SPP1, EFNB2, E2F2, IRX3, PTTG1, PPARγ, KRT17, UCHL1, IGFBP7 and SPARC proteins was immunohistochemically verified in human and hamster CC tissues. Additional unsupervised hierarchical clustering analysis of sarcomatoid CC cells compared to three adenocarcinomatous CC cell lines revealed 292 differentially upregulated genes (>4-fold change), and 267 differentially downregulated genes (<0.25 fold change). The expression of 12 proteins was validated in the CC cell lines by immunoblot analysis and immunohistochemical staining. Of the proteins analyzed, we found upregulation of the expression of the epithelial-mesenchymal transition (EMT)-related proteins VIM and TWIST1, and restoration of the methylation-silenced proteins LDHB, BNIP3, UCHL1, and NPTX2 during sarcomatoid transdifferentiation of CC.
RESULTS
The deregulation of oncogenes, tumor suppressor genes, and methylation-related genes may be useful in identifying molecular targets for CC diagnosis and prognosis.
CONCLUSION
[ "Animals", "Bile Duct Neoplasms", "Bile Ducts, Intrahepatic", "Cell Line, Tumor", "Cell Transdifferentiation", "Cell Transformation, Neoplastic", "Cholangiocarcinoma", "Cricetinae", "Epithelial-Mesenchymal Transition", "Gene Expression Profiling", "Gene Expression Regulation, Neoplastic", "Genome, Human", "Humans", "Oligonucleotide Array Sequence Analysis", "Sarcoma", "Validation Studies as Topic" ]
3053267
null
null
Methods
[SUBTITLE] Cell lines and cultures [SUBSECTION] Tumor tissues were obtained from surgical specimens and biopsy specimens in Korean cholangiocarcinoma patients. Tumor tissues were washed three times in Opti-MEM I (Gibco, Grand Island, NY) containing antibiotics. Washed tissue was transferred to a sterile Petri dish and finely minced with scalpels into 1- to 2-mm³ fragments. Tissue fragments in culture medium were seeded in T25 culture flasks (Corning, Medfield, MA) in Opti-MEM supplemented with 10% fetal bovine serum (FBS, Gibco), 30 mM sodium bicarbonate and antibiotics. Tumor cells were cultured undisturbed and passaged as described [17]. Near the 20th passage, the medium was changed from Opti-MEM I to DMEM supplemented with 10% FBS and antibiotics. NBE cells were isolated from mucosal slices of normal bile ducts, with informed consent from liver transplantation donors, and ex-vivo cultured in T25 culture flasks in Opti-MEM supplemented with 10% FBS, 30 mM sodium bicarbonate and antibiotics at 37°C with 5% CO2 in air. Near-confluent NBE cells were harvested and stored at -80°C until use. Cells were routinely tested for mycoplasma and found to be negative using a Gen Probe kit (San Diego, CA). CC cell lines are in Table 1. Clinicopathological features of nine patients with intrahepatic cholangiocarcinomas used to generate CC cell lines. M, male; F, female; C. sinensis, Clonorchis sinensis; HCC, hepatocellular carcinoma; WD, well differentiated; MD, moderately differentiated; PD, poorly differentiated. *International Hepato-Pancreato-Biliary Association classification. [SUBTITLE] 5-Aza-2'-deoxycytidine (Aza) treatment [SUBSECTION] Choi-CK, Cho-CK, and JCK cells were seeded at 1 × 10⁶ cells/ml. After overnight culture, cells were treated with 5 μM of the DNA methylating agent Aza (Sigma-Aldrich, St. Louis, MO) for 4 days, and then harvested. [SUBTITLE] Patients and tissue samples [SUBSECTION] CC tissues were obtained with informed consent from Korean patients who underwent hepatectomy and common bile duct exploration at Chonbuk National University Hospital. All tumors were clinically and histologically diagnosed as cholangiocarcinoma. Detailed clinicopathological data of the 19 samples are in Table 2. All samples were immediately frozen in nitrogen tanks. Patient information was obtained from medical records. Clinical stage was determined according to the International Hepato-Pancreato-Biliary Association (IHPBA) classification [18]. Clinicopathological features of 19 CC samples used for microarray analysis. HCC, hepatocellular carcinoma; M, male; F, female; A, anterior segment; P, posterior segment; Med, medial segment; L, lateral segment; MF, mass forming type; PDI, periductal infiltrating type; IDG, intraductal growth type; WD, well differentiated; MD, moderately differentiated; PD, poorly differentiated; NA, not available. *International Hepato-Pancreato-Biliary Association classification. [SUBTITLE] Primer labeling and Illumina BeadChip array hybridization [SUBSECTION] Total RNA from CC samples was isolated using TRIzol reagent (Invitrogen, CA) according to the manufacturer's instructions. RNA quality was determined by gel electrophoresis, and concentrations were determined using an Ultrospec 3100 pro spectrophotometer (Amersham Bioscience, Buckinghamshire, UK). Biotin-labeled cRNA samples for hybridization were prepared according to Illumina's recommended sample-labeling procedure: 500 ng of total RNA was used for cDNA synthesis, followed by an amplification/labeling step (in vitro transcription) to synthesize biotin-labeled cRNA using the Illumina TotalPrep RNA Amplification kit (Ambion Inc., Austin, TX). cRNA concentrations were measured by the RiboGreen method (Quant-iT RiboGreen RNA assay kit; Invitrogen-Molecular Probes, ON, Canada) using a Victor3 spectrophotometer (PerkinElmer, CT), and cRNA quality was determined on a 1% agarose gel. Labeled, amplified material (1500 ng per array) was hybridized to Illumina Human-6 BeadChips v2 containing 48,701 probes for 24,498 genes, according to the manufacturer's instructions (Illumina, San Diego, CA). 
Array signals were developed by Amersham fluorolink streptavidin-Cy3 (GE Healthcare Bio-Sciences, Little Chalfont, UK) following the BeadChip manual. Arrays were scanned with an Illumina Bead-array Reader confocal scanner (BeadStation 500GXDW; Illumina) according to the manufacturer's instructions. Array data processing and analysis were performed using Illumina BeadStudio software. The BeadStudio Gene Expression Module is a tool for analyzing gene expression data from scanned microarray images generated by the Illumina BeadArray Reader. [SUBTITLE] Data analysis [SUBSECTION] Normalization algorithms were used to adjust sample signals to minimize the effects of variation from non-biological factors. To reduce variation between microarrays, the intensity values for samples in each microarray were rescaled using a quartile normalization method in the BeadStudio module. Measured gene expression values were log2-transformed and median-centered across genes and samples for further analysis. To generate an overview of the gene expression profile and to identify major relationships in cell lines, we used unsupervised hierarchical clustering analysis. Genes with an expression ratio of at least a two-fold difference relative to the median gene expression level across all samples in at least 10% of samples were selected for clustering analysis. Average linkage hierarchical cluster analysis was carried out using a Pearson correlation as the similarity metric, using the GeneCluster/TreeView program (http://rana.lbl.gov/EisenSoftware.htm). Expression profiles for the differentially expressed genes were selected by t-test with false discovery rate (FDR) and q-values as gene significance measures, using R software (version 2.5). Because of varying significance in the analyzed comparisons, using a fixed FDR (or q-value) cut-off value was not practical. Therefore, we used t-test P = 0.01. 
To ascertain biological relevance, a fold-change cut-off value of 2 or 4 from the mean was chosen. The gene ontology (GO) program (http://david.abcc.ncifcrf.gov/) was used to categorize genes in subgroups based on biological function. Values for each GO group were calculated as a percentage of total mRNA change. The Fisher exact test was used to determine whether the proportions of genes in each category differed by group. The microarray data were registered with the Gene Expression Omnibus (GEO) database (Accession No. GSE22633). An illustrative sketch of these analysis steps is given below, after the Immunoblotting subsection. [SUBTITLE] Immunoblotting [SUBSECTION] Extracted protein (30 μg) from cell lysates was resolved by SDS-PAGE and transferred to a nitrocellulose membrane. Membranes were incubated for 1 h at room temperature with primary antibody at 1:1000 dilution. After incubation, blots were washed three times in TBS/0.1% Tween 20. Immunoreactivity was detected using alkaline phosphatase-conjugated goat anti-rabbit IgG or a commercial chemiluminescence detection kit (Amersham), according to the manufacturer's instructions. 
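The following is a minimal, purely illustrative sketch of the preprocessing and selection steps described in the Data analysis subsection above (rescaling across arrays, log2 transformation, median centering, gene filtering, average-linkage clustering with a Pearson-correlation distance, and per-gene t-tests combined with a fold-change cut-off). The original analysis was performed with Illumina BeadStudio, GeneCluster/TreeView and R 2.5; this Python version is not the authors' code, and the input matrix, sample grouping and quantile-style rescaling are assumptions made only for illustration.

```python
# Illustrative sketch only -- not the authors' pipeline (BeadStudio, GeneCluster/
# TreeView, R 2.5). Input data, group sizes and the quantile-style rescaling are
# placeholder assumptions; thresholds follow the text where stated.
import numpy as np
from scipy import stats
from scipy.cluster.hierarchy import linkage

def quantile_normalize(x):
    """Rescale columns (arrays) so they share a common intensity distribution."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    mean_of_sorted = np.sort(x, axis=0).mean(axis=1)
    return mean_of_sorted[ranks]

# Placeholder probe x sample intensity matrix (e.g. 1000 probes, 24 samples)
expr = np.random.lognormal(mean=6.0, sigma=1.0, size=(1000, 24))

log2expr = np.log2(quantile_normalize(expr))
centered = log2expr - np.median(log2expr, axis=1, keepdims=True)  # centre genes
centered -= np.median(centered, axis=0, keepdims=True)            # centre samples

# Genes varying >= 2-fold from the gene-wise median in >= 10% of samples
keep = (np.abs(centered) >= 1.0).mean(axis=1) >= 0.10
subset = centered[keep]

# Average-linkage hierarchical clustering of samples, Pearson-correlation distance
sample_tree = linkage(subset.T, method="average", metric="correlation")

# Differential expression: per-gene t-test (P = 0.01) plus a 2-fold change cut-off
# (hypothetical grouping: first 19 columns "tumour", remaining columns "normal")
tumour, normal = centered[:, :19], centered[:, 19:]
t_stat, p_val = stats.ttest_ind(tumour, normal, axis=1)
log2_fc = tumour.mean(axis=1) - normal.mean(axis=1)
selected = (p_val < 0.01) & (np.abs(log2_fc) >= 1.0)  # use >= 2.0 for a 4-fold cut-off
print(f"{selected.sum()} probes pass P < 0.01 with a >= 2-fold change")
```

Raising the fold-change threshold to |log2 ratio| ≥ 2 corresponds to the stricter four-fold cut-off mentioned in the text.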
[SUBTITLE] Immunohistochemistry [SUBSECTION] Immunohistochemical staining was performed on formalin-fixed, paraffin-embedded 4-μm tissue sections, as described previously [19]. Briefly, a deparaffinized section was pretreated by microwave epitope retrieval (750 W for 15 min in 10 mmol/L citrate buffer, pH 6.0) after rehydration. Before applying primary antibodies, the endogenous peroxidase activity was inhibited with 3% hydrogen peroxide, and a blocking step with biotin and bovine albumin was performed. Primary monoclonal or polyclonal antibodies were detected using a secondary biotinylated antibody and a streptavidin-horseradish peroxidase conjugate according to the manufacturer's instructions (DAKO, Glostrup, Denmark). Counterstaining was performed using Mayer's hematoxylin. Tumors were evaluated for the percentage of positive cells and the staining intensity. Negative controls were samples incubated with either PBS or mouse IgG1 instead of primary antibody. [SUBTITLE] Real-time RT-PCR [SUBSECTION] RNA prepared from dissected tissues was precipitated with isopropanol and dissolved in DEPC-treated distilled water. Reverse transcription (RT) was performed using 2 μg total RNA, 50 μM decamer and 1 μl (200 units) of SuperScript II reverse transcriptase (Invitrogen) at 37°C for 50 min, as previously described. Specific primers for each gene were designed using the Primerdepot website (http://primerdepot.nci.nih.gov/) and are in Additional file 1. The control 18S ribosomal RNA primer was from Applied Biosystems (Foster City, CA) and was used as the invariant control. 
The real-time RT-PCR reaction mixture consisted of 10 ng reverse-transcribed total RNA, 167 nM forward and reverse primers, and 2 × PCR master mixture in a final volume of 10 μl; PCR was performed in 384-well plates using the ABI Prism 7900HT Sequence Detection System (Applied Biosystems). A hypothetical worked example of relative quantification against the 18S control is given below, after the animal model description. [SUBTITLE] Animal model of cholangiocarcinoma [SUBSECTION] The hamster CC model was modified from a previous study [20]. On the first day of the experiment, hamsters in the experimental group were infected with 15 metacercariae of the liver fluke, C. sinensis. One day after parasite infestation, hamsters received 15 ppm of dimethylnitrosamine (DMN; Kasei, Japan) in the drinking water for 4 weeks with a normal diet. Thereafter, hamsters were given tap water with a normal diet for the rest of the study. An interim stage of cholangiocarcinogenesis was confirmed at 8 weeks after experiment initiation. Control and CC model hamsters were maintained for a total of 27 weeks for CC to develop.
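As a hypothetical illustration of how a target gene's expression can be normalised to the 18S rRNA invariant control, the sketch below uses the widely used comparative Ct (2^-ΔΔCt) calculation. The text does not state which quantification model was applied, and all Ct values in the example are invented; the sketch only shows the arithmetic.

```python
# Hypothetical worked example of the comparative Ct (2^-ddCt) calculation with
# 18S rRNA as the invariant control. The quantification model and all Ct values
# below are assumptions for illustration only.
def relative_expression(ct_target_sample, ct_18s_sample, ct_target_ref, ct_18s_ref):
    """Fold change of a target gene in a sample vs. a reference sample,
    each normalised to the 18S rRNA control."""
    d_ct_sample = ct_target_sample - ct_18s_sample   # delta-Ct, sample of interest
    d_ct_ref = ct_target_ref - ct_18s_ref            # delta-Ct, reference sample
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# Invented Ct values: target gene 24.1 in a CC cell line vs. 27.3 in NBE cells,
# 18S rRNA 10.2 and 10.4 respectively.
fold = relative_expression(24.1, 10.2, 27.3, 10.4)
print(f"Relative expression: {fold:.1f}-fold")  # 8.0-fold higher in the CC line
```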
null
null
null
null
[ "Background", "Cell lines and cultures", "5-Aza-2'-deoxycytidine (Aza) treatment", "Patients and tissue samples", "Primer labeling and Illumina Beadchip array hybridization", "Data analysis", "Immunoblotting", "Immunohistochemistry", "Real-time RT-PCR", "Animal model of cholangiocarcinoma", "Results", "Gene expression patterns distinguish CC cells from cultured NBE cells", "Gene expression patterns distinguish CC tissues from cultured NBE cells", "Differential expression and verification of CC-related genes", "Immunohistochemical analysis of CC-related genes", "Immunohistochemical analysis in hamster model of CC", "Gene expression patterns distinguish the SCK cell line from three CC cell lines", "Expressions of transdifferentiation-related genes", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Cholangiocarcinoma (CC) is a highly lethal adenocarcinoma arising from bile duct epithelial cells. CC accounts for approximately 15% of the total liver cancer cases worldwide, and its incidence is rising [1,2]. The prognosis for CC is quite poor because of difficulties in early diagnosis, and relative resistance of the tumors to chemotherapy [3,4]. At the time of diagnosis, approximately 70% of CC patients have an occult metastasis or advanced local disease that precludes curative resection. Of candidates for curative resection, 30% develop recurrent disease at the anastomotic site or within the intrahepatic biliary tree, and succumb to disease progression or cholangitis [5]. Established risk factors for ductal cholangiocarcinomas include primary sclerosing cholangitis, infection with Clonorchis sinensis or Opisthorchis viverrini (liver flukes), Calori's disease, congenital choledochal cysts, and chronic intrahepatic lithiasis [6]. However, for most CCs, the cause is unknown.\nRecently, molecular investigations have provided evidence that CC carcinogenesis involves a number of genetic alterations, including activating point mutations in the K-ras oncogene, and in p53 and BRAF [7-9]. The deregulated expression of a number of other genes has also been reported, and cyclooxygenase-2 and c-erbB-2 are frequently overexpressed in CCs, suggesting an involvement in early biliary carcinogenesis [10]. In addition, increased expression of interleukin-6 is frequently observed in CC [11]. CC also develops after the liver-specific targeted disruption of the tumor suppressors SMAD4 and PTEN [12]. The incidence of sarcomatoid changes in CC is estimated to be approximately 5% [13], and sarcomatoid cells are thought to result from de-differentiation of ordinary carcinomatous CC cells. Sarcomatoid neoplasms are highly aggressive and have a poorer survival rate than ordinary CCs [14], but the underlying molecular alterations, which may be related to the epithelial-mesenchymal transition (EMT), remain unclear. Little extensive genome-wide information about altered gene expression in CCs is available, and only a few published studies have reported a comprehensive analysis of gene expression among biliary tract cancers in general [15,16]. The advancement of microarray technology now enables us to analyze genome-wide gene expression in a single experiment, opening avenues for the molecular classification of tumors, detection of the biological nature of tumors, and prediction of prognosis and sensitivity to treatments.\nIn this study, we generated genome-wide gene expression profiles of 10 cell lines (9 CC cell lines and 1 immortalized cholangiocyte line), and 19 CC tissues using a BeadChip oligonucleotide technology containing 48,000 genes. This procedure allowed us to observe a comprehensive pattern of gene expression in CC compared to cultured normal biliary epithelia (NBE). In addition, we identified a set of genes associated with sarcomatoid transdifferentiation. These data are useful not only because they provide a more profound understanding of cholangiocarcinogenesis and transdifferentiation, but also because they may help to develop diagnostic tools and improve the accuracy of CC prognosis.", "Tumor tissues were obtained from surgical specimens and biopsy specimens in Korean cholangiocarcinoma patients. Tumor tissues were washed three times in Opti-MEM I (Gibco, Grand Island, NY) containing antibiotics. 
Washed tissue was transferred to a sterile Petri dish and finely minced with scalpels into 1- to 2-mm³ fragments. Tissue fragments in culture medium were seeded in T25 culture flasks (Corning, Medfield, MA) in Opti-MEM supplemented with 10% fetal bovine serum (FBS, Gibco), 30 mM sodium bicarbonate and antibiotics. Tumor cells were cultured undisturbed and passaged as described [17]. Near the 20th passage, the medium was changed from Opti-MEM I to DMEM supplemented with 10% FBS and antibiotics. NBE cells were isolated from mucosal slices of normal bile ducts, with informed consent from liver transplantation donors, and ex-vivo cultured in T25 culture flasks in Opti-MEM supplemented with 10% FBS, 30 mM sodium bicarbonate and antibiotics at 37°C with 5% CO2 in air. Near-confluent NBE cells were harvested and stored at -80°C until use. Cells were routinely tested for mycoplasma and found to be negative using a Gen Probe kit (San Diego, CA). The CC cell lines are listed in Table 1.\nClinicopathological features of nine patients with intrahepatic cholangiocarcinomas used to generate CC cell lines.\nM, male; F, female; C. sinensis, Clonorchis sinensis; HCC, hepatocellular carcinoma; WD, well differentiated; MD, moderately differentiated; PD, poorly differentiated.\n*International Hepato-Pancreato-Biliary Association classification.", "Choi-CK, Cho-CK, and JCK cells were seeded at 1 × 10⁶ cells/ml. After overnight culture, cells were treated with 5 μM of the DNA demethylating agent Aza (Sigma-Aldrich, St. Louis, MO) for 4 days, and then harvested.", "CC tissues were obtained with informed consent from Korean patients who underwent hepatectomy and common bile duct exploration at Chonbuk National University Hospital. All tumors were clinically and histologically diagnosed as cholangiocarcinoma. Detailed clinicopathological data of the 19 samples are in Table 2. All samples were immediately frozen in nitrogen tanks. Patient information was obtained from medical records. Clinical stage was determined according to the International Hepato-Pancreato-Biliary Association (IHPBA) classification [18].\nClinicopathological features of 19 CC samples used for microarray analysis.\nHCC, hepatocellular carcinoma; M, male; F, female; A, anterior segment; P, posterior segment; Med, medial segment; L, lateral segment; MF, mass forming type; PDI, periductal infiltrating type; IDG, intraductal growth type; WD, well differentiated; MD, moderately differentiated; PD, poorly differentiated; NA, not available.\n*International Hepato-Pancreato-Biliary Association classification.", "Total RNA from CC samples was isolated using TRIzol reagent (Invitrogen, CA) according to the manufacturer's instructions. RNA quality was determined by gel electrophoresis, and concentrations were determined using an Ultrospec 3100 pro spectrophotometer (Amersham Bioscience, Buckinghamshire, UK). Biotin-labeled cRNA samples for hybridization were prepared according to Illumina's recommended sample-labeling procedure: 500 ng of total RNA was used for cDNA synthesis, followed by an amplification/labeling step (in vitro transcription) to synthesize biotin-labeled cRNA using the Illumina TotalPrep RNA Amplification kit (Ambion Inc., Austin, TX). cRNA concentrations were measured by the RiboGreen method (Quant-iT RiboGreen RNA assay kit; Invitrogen-Molecular Probes, ON, Canada) using a Victor3 spectrophotometer (PerkinElmer, CT), and cRNA quality was determined on a 1% agarose gel.
Labeled, amplified material (1500 ng per array) was hybridized to Illumina Human-6 BeadChips v2 containing 48,701 probes for 24,498 genes, according to the manufacturer's instructions (Illumina, San Diego, CA). Array signals were developed by Amersham fluorolink streptavidin-Cy3 (GE Healthcare Bio-Sciences, Little Chalfont, UK) following the BeadChip manual. Arrays were scanned with an Illumina Bead-array Reader confocal scanner (BeadStation 500GXDW; Illumina) according to the manufacturer's instructions. Array data processing and analysis were performed using Illumina BeadStudio software. The BeadStudio Gene Expression Module is a tool for analyzing gene expression data from scanned microarray images generated by the Illumina BeadArray Reader.", "Normalization algorithms were used to adjust sample signals to minimize the effects of variation from non-biological factors. To reduce variation between microarrays, the intensity values for samples in each microarray were rescaled using a quartile normalization method in the BeadStudio module. Measured gene expression values were log2-transformed and median-centered across genes and samples for further analysis. To generate an overview of the gene expression profile and to identify major relationships in cell lines, we used unsupervised hierarchical clustering analysis. Genes with an expression ratio of at least a two-fold difference relative to the median gene expression level across all samples in at least 10% of samples were selected for clustering analysis. Average linkage hierarchical cluster analysis was carried out using a Pearson correlation as the similarity metric, using the GeneCluster/TreeView program (http://rana.lbl.gov/EisenSoftware.htm). Expression profiles for the differentially expressed genes were selected by t-test with false discovery rate (FDR) and q-values as gene significance measures, using R software (version 2.5). Because of varying significance in the analyzed comparisons, using a fixed FDR (or q-value) cut-off value was not practical. Therefore, we used t-test P = 0.01. To ascertain biological relevance, a fold-change cut-off value of 2 or 4 from the mean was chosen. The gene ontology (GO) program (http://david.abcc.ncifcrf.gov/) was used to categorize genes in subgroups based on biological function. Values for each GO group were calculated as a percentage of total mRNA change. For example, the Fisher exact test was used to determine whether the proportions of genes in each category differed by group. The microarray data were registered with the Gene Expression Omnibus (GEO) database (Accession No. GSE22633)", "Extracted protein (30 μg) from cell lysates was resolved by SDS-PAGE and transferred to a nitrocellulose membrane. Membranes were incubated for 1 h at room temperature with primary antibody at 1:1000 dilution. After incubation, blots were washed three times in TBS/0.1% Tween 20. Immunoreactivity was detected using alkaline phosphatase-conjugated goat anti-rabbit IgG or a commercial chemiluminescence detection kit (Amersham), according to the manufacturer's instructions.", "Immunohistochemical staining was performed on formalin-fixed, paraffin-embedded 4-μM tissue sections, as described preciously [19]. Briefly, a deparaffinized section was pretreated by microwave epitope retrieval (750 W during 15 min in citrate buffer 10 mmol; pH 6.0) after rehydration. 
Before applying primary antibodies, the endogenous peroxidase activity was inhibited with 3% hydrogen peroxide, and a blocking step with biotin and bovine albumin was performed. Primary monoclonal or polyclonal antibodies were detected using a secondary biotinylated antibody and a streptavidin-horseradish peroxidase conjugate according to the manufacturer's instructions (DAKO, Glostrup, Denmark). Counterstaining was performed using Meyer's hematoxylin. Tumors were evaluated for the percentage of positive cells and the staining intensity. Negative controls were samples incubated with either PBS or mouse IgG1 instead of primary antibody.", "RNA prepared from dissected tissues was precipitated with isopropanol and dissolved in DEPC-treated distilled water. Reverse transcription (RT) was performed using 2 μg total RNA, 50 μM decamer and 1 μl (200 units) and RT-PCR Superscript II (Invitrogen) at 37°C for 50 min, as previously described. Specific primers for each gene were designed using the Primerdepot website (http://primerdepot.nci.nih.gov/) and are in Additional file 1. The control 18S ribosomal RNA primer was from Applied Biosystems (Foster City, CA) and was used as the invariant control. The real-time RT-PCR reaction mixture consisted of 10 ng reverse-transcribed total RNA, 167 nM forward and reverse primers, and 2 × PCR master mixture in a final volume of 10 μl PCR, was in 384-well plates using the ABI Prism 7900HT Sequence Detection System (Applied Biosystems).", "The hamster CC model was modified from a previous study [20]. On the first day of the experiment, hamsters in the experimental group were infected with 15 metacercariae of the liver fluke, C. sinensis. One day after parasite infestation, hamsters received 15 ppm of dimethylnitrosamine (DMN; Kasei, Japan) in the drinking water for 4 weeks with a normal diet. Thereafter, hamsters were given tap water with a normal diet for the rest of the study. An interim stage of cholangiocarcinogenesis was confirmed at 8 weeks after experiment initiation. Control and CC model hamsters were maintained for a total of 27 weeks for CC to develop.", "[SUBTITLE] Gene expression patterns distinguish CC cells from cultured NBE cells [SUBSECTION] Using BeadChip microarray analysis, we compared the gene expression profiles of nine CC cell lines, an immortalized biliary epithelial cell line, and four types of NBE cells. We selected 828 unique genes with a 2-fold or greater expression difference from the mean, with a P < 0.01 by t-test. Unsupervised hierarchical clustering analysis of all samples was based on the similarity in the expression pattern of all genes (Figure 1). Cell samples were separated into two main groups, the NBE cluster, and the transformed and immortalized biliary epithelial cells (CCC cluster). Each distinctive gene cluster was identified by delineation using a hierarchical clustering dendrogram. Cluster I consisted of genes upregulated in CC cells, which included tumor-related genes such as LGR4, AGR2, PCAF, TMEM97, FRAT2, EFNB2 and ZIC2 [21-27]. Cluster II included genes underexpressed in CC cells. These were mainly tumor suppressor genes such as GREM1, THY1, STC2, SERPINE1, SPARC and TAGLN [28-33]. Cluster III was genes upregulated in NBE cells, and contained the PDGFRA, CD248, and BDKRB1 genes.\nUnsupervised hierarchical clustering of four biliary epithelial cells, one immortalized cholangiocyte cell line and nine CC cells. 
Unsupervised hierarchical clustering separated the samples into two main groups, normal biliary epithelial cells (NBE) isolated from mucosal slices of normal bile ducts and ex-vivo cultured as described in Methods, and cholangiocarcinoma cells (CCC). Data are in matrix format, with columns representing individual cell lines and rows representing each gene. Red, high expression; green, low expression; black, no significant change in expression level between the mean and sample. A hierarchical clustering algorithm was applied to all cells and genes using the 1 - Pearson correlation coefficient as a similarity measure. Raw data for a single array were summarized using Illumina BeadStudio v3.0 and output to the user was as a set of 43,148 values for each individual hybridization. We selected 828 unique genes with a two-fold or greater difference from the mean and P < 0.01 by t-test, for hierarchical clustering analysis. Specific gene clusters (Cluster 1 through Cluster III) were identified in the hierarchical cluster of the genes differentially expressed in CCC compared with NBE. CC, cholangiocarcinoma; IMC, immortalized cholangiocytes.\nUsing BeadChip microarray analysis, we compared the gene expression profiles of nine CC cell lines, an immortalized biliary epithelial cell line, and four types of NBE cells. We selected 828 unique genes with a 2-fold or greater expression difference from the mean, with a P < 0.01 by t-test. Unsupervised hierarchical clustering analysis of all samples was based on the similarity in the expression pattern of all genes (Figure 1). Cell samples were separated into two main groups, the NBE cluster, and the transformed and immortalized biliary epithelial cells (CCC cluster). Each distinctive gene cluster was identified by delineation using a hierarchical clustering dendrogram. Cluster I consisted of genes upregulated in CC cells, which included tumor-related genes such as LGR4, AGR2, PCAF, TMEM97, FRAT2, EFNB2 and ZIC2 [21-27]. Cluster II included genes underexpressed in CC cells. These were mainly tumor suppressor genes such as GREM1, THY1, STC2, SERPINE1, SPARC and TAGLN [28-33]. Cluster III was genes upregulated in NBE cells, and contained the PDGFRA, CD248, and BDKRB1 genes.\nUnsupervised hierarchical clustering of four biliary epithelial cells, one immortalized cholangiocyte cell line and nine CC cells. Unsupervised hierarchical clustering separated the samples into two main groups, normal biliary epithelial cells (NBE) isolated from mucosal slices of normal bile ducts and ex-vivo cultured as described in Methods, and cholangiocarcinoma cells (CCC). Data are in matrix format, with columns representing individual cell lines and rows representing each gene. Red, high expression; green, low expression; black, no significant change in expression level between the mean and sample. A hierarchical clustering algorithm was applied to all cells and genes using the 1 - Pearson correlation coefficient as a similarity measure. Raw data for a single array were summarized using Illumina BeadStudio v3.0 and output to the user was as a set of 43,148 values for each individual hybridization. We selected 828 unique genes with a two-fold or greater difference from the mean and P < 0.01 by t-test, for hierarchical clustering analysis. Specific gene clusters (Cluster 1 through Cluster III) were identified in the hierarchical cluster of the genes differentially expressed in CCC compared with NBE. 
CC, cholangiocarcinoma; IMC, immortalized cholangiocytes.\n[SUBTITLE] Gene expression patterns distinguish CC tissues from cultured NBE cells [SUBSECTION] Using BeadChip microarrays, gene expression profiles of 19 CC tissues and 4 types of NBE cells were compared. We selected 1798 unique genes with a 2-fold or greater differences from the mean difference with a P < 0.01 by t-test. Unsupervised hierarchical clustering analysis was as described above (Figure 2A). All samples separated into two main groups, NBE and CC tissues (CCT). Each distinctive gene cluster was identified using a hierarchical clustering dendrogram as above. Intriguingly, the CC sample cluster was divided into two subclasses by tumor differentiation: differentiated (Df) and undifferentiated (Udf). Clustering data for the CC group revealed three clusters. Cluster I had genes upregulated in NBE and downregulated in CCT including SERPINB2, PAPPA, LRRC17, and GREM1. Cluster II contained genes upregulated in the Df CCT and downregulated in NBE. Cluster III included genes upregulated in poorly differentiated or Udf CCT, and downregulated in NBE. A supervised hierarchical clustering analysis was performed between the NBE class, and the Df and the Udf subclasses based on the similarity of expression pattern of all genes (Figure 2B and 2C). We selected 420 differentially expressed genes in the Df subclass, and 646 genes in the Udf subclass for comparison with the NBE class (Additional files 2 and 3).\nUnsupervised hierarchical clustering of 4 biliary epithelial cells and 19 CC tissues. (A) Unsupervised hierarchical clustering separated the samples into two main groups. We selected 1798 unique genes with two-fold or greater difference from the mean with P < 0.01 by t-test for hierarchical clustering analysis. Specific gene clusters (Cluster 1 through Cluster III) were identified of differentially expressed in CCT compared to NBE. (B) Supervised hierarchical clustering of four biliary epithelial cells and seven differentiated CC tissues. We selected 420 unique genes with four-fold or greater difference from the mean and P < 0.01 by t-test for hierarchical clustering analysis. (C) Supervised hierarchical clustering of 4 biliary epithelial cells and 10 undifferentiated CC tissues. We selected 646 unique genes with the criteria in B for hierarchical clustering analysis.\nUsing BeadChip microarrays, gene expression profiles of 19 CC tissues and 4 types of NBE cells were compared. We selected 1798 unique genes with a 2-fold or greater differences from the mean difference with a P < 0.01 by t-test. Unsupervised hierarchical clustering analysis was as described above (Figure 2A). All samples separated into two main groups, NBE and CC tissues (CCT). Each distinctive gene cluster was identified using a hierarchical clustering dendrogram as above. Intriguingly, the CC sample cluster was divided into two subclasses by tumor differentiation: differentiated (Df) and undifferentiated (Udf). Clustering data for the CC group revealed three clusters. Cluster I had genes upregulated in NBE and downregulated in CCT including SERPINB2, PAPPA, LRRC17, and GREM1. Cluster II contained genes upregulated in the Df CCT and downregulated in NBE. Cluster III included genes upregulated in poorly differentiated or Udf CCT, and downregulated in NBE. A supervised hierarchical clustering analysis was performed between the NBE class, and the Df and the Udf subclasses based on the similarity of expression pattern of all genes (Figure 2B and 2C). 
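The selections and clusterings behind Figures 1 and 2 follow the Data analysis section: genes are median-centered, filtered for a two-fold or greater change, tested by t-test at P < 0.01, and then clustered by average linkage on a 1 − Pearson correlation distance. The sketch below reproduces that logic on a synthetic matrix; NumPy/SciPy stand in for the BeadStudio, R 2.5 and GeneCluster/TreeView software actually used, and all numbers are illustrative.

```python
# Minimal sketch of the selection-plus-clustering pipeline described in Methods
# (Data analysis). NumPy/SciPy stand in for BeadStudio, R 2.5 and
# GeneCluster/TreeView; the expression matrix below is synthetic.
import numpy as np
from scipy import stats
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_genes, n_nbe, n_cc = 2000, 4, 19
log2_expr = rng.normal(size=(n_genes, n_nbe + n_cc))   # log2 intensities
log2_expr[:150, n_nbe:] += 2.0                         # 150 genes truly higher in CC

# Median-center each gene across samples, as described in Methods
centered = log2_expr - np.median(log2_expr, axis=1, keepdims=True)

# Select genes: >= two-fold (|log2| >= 1) change in >= 10% of samples, t-test P < 0.01
fold_ok = (np.abs(centered) >= 1.0).mean(axis=1) >= 0.10
_, pvals = stats.ttest_ind(centered[:, :n_nbe], centered[:, n_nbe:], axis=1)
selected = centered[fold_ok & (pvals < 0.01)]
print(selected.shape[0], "genes selected for clustering")

# Average-linkage hierarchical clustering with 1 - Pearson correlation distance
sample_tree = linkage(pdist(selected.T, metric="correlation"), method="average")
gene_tree = linkage(pdist(selected, metric="correlation"), method="average")
sample_order = dendrogram(sample_tree, no_plot=True)["leaves"]  # column order for heat map
gene_order = dendrogram(gene_tree, no_plot=True)["leaves"]      # row order for heat map
```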
We selected 420 differentially expressed genes in the Df subclass, and 646 genes in the Udf subclass for comparison with the NBE class (Additional files 2 and 3).\nUnsupervised hierarchical clustering of 4 biliary epithelial cells and 19 CC tissues. (A) Unsupervised hierarchical clustering separated the samples into two main groups. We selected 1798 unique genes with two-fold or greater difference from the mean with P < 0.01 by t-test for hierarchical clustering analysis. Specific gene clusters (Cluster 1 through Cluster III) were identified of differentially expressed in CCT compared to NBE. (B) Supervised hierarchical clustering of four biliary epithelial cells and seven differentiated CC tissues. We selected 420 unique genes with four-fold or greater difference from the mean and P < 0.01 by t-test for hierarchical clustering analysis. (C) Supervised hierarchical clustering of 4 biliary epithelial cells and 10 undifferentiated CC tissues. We selected 646 unique genes with the criteria in B for hierarchical clustering analysis.\n[SUBTITLE] Differential expression and verification of CC-related genes [SUBSECTION] We compared the gene lists from the cell-based and tissue-based databases, and selected 342 commonly regulated genes, including 53 commonly upregulated genes and 289 commonly downregulated genes (Figure 3A). The top 25 commonly regulated genes in both CCC and CCT compared to NBE are in Additional file 4. To verify the microarray data, we examined the mRNA levels of the identified genes using real-time RT-PCR in human CC tissues. We selected five up-regulated genes from the commonly upregulated genes of both the cell and tissue sample classes (Figure 3B). We also chose the IRX3, PTTG1, and PPARγ genes, which were highly upregulated in only the cell sample class. These genes were preferentially expressed in CC cells and tissues. We also examined the expression of the commonly downregulated KRT17 and UCHL1 genes, as well as the cellular downregulated IGFBP7 and SPARC genes using real-time RT-PCR in human CC. The human NBE showed substantial expression of CK-17, UCHL1, IGFBP7 and SPARC, which were barely detected in CC tissues (Figure 3C).\nDifferentially regulated genes in human CC tissues compared to NBE cells. (A) Venn diagram of genes commonly regulated in the cell and tissue samples. The 342 genes included 53 upregulated and 289 downregulated genes, selected from the cell- and tissue-based microarray databases. (B) Real-time RT-PCR analysis of upregulated genes selected from the list of top 25 genes commonly upregulated in both CC cells (C) and tissues (T) compared to cultured NBE cells (N). *, selected from only the cell-based microarray database. (C) Real-time RT-PCR analysis of downregulated genes selected from the list of top 25 genes commonly downregulated in both CC cells (C) and tissues (T) compared to cultured NBE cells (N). *, selected from only the cell-based microarray database.\nWe compared the gene lists from the cell-based and tissue-based databases, and selected 342 commonly regulated genes, including 53 commonly upregulated genes and 289 commonly downregulated genes (Figure 3A). The top 25 commonly regulated genes in both CCC and CCT compared to NBE are in Additional file 4. To verify the microarray data, we examined the mRNA levels of the identified genes using real-time RT-PCR in human CC tissues. We selected five up-regulated genes from the commonly upregulated genes of both the cell and tissue sample classes (Figure 3B). 
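The 342 commonly regulated genes (Figure 3A) and the genes attributed to only the cell sample class amount to simple set operations on the cell-based and tissue-based differential-gene lists. A minimal sketch follows, with hypothetical gene identifiers rather than the study's actual lists.

```python
# Sketch of how the commonly regulated genes (Figure 3A) and the "cell-only"
# genes follow from set operations on the two differential-expression lists.
# Gene identifiers here are hypothetical placeholders, not the study's lists.
cell_up = {"SPP1", "EFNB2", "E2F2", "IRX3", "PTTG1", "PPARG"}
tissue_up = {"SPP1", "EFNB2", "E2F2", "LGR4"}
cell_down = {"KRT17", "UCHL1", "IGFBP7", "SPARC"}
tissue_down = {"KRT17", "UCHL1", "GREM1"}

common_up = cell_up & tissue_up            # upregulated in both cells and tissues
common_down = cell_down & tissue_down      # downregulated in both
cell_only_up = cell_up - tissue_up         # upregulated only in the cell-based list

print(len(common_up | common_down), "commonly regulated genes")
print("cell-only upregulated:", sorted(cell_only_up))
```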
We also chose the IRX3, PTTG1, and PPARγ genes, which were highly upregulated in only the cell sample class. These genes were preferentially expressed in CC cells and tissues. We also examined the expression of the commonly downregulated KRT17 and UCHL1 genes, as well as the cellular downregulated IGFBP7 and SPARC genes using real-time RT-PCR in human CC. The human NBE showed substantial expression of CK-17, UCHL1, IGFBP7 and SPARC, which were barely detected in CC tissues (Figure 3C).\nDifferentially regulated genes in human CC tissues compared to NBE cells. (A) Venn diagram of genes commonly regulated in the cell and tissue samples. The 342 genes included 53 upregulated and 289 downregulated genes, selected from the cell- and tissue-based microarray databases. (B) Real-time RT-PCR analysis of upregulated genes selected from the list of top 25 genes commonly upregulated in both CC cells (C) and tissues (T) compared to cultured NBE cells (N). *, selected from only the cell-based microarray database. (C) Real-time RT-PCR analysis of downregulated genes selected from the list of top 25 genes commonly downregulated in both CC cells (C) and tissues (T) compared to cultured NBE cells (N). *, selected from only the cell-based microarray database.\n[SUBTITLE] Immunohistochemical analysis of CC-related genes [SUBSECTION] To confirm the reliability of the microarray data and the robustness of the strategy for identifying genes with altered expression, we examined the protein levels of the identified genes using immunohistochemical analysis of human tissues (Figure 4A). We selected three upregulated genes from the genes that were upregulated in both cell and tissue samples. The SPP1, EFNB2 and E2F2 proteins were abnormally overexpressed in the CC cell cytoplasm, and weakly or barely expressed in HCC. We also examined the IRX3, PTTG1, and PPARγ proteins, which were highly upregulated in only the cell samples. IRX3 was the most highly upregulated, and we was strongly expressed in the nucleus of CC cells in the tissue sections, but was barely detectable in the NBE nuclei. PTTG1 and PPARγ were abnormally overexpressed in the CC cell cytoplasm, and their expression was attenuated in poorly differentiated CC. Next, we also used immunohistochemical staining of human CC to examine the KRT17 and UCHL1 proteins, whose genes were both downregulated in CC cells and tissues, and the IGFBP7 and SPARC proteins, which were downregulated in CC cells only. Human NBE showed substantial expression of the CK-17, UCHL1, IGFBP7, and SPARC proteins, but these were barely detectable in CC tissue. However, KRT-17 was clearly positive in HCC (Figure 4B).\nImmunohistochemical staining of differentially expressed proteins in the CC tissues. (A) Immunohistochemical staining with anti-SPP1, anti-EFNB2, anti-E2F2, anti-IRX3, anti-PTTG1 or anti-PPARγ in NBE, human CC tissues with good differentiation (well), moderate differentiation (mod) or poor differentiation (poor), and HCC tissues. The representative genes were selected from the list of top 25 commonly upregulated genes, according to antibody available for immunohistochemistry. *, selected from only cell-based microarray database. The representative picture is one of three immunohistochemical staining replicates of different specimens. (B) Immunohistochemical staining with anti-KRT-17, anti-UCHL1, anti-IGFBP7 or anti-SPARC in the human CC and HCC tissues. *, selected from only cell-based microarray database. 
The representative picture is one of three immunohistochemical staining replicates of different specimens\nTo confirm the reliability of the microarray data and the robustness of the strategy for identifying genes with altered expression, we examined the protein levels of the identified genes using immunohistochemical analysis of human tissues (Figure 4A). We selected three upregulated genes from the genes that were upregulated in both cell and tissue samples. The SPP1, EFNB2 and E2F2 proteins were abnormally overexpressed in the CC cell cytoplasm, and weakly or barely expressed in HCC. We also examined the IRX3, PTTG1, and PPARγ proteins, which were highly upregulated in only the cell samples. IRX3 was the most highly upregulated, and we was strongly expressed in the nucleus of CC cells in the tissue sections, but was barely detectable in the NBE nuclei. PTTG1 and PPARγ were abnormally overexpressed in the CC cell cytoplasm, and their expression was attenuated in poorly differentiated CC. Next, we also used immunohistochemical staining of human CC to examine the KRT17 and UCHL1 proteins, whose genes were both downregulated in CC cells and tissues, and the IGFBP7 and SPARC proteins, which were downregulated in CC cells only. Human NBE showed substantial expression of the CK-17, UCHL1, IGFBP7, and SPARC proteins, but these were barely detectable in CC tissue. However, KRT-17 was clearly positive in HCC (Figure 4B).\nImmunohistochemical staining of differentially expressed proteins in the CC tissues. (A) Immunohistochemical staining with anti-SPP1, anti-EFNB2, anti-E2F2, anti-IRX3, anti-PTTG1 or anti-PPARγ in NBE, human CC tissues with good differentiation (well), moderate differentiation (mod) or poor differentiation (poor), and HCC tissues. The representative genes were selected from the list of top 25 commonly upregulated genes, according to antibody available for immunohistochemistry. *, selected from only cell-based microarray database. The representative picture is one of three immunohistochemical staining replicates of different specimens. (B) Immunohistochemical staining with anti-KRT-17, anti-UCHL1, anti-IGFBP7 or anti-SPARC in the human CC and HCC tissues. *, selected from only cell-based microarray database. The representative picture is one of three immunohistochemical staining replicates of different specimens\n[SUBTITLE] Immunohistochemical analysis in hamster model of CC [SUBSECTION] Although it is unknown whether antibodies raised to human proteins recognize hamster proteins, we examined the protein levels of the identified genes using immunohistochemical analysis of hamster CC tissues (Additional file 5). As in humans, the SPP1, EFNB2, and E2F2 proteins were abnormally overexpressed in the hamster CC cell cytoplasm. IRX3 was also similarly expressed in the CC cell nucleus, and PTTG1 was differentially expressed in the CC cell cytoplasm. Interestingly, in contrast to human CC cells, PPARγ was preferentially expressed in the hamster CC cell nuclei. Therefore, the immunoreactivity of identified gene proteins in hamster CC seemed to be substantially consistent with that in human CC.\nAlthough it is unknown whether antibodies raised to human proteins recognize hamster proteins, we examined the protein levels of the identified genes using immunohistochemical analysis of hamster CC tissues (Additional file 5). As in humans, the SPP1, EFNB2, and E2F2 proteins were abnormally overexpressed in the hamster CC cell cytoplasm. 
IRX3 was also similarly expressed in the CC cell nucleus, and PTTG1 was differentially expressed in the CC cell cytoplasm. Interestingly, in contrast to human CC cells, PPARγ was preferentially expressed in the hamster CC cell nuclei. Therefore, the immunoreactivity of identified gene proteins in hamster CC seemed to be substantially consistent with that in human CC.\n[SUBTITLE] Gene expression patterns distinguish the SCK cell line from three CC cell lines [SUBSECTION] Previously, we established four human CC cell lines and characterized one with a typical sarcomatoid phenotype of SCK. We classified the other cell lines according to tumor cell differentition, as a poorly differentiated JCK, a moderately differentiated Cho-CK, and a well-differentiated Choi-CK cell line [34]. Two-way unsupervised hierarchical clustering analysis of quadruplicate samples for each cell line was conducted, based on the similarity of expression patterns of all genes (Figure 5). We selected 559 unique genes whose expression differed from the mean by four-fold or more with P < 0.005 by t-test. Cell samples were separated into two main groups, sarcomatoid (SC) and ordinary or adenocarcinomatous CC (AC), by the gene axis. The SC group contained 292 differentially upregulated genes (>four-fold change), and 267 downregulated genes (<0.25-fold change), compared to the AC group. The top 25 genes that were differentially expressed in the sarcomatoid SCK cells compared to the three adenocarcinomatous CC lines are in Additional file 6. Clustering data within groups revealed that the core clusters I and II were associated with transdifferentiation. Genes in cluster I appeared to be downregulated in the SCK cells, compared to ordinary CC cells. In contrast, the genes in cluster II were upregulated in the SCK cells and downregulated in the ordinary CC cells. Cluster I contained the GSTT1, TACSTD, BST2, RAB25, and MAL2 genes. Cluster II contained genes associated with tumor progression and metastasis, including HOXA9, MUC13, and members of the GAGE and CT-45 families [35-38]. Expression of methylation-silenced genes, such as LDHB, BNIP3, UCHL1, and NPTX2 [39-42], was barely detectable in the AC group, but appeared in this cluster.\nUnsupervised hierarchical cluster analysis of differentially expressed genes illustrated in a heat-map. Unsupervised hierarchical clustering separated the samples into two main groups: SC and AC. The samples were independently prepared from the cultured cells four times, and four kinds of CC cells were used: Choi-CK, Cho-CK, JCK and SCK cells. Samples were clustered closer within their own group than in samples from other groups. We selected 559 unique genes with a four-fold or greater difference from the mean and a P < 0.005 by t-test for hierarchical clustering analysis. Cluster I included genes differentially downregulated in the sarcomatoid CC cells compared to three adenomatous CC lines. Cluster II contained genes differentially upregulated in the sarcomatoid CC cells compared to three adenomatous CC lines.\nPreviously, we established four human CC cell lines and characterized one with a typical sarcomatoid phenotype of SCK. We classified the other cell lines according to tumor cell differentition, as a poorly differentiated JCK, a moderately differentiated Cho-CK, and a well-differentiated Choi-CK cell line [34]. 
Two-way unsupervised hierarchical clustering analysis of quadruplicate samples for each cell line was conducted, based on the similarity of expression patterns of all genes (Figure 5). We selected 559 unique genes whose expression differed from the mean by four-fold or more with P < 0.005 by t-test. Cell samples were separated into two main groups, sarcomatoid (SC) and ordinary or adenocarcinomatous CC (AC), by the gene axis. The SC group contained 292 differentially upregulated genes (>four-fold change), and 267 downregulated genes (<0.25-fold change), compared to the AC group. The top 25 genes that were differentially expressed in the sarcomatoid SCK cells compared to the three adenocarcinomatous CC lines are in Additional file 6. Clustering data within groups revealed that the core clusters I and II were associated with transdifferentiation. Genes in cluster I appeared to be downregulated in the SCK cells, compared to ordinary CC cells. In contrast, the genes in cluster II were upregulated in the SCK cells and downregulated in the ordinary CC cells. Cluster I contained the GSTT1, TACSTD, BST2, RAB25, and MAL2 genes. Cluster II contained genes associated with tumor progression and metastasis, including HOXA9, MUC13, and members of the GAGE and CT-45 families [35-38]. Expression of methylation-silenced genes, such as LDHB, BNIP3, UCHL1, and NPTX2 [39-42], was barely detectable in the AC group, but appeared in this cluster.\nUnsupervised hierarchical cluster analysis of differentially expressed genes illustrated in a heat-map. Unsupervised hierarchical clustering separated the samples into two main groups: SC and AC. The samples were independently prepared from the cultured cells four times, and four kinds of CC cells were used: Choi-CK, Cho-CK, JCK and SCK cells. Samples were clustered closer within their own group than in samples from other groups. We selected 559 unique genes with a four-fold or greater difference from the mean and a P < 0.005 by t-test for hierarchical clustering analysis. Cluster I included genes differentially downregulated in the sarcomatoid CC cells compared to three adenomatous CC lines. Cluster II contained genes differentially upregulated in the sarcomatoid CC cells compared to three adenomatous CC lines.\n[SUBTITLE] Expressions of transdifferentiation-related genes [SUBSECTION] From 559 genes that were differentially regulated between SCK cells and the three ordinary CC lines, we selected six upregulated genes and six downregulated genes, and examined their mRNA expression using real-time RT-PCR (Figure 6A), which verified the differential expression. We examined protein expression by Western blot analysis of the four CC lines. LDHB, Bnip3, HO-1, and UCHL1 were overexpressed exclusively in SCK cells. The expression of VIM and TWIST1 increased according to tumor dedifferentiation and was highest in SCK cells (Figure 6B, left). In contrast, LCN2, S100P, KRT7, KRT19, GPX1, and EFNA1 were preferentially expressed in Choi-CK, Cho-CK and JCK cells, but minimally expressed in SCK cells (Figure 6B, right). Because LDHB, BNIP3, and UCHL1 are well-known methylation-silenced genes in tumors [39-41], and are highly expressed in SCK cells, this suggested that DNA demethylation was involved in CC. To confirm this hypothesis, we treated the AC cells with the demethylating agent Aza, which dramatically restorated expression of the silenced UCHL1 gene in these cells (Figure 6C). 
In addition, we performed immunohistochemical examination of protein expression according to tumor dedifferentiation in human CC tissue (Figure 6D). As expected, HO-1 was exclusively overexpressed in SC, while TWIST1 was overexpressed in the poorly differentiated and SC cells. In contrast, LCN2 was exclusively downregulated in SC, while EFNA1 expression decreased with tumor dedifferentiation. Therefore, expression of these proteins clearly correlated with clinicopathological features such as tumor differentiation and EMT change, in CC tissues.\nGenes and proteins differentially expressed in sarcomatoid CC and adenomatous CC cells. (A) Real-time RT-PCR analysis of upregulated (left) and downregulated (right) genes selected from the list of top 25 genes differentially expressed in sarcomatoid SCK cells and three adenocarcinomatous CC cell lines. (B) Immunoblot of upregulated (left) and down-regulated (right) proteins in sarcomtoid SCK cells compared to three adenocarcinomatous CC cell lines. *, selected from the top 100 genes differentially expressed in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines. Lane 1, Choi-CK cells. Lane 2, Cho-CK cells. Lane 3, JCK cells, Lane 4, SCK cells. (C) Immunoblot of UCHL1. Expression was restored in adenocarcimatous CC cells by treatment of 5 μM of Aza for 4 days, compared to vehicle control (VC). (D) Immunohistochemical staining of down- or upregulated proteins according to tumor dedifferentiation. The representative picture is one of three immunohistochemical staining replicates of different specimens. Well, well-differentiated CC. Mod, moderately differentiated CC. Poor, poorly differentiated CC. Sar, sarcomatoid CC. *, selected from the top 100 genes differentially expressed in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines.\nFrom 559 genes that were differentially regulated between SCK cells and the three ordinary CC lines, we selected six upregulated genes and six downregulated genes, and examined their mRNA expression using real-time RT-PCR (Figure 6A), which verified the differential expression. We examined protein expression by Western blot analysis of the four CC lines. LDHB, Bnip3, HO-1, and UCHL1 were overexpressed exclusively in SCK cells. The expression of VIM and TWIST1 increased according to tumor dedifferentiation and was highest in SCK cells (Figure 6B, left). In contrast, LCN2, S100P, KRT7, KRT19, GPX1, and EFNA1 were preferentially expressed in Choi-CK, Cho-CK and JCK cells, but minimally expressed in SCK cells (Figure 6B, right). Because LDHB, BNIP3, and UCHL1 are well-known methylation-silenced genes in tumors [39-41], and are highly expressed in SCK cells, this suggested that DNA demethylation was involved in CC. To confirm this hypothesis, we treated the AC cells with the demethylating agent Aza, which dramatically restorated expression of the silenced UCHL1 gene in these cells (Figure 6C). In addition, we performed immunohistochemical examination of protein expression according to tumor dedifferentiation in human CC tissue (Figure 6D). As expected, HO-1 was exclusively overexpressed in SC, while TWIST1 was overexpressed in the poorly differentiated and SC cells. In contrast, LCN2 was exclusively downregulated in SC, while EFNA1 expression decreased with tumor dedifferentiation. 
Therefore, expression of these proteins clearly correlated with clinicopathological features such as tumor differentiation and EMT change, in CC tissues.\nGenes and proteins differentially expressed in sarcomatoid CC and adenomatous CC cells. (A) Real-time RT-PCR analysis of upregulated (left) and downregulated (right) genes selected from the list of top 25 genes differentially expressed in sarcomatoid SCK cells and three adenocarcinomatous CC cell lines. (B) Immunoblot of upregulated (left) and down-regulated (right) proteins in sarcomtoid SCK cells compared to three adenocarcinomatous CC cell lines. *, selected from the top 100 genes differentially expressed in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines. Lane 1, Choi-CK cells. Lane 2, Cho-CK cells. Lane 3, JCK cells, Lane 4, SCK cells. (C) Immunoblot of UCHL1. Expression was restored in adenocarcimatous CC cells by treatment of 5 μM of Aza for 4 days, compared to vehicle control (VC). (D) Immunohistochemical staining of down- or upregulated proteins according to tumor dedifferentiation. The representative picture is one of three immunohistochemical staining replicates of different specimens. Well, well-differentiated CC. Mod, moderately differentiated CC. Poor, poorly differentiated CC. Sar, sarcomatoid CC. *, selected from the top 100 genes differentially expressed in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines.", "Using BeadChip microarray analysis, we compared the gene expression profiles of nine CC cell lines, an immortalized biliary epithelial cell line, and four types of NBE cells. We selected 828 unique genes with a 2-fold or greater expression difference from the mean, with a P < 0.01 by t-test. Unsupervised hierarchical clustering analysis of all samples was based on the similarity in the expression pattern of all genes (Figure 1). Cell samples were separated into two main groups, the NBE cluster, and the transformed and immortalized biliary epithelial cells (CCC cluster). Each distinctive gene cluster was identified by delineation using a hierarchical clustering dendrogram. Cluster I consisted of genes upregulated in CC cells, which included tumor-related genes such as LGR4, AGR2, PCAF, TMEM97, FRAT2, EFNB2 and ZIC2 [21-27]. Cluster II included genes underexpressed in CC cells. These were mainly tumor suppressor genes such as GREM1, THY1, STC2, SERPINE1, SPARC and TAGLN [28-33]. Cluster III was genes upregulated in NBE cells, and contained the PDGFRA, CD248, and BDKRB1 genes.\nUnsupervised hierarchical clustering of four biliary epithelial cells, one immortalized cholangiocyte cell line and nine CC cells. Unsupervised hierarchical clustering separated the samples into two main groups, normal biliary epithelial cells (NBE) isolated from mucosal slices of normal bile ducts and ex-vivo cultured as described in Methods, and cholangiocarcinoma cells (CCC). Data are in matrix format, with columns representing individual cell lines and rows representing each gene. Red, high expression; green, low expression; black, no significant change in expression level between the mean and sample. A hierarchical clustering algorithm was applied to all cells and genes using the 1 - Pearson correlation coefficient as a similarity measure. Raw data for a single array were summarized using Illumina BeadStudio v3.0 and output to the user was as a set of 43,148 values for each individual hybridization. 
We selected 828 unique genes with a two-fold or greater difference from the mean and P < 0.01 by t-test, for hierarchical clustering analysis. Specific gene clusters (Cluster 1 through Cluster III) were identified in the hierarchical cluster of the genes differentially expressed in CCC compared with NBE. CC, cholangiocarcinoma; IMC, immortalized cholangiocytes.", "Using BeadChip microarrays, gene expression profiles of 19 CC tissues and 4 types of NBE cells were compared. We selected 1798 unique genes with a 2-fold or greater differences from the mean difference with a P < 0.01 by t-test. Unsupervised hierarchical clustering analysis was as described above (Figure 2A). All samples separated into two main groups, NBE and CC tissues (CCT). Each distinctive gene cluster was identified using a hierarchical clustering dendrogram as above. Intriguingly, the CC sample cluster was divided into two subclasses by tumor differentiation: differentiated (Df) and undifferentiated (Udf). Clustering data for the CC group revealed three clusters. Cluster I had genes upregulated in NBE and downregulated in CCT including SERPINB2, PAPPA, LRRC17, and GREM1. Cluster II contained genes upregulated in the Df CCT and downregulated in NBE. Cluster III included genes upregulated in poorly differentiated or Udf CCT, and downregulated in NBE. A supervised hierarchical clustering analysis was performed between the NBE class, and the Df and the Udf subclasses based on the similarity of expression pattern of all genes (Figure 2B and 2C). We selected 420 differentially expressed genes in the Df subclass, and 646 genes in the Udf subclass for comparison with the NBE class (Additional files 2 and 3).\nUnsupervised hierarchical clustering of 4 biliary epithelial cells and 19 CC tissues. (A) Unsupervised hierarchical clustering separated the samples into two main groups. We selected 1798 unique genes with two-fold or greater difference from the mean with P < 0.01 by t-test for hierarchical clustering analysis. Specific gene clusters (Cluster 1 through Cluster III) were identified of differentially expressed in CCT compared to NBE. (B) Supervised hierarchical clustering of four biliary epithelial cells and seven differentiated CC tissues. We selected 420 unique genes with four-fold or greater difference from the mean and P < 0.01 by t-test for hierarchical clustering analysis. (C) Supervised hierarchical clustering of 4 biliary epithelial cells and 10 undifferentiated CC tissues. We selected 646 unique genes with the criteria in B for hierarchical clustering analysis.", "We compared the gene lists from the cell-based and tissue-based databases, and selected 342 commonly regulated genes, including 53 commonly upregulated genes and 289 commonly downregulated genes (Figure 3A). The top 25 commonly regulated genes in both CCC and CCT compared to NBE are in Additional file 4. To verify the microarray data, we examined the mRNA levels of the identified genes using real-time RT-PCR in human CC tissues. We selected five up-regulated genes from the commonly upregulated genes of both the cell and tissue sample classes (Figure 3B). We also chose the IRX3, PTTG1, and PPARγ genes, which were highly upregulated in only the cell sample class. These genes were preferentially expressed in CC cells and tissues. We also examined the expression of the commonly downregulated KRT17 and UCHL1 genes, as well as the cellular downregulated IGFBP7 and SPARC genes using real-time RT-PCR in human CC. 
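The verification step described here can be thought of as a concordance check between array fold changes and RT-PCR fold changes. The study reports this comparison as bar graphs (Figure 3B and 3C) rather than a correlation statistic; the sketch below, with entirely hypothetical values, only illustrates the idea of putting both platforms on a common log2 scale.

```python
# Illustrative sketch of verifying microarray calls by real-time RT-PCR:
# compare array log2 fold changes with RT-PCR fold changes for a few genes.
# All values are hypothetical; the study presents this as bar graphs.
import numpy as np

genes = ["SPP1", "EFNB2", "E2F2", "KRT17", "UCHL1"]
array_log2_fc = np.array([3.1, 2.4, 2.0, -2.8, -3.5])   # CC vs NBE, from the array
rtpcr_fold = np.array([7.9, 5.1, 3.6, 0.18, 0.07])      # CC vs NBE, from RT-PCR

rtpcr_log2_fc = np.log2(rtpcr_fold)
r = np.corrcoef(array_log2_fc, rtpcr_log2_fc)[0, 1]
for g, a, q in zip(genes, array_log2_fc, rtpcr_log2_fc):
    print(f"{g}: array {a:+.1f} log2, RT-PCR {q:+.1f} log2")
print(f"Pearson r between platforms: {r:.2f}")
```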
The human NBE showed substantial expression of CK-17, UCHL1, IGFBP7 and SPARC, which were barely detected in CC tissues (Figure 3C).\nDifferentially regulated genes in human CC tissues compared to NBE cells. (A) Venn diagram of genes commonly regulated in the cell and tissue samples. The 342 genes included 53 upregulated and 289 downregulated genes, selected from the cell- and tissue-based microarray databases. (B) Real-time RT-PCR analysis of upregulated genes selected from the list of top 25 genes commonly upregulated in both CC cells (C) and tissues (T) compared to cultured NBE cells (N). *, selected from only the cell-based microarray database. (C) Real-time RT-PCR analysis of downregulated genes selected from the list of top 25 genes commonly downregulated in both CC cells (C) and tissues (T) compared to cultured NBE cells (N). *, selected from only the cell-based microarray database.", "To confirm the reliability of the microarray data and the robustness of the strategy for identifying genes with altered expression, we examined the protein levels of the identified genes using immunohistochemical analysis of human tissues (Figure 4A). We selected three upregulated genes from the genes that were upregulated in both cell and tissue samples. The SPP1, EFNB2 and E2F2 proteins were abnormally overexpressed in the CC cell cytoplasm, and weakly or barely expressed in HCC. We also examined the IRX3, PTTG1, and PPARγ proteins, which were highly upregulated in only the cell samples. IRX3 was the most highly upregulated, and we was strongly expressed in the nucleus of CC cells in the tissue sections, but was barely detectable in the NBE nuclei. PTTG1 and PPARγ were abnormally overexpressed in the CC cell cytoplasm, and their expression was attenuated in poorly differentiated CC. Next, we also used immunohistochemical staining of human CC to examine the KRT17 and UCHL1 proteins, whose genes were both downregulated in CC cells and tissues, and the IGFBP7 and SPARC proteins, which were downregulated in CC cells only. Human NBE showed substantial expression of the CK-17, UCHL1, IGFBP7, and SPARC proteins, but these were barely detectable in CC tissue. However, KRT-17 was clearly positive in HCC (Figure 4B).\nImmunohistochemical staining of differentially expressed proteins in the CC tissues. (A) Immunohistochemical staining with anti-SPP1, anti-EFNB2, anti-E2F2, anti-IRX3, anti-PTTG1 or anti-PPARγ in NBE, human CC tissues with good differentiation (well), moderate differentiation (mod) or poor differentiation (poor), and HCC tissues. The representative genes were selected from the list of top 25 commonly upregulated genes, according to antibody available for immunohistochemistry. *, selected from only cell-based microarray database. The representative picture is one of three immunohistochemical staining replicates of different specimens. (B) Immunohistochemical staining with anti-KRT-17, anti-UCHL1, anti-IGFBP7 or anti-SPARC in the human CC and HCC tissues. *, selected from only cell-based microarray database. The representative picture is one of three immunohistochemical staining replicates of different specimens", "Although it is unknown whether antibodies raised to human proteins recognize hamster proteins, we examined the protein levels of the identified genes using immunohistochemical analysis of hamster CC tissues (Additional file 5). As in humans, the SPP1, EFNB2, and E2F2 proteins were abnormally overexpressed in the hamster CC cell cytoplasm. 
IRX3 was also similarly expressed in the CC cell nucleus, and PTTG1 was differentially expressed in the CC cell cytoplasm. Interestingly, in contrast to human CC cells, PPARγ was preferentially expressed in the hamster CC cell nuclei. Therefore, the immunoreactivity of identified gene proteins in hamster CC seemed to be substantially consistent with that in human CC.", "Previously, we established four human CC cell lines and characterized one with a typical sarcomatoid phenotype of SCK. We classified the other cell lines according to tumor cell differentition, as a poorly differentiated JCK, a moderately differentiated Cho-CK, and a well-differentiated Choi-CK cell line [34]. Two-way unsupervised hierarchical clustering analysis of quadruplicate samples for each cell line was conducted, based on the similarity of expression patterns of all genes (Figure 5). We selected 559 unique genes whose expression differed from the mean by four-fold or more with P < 0.005 by t-test. Cell samples were separated into two main groups, sarcomatoid (SC) and ordinary or adenocarcinomatous CC (AC), by the gene axis. The SC group contained 292 differentially upregulated genes (>four-fold change), and 267 downregulated genes (<0.25-fold change), compared to the AC group. The top 25 genes that were differentially expressed in the sarcomatoid SCK cells compared to the three adenocarcinomatous CC lines are in Additional file 6. Clustering data within groups revealed that the core clusters I and II were associated with transdifferentiation. Genes in cluster I appeared to be downregulated in the SCK cells, compared to ordinary CC cells. In contrast, the genes in cluster II were upregulated in the SCK cells and downregulated in the ordinary CC cells. Cluster I contained the GSTT1, TACSTD, BST2, RAB25, and MAL2 genes. Cluster II contained genes associated with tumor progression and metastasis, including HOXA9, MUC13, and members of the GAGE and CT-45 families [35-38]. Expression of methylation-silenced genes, such as LDHB, BNIP3, UCHL1, and NPTX2 [39-42], was barely detectable in the AC group, but appeared in this cluster.\nUnsupervised hierarchical cluster analysis of differentially expressed genes illustrated in a heat-map. Unsupervised hierarchical clustering separated the samples into two main groups: SC and AC. The samples were independently prepared from the cultured cells four times, and four kinds of CC cells were used: Choi-CK, Cho-CK, JCK and SCK cells. Samples were clustered closer within their own group than in samples from other groups. We selected 559 unique genes with a four-fold or greater difference from the mean and a P < 0.005 by t-test for hierarchical clustering analysis. Cluster I included genes differentially downregulated in the sarcomatoid CC cells compared to three adenomatous CC lines. Cluster II contained genes differentially upregulated in the sarcomatoid CC cells compared to three adenomatous CC lines.", "From 559 genes that were differentially regulated between SCK cells and the three ordinary CC lines, we selected six upregulated genes and six downregulated genes, and examined their mRNA expression using real-time RT-PCR (Figure 6A), which verified the differential expression. We examined protein expression by Western blot analysis of the four CC lines. LDHB, Bnip3, HO-1, and UCHL1 were overexpressed exclusively in SCK cells. The expression of VIM and TWIST1 increased according to tumor dedifferentiation and was highest in SCK cells (Figure 6B, left). 
In contrast, LCN2, S100P, KRT7, KRT19, GPX1, and EFNA1 were preferentially expressed in Choi-CK, Cho-CK and JCK cells, but minimally expressed in SCK cells (Figure 6B, right). Because LDHB, BNIP3, and UCHL1 are well-known methylation-silenced genes in tumors [39-41], and are highly expressed in SCK cells, this suggested that DNA demethylation was involved in CC. To confirm this hypothesis, we treated the AC cells with the demethylating agent Aza, which dramatically restorated expression of the silenced UCHL1 gene in these cells (Figure 6C). In addition, we performed immunohistochemical examination of protein expression according to tumor dedifferentiation in human CC tissue (Figure 6D). As expected, HO-1 was exclusively overexpressed in SC, while TWIST1 was overexpressed in the poorly differentiated and SC cells. In contrast, LCN2 was exclusively downregulated in SC, while EFNA1 expression decreased with tumor dedifferentiation. Therefore, expression of these proteins clearly correlated with clinicopathological features such as tumor differentiation and EMT change, in CC tissues.\nGenes and proteins differentially expressed in sarcomatoid CC and adenomatous CC cells. (A) Real-time RT-PCR analysis of upregulated (left) and downregulated (right) genes selected from the list of top 25 genes differentially expressed in sarcomatoid SCK cells and three adenocarcinomatous CC cell lines. (B) Immunoblot of upregulated (left) and down-regulated (right) proteins in sarcomtoid SCK cells compared to three adenocarcinomatous CC cell lines. *, selected from the top 100 genes differentially expressed in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines. Lane 1, Choi-CK cells. Lane 2, Cho-CK cells. Lane 3, JCK cells, Lane 4, SCK cells. (C) Immunoblot of UCHL1. Expression was restored in adenocarcimatous CC cells by treatment of 5 μM of Aza for 4 days, compared to vehicle control (VC). (D) Immunohistochemical staining of down- or upregulated proteins according to tumor dedifferentiation. The representative picture is one of three immunohistochemical staining replicates of different specimens. Well, well-differentiated CC. Mod, moderately differentiated CC. Poor, poorly differentiated CC. Sar, sarcomatoid CC. *, selected from the top 100 genes differentially expressed in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines.", "In this study, our experimental design primarily investigated the gene expression profiles of 10 cell lines and 19 CC tissues, and compared these profiles to those from four cultured NBE cell line using genome-wide BeadChip microarray analysis. Transdifferentiation-related genes were analyzed by same method. Using unsupervised hierarchical clustering analysis, we found that the SPP1, EFNB2, and E2F genes were commonly upregulated in both cell and tissue samples. IRX3, PTTG1, and PPARγ were upregulated in the cell samples, and were immunohistochemically verified in human and hamster CC tissues. SPP1 (osteopontin), a secretory adhesive alycoprotein, was identified as a highly overexpressed gene in CC lines and tissues. SPP1 is a ligand of CD44 that binds to αV-containing integrins and is important in malignant cell attachment and tumor invasion [43]. It was a highly overexpressed gene in HCC, and its expression correlated with earlier recurrence, poorer prognosis, and metastasis [44]. 
Discussion

In this study, our experimental design primarily investigated the gene expression profiles of 10 cell lines and 19 CC tissues, and compared these profiles to those of four cultured NBE cell samples using genome-wide BeadChip microarray analysis. Transdifferentiation-related genes were analyzed by the same method. Using unsupervised hierarchical clustering analysis, we found that the SPP1, EFNB2, and E2F genes were commonly upregulated in both cell and tissue samples. IRX3, PTTG1, and PPARγ were upregulated in the cell samples, and were immunohistochemically verified in human and hamster CC tissues. SPP1 (osteopontin), a secretory adhesive glycoprotein, was identified as a highly overexpressed gene in CC lines and tissues. SPP1 is a ligand of CD44 that binds to αV-containing integrins and is important in malignant cell attachment and tumor invasion [43]. It was a highly overexpressed gene in HCC, and its expression correlated with earlier recurrence, poorer prognosis, and metastasis [44]. Consistent with our findings, a recent oligonucleotide microarray study reported that SPP1 was the most highly expressed gene in intrahepatic cholangiocarcinoma [45]. EFNB2 was identified as a preferentially expressed gene in CC. EFNB2 overexpression is reported to correlate significantly with the number of lymph node metastases and clinical stage in esophageal cancer [46]. Several reports have examined concomitant expression of the ligand EFNB2 and its receptor EphB4 in leukemia-lymphoma cell lines [47] and in endometrial cancer [48]. E2Fs 1-3 are characterized as "activator E2Fs", since their binding to promoters results in increased transcription, while E2Fs 4 and 5 are "repressor E2Fs", since they form complexes with p130, HDACs, and other factors to block transcription [49]. During hepatocarcinogenesis in c-myc/TGF-alpha double-transgenic mice, expression of E2F-1 and E2F-2 increases, and putative E2F target genes are induced [50].

For immunohistochemical verification, representative genes were selected from the list of the top 25 commonly upregulated genes, according to antibody availability for immunohistochemistry. In addition, other genes were selected from only the cell-based microarray database. The same immunohistochemical staining in hamster CC tissues induced by C. sinensis infestation was compared with control staining in normal hamster livers. IRX3 is involved in dorsal-ventral patterning in spinal cord development and in coordination with other homeobox genes [51]. IRX3 was preferentially expressed in the examined CC tissues and localized to the nucleus of human and hamster malignant biliary epithelial cells, independent of cell differentiation. A methylated CpG island was detected in exon 2 of the IRX3 locus, rather than in the promoter, and is responsible for IRX3 overexpression in brain tumor cells and tissues [52]. PTTG1, a critical mitotic checkpoint protein, is a known proto-oncogene that is highly expressed in HCC [53]. Our data showed that PTTG1 was preferentially expressed in the cytoplasm of human and hamster CC cells. PPAR-γ, a member of the nuclear receptor superfamily, functions as a ligand-activated transcription factor [54]. It is overexpressed in a variety of cancers, including HCC and pancreatic cancer [55,56]. Positive immunostaining was localized in the cytoplasm and nuclei of human CC cells; in hamster CC cells, however, positive immunostaining was detected exclusively in the nuclei. Our data also immunohistochemically validated the downregulation of the KRT17, UCHL1, IGFBP7, and SPARC proteins. Our hamster model showed expression patterns similar to those of human CC-related genes and therefore might be a relevant model for the study of human CC.

Analysis of genes involved in the transdifferentiation of CC cells showed two clusters along the gene axis, with genes upregulated (cluster II) and downregulated (cluster I) in the SC group compared to the AC group. The mesenchymal antigen VIM and the transcription factor TWIST1 were upregulated in JCK and SCK cells with tumor dedifferentiation. Overexpression of these proteins is reported to be associated with the EMT [57,58].
Intriguingly, genes silenced by promoter hypermethylation during CC development were re-expressed at the point of sarcomatous transdifferentiation, implying that demethylation may be involved in the EMT progression of CC.

In addition to tumor-related genes known to be overexpressed in intrahepatic CC, we identified other strongly and consistently dysregulated genes in CC that are known to be involved in other human cancers. Our data support a correlation between the expression of these genes and CC tumor differentiation, and the gene expression patterns found in this study are consistent with those associated with a poor clinical prognosis for this cancer. Gene expression profiling thus appears to be a useful diagnostic tool, especially for differentiating CC from other liver masses, as well as for the subclassification of intrahepatic CC compared to histopathological findings.

Conclusions

Gene expression profiling appears to be a useful diagnostic tool, especially for differentiating CC from other liver masses, as well as for the subclassification of intrahepatic CC compared to histopathological findings. The most consistently overexpressed genes are candidate therapeutic targets, and related genes can be used for predicting survival and outcomes for different therapeutic modalities.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

MS and IC performed most of the experiments and drafted the manuscript. ML and GY carried out the tissue collection and the establishment of cell lines. XC participated in the immunohistochemical analysis. BC and IK participated in the design and coordination of the study and helped to draft the manuscript. EA and SL participated in the array data processing and analysis. DK conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/11/78/prepub
[ "Background", "Methods", "Cell lines and cultures", "5-Aza-2'-deoxycytidine (Aza) treatment", "Patients and tissue samples", "Primer labeling and Illumina Beadchip array hybridization", "Data analysis", "Immunoblotting", "Immunohistochemistry", "Real-time RT-PCR", "Animal model of cholangiocarcinoma", "Results", "Gene expression patterns distinguish CC cells from cultured NBE cells", "Gene expression patterns distinguish CC tissues from cultured NBE cells", "Differential expression and verification of CC-related genes", "Immunohistochemical analysis of CC-related genes", "Immunohistochemical analysis in hamster model of CC", "Gene expression patterns distinguish the SCK cell line from three CC cell lines", "Expressions of transdifferentiation-related genes", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history", "Supplementary Material" ]
[ "Cholangiocarcinoma (CC) is a highly lethal adenocarcinoma arising from bile duct epithelial cells. CC accounts for approximately 15% of the total liver cancer cases worldwide, and its incidence is rising [1,2]. The prognosis for CC is quite poor because of difficulties in early diagnosis, and relative resistance of the tumors to chemotherapy [3,4]. At the time of diagnosis, approximately 70% of CC patients have an occult metastasis or advanced local disease that precludes curative resection. Of candidates for curative resection, 30% develop recurrent disease at the anastomotic site or within the intrahepatic biliary tree, and succumb to disease progression or cholangitis [5]. Established risk factors for ductal cholangiocarcinomas include primary sclerosing cholangitis, infection with Clonorchis sinensis or Opisthorchis viverrini (liver flukes), Calori's disease, congenital choledochal cysts, and chronic intrahepatic lithiasis [6]. However, for most CCs, the cause is unknown.\nRecently, molecular investigations have provided evidence that CC carcinogenesis involves a number of genetic alterations, including activating point mutations in the K-ras oncogene, and in p53 and BRAF [7-9]. The deregulated expression of a number of other genes has also been reported, and cyclooxygenase-2 and c-erbB-2 are frequently overexpressed in CCs, suggesting an involvement in early biliary carcinogenesis [10]. In addition, increased expression of interleukin-6 is frequently observed in CC [11]. CC also develops after the liver-specific targeted disruption of the tumor suppressors SMAD4 and PTEN [12]. The incidence of sarcomatoid changes in CC is estimated to be approximately 5% [13], and sarcomatoid cells are thought to result from de-differentiation of ordinary carcinomatous CC cells. Sarcomatoid neoplasms are highly aggressive and have a poorer survival rate than ordinary CCs [14], but the underlying molecular alterations, which may be related to the epithelial-mesenchymal transition (EMT), remain unclear. Little extensive genome-wide information about altered gene expression in CCs is available, and only a few published studies have reported a comprehensive analysis of gene expression among biliary tract cancers in general [15,16]. The advancement of microarray technology now enables us to analyze genome-wide gene expression in a single experiment, opening avenues for the molecular classification of tumors, detection of the biological nature of tumors, and prediction of prognosis and sensitivity to treatments.\nIn this study, we generated genome-wide gene expression profiles of 10 cell lines (9 CC cell lines and 1 immortalized cholangiocyte line), and 19 CC tissues using a BeadChip oligonucleotide technology containing 48,000 genes. This procedure allowed us to observe a comprehensive pattern of gene expression in CC compared to cultured normal biliary epithelia (NBE). In addition, we identified a set of genes associated with sarcomatoid transdifferentiation. These data are useful not only because they provide a more profound understanding of cholangiocarcinogenesis and transdifferentiation, but also because they may help to develop diagnostic tools and improve the accuracy of CC prognosis.", "[SUBTITLE] Cell lines and cultures [SUBSECTION] Tumor tissues were obtained from surgical specimens and biopsy specimens in Korean cholangiocarcinoma patients. Tumor tissues were washed three times in Opti-MEM I (Gibco, Grand Island, NY) containing antibiotics. 
Washed tissue was transferred to a sterile Petri dish and finely minced with scalpels into 1- to 2-mm³ fragments. Tissue fragments in culture medium were seeded in T25 culture flasks (Corning, Medfield, MA) in Opti-MEM supplemented with 10% fetal bovine serum (FBS, Gibco), 30 mM sodium bicarbonate and antibiotics. Tumor cells were cultured undisturbed and passaged as described [17]. Near the 20th passage, the medium was changed from Opti-MEM I to DMEM supplemented with 10% FBS and antibiotics. NBE cells were isolated from mucosal slices of normal bile ducts, with informed consent from liver transplantation donors, and cultured ex vivo in T25 culture flasks in Opti-MEM supplemented with 10% FBS, 30 mM sodium bicarbonate and antibiotics at 37°C with 5% CO2 in air. Near-confluent NBE cells were harvested and stored at -80°C until use. Cells were routinely tested for mycoplasma using a Gen-Probe kit (San Diego, CA) and found to be negative. The CC cell lines are listed in Table 1.

Table 1. Clinicopathological features of nine patients with intrahepatic cholangiocarcinomas used to generate CC cell lines. M, male; F, female; C. sinensis, Clonorchis sinensis; HCC, hepatocellular carcinoma; WD, well differentiated; MD, moderately differentiated; PD, poorly differentiated. *International Hepato-Pancreato-Biliary Association classification.

5-Aza-2'-deoxycytidine (Aza) treatment

Choi-CK, Cho-CK, and JCK cells were seeded at 1 × 10⁶ cells/ml. After overnight culture, cells were treated with 5 μM of the DNA demethylating agent Aza (Sigma-Aldrich, St. Louis, MO) for 4 days and then harvested.
Patients and tissue samples

CC tissues were obtained with informed consent from Korean patients who underwent hepatectomy and common bile duct exploration at Chonbuk National University Hospital. All tumors were clinically and histologically diagnosed as cholangiocarcinoma. Detailed clinicopathological data for the 19 samples are in Table 2. All samples were immediately frozen in liquid nitrogen tanks. Patient information was obtained from medical records. Clinical stage was determined according to the International Hepato-Pancreato-Biliary Association (IHPBA) classification [18].

Table 2. Clinicopathological features of the 19 CC samples used for microarray analysis. HCC, hepatocellular carcinoma; M, male; F, female; A, anterior segment; P, posterior segment; Med, medial segment; L, lateral segment; MF, mass-forming type; PDI, periductal infiltrating type; IDG, intraductal growth type; WD, well differentiated; MD, moderately differentiated; PD, poorly differentiated; NA, not available. *International Hepato-Pancreato-Biliary Association classification.

Primer labeling and Illumina BeadChip array hybridization

Total RNA from CC samples was isolated using TRIzol reagent (Invitrogen, CA) according to the manufacturer's instructions. RNA quality was determined by gel electrophoresis, and concentrations were determined using an Ultrospec 3100 pro spectrophotometer (Amersham Bioscience, Buckinghamshire, UK). Biotin-labeled cRNA samples for hybridization were prepared according to Illumina's recommended sample-labeling procedure: 500 ng of total RNA was used for cDNA synthesis, followed by an amplification/labeling step (in vitro transcription) to synthesize biotin-labeled cRNA using the Illumina TotalPrep RNA Amplification kit (Ambion Inc., Austin, TX). cRNA concentrations were measured by the RiboGreen method (Quant-iT RiboGreen RNA assay kit; Invitrogen-Molecular Probes, ON, Canada) using a Victor3 spectrophotometer (PerkinElmer, CT), and cRNA quality was verified on a 1% agarose gel. Labeled, amplified material (1500 ng per array) was hybridized to Illumina Human-6 BeadChips v2 containing 48,701 probes for 24,498 genes, according to the manufacturer's instructions (Illumina, San Diego, CA). Array signals were developed with Amersham fluorolink streptavidin-Cy3 (GE Healthcare Bio-Sciences, Little Chalfont, UK) following the BeadChip manual.
Arrays were scanned with an Illumina BeadArray Reader confocal scanner (BeadStation 500GXDW; Illumina) according to the manufacturer's instructions. Array data processing and analysis were performed using Illumina BeadStudio software; the BeadStudio Gene Expression Module was used to analyze the gene expression data from the scanned microarray images generated by the BeadArray Reader.

Data analysis

Normalization algorithms were used to adjust sample signals and minimize the effects of variation from non-biological factors. To reduce variation between microarrays, the intensity values for samples on each microarray were rescaled using the quantile normalization method in the BeadStudio module. Measured gene expression values were log2-transformed and median-centered across genes and samples for further analysis. To generate an overview of the gene expression profile and to identify major relationships among cell lines, we used unsupervised hierarchical clustering analysis. Genes with an expression ratio of at least a two-fold difference relative to the median gene expression level across all samples, in at least 10% of samples, were selected for clustering analysis. Average-linkage hierarchical cluster analysis was carried out using the Pearson correlation as the similarity metric, with the GeneCluster/TreeView program (http://rana.lbl.gov/EisenSoftware.htm). Differentially expressed genes were selected by t-test, with the false discovery rate (FDR) and q-values as gene significance measures, using R software (version 2.5). Because significance varied across the analyzed comparisons, a fixed FDR (or q-value) cut-off was not practical; we therefore used a t-test threshold of P = 0.01. To ascertain biological relevance, a fold-change cut-off value of 2 or 4 from the mean was chosen.
The gene ontology (GO) program (http://david.abcc.ncifcrf.gov/) was used to categorize genes into subgroups based on biological function. Values for each GO group were calculated as a percentage of the total mRNA change, and the Fisher exact test was used to determine whether the proportions of genes in each category differed by group. The microarray data were deposited in the Gene Expression Omnibus (GEO) database (Accession No. GSE22633).
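The exact scripts used for these steps are not given in the paper; the block below is a compact R sketch of the pre-processing, gene filtering, differential-expression selection and category-level Fisher test described in this subsection. It assumes `raw` is the normalised probe-intensity matrix exported from BeadStudio (probes in rows, samples in columns) and `group` is a factor such as CCC versus NBE; the GO counts at the end are placeholders, not values from the study.

```r
# Pre-processing: log2 transform, then median-centre genes and samples
x <- log2(raw)
x <- sweep(x, 1, apply(x, 1, median))   # centre each gene on its median
x <- sweep(x, 2, apply(x, 2, median))   # centre each sample on its median

# Keep genes changed >= 2-fold (|log2 ratio| >= 1) relative to their median
# in at least 10% of samples, for the clustering overview
keep <- rowMeans(abs(x - apply(x, 1, median)) >= 1) >= 0.10
x.cl <- x[keep, ]

# Differential expression: per-gene t-test, BH-adjusted FDR, fold-change filter
p   <- apply(x, 1, function(v) t.test(v[group == "CCC"], v[group == "NBE"])$p.value)
fc  <- rowMeans(x[, group == "CCC"]) - rowMeans(x[, group == "NBE"])  # log2 scale
fdr <- p.adjust(p, method = "BH")
de  <- which(p < 0.01 & abs(fc) >= 1)   # e.g. P < 0.01 and >= 2-fold

# Over-representation of one GO category among the DE genes (Fisher exact test);
# rows = in/not in the category, columns = DE / not DE. Counts 40 and 560 are made up.
tab <- matrix(c(40, length(de) - 40, 560, nrow(x) - length(de) - 560), nrow = 2)
fisher.test(tab)$p.value
```

In the study itself the category assignment was done through the DAVID web tool rather than in R, so the Fisher test above only illustrates the statistic being applied per category.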
Immunoblotting

Extracted protein (30 μg) from cell lysates was resolved by SDS-PAGE and transferred to a nitrocellulose membrane. Membranes were incubated for 1 h at room temperature with primary antibody at 1:1000 dilution. After incubation, blots were washed three times in TBS/0.1% Tween 20. Immunoreactivity was detected using alkaline phosphatase-conjugated goat anti-rabbit IgG or a commercial chemiluminescence detection kit (Amersham), according to the manufacturer's instructions.

Immunohistochemistry

Immunohistochemical staining was performed on formalin-fixed, paraffin-embedded 4-μm tissue sections, as described previously [19]. Briefly, deparaffinized sections were rehydrated and pretreated by microwave epitope retrieval (750 W for 15 min in 10 mM citrate buffer, pH 6.0). Before applying primary antibodies, endogenous peroxidase activity was inhibited with 3% hydrogen peroxide, and a blocking step with biotin and bovine albumin was performed. Primary monoclonal or polyclonal antibodies were detected using a secondary biotinylated antibody and a streptavidin-horseradish peroxidase conjugate according to the manufacturer's instructions (DAKO, Glostrup, Denmark). Counterstaining was performed with Meyer's hematoxylin. Tumors were evaluated for the percentage of positive cells and the staining intensity. Negative controls were samples incubated with either PBS or mouse IgG1 instead of primary antibody.

Real-time RT-PCR

RNA prepared from dissected tissues was precipitated with isopropanol and dissolved in DEPC-treated distilled water. Reverse transcription (RT) was performed using 2 μg total RNA, 50 μM decamer primers and 1 μl (200 units) of Superscript II (Invitrogen) at 37°C for 50 min, as previously described. Specific primers for each gene were designed using the Primerdepot website (http://primerdepot.nci.nih.gov/) and are listed in Additional file 1. The 18S ribosomal RNA primer set (Applied Biosystems, Foster City, CA) was used as the invariant control. The real-time RT-PCR reaction mixture consisted of 10 ng reverse-transcribed total RNA, 167 nM forward and reverse primers, and 2× PCR master mix in a final volume of 10 μl; PCR was performed in 384-well plates using the ABI Prism 7900HT Sequence Detection System (Applied Biosystems).
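The paper reports relative expression against the 18S rRNA invariant control but does not show the calculation; a minimal comparative-Ct (2^-ΔΔCt) sketch in R is given below under that assumption. The Ct values are illustrative only, not data from the study.

```r
# Relative quantification by the comparative Ct (2^-ddCt) approach,
# using the 18S rRNA assay as the invariant control. Ct values are placeholders.
ct.target.tumor  <- 24.1   # e.g. a target gene in a CC sample
ct.ref.tumor     <- 12.3   # 18S in the same CC sample
ct.target.normal <- 28.6   # the target gene in NBE
ct.ref.normal    <- 12.1   # 18S in NBE

dct.tumor   <- ct.target.tumor  - ct.ref.tumor    # normalise to 18S within each sample
dct.normal  <- ct.target.normal - ct.ref.normal
ddct        <- dct.tumor - dct.normal
fold.change <- 2^(-ddct)                          # expression in CC relative to NBE
fold.change
```

With the illustrative numbers above the target gene would come out roughly 26-fold higher in the tumor sample, the kind of enrichment reported for the commonly upregulated genes.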
Animal model of cholangiocarcinoma

The hamster CC model was modified from a previous study [20]. On the first day of the experiment, hamsters in the experimental group were infected with 15 metacercariae of the liver fluke C. sinensis. One day after parasite infestation, hamsters received 15 ppm dimethylnitrosamine (DMN; Kasei, Japan) in the drinking water for 4 weeks with a normal diet. Thereafter, hamsters were given tap water with a normal diet for the rest of the study. An interim stage of cholangiocarcinogenesis was confirmed at 8 weeks after the start of the experiment. Control and CC model hamsters were maintained for a total of 27 weeks to allow CC to develop.
Results

Gene expression patterns distinguish CC cells from cultured NBE cells

Using BeadChip microarray analysis, we compared the gene expression profiles of nine CC cell lines, an immortalized biliary epithelial cell line, and four types of NBE cells. We selected 828 unique genes with a two-fold or greater expression difference from the mean and P < 0.01 by t-test. Unsupervised hierarchical clustering analysis of all samples was based on the similarity of expression patterns of all genes (Figure 1). Cell samples were separated into two main groups, the NBE cluster and the transformed and immortalized biliary epithelial cells (CCC cluster). Each distinctive gene cluster was identified by delineation on the hierarchical clustering dendrogram.
Cluster I consisted of genes upregulated in CC cells, including tumor-related genes such as LGR4, AGR2, PCAF, TMEM97, FRAT2, EFNB2 and ZIC2 [21-27]. Cluster II included genes underexpressed in CC cells; these were mainly tumor suppressor genes such as GREM1, THY1, STC2, SERPINE1, SPARC and TAGLN [28-33]. Cluster III comprised genes upregulated in NBE cells and contained the PDGFRA, CD248, and BDKRB1 genes.

Figure 1. Unsupervised hierarchical clustering of four biliary epithelial cell cultures, one immortalized cholangiocyte cell line and nine CC cell lines. Unsupervised hierarchical clustering separated the samples into two main groups: normal biliary epithelial cells (NBE), isolated from mucosal slices of normal bile ducts and cultured ex vivo as described in Methods, and cholangiocarcinoma cells (CCC). Data are in matrix format, with columns representing individual cell lines and rows representing each gene. Red, high expression; green, low expression; black, no significant change in expression level between the mean and the sample. A hierarchical clustering algorithm was applied to all cells and genes using 1 - Pearson correlation coefficient as the similarity measure. Raw data for a single array were summarized using Illumina BeadStudio v3.0 and output as a set of 43,148 values for each individual hybridization. We selected 828 unique genes with a two-fold or greater difference from the mean and P < 0.01 by t-test for hierarchical clustering analysis. Specific gene clusters (Cluster I through Cluster III) were identified in the hierarchical cluster of genes differentially expressed in CCC compared with NBE. CC, cholangiocarcinoma; IMC, immortalized cholangiocytes.

Gene expression patterns distinguish CC tissues from cultured NBE cells

Using BeadChip microarrays, the gene expression profiles of 19 CC tissues and 4 types of NBE cells were compared. We selected 1798 unique genes with a two-fold or greater difference from the mean and P < 0.01 by t-test. Unsupervised hierarchical clustering analysis was performed as described above (Figure 2A). All samples separated into two main groups, NBE and CC tissues (CCT). Each distinctive gene cluster was identified using the hierarchical clustering dendrogram as above. Intriguingly, the CC sample cluster was divided into two subclasses by tumor differentiation: differentiated (Df) and undifferentiated (Udf). Clustering data for the CC group revealed three clusters. Cluster I had genes upregulated in NBE and downregulated in CCT, including SERPINB2, PAPPA, LRRC17, and GREM1. Cluster II contained genes upregulated in the Df CCT and downregulated in NBE. Cluster III included genes upregulated in poorly differentiated or Udf CCT and downregulated in NBE. A supervised hierarchical clustering analysis was performed between the NBE class and the Df and Udf subclasses, based on the similarity of expression patterns of all genes (Figure 2B and 2C). We selected 420 differentially expressed genes in the Df subclass and 646 genes in the Udf subclass for comparison with the NBE class (Additional files 2 and 3).

Figure 2. Unsupervised hierarchical clustering of 4 biliary epithelial cell cultures and 19 CC tissues. (A) Unsupervised hierarchical clustering separated the samples into two main groups. We selected 1798 unique genes with a two-fold or greater difference from the mean and P < 0.01 by t-test for hierarchical clustering analysis.
Specific gene clusters (Cluster I through Cluster III) were identified among the genes differentially expressed in CCT compared to NBE. (B) Supervised hierarchical clustering of the four biliary epithelial cell cultures and seven differentiated CC tissues. We selected 420 unique genes with a four-fold or greater difference from the mean and P < 0.01 by t-test for hierarchical clustering analysis. (C) Supervised hierarchical clustering of the 4 biliary epithelial cell cultures and 10 undifferentiated CC tissues. We selected 646 unique genes with the criteria in (B) for hierarchical clustering analysis.

Differential expression and verification of CC-related genes

We compared the gene lists from the cell-based and tissue-based databases and selected 342 commonly regulated genes, including 53 commonly upregulated genes and 289 commonly downregulated genes (Figure 3A). The top 25 commonly regulated genes in both CCC and CCT compared to NBE are in Additional file 4. To verify the microarray data, we examined the mRNA levels of the identified genes using real-time RT-PCR in human CC tissues. We selected five upregulated genes from the genes commonly upregulated in both the cell and tissue sample classes (Figure 3B). We also chose the IRX3, PTTG1, and PPARγ genes, which were highly upregulated in only the cell sample class. These genes were preferentially expressed in CC cells and tissues. We also examined the expression of the commonly downregulated KRT17 and UCHL1 genes, as well as the IGFBP7 and SPARC genes downregulated only in the cell samples, using real-time RT-PCR in human CC. Human NBE showed substantial expression of CK-17, UCHL1, IGFBP7 and SPARC, which were barely detected in CC tissues (Figure 3C).

Figure 3. Differentially regulated genes in human CC tissues compared to NBE cells. (A) Venn diagram of genes commonly regulated in the cell and tissue samples. The 342 genes included 53 upregulated and 289 downregulated genes, selected from the cell- and tissue-based microarray databases. (B) Real-time RT-PCR analysis of upregulated genes selected from the list of top 25 genes commonly upregulated in both CC cells (C) and tissues (T) compared to cultured NBE cells (N). *, selected from only the cell-based microarray database. (C) Real-time RT-PCR analysis of downregulated genes selected from the list of top 25 genes commonly downregulated in both CC cells (C) and tissues (T) compared to cultured NBE cells (N). *, selected from only the cell-based microarray database.
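How the 342 commonly regulated genes summarised in the Venn diagram (Figure 3A) can be derived from the two comparisons is sketched below in R; the gene-symbol vectors (`up.cells`, `down.cells`, `up.tissues`, `down.tissues`) are assumed to come from the cell-based and tissue-based differential-expression steps and are illustrative names, not objects from the authors' analysis.

```r
# Overlap of the cell-based and tissue-based differential-expression lists,
# as summarised in the Venn diagram of Figure 3A.
common.up    <- intersect(up.cells,   up.tissues)    # 53 commonly upregulated genes in the study
common.down  <- intersect(down.cells, down.tissues)  # 289 commonly downregulated genes
common.genes <- c(common.up, common.down)            # 342 commonly regulated genes
length(common.genes)
```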
Immunohistochemical analysis of CC-related genes

To confirm the reliability of the microarray data and the robustness of the strategy for identifying genes with altered expression, we examined the protein levels of the identified genes by immunohistochemical analysis of human tissues (Figure 4A). We selected three upregulated genes from the genes that were upregulated in both cell and tissue samples. The SPP1, EFNB2 and E2F2 proteins were abnormally overexpressed in the CC cell cytoplasm, and weakly or barely expressed in HCC.
We also examined the IRX3, PTTG1, and PPARγ proteins, which were highly upregulated in only the cell samples. IRX3 was the most highly upregulated; it was strongly expressed in the nuclei of CC cells in the tissue sections but barely detectable in the NBE nuclei. PTTG1 and PPARγ were abnormally overexpressed in the CC cell cytoplasm, and their expression was attenuated in poorly differentiated CC. Next, we also used immunohistochemical staining of human CC to examine the KRT17 and UCHL1 proteins, whose genes were downregulated in both CC cells and tissues, and the IGFBP7 and SPARC proteins, which were downregulated in CC cells only. Human NBE showed substantial expression of the CK-17, UCHL1, IGFBP7, and SPARC proteins, but these were barely detectable in CC tissue. However, KRT-17 was clearly positive in HCC (Figure 4B).

Figure 4. Immunohistochemical staining of differentially expressed proteins in CC tissues. (A) Immunohistochemical staining with anti-SPP1, anti-EFNB2, anti-E2F2, anti-IRX3, anti-PTTG1 or anti-PPARγ in NBE, human CC tissues with good differentiation (well), moderate differentiation (mod) or poor differentiation (poor), and HCC tissues. The representative genes were selected from the list of the top 25 commonly upregulated genes, according to antibody availability for immunohistochemistry. *, selected from only the cell-based microarray database. The representative picture is one of three immunohistochemical staining replicates of different specimens. (B) Immunohistochemical staining with anti-KRT-17, anti-UCHL1, anti-IGFBP7 or anti-SPARC in human CC and HCC tissues. *, selected from only the cell-based microarray database. The representative picture is one of three immunohistochemical staining replicates of different specimens.
Immunohistochemical analysis in hamster model of CC

Although it is unknown whether antibodies raised against human proteins recognize the corresponding hamster proteins, we examined the protein levels of the identified genes by immunohistochemical analysis of hamster CC tissues (Additional file 5). As in humans, the SPP1, EFNB2, and E2F2 proteins were abnormally overexpressed in the hamster CC cell cytoplasm.
In contrast, the genes in cluster II were upregulated in the SCK cells and downregulated in the ordinary CC cells. Cluster I contained the GSTT1, TACSTD, BST2, RAB25, and MAL2 genes. Cluster II contained genes associated with tumor progression and metastasis, including HOXA9, MUC13, and members of the GAGE and CT-45 families [35-38]. Expression of methylation-silenced genes, such as LDHB, BNIP3, UCHL1, and NPTX2 [39-42], was barely detectable in the AC group, but appeared in this cluster.\nUnsupervised hierarchical cluster analysis of differentially expressed genes illustrated in a heat-map. Unsupervised hierarchical clustering separated the samples into two main groups: SC and AC. The samples were independently prepared from the cultured cells four times, and four kinds of CC cells were used: Choi-CK, Cho-CK, JCK and SCK cells. Samples were clustered closer within their own group than in samples from other groups. We selected 559 unique genes with a four-fold or greater difference from the mean and a P < 0.005 by t-test for hierarchical clustering analysis. Cluster I included genes differentially downregulated in the sarcomatoid CC cells compared to three adenomatous CC lines. Cluster II contained genes differentially upregulated in the sarcomatoid CC cells compared to three adenomatous CC lines.\n[SUBTITLE] Expressions of transdifferentiation-related genes [SUBSECTION] From 559 genes that were differentially regulated between SCK cells and the three ordinary CC lines, we selected six upregulated genes and six downregulated genes, and examined their mRNA expression using real-time RT-PCR (Figure 6A), which verified the differential expression. We examined protein expression by Western blot analysis of the four CC lines. LDHB, Bnip3, HO-1, and UCHL1 were overexpressed exclusively in SCK cells. The expression of VIM and TWIST1 increased according to tumor dedifferentiation and was highest in SCK cells (Figure 6B, left). In contrast, LCN2, S100P, KRT7, KRT19, GPX1, and EFNA1 were preferentially expressed in Choi-CK, Cho-CK and JCK cells, but minimally expressed in SCK cells (Figure 6B, right). Because LDHB, BNIP3, and UCHL1 are well-known methylation-silenced genes in tumors [39-41], and are highly expressed in SCK cells, this suggested that DNA demethylation was involved in CC. To confirm this hypothesis, we treated the AC cells with the demethylating agent Aza, which dramatically restored expression of the silenced UCHL1 gene in these cells (Figure 6C). In addition, we performed immunohistochemical examination of protein expression according to tumor dedifferentiation in human CC tissue (Figure 6D). As expected, HO-1 was exclusively overexpressed in SC, while TWIST1 was overexpressed in the poorly differentiated and SC cells. In contrast, LCN2 was exclusively downregulated in SC, while EFNA1 expression decreased with tumor dedifferentiation. Therefore, expression of these proteins clearly correlated with clinicopathological features such as tumor differentiation and EMT change, in CC tissues.\nGenes and proteins differentially expressed in sarcomatoid CC and adenomatous CC cells. (A) Real-time RT-PCR analysis of upregulated (left) and downregulated (right) genes selected from the list of top 25 genes differentially expressed in sarcomatoid SCK cells and three adenocarcinomatous CC cell lines. (B) Immunoblot of upregulated (left) and down-regulated (right) proteins in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines. *, selected from the top 100 genes differentially expressed in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines. Lane 1, Choi-CK cells. Lane 2, Cho-CK cells. Lane 3, JCK cells. Lane 4, SCK cells. (C) Immunoblot of UCHL1. Expression was restored in adenocarcinomatous CC cells by treatment of 5 μM of Aza for 4 days, compared to vehicle control (VC). (D) Immunohistochemical staining of down- or upregulated proteins according to tumor dedifferentiation. The representative picture is one of three immunohistochemical staining replicates of different specimens. Well, well-differentiated CC. Mod, moderately differentiated CC. Poor, poorly differentiated CC. Sar, sarcomatoid CC. *, selected from the top 100 genes differentially expressed in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines.", "Using BeadChip microarray analysis, we compared the gene expression profiles of nine CC cell lines, an immortalized biliary epithelial cell line, and four types of NBE cells. We selected 828 unique genes with a 2-fold or greater expression difference from the mean, with a P < 0.01 by t-test. Unsupervised hierarchical clustering analysis of all samples was based on the similarity in the expression pattern of all genes (Figure 1).
Cell samples were separated into two main groups, the NBE cluster, and the transformed and immortalized biliary epithelial cells (CCC cluster). Each distinctive gene cluster was identified by delineation using a hierarchical clustering dendrogram. Cluster I consisted of genes upregulated in CC cells, which included tumor-related genes such as LGR4, AGR2, PCAF, TMEM97, FRAT2, EFNB2 and ZIC2 [21-27]. Cluster II included genes underexpressed in CC cells. These were mainly tumor suppressor genes such as GREM1, THY1, STC2, SERPINE1, SPARC and TAGLN [28-33]. Cluster III was genes upregulated in NBE cells, and contained the PDGFRA, CD248, and BDKRB1 genes.\nUnsupervised hierarchical clustering of four biliary epithelial cells, one immortalized cholangiocyte cell line and nine CC cells. Unsupervised hierarchical clustering separated the samples into two main groups, normal biliary epithelial cells (NBE) isolated from mucosal slices of normal bile ducts and ex-vivo cultured as described in Methods, and cholangiocarcinoma cells (CCC). Data are in matrix format, with columns representing individual cell lines and rows representing each gene. Red, high expression; green, low expression; black, no significant change in expression level between the mean and sample. A hierarchical clustering algorithm was applied to all cells and genes using the 1 - Pearson correlation coefficient as a similarity measure. Raw data for a single array were summarized using Illumina BeadStudio v3.0 and output to the user was as a set of 43,148 values for each individual hybridization. We selected 828 unique genes with a two-fold or greater difference from the mean and P < 0.01 by t-test, for hierarchical clustering analysis. Specific gene clusters (Cluster 1 through Cluster III) were identified in the hierarchical cluster of the genes differentially expressed in CCC compared with NBE. CC, cholangiocarcinoma; IMC, immortalized cholangiocytes.", "Using BeadChip microarrays, gene expression profiles of 19 CC tissues and 4 types of NBE cells were compared. We selected 1798 unique genes with a 2-fold or greater differences from the mean difference with a P < 0.01 by t-test. Unsupervised hierarchical clustering analysis was as described above (Figure 2A). All samples separated into two main groups, NBE and CC tissues (CCT). Each distinctive gene cluster was identified using a hierarchical clustering dendrogram as above. Intriguingly, the CC sample cluster was divided into two subclasses by tumor differentiation: differentiated (Df) and undifferentiated (Udf). Clustering data for the CC group revealed three clusters. Cluster I had genes upregulated in NBE and downregulated in CCT including SERPINB2, PAPPA, LRRC17, and GREM1. Cluster II contained genes upregulated in the Df CCT and downregulated in NBE. Cluster III included genes upregulated in poorly differentiated or Udf CCT, and downregulated in NBE. A supervised hierarchical clustering analysis was performed between the NBE class, and the Df and the Udf subclasses based on the similarity of expression pattern of all genes (Figure 2B and 2C). We selected 420 differentially expressed genes in the Df subclass, and 646 genes in the Udf subclass for comparison with the NBE class (Additional files 2 and 3).\nUnsupervised hierarchical clustering of 4 biliary epithelial cells and 19 CC tissues. (A) Unsupervised hierarchical clustering separated the samples into two main groups. 
We selected 1798 unique genes with two-fold or greater difference from the mean with P < 0.01 by t-test for hierarchical clustering analysis. Specific gene clusters (Cluster 1 through Cluster III) were identified of differentially expressed in CCT compared to NBE. (B) Supervised hierarchical clustering of four biliary epithelial cells and seven differentiated CC tissues. We selected 420 unique genes with four-fold or greater difference from the mean and P < 0.01 by t-test for hierarchical clustering analysis. (C) Supervised hierarchical clustering of 4 biliary epithelial cells and 10 undifferentiated CC tissues. We selected 646 unique genes with the criteria in B for hierarchical clustering analysis.", "We compared the gene lists from the cell-based and tissue-based databases, and selected 342 commonly regulated genes, including 53 commonly upregulated genes and 289 commonly downregulated genes (Figure 3A). The top 25 commonly regulated genes in both CCC and CCT compared to NBE are in Additional file 4. To verify the microarray data, we examined the mRNA levels of the identified genes using real-time RT-PCR in human CC tissues. We selected five up-regulated genes from the commonly upregulated genes of both the cell and tissue sample classes (Figure 3B). We also chose the IRX3, PTTG1, and PPARγ genes, which were highly upregulated in only the cell sample class. These genes were preferentially expressed in CC cells and tissues. We also examined the expression of the commonly downregulated KRT17 and UCHL1 genes, as well as the cellular downregulated IGFBP7 and SPARC genes using real-time RT-PCR in human CC. The human NBE showed substantial expression of CK-17, UCHL1, IGFBP7 and SPARC, which were barely detected in CC tissues (Figure 3C).\nDifferentially regulated genes in human CC tissues compared to NBE cells. (A) Venn diagram of genes commonly regulated in the cell and tissue samples. The 342 genes included 53 upregulated and 289 downregulated genes, selected from the cell- and tissue-based microarray databases. (B) Real-time RT-PCR analysis of upregulated genes selected from the list of top 25 genes commonly upregulated in both CC cells (C) and tissues (T) compared to cultured NBE cells (N). *, selected from only the cell-based microarray database. (C) Real-time RT-PCR analysis of downregulated genes selected from the list of top 25 genes commonly downregulated in both CC cells (C) and tissues (T) compared to cultured NBE cells (N). *, selected from only the cell-based microarray database.", "To confirm the reliability of the microarray data and the robustness of the strategy for identifying genes with altered expression, we examined the protein levels of the identified genes using immunohistochemical analysis of human tissues (Figure 4A). We selected three upregulated genes from the genes that were upregulated in both cell and tissue samples. The SPP1, EFNB2 and E2F2 proteins were abnormally overexpressed in the CC cell cytoplasm, and weakly or barely expressed in HCC. We also examined the IRX3, PTTG1, and PPARγ proteins, which were highly upregulated in only the cell samples. IRX3 was the most highly upregulated, and we was strongly expressed in the nucleus of CC cells in the tissue sections, but was barely detectable in the NBE nuclei. PTTG1 and PPARγ were abnormally overexpressed in the CC cell cytoplasm, and their expression was attenuated in poorly differentiated CC. 
Next, we also used immunohistochemical staining of human CC to examine the KRT17 and UCHL1 proteins, whose genes were both downregulated in CC cells and tissues, and the IGFBP7 and SPARC proteins, which were downregulated in CC cells only. Human NBE showed substantial expression of the CK-17, UCHL1, IGFBP7, and SPARC proteins, but these were barely detectable in CC tissue. However, KRT-17 was clearly positive in HCC (Figure 4B).\nImmunohistochemical staining of differentially expressed proteins in the CC tissues. (A) Immunohistochemical staining with anti-SPP1, anti-EFNB2, anti-E2F2, anti-IRX3, anti-PTTG1 or anti-PPARγ in NBE, human CC tissues with good differentiation (well), moderate differentiation (mod) or poor differentiation (poor), and HCC tissues. The representative genes were selected from the list of top 25 commonly upregulated genes, according to antibody available for immunohistochemistry. *, selected from only cell-based microarray database. The representative picture is one of three immunohistochemical staining replicates of different specimens. (B) Immunohistochemical staining with anti-KRT-17, anti-UCHL1, anti-IGFBP7 or anti-SPARC in the human CC and HCC tissues. *, selected from only cell-based microarray database. The representative picture is one of three immunohistochemical staining replicates of different specimens", "Although it is unknown whether antibodies raised to human proteins recognize hamster proteins, we examined the protein levels of the identified genes using immunohistochemical analysis of hamster CC tissues (Additional file 5). As in humans, the SPP1, EFNB2, and E2F2 proteins were abnormally overexpressed in the hamster CC cell cytoplasm. IRX3 was also similarly expressed in the CC cell nucleus, and PTTG1 was differentially expressed in the CC cell cytoplasm. Interestingly, in contrast to human CC cells, PPARγ was preferentially expressed in the hamster CC cell nuclei. Therefore, the immunoreactivity of identified gene proteins in hamster CC seemed to be substantially consistent with that in human CC.", "Previously, we established four human CC cell lines and characterized one with a typical sarcomatoid phenotype of SCK. We classified the other cell lines according to tumor cell differentition, as a poorly differentiated JCK, a moderately differentiated Cho-CK, and a well-differentiated Choi-CK cell line [34]. Two-way unsupervised hierarchical clustering analysis of quadruplicate samples for each cell line was conducted, based on the similarity of expression patterns of all genes (Figure 5). We selected 559 unique genes whose expression differed from the mean by four-fold or more with P < 0.005 by t-test. Cell samples were separated into two main groups, sarcomatoid (SC) and ordinary or adenocarcinomatous CC (AC), by the gene axis. The SC group contained 292 differentially upregulated genes (>four-fold change), and 267 downregulated genes (<0.25-fold change), compared to the AC group. The top 25 genes that were differentially expressed in the sarcomatoid SCK cells compared to the three adenocarcinomatous CC lines are in Additional file 6. Clustering data within groups revealed that the core clusters I and II were associated with transdifferentiation. Genes in cluster I appeared to be downregulated in the SCK cells, compared to ordinary CC cells. In contrast, the genes in cluster II were upregulated in the SCK cells and downregulated in the ordinary CC cells. Cluster I contained the GSTT1, TACSTD, BST2, RAB25, and MAL2 genes. 
Cluster II contained genes associated with tumor progression and metastasis, including HOXA9, MUC13, and members of the GAGE and CT-45 families [35-38]. Expression of methylation-silenced genes, such as LDHB, BNIP3, UCHL1, and NPTX2 [39-42], was barely detectable in the AC group, but appeared in this cluster.\nUnsupervised hierarchical cluster analysis of differentially expressed genes illustrated in a heat-map. Unsupervised hierarchical clustering separated the samples into two main groups: SC and AC. The samples were independently prepared from the cultured cells four times, and four kinds of CC cells were used: Choi-CK, Cho-CK, JCK and SCK cells. Samples were clustered closer within their own group than in samples from other groups. We selected 559 unique genes with a four-fold or greater difference from the mean and a P < 0.005 by t-test for hierarchical clustering analysis. Cluster I included genes differentially downregulated in the sarcomatoid CC cells compared to three adenomatous CC lines. Cluster II contained genes differentially upregulated in the sarcomatoid CC cells compared to three adenomatous CC lines.", "From 559 genes that were differentially regulated between SCK cells and the three ordinary CC lines, we selected six upregulated genes and six downregulated genes, and examined their mRNA expression using real-time RT-PCR (Figure 6A), which verified the differential expression. We examined protein expression by Western blot analysis of the four CC lines. LDHB, Bnip3, HO-1, and UCHL1 were overexpressed exclusively in SCK cells. The expression of VIM and TWIST1 increased according to tumor dedifferentiation and was highest in SCK cells (Figure 6B, left). In contrast, LCN2, S100P, KRT7, KRT19, GPX1, and EFNA1 were preferentially expressed in Choi-CK, Cho-CK and JCK cells, but minimally expressed in SCK cells (Figure 6B, right). Because LDHB, BNIP3, and UCHL1 are well-known methylation-silenced genes in tumors [39-41], and are highly expressed in SCK cells, this suggested that DNA demethylation was involved in CC. To confirm this hypothesis, we treated the AC cells with the demethylating agent Aza, which dramatically restorated expression of the silenced UCHL1 gene in these cells (Figure 6C). In addition, we performed immunohistochemical examination of protein expression according to tumor dedifferentiation in human CC tissue (Figure 6D). As expected, HO-1 was exclusively overexpressed in SC, while TWIST1 was overexpressed in the poorly differentiated and SC cells. In contrast, LCN2 was exclusively downregulated in SC, while EFNA1 expression decreased with tumor dedifferentiation. Therefore, expression of these proteins clearly correlated with clinicopathological features such as tumor differentiation and EMT change, in CC tissues.\nGenes and proteins differentially expressed in sarcomatoid CC and adenomatous CC cells. (A) Real-time RT-PCR analysis of upregulated (left) and downregulated (right) genes selected from the list of top 25 genes differentially expressed in sarcomatoid SCK cells and three adenocarcinomatous CC cell lines. (B) Immunoblot of upregulated (left) and down-regulated (right) proteins in sarcomtoid SCK cells compared to three adenocarcinomatous CC cell lines. *, selected from the top 100 genes differentially expressed in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines. Lane 1, Choi-CK cells. Lane 2, Cho-CK cells. Lane 3, JCK cells, Lane 4, SCK cells. (C) Immunoblot of UCHL1. 
Expression was restored in adenocarcinomatous CC cells by treatment of 5 μM of Aza for 4 days, compared to vehicle control (VC). (D) Immunohistochemical staining of down- or upregulated proteins according to tumor dedifferentiation. The representative picture is one of three immunohistochemical staining replicates of different specimens. Well, well-differentiated CC. Mod, moderately differentiated CC. Poor, poorly differentiated CC. Sar, sarcomatoid CC. *, selected from the top 100 genes differentially expressed in sarcomatoid SCK cells compared to three adenocarcinomatous CC cell lines.", "In this study, our experimental design primarily investigated the gene expression profiles of 10 cell lines and 19 CC tissues, and compared these profiles to those from four cultured NBE cell lines using genome-wide BeadChip microarray analysis. Transdifferentiation-related genes were analyzed by the same method. Using unsupervised hierarchical clustering analysis, we found that the SPP1, EFNB2, and E2F genes were commonly upregulated in both cell and tissue samples. IRX3, PTTG1, and PPARγ were upregulated in the cell samples, and were immunohistochemically verified in human and hamster CC tissues. SPP1 (osteopontin), a secretory adhesive glycoprotein, was identified as a highly overexpressed gene in CC lines and tissues. SPP1 is a ligand of CD44 that binds to αV-containing integrins and is important in malignant cell attachment and tumor invasion [43]. It was a highly overexpressed gene in HCC, and its expression correlated with earlier recurrence, poorer prognosis, and metastasis [44]. Consistent with our findings, a recent oligonucleotide microarray study reported that SPP1 was the most highly expressed gene in intrahepatic cholangiocarcinoma [45]. EFNB2 was identified as a preferentially expressed gene in CC. EFNB2 overexpression is reported to be significantly correlated with the number of lymph node metastases and clinical stage in esophageal cancer [46]. Several reports have examined concomitant expression of the ligand EFNB2 and its receptor EphB4 in leukemia-lymphoma cell lines [47], and in endometrial cancer [48]. E2Fs 1-3 are characterized as \"activator E2Fs\" since their binding to promoters results in increased transcription, while E2Fs 4 and 5 are \"repressor E2Fs\" since they form complexes with p130, HDACs, and other factors to block transcription [49]. During hepatocarcinogenesis in c-myc/TGFalpha double-transgenic mice, expression of E2F-1 and E2F-2 increases, and putative E2F target genes are induced [50].\nFor immunohistochemical verification, the representative genes were selected from the list of top 25 commonly upregulated genes, according to antibody availability for immunohistochemistry. In addition, other genes were selected from only the cell-based microarray database. The same immunohistochemical staining in hamster CC tissues induced by Clonorchiasis infestation was compared with control staining in normal hamster livers. IRX3 is involved in dorsal-ventral patterning in spinal cord development and coordination with other homeobox genes [51]. IRX3 is preferentially expressed in the examined CC tissues and localized to the nucleus of human and hamster malignant biliary epithelial cells, independent of cell differentiation. A methylated CpG island was detected in exon 2 of the IRX3 locus, rather than in the promoter, and is responsible for IRX3 overexpression in brain tumor cells and tissues [52].
PTTG1, a critical mitotic checkpoint protein, is a known proto-oncogene that is highly expressed in HCC [53]. Our data showed that PTTG1 was preferentially expressed in the cytoplasm of the human and hamster CC cells. PPAR-γ, a member of the nuclear receptor superfamily, functions as a ligand-activated transcription factor [54]. It is overexpressed in a variety of cancers, including HCC and pancreatic cancer [55,56]. Positive immunostaining was localized in the cytoplasm and nuclei of human CC cells. However, positive immunostaining was exclusively detected in the nuclei of the hamster CC cells. Our data also immunohistochemically validated the downregulation of the KRT17, UCHL1, IGFBP7, and SPARC proteins. Our hamster model showed similar expression patterns of human CC-related genes and therefore might be a relevant model to study human CC.\nAnalysis of genes involved in the transdifferentiation of CC cells showed two clusters in the gene axis, with genes that were upregulated (cluster II), and downregulated (cluster I) in the SC group as compared to the AC group. The mesenchymal antigen VIM and the transcription factor TWIST1 were upregulated in JCK and SCK cells by tumor dedifferentiation. The overexpression of these proteins is reported to be associated with the EMT [57,58]. Intriguingly, genes silenced by promoter hypermethylation during CC development were restored at the point of sarcomatous transdifferentiation, which implied that demethylation may be involved in the EMT progression of CC.\nIn addition to tumor-related genes known to be overexpressed in intrahepatic CC, we identified other strongly and consistently dysregulated genes in CC that are known to be involved in other human cancers. Our data support a correlation between the expression of these genes and CC tumor differentiation, and the gene expression patterns found in this study are consistent with those associated with a poor clinical prognosis for this cancer. Gene expression profiling appears to be a useful diagnostic tool, especially for differentiating CC from other liver masses, as well as for the subclassification of intrahepatic CC compared to histopathological findings.", "Gene expression profiling appears to be a useful diagnostic tool, especially for differentiating CC from other liver masses, as well as for the subclassification of intrahepatic CC compared to histopathological findings. The most consistently overexpressed genes are candidate therapeutic targets, and related genes can be used for predicting survival and outcomes for different therapeutic modalities.", "The authors declare that they have no competing interests.", "MS and IC performed most of the experiments and drafted the manuscript. ML and GY carried out the tissue collection and the establishment of cell lines. XC participated in the immunohistochemical analysis. BC and IK participated in the design and coordination of the study and helped to draft the manuscript. EA and SL participated in the array data processing and analysis. DK conceived of the study, and participated in its design and coordination.
All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/78/prepub\n", "Supplementary Table S1: Sequences and accession numbers for the forward (FOR) and reverse (REV) primers used in real-time RT-PCR.\nSupplementary Table S2: List of genes differentially expressed between differentiated cholangiocarcinoma and normal biliary epithelium (NBE).\nSupplementary Table S3: List of genes differentially expressed between undifferentiated cholangiocarcinoma and normal biliary epithelium (NBE).\nSupplementary Table S4: Top 25 genes commonly regulated in both CC cells and tissues compared with cultured biliary epithelial cells.\nSupplementary Figure S1: Immunohistochemical staining with anti-SPP1, anti-EFNB2, anti-E2F2, and anti-IRX3 in hamster CC tissues induced by Clonorchiasis infestation. Control stainings were performed in normal hamster livers. *, selected from only cell-based microarray database.\nSupplementary Table 5. Top 25 genes differentially expressed in the sarcomatoid SCK cells compared with 3 adenocarcinomatous CC cell lines." ]
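The record above repeatedly describes the same gene-selection rule: keep genes whose expression differs from the mean across all samples by a given fold change (two-fold for the cell and tissue comparisons, four-fold for the sarcomatoid comparison) at a t-test P value below a stated cut-off. The sketch below illustrates that filtering step in Python. It is not the authors' code: the expression matrix, gene names and group sizes are synthetic, the data are assumed to be on a log2 scale, and Welch's t-test is an assumption since the text only says "t-test".

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Toy log2 expression matrix: rows = genes, columns = samples
    # (4 normal biliary epithelium samples followed by 9 tumor samples).
    genes = [f"gene_{i}" for i in range(1000)]            # hypothetical probe names
    normal = rng.normal(8.0, 0.5, size=(1000, 4))
    tumor = rng.normal(8.0, 0.5, size=(1000, 9))
    tumor[:50] += 2.5                                     # spike in some "upregulated" genes
    expr = np.hstack([normal, tumor])

    # Fold change of the tumor group relative to the mean across all samples,
    # mirroring the "two-fold or greater difference from the mean" criterion.
    overall_mean = expr.mean(axis=1)
    log2_fc = expr[:, 4:].mean(axis=1) - overall_mean

    # Welch's t-test between the two groups (the articles state only "t-test").
    t_stat, p_val = stats.ttest_ind(expr[:, 4:], expr[:, :4], axis=1, equal_var=False)

    selected = [g for g, fc, p in zip(genes, log2_fc, p_val)
                if abs(fc) >= 1.0 and p < 0.01]           # |FC| >= 2-fold, P < 0.01
    print(f"{len(selected)} genes pass the filter")

Raising the thresholds to abs(fc) >= 2.0 and p < 0.005 would correspond to the stricter four-fold criterion used for the sarcomatoid-versus-adenocarcinomatous comparison.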
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
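The clustering reported throughout the record above is two-way unsupervised hierarchical clustering with 1 - Pearson correlation as the distance, visualised as a heat-map. A minimal SciPy sketch of that step follows; the matrix and sample labels are synthetic, and average linkage is an assumption because the linkage method is not stated in the text.

    import numpy as np
    from scipy.cluster.hierarchy import dendrogram, linkage
    from scipy.spatial.distance import pdist

    rng = np.random.default_rng(1)

    # Hypothetical filtered matrix: 40 selected genes x 13 samples (4 NBE + 9 CC).
    expr = rng.normal(0.0, 1.0, size=(40, 13))
    expr[:20, 4:] += 2.0                                  # block structure so clusters emerge
    sample_labels = [f"NBE_{i}" for i in range(4)] + [f"CC_{i}" for i in range(9)]

    # pdist with metric="correlation" is exactly 1 - Pearson correlation.
    sample_tree = linkage(pdist(expr.T, metric="correlation"), method="average")
    gene_tree = linkage(pdist(expr, metric="correlation"), method="average")

    # Leaf order of the sample dendrogram; drawing the heat-map itself is omitted.
    order = dendrogram(sample_tree, no_plot=True, labels=sample_labels)["ivl"]
    print(order)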
Low prevalence of H. pylori infection in HIV-positive patients in the northeast of Brazil.
21333017
This study conducted in Northeastern Brazil, evaluated the prevalence of H. pylori infection and the presence of gastritis in HIV-infected patients.
BACKGROUND
A total of 113 HIV-positive and 141 age-matched HIV-negative patients who underwent upper gastrointestinal endoscopy for dyspeptic symptoms were included. H. pylori status was evaluated by urease test and histology.
METHODS
The prevalence of H. pylori infection was significantly lower (p < 0.001) in HIV-infected (37.2%) than in uninfected (75.2%) patients. There were no significant differences between H. pylori status and gender, age, HIV viral load, antiretroviral therapy and the use of antibiotics. A lower prevalence of H. pylori was observed among patients with T CD4 cell count below 200/mm3; however, it was not significant. Chronic active antral gastritis was observed in 87.6% of the HIV-infected patients and in 80.1% of the control group (p = 0.11). H. pylori infection was significantly associated with chronic active gastritis in the antrum in both groups, but it was not associated with corpus chronic active gastritis in the HIV-infected patients.
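The group sizes and positivity rates in this abstract fully determine the 2x2 comparison, so the headline p < 0.001 can be checked directly. The snippet below is a worked example, not the authors' analysis; the Yates-corrected chi-square (with Fisher's exact test as a cross-check) matches the tests named in the methods of this record, but which of the two was applied to this particular table is an assumption.

    from scipy.stats import chi2_contingency, fisher_exact

    # 2x2 table from the abstract: H. pylori positive / negative counts
    # in 113 HIV-positive and 141 HIV-negative dyspeptic patients.
    table = [[42, 113 - 42],      # HIV-positive: 42/113 = 37.2% infected
             [106, 141 - 106]]    # HIV-negative: 106/141 = 75.2% infected

    chi2, p, dof, expected = chi2_contingency(table, correction=True)  # Yates' correction
    odds_ratio, p_exact = fisher_exact(table)

    print(f"chi-square = {chi2:.1f}, p = {p:.2e}")        # p is far below 0.001
    print(f"odds ratio = {odds_ratio:.2f}, Fisher p = {p_exact:.2e}")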
RESULTS
We demonstrated that the prevalence of H. pylori was significantly lower in HIV-positive patients compared with HIV-negative ones. However, corpus gastritis was frequently observed in the HIV-positive patients, pointing to different mechanisms than H. pylori infection in the genesis of the lesion.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Brazil", "Case-Control Studies", "Comorbidity", "Endoscopy, Gastrointestinal", "Female", "Gastritis", "HIV Infections", "Helicobacter Infections", "Helicobacter pylori", "Humans", "Male", "Middle Aged", "Prevalence", "Risk Factors", "Upper Gastrointestinal Tract", "Young Adult" ]
3055236
null
null
Methods
The study was approved by the Ethical Committee of Research of the University of Ceará, and informed consent was obtained from each patient. This prospective cross-sectional study was carried out at the Hospital São José, a major referral center for assistance of HIV-infected individuals in the city of Fortaleza, Ceará, Brazil. From May 2001 to April 2003, 113 HIV-positive patients who underwent upper gastrointestinal endoscopy for dyspeptic symptoms were included in the study. The control group was composed of 141 HIV-negative patients who were undergoing upper gastrointestinal endoscopy for investigation of dyspeptic symptoms at the University Hospital Walter Cantideo, Fortaleza, Ceara, Brazil. Patients and age matched controls (interval of 10 years) were enrolled at the same period. All patients gave written informed consent to participate in the study and answered a questionnaire about symptoms and consumption of medications, including acid secretion inhibitors and antibiotics six months before endoscopy. In the HIV-positive patient group, data regarding the risk factors for HIV infection and antiretroviral therapy were also obtained. Total T CD4 cell count and HIV viral load were accepted as valid if the blood sample for their determination had been taken within 1 month before or after the entrance in the study. [SUBTITLE] Upper gastrointestinal endoscopy [SUBSECTION] Gastro-endoscopy was performed with Olympus video endoscopes (Olympus Optical Co, Ltd. GIF TYPE V) in the standard manner. Fragments of the gastric mucosa were obtained from the five sites recommended by the Houston-updated Sydney system for classification of gastritis and to evaluate the presence of spiral microorganism stained by Giemsa [16]. Two fragments from the lesser curvature of the gastric antrum and two from the lesser curvature of the lower gastric body were obtained for urease test. The activity of chronic gastritis was classified as mild, moderate and marked based on the number of neutrophil infiltration. The specimens were fixed in 10% formalin, embedded in paraffin wax, and 5-mm sections were stained with hematoxylin and eosin for histology and with Giemsa staining to evaluate H. pylori status. Exclusion criteria included age below 18 years old or above 80 years old, other serious medical problems, or previous treatment for H. pylori infection. H. pylori status was determined by the rapid urease test and histology (Giemsa staining) and was considered negative when both tests were negative. [SUBTITLE] Statistical Analysis [SUBSECTION] Data were analyzed using the software SPSS (version 10.0, Chicago, IL). Chi square test with Yates' correction or Fisher's exact test were used to compare results among the different groups. Significance was accepted at P values below 0.05.
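The statistical methods above name two tests, the chi-square test with Yates' correction and Fisher's exact test, but do not say how the choice was made for a given table. The helper below sketches one conventional rule, switching to Fisher's exact test when any expected cell count falls below 5; both that rule and the example counts are assumptions for illustration, not values taken from the paper.

    from scipy.stats import chi2_contingency, fisher_exact

    def compare_2x2(table, min_expected=5):
        """Return (test name, p value) for a 2x2 frequency table."""
        chi2, p_chi2, dof, expected = chi2_contingency(table, correction=True)
        if expected.min() >= min_expected:                # expected counts adequate
            return "chi-square (Yates)", p_chi2
        _, p_exact = fisher_exact(table)                  # sparse table: exact test
        return "Fisher's exact", p_exact

    # Hypothetical counts only, e.g. H. pylori status vs. some binary covariate.
    print(compare_2x2([[30, 12], [40, 31]]))
    print(compare_2x2([[3, 2], [1, 9]]))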
null
null
null
null
[ "Background", "Upper gastrointestinal endoscopy", "Statistical Analysis", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Helicobacter pylori infection is the major etiologic factor of chronic gastritis and peptic ulcer in the general population. Gastrointestinal (GI) symptoms are frequent among patients infected with human immunodeficiency virus (HIV) and with acquired immunodeficiency syndrome (AIDS) [1,2] However, the role of H. pylori infection in the GI tract mucosa of HIV patients is not well defined [3]. Some studies suggested that interactions between the immune/inflammatory response, gastric physiology and host repair mechanisms play an important role in dictating the disease outcome in response to H. pylori infection, suggesting that the host's immune competence might be an important issue in H. pylori infection [4,5].\nData in regard to the prevalence of H. pylori infection in HIV-infected population are controversial. Some reports have shown that the rate of the infection in HIV-positive patients is remarkably low when compared with the general population [6,7]. Conversely, other studies have not found similar results [8-10].\nIt is well known that the immune deficiencies caused by HIV give rise to many different gastrointestinal opportunistic infections, such as cytomegalovirus (CMV) infection and fungal esophagitis [11,12]. However, there are few studies evaluating the gastric mucosa of patients co-infected by H. pylori and HIV [13-15].\nTherefore, the aim of this study was to evaluate the prevalence of H. pylori infection, risk factors associated with the infection, as well as the macroscopic and microscopic alterations of the gastric mucosa of HIV-infected patients in a high H. pylori prevalence area in Northeastern, Brazil.", "Gastro-endoscopy was performed with Olympus video endoscopes (Olympus Optical Co, Ltd. GIF TYPE V) in the standard manner. Fragments of the gastric mucosa were obtained from the five sites recommended by the Houston-updated Sydney system for classification of gastritis and to evaluate the presence of spiral microorganism stained by Giemsa [16]. Two fragments from the lesser curvature of the gastric antrum and two from the lesser curvature of the lower gastric body were obtained for urease test. The activity of chronic gastritis was classified as mild, moderate and marked based on the number of neutrophil infiltration. The specimens were fixed in 10% formalin, embedded in paraffin wax, and 5-mm sections were stained with hematoxylin and eosin for histology and with Giemsa staining to evaluate H. pylori status.\nExclusion criteria included age below 18 years old or above 80 years old, other serious medical problems, or previous treatment for H. pylori infection. H. pylori status was determined by the rapid urease test and histology (Giemsa staining) and was considered negative when both tests were negative.", "Data were analyzed using the software SPSS (version 10.0, Chicago, IL). Chi square test with Yates' correction or Fischer's exact test were used to compare results among the different groups. Significance was accepted at P values below 0.05.", "Two hundred and fifty four subjects were included: 113 HIV-infected patients and 141 age-matched controls. The mean age of HIV infected patients was 36.0 years (range, 21-70 years) and 61.9% (70/113) were male. The mean age of the control group was 39.7 years (range 18-76 years) and 36.2% (51/141) were male. Most of the symptoms of HIV-positive patients were nonspecific, such as diarrhea, dyspepsia, abdominal pain, nausea, vomiting, odynophagia or dysphagia. 
The frequency of diarrhea, odynophagia, and dysphagia was significantly higher in the HIV-positive group compared with the controls (P < 0.05).\nMacroscopic lesions in the HIV-infected group included widespread esophageal candidiasis (32.7%; 37/113), esophageal ulcers (7.9%; 9/113) and candidiasis plus esophageal ulcers (1.7%; 2/113). Cryptosporidium was found in the gastric mucosa of two HIV-infected patients. Table 1 shows the endoscopic gastric mucosal findings in HIV-positive and HIV-negative dyspeptic patients. Corpus gastritis was significantly more frequently observed in the dyspeptic HIV-positive than in HIV-negative patients.\nEndoscopic findings of the gastroduodenal mucosa of dyspeptic HIV-positive and negative patients\nThe overall prevalence of H. pylori infection was significantly lower (p < 0.001) in HIV-infected patients (37.2%; 42/113) when compared with the controls (75.2%; 106/141), and did not increase with age (p = 0.73). Of note, the infection prevalence in the oldest group did not differ between HIV-positive and HIV-negative patients. The prevalence of H. pylori infection in the HIV-positive patients and controls according to age is shown in Figure 1.\nHelicobacter pylori infection in HIV-positive and -negative patients according to age.\nIn the HIV-positive group, there was no significant difference between H. pylori status and gender, age, HIV viral load, antiretroviral therapy and the use of antibiotics and H2-blockers. Only 4 patients reported the use of proton pump inhibitors (PPI). A non-significant lower prevalence of H. pylori infection was observed in the patients with T CD4 cell count below 200 (Table 2).\nCovariates associated with H. pylori infection in HIV-positive patients\nThe gastric mucosa histological results are shown in Table 3. Chronic active antral gastritis was observed in 87.6% (99/113) of the HIV-infected patients and in 80.1% (113/141; p = 0.11) of the control group. H. pylori infection was significantly associated with the presence of chronic active antral gastritis in both groups (p = 0.03 and p < 0.001, respectively). No significant difference (p = 0.89) was observed between the groups with respect to the frequency of chronic active corpus gastritis (53.1% in the HIV-positive patients and 53.9% in the HIV-negative patients). However, H. pylori infection was not associated with chronic active corpus gastritis in the HIV-positive patients (p = 0.15), whereas a strong association was observed in the HIV-negative ones (p < 0.001). Additionally, in the HIV-negative group, the degree of gastritis was also associated with H. pylori infection, with the microorganism more frequently observed in marked (50%, 40/80) than in moderate (10%, 2/20) gastritis.\nAtrophy/intestinal metaplasia was observed less frequently in the gastric corpus of HIV-positive (6.2%, 7/113) than in the gastric corpus of HIV-negative (9.9%, 14/141) patients, but the differences were not significant (p = 0.27).\nAssociation between the frequency of H. pylori infection and antral and corpus active gastritis in HIV-infected patients and controls", "The prevalence of H. pylori infection was lower in the HIV-positive group than in the age-matched controls. The low prevalence of H. pylori infection we observed in the HIV-positive patients differs profoundly from that previously reported (82.0%) in HIV-negative adults from a poor urban community in the same city (Fortaleza, Brazil) [17].
A similar result has been observed in a cross-sectional study in Southeastern Brazil that evaluated the prevalence of H. pylori infection in 528 HIV-infected patients (32.38% of H. pylori positivity) [18]. Studies from Eastern countries where H. pylori infection is highly prevalent, such as Taiwan [19] and China [20], also demonstrated a lower H. pylori infection prevalence (17.3% and 22.1%, respectively) in HIV-infected than in non-infected (63.5% and 44.8%, respectively) patients. Conversely, studies from Argentina and from India showed similar H. pylori infection prevalence in HIV-infected and non-infected patients [13,21,22].\nIt has to be emphasized that H. pylori infection was diagnosed by histology and urease test in all patients. The results were concordant with those obtained by the evaluation of the H. pylori-specific ureA gene in the paraffin-embedded gastric tissue from a subgroup of patients (data not shown) and both HIV-positive and -negative patients belong to the same low-income population. As mentioned above, the prevalence of H. pylori infection in a similar population from Fortaleza with respect to socio-economic level is high [17]. It is well known that H. pylori infection is mainly acquired during childhood and that once acquired it is life-long lasting [1]. Therefore, we may hypothesize that the HIV-infected patients we studied had been exposed to the bacterium early in life and most of them became infected, but lost the infection after acquiring HIV infection. Alternatively, the H. pylori gastric load might be decreased in the HIV-positive patients, leading to H. pylori infection misdiagnosis. Explanations include decreased gastric acid secretion predisposing to gastric colonization by other microorganisms that might compete with H. pylori, the use of either antibiotics or PPI and, as suggested in other studies, the low count of T CD4 cells in AIDS patients [6,21,23,24].\nIt has been suggested that T CD4 cells play a role in inducing or perpetuating tissue and epithelial damage that may facilitate H. pylori colonization [25]. In this study, HIV-positive patients were stratified according to the T CD4 cell counts above or below 200 cells/mm3, and a tendency toward lower prevalence of H. pylori infection was observed in the group of patients with T CD4 cell count of 200 or below.\nHypochlorhydria has been described in HIV-positive patients [23]. Previous studies have shown that HIV-positive patients with overt AIDS have significantly increased serum levels of gastrin and pepsinogen II compared with HIV-positive patients without overt AIDS [26]. Hypochlorhydria may provide a less suitable environment for H. pylori and predispose to overgrowth of other bacteria [27]. Inhibition of H. pylori by competition with other opportunistic pathogens such as Cytomegalovirus via unknown mechanisms has also been suggested [23,28]. The intragastric environment may also be modified by previous use of PPI. In this study, however, only four HIV-positive patients were under PPI therapy. The frequent use of antibiotics for treatment or prophylaxis against opportunistic infections in patients at an advanced stage of HIV infection might explain the low prevalence of H. pylori infection in the patient group. However, the antibiotics most commonly used in AIDS patients are not always efficacious against H. pylori. Furthermore, a low H. pylori eradication rate is observed with the use of monotherapy, even with clarithromycin, which has good anti-H.
pylori activity [29].\nAn interesting finding observed in this study was the presence of active chronic gastritis in the gastric body of HIV-positive patients independently of H. pylori positivity, in agreement with the studies of Welage et al., Marano et al., and Mach et al. [23,30,31], which, however, was not observed by others [6,32]. On the other hand, in this study, H. pylori status was significantly associated with the presence of active chronic gastritis in the antral gastric mucosa of HIV-positive and -negative patients. Taking the data together, it is possible that different mechanisms participate in the development of corpus chronic active gastritis in HIV-positive patients. Therefore, other microorganisms such as Cytomegalovirus or some drugs used to treat AIDS and to prevent opportunistic infections may play a role [18,33].", "Although the prevalence of H. pylori infection in HIV-positive patients was lower than in HIV-negative ones, the presence of chronic active gastritis was similarly high in both HIV-positive and -negative patients, which points to the possibility that mechanisms other than H. pylori infection are involved in the genesis of corpus gastritis in HIV-positive patients.", "The authors declare that they have no competing interests.", "EG: participated in the conception, performed the endoscopies, and helped write the manuscript. MB and ABN participated in the statistical analysis, interpretation and critical writing of the manuscript. AMN: participated in implementation of the study, data collection, database management and statistical analysis. CT and CS: participated in design and implementation of the study. KC, JS and IS: participated in implementation of the study and data collection. IM: statistical analysis, interpretation and writing the manuscript. DQ: performed critical writing and reviewing. LB: participated in conception, design, implementation, coordination of the study and critical writing and reviewing. All authors have read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-230X/11/13/prepub\n" ]
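Figure 1 of this record plots H. pylori prevalence by age group separately for HIV-positive and HIV-negative patients. The per-age-group counts are not reported in the text, so the pandas snippet below only illustrates how such a tabulation could be built from per-patient records; every value in it is invented.

    import pandas as pd

    # Invented per-patient records (age, HIV status, H. pylori status).
    df = pd.DataFrame({
        "age": [24, 31, 38, 45, 52, 61, 29, 35, 44, 58, 67, 73],
        "hiv_positive": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
        "h_pylori": [0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1],
    })

    df["age_group"] = pd.cut(df["age"], bins=[18, 30, 40, 50, 60, 80],
                             labels=["18-30", "31-40", "41-50", "51-60", "61-80"])

    # Prevalence of H. pylori by age group and HIV status, as plotted in Figure 1.
    prevalence = (df.groupby(["age_group", "hiv_positive"], observed=True)["h_pylori"]
                    .mean()
                    .unstack("hiv_positive"))
    print(prevalence.round(2))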
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Upper gastrointestinal endoscopy", "Statistical Analysis", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Helicobacter pylori infection is the major etiologic factor of chronic gastritis and peptic ulcer in the general population. Gastrointestinal (GI) symptoms are frequent among patients infected with human immunodeficiency virus (HIV) and with acquired immunodeficiency syndrome (AIDS) [1,2] However, the role of H. pylori infection in the GI tract mucosa of HIV patients is not well defined [3]. Some studies suggested that interactions between the immune/inflammatory response, gastric physiology and host repair mechanisms play an important role in dictating the disease outcome in response to H. pylori infection, suggesting that the host's immune competence might be an important issue in H. pylori infection [4,5].\nData in regard to the prevalence of H. pylori infection in HIV-infected population are controversial. Some reports have shown that the rate of the infection in HIV-positive patients is remarkably low when compared with the general population [6,7]. Conversely, other studies have not found similar results [8-10].\nIt is well known that the immune deficiencies caused by HIV give rise to many different gastrointestinal opportunistic infections, such as cytomegalovirus (CMV) infection and fungal esophagitis [11,12]. However, there are few studies evaluating the gastric mucosa of patients co-infected by H. pylori and HIV [13-15].\nTherefore, the aim of this study was to evaluate the prevalence of H. pylori infection, risk factors associated with the infection, as well as the macroscopic and microscopic alterations of the gastric mucosa of HIV-infected patients in a high H. pylori prevalence area in Northeastern, Brazil.", "The study was approved by the Ethical Committee of Research of the University of Ceará, and informed consent was obtained from each patient. This prospective cross-sectional study was carried out at the Hospital São José, a major referral center for assistance of HIV-infected individuals in the city of Fortaleza, Ceará, Brazil. From May 2001 to April 2003, 113 HIV-positive patients who underwent upper gastrointestinal endoscopy for dyspeptic symptoms were included in the study. The control group was composed by 141 HIV-negative patients who were undergoing upper gastrointestinal endoscopy for investigation of dyspeptic symptoms at the University Hospital Walter Cantideo, Fortaleza, Ceara, Brazil. Patients and age matched controls (interval of 10 years) were enrolled at the same period. All patients gave written informed consent to participate in the study and answered a questionnaire about symptoms and consumption of medications, including acid secretion inhibitors and antibiotics six months before endoscopy. In the HIV-positive patient group, data regarding the risk factors for HIV infection and antiretroviral therapy were also obtained. Total T CD4 cell count and HIV viral load were accepted as valid if the blood sample for their determination had been taken within 1 month before or after the entrance in the study.\n[SUBTITLE] Upper gastrointestinal endoscopy [SUBSECTION] Gastro-endoscopy was performed with Olympus video endoscopes (Olympus Optical Co, Ltd. GIF TYPE V) in the standard manner. Fragments of the gastric mucosa were obtained from the five sites recommended by the Houston-updated Sydney system for classification of gastritis and to evaluate the presence of spiral microorganism stained by Giemsa [16]. Two fragments from the lesser curvature of the gastric antrum and two from the lesser curvature of the lower gastric body were obtained for urease test. 
The activity of chronic gastritis was classified as mild, moderate or marked based on the degree of neutrophil infiltration. The specimens were fixed in 10% formalin, embedded in paraffin wax, and 5-μm sections were stained with hematoxylin and eosin for histology and with Giemsa staining to evaluate H. pylori status.\nExclusion criteria included age below 18 years or above 80 years, other serious medical problems, or previous treatment for H. pylori infection. H. pylori status was determined by the rapid urease test and histology (Giemsa staining) and was considered negative when both tests were negative.\n[SUBTITLE] Statistical Analysis [SUBSECTION] Data were analyzed using the software SPSS (version 10.0, Chicago, IL). The chi-square test with Yates' correction or Fisher's exact test was used to compare results among the different groups. Significance was accepted at P values below 0.05.", "Gastro-endoscopy was performed with Olympus video endoscopes (Olympus Optical Co, Ltd. GIF TYPE V) in the standard manner. Fragments of the gastric mucosa were obtained from the five sites recommended by the Houston-updated Sydney system for the classification of gastritis and for evaluation of the presence of spiral microorganisms by Giemsa staining [16]. Two fragments from the lesser curvature of the gastric antrum and two from the lesser curvature of the lower gastric body were obtained for the urease test. The activity of chronic gastritis was classified as mild, moderate or marked based on the degree of neutrophil infiltration. The specimens were fixed in 10% formalin, embedded in paraffin wax, and 5-μm sections were stained with hematoxylin and eosin for histology and with Giemsa staining to evaluate H. pylori status.\nExclusion criteria included age below 18 years or above 80 years, other serious medical problems, or previous treatment for H. pylori infection. H. pylori status was determined by the rapid urease test and histology (Giemsa staining) and was considered negative when both tests were negative.", "Data were analyzed using the software SPSS (version 10.0, Chicago, IL). The chi-square test with Yates' correction or Fisher's exact test was used to compare results among the different groups. Significance was accepted at P values below 0.05.", "Two hundred and fifty-four subjects were included: 113 HIV-infected patients and 141 age-matched controls. The mean age of the HIV-infected patients was 36.0 years (range, 21-70 years) and 61.9% (70/113) were male. The mean age of the control group was 39.7 years (range 18-76 years) and 36.2% (51/141) were male. Most of the symptoms of the HIV-positive patients were nonspecific, such as diarrhea, dyspepsia, abdominal pain, nausea, vomiting, odynophagia or dysphagia. The frequency of diarrhea, odynophagia, and dysphagia was significantly higher in the HIV-positive group compared with the controls (P < 0.05).\nMacroscopic lesions in the HIV-infected group included widespread esophageal candidiasis (32.7%; 37/113), esophageal ulcers (7.9%; 9/113) and candidiasis plus esophageal ulcers (1.7%; 2/113). Cryptosporidium was found in the gastric mucosa of two HIV-infected patients. Table 1 shows the endoscopic gastric mucosal findings in HIV-positive and HIV-negative dyspeptic patients. Corpus gastritis was significantly more frequently observed in the dyspeptic HIV-positive than in the HIV-negative patients.\nEndoscopic findings of the gastroduodenal mucosa of dyspeptic HIV-positive and negative patients\nThe overall prevalence of H. pylori infection was significantly lower (p < 0.001) in HIV-infected patients (37.2%; 42/113) than in the controls (75.2%; 106/141), and did not increase with age (p = 0.73). Of note, the infection prevalence in the oldest group did not differ between HIV-positive and HIV-negative patients. The prevalence of H. pylori infection in the HIV-positive patients and controls according to age is shown in Figure 1.\nHelicobacter pylori infection in HIV-positive and -negative patients according to age.\nIn the HIV-positive group, there was no significant association between H. pylori status and gender, age, HIV viral load, antiretroviral therapy or the use of antibiotics and H2-blockers. Only 4 patients reported the use of proton pump inhibitors (PPIs). A non-significantly lower prevalence of H. pylori infection was observed in the patients with T CD4 cell counts below 200 (Table 2).\nCovariates associated with H. pylori infection in HIV-positive patients\nThe gastric mucosa histological results are shown in Table 3. Chronic active antral gastritis was observed in 87.6% (99/113) of the HIV-infected patients and in 80.1% (113/141; p = 0.11) of the control group. H. pylori infection was significantly associated with the presence of chronic active antral gastritis in both groups (p = 0.03 and p < 0.001, respectively). Similarly, no significant difference (p = 0.89) was observed between the groups with respect to the frequency of chronic active corpus gastritis (53.1% in the HIV-positive patients and 53.9% in the HIV-negative patients). However, H. pylori infection was not associated with chronic active corpus gastritis in the HIV-positive patients (p = 0.15), whereas a strong association was observed in the HIV-negative ones (p < 0.001). Additionally, in the HIV-negative group, the degree of gastritis was also associated with H. pylori infection, the microorganism being more frequently observed in marked (50%, 40/80) than in moderate (10%, 2/20) gastritis.
Atrophy/intestinal metaplasia was observed less frequently in the gastric corpus of HIV-positive (6.2%, 7/113) than in the gastric corpus of HIV-negative (9.9%, 14/141) patients, but the difference was not significant (p = 0.27).\nAssociation between the frequency of H. pylori infection and antral and corpus active gastritis in HIV-infected patients and controls", "The prevalence of H. pylori infection was lower in the HIV-positive group than in the age-matched controls. The low prevalence of H. pylori infection we observed in the HIV-positive patients differs profoundly from that previously reported (82.0%) in HIV-negative adults from a poor urban community in the same city (Fortaleza, Brazil) [17]. A similar result has been observed in a cross-sectional study in Southeastern Brazil that evaluated the prevalence of H. pylori infection in 528 HIV-infected patients (32.38% H. pylori positivity) [18]. Studies from East Asian countries where H. pylori infection is highly prevalent, such as Taiwan [19] and China [20], also demonstrated a lower prevalence of H. pylori infection in HIV-infected (17.3% and 22.1%, respectively) than in non-infected (63.5% and 44.8%, respectively) patients. Conversely, studies from Argentina and from India showed a similar H. pylori infection prevalence in HIV-infected and non-infected patients [13,21,22].\nIt has to be emphasized that H. pylori infection was diagnosed by histology and the urease test in all patients. The results were concordant with those obtained by evaluation of the H. pylori-specific ureA gene in paraffin-embedded gastric tissue from a subgroup of patients (data not shown), and both HIV-positive and -negative patients belonged to the same low-income population. As mentioned above, the prevalence of H. pylori infection in a population from Fortaleza with a similar socio-economic level is high [17]. It is well known that H. pylori infection is mainly acquired during childhood and that, once acquired, it persists for life [1]. Therefore, we may hypothesize that the HIV-infected patients we studied had been exposed to the bacterium early in life and that most of them became infected, but lost the infection after acquiring HIV infection. Alternatively, the H. pylori gastric load might be decreased in the HIV-positive patients, leading to misdiagnosis of H. pylori infection. Explanations include decreased gastric acid secretion predisposing to gastric colonization by other microorganisms that might compete with H. pylori, the use of either antibiotics or PPIs and, as suggested in other studies, the low T CD4 cell count in AIDS patients [6,21,23,24].\nIt has been suggested that T CD4 cells play a role in inducing or perpetuating tissue and epithelial damage that may facilitate H. pylori colonization [25]. In this study, HIV-positive patients were stratified according to T CD4 cell counts above or below 200 cells/mm3, and a tendency towards a lower prevalence of H. pylori infection was observed in the group of patients with T CD4 cell counts of 200 or below.\nHypochlorhydria has been described in HIV-positive patients [23]. Previous studies have shown that HIV-positive patients with overt AIDS have significantly increased serum levels of gastrin and pepsinogen II compared with HIV-positive patients without overt AIDS [26]. Hypochlorhydria may provide a less suitable environment for H. pylori and predispose to overgrowth of other bacteria [27]. Inhibition of H. pylori by competition with other opportunistic pathogens such as Cytomegalovirus, via unknown mechanisms, has also been suggested [23,28]. The intragastric environment may also be modified by previous use of PPIs. In this study, however, only four HIV-positive patients were on PPI therapy. The frequent use of antibiotics for treatment of or prophylaxis against opportunistic infections in patients at an advanced stage of HIV infection might explain the low prevalence of H. pylori infection in the patient group. However, the antibiotics most commonly used in AIDS patients are not always efficacious against H. pylori. Furthermore, low H. pylori eradication rates are observed with monotherapy, even with clarithromycin, which has good anti-H. pylori activity [29].\nAn interesting finding in this study was the presence of chronic active gastritis in the gastric body of HIV-positive patients independently of H. pylori positivity, in agreement with the studies of Welage et al., Marano et al. and Mach et al. [23,30,31], although this was not observed by others [6,32]. In contrast, H. pylori status in this study was significantly associated with the presence of chronic active gastritis in the antral gastric mucosa of both HIV-positive and -negative patients. Taken together, these data suggest that different mechanisms may participate in the development of chronic active corpus gastritis in HIV-positive patients. Therefore, other microorganisms such as Cytomegalovirus, or some drugs used to treat AIDS and to prevent opportunistic infections, may play a role [18,33].", "Although the prevalence of H. pylori infection in HIV-positive patients was lower than in HIV-negative ones, the presence of chronic active gastritis was similarly high in both HIV-positive and HIV-negative patients, which points to the possibility that mechanisms other than H. pylori infection are involved in the genesis of corpus gastritis in HIV-positive patients.", "The authors declare that they have no competing interests.", "EG: participated in the conception, performed the endoscopies, and helped write the manuscript. MB and ABN: participated in the statistical analysis, interpretation and critical writing of the manuscript. AMN: participated in implementation of the study, data collection, database management and statistical analysis. CT and CS: participated in design and implementation of the study. KC, JS and IS: participated in implementation of the study and data collection. IM: statistical analysis, interpretation and writing of the manuscript. DQ: performed critical writing and reviewing. LB: participated in conception, design, implementation and coordination of the study, and in critical writing and reviewing. All authors have read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-230X/11/13/prepub\n" ]
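The statistical comparison described in this record (a chi-square test with Yates' correction or Fisher's exact test on 2x2 tables, with significance at P < 0.05) can be illustrated directly on the reported prevalence counts: 42/113 H. pylori-positive among HIV-infected patients versus 106/141 among controls. The following is a minimal sketch assuming Python with SciPy is available; it only re-computes the 2x2 comparison for illustration and is not the authors' original SPSS analysis.

# Illustrative re-computation of the 2x2 prevalence comparison reported above.
# Counts are taken from the record (42/113 vs 106/141); this is a sketch, not
# the original SPSS workflow described in the Statistical Analysis subsection.
from scipy.stats import chi2_contingency, fisher_exact

# Rows: HIV-positive patients, HIV-negative controls
# Columns: H. pylori positive, H. pylori negative
table = [[42, 113 - 42],
         [106, 141 - 106]]

chi2, p_chi2, dof, expected = chi2_contingency(table, correction=True)  # Yates' continuity correction
odds_ratio, p_fisher = fisher_exact(table)

print(f"Chi-square (Yates) = {chi2:.1f}, p = {p_chi2:.2g}")
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_fisher:.2g}")
# Both p-values fall far below 0.05, consistent with the reported p < 0.001
# for the lower H. pylori prevalence in the HIV-positive group.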
[ null, "methods", null, null, null, null, null, null, null, null ]
[]
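The records in this listing store their per-section fields as parallel lists: a list of section titles, a list of section texts, a sparse list of section types (mostly null, e.g. the [ null, "methods", null, ... ] entry above), and a possibly empty keyword list. Below is a minimal sketch of pairing these lists when consuming a record; the dictionary keys used here (section_titles, section_texts, section_types) are assumptions chosen for illustration and should be replaced by whatever the actual record schema uses.

# Minimal sketch for iterating over the parallel per-section lists of a record.
# The dictionary keys are illustrative assumptions, not confirmed field names.
from typing import Iterator, Optional, Tuple

def iter_sections(record: dict) -> Iterator[Tuple[Optional[str], str, Optional[str]]]:
    titles = record.get("section_titles") or []
    texts = record.get("section_texts") or []
    types = record.get("section_types") or [None] * len(texts)
    for title, text, sec_type in zip(titles, texts, types):
        if text is None:  # records may carry null placeholders for missing sections
            continue
        yield title, text, sec_type  # sec_type is usually null except entries such as "methods"

# Toy record mirroring the structure shown above:
record = {
    "section_titles": ["Background", "Methods"],
    "section_texts": ["Background text ...", "Methods text ..."],
    "section_types": [None, "methods"],
}
for title, text, sec_type in iter_sections(record):
    print(title, "->", sec_type)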
General practitioners' and district nurses' conceptions of the encounter with obese patients in primary health care.
21333018
Primary health care specialists have a key role in the management of obesity. Understanding how they conceive of the encounter with patients with obesity may help to improve treatment. The aim of this study was thus to explore general practitioners' and district nurses' conceptions of encountering patients with obesity in primary health care.
BACKGROUND
Data were collected through semi-structured interviews, and analysed using a phenomenographic approach. The participants were 10 general practitioners (6 women, 4 men) and 10 district nurses (7 women, 3 men) from 19 primary health care centres within a well-defined area of Sweden.
METHOD
Five descriptive categories were identified: Adequate primary health care, Promoting lifestyle change, Need for competency, Adherence to new habits and Understanding patient attitudes. All participants, independent of gender and profession, were represented in the descriptive categories. Some profession and gender differences were, however, found in the underlying conceptions. The general staff view was that obesity had to be prioritised. However, there was also the contradictory view that obesity is not a disease and therefore not the responsibility of primary health care. Despite this, staff conceived it as important that patients were met with respect and that individual solutions were provided which could be adhered to step-by-step by the patient. Patient attitudes, such as motivation to change, evasive behaviour, too much trust in care and lack of self-confidence, were, however, conceived as major barriers to a fruitful encounter.
RESULTS
Findings from this study indicate that there is a need for development and organisation of weight management in primary health care. Raising awareness of staff's negative views of patient attitudes is important since it is likely that it affects the patient-staff relationship and staff's treatment efforts. More research is also needed on gender and profession differences in this area.
CONCLUSIONS
[ "Adult", "Attitude of Health Personnel", "Female", "General Practitioners", "Humans", "Interviews as Topic", "Male", "Middle Aged", "Nurse-Patient Relations", "Nurses", "Obesity", "Perception", "Physician-Patient Relations", "Primary Health Care", "Sweden", "Weight Gain" ]
3050702
null
null
Methods
We used a phenomenographic approach to identify and describe the various ways in which a specific phenomenon is conceived [16], in this case primary health care specialists' conceptions of obesity and obesity treatment. This qualitative approach has been used in health care research to study, for example, doctors' or nurses' conceptions of treatments for different diseases [17]. The phenomenographic tradition acknowledges two research perspectives: there is a first-order perspective which concerns how the world actually is, and a second-order perspective which concerns how the world is experienced. In phenomenography, the second-order perspective is essential [16]. The assumption is that there is only one world, but that it can be experienced in qualitatively different ways. The approach comes from educational research and entails adopting a content-related perspective to characterise, understand and describe the qualitatively different ways in which people make sense of the world around them. The conceptions are derived from individual interviews, but the analyses emanate in descriptive categories at a collective level [16]. [SUBTITLE] Participants [SUBSECTION] Eligible for participation in the study were staff from 57 primary health care centres in a well-defined area in Sweden. The criteria for inclusion were being a DN or GP with a specialist education, speaking Swedish fluently, and acceptance of tape-recording. A strategic selection of participants was made to obtain variety regarding age, gender and primary health care experience. This would provide a range of conceptions from DNs and GPs. The heads of the primary health care centre in question had to give their consent for staff to participate. Therefore we first contacted the heads by e-mail with an information letter, then by telephone approximately a week later. If the head gave permission, the informant in question was approached by either e-mail or telephone. We gave the informants the same information as their head. We included 20 participants at 19 primary health care centres (Table 1). Two DNs came from the same centre. The participants' median age was 51 years, and their median professional experience 10.5 years. Of the 20 informants, 10 were recommended by their medical head, 3 were medical heads themselves, working as GPs, and 7 were contacted from staff lists. Four of the 10 DNs were specialists in diabetes or weight management. Among the GPs, 3 were involved in some kind of weight management strategies. Demographic characteristics of the primary health care professionals (n = 20). The medical heads at 16 primary health care centres refused to grant permission for participation, largely because of re-organisation, work overload or shortage of staff, although some were simply unwilling. We could not reach the medical heads at 10 centres. At 12 centres the medical head gave approval for the study but the DNs and GPs declined because of work overload, unwillingness or lack of financial compensation; some could not be reached. The Regional Ethical Review Board, Karolinska Institute, Stockholm, Sweden, approved the study (reference number 2006/7-31/1). [SUBTITLE] Data collection [SUBSECTION] Two of us (LMH, GA) developed an interview guide with open-ended questions. The main questions were: What are your experiences of meeting patients with obesity? How do you conceive the life situation of patients with obesity? How do you think care is working for patients with obesity? Individual interviews, which took the form of conversations, were performed by one of us (LMH). Follow-up questions were asked depending on how comprehensively the informant answered the main questions. Examples of follow-up questions were: "What do you mean?" "Can you explain?" "Can you tell me more?" "Is there anything you would like to add?" Participants chose where the interview should take place. Two of the 20 participants preferred to be interviewed at the interviewer's research department, while the remaining interviews were performed at the workplace. The tape-recorded interviews (30-80 minutes) were transcribed verbatim. [SUBTITLE] Data analysis [SUBSECTION] We carried out the analysis in four steps in accordance with the phenomenographic approach [18,19]. At the first step, the interviews were listened to again to verify that they had been transcribed correctly. The individual transcripts were then read through several times to get an overall impression, and thereafter statements relevant to the study were identified. At the second step, the aim was to identify distinct ways of conceiving obesity treatment, which involved constantly comparing and labelling the statements. The labelled statements were then treated as preliminary conceptions. At the third step the conceptions were compared with one another and grouped to obtain an overall picture of possible links between them. The grouped conceptions formed preliminary descriptive categories to which names were given. At the fourth step, the focus shifted from relations between conceptions to relations between the preliminary descriptive categories. Finally, the descriptive categories were critically investigated to ensure that they properly represented the conceptions.
null
null
null
null
[ "Background", "Participants", "Data collection", "Data analysis", "Findings", "Adequate primary health care", "Overweight needs to be prioritised", "Lack of distinct guidelines and evidence", "Overweight not our responsibility", "Continuity and long-term support", "Co-operate for knowledge-based care", "Promoting lifestyle change", "Small steps and realistic goals", "Raise awareness", "Individually based solutions", "Facilitate motivation", "Need for competency", "Respectful encounters", "Staff with active interest", "Knowledge about diet and counselling", "Adherence to new habits", "Overcome deep-seated habits", "Psychological and medical barriers", "Socio-cultural barriers", "Understanding patient attitudes", "Motivation to change", "Evasive behaviour", "Trusting in care", "Lack of self-confidence", "Discussion", "Limitations and Strengths", "Implications for Clinical Practice and Future Research", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The number of obese adults in Sweden has doubled over a 20-year period, and about 10% of both women and men are estimated to be obese [1]. Recent data show that the prevalence of obesity in a primary care setting is 20-24% [2]. In Sweden, teams of general practitioners (GPs) and nurses, especially district nurses (DNs), are on the front line of the encounter between health care personnel and people with obesity or other lifestyle-related diseases [3]. Consequently, these professional groups must, to a greater extent than before, assist patients with weight management. The trend in health care is towards a more patient-centred approach, empowering the patients themselves to take responsibility for their health [4]. Accordingly, one of the tasks of GPs and DNs is to support patients in making healthy decisions about their daily living habits.\nPrimary health care is recognised as an important resource in co-ordinating the treatment of obese patients [5]. Previous studies also reveal that GPs and nurses acknowledge that primary care has an important role in managing obesity [6,7]. Other studies, however, show that GPs do not regard obesity management as their responsibility, but rather that of the patient [8].\nA significant proportion of GPs and nurses in primary health care consider themselves insufficiently skilled or prepared to treat patients with obesity [6,7]. Both GPs and nurses regard themselves as ineffective in their work with obese patients [7,9], and a majority of GPs consider the possibility of a patient's succeeding with weight loss as limited [6,9]. Studies have also shown that medical staff attribute the failure to lose weight to personal, patient-related factors rather than to professional ones [8,10]. Several other studies support this notion, since they show that GPs and nurses perceive obese patients as having low motivation, lacking willpower, being unwilling to change lifestyle [11,12] and non-compliant to advice [10]. However, there are studies supporting the presence of system-level barriers, such as lack of time during patient visits [12], lack of places to which to refer patients, and lack of patient-oriented educational materials [13].\nThe seemingly negative beliefs about obesity and obesity treatment which have been documented, especially in quantitative studies, might be over-simplified, and it is likely that GPs and nurses hold more complex views. Qualitative studies of GPs and nurses in primary health care support this idea to some extent [8,12,14]. Nurses perceive weight as a sensitive issue to discuss with patients and therefore try to achieve a balance between factors involving personal responsibility and factors outside the individual's control [14]. Mercer and Tessier found that GPs were negative about their role in obesity treatment, while practice nurses, although accepting it as part of their job, experienced that GPs \"off-loaded\" patients on to them [12]. Epstein and Ogden found that GPs were sceptical about the success of available treatments, but still offered anti-obesity drugs and tried to listen to and understand the patient's problem [8].\nReviewing the literature on existing qualitative studies, including those of nurses and GPs in primary health care, it becomes apparent that there is a need to know more about how they conceive their encounters with patients with obesity. 
Research has hitherto been limited in that no male nurses have been included, despite its having been demonstrated that gender might be important regarding attitudes and practices within obesity management [9,15]. This emphasises the need for further research. Against this background, the aim of the current study was to describe how GPs and DNs, both male and female, conceive their encounters with obesity in primary health care.", "Eligible for participation in the study were staff from 57 primary health care centres in a well-defined area in Sweden. The criteria for inclusion were being a DN or GP with a specialist education, speaking Swedish fluently, and acceptance of tape-recording. A strategic selection of participants was made to obtain variety regarding age, gender and primary health care experience. This would provide a range of conceptions from DNs and GPs. The heads of the primary health care centre in question had to give their consent for staff to participate. Therefore we first contacted the heads by e-mail with an information letter, then by telephone approximately a week later. If the head gave permission, the informant in question was approached by either e-mail or telephone. We gave the informants the same information as their head.\nWe included 20 participants at 19 primary health care centres (Table 1). Two DNs came from the same centre. The participants' median age was 51 years, and their median professional experience 10.5 years. Of the 20 informants, 10 were recommended by their medical head, 3 were medical heads themselves, working as GPs, and 7 were contacted from staff lists. Four of the 10 DNs were specialists in diabetes or weight management. Among the GPs, 3 were involved in some kind of weight management strategies.\nDemographic characteristics of the primary health care professionals (n = 20).\nThe medical heads at 16 primary health care centres refused to grant permission for participation, largely because of re-organisation, work overload or shortage of staff, although some were simply unwilling. We could not reach the medical heads at 10 centres. At 12 centres the medical head gave approval for the study but the DNs and GPs declined because of work overload, unwillingness or lack of financial compensation; some could not be reached. The Regional Ethical Review Board, Karolinska Institute, Stockholm, Sweden, approved the study (reference number 2006/7-31/1).", "Two of us (LMH, GA) developed an interview guide with open-ended questions. The main questions were: What are your experiences of meeting patients with obesity? How do you conceive the life situation of patients with obesity? How do you think care is working for patients with obesity? Individual interviews, which took the form of conversations, were performed by one of us (LMH). Follow-up questions were asked depending on how comprehensively the informant answered the main questions. Examples of follow-up questions were: \"What do you mean?\" \"Can you explain?\" \"Can you tell me more?\" \"Is there anything you would like to add?\" Participants chose where the interview should take place. Two of the 20 participants preferred to be interviewed at the interviewer's research department, while the remaining interviews were performed at the workplace. The tape-recorded interviews (30-80 minutes) were transcribed verbatim.", "We carried out the analysis in four steps in accordance with the phenomenographic approach [18,19]. 
At the first step, the interviews were listened to again to verify that they had been transcribed correctly. The individual transcripts were then read through several times to get an overall impression, and thereafter statements relevant to the study were identified. At the second step, the aim was to identify distinct ways of conceiving obesity treatment, which involved constantly comparing and labelling the statements. The labelled statements were then treated as preliminary conceptions. At the third step the conceptions were compared with one another and grouped to obtain an overall picture of possible links between them. The grouped conceptions formed preliminary descriptive categories to which names were given. At the fourth step, the focus shifted from relations between conceptions to relations between the preliminary descriptive categories. Finally, the descriptive categories were critically investigated to ensure that they properly represented the conceptions.", "There emerged five descriptive categories illustrating GPs' and DNs' conceptions of their encounters with obese patients in primary health care (Table 2). The conceptions in each descriptive category are illustrated by quotations from individual interviewees.\nStaff's conceptions of encounters with patients with obesity in primary health care.\nAll participants, independent of profession and gender, were represented with statements in each descriptive category. However, certain profession and gender differences were identified regarding the conceptions forming the descriptive categories. If more than two thirds of all the statements within a conception were made by one of the compared groups (gender or profession), this is reflected in the account of the findings.\n[SUBTITLE] Adequate primary health care [SUBSECTION] [SUBTITLE] Overweight needs to be prioritised [SUBSECTION] Staff regarded primary care as having an important role to play in both the prevention and treatment of overweight and obesity. However, there was a major barrier to working effectively in this area, namely lack of time. Sometimes staff avoided bringing up weight issues because they considered that they would not have time to deal properly with the problem later on. DNs often devoted more time per visit than GPs, but experienced that there were competing assignments hindering them from working more actively with these patients. Staff considered that GPs had much less time because more easily treated conditions were often prioritised. They regarded the system of reimbursing health care centres as in need of improvement.\n\"We have no resources and that's why we can't do anything. There's no time. We should actually devote more time to prevention, but it's all about diseases.\" (DN, female, 58 years old)\nStaff regarded primary care as having an important role to play in both the prevention and treatment of overweight and obesity. However, there was a major barrier to working effectively in this area, namely lack of time. Sometimes staff avoided bringing up weight issues because they considered that they would not have time to deal properly with the problem later on. DNs often devoted more time per visit than GPs, but experienced that there were competing assignments hindering them from working more actively with these patients. Staff considered that GPs had much less time because more easily treated conditions were often prioritised. 
They regarded the system of reimbursing health care centres as in need of improvement.\n\"We have no resources and that's why we can't do anything. There's no time. We should actually devote more time to prevention, but it's all about diseases.\" (DN, female, 58 years old)\n[SUBTITLE] Lack of distinct guidelines and evidence [SUBSECTION] The lack of guidelines and evidence within the area of obesity was more strongly experienced by the men than by the women. The view of male DNs, for instance, was that there are few treatment options to offer the patients. Also, the existing methods, except for surgical intervention, were considered to be ineffective and not evidence-based. Staff were especially eager for guidelines regarding dietary advice, which at present tended to be vague. Because of the many contradictions, different opinions and extensive debate about what were the most successful diet regimes, staff regarded it as difficult to offer balanced advice to patients. The diet pills and medical treatments available for losing weight were also considered ineffective. Staff might be prepared to recommend them but were not optimistic about the outcome.\n\"I feel I don't have much to offer medically. It's very general advice at best, and a recommendation to try Weight Watchers or something like that. For those who've tried everything, it might be a referral for surgical treatment.\" (GP, male, 38 years old)\nThe lack of guidelines and evidence within the area of obesity was more strongly experienced by the men than by the women. The view of male DNs, for instance, was that there are few treatment options to offer the patients. Also, the existing methods, except for surgical intervention, were considered to be ineffective and not evidence-based. Staff were especially eager for guidelines regarding dietary advice, which at present tended to be vague. Because of the many contradictions, different opinions and extensive debate about what were the most successful diet regimes, staff regarded it as difficult to offer balanced advice to patients. The diet pills and medical treatments available for losing weight were also considered ineffective. Staff might be prepared to recommend them but were not optimistic about the outcome.\n\"I feel I don't have much to offer medically. It's very general advice at best, and a recommendation to try Weight Watchers or something like that. For those who've tried everything, it might be a referral for surgical treatment.\" (GP, male, 38 years old)\n[SUBTITLE] Overweight not our responsibility [SUBSECTION] This conception has to do with staff's reporting that the treatment of overweight and obesity should not self-evidently be performed within primary health care. This conception was more pronounced among GPs than DNs. However, female DNs were more inclined than male DNs to deny that primary health care plays an important role. Staff considered that their main task was to treat diseases, and overweight and obesity were seen more as conditions that might involve a risk of diabetes or some other disease. If a concomitant disorder was present, however, it was important to intervene. GPs, though, thought that other groups within primary health care, such as DNs, physiotherapists and dietitians, could handle this better than members of their own profession. Commercial weight-loss organisations were also thought to be more effective, and therefore more appropriate. 
Furthermore, staff thought that overweight and obesity constituted a societal problem, where community planning, school policies and information to parents were what mattered most. Some staff, however, acknowledged that health care had a role to play in prevention, but this only applied to child health care centres and school health services.\n\"I don't think you should take it for granted that we're the ones to intervene. We're trained in medical care. Overweight and obesity are more of a societal problem.\" (GP, male, 49 years old)\nThis conception has to do with staff's reporting that the treatment of overweight and obesity should not self-evidently be performed within primary health care. This conception was more pronounced among GPs than DNs. However, female DNs were more inclined than male DNs to deny that primary health care plays an important role. Staff considered that their main task was to treat diseases, and overweight and obesity were seen more as conditions that might involve a risk of diabetes or some other disease. If a concomitant disorder was present, however, it was important to intervene. GPs, though, thought that other groups within primary health care, such as DNs, physiotherapists and dietitians, could handle this better than members of their own profession. Commercial weight-loss organisations were also thought to be more effective, and therefore more appropriate. Furthermore, staff thought that overweight and obesity constituted a societal problem, where community planning, school policies and information to parents were what mattered most. Some staff, however, acknowledged that health care had a role to play in prevention, but this only applied to child health care centres and school health services.\n\"I don't think you should take it for granted that we're the ones to intervene. We're trained in medical care. Overweight and obesity are more of a societal problem.\" (GP, male, 49 years old)\n[SUBTITLE] Continuity and long-term support [SUBSECTION] This conception has to do with staff's belief that obesity must be managed over an extended period. Staff emphasised continuity in care because lifestyle change was conceived of as taking time. Often, they saw that people were non-compliant to advice year after year but then suddenly things started to happen. The importance of encountering the same personnel was stressed by a number of staff. For the patients, this would mean not having to repeat their story again and again, and would also mean not being given contradictory information.\n\"It has to be a long-term relationship. Often it's very short encounters, but I've noticed that I can get further with the patients I meet repeatedly.\" (DN, male, 55 years old)\nThis conception has to do with staff's belief that obesity must be managed over an extended period. Staff emphasised continuity in care because lifestyle change was conceived of as taking time. Often, they saw that people were non-compliant to advice year after year but then suddenly things started to happen. The importance of encountering the same personnel was stressed by a number of staff. For the patients, this would mean not having to repeat their story again and again, and would also mean not being given contradictory information.\n\"It has to be a long-term relationship. 
Often it's very short encounters, but I've noticed that I can get further with the patients I meet repeatedly.\" (DN, male, 55 years old)\n[SUBTITLE] Co-operate for knowledge-based care [SUBSECTION] Staff considered it important to adopt a team approach, which included different competencies. Co-operation had to be both within and outside primary health care. Staff regarded psychologists, welfare officers and psychiatrists as especially important collaboration partners, as some patients had eating problems or other psychological problems. Patients were often referred to centres specialising in obesity management, but staff saw little real co-operation taking place. Otherwise, co-ordinated efforts in the local community, where primary care would be just one of the stakeholders, were seen as an important next step in improving care.\n\"Well, what we have, and I think is very positive, is access to a dietitian, helpful doctors, a social welfare officer and a nurse competent in cognitive therapy. I think we can solve this in-house.\" (DN, female, 55 years old)\nStaff considered it important to adopt a team approach, which included different competencies. Co-operation had to be both within and outside primary health care. Staff regarded psychologists, welfare officers and psychiatrists as especially important collaboration partners, as some patients had eating problems or other psychological problems. Patients were often referred to centres specialising in obesity management, but staff saw little real co-operation taking place. Otherwise, co-ordinated efforts in the local community, where primary care would be just one of the stakeholders, were seen as an important next step in improving care.\n\"Well, what we have, and I think is very positive, is access to a dietitian, helpful doctors, a social welfare officer and a nurse competent in cognitive therapy. I think we can solve this in-house.\" (DN, female, 55 years old)\n[SUBTITLE] Overweight needs to be prioritised [SUBSECTION] Staff regarded primary care as having an important role to play in both the prevention and treatment of overweight and obesity. However, there was a major barrier to working effectively in this area, namely lack of time. Sometimes staff avoided bringing up weight issues because they considered that they would not have time to deal properly with the problem later on. DNs often devoted more time per visit than GPs, but experienced that there were competing assignments hindering them from working more actively with these patients. Staff considered that GPs had much less time because more easily treated conditions were often prioritised. They regarded the system of reimbursing health care centres as in need of improvement.\n\"We have no resources and that's why we can't do anything. There's no time. We should actually devote more time to prevention, but it's all about diseases.\" (DN, female, 58 years old)\nStaff regarded primary care as having an important role to play in both the prevention and treatment of overweight and obesity. However, there was a major barrier to working effectively in this area, namely lack of time. Sometimes staff avoided bringing up weight issues because they considered that they would not have time to deal properly with the problem later on. DNs often devoted more time per visit than GPs, but experienced that there were competing assignments hindering them from working more actively with these patients. 
Staff considered that GPs had much less time because more easily treated conditions were often prioritised. They regarded the system of reimbursing health care centres as in need of improvement.\n\"We have no resources and that's why we can't do anything. There's no time. We should actually devote more time to prevention, but it's all about diseases.\" (DN, female, 58 years old)\n[SUBTITLE] Lack of distinct guidelines and evidence [SUBSECTION] The lack of guidelines and evidence within the area of obesity was more strongly experienced by the men than by the women. The view of male DNs, for instance, was that there are few treatment options to offer the patients. Also, the existing methods, except for surgical intervention, were considered to be ineffective and not evidence-based. Staff were especially eager for guidelines regarding dietary advice, which at present tended to be vague. Because of the many contradictions, different opinions and extensive debate about what were the most successful diet regimes, staff regarded it as difficult to offer balanced advice to patients. The diet pills and medical treatments available for losing weight were also considered ineffective. Staff might be prepared to recommend them but were not optimistic about the outcome.\n\"I feel I don't have much to offer medically. It's very general advice at best, and a recommendation to try Weight Watchers or something like that. For those who've tried everything, it might be a referral for surgical treatment.\" (GP, male, 38 years old)\nThe lack of guidelines and evidence within the area of obesity was more strongly experienced by the men than by the women. The view of male DNs, for instance, was that there are few treatment options to offer the patients. Also, the existing methods, except for surgical intervention, were considered to be ineffective and not evidence-based. Staff were especially eager for guidelines regarding dietary advice, which at present tended to be vague. Because of the many contradictions, different opinions and extensive debate about what were the most successful diet regimes, staff regarded it as difficult to offer balanced advice to patients. The diet pills and medical treatments available for losing weight were also considered ineffective. Staff might be prepared to recommend them but were not optimistic about the outcome.\n\"I feel I don't have much to offer medically. It's very general advice at best, and a recommendation to try Weight Watchers or something like that. For those who've tried everything, it might be a referral for surgical treatment.\" (GP, male, 38 years old)\n[SUBTITLE] Overweight not our responsibility [SUBSECTION] This conception has to do with staff's reporting that the treatment of overweight and obesity should not self-evidently be performed within primary health care. This conception was more pronounced among GPs than DNs. However, female DNs were more inclined than male DNs to deny that primary health care plays an important role. Staff considered that their main task was to treat diseases, and overweight and obesity were seen more as conditions that might involve a risk of diabetes or some other disease. If a concomitant disorder was present, however, it was important to intervene. GPs, though, thought that other groups within primary health care, such as DNs, physiotherapists and dietitians, could handle this better than members of their own profession. Commercial weight-loss organisations were also thought to be more effective, and therefore more appropriate. 
Furthermore, staff thought that overweight and obesity constituted a societal problem, where community planning, school policies and information to parents were what mattered most. Some staff, however, acknowledged that health care had a role to play in prevention, but this only applied to child health care centres and school health services.\n\"I don't think you should take it for granted that we're the ones to intervene. We're trained in medical care. Overweight and obesity are more of a societal problem.\" (GP, male, 49 years old)\nThis conception has to do with staff's reporting that the treatment of overweight and obesity should not self-evidently be performed within primary health care. This conception was more pronounced among GPs than DNs. However, female DNs were more inclined than male DNs to deny that primary health care plays an important role. Staff considered that their main task was to treat diseases, and overweight and obesity were seen more as conditions that might involve a risk of diabetes or some other disease. If a concomitant disorder was present, however, it was important to intervene. GPs, though, thought that other groups within primary health care, such as DNs, physiotherapists and dietitians, could handle this better than members of their own profession. Commercial weight-loss organisations were also thought to be more effective, and therefore more appropriate. Furthermore, staff thought that overweight and obesity constituted a societal problem, where community planning, school policies and information to parents were what mattered most. Some staff, however, acknowledged that health care had a role to play in prevention, but this only applied to child health care centres and school health services.\n\"I don't think you should take it for granted that we're the ones to intervene. We're trained in medical care. Overweight and obesity are more of a societal problem.\" (GP, male, 49 years old)\n[SUBTITLE] Continuity and long-term support [SUBSECTION] This conception has to do with staff's belief that obesity must be managed over an extended period. Staff emphasised continuity in care because lifestyle change was conceived of as taking time. Often, they saw that people were non-compliant to advice year after year but then suddenly things started to happen. The importance of encountering the same personnel was stressed by a number of staff. For the patients, this would mean not having to repeat their story again and again, and would also mean not being given contradictory information.\n\"It has to be a long-term relationship. Often it's very short encounters, but I've noticed that I can get further with the patients I meet repeatedly.\" (DN, male, 55 years old)\nThis conception has to do with staff's belief that obesity must be managed over an extended period. Staff emphasised continuity in care because lifestyle change was conceived of as taking time. Often, they saw that people were non-compliant to advice year after year but then suddenly things started to happen. The importance of encountering the same personnel was stressed by a number of staff. For the patients, this would mean not having to repeat their story again and again, and would also mean not being given contradictory information.\n\"It has to be a long-term relationship. 
Often it's very short encounters, but I've noticed that I can get further with the patients I meet repeatedly.\" (DN, male, 55 years old)\n[SUBTITLE] Co-operate for knowledge-based care [SUBSECTION] Staff considered it important to adopt a team approach, which included different competencies. Co-operation had to be both within and outside primary health care. Staff regarded psychologists, welfare officers and psychiatrists as especially important collaboration partners, as some patients had eating problems or other psychological problems. Patients were often referred to centres specialising in obesity management, but staff saw little real co-operation taking place. Otherwise, co-ordinated efforts in the local community, where primary care would be just one of the stakeholders, were seen as an important next step in improving care.\n\"Well, what we have, and I think is very positive, is access to a dietitian, helpful doctors, a social welfare officer and a nurse competent in cognitive therapy. I think we can solve this in-house.\" (DN, female, 55 years old)\nStaff considered it important to adopt a team approach, which included different competencies. Co-operation had to be both within and outside primary health care. Staff regarded psychologists, welfare officers and psychiatrists as especially important collaboration partners, as some patients had eating problems or other psychological problems. Patients were often referred to centres specialising in obesity management, but staff saw little real co-operation taking place. Otherwise, co-ordinated efforts in the local community, where primary care would be just one of the stakeholders, were seen as an important next step in improving care.\n\"Well, what we have, and I think is very positive, is access to a dietitian, helpful doctors, a social welfare officer and a nurse competent in cognitive therapy. I think we can solve this in-house.\" (DN, female, 55 years old)\n[SUBTITLE] Promoting lifestyle change [SUBSECTION] [SUBTITLE] Small steps and realistic goals [SUBSECTION] Staff regarded it as necessary to help patients find ways to improve lifestyle. The focus was mainly on increasing physical activity, especially lighter activities on a day-to-day basis, like cycling or walking to work. To recommend more intense workouts, like running, was seen as counter-productive, because patients were unlikely to adhere to such a regime. Some staff put exercise on a prescription basis (whereby, for example, the patient had the right to go to a gym during working hours), because they thought it gave additional force to their advice. However, some experienced that this worked best with patients who were already motivated but had not yet started, or who had done some exercise before. Staff also considered it important to encourage patients to reduce their energy intake. However, staff saw it as important to advise the patient to take small steps and start with one thing at a time. Often, patients were eager to go ahead and make changes in all areas at once. Unrealistic expectations about weight loss and progress had prompted staff to help patients find more realistic and achievable goals.\n\"I try to get them to be active on a daily basis. Walking a short distance to work or using the stairs. It's important that they begin changing their behaviour slowly.\" (DN, female, 56 years old)\nStaff regarded it as necessary to help patients find ways to improve lifestyle. 
The focus was mainly on increasing physical activity, especially lighter activities on a day-to-day basis, like cycling or walking to work. To recommend more intense workouts, like running, was seen as counter-productive, because patients were unlikely to adhere to such a regime. Some staff put exercise on a prescription basis (whereby, for example, the patient had the right to go to a gym during working hours), because they thought it gave additional force to their advice. However, some experienced that this worked best with patients who were already motivated but had not yet started, or who had done some exercise before. Staff also considered it important to encourage patients to reduce their energy intake. However, staff saw it as important to advise the patient to take small steps and start with one thing at a time. Often, patients were eager to go ahead and make changes in all areas at once. Unrealistic expectations about weight loss and progress had prompted staff to help patients find more realistic and achievable goals.
"I try to get them to be active on a daily basis. Walking a short distance to work or using the stairs. It's important that they begin changing their behaviour slowly." (DN, female, 56 years old)

Raise awareness
Staff considered that patients were not always aware of their weight status or unhealthy lifestyle. Obese patients often sought care for neck, back and knee pain, heart and lung problems, or for general tiredness. Therefore it was important to give factual information about the association between obesity and ill-health. It was also regarded as helpful to use self-report questionnaires on health and lifestyle, or food diaries, as a basis for talking about the patient's condition. Staff considered that many did not reflect at all on their dietary patterns, and that the recording of food intake made patients more attentive. Staff also relied on medical health data, which constituted very straightforward information. One strategy to raise weight awareness was to measure body size or weight and then to move on to presenting facts about the association with ill-health.
"I think these general health questionnaires about smoking, physical activity, diet and alcohol are a way of approaching the weight issue. They might not even have thought about the fact they don't eat vegetables every day." (GP, female, 45 years old)

Individually based solutions
Staff regarded it as important to start from the patient's perspective and keep as open a mind as possible. They also considered it important to ask a lot of questions and to take careful note of the thoughts and strategies that the patient had applied in the past or was now applying. It was important to help patients find their own solutions, the staff's task being to guide the patient towards the right level of ambition. Different solutions were required for different patients. Contrary to what staff sometimes expected, patients with fewer resources might make the best progress. Staff experienced that solutions could always be found, and that these solutions could be managed by the patients. However, in more difficult cases, staff on occasion told patients that it might be time to stop trying to lose weight and focus on something else.
"It's obvious that it has to be adapted to the individual. It's very personal what work, how old they are, etc. It may not even be a question of how much food they eat, but what and when they eat." (GP, female, 34 years old)

Facilitate motivation
It was important for staff to emphasise the improved health that could be achieved by changing eating habits and physical activity regardless of any weight loss. Showing patients' improvements on health indicators was also regarded as motivational. Female GPs and DNs highlighted this to a greater extent than male GPs and DNs. Furthermore, staff considered that group activities were very useful. Not only the exchange of experience, support, and company, but also the pressure that could be exerted within the group enabled people to take responsibility. However, some staff considered that there were cases where more drastic methods were needed, like pushing the patient quite hard, making strong demands or using scare tactics. Staff told patients that they were going to have severe complications, develop certain diseases, or could even die from obesity.
"I sometimes say: 'Heart disease, do you want that? Or diabetes?' I try to scare them a little bit and if I find out that their mother died of a heart attack I can use that." (DN, female, 50 years old)
Need for competency

Respectful encounters
Staff were aware of the stigmatisation of obese people, and also of the fact that it was hurtful. They therefore tried to be sensitive when raising issues of lifestyle and weight, and were careful not to blame the patients themselves. When medical advice had been sought concerning conditions not related to weight, but staff still raised the weight issue, some patients were offended and angry, even furious. For this reason, staff often waited until the second or third encounter before raising the matter. They sensed they had to establish a good relation in order not to seem provocative and possibly lose trust. Moreover, staff tried to show empathy and an understanding of the difficult situation of being obese. Often, staff perceived that the patient needed to be consoled, emotionally supported and acknowledged. Feelings of shame and guilt were often apparent. The hopelessness that some patients described was also difficult to handle, but staff tried to provide encouragement and hope. Female GPs made greater reference to respectful encounters than the other groups.
"It's very much a question of comforting words or, so to speak, off-loading the blame. You need the right psychological feeling for meeting these individuals and their giant dilemma." (GP, female, 44 years old)

Staff with active interest
This conception has to do with staff's interest in weight management. Staff considered that work with overweight and obese patients was best performed by those with an active interest in the area. An enthusiastic person with a strong driving force who really had a commitment was perceived as enhancing the patient encounter. GPs considered that nurses were often very interested in helping obese patients and actively trying to improve their work in this respect. DNs regarded the work as part of their profession and had an independent interest. DNs' conceptions of GPs' involvement were divided. Some DNs regarded GPs as initiators of weight management, others regarded them as not particularly interested, and sometimes evasive. Some GPs expressed a willingness to work with overweight and obesity but accepted that their colleagues were often not eager to get involved.
"The nurses are probably a bit more oriented towards this kind of work, and I think they can do a lot. They have more time to go through things, and they're highly competent. Anyway, the ones we have have taken an active interest." (GP, male, 52 years old)

Knowledge about diet and counselling
This conception has to do with how staff regarded training in different counselling techniques and basic knowledge about diet as essential to their encounters with obese patients. Staff considered that they needed pedagogic skills to help patients take decisions about lifestyle. They also experienced that special education in motivational interviewing and cognitive behavioural techniques would make for more fruitful encounters. However, many of the staff wanted better knowledge about the basic principles of nutrition and what constitutes a healthy diet. Staff tried to follow the constantly ongoing debate and catch up with new findings about successful dietary interventions, but expressed a need for short training courses.
"Our basic and further education has to become much better. First, there's all this discussion about what diet to recommend, and second, about how to get people to do what you tell them." (DN, male, 58 years old)
Adherence to new habits

Overcome deep-seated habits
Staff regarded it as very difficult for people to change their lifestyle. They indicated that very few succeed in losing weight, and even those only lose a little. Lack of success among their patients had also made them less optimistic about their ability to help others in the future. Patients were perceived as having tried a huge set of strategies to lose weight, but without result. Staff attributed this largely to patients' difficulty in changing their behaviour. Lack of knowledge was not an issue for most patients, but the question was how they were going to put their knowledge into practice. Often, patients managed their new daily habits for some months but then relapsed. The view was that it is too tough to stick to a new regime and that the lack of an immediate positive outcome made people give in.
"It's not so easy to change old routines. I think most people know how to eat, but it's one thing to know how and another thing to actually do it." (DN, female, 55 years old)

Psychological and medical barriers
This conception concerns how staff regarded patients as facing psychological, medical and physical barriers to adhering to new habits. Males reflected more on this than females. The view was that pain in knees and hips limited opportunities for physical activity. Diabetes was often part of the picture, and this was regarded as requiring the patient to be extra compliant with advice about losing weight. Some staff also indicated that their patients had to rely on many drugs, which made it difficult to adopt new lifestyle habits. Otherwise, patients who were perceived as having psychological or psychiatric problems (depression, anxiety or addiction) were regarded as especially non-compliant. Often, they used food as a way of handling their distress, and changing eating behaviour was especially difficult. Medications against depression and anxiety were also regarded as causing weight gain, which discouraged patients from improving their habits.
"Often orthopaedic problems hinder people and one thing leads to the other. Orthopaedic problems increase and then you can't move around. Your weight goes up and of course it gets harder to exercise." (GP, male, 52 years old)

Socio-cultural barriers
This conception has to do with staff's considering that their patients were facing social and cultural barriers in adhering to new habits. Working hours, family life and financial situation were perceived as important factors affecting adherence, whilst patients from other cultures had the additional problem of finding appropriate food options corresponding to the traditions in their home countries. Staff sometimes advised moderate exercise, like swimming, to very heavy patients, but this could involve the problem of having to undress in front of strangers. DNs, especially the male DNs, indicated that socio-cultural barriers may have an important influence on behaviour.
"There are quite a lot here that come from Asia and the Mediterranean area and they often have dinner very late and have particular eating habits. It's very difficult to make them change things." (DN, male, 35 years old)

Understanding patient attitudes

Motivation to change
Staff experienced that it was the patients' own responsibility to find the motivation. Patients had to come up with their own ideas about what to do. Willpower was very important. Patients had to commit themselves, be prepared to put in a lot of work, and make weight their first priority. However, some patients were described as a bit lazy, lacking in energy and indifferent to their situation. Some were regarded as having the motivation to lose weight but still reluctant to make the necessary changes. Those who sought professional care to lose weight were sometimes regarded as just wanting to wear smaller clothes. Even though some slight physical symptoms were often present, their main motivation was a better appearance. Staff considered that patients often felt ashamed of their appearance, and that the low degree of acceptance of larger bodies in society acted as a motivator for losing weight.
However, some staff thought the opposite, namely that there is little motivation to lose weight because so many others in society are just as obese. Patients who had experienced diabetes, a heart attack or another severe problem, or had seen such problems at close hand, were often highly motivated to do something. Mild back pain or a hurting knee could also be motivating factors, but some staff considered that many patients just adapted to their excess weight and did not seek help before they felt very uncomfortable.
"Patients want to lose weight but they don't want to change. Start walking instead of taking the bus, and eat less, that's all there is to it. Or the motivation might be there but they don't really want to do it, only if they think it's important." (GP, male, 49 years old)

Evasive behaviour
Staff indicated that patients with obesity often made excuses for not coming to appointments, following advice or taking exercise. Patients tended to blame their failures on such things as family problems, lack of time, lack of money, and sometimes pain and medication. These patients were also regarded as having a tendency to disappear without notice. They were compliant for some time but then made themselves unavailable for follow-up. DNs were often the ones taking care of the follow-up and this group expressed greater concern about patients' evasive behaviour than GPs. Furthermore, what the staff perceived as very striking about this group of patients was that they claimed not to eat much and exercise a lot, and yet did not lose weight. Here it was very difficult for staff to find ways of telling the patient that this was in conflict with scientific evidence.
"They often say 'I don't understand it, I don't eat anything', but actually we know they do." (DN, female, 45 years old)

Trusting in care
Staff considered that many patients sought medical care just to get diet pills. They had tried different methods, and now their hope lay in pills or other medical treatment. The patients wondered if they suffered from some metabolic disturbance and wanted GPs to do tests. However, staff stated that the tests were seldom positive. Some patients also turned to staff in the hope that they would come up with a solution that worked. Staff regarded patients as off-loading their problem, and as expecting them to see to it that the excess weight disappeared as if by magic. Staff regarded patients as unwilling to assume the responsibility for losing weight, transferring it instead to the GP or DN.
"I think a lot of them believe that someone else is going to do the job for them.... They put the responsibility on me, I'm the one who's going to fix it so they lose weight. I try to talk them out of it, but some don't listen." (DN, female, 59 years old)

Lack of self-confidence
Female staff (and one male GP) experienced that patients lacked self-confidence in their ability to lose weight and adopt a healthy pattern of behaviour. Patients were regarded as being highly motivated but also expressed hopelessness about the fact that, even though they had tried a huge number of strategies, they had not succeeded in losing weight. Staff considered that patients' disappointment at not managing their overweight made them lose confidence, and feelings of guilt, frustration, despair and low self-esteem were often apparent.
"There comes a time when you get so disappointed with yourself, because you just can't lose weight. You think you've done everything, and you still can't like yourself. You lose confidence." (GP, female, 35 years old)
Overweight needs to be prioritised
Staff regarded primary care as having an important role to play in both the prevention and treatment of overweight and obesity. However, there was a major barrier to working effectively in this area, namely lack of time. Sometimes staff avoided bringing up weight issues because they considered that they would not have time to deal properly with the problem later on. DNs often devoted more time per visit than GPs, but experienced that there were competing assignments hindering them from working more actively with these patients. Staff considered that GPs had much less time because more easily treated conditions were often prioritised. They regarded the system of reimbursing health care centres as in need of improvement.
"We have no resources and that's why we can't do anything. There's no time. We should actually devote more time to prevention, but it's all about diseases." (DN, female, 58 years old)

Lack of distinct guidelines and evidence
The lack of guidelines and evidence within the area of obesity was more strongly experienced by the men than by the women. The view of male DNs, for instance, was that there are few treatment options to offer the patients. Also, the existing methods, except for surgical intervention, were considered to be ineffective and not evidence-based. Staff were especially eager for guidelines regarding dietary advice, which at present tended to be vague. Because of the many contradictions, different opinions and extensive debate about what were the most successful diet regimes, staff regarded it as difficult to offer balanced advice to patients. The diet pills and medical treatments available for losing weight were also considered ineffective. Staff might be prepared to recommend them but were not optimistic about the outcome.
"I feel I don't have much to offer medically. It's very general advice at best, and a recommendation to try Weight Watchers or something like that. For those who've tried everything, it might be a referral for surgical treatment." (GP, male, 38 years old)

Overweight not our responsibility
This conception has to do with staff's reporting that the treatment of overweight and obesity should not self-evidently be performed within primary health care. This conception was more pronounced among GPs than DNs. However, female DNs were more inclined than male DNs to deny that primary health care plays an important role. Staff considered that their main task was to treat diseases, and overweight and obesity were seen more as conditions that might involve a risk of diabetes or some other disease. If a concomitant disorder was present, however, it was important to intervene. GPs, though, thought that other groups within primary health care, such as DNs, physiotherapists and dietitians, could handle this better than members of their own profession. Commercial weight-loss organisations were also thought to be more effective, and therefore more appropriate. Furthermore, staff thought that overweight and obesity constituted a societal problem, where community planning, school policies and information to parents were what mattered most. Some staff, however, acknowledged that health care had a role to play in prevention, but this only applied to child health care centres and school health services.
"I don't think you should take it for granted that we're the ones to intervene. We're trained in medical care. Overweight and obesity are more of a societal problem." (GP, male, 49 years old)
[SUBTITLE] Continuity and long-term support [SUBSECTION] This conception has to do with staff's belief that obesity must be managed over an extended period. Staff emphasised continuity in care because lifestyle change was conceived of as taking time. Often, they saw that people were non-compliant with advice year after year, but then suddenly things started to happen. The importance of encountering the same personnel was stressed by a number of staff. For the patients, this would mean not having to repeat their story again and again, and not being given contradictory information.
"It has to be a long-term relationship. Often it's very short encounters, but I've noticed that I can get further with the patients I meet repeatedly." (DN, male, 55 years old)

[SUBTITLE] Co-operate for knowledge-based care [SUBSECTION] Staff considered it important to adopt a team approach that included different competencies. Co-operation had to take place both within and outside primary health care. Staff regarded psychologists, welfare officers and psychiatrists as especially important collaboration partners, as some patients had eating problems or other psychological problems. Patients were often referred to centres specialising in obesity management, but staff saw little real co-operation taking place. Otherwise, co-ordinated efforts in the local community, where primary care would be just one of the stakeholders, were seen as an important next step in improving care.
"Well, what we have, and I think is very positive, is access to a dietitian, helpful doctors, a social welfare officer and a nurse competent in cognitive therapy. I think we can solve this in-house." (DN, female, 55 years old)
[SUBTITLE] Small steps and realistic goals [SUBSECTION] Staff regarded it as necessary to help patients find ways to improve their lifestyle. The focus was mainly on increasing physical activity, especially lighter day-to-day activities such as cycling or walking to work. Recommending more intense workouts, like running, was seen as counter-productive, because patients were unlikely to adhere to such a regime. Some staff put exercise on a prescription basis (whereby, for example, the patient had the right to go to a gym during working hours), because they thought it gave additional force to their advice. However, some experienced that this worked best with patients who were already motivated but had not yet started, or who had done some exercise before. Staff also considered it important to encourage patients to reduce their energy intake. At the same time, they saw it as important to advise the patient to take small steps and start with one thing at a time. Often, patients were eager to go ahead and make changes in all areas at once. Unrealistic expectations about weight loss and progress had prompted staff to help patients find more realistic and achievable goals.
"I try to get them to be active on a daily basis. Walking a short distance to work or using the stairs. It's important that they begin changing their behaviour slowly." (DN, female, 56 years old)
[SUBTITLE] Raise awareness [SUBSECTION] Staff considered that patients were not always aware of their weight status or unhealthy lifestyle. Obese patients often sought care for neck, back and knee pain, heart and lung problems, or for general tiredness. It was therefore important to give factual information about the association between obesity and ill-health. It was also regarded as helpful to use self-report questionnaires on health and lifestyle, or food diaries, as a basis for talking about the patient's condition. Staff considered that many patients did not reflect at all on their dietary patterns, and that recording food intake made them more attentive. Staff also relied on medical health data, which constituted very straightforward information. One strategy to raise weight awareness was to measure body size or weight and then move on to presenting facts about the association with ill-health.
"I think these general health questionnaires about smoking, physical activity, diet and alcohol are a way of approaching the weight issue. They might not even have thought about the fact they don't eat vegetables every day." (GP, female, 45 years old)

[SUBTITLE] Individually based solutions [SUBSECTION] Staff regarded it as important to start from the patient's perspective and keep as open a mind as possible. They also considered it important to ask a lot of questions and to take careful note of the thoughts and strategies that the patient had applied in the past or was now applying. It was important to help patients find their own solutions, the staff's task being to guide the patient towards the right level of ambition. Different solutions were required for different patients. Contrary to what staff sometimes expected, patients with fewer resources might make the best progress. Staff experienced that solutions could always be found, and that these solutions could be managed by the patients. However, in more difficult cases, staff on occasion told patients that it might be time to stop trying to lose weight and focus on something else.
"It's obvious that it has to be adapted to the individual. It's very personal: what work they do, how old they are, etc. It may not even be a question of how much food they eat, but what and when they eat." (GP, female, 34 years old)

[SUBTITLE] Facilitate motivation [SUBSECTION] It was important for staff to emphasise the improved health that could be achieved by changing eating habits and physical activity, regardless of any weight loss. Showing patients' improvements on health indicators was also regarded as motivational. Female GPs and DNs highlighted this to a greater extent than male GPs and DNs. Furthermore, staff considered that group activities were very useful. Not only the exchange of experience, support and company, but also the pressure that could be exerted within the group enabled people to take responsibility. However, some staff considered that there were cases where more drastic methods were needed, like pushing the patient quite hard, making strong demands or using scare tactics. Staff told patients that they were going to have severe complications, develop certain diseases, or could even die from obesity.
"I sometimes say: 'Heart disease, do you want that? Or diabetes?' I try to scare them a little bit and if I find out that their mother died of a heart attack I can use that." (DN, female, 50 years old)
[SUBTITLE] Respectful encounters [SUBSECTION] Staff were aware of the stigmatisation of obese people, and also of the fact that it was hurtful. They therefore tried to be sensitive when raising issues of lifestyle and weight, and were careful not to blame the patients themselves. When medical advice had been sought for conditions not related to weight but staff still raised the weight issue, some patients were offended and angry, even furious. For this reason, staff often waited until the second or third encounter before raising the matter. They sensed they had to establish a good relationship in order not to seem provocative and possibly lose trust. Moreover, staff tried to show empathy and an understanding of the difficult situation of being obese. Often, staff perceived that the patient needed to be consoled, emotionally supported and acknowledged. Feelings of shame and guilt were often apparent. The hopelessness that some patients described was also difficult to handle, but staff tried to provide encouragement and hope. Female GPs made greater reference to respectful encounters than the other groups.
"It's very much a question of comforting words or, so to speak, off-loading the blame. You need the right psychological feeling for meeting these individuals and their giant dilemma." (GP, female, 44 years old)

[SUBTITLE] Staff with active interest [SUBSECTION] This conception has to do with staff's interest in weight management. Staff considered that work with overweight and obese patients was best performed by those with an active interest in the area. An enthusiastic person with a strong driving force and real commitment was perceived as enhancing the patient encounter. GPs considered that nurses were often very interested in helping obese patients and were actively trying to improve their work in this respect. DNs regarded the work as part of their profession and had an independent interest. DNs' conceptions of GPs' involvement were divided: some DNs regarded GPs as initiators of weight management, while others regarded them as not particularly interested, and sometimes evasive. Some GPs expressed a willingness to work with overweight and obesity but accepted that their colleagues were often not eager to get involved.
"The nurses are probably a bit more oriented towards this kind of work, and I think they can do a lot. They have more time to go through things, and they're highly competent. Anyway, the ones we have have taken an active interest." (GP, male, 52 years old)

[SUBTITLE] Knowledge about diet and counselling [SUBSECTION] This conception has to do with how staff regarded training in different counselling techniques and basic knowledge about diet as essential to their encounters with obese patients. Staff considered that they needed pedagogic skills to help patients take decisions about lifestyle. They also experienced that special education in motivational interviewing and cognitive behavioural techniques would make for more fruitful encounters. In addition, many of the staff wanted better knowledge about the basic principles of nutrition and what constitutes a healthy diet. Staff tried to follow the constantly ongoing debate and catch up with new findings about successful dietary interventions, but expressed a need for short training courses.
"Our basic and further education has to become much better. First, there's all this discussion about what diet to recommend, and second, about how to get people to do what you tell them." (DN, male, 58 years old)

[SUBTITLE] Overcome deep-seated habits [SUBSECTION] Staff regarded it as very difficult for people to change their lifestyle. They indicated that very few succeed in losing weight, and even those only lose a little. Lack of success among their patients had also made them less optimistic about their ability to help others in the future. Patients were perceived as having tried a huge set of strategies to lose weight, but without result. Staff attributed this largely to patients' difficulty in changing their behaviour. Lack of knowledge was not an issue for most patients; the question was how they were going to put their knowledge into practice. Often, patients managed their new daily habits for some months but then relapsed. The view was that it is too tough to stick to a new regime and that the lack of an immediate positive outcome made people give in.
"It's not so easy to change old routines. I think most people know how to eat, but it's one thing to know how and another thing to actually do it." (DN, female, 55 years old)
[SUBTITLE] Psychological and medical barriers [SUBSECTION] This conception concerns how staff regarded patients as facing psychological, medical and physical barriers to adhering to new habits. Males reflected more on this than females. The view was that pain in the knees and hips limited opportunities for physical activity. Diabetes was often part of the picture, and this was regarded as requiring the patient to be extra compliant with advice about losing weight. Some staff also indicated that their patients had to rely on many drugs, which made it difficult to adopt new lifestyle habits. Otherwise, patients who were perceived as having psychological or psychiatric problems (depression, anxiety or addiction) were regarded as especially non-compliant. Often, they used food as a way of handling their distress, and changing eating behaviour was especially difficult. Medications against depression and anxiety were also regarded as causing weight gain, which discouraged patients from improving their habits.
"Often orthopaedic problems hinder people and one thing leads to the other. Orthopaedic problems increase and then you can't move around. Your weight goes up and of course it gets harder to exercise." (GP, male, 52 years old)

[SUBTITLE] Socio-cultural barriers [SUBSECTION] This conception has to do with staff's view that their patients were facing social and cultural barriers to adhering to new habits. Working hours, family life and financial situation were perceived as important factors affecting adherence, whilst patients from other cultures had the additional problem of finding appropriate food options corresponding to the traditions in their home countries. Staff sometimes advised moderate exercise, like swimming, to very heavy patients, but this could involve the problem of having to undress in front of strangers. DNs, especially the male DNs, indicated that socio-cultural barriers may have an important influence on behaviour.
"There are quite a lot here that come from Asia and the Mediterranean area and they often have dinner very late and have particular eating habits. It's very difficult to make them change things." (DN, male, 35 years old)

[SUBTITLE] Motivation to change [SUBSECTION] Staff experienced that it was the patients' own responsibility to find the motivation. Patients had to come up with their own ideas about what to do. Willpower was very important. Patients had to commit themselves, be prepared to put in a lot of work, and make weight their first priority. However, some patients were described as a bit lazy, lacking in energy and indifferent to their situation. Some were regarded as having the motivation to lose weight but still reluctant to make the necessary changes. Those who sought professional care to lose weight were sometimes regarded as just wanting to wear smaller clothes. Even though some slight physical symptoms were often present, their main motivation was a better appearance. Staff considered that patients often felt ashamed of their appearance, and that the low degree of acceptance of larger bodies in society acted as a motivator for losing weight. However, some staff thought the opposite, namely that there is little motivation to lose weight because so many others in society are just as obese. Patients who had experienced diabetes, a heart attack or another severe problem, or had seen such problems at close hand, were often highly motivated to do something. Mild back pain or an aching knee could also be a motivating factor, but some staff considered that many patients simply adapted to their excess weight and did not seek help until they felt very uncomfortable.
"Patients want to lose weight but they don't want to change. Start walking instead of taking the bus, and eat less, that's all there is to it. Or the motivation might be there but they don't really want to do it, only if they think it's important." (GP, male, 49 years old)
[SUBTITLE] Evasive behaviour [SUBSECTION] Staff indicated that patients with obesity often made excuses for not coming to appointments, following advice or taking exercise. Patients tended to blame their failures on such things as family problems, lack of time, lack of money, and sometimes pain and medication. These patients were also regarded as having a tendency to disappear without notice: they were compliant for some time but then made themselves unavailable for follow-up. DNs were often the ones taking care of the follow-up, and this group expressed greater concern about patients' evasive behaviour than GPs. Furthermore, staff found it very striking that these patients claimed not to eat much and to exercise a lot, and yet did not lose weight. Here it was very difficult for staff to find ways of telling the patient that this was in conflict with scientific evidence.
"They often say 'I don't understand it, I don't eat anything', but actually we know they do." (DN, female, 45 years old)
Staff considered that patients' disappointment at not managing their overweight made them lose confidence, and feelings of guilt, frustration, despair and low self-esteem were often apparent.\n\"There comes a time when you get so disappointed with yourself, because you just can't lose weight. You think you've done everything, and you still can't like yourself. You lose confidence.\" (GP, female, 35 years old)", "We found a wide variation in GPs' and DNs' conceptions of their encounters with obese patients in primary health care. Staff described the encounters from both an organisational/personnel perspective and a patient perspective. The need for primary care to have an adequate organisation for obesity treatment and competent staff for promoting lifestyle change was stressed. However, patients' adherence and attitudes to behaviour change were also looked upon as important. In addition to the findings on the collective level we found certain differences in the pattern of conceptions according to profession and gender. However, in view of the small number of male participants, especially in the DN group, these findings have to be interpreted cautiously.\nThe findings of the present study are to some extent in line with those of previous quantitative and qualitative investigations of attitudes and beliefs regarding obesity treatment in primary care. Examples of such beliefs are that primary care is not an entirely appropriate setting for obesity treatment (especially if no concomitant disease is present), that time is lacking for patient visits, that reimbursement systems are inappropriate, that distinct and evidence-based guidelines need to be improved, and that patient motivation to change is low [7,8,12]. Male staff emphasised to a higher degree than female staff that there is a lack of guidelines and evidence. This may reflect that men to a higher degree explain lack of success in obesity treatment in terms of external (organisation) rather than internal (personal competence) causes. The conception that primary health care is not necessarily the best arena for the prevention and treatment of obesity was more evident among GPs than among DNs. This is in line with the finding of Mercer and Tessier [12] to the effect that GPs were more negative as to their role in obesity treatment than were nurses.\nStaff in this study (mainly female GPs) emphasised the need for respectful treatment and individual solutions, and showed an understanding of the difficulty of changing lifestyle. This replicates what has been found in previous qualitative studies. Brown and Thompson [14] and Epstein and Ogden [8] reported that staff perceived the patient-provider relationship to be central to the improvement of obesity treatment. Patients who are informed and involved in decision-making have been found to be more adherent [20] and staff engaged in patient-centred care and make decisions together with their patients are in a better position to offer more individualised behavioural recommendations to their patients, resulting in better adherence [21]. Patients themselves have also asked for a more personalised approach to weight management, and for specific advice rather than broad statements on how to lose weight [22].\nHowever, we also found that some staff experienced that, to motivate patients, they had to threaten them with a possibly fatal outcome, or at least inform them about the negative consequences of obesity. 
There is evidence in the literature that some patients view even mild warnings as scare tactics, with a negative impact on adherence [23]; others, however, regard warnings as encouraging and motivating, even essential to change [22]. It is important that GPs and DNs assess patient motivation and discuss individually what facilitates motivation. Previous studies have shown that patients report greater motivation and are more optimistic about weight loss than their GPs, but that those who see obese patients more often are better at predicting patient motivation [24].

The findings of the present study regarding the staff's perception that some patients exhibit evasive behaviour, are untrustworthy when it comes to revealing their lifestyles, and off-load their problems on to staff are in line with previous research [8,12]. These conceptions were more strongly expressed by DNs than GPs in our study. One reason for this difference could be that the sort of behaviour in question may not appear until after a couple of sessions, so DNs, who often spend more time with patients than GPs do, are more likely to encounter it. The problem with these attitudes on the part of staff is that they may manifest themselves in encounters with patients, and that patients might thereby sense that staff do not trust them [25]. Previous studies have shown that a higher body mass index is associated with less respect from doctors [26] and with higher reporting of perceived discrimination in a health care setting [27]. Respect develops over time; it therefore seems necessary that goals in obesity treatment should include continuity of care and long-term support, as indeed was emphasised by staff in our study.

The use of scare tactics and perceptions of patients as non-adherent might be due to staff not being able to fully use or appreciate the strategies of motivational interviewing that are central to increasing patients' intrinsic willingness to change. Female GPs and DNs, and one male GP, were the only ones who considered that lack of self-confidence in changing lifestyle could be the reason for patients' not succeeding in losing weight. Patients' belief in their ability to make the necessary changes has been found to be important for behaviour change [28], and an understanding of this on the part of the health care provider seems crucial. This finding may suggest that male GPs and DNs are too quick to jump to a conclusion about what underlies some of their patients' non-adherence.

Staff stressed the need for more knowledge and skills in counselling, such as motivational interviewing, and also in basic nutrition and appropriate dietary interventions. The question is whether staff are being trained in the most up-to-date methods of obesity treatment. Staff might, for example, also need skills directed towards helping patients to cope with their situation, because the limited chance of losing weight goes together with the stigma related to obesity. Coping strategies for dealing with stigma have important implications for emotional functioning, and health care staff could assist in finding methods to help improve patients' daily functioning.

There seems to be a need for evidence-based guidelines which are easy to use and regarded as effective by staff. The use of guidelines is intended to improve the quality of treatment, and it has been found that nurses with no specific preparation or guidelines regarding obesity treatment were the ones who felt most awkward in meeting obese patients [14]. However, a study of GPs found that awareness of the guidelines was associated with a more negative attitude towards obesity [6]. These contradictory findings suggest that, even when guidelines are available, they might not be user-friendly, updated and/or integrated appropriately into primary care.

Staff in the present study emphasised the importance of having an active interest in obesity treatment. To further enhance work at primary care centres, it might be necessary for every unit to appoint specific staff to take responsibility for structuring activities and engaging others regarding obesity treatment, organised perhaps in the same way as for diabetic patients. Findings from the Counterweight Programme [29] show, for example, that staff perceived that involvement and a sense of ownership, alongside a clear understanding of programme goals, are important factors in the effective implementation of weight management in primary health care.

Limitations and Strengths

There are certain limitations to consider when it comes to interpreting the findings of the present study. Firstly, quite a number of health care centres refused participation, for which reason it is possible that the participants came from centres which had taken an active interest in weight management. The health care centres included, however, were situated in both affluent and poor areas of Stockholm, and were both large and small. The demographic characteristics of non-participants and participants are unfortunately not known. Secondly, the number of male participants was low (one third). However, owing to the limited number of males eligible from the nurse population and to the fact that previous research has not been able to recruit male nurses, this study makes an important contribution to the field. Thirdly, some of the participants were recommended by their medical head, and it is likely that those with a more positive attitude were chosen. However, the findings display very rich descriptions of conceptions; both negative and positive views were expressed, and participants were not afraid of raising difficult topics.

All participants were asked the same questions, and all interviews and analysis were performed by the first author (LMH), who is a nutritionist. However, the possibility cannot be excluded that the background of the interviewer prompted participants to focus more on issues related to healthy diet and physical activity. To attain greater rigour, the conceptions and descriptive categories derived were scrutinised at each step by the third author (GA), who is experienced in phenomenographic analysis. The findings were constantly discussed in depth during the analysis by two of us (LMH, GA) until agreement was reached.
Implications for Clinical Practice and Future Research

It is likely that low confidence of staff in treating obesity means that obesity in primary care has low priority, and their belief that patients are not motivated produces a moral dilemma for GPs and DNs. How should they prioritise their work? Is obesity the responsibility of the patient or of the health care system? Obesity treatment in primary health care, though, has the potential of being much more effective than it currently is, and the GPs and DNs in the present study touched upon many organisational aspects that need improvement. For example, obesity must be recognised as an important issue at all levels of the health care system. It also seems warranted to promote competence in motivational interviewing and evidence-based treatments, and also to increase awareness of staff's negative views on patient attitudes. Additional ways to enhance care might be to adopt a team-based approach within each unit, with resources to enable continuity in care, and also to promote co-operation with other stakeholders, such as social welfare authorities, commercial weight-loss organisations and specialist obesity units.

Finally, the gender- and profession-based differences which were found are somewhat difficult to interpret and therefore deserve further investigation in larger quantitative studies. For instance, to our knowledge no one has compared both profession and gender aspects in the same study. Moreover, research should investigate the association between staff's negative perceptions of patients with obesity and their actual practices, which, in the long run, might have additional harmful effects on obese patients' health.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

LMH, FR and GA all participated in the design of the study. LMH conducted all the interviews, made the initial analysis of the interview transcripts and drafted the manuscript. LMH and GA had discussions about the analysis and reporting. FR and GA offered comments on the draft of the manuscript. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2296/12/7/prepub
[ "Background", "Methods", "Participants", "Data collection", "Data analysis", "Findings", "Adequate primary health care", "Overweight needs to be prioritised", "Lack of distinct guidelines and evidence", "Overweight not our responsibility", "Continuity and long-term support", "Co-operate for knowledge-based care", "Promoting lifestyle change", "Small steps and realistic goals", "Raise awareness", "Individually based solutions", "Facilitate motivation", "Need for competency", "Respectful encounters", "Staff with active interest", "Knowledge about diet and counselling", "Adherence to new habits", "Overcome deep-seated habits", "Psychological and medical barriers", "Socio-cultural barriers", "Understanding patient attitudes", "Motivation to change", "Evasive behaviour", "Trusting in care", "Lack of self-confidence", "Discussion", "Limitations and Strengths", "Implications for Clinical Practice and Future Research", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The number of obese adults in Sweden has doubled over a 20-year period, and about 10% of both women and men are estimated to be obese [1]. Recent data show that the prevalence of obesity in a primary care setting is 20-24% [2]. In Sweden, teams of general practitioners (GPs) and nurses, especially district nurses (DNs), are on the front line of the encounter between health care personnel and people with obesity or other lifestyle-related diseases [3]. Consequently, these professional groups must, to a greater extent than before, assist patients with weight management. The trend in health care is towards a more patient-centred approach, empowering the patients themselves to take responsibility for their health [4]. Accordingly, one of the tasks of GPs and DNs is to support patients in making healthy decisions about their daily living habits.\nPrimary health care is recognised as an important resource in co-ordinating the treatment of obese patients [5]. Previous studies also reveal that GPs and nurses acknowledge that primary care has an important role in managing obesity [6,7]. Other studies, however, show that GPs do not regard obesity management as their responsibility, but rather that of the patient [8].\nA significant proportion of GPs and nurses in primary health care consider themselves insufficiently skilled or prepared to treat patients with obesity [6,7]. Both GPs and nurses regard themselves as ineffective in their work with obese patients [7,9], and a majority of GPs consider the possibility of a patient's succeeding with weight loss as limited [6,9]. Studies have also shown that medical staff attribute the failure to lose weight to personal, patient-related factors rather than to professional ones [8,10]. Several other studies support this notion, since they show that GPs and nurses perceive obese patients as having low motivation, lacking willpower, being unwilling to change lifestyle [11,12] and non-compliant to advice [10]. However, there are studies supporting the presence of system-level barriers, such as lack of time during patient visits [12], lack of places to which to refer patients, and lack of patient-oriented educational materials [13].\nThe seemingly negative beliefs about obesity and obesity treatment which have been documented, especially in quantitative studies, might be over-simplified, and it is likely that GPs and nurses hold more complex views. Qualitative studies of GPs and nurses in primary health care support this idea to some extent [8,12,14]. Nurses perceive weight as a sensitive issue to discuss with patients and therefore try to achieve a balance between factors involving personal responsibility and factors outside the individual's control [14]. Mercer and Tessier found that GPs were negative about their role in obesity treatment, while practice nurses, although accepting it as part of their job, experienced that GPs \"off-loaded\" patients on to them [12]. Epstein and Ogden found that GPs were sceptical about the success of available treatments, but still offered anti-obesity drugs and tried to listen to and understand the patient's problem [8].\nReviewing the literature on existing qualitative studies, including those of nurses and GPs in primary health care, it becomes apparent that there is a need to know more about how they conceive their encounters with patients with obesity. 
Research has hitherto been limited in that no male nurses have been included, despite its having been demonstrated that gender might be important regarding attitudes and practices within obesity management [9,15]. This emphasises the need for further research. Against this background, the aim of the current study was to describe how GPs and DNs, both male and female, conceive their encounters with obesity in primary health care.", "We used a phenomenographic approach to identify and describe the various ways in which a specific phenomenon is conceived [16], in this case primary health care specialists' conceptions of obesity and obesity treatment. This qualitative approach has been used in health care research to study, for example, doctors' or nurses' conceptions of treatments for different diseases [17]. The phenomenographic tradition acknowledges two research perspectives: there is a first-order perspective which concerns how the world actually is, and a second-order perspective which concerns how the world is experienced. In phenomenography, the second-order perspective is essential [16]. The assumption is that there is only one world, but that it can be experienced in qualitatively different ways. The approach comes from educational research and entails adopting a content-related perspective to characterise, understand and describe the qualitatively different ways in which people make sense of the world around them. The conceptions are derived from individual interviews, but the analyses emanate in descriptive categories at a collective level [16].\n[SUBTITLE] Participants [SUBSECTION] Eligible for participation in the study were staff from 57 primary health care centres in a well-defined area in Sweden. The criteria for inclusion were being a DN or GP with a specialist education, speaking Swedish fluently, and acceptance of tape-recording. A strategic selection of participants was made to obtain variety regarding age, gender and primary health care experience. This would provide a range of conceptions from DNs and GPs. The heads of the primary health care centre in question had to give their consent for staff to participate. Therefore we first contacted the heads by e-mail with an information letter, then by telephone approximately a week later. If the head gave permission, the informant in question was approached by either e-mail or telephone. We gave the informants the same information as their head.\nWe included 20 participants at 19 primary health care centres (Table 1). Two DNs came from the same centre. The participants' median age was 51 years, and their median professional experience 10.5 years. Of the 20 informants, 10 were recommended by their medical head, 3 were medical heads themselves, working as GPs, and 7 were contacted from staff lists. Four of the 10 DNs were specialists in diabetes or weight management. Among the GPs, 3 were involved in some kind of weight management strategies.\nDemographic characteristics of the primary health care professionals (n = 20).\nThe medical heads at 16 primary health care centres refused to grant permission for participation, largely because of re-organisation, work overload or shortage of staff, although some were simply unwilling. We could not reach the medical heads at 10 centres. At 12 centres the medical head gave approval for the study but the DNs and GPs declined because of work overload, unwillingness or lack of financial compensation; some could not be reached. 
The Regional Ethical Review Board, Karolinska Institute, Stockholm, Sweden, approved the study (reference number 2006/7-31/1).\nEligible for participation in the study were staff from 57 primary health care centres in a well-defined area in Sweden. The criteria for inclusion were being a DN or GP with a specialist education, speaking Swedish fluently, and acceptance of tape-recording. A strategic selection of participants was made to obtain variety regarding age, gender and primary health care experience. This would provide a range of conceptions from DNs and GPs. The heads of the primary health care centre in question had to give their consent for staff to participate. Therefore we first contacted the heads by e-mail with an information letter, then by telephone approximately a week later. If the head gave permission, the informant in question was approached by either e-mail or telephone. We gave the informants the same information as their head.\nWe included 20 participants at 19 primary health care centres (Table 1). Two DNs came from the same centre. The participants' median age was 51 years, and their median professional experience 10.5 years. Of the 20 informants, 10 were recommended by their medical head, 3 were medical heads themselves, working as GPs, and 7 were contacted from staff lists. Four of the 10 DNs were specialists in diabetes or weight management. Among the GPs, 3 were involved in some kind of weight management strategies.\nDemographic characteristics of the primary health care professionals (n = 20).\nThe medical heads at 16 primary health care centres refused to grant permission for participation, largely because of re-organisation, work overload or shortage of staff, although some were simply unwilling. We could not reach the medical heads at 10 centres. At 12 centres the medical head gave approval for the study but the DNs and GPs declined because of work overload, unwillingness or lack of financial compensation; some could not be reached. The Regional Ethical Review Board, Karolinska Institute, Stockholm, Sweden, approved the study (reference number 2006/7-31/1).\n[SUBTITLE] Data collection [SUBSECTION] Two of us (LMH, GA) developed an interview guide with open-ended questions. The main questions were: What are your experiences of meeting patients with obesity? How do you conceive the life situation of patients with obesity? How do you think care is working for patients with obesity? Individual interviews, which took the form of conversations, were performed by one of us (LMH). Follow-up questions were asked depending on how comprehensively the informant answered the main questions. Examples of follow-up questions were: \"What do you mean?\" \"Can you explain?\" \"Can you tell me more?\" \"Is there anything you would like to add?\" Participants chose where the interview should take place. Two of the 20 participants preferred to be interviewed at the interviewer's research department, while the remaining interviews were performed at the workplace. The tape-recorded interviews (30-80 minutes) were transcribed verbatim.\nTwo of us (LMH, GA) developed an interview guide with open-ended questions. The main questions were: What are your experiences of meeting patients with obesity? How do you conceive the life situation of patients with obesity? How do you think care is working for patients with obesity? Individual interviews, which took the form of conversations, were performed by one of us (LMH). 
Follow-up questions were asked depending on how comprehensively the informant answered the main questions. Examples of follow-up questions were: \"What do you mean?\" \"Can you explain?\" \"Can you tell me more?\" \"Is there anything you would like to add?\" Participants chose where the interview should take place. Two of the 20 participants preferred to be interviewed at the interviewer's research department, while the remaining interviews were performed at the workplace. The tape-recorded interviews (30-80 minutes) were transcribed verbatim.\n[SUBTITLE] Data analysis [SUBSECTION] We carried out the analysis in four steps in accordance with the phenomenographic approach [18,19]. At the first step, the interviews were listened to again to verify that they had been transcribed correctly. The individual transcripts were then read through several times to get an overall impression, and thereafter statements relevant to the study were identified. At the second step, the aim was to identify distinct ways of conceiving obesity treatment, which involved constantly comparing and labelling the statements. The labelled statements were then treated as preliminary conceptions. At the third step the conceptions were compared with one another and grouped to obtain an overall picture of possible links between them. The grouped conceptions formed preliminary descriptive categories to which names were given. At the fourth step, the focus shifted from relations between conceptions to relations between the preliminary descriptive categories. Finally, the descriptive categories were critically investigated to ensure that they properly represented the conceptions.\nWe carried out the analysis in four steps in accordance with the phenomenographic approach [18,19]. At the first step, the interviews were listened to again to verify that they had been transcribed correctly. The individual transcripts were then read through several times to get an overall impression, and thereafter statements relevant to the study were identified. At the second step, the aim was to identify distinct ways of conceiving obesity treatment, which involved constantly comparing and labelling the statements. The labelled statements were then treated as preliminary conceptions. At the third step the conceptions were compared with one another and grouped to obtain an overall picture of possible links between them. The grouped conceptions formed preliminary descriptive categories to which names were given. At the fourth step, the focus shifted from relations between conceptions to relations between the preliminary descriptive categories. Finally, the descriptive categories were critically investigated to ensure that they properly represented the conceptions.", "Eligible for participation in the study were staff from 57 primary health care centres in a well-defined area in Sweden. The criteria for inclusion were being a DN or GP with a specialist education, speaking Swedish fluently, and acceptance of tape-recording. A strategic selection of participants was made to obtain variety regarding age, gender and primary health care experience. This would provide a range of conceptions from DNs and GPs. The heads of the primary health care centre in question had to give their consent for staff to participate. Therefore we first contacted the heads by e-mail with an information letter, then by telephone approximately a week later. If the head gave permission, the informant in question was approached by either e-mail or telephone. 
We gave the informants the same information as their head.\nWe included 20 participants at 19 primary health care centres (Table 1). Two DNs came from the same centre. The participants' median age was 51 years, and their median professional experience 10.5 years. Of the 20 informants, 10 were recommended by their medical head, 3 were medical heads themselves, working as GPs, and 7 were contacted from staff lists. Four of the 10 DNs were specialists in diabetes or weight management. Among the GPs, 3 were involved in some kind of weight management strategies.\nDemographic characteristics of the primary health care professionals (n = 20).\nThe medical heads at 16 primary health care centres refused to grant permission for participation, largely because of re-organisation, work overload or shortage of staff, although some were simply unwilling. We could not reach the medical heads at 10 centres. At 12 centres the medical head gave approval for the study but the DNs and GPs declined because of work overload, unwillingness or lack of financial compensation; some could not be reached. The Regional Ethical Review Board, Karolinska Institute, Stockholm, Sweden, approved the study (reference number 2006/7-31/1).", "Two of us (LMH, GA) developed an interview guide with open-ended questions. The main questions were: What are your experiences of meeting patients with obesity? How do you conceive the life situation of patients with obesity? How do you think care is working for patients with obesity? Individual interviews, which took the form of conversations, were performed by one of us (LMH). Follow-up questions were asked depending on how comprehensively the informant answered the main questions. Examples of follow-up questions were: \"What do you mean?\" \"Can you explain?\" \"Can you tell me more?\" \"Is there anything you would like to add?\" Participants chose where the interview should take place. Two of the 20 participants preferred to be interviewed at the interviewer's research department, while the remaining interviews were performed at the workplace. The tape-recorded interviews (30-80 minutes) were transcribed verbatim.", "We carried out the analysis in four steps in accordance with the phenomenographic approach [18,19]. At the first step, the interviews were listened to again to verify that they had been transcribed correctly. The individual transcripts were then read through several times to get an overall impression, and thereafter statements relevant to the study were identified. At the second step, the aim was to identify distinct ways of conceiving obesity treatment, which involved constantly comparing and labelling the statements. The labelled statements were then treated as preliminary conceptions. At the third step the conceptions were compared with one another and grouped to obtain an overall picture of possible links between them. The grouped conceptions formed preliminary descriptive categories to which names were given. At the fourth step, the focus shifted from relations between conceptions to relations between the preliminary descriptive categories. Finally, the descriptive categories were critically investigated to ensure that they properly represented the conceptions.", "There emerged five descriptive categories illustrating GPs' and DNs' conceptions of their encounters with obese patients in primary health care (Table 2). 
The conceptions in each descriptive category are illustrated by quotations from individual interviewees.\nStaff's conceptions of encounters with patients with obesity in primary health care.\nAll participants, independent of profession and gender, were represented with statements in each descriptive category. However, certain profession and gender differences were identified regarding the conceptions forming the descriptive categories. If more than two thirds of all the statements within a conception were made by one of the compared groups (gender or profession), this is reflected in the account of the findings.\n[SUBTITLE] Adequate primary health care [SUBSECTION] [SUBTITLE] Overweight needs to be prioritised [SUBSECTION] Staff regarded primary care as having an important role to play in both the prevention and treatment of overweight and obesity. However, there was a major barrier to working effectively in this area, namely lack of time. Sometimes staff avoided bringing up weight issues because they considered that they would not have time to deal properly with the problem later on. DNs often devoted more time per visit than GPs, but experienced that there were competing assignments hindering them from working more actively with these patients. Staff considered that GPs had much less time because more easily treated conditions were often prioritised. They regarded the system of reimbursing health care centres as in need of improvement.\n\"We have no resources and that's why we can't do anything. There's no time. We should actually devote more time to prevention, but it's all about diseases.\" (DN, female, 58 years old)\nStaff regarded primary care as having an important role to play in both the prevention and treatment of overweight and obesity. However, there was a major barrier to working effectively in this area, namely lack of time. Sometimes staff avoided bringing up weight issues because they considered that they would not have time to deal properly with the problem later on. DNs often devoted more time per visit than GPs, but experienced that there were competing assignments hindering them from working more actively with these patients. Staff considered that GPs had much less time because more easily treated conditions were often prioritised. They regarded the system of reimbursing health care centres as in need of improvement.\n\"We have no resources and that's why we can't do anything. There's no time. We should actually devote more time to prevention, but it's all about diseases.\" (DN, female, 58 years old)\n[SUBTITLE] Lack of distinct guidelines and evidence [SUBSECTION] The lack of guidelines and evidence within the area of obesity was more strongly experienced by the men than by the women. The view of male DNs, for instance, was that there are few treatment options to offer the patients. Also, the existing methods, except for surgical intervention, were considered to be ineffective and not evidence-based. Staff were especially eager for guidelines regarding dietary advice, which at present tended to be vague. Because of the many contradictions, different opinions and extensive debate about what were the most successful diet regimes, staff regarded it as difficult to offer balanced advice to patients. The diet pills and medical treatments available for losing weight were also considered ineffective. Staff might be prepared to recommend them but were not optimistic about the outcome.\n\"I feel I don't have much to offer medically. 
It's very general advice at best, and a recommendation to try Weight Watchers or something like that. For those who've tried everything, it might be a referral for surgical treatment.\" (GP, male, 38 years old)\nThe lack of guidelines and evidence within the area of obesity was more strongly experienced by the men than by the women. The view of male DNs, for instance, was that there are few treatment options to offer the patients. Also, the existing methods, except for surgical intervention, were considered to be ineffective and not evidence-based. Staff were especially eager for guidelines regarding dietary advice, which at present tended to be vague. Because of the many contradictions, different opinions and extensive debate about what were the most successful diet regimes, staff regarded it as difficult to offer balanced advice to patients. The diet pills and medical treatments available for losing weight were also considered ineffective. Staff might be prepared to recommend them but were not optimistic about the outcome.\n\"I feel I don't have much to offer medically. It's very general advice at best, and a recommendation to try Weight Watchers or something like that. For those who've tried everything, it might be a referral for surgical treatment.\" (GP, male, 38 years old)\n[SUBTITLE] Overweight not our responsibility [SUBSECTION] This conception has to do with staff's reporting that the treatment of overweight and obesity should not self-evidently be performed within primary health care. This conception was more pronounced among GPs than DNs. However, female DNs were more inclined than male DNs to deny that primary health care plays an important role. Staff considered that their main task was to treat diseases, and overweight and obesity were seen more as conditions that might involve a risk of diabetes or some other disease. If a concomitant disorder was present, however, it was important to intervene. GPs, though, thought that other groups within primary health care, such as DNs, physiotherapists and dietitians, could handle this better than members of their own profession. Commercial weight-loss organisations were also thought to be more effective, and therefore more appropriate. Furthermore, staff thought that overweight and obesity constituted a societal problem, where community planning, school policies and information to parents were what mattered most. Some staff, however, acknowledged that health care had a role to play in prevention, but this only applied to child health care centres and school health services.\n\"I don't think you should take it for granted that we're the ones to intervene. We're trained in medical care. Overweight and obesity are more of a societal problem.\" (GP, male, 49 years old)\nThis conception has to do with staff's reporting that the treatment of overweight and obesity should not self-evidently be performed within primary health care. This conception was more pronounced among GPs than DNs. However, female DNs were more inclined than male DNs to deny that primary health care plays an important role. Staff considered that their main task was to treat diseases, and overweight and obesity were seen more as conditions that might involve a risk of diabetes or some other disease. If a concomitant disorder was present, however, it was important to intervene. GPs, though, thought that other groups within primary health care, such as DNs, physiotherapists and dietitians, could handle this better than members of their own profession. 
Commercial weight-loss organisations were also thought to be more effective, and therefore more appropriate. Furthermore, staff thought that overweight and obesity constituted a societal problem, where community planning, school policies and information to parents were what mattered most. Some staff, however, acknowledged that health care had a role to play in prevention, but this only applied to child health care centres and school health services.\n\"I don't think you should take it for granted that we're the ones to intervene. We're trained in medical care. Overweight and obesity are more of a societal problem.\" (GP, male, 49 years old)\n[SUBTITLE] Continuity and long-term support [SUBSECTION] This conception has to do with staff's belief that obesity must be managed over an extended period. Staff emphasised continuity in care because lifestyle change was conceived of as taking time. Often, they saw that people were non-compliant to advice year after year but then suddenly things started to happen. The importance of encountering the same personnel was stressed by a number of staff. For the patients, this would mean not having to repeat their story again and again, and would also mean not being given contradictory information.\n\"It has to be a long-term relationship. Often it's very short encounters, but I've noticed that I can get further with the patients I meet repeatedly.\" (DN, male, 55 years old)\nThis conception has to do with staff's belief that obesity must be managed over an extended period. Staff emphasised continuity in care because lifestyle change was conceived of as taking time. Often, they saw that people were non-compliant to advice year after year but then suddenly things started to happen. The importance of encountering the same personnel was stressed by a number of staff. For the patients, this would mean not having to repeat their story again and again, and would also mean not being given contradictory information.\n\"It has to be a long-term relationship. Often it's very short encounters, but I've noticed that I can get further with the patients I meet repeatedly.\" (DN, male, 55 years old)\n[SUBTITLE] Co-operate for knowledge-based care [SUBSECTION] Staff considered it important to adopt a team approach, which included different competencies. Co-operation had to be both within and outside primary health care. Staff regarded psychologists, welfare officers and psychiatrists as especially important collaboration partners, as some patients had eating problems or other psychological problems. Patients were often referred to centres specialising in obesity management, but staff saw little real co-operation taking place. Otherwise, co-ordinated efforts in the local community, where primary care would be just one of the stakeholders, were seen as an important next step in improving care.\n\"Well, what we have, and I think is very positive, is access to a dietitian, helpful doctors, a social welfare officer and a nurse competent in cognitive therapy. I think we can solve this in-house.\" (DN, female, 55 years old)\nStaff considered it important to adopt a team approach, which included different competencies. Co-operation had to be both within and outside primary health care. Staff regarded psychologists, welfare officers and psychiatrists as especially important collaboration partners, as some patients had eating problems or other psychological problems. 
Patients were often referred to centres specialising in obesity management, but staff saw little real co-operation taking place. Otherwise, co-ordinated efforts in the local community, where primary care would be just one of the stakeholders, were seen as an important next step in improving care.\n\"Well, what we have, and I think is very positive, is access to a dietitian, helpful doctors, a social welfare officer and a nurse competent in cognitive therapy. I think we can solve this in-house.\" (DN, female, 55 years old)\n[SUBTITLE] Overweight needs to be prioritised [SUBSECTION] Staff regarded primary care as having an important role to play in both the prevention and treatment of overweight and obesity. However, there was a major barrier to working effectively in this area, namely lack of time. Sometimes staff avoided bringing up weight issues because they considered that they would not have time to deal properly with the problem later on. DNs often devoted more time per visit than GPs, but experienced that there were competing assignments hindering them from working more actively with these patients. Staff considered that GPs had much less time because more easily treated conditions were often prioritised. They regarded the system of reimbursing health care centres as in need of improvement.\n\"We have no resources and that's why we can't do anything. There's no time. We should actually devote more time to prevention, but it's all about diseases.\" (DN, female, 58 years old)\nStaff regarded primary care as having an important role to play in both the prevention and treatment of overweight and obesity. However, there was a major barrier to working effectively in this area, namely lack of time. Sometimes staff avoided bringing up weight issues because they considered that they would not have time to deal properly with the problem later on. DNs often devoted more time per visit than GPs, but experienced that there were competing assignments hindering them from working more actively with these patients. Staff considered that GPs had much less time because more easily treated conditions were often prioritised. They regarded the system of reimbursing health care centres as in need of improvement.\n\"We have no resources and that's why we can't do anything. There's no time. We should actually devote more time to prevention, but it's all about diseases.\" (DN, female, 58 years old)\n[SUBTITLE] Lack of distinct guidelines and evidence [SUBSECTION] The lack of guidelines and evidence within the area of obesity was more strongly experienced by the men than by the women. The view of male DNs, for instance, was that there are few treatment options to offer the patients. Also, the existing methods, except for surgical intervention, were considered to be ineffective and not evidence-based. Staff were especially eager for guidelines regarding dietary advice, which at present tended to be vague. Because of the many contradictions, different opinions and extensive debate about what were the most successful diet regimes, staff regarded it as difficult to offer balanced advice to patients. The diet pills and medical treatments available for losing weight were also considered ineffective. Staff might be prepared to recommend them but were not optimistic about the outcome.\n\"I feel I don't have much to offer medically. It's very general advice at best, and a recommendation to try Weight Watchers or something like that. 
For those who've tried everything, it might be a referral for surgical treatment.\" (GP, male, 38 years old)\nThe lack of guidelines and evidence within the area of obesity was more strongly experienced by the men than by the women. The view of male DNs, for instance, was that there are few treatment options to offer the patients. Also, the existing methods, except for surgical intervention, were considered to be ineffective and not evidence-based. Staff were especially eager for guidelines regarding dietary advice, which at present tended to be vague. Because of the many contradictions, different opinions and extensive debate about what were the most successful diet regimes, staff regarded it as difficult to offer balanced advice to patients. The diet pills and medical treatments available for losing weight were also considered ineffective. Staff might be prepared to recommend them but were not optimistic about the outcome.\n\"I feel I don't have much to offer medically. It's very general advice at best, and a recommendation to try Weight Watchers or something like that. For those who've tried everything, it might be a referral for surgical treatment.\" (GP, male, 38 years old)\n[SUBTITLE] Overweight not our responsibility [SUBSECTION] This conception has to do with staff's reporting that the treatment of overweight and obesity should not self-evidently be performed within primary health care. This conception was more pronounced among GPs than DNs. However, female DNs were more inclined than male DNs to deny that primary health care plays an important role. Staff considered that their main task was to treat diseases, and overweight and obesity were seen more as conditions that might involve a risk of diabetes or some other disease. If a concomitant disorder was present, however, it was important to intervene. GPs, though, thought that other groups within primary health care, such as DNs, physiotherapists and dietitians, could handle this better than members of their own profession. Commercial weight-loss organisations were also thought to be more effective, and therefore more appropriate. Furthermore, staff thought that overweight and obesity constituted a societal problem, where community planning, school policies and information to parents were what mattered most. Some staff, however, acknowledged that health care had a role to play in prevention, but this only applied to child health care centres and school health services.\n\"I don't think you should take it for granted that we're the ones to intervene. We're trained in medical care. Overweight and obesity are more of a societal problem.\" (GP, male, 49 years old)\nThis conception has to do with staff's reporting that the treatment of overweight and obesity should not self-evidently be performed within primary health care. This conception was more pronounced among GPs than DNs. However, female DNs were more inclined than male DNs to deny that primary health care plays an important role. Staff considered that their main task was to treat diseases, and overweight and obesity were seen more as conditions that might involve a risk of diabetes or some other disease. If a concomitant disorder was present, however, it was important to intervene. GPs, though, thought that other groups within primary health care, such as DNs, physiotherapists and dietitians, could handle this better than members of their own profession. Commercial weight-loss organisations were also thought to be more effective, and therefore more appropriate. 
Furthermore, staff thought that overweight and obesity constituted a societal problem, where community planning, school policies and information to parents were what mattered most. Some staff, however, acknowledged that health care had a role to play in prevention, but this only applied to child health care centres and school health services.\n\"I don't think you should take it for granted that we're the ones to intervene. We're trained in medical care. Overweight and obesity are more of a societal problem.\" (GP, male, 49 years old)\n[SUBTITLE] Continuity and long-term support [SUBSECTION] This conception has to do with staff's belief that obesity must be managed over an extended period. Staff emphasised continuity in care because lifestyle change was conceived of as taking time. Often, they saw that people were non-compliant to advice year after year but then suddenly things started to happen. The importance of encountering the same personnel was stressed by a number of staff. For the patients, this would mean not having to repeat their story again and again, and would also mean not being given contradictory information.\n\"It has to be a long-term relationship. Often it's very short encounters, but I've noticed that I can get further with the patients I meet repeatedly.\" (DN, male, 55 years old)\nThis conception has to do with staff's belief that obesity must be managed over an extended period. Staff emphasised continuity in care because lifestyle change was conceived of as taking time. Often, they saw that people were non-compliant to advice year after year but then suddenly things started to happen. The importance of encountering the same personnel was stressed by a number of staff. For the patients, this would mean not having to repeat their story again and again, and would also mean not being given contradictory information.\n\"It has to be a long-term relationship. Often it's very short encounters, but I've noticed that I can get further with the patients I meet repeatedly.\" (DN, male, 55 years old)\n[SUBTITLE] Co-operate for knowledge-based care [SUBSECTION] Staff considered it important to adopt a team approach, which included different competencies. Co-operation had to be both within and outside primary health care. Staff regarded psychologists, welfare officers and psychiatrists as especially important collaboration partners, as some patients had eating problems or other psychological problems. Patients were often referred to centres specialising in obesity management, but staff saw little real co-operation taking place. Otherwise, co-ordinated efforts in the local community, where primary care would be just one of the stakeholders, were seen as an important next step in improving care.\n\"Well, what we have, and I think is very positive, is access to a dietitian, helpful doctors, a social welfare officer and a nurse competent in cognitive therapy. I think we can solve this in-house.\" (DN, female, 55 years old)\nStaff considered it important to adopt a team approach, which included different competencies. Co-operation had to be both within and outside primary health care. Staff regarded psychologists, welfare officers and psychiatrists as especially important collaboration partners, as some patients had eating problems or other psychological problems. Patients were often referred to centres specialising in obesity management, but staff saw little real co-operation taking place. 
Promoting lifestyle change

Small steps and realistic goals
Staff regarded it as necessary to help patients find ways to improve lifestyle. The focus was mainly on increasing physical activity, especially lighter activities on a day-to-day basis, like cycling or walking to work. To recommend more intense workouts, like running, was seen as counter-productive, because patients were unlikely to adhere to such a regime. Some staff put exercise on a prescription basis (whereby, for example, the patient had the right to go to a gym during working hours), because they thought it gave additional force to their advice. However, some experienced that this worked best with patients who were already motivated but had not yet started, or who had done some exercise before. Staff also considered it important to encourage patients to reduce their energy intake. However, staff saw it as important to advise the patient to take small steps and start with one thing at a time. Often, patients were eager to go ahead and make changes in all areas at once. Unrealistic expectations about weight loss and progress had prompted staff to help patients find more realistic and achievable goals.
"I try to get them to be active on a daily basis. Walking a short distance to work or using the stairs. It's important that they begin changing their behaviour slowly." (DN, female, 56 years old)

Raise awareness
Staff considered that patients were not always aware of their weight status or unhealthy lifestyle. Obese patients often sought care for neck, back and knee pain, heart and lung problems, or for general tiredness. Therefore it was important to give factual information about the association between obesity and ill-health. It was also regarded as helpful to use self-report questionnaires on health and lifestyle, or food diaries, as a basis for talking about the patient's condition. Staff considered that many did not reflect at all on their dietary patterns, and that the recording of food intake made patients more attentive. Staff also relied on medical health data, which constituted very straightforward information. One strategy to raise weight awareness was to measure body size or weight and then to move on to presenting facts about the association with ill-health.
"I think these general health questionnaires about smoking, physical activity, diet and alcohol are a way of approaching the weight issue. They might not even have thought about the fact they don't eat vegetables every day." (GP, female, 45 years old)

Individually based solutions
Staff regarded it as important to start from the patient's perspective and keep as open a mind as possible. They also considered it important to ask a lot of questions and to take careful note of the thoughts and strategies that the patient had applied in the past or was now applying. It was important to help patients find their own solutions, the staff's task being to guide the patient towards the right level of ambition. Different solutions were required for different patients. Contrary to what staff sometimes expected, patients with fewer resources might make the best progress. Staff experienced that solutions could always be found, and that these solutions could be managed by the patients. However, in more difficult cases, staff on occasion told patients that it might be time to stop trying to lose weight and focus on something else.
"It's obvious that it has to be adapted to the individual. It's very personal what work, how old they are, etc. It may not even be a question of how much food they eat, but what and when they eat." (GP, female, 34 years old)
Facilitate motivation
It was important for staff to emphasise the improved health that could be achieved by changing eating habits and physical activity regardless of any weight loss. Showing patients' improvements on health indicators was also regarded as motivational. Female GPs and DNs highlighted this to a greater extent than male GPs and DNs. Furthermore, staff considered that group activities were very useful. Not only the exchange of experience, support, and company, but also the pressure that could be exerted within the group enabled people to take responsibility. However, some staff considered that there were cases where more drastic methods were needed, like pushing the patient quite hard, making strong demands or using scare tactics. Staff told patients that they were going to have severe complications, develop certain diseases, or could even die from obesity.
"I sometimes say: 'Heart disease, do you want that? Or diabetes?' I try to scare them a little bit and if I find out that their mother died of a heart attack I can use that." (DN, female, 50 years old)
Need for competency

Respectful encounters
Staff were aware of the stigmatisation of obese people, and also of the fact that it was hurtful. They therefore tried to be sensitive when raising issues of lifestyle and weight, and were careful not to blame the patients themselves. When medical advice had been sought concerning conditions not related to weight, but staff still raised the weight issue, some patients were offended and angry, even furious. For this reason, staff often waited until the second or third encounter before raising the matter. They sensed they had to establish a good relation in order not to seem provocative and possibly lose trust. Moreover, staff tried to show empathy and an understanding of the difficult situation of being obese. Often, staff perceived that the patient needed to be consoled, emotionally supported and acknowledged. Feelings of shame and guilt were often apparent. The hopelessness that some patients described was also difficult to handle, but staff tried to provide encouragement and hope. Female GPs made greater reference to respectful encounters than the other groups.
"It's very much a question of comforting words or, so to speak, off-loading the blame. You need the right psychological feeling for meeting these individuals and their giant dilemma." (GP, female, 44 years old)
Staff with active interest
This conception has to do with staff's interest in weight management. Staff considered that work with overweight and obese patients was best performed by those with an active interest in the area. An enthusiastic person with a strong driving force who really had a commitment was perceived as enhancing the patient encounter. GPs considered that nurses were often very interested in helping obese patients and actively trying to improve their work in this respect. DNs regarded the work as part of their profession and had an independent interest. DNs' conceptions of GPs' involvement were divided. Some DNs regarded GPs as initiators of weight management, others regarded them as not particularly interested, and sometimes evasive. Some GPs expressed a willingness to work with overweight and obesity but accepted that their colleagues were often not eager to get involved.
"The nurses are probably a bit more oriented towards this kind of work, and I think they can do a lot. They have more time to go through things, and they're highly competent. Anyway, the ones we have have taken an active interest." (GP, male, 52 years old)
Knowledge about diet and counselling
This conception has to do with how staff regarded training in different counselling techniques and basic knowledge about diet as essential to their encounters with obese patients. Staff considered that they needed pedagogic skills to help patients take decisions about lifestyle. They also experienced that special education in motivational interviewing and cognitive behavioural techniques would make for more fruitful encounters. However, many of the staff wanted better knowledge about the basic principles of nutrition and what constitutes a healthy diet. Staff tried to follow the constantly ongoing debate and catch up with new findings about successful dietary interventions, but expressed a need for short training courses.
"Our basic and further education has to become much better. First, there's all this discussion about what diet to recommend, and second, about how to get people to do what you tell them." (DN, male, 58 years old)
Adherence to new habits

Overcome deep-seated habits
Staff regarded it as very difficult for people to change their lifestyle. They indicated that very few succeed in losing weight, and even those only lose a little. Lack of success among their patients had also made them less optimistic about their ability to help others in the future. Patients were perceived as having tried a huge set of strategies to lose weight, but without result. Staff attributed this largely to patients' difficulty in changing their behaviour. Lack of knowledge was not an issue for most patients, but the question was how they were going to put their knowledge into practice. Often, patients managed their new daily habits for some months but then relapsed. The view was that it is too tough to stick to a new regime and that the lack of an immediate positive outcome made people give in.
"It's not so easy to change old routines. I think most people know how to eat, but it's one thing to know how and another thing to actually do it." (DN, female, 55 years old)
Psychological and medical barriers
This conception concerns how staff regarded patients as facing psychological, medical and physical barriers to adhering to new habits. Males reflected more on this than females. The view was that pain in knees and hips limited opportunities for physical activity. Diabetes was often part of the picture, and this was regarded as requiring the patient to be extra compliant to advice about losing weight. Some staff also indicated that their patients had to rely on many drugs, which made it difficult to adopt new lifestyle habits. Otherwise, patients who were perceived as having psychological or psychiatric problems (depression, anxiety or addiction) were regarded as especially non-compliant. Often, they used food as a way of handling their distress, and changing eating behaviour was especially difficult. Medications against depression and anxiety were also regarded as causing weight gain, which discouraged patients from improving their habits.
"Often orthopaedic problems hinder people and one thing leads to the other. Orthopaedic problems increase and then you can't move around. Your weight goes up and of course it gets harder to exercise." (GP, male, 52 years old)

Socio-cultural barriers
This conception has to do with staff's considering that their patients were facing social and cultural barriers in adhering to new habits. Working hours, family life and financial situation were perceived as important factors affecting adherence, whilst patients from other cultures had the additional problem of finding appropriate food options corresponding to the traditions in their home countries. Staff sometimes advised moderate exercise, like swimming, to very heavy patients, but this could involve the problem of having to undress in front of strangers. DNs, especially the male DNs, indicated that socio-cultural barriers may have an important influence on behaviour.
"There are quite a lot here that come from Asia and the Mediterranean area and they often have dinner very late and have particular eating habits. It's very difficult to make them change things." (DN, male, 35 years old)
Understanding patient attitudes

Motivation to change
Staff experienced that it was the patients' own responsibility to find the motivation. Patients had to come up with their own ideas about what to do. Willpower was very important. Patients had to commit themselves, be prepared to put in a lot of work, and make weight their first priority. However, some patients were described as a bit lazy, lacking in energy and indifferent to their situation. Some were regarded as having the motivation to lose weight but still reluctant to make the necessary changes. Those who sought professional care to lose weight were sometimes regarded as just wanting to wear smaller clothes. Even though some slight physical symptoms were often present, their main motivation was a better appearance. Staff considered that patients often felt ashamed of their appearance, and that the low degree of acceptance of larger bodies in society acted as a motivator for losing weight. However, some staff thought the opposite, namely that there is little motivation to lose weight because so many others in society are just as obese. Patients who had experienced, or had close experience of, diabetes, a heart attack or other severe problem were often highly motivated to do something. Mild back pain or a hurting knee could also be motivating factors, but some staff considered that many patients just adapted to their excess weight and did not seek help until they felt very uncomfortable.
"Patients want to lose weight but they don't want to change. Start walking instead of taking the bus, and eat less, that's all there is to it. Or the motivation might be there but they don't really want to do it, only if they think it's important." (GP, male, 49 years old)
Evasive behaviour
Staff indicated that patients with obesity often made excuses for not coming to appointments, following advice or taking exercise. Patients tended to blame their failures on such things as family problems, lack of time, lack of money, and sometimes pain and medication. These patients were also regarded as having a tendency to disappear without notice. They were compliant for some time but then made themselves unavailable for follow-up. DNs were often the ones taking care of the follow-up and this group expressed greater concern about patients' evasive behaviour than GPs. Furthermore, what the staff perceived as very striking about this group of patients was that they claimed not to eat much and exercise a lot, and yet did not lose weight. Here it was very difficult for staff to find ways of telling the patient that this was in conflict with scientific evidence.
"They often say 'I don't understand it, I don't eat anything', but actually we know they do." (DN, female, 45 years old)

Trusting in care
Staff considered that many patients sought medical care just to get diet pills. They had tried different methods, and now their hope lay in pills or other medical treatment. The patients wondered if they suffered from some metabolic disturbance and wanted GPs to do tests. However, staff stated that the tests were seldom positive. Some patients also turned to staff in the hope that they would come up with a solution that worked. Staff regarded patients as off-loading their problem, and as expecting them to see to it that the excess weight disappeared as if by magic. Staff regarded patients as unwilling to assume the responsibility for losing weight, transferring it instead to the GP or DN.
"I think a lot of them believe that someone else is going to do the job for them.... They put the responsibility on me, I'm the one who's going to fix it so they lose weight. I try to talk them out of it, but some don't listen." (DN, female, 59 years old)

Lack of self-confidence
Female staff (and one male GP) experienced that patients lacked self-confidence in their ability to lose weight and adopt a healthy pattern of behaviour. Patients were regarded as being highly motivated but also expressed hopelessness about the fact that, even though they had tried a huge number of strategies, they had not succeeded in losing weight. Staff considered that patients' disappointment at not managing their overweight made them lose confidence, and feelings of guilt, frustration, despair and low self-esteem were often apparent.
"There comes a time when you get so disappointed with yourself, because you just can't lose weight. You think you've done everything, and you still can't like yourself. You lose confidence." (GP, female, 35 years old)
[SUBTITLE] Lack of distinct guidelines and evidence [SUBSECTION] The lack of guidelines and evidence within the area of obesity was more strongly experienced by the men than by the women. The view of male DNs, for instance, was that there are few treatment options to offer the patients. Also, the existing methods, except for surgical intervention, were considered to be ineffective and not evidence-based. Staff were especially eager for guidelines regarding dietary advice, which at present tended to be vague. Because of the many contradictions, different opinions and extensive debate about what were the most successful diet regimes, staff regarded it as difficult to offer balanced advice to patients. The diet pills and medical treatments available for losing weight were also considered ineffective. Staff might be prepared to recommend them but were not optimistic about the outcome.\n\"I feel I don't have much to offer medically. It's very general advice at best, and a recommendation to try Weight Watchers or something like that. For those who've tried everything, it might be a referral for surgical treatment.\" (GP, male, 38 years old)\n[SUBTITLE] Overweight not our responsibility [SUBSECTION] This conception has to do with staff's reporting that the treatment of overweight and obesity should not self-evidently be performed within primary health care. This conception was more pronounced among GPs than DNs. However, female DNs were more inclined than male DNs to deny that primary health care plays an important role. Staff considered that their main task was to treat diseases, and overweight and obesity were seen more as conditions that might involve a risk of diabetes or some other disease. If a concomitant disorder was present, however, it was important to intervene. GPs, though, thought that other groups within primary health care, such as DNs, physiotherapists and dietitians, could handle this better than members of their own profession. Commercial weight-loss organisations were also thought to be more effective, and therefore more appropriate. Furthermore, staff thought that overweight and obesity constituted a societal problem, where community planning, school policies and information to parents were what mattered most. Some staff, however, acknowledged that health care had a role to play in prevention, but this only applied to child health care centres and school health services.\n\"I don't think you should take it for granted that we're the ones to intervene. We're trained in medical care. Overweight and obesity are more of a societal problem.\" (GP, male, 49 years old)\n[SUBTITLE] Continuity and long-term support [SUBSECTION] This conception has to do with staff's belief that obesity must be managed over an extended period. Staff emphasised continuity in care because lifestyle change was conceived of as taking time. Often, they saw that people were non-compliant to advice year after year but then suddenly things started to happen. The importance of encountering the same personnel was stressed by a number of staff. For the patients, this would mean not having to repeat their story again and again, and would also mean not being given contradictory information.\n\"It has to be a long-term relationship. Often it's very short encounters, but I've noticed that I can get further with the patients I meet repeatedly.\" (DN, male, 55 years old)\n[SUBTITLE] Co-operate for knowledge-based care [SUBSECTION] Staff considered it important to adopt a team approach, which included different competencies. Co-operation had to be both within and outside primary health care. Staff regarded psychologists, welfare officers and psychiatrists as especially important collaboration partners, as some patients had eating problems or other psychological problems. Patients were often referred to centres specialising in obesity management, but staff saw little real co-operation taking place. Otherwise, co-ordinated efforts in the local community, where primary care would be just one of the stakeholders, were seen as an important next step in improving care.\n\"Well, what we have, and I think is very positive, is access to a dietitian, helpful doctors, a social welfare officer and a nurse competent in cognitive therapy. I think we can solve this in-house.\" (DN, female, 55 years old)", "[SUBTITLE] Small steps and realistic goals [SUBSECTION] Staff regarded it as necessary to help patients find ways to improve lifestyle. The focus was mainly on increasing physical activity, especially lighter activities on a day-to-day basis, like cycling or walking to work. To recommend more intense workouts, like running, was seen as counter-productive, because patients were unlikely to adhere to such a regime. 
Some staff put exercise on a prescription basis (whereby, for example, the patient had the right to go to a gym during working hours), because they thought it gave additional force to their advice. However, some experienced that this worked best with patients who were already motivated but had not yet started, or who had done some exercise before. Staff also considered it important to encourage patients to reduce their energy intake. However, staff saw it as important to advise the patient to take small steps and start with one thing at a time. Often, patients were eager to go ahead and make changes in all areas at once. Unrealistic expectations about weight loss and progress had prompted staff to help patients find more realistic and achievable goals.\n\"I try to get them to be active on a daily basis. Walking a short distance to work or using the stairs. It's important that they begin changing their behaviour slowly.\" (DN, female, 56 years old)\n[SUBTITLE] Raise awareness [SUBSECTION] Staff considered that patients were not always aware of their weight status or unhealthy lifestyle. Obese patients often sought care for neck, back and knee pain, heart and lung problems, or for general tiredness. Therefore it was important to give factual information about the association between obesity and ill-health. It was also regarded as helpful to use self-report questionnaires on health and lifestyle, or food diaries, as a basis for talking about the patient's condition. Staff considered that many did not reflect at all on their dietary patterns, and that the recording of food intake made patients more attentive. Staff also relied on medical health data, which constituted very straightforward information. One strategy to raise weight awareness was to measure body size or weight and then to move on to presenting facts about the association with ill-health.\n\"I think these general health questionnaires about smoking, physical activity, diet and alcohol are a way of approaching the weight issue. They might not even have thought about the fact they don't eat vegetables every day.\" (GP, female, 45 years old)\n[SUBTITLE] Individually based solutions [SUBSECTION] Staff regarded it as important to start from the patient's perspective and keep as open a mind as possible. They also considered it important to ask a lot of questions and to take careful note of the thoughts and strategies that the patient had applied in the past or was now applying. It was important to help patients find their own solutions, the staff's task being to guide the patient towards the right level of ambition. Different solutions were required for different patients. Contrary to what staff sometimes expected, patients with fewer resources might make the best progress. Staff experienced that solutions could always be found, and that these solutions could be managed by the patients. However, in more difficult cases, staff on occasion told patients that it might be time to stop trying to lose weight and focus on something else.\n\"It's obvious that it has to be adapted to the individual. It's very personal what works, how old they are, etc. It may not even be a question of how much food they eat, but what and when they eat.\" (GP, female, 34 years old)\n[SUBTITLE] Facilitate motivation [SUBSECTION] It was important for staff to emphasise the improved health that could be achieved by changing eating habits and physical activity regardless of any weight loss. Showing patients' improvements on health indicators was also regarded as motivational. Female GPs and DNs highlighted this to a greater extent than male GPs and DNs. Furthermore, staff considered that group activities were very useful. Not only the exchange of experience, support, and company, but also the pressure that could be exerted within the group enabled people to take responsibility. However, some staff considered that there were cases where more drastic methods were needed, like pushing the patient quite hard, making strong demands or using scare tactics. Staff told patients that they were going to have severe complications, develop certain diseases, or could even die from obesity.\n\"I sometimes say: 'Heart disease, do you want that? Or diabetes?' I try to scare them a little bit and if I find out that their mother died of a heart attack I can use that.\" (DN, female, 50 years old)", "[SUBTITLE] Respectful encounters [SUBSECTION] Staff were aware of the stigmatisation of obese people, and also of the fact that it was hurtful. 
They therefore tried to be sensitive when raising issues of lifestyle and weight, and were careful not to blame the patients themselves. When medical advice had been sought concerning conditions not related to weight but staff still raised the weight issue, some patients were offended and angry, even furious. For this reason, staff often waited until the second or third encounter before raising the matter. They sensed they had to establish a good relation in order not to seem provocative and possibly lose trust. Moreover, staff tried to show empathy and an understanding of the difficult situation of being obese. Often, staff perceived that the patient needed to be consoled, emotionally supported and acknowledged. Feelings of shame and guilt were often apparent. The hopelessness that some patients described was also difficult to handle, but staff tried to provide encouragement and hope. Female GPs made greater reference to respectful encounters than the other groups.\n\"It's very much a question of comforting words or, so to speak, off-loading the blame. You need the right psychological feeling for meeting these individuals and their giant dilemma.\" (GP, female, 44 years old)\n[SUBTITLE] Staff with active interest [SUBSECTION] This conception has to do with staff's interest in weight management. Staff considered that work with overweight and obese patients was best performed by those with an active interest in the area. An enthusiastic person with a strong driving force who really had a commitment was perceived as enhancing the patient encounter. GPs considered that nurses were often very interested in helping obese patients and actively trying to improve their work in this respect. DNs regarded the work as part of their profession and had an independent interest. DNs' conceptions of GPs' involvement were divided. Some DNs regarded GPs as initiators of weight management, others regarded them as not particularly interested, and sometimes evasive. Some GPs expressed a willingness to work with overweight and obesity but accepted that their colleagues were often not eager to get involved.\n\"The nurses are probably a bit more oriented towards this kind of work, and I think they can do a lot. They have more time to go through things, and they're highly competent. Anyway, the ones we have have taken an active interest.\" (GP, male, 52 years old)\n[SUBTITLE] Knowledge about diet and counselling [SUBSECTION] This conception has to do with how staff regarded training in different counselling techniques and basic knowledge about diet as essential to their encounters with obese patients. Staff considered that they needed pedagogic skills to help patients take decisions about lifestyle. They also experienced that special education in motivational interviewing and cognitive behavioural techniques would make for more fruitful encounters. However, many of the staff wanted better knowledge about the basic principles of nutrition and what constitutes a healthy diet. Staff tried to follow the constantly ongoing debate and catch up with new findings about successful dietary interventions, but expressed a need for short training courses.\n\"Our basic and further education has to become much better. First, there's all this discussion about what diet to recommend, and second, about how to get people to do what you tell them.\" (DN, male, 58 years old)", "[SUBTITLE] Overcome deep-seated habits [SUBSECTION] Staff regarded it as very difficult for people to change their lifestyle. They indicated that very few succeed in losing weight, and even those only lose a little. Lack of success among their patients had also made them less optimistic about their ability to help others in the future. 
Patients were perceived as having tried a huge set of strategies to lose weight, but without result. Staff attributed this largely to patients' difficulty in changing their behaviour. Lack of knowledge was not an issue for most patients, but the question was how they were going to put their knowledge into practice. Often, patients managed their new daily habits for some months but then relapsed. The view was that it is too tough to stick to a new regime and that the lack of an immediate positive outcome made people give in.\n\"It's not so easy to change old routines. I think most people know how to eat, but it's one thing to know how and another thing to actually do it.\" (DN, female, 55 years old)\n[SUBTITLE] Psychological and medical barriers [SUBSECTION] This conception concerns how staff regarded patients as facing psychological, medical and physical barriers to adhering to new habits. Males reflected more on this than females. The view was that pain in knees and hips limited opportunities for physical activity. Diabetes was often part of the picture, and this was regarded as requiring the patient to be extra compliant to advice about losing weight. Some staff also indicated that their patients had to rely on many drugs, which made it difficult to adopt new lifestyle habits. Otherwise, patients who were perceived as having psychological or psychiatric problems (depression, anxiety or addiction) were regarded as especially non-compliant. Often, they used food as a way of handling their distress, and changing eating behaviour was especially difficult. Medications against depression and anxiety were also regarded as causing weight gain, which discouraged patients from improving their habits.\n\"Often orthopaedic problems hinder people and one thing leads to the other. Orthopaedic problems increase and then you can't move around. Your weight goes up and of course it gets harder to exercise.\" (GP, male, 52 years old)\n[SUBTITLE] Socio-cultural barriers [SUBSECTION] This conception has to do with staff's considering that their patients were facing social and cultural barriers in adhering to new habits. Working hours, family life and financial situation were perceived as important factors affecting adherence, whilst patients from other cultures had the additional problem of finding appropriate food options corresponding to the traditions in their home countries. Staff sometimes advised moderate exercise, like swimming, to very heavy patients, but this could involve the problem of having to undress in front of strangers. DNs, especially the male DNs, indicated that socio-cultural barriers may have an important influence on behaviour.\n\"There are quite a lot here that come from Asia and the Mediterranean area and they often have dinner very late and have particular eating habits. It's very difficult to make them change things.\" (DN, male, 35 years old)", "[SUBTITLE] Motivation to change [SUBSECTION] Staff experienced that it was the patients' own responsibility to find the motivation. Patients had to come up with their own ideas about what to do. Willpower was very important. Patients had to commit themselves, be prepared to put in a lot of work, and make weight their first priority. However, some patients were described as a bit lazy, lacking in energy and indifferent to their situation. Some were regarded as having the motivation to lose weight but still reluctant to make the necessary changes. Those who sought professional care to lose weight were sometimes regarded as just wanting to wear smaller clothes. Even though some slight physical symptoms were often present, their main motivation was a better appearance. Staff considered that patients often felt ashamed of their appearance, and that the low degree of acceptance of larger bodies in society acted as a motivator for losing weight. However, some staff thought the opposite, namely that there is little motivation to lose weight because so many others in society are just as obese. Patients who had experienced, or had close experience of, diabetes, a heart attack or other severe problem were often highly motivated to do something. 
Mild back pain or a hurting knee could also be motivating factors, but some staff considered that many patients just adapted to their excess weight and did not seek help until they felt very uncomfortable.\n\"Patients want to lose weight but they don't want to change. Start walking instead of taking the bus, and eat less, that's all there is to it. Or the motivation might be there but they don't really want to do it, only if they think it's important.\" (GP, male, 49 years old)\n[SUBTITLE] Evasive behaviour [SUBSECTION] Staff indicated that patients with obesity often made excuses for not coming to appointments, following advice or taking exercise. Patients tended to blame their failures on such things as family problems, lack of time, lack of money, and sometimes pain and medication. These patients were also regarded as having a tendency to disappear without notice. They were compliant for some time but then made themselves unavailable for follow-up. DNs were often the ones taking care of the follow-up and this group expressed greater concern about patients' evasive behaviour than GPs. Furthermore, what the staff perceived as very striking about this group of patients was that they claimed not to eat much and exercise a lot, and yet did not lose weight. Here it was very difficult for staff to find ways of telling the patient that this was in conflict with scientific evidence.\n\"They often say 'I don't understand it, I don't eat anything', but actually we know they do.\" (DN, female, 45 years old)\n[SUBTITLE] Trusting in care [SUBSECTION] Staff considered that many patients sought medical care just to get diet pills. They had tried different methods, and now their hope lay in pills or other medical treatment. The patients wondered if they suffered from some metabolic disturbance and wanted GPs to do tests. However, staff stated that the tests were seldom positive. Some patients also turned to staff in the hope that they would come up with a solution that worked. Staff regarded patients as off-loading their problem, and as expecting them to see to it that the excess weight disappeared as if by magic. Staff regarded patients as unwilling to assume the responsibility for losing weight, transferring it instead to the GP or DN.\n\"I think a lot of them believe that someone else is going to do the job for them.... They put the responsibility on me, I'm the one who's going to fix it so they lose weight. I try to talk them out of it, but some don't listen.\" (DN, female, 59 years old)\n[SUBTITLE] Lack of self-confidence [SUBSECTION] Female staff (and one male GP) experienced that patients lacked self-confidence in their ability to lose weight and adopt a healthy pattern of behaviour. Patients were regarded as being highly motivated but also expressed hopelessness about the fact that, even though they had tried a huge number of strategies, they had not succeeded in losing weight. Staff considered that patients' disappointment at not managing their overweight made them lose confidence, and feelings of guilt, frustration, despair and low self-esteem were often apparent.\n\"There comes a time when you get so disappointed with yourself, because you just can't lose weight. You think you've done everything, and you still can't like yourself. You lose confidence.\" (GP, female, 35 years old)", "Staff indicated that patients with obesity often made excuses for not coming to appointments, following advice or taking exercise. Patients tended to blame their failures on such things as family problems, lack of time, lack of money, and sometimes pain and medication. These patients were also regarded as having a tendency to disappear without notice. They were compliant for some time but then made themselves unavailable for follow-up. DNs were often the ones taking care of the follow-up and this group expressed greater concern about patients' evasive behaviour than GPs. Furthermore, what the staff perceived as very striking about this group of patients was that they claimed not to eat much and exercise a lot, and yet did not lose weight. 
Here it was very difficult for staff to find ways of telling the patient that this was in conflict with scientific evidence.\n\"They often say 'I don't understand it, I don't eat anything', but actually we know they do.\" (DN, female, 45 years old)", "Staff considered that many patients sought medical care just to get diet pills. They had tried different methods, and now their hope lay in pills or other medical treatment. The patients wondered if they suffered from some metabolic disturbance and wanted GPs to do tests. However, staff stated that the tests were seldom positive. Some patients also turned to staff in the hope that they would come up with a solution that worked. Staff regarded patients as off-loading their problem, and as expecting them to see to it that the excess weight disappeared as if by magic. Staff regarded patients as unwilling to assume the responsibility for losing weight, transferring it instead to the GP or DN.\n\"I think a lot of them believe that someone else is going to do the job for them.... They put the responsibility on me, I'm the one who's going to fix it so they lose weight. I try to talk them out of it, but some don't listen.\" (DN, female, 59 years old)", "Female staff (and one male GP) experienced that patients lacked self-confidence in their ability to lose weight and adopt a healthy pattern of behaviour. Patients were regarded as being highly motivated but also expressed hopelessness about the fact that, even though they had tried a huge number of strategies, they had not succeeded in losing weight. Staff considered that patients' disappointment at not managing their overweight made them lose confidence, and feelings of guilt, frustration, despair and low self-esteem were often apparent.\n\"There comes a time when you get so disappointed with yourself, because you just can't lose weight. You think you've done everything, and you still can't like yourself. You lose confidence.\" (GP, female, 35 years old)", "We found a wide variation in GPs' and DNs' conceptions of their encounters with obese patients in primary health care. Staff described the encounters from both an organisational/personnel perspective and a patient perspective. The need for primary care to have an adequate organisation for obesity treatment and competent staff for promoting lifestyle change was stressed. However, patients' adherence and attitudes to behaviour change were also looked upon as important. In addition to the findings on the collective level we found certain differences in the pattern of conceptions according to profession and gender. However, in view of the small number of male participants, especially in the DN group, these findings have to be interpreted cautiously.\nThe findings of the present study are to some extent in line with those of previous quantitative and qualitative investigations of attitudes and beliefs regarding obesity treatment in primary care. Examples of such beliefs are that primary care is not an entirely appropriate setting for obesity treatment (especially if no concomitant disease is present), that time is lacking for patient visits, that reimbursement systems are inappropriate, that distinct and evidence-based guidelines need to be improved, and that patient motivation to change is low [7,8,12]. Male staff emphasised to a higher degree than female staff that there is a lack of guidelines and evidence. 
This may reflect that men to a higher degree explain lack of success in obesity treatment in terms of external (organisation) rather than internal (personal competence) causes. The conception that primary health care is not necessarily the best arena for the prevention and treatment of obesity was more evident among GPs than among DNs. This is in line with the finding of Mercer and Tessier [12] to the effect that GPs were more negative as to their role in obesity treatment than were nurses.\nStaff in this study (mainly female GPs) emphasised the need for respectful treatment and individual solutions, and showed an understanding of the difficulty of changing lifestyle. This replicates what has been found in previous qualitative studies. Brown and Thompson [14] and Epstein and Ogden [8] reported that staff perceived the patient-provider relationship to be central to the improvement of obesity treatment. Patients who are informed and involved in decision-making have been found to be more adherent [20] and staff engaged in patient-centred care and make decisions together with their patients are in a better position to offer more individualised behavioural recommendations to their patients, resulting in better adherence [21]. Patients themselves have also asked for a more personalised approach to weight management, and for specific advice rather than broad statements on how to lose weight [22].\nHowever, we also found that some staff experienced that, to motivate patients, they had to threaten them with a possibly fatal outcome, or at least inform them about the negative consequences of obesity. There is evidence in the literature that some patients view even mild warnings as scare tactics, with a negative impact on adherence [23]; others, however, regard warnings as encouraging and motivating, even essential to change [22]. It is important that GPs and DNs assess patient motivation and discuss what facilitates motivation individually. Previous studies have shown that patients report greater motivation and are more optimistic about weight loss than their GPs, but those who see obese patients more often are better at predicting patient motivation [24].\nThe findings of the present study regarding the staff's perception that some patients exhibit evasive behaviour, are untrustworthy when it comes to revealing their lifestyles and off-load their problems on to staff, are in line with previous research [8,12]. These conceptions were more strongly expressed by DNs than GPs in our study. One reason for this difference could be that the sort of behaviour in question may not appear until after a couple of sessions, whereby DNs are more likely to encounter it in that they often spend more time with patients than do GPs. The problem with these attitudes on the part of staff is that they may manifest themselves in encounters with patients, and that patients might thereby sense that staff do not trust them [25]. Previous studies have shown that patients' higher body mass index is associated with less respect from their doctors [26] and higher reporting of perceived discrimination in a health care setting [27]. 
Respect develops over time, therefore it would seem necessary that goals in obesity treatment should include continuity of care and long-term support, as indeed was emphasised by staff in our study.\nThe use of scare tactics and perceptions of patients as non-adherent might be due to staff's not being able fully to use or appreciate the strategies of motivational interviewing that are central to increasing patients' intrinsic willingness to change. Female GPs and DNs, and one male GP, were the only ones that considered that lack of self-confidence in changing lifestyle could be the reason for patients' not succeeding in losing weight. Patients' belief in their ability to make the necessary changes has been found to be important for behaviour change [28], and an understanding of this on the part of the health care provider seems crucial. This finding may suggest that male GPs and DNs are too quick to jump to a conclusion about what underlies some of their patients' non-adherence.\nStaff stressed the need for more knowledge and skills in counselling, such as motivational interviewing, and also in basic nutrition and appropriate dietary interventions. The question is whether staff are being trained in the most updated methods of obesity treatment. Staff might, for example, also need skills directed towards helping patients to cope with their situation, because of the limited chance of losing weight going together with the stigma related to obesity. Coping strategies to deal with stigma have important implications for emotional functioning, and health care staff could assist in finding methods to help improve patients' daily functioning.\nThere seems to be a need for evidence-based guidelines which are easy to use and regarded as effective by staff. The use of guidelines is intended to improve the quality of treatment, and it has been found that nurses with no specific preparation or guidelines regarding obesity treatment were the ones who experienced most awkward in meeting obese patients [14]. However, a study of GPs found that awareness of the guidelines was associated with a more negative attitude towards obesity [6]. These contradictory findings suggest that, even when guidelines are available, they might not be user-friendly, updated and/or integrated appropriately into primary care.\nStaff in the present study emphasised the importance of having an active interest in obesity treatment. To further enhance work at primary care centres, it might be necessary for every unit to appoint specific staff to take responsibility for structuring activities and engaging others regarding obesity treatment, organised perhaps in the same way as for diabetic patients. Findings from the Counterweight Programme [29] show, for example, that staff perceived that involvement and a sense of ownership, alongside a clear understanding of programme goals, are important factors in the effective implementation of weight management in primary health care.\n[SUBTITLE] Limitations and Strengths [SUBSECTION] There are certain limitations to consider when it comes to interpreting the findings of the present study. Firstly, there were quite a number of health care centres which refused participation, for which reason it is possible that the participants came from centres which had taken an active interest in weight management. The health care centres included, however, were situated in both affluent and poor areas of Stockholm, and were both large and small. 
The demographic characteristics of non-participants and participants are unfortunately not known. Secondly, the number of male participants was low (one third). However, owing to the limited number of males eligible from the nurse population and to the fact that previous research has not been able to recruit male nurses, this study makes an important contribution to the field. Thirdly, some of the participants were recommended by their medical head, and it is likely that those with a more positive attitude were chosen. However, the findings display very rich descriptions of conceptions; both negative and positive views were expressed, and participants were not afraid of raising difficult topics.\nAll participants were asked the same questions, and all interviews and analysis were performed by the first author (LMH), who is a nutritionist. However, the possibility cannot be excluded that the background of the interviewer prompted participants to focus more on issues related to healthy diet and physical activity. To attain greater rigour, the conceptions and descriptive categories derived were scrutinised at each step by the third author (GA), who is experienced in phenomenographic analysis. The findings were constantly discussed in depth during the analysis by two of us (LMH, GA) until agreement was reached.\nThere are certain limitations to consider when it comes to interpreting the findings of the present study. Firstly, there were quite a number of health care centres which refused participation, for which reason it is possible that the participants came from centres which had taken an active interest in weight management. The health care centres included, however, were situated in both affluent and poor areas of Stockholm, and were both large and small. The demographic characteristics of non-participants and participants are unfortunately not known. Secondly, the number of male participants was low (one third). However, owing to the limited number of males eligible from the nurse population and to the fact that previous research has not been able to recruit male nurses, this study makes an important contribution to the field. Thirdly, some of the participants were recommended by their medical head, and it is likely that those with a more positive attitude were chosen. However, the findings display very rich descriptions of conceptions; both negative and positive views were expressed, and participants were not afraid of raising difficult topics.\nAll participants were asked the same questions, and all interviews and analysis were performed by the first author (LMH), who is a nutritionist. However, the possibility cannot be excluded that the background of the interviewer prompted participants to focus more on issues related to healthy diet and physical activity. To attain greater rigour, the conceptions and descriptive categories derived were scrutinised at each step by the third author (GA), who is experienced in phenomenographic analysis. The findings were constantly discussed in depth during the analysis by two of us (LMH, GA) until agreement was reached.\n[SUBTITLE] Implications for Clinical Practice and Future Research [SUBSECTION] It is likely that low confidence of staff in treating obesity means that obesity in primary care has low priority, and their belief that patients are not motivated produces a moral dilemma for GPs and DNs. How should they prioritise their work? Is obesity the responsibility of the patient or of the health care system? 
Obesity treatment in primary health care, though, has the potential of being much more effective than it currently is, and the GPs and DNs in the present study touched upon many organisational aspects that need improvement. For example, obesity must be recognised as an important issue at all levels of the health care system. It also seems warranted to promote competence in motivational interviewing and evidence-based treatments, and also to increase awareness of staff's negative views on patient attitudes. Additional ways to enhance care might be to adopt a team-based approach within each unit, with resources to enable continuity in care, and also to promote co-operation with other stakeholders, such as social welfare authorities, commercial weight-loss organisations and specialist obesity units.\nFinally, the gender- and profession-based differences which were found are somewhat difficult to interpret and therefore deserve further investigation in larger quantitative studies. For instance, to our knowledge no one has compared both profession and gender aspects in the same study. Moreover, research should investigate the association between staff's negative perceptions of patients with obesity and their actual practices, which, in the long run, might have additional harmful effects on obese patients' health.\nIt is likely that low confidence of staff in treating obesity means that obesity in primary care has low priority, and their belief that patients are not motivated produces a moral dilemma for GPs and DNs. How should they prioritise their work? Is obesity the responsibility of the patient or of the health care system? Obesity treatment in primary health care, though, has the potential of being much more effective than it currently is, and the GPs and DNs in the present study touched upon many organisational aspects that need improvement. For example, obesity must be recognised as an important issue at all levels of the health care system. It also seems warranted to promote competence in motivational interviewing and evidence-based treatments, and also to increase awareness of staff's negative views on patient attitudes. Additional ways to enhance care might be to adopt a team-based approach within each unit, with resources to enable continuity in care, and also to promote co-operation with other stakeholders, such as social welfare authorities, commercial weight-loss organisations and specialist obesity units.\nFinally, the gender- and profession-based differences which were found are somewhat difficult to interpret and therefore deserve further investigation in larger quantitative studies. For instance, to our knowledge no one has compared both profession and gender aspects in the same study. Moreover, research should investigate the association between staff's negative perceptions of patients with obesity and their actual practices, which, in the long run, might have additional harmful effects on obese patients' health.", "There are certain limitations to consider when it comes to interpreting the findings of the present study. Firstly, there were quite a number of health care centres which refused participation, for which reason it is possible that the participants came from centres which had taken an active interest in weight management. The health care centres included, however, were situated in both affluent and poor areas of Stockholm, and were both large and small. The demographic characteristics of non-participants and participants are unfortunately not known. 
Secondly, the number of male participants was low (one third). However, owing to the limited number of males eligible from the nurse population and to the fact that previous research has not been able to recruit male nurses, this study makes an important contribution to the field. Thirdly, some of the participants were recommended by their medical head, and it is likely that those with a more positive attitude were chosen. However, the findings display very rich descriptions of conceptions; both negative and positive views were expressed, and participants were not afraid of raising difficult topics.\nAll participants were asked the same questions, and all interviews and analysis were performed by the first author (LMH), who is a nutritionist. However, the possibility cannot be excluded that the background of the interviewer prompted participants to focus more on issues related to healthy diet and physical activity. To attain greater rigour, the conceptions and descriptive categories derived were scrutinised at each step by the third author (GA), who is experienced in phenomenographic analysis. The findings were constantly discussed in depth during the analysis by two of us (LMH, GA) until agreement was reached.", "It is likely that low confidence of staff in treating obesity means that obesity in primary care has low priority, and their belief that patients are not motivated produces a moral dilemma for GPs and DNs. How should they prioritise their work? Is obesity the responsibility of the patient or of the health care system? Obesity treatment in primary health care, though, has the potential of being much more effective than it currently is, and the GPs and DNs in the present study touched upon many organisational aspects that need improvement. For example, obesity must be recognised as an important issue at all levels of the health care system. It also seems warranted to promote competence in motivational interviewing and evidence-based treatments, and also to increase awareness of staff's negative views on patient attitudes. Additional ways to enhance care might be to adopt a team-based approach within each unit, with resources to enable continuity in care, and also to promote co-operation with other stakeholders, such as social welfare authorities, commercial weight-loss organisations and specialist obesity units.\nFinally, the gender- and profession-based differences which were found are somewhat difficult to interpret and therefore deserve further investigation in larger quantitative studies. For instance, to our knowledge no one has compared both profession and gender aspects in the same study. Moreover, research should investigate the association between staff's negative perceptions of patients with obesity and their actual practices, which, in the long run, might have additional harmful effects on obese patients' health.", "The authors declare that they have no competing interests.", "LMH, FR and GA all participated in the design of the study. LMH conducted all the interviews, made the initial analysis of the interview transcripts and drafted the manuscript. LMH and GA had discussions about the analysis and reporting. FR and GA offered comments on the draft of the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2296/12/7/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Nonoperative treatment of slipped capital femoral epiphysis: a scientific study.
21333019
Treatment of slipped capital femoral epiphysis remains a cause of concern because its etiopathogenesis is not fully understood, nor is that of one of its major complications, chondrolysis. Conservative treatment remains controversial; it has been overlooked in the literature and subjected to intense criticism. The purpose of this study is to investigate the results of treating hips with slipped capital femoral epiphysis by plaster cast immobilization, and the relationship of this method to chondrolysis.
BACKGROUND
The research was based on the study of the following variables: symptomatology and degree of slipping. A hip spica cast or bilateral short/long leg casts in abduction and internal rotation with anti-rotational bars were used to immobilize the patient's hip for twelve weeks. Statistical analysis was performed with the Wilcoxon signed-rank test and Fisher's exact test at the 5% significance level.
METHODS
A satisfactory result was obtained in 70.5% of the acute group and in 94% of the chronic group (chronic + acute on chronic). Regarding the degree of slipping, a satisfactory result was obtained in 90.5% of hips with a mild slip, 76% with a moderate slip and 73% with a severe slip. Statistical analysis revealed a significant improvement in flexion (p = 0.0001), abduction (p = 0.0001), internal rotation (p = 0.0001) and external rotation (p = 0.02). Chondrolysis was present in 11.3% of the hips tested. One case of pseudoarthrosis with aseptic capital necrosis occurred. There was no significant association between age and chondrolysis (p = 1.00). Significant associations with chondrolysis were verified for gender (p = 0.031) and race (p = 0.037), with female and non-white patients more often affected. No causal association between plaster cast and chondrolysis was observed (p = 0.60). Neither symptomatology group nor slip degree was significantly associated with chondrolysis (p = 0.61 and p = 0.085, respectively).
RESULTS
After analyzing the nonoperative treatment of slipped capital femoral epiphysis and chondrolysis, we conclude that the method was functional, efficient, valid and reproducible; it can also be used as an alternative therapeutic procedure for this specific disease.
CONCLUSIONS
[ "Adolescent", "Casts, Surgical", "Cell Death", "Child", "Chondrocytes", "Cohort Studies", "Female", "Humans", "Immobilization", "Longitudinal Studies", "Male", "Manipulation, Orthopedic", "Radiography", "Retrospective Studies", "Slipped Capital Femoral Epiphyses", "Treatment Outcome" ]
3056824
null
null
Patients and Methods
The Committee of Ethics of the Jesus Children's Hospital in Brazil, Rio de Janeiro, have analyzed and approved the Research Project entitled, Nonoperative treatment of slipped capital femoral epiphysis, which was also evaluated by the Ethics Committee for Research of the Federal University of Rio de Janeiro (UFRJ), Brazil. The typology of the design employed in this sample was a study of a single cohort with observational, longitudinal and retrospective characteristics. In this research, chondrolysis was the dependent variable. A consecutive series of 106 hip joints in eighty-four patients affected, the great majority of them obese, displaying SCFE, were treated by means of plaster cast (Table 1 and Table 2). Patients' age varied at the time of diagnosis, ranging from 7.6 to 15.8 years. The duration of the follow-up ranged from 12 months, with the complete growth-plate closure, to 146 months, an average of 51 months. Thirteen patients were younger than eleven, 55 patients were between the ages of 11 and 13; and 16 patients were between the ages of 13 and 16. The average age was 12.5, males having the average age of 14.5 and females 10.5. Forty-four patients were males, and 40 females. Regarding race, 43 were white, and 41 were non-white. Unilateral involvement was present in 62 left hips and 44 right hips. Bilateral displacement (simultaneous involvement) of hips was present in 19 patients. Three patients were detected as displaying involvement of the contra lateral hip in different periods (sequential bilaterality), comprising, in total, 22 bilateral slip patients. Data on the Patients *M = male and F = female; # W = white and N-W = non-white; ¥ R = right and L = left Data on the Patients *M = male and F = female; #W = white and N-W = non-white; ¥ R = right and L = left. The methods used were evaluated based on symptomatology, and categorized as acute, chronic, or acute on chronic, according to Fahey and O'Brien [1]; also, slip degrees were documented by the standard method of thirds and classified as mild, moderate, or severe, according to Wilson, Jacobs, Schecter [2]; MacEwen and Ramsey who use the three grades of slip percentage [3]. The hips were systematically evaluated roentgenographically, as well as functionally, according to Heyman and Herndon's criteria [4], being also categorized as satisfactory and unsatisfactory by means of Aadalen, Weiner, Hoyt, Herdon and Herdon's criteria [5]. The radiographic methods used to analyze joint cartilage and detect chondrolysis were based on Ingram, Clarke, Clark and Marshall's criteria [6]. [SUBTITLE] Treatment Protocol [SUBSECTION] The main objective of the SCFE treatment is to avoid progressive displacement, with the use of the safest and the most effective technique to arrest growth plate. The routine methodology employed was based on the conservative principle with the use of spicas (earlier cases) and bilateral short/long leg casts in abduction, and a slight internal rotation (15°) with antirotational bars (later cases), aiming at immobilizing the patient's hip for 12 weeks. Skin traction was used in order to avoid slip progression pre-casting in those patients displaying muscle spasms. Traction was also used to limit the patient's motion in order to reduce pain, and to prevent irritability (pain when moved through passive or active range of motion) [7]. Skeletal traction was also applied. This type of traction was used in these patients in an attempt to improve the neck-femoral head relationship. 
Reduction of the degree of slip by skeletal traction was not found in this series. For this reason, this type of traction was abandoned in SCFE pre-treatment.\nAnaesthesia was administered as needed in the presence of pain and/or discomfort during plaster hip spica and short/long-leg cast application, in preparation for resting the hip.\nManipulation under anaesthesia was performed as an alternative procedure to improve the position of the epiphysis. In very few cases, Leadbetter's maneuver was gently applied prior to cast application, with the intention of improving the displaced neck/femoral head relationship; this was carefully carried out in selected hips [8].\nCast immobilization was carried out for 12 weeks, in accordance with the casting protocol. No weightbearing was permitted during the "casting period". A hip spica was used in earlier cases; as time went on and we gained more "experience" in the matter, we chose to change the method of plastering to short leg casts, as these are easier to apply and allow the patients to move hips and knees in flexion and extension, thus performing muscle exercises (dynamic method). This type of immobilization was based on King's work, and was also used to facilitate the patient's movement in a wheelchair [9].\nThe criteria adopted for discontinuing the plaster cast were based on the physeal stability of the head on the femoral neck in the affected hip. Stability, defined as the ability to walk without hip pain, was reached regardless of the progress and stage of growth-plate closure (12 weeks). Follow-up was performed every three months to monitor growth-plate closure (Figure 1).\nEarly slipping of the femoral epiphysis of the left hip. A cast was applied for twelve weeks. The image of the left hip shows growth arrest and no progression with conservative management. (A and B) Anteroposterior and frog-leg lateral radiographs of the pelvis made before treatment, showing the zone of rarefaction on the metaphyseal side of the growth plate in the left hip in chronic/mild SCFE, in a ten-and-a-half-year-old boy. (C and D) Anteroposterior and frog-leg lateral radiographs eight months after the spica cast had been discontinued. The rarefaction zone has diminished but persists in the left hip. (E and F) Final result. The growth plate has completely closed on both radiographs of the left hip.\nFor patients who developed chondrolysis, the treatment protocol for the hip was as follows: analgesics, skin traction, bed rest, gentle active range-of-motion exercises, a hydrotherapeutic/physiotherapeutic program, and the use of crutches (prolonged and non-weightbearing). The patients who presented chondrolysis were observed for 3 (three) to 12 (twelve) months; treatment for chondrolysis was stopped when irreversible clinical limitation of range of motion and deformation of both the femoral head and acetabulum were detected.
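The grading of slip severity described in the methods above (mild, moderate or severe by the standard method of thirds) can be expressed as a small helper function when tabulating hips. The numeric cutoffs used below — displacement of less than one third of the metaphyseal width for mild, one third to one half for moderate, and more than one half for severe — are the conventionally quoted thresholds and are an assumption of this sketch, since the exact boundaries of Wilson et al. and MacEwen and Ramsey are not restated in the text.

```python
def grade_slip(displacement_ratio: float) -> str:
    """Grade a slipped capital femoral epiphysis from its relative displacement.

    `displacement_ratio` is the epiphyseal displacement expressed as a fraction
    of the metaphyseal width (0.0 = no slip, 1.0 = complete slip). The cutoffs
    (<1/3 mild, 1/3-1/2 moderate, >1/2 severe) are assumed here, not taken
    verbatim from the source.
    """
    if not 0.0 <= displacement_ratio <= 1.0:
        raise ValueError("displacement_ratio must lie between 0 and 1")
    if displacement_ratio < 1 / 3:
        return "mild"
    if displacement_ratio <= 1 / 2:
        return "moderate"
    return "severe"


# Illustrative values only; the study itself reports 74 mild, 21 moderate
# and 9 severe slips graded this way.
for ratio in (0.15, 0.40, 0.60):
    print(f"{ratio:.2f} -> {grade_slip(ratio)}")
```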
null
null
null
null
[ "Background", "Treatment Protocol", "Results", "Statistical Analysis", "Discussion", "Conclusions", "Consent", "Competing interests", "Authors' Information" ]
[ "The contributions and reasons for the use of the non-operative management of Slipped Capital Femoral Epiphysis (SCFE) are as follows:\n- applicability: non-operative treatment of SCFE allows the use of this method at any hospital, even for surgeons who have very little hands-on experience with this specific disease;\n- elucidation: the work elucidates the employment of a principle and the method of treatment little exploited by world literature;\n- knowledge: this research offers the opportunity for orthopedic surgeons to employ a method based on biology, contributing to further knowledge of SCFE, thereby also promoting the possibility of a wide debate on the subject;\n- reproducible: the easy use of this method allows the treatment to be repeated in other innovating medicine centers by an execution of a general procedure to a widespread application, adding value to knowledge;\n- results: the work has proven its effectiveness based on statistical data obtained, thereby demonstrating its importance and feasibility;\n- therapeutic: the use of the plaster cast method revealed the possibility of obtaining favorable results for its use;\n- prognosis: early diagnosis, associated with the simplicity of the SCFE method, favors a good prognosis and low morbidity for the disease.\nThis work posits that the benefits and application of the therapeutic criteria based on biology comprise a valid method of treatment, considering disease prognostic uncertainty.", "The main objective of the SCFE treatment is to avoid progressive displacement, with the use of the safest and the most effective technique to arrest growth plate. The routine methodology employed was based on the conservative principle with the use of spicas (earlier cases) and bilateral short/long leg casts in abduction, and a slight internal rotation (15°) with antirotational bars (later cases), aiming at immobilizing the patient's hip for 12 weeks.\nSkin traction was used in order to avoid slip progression pre-casting in those patients displaying muscle spasms. Traction was also used to limit the patient's motion in order to reduce pain, and to prevent irritability (pain when moved through passive or active range of motion) [7]. Skeletal traction was also applied. This type of traction was used in these patients in an attempt to improve the neck-femoral head relationship. Reduction of the degree of slip by skeletal traction was not found in this series. For this reason, this type of traction was abandoned in SCFE pre-treatment.\nAnaesthesia was administered as needed in the presence of pain and/or discomfort during plaster hip spica and short/long-leg cast application, in preparation for resting the hip.\nManipulation under anesthesia was performed as an alternative procedure to improve epiphysis position. In very few cases, Leadbetter's maneuver was gently applied prior to cast application, with the intention of improving the displacement of the neck/femoral head relationship, this being carefully carried out in chosen hips [8].\nCast immobilization was carried out for 12 weeks, in accordance with the casting protocol. No weightbearing was permitted during the \"casting period\". A hip spica was used in earlier cases; as time went on, and we gained more \"experience\" in the matter, choice was made of changing the method of plastering to short leg casts, on account of this being an easier application, allowing the patients to set hips and knees into motion in flexion and extension, thus performing muscle exercises (dynamic method). 
This type of immobilization was based on King's work, being also used to facilitate the patient's movement in a wheelchair [9].\nThe criteria adopted for interruption of the plaster cast use were based on the physeal stability of the head with the femoral neck in the affected hip. Stability, which is the ability to walk without hip pain, was reached regardless of the progress and stage of the growth-plate closure (12 weeks). Follow-up was performed every three months to monitor the growth plate closure (Figure 1).\nEarly slipping of the femoral epiphysis of the left hip. A cast after twelve weeks was applied. The image of the left hip shows growth arrest andno progression with conservative management. (A and B) Anteroposterior and frog-leg lateral radiographs of the pelvis made before treatment, showing the zone of rarefaction on the metaphyseal side in the left hip of the growth plate in Chronic/Mild SCFE, in a ten and half year old boy. (C and D) Anteroposterior and frog-leg lateral radiographs eight months after spica cast had been discontinued. The rarefaction zone has diminished and persists in the left hip. (E and F) Final result. The growth-plate has completely closed on both radiographs of the left hip.\nFor patients who developed chondrolysis, the treatment protocol for the hip was as follows: analgesics, skin traction, bed rest, gentle active range-of-motion exercises, hydrotherapeutic/physiotherapeutic program, and the use of crutches (prolonged and nonweightbearing). The patients who presented chondrolysis underwent an observation period which took from 3 (three) to 12 (twelve) months; the criterion to stop the treatment for chondrolysis was opted for when irreversible clinical range of motion and deformation of both the femoral head and acetabulum were detected.", "The results of the spica treatment (69%) and bilateral short/long leg casts (31%) in abduction and internal rotation with anti-rotational bars were evaluated functionally as well as roentgenographically according to Heyman, Herdon [4], Aadalen, Weiner, Hoyt, Herdon and Herdon's methods and criteria [5]. A 70.5% satisfactory result was obtained in the acute group, 94% in the chronic group (chronic + acute-on-chronic). Regarding the degree of the slipping, a satisfactory result was obtained in 90.5% of hips with a mild slip, 76% of hips with a moderate slip and 73% of hips with a severe slip.\nIt became necessary to reapply a new cast (re-displacement), after the established protocol (12 weeks), in six (5.6%) patients (Cases 25, 27, 63, 64, 74, and 75), who presented a second slip (average: 11 months after cast was discontinued) (Table 3).\nDistribution of the results of the six patients who presented a re-displacement (Progression cases after cast discontinued)\nIn 106 analyzed hips, 12 (11.3%) were detected with chondrolysis, clinically diagnosed by pain, limp, muscle spasms, stiffness, mobility limitations and narrowing of the hip joints' space, as radiographically determined. Among 44 males, only two (Cases 54 and 82) presented chondrolysis, and, in 40 females, eight (Cases 1,2,5,6,13,18,53 and 67) also displayed the same problem (Table 4). 
Among twelve hips with chondrolysis, four (33% [Cases 2, 5, 6, and 82]) presented transient chondrolysis, joints had widened close to normal, osteopenia had improved and pain and stiffness had decreased during the follow-up period (Figure 2).\nChondrolysis incidence correlated to the following variables: sex, race, side, cast type, symptomatology and slip degree\nNecrosis of the joint cartilage (Waldenström disease) of the right hip after cast period. The functional value of mobility of the affected hip was reached. Reversible clinical range of motion and deformation of both the femoral head and acetabulum were detected. (A and B)-Anteroposterior and frog-leg lateral radiographs of the pelvis made before treatment, showing bilateral chronic SCFE, being moderate slip in the right hip and mild in the left. (C) Anteroposterior roentgenogram of both hips after cast treatment with bilateral leg casts in abduction and an internal rotation. We may observe narrowing and irregularity of the right hip joint with demineralization of the surrounding bone = chondrolysis of the right hip. (D and E) Anteroposterior and frog-leg lateral radiographs of the hips showing closure of the growth-plate in the right hip, further demineralization with obliteration of the joint space and irregularity of the head of the femur and acetabulum and also decrease in cartilage thickness. (F and G) Anteroposterior and frog-leg lateral radiographs observing in the right hip some restoration of cartilage, with irregular contour of the femoral head. (H and I) Anteroposterior and frog-leg lateral radiographs observing in the right hip joint, the articular space is now widened compared to the initials X-rays. The femoral head presents mild deformity and limited range-of-motion in the right hip.\nRegarding race types, there were 43 white SCFE patients. Only two (Cases 54 and 82) displayed chondrolysis. Among 41 non-white patients, eight (Cases 1, 2, 5, 6, 13, 18, 54 and 67) also presented chondrolysis. Seven of these (Cases 1, 2, 5, 6, 13, 18, and 67) were female patients, and one was a male (Case 54).\nIn 19 patients (38 hips) with simultaneous involvement displacement, only two patient cases, 18 and 67, developed complications. In 44 hips with the right side affected, only three (Cases 1, 13 and 82) presented chondrolysis; in 62 cases on the left side, five (Cases 2, 5, 6, 53 and 54) presented the same complication.\nRegarding the type of plaster cast used and chondrolysis, the following was observed: 1 1/2 spica - four chondrolysis hips, cases, (1, 2, 13 and 54); double short leg casts-three chondrolysis hips, cases (67 [both hips] and 82); double spica - three chondrolysis hips (18 [both hips] and 53); and double long leg casts-one chondrolysis hip (Case 5).\nThere were 17 hips with symptoms classified as acute, two (Cases 5 and 13), displaying chondrolysis, only ten hips (Cases 1, 2, 6, 18 [both hips], 54, 67 [both hips], 53 and 82) from 85 pertaining to the chronic group developed chondrolysis.\nSeventy-four displacements were observed in the mild-degree group. Seven hips (Cases 1, 2, 6, 18, 54, and 67 [both hips]) presented chondrolysis; in the moderate degree, 5 out of 21 hips (Cases 5, 13, 18, 53 and 82) presented chondrolysis, and none of the nine hips with a severe degree developed it. Avascular necrosis was not detected in none of the hips manipulated, by the Leadbetter maneuver [8] (Figure 3). 
Two patients with SCFE (Cases 85 and 86) were excluded from the study as these had the epiphyseal line already closured in the first appointment. Both patients had chondrolysis without any previous kind of treatment.\nYoung female patient with severe slip of the left hip, treated by immobilization (anti-rotation plasters) after hip manipulation. The range of motion of the left hip was normal at the final follow-up. (A and B) Anteroposterior radiograph of the pelvis and spot film before treatment, in a nine-year-old girl who had an acute/severe slip SCFE in the left hip. (C and D) Patient under general anesthesia submitted to gentle Leadbetter manipulation. Bilateral toe-to-groin casts had been applied. (E and F) Anteroposterior and Frog-leg lateral radiographs showing the physis beginning the closure process in AP and lateral views. (G and H) Anteroposterior and frog-leg lateral radiographs of the left hip, showing complete closure of the growth-plate.\nOne case of pseudoarthrosis (0.9%) with necrosis of the head was detected after a repeated slip. This complication was classified as severe, of the traumatic displacement type, in the patient's hip (Case 75), due to a prolonged heavy femoral and tibia skeletal traction time employed simultaneously; avascular necrosis also was observed as a complication.\n[SUBTITLE] Statistical Analysis [SUBSECTION] One of the objectives of the statistical analysis was to specify whether a significant variation existed in hip mobility measures (in degrees) before or after treatment. The absolute variation (in degrees) between pre-and post-treatment is given by the following formula: Absolute variation of flexion = flexion in post-treatment-flexion in pre-treatment. Statistical analysis was accomplished by Wilcoxon's marked positions test [10]. According to hip flexion analysis, significant variations (p = 0.0001) were found, i. e., there was an increase of 29.5° on average after treatment. With regard to hip abduction, a significant variation (p = 0.0001) was found, i. e., there was an increase of 12.5°. As for hip internal rotation, there were significant variations (p = 0.0001), i. e., an increase of 11.8°. Concerning hip external rotation, significant variations (p = 0.02) were also observed, i.e., there was an increase of 5.1°.\nThe other objective regarding statistical analysis was to specify whether there existed a significant variation between age, sex, race, and type of immobilization versus chondrolysis. Statistical analysis was preformed by means of Fisher's accurate test, at 5% level [11]. Chondrolysis was present in 11.3% of the hips tested. There was no significant variation between age and chondrolysis (p = 1.00). Concerning gender analysis, statistically significant variations were observed (p = 0.031). In race analysis, there was also a statistically significant difference (p = 0.037). No causal association between plaster cast and chondrolysis was observed (p = 0.60). Regarding the symptomatology group and the slip degree versus chondrolysis, the p value was not statistically significant in either analysis, respectively p = 0.61 and p = 0.085.\nOne of the objectives of the statistical analysis was to specify whether a significant variation existed in hip mobility measures (in degrees) before or after treatment. The absolute variation (in degrees) between pre-and post-treatment is given by the following formula: Absolute variation of flexion = flexion in post-treatment-flexion in pre-treatment. 
Statistical analysis was accomplished by Wilcoxon's marked positions test [10]. According to hip flexion analysis, significant variations (p = 0.0001) were found, i. e., there was an increase of 29.5° on average after treatment. With regard to hip abduction, a significant variation (p = 0.0001) was found, i. e., there was an increase of 12.5°. As for hip internal rotation, there were significant variations (p = 0.0001), i. e., an increase of 11.8°. Concerning hip external rotation, significant variations (p = 0.02) were also observed, i.e., there was an increase of 5.1°.\nThe other objective regarding statistical analysis was to specify whether there existed a significant variation between age, sex, race, and type of immobilization versus chondrolysis. Statistical analysis was preformed by means of Fisher's accurate test, at 5% level [11]. Chondrolysis was present in 11.3% of the hips tested. There was no significant variation between age and chondrolysis (p = 1.00). Concerning gender analysis, statistically significant variations were observed (p = 0.031). In race analysis, there was also a statistically significant difference (p = 0.037). No causal association between plaster cast and chondrolysis was observed (p = 0.60). Regarding the symptomatology group and the slip degree versus chondrolysis, the p value was not statistically significant in either analysis, respectively p = 0.61 and p = 0.085.", "One of the objectives of the statistical analysis was to specify whether a significant variation existed in hip mobility measures (in degrees) before or after treatment. The absolute variation (in degrees) between pre-and post-treatment is given by the following formula: Absolute variation of flexion = flexion in post-treatment-flexion in pre-treatment. Statistical analysis was accomplished by Wilcoxon's marked positions test [10]. According to hip flexion analysis, significant variations (p = 0.0001) were found, i. e., there was an increase of 29.5° on average after treatment. With regard to hip abduction, a significant variation (p = 0.0001) was found, i. e., there was an increase of 12.5°. As for hip internal rotation, there were significant variations (p = 0.0001), i. e., an increase of 11.8°. Concerning hip external rotation, significant variations (p = 0.02) were also observed, i.e., there was an increase of 5.1°.\nThe other objective regarding statistical analysis was to specify whether there existed a significant variation between age, sex, race, and type of immobilization versus chondrolysis. Statistical analysis was preformed by means of Fisher's accurate test, at 5% level [11]. Chondrolysis was present in 11.3% of the hips tested. There was no significant variation between age and chondrolysis (p = 1.00). Concerning gender analysis, statistically significant variations were observed (p = 0.031). In race analysis, there was also a statistically significant difference (p = 0.037). No causal association between plaster cast and chondrolysis was observed (p = 0.60). Regarding the symptomatology group and the slip degree versus chondrolysis, the p value was not statistically significant in either analysis, respectively p = 0.61 and p = 0.085.", "The cause of articular cartilage necrosis after slipped capital femoral epiphysis still remains obscure [12]. Betz, Steel, Emper, Huss and Clancy found 13.5% of chondrolysis in their trials [7]. Ingram, Clarke, Clark and Marshall mentioned that the incidence of chondrolysis varies from 2% to 55% [6]. 
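The "Wilcoxon marked positions test" and "Fisher accuracy test" named in the statistical analysis above presumably correspond to the Wilcoxon signed-rank test and Fisher's exact test. The sketch below shows how both analyses could be run in Python with SciPy; the paired range-of-motion values are hypothetical placeholders (the per-hip measurements are not reproduced in this text), and the 2 × 2 table is built from the gender counts given in the Results, so the outputs will only approximate the reported p-values.

```python
# Sketch of the two analyses described above, using SciPy.
# All numbers are illustrative; they are not the study's raw data.
from scipy.stats import fisher_exact, wilcoxon

# Paired hip flexion (degrees) before and after treatment for six hypothetical
# hips; the study reports a mean post-treatment gain of 29.5 degrees.
flexion_pre = [90, 80, 100, 85, 95, 70]
flexion_post = [120, 115, 125, 110, 130, 100]
w_stat, p_flexion = wilcoxon(flexion_post, flexion_pre)
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {p_flexion:.4f}")

# 2 x 2 table of gender versus chondrolysis from the counts in the Results
# (2 of 44 males and 8 of 40 females affected). Whether hips or patients are
# the unit of analysis is not fully specified, so the p-value here may differ
# from the reported 0.031.
table = [[2, 44 - 2],   # males: chondrolysis / no chondrolysis
         [8, 40 - 8]]   # females: chondrolysis / no chondrolysis
odds_ratio, p_gender = fisher_exact(table)
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_gender:.3f}")
```

Fisher's exact test is the natural choice over a chi-square test here because several of the expected cell counts are small.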
Jerre, in a series of 200 slipped femoral epiphyses treated mainly by closed reduction and plaster immobilization, found nine hips (4.5%) with articular cartilage necrosis [13]; in this study, chondrolysis affected 12 hips (11.3%): four presented a temporary form of chondrolysis (7.5%), with eight being permanent. Writings on this subject have shown a predominance of females over males [14,15]; in this series, chondrolysis was also predominant in females over males.\nAccording to published works [2,14,16,17]; chondrolysis in non-white patients (16%- 66%) is more common than in white patients (2.5%). In this study, regarding articular cartilage necrosis, it was ascertained that non-white patients prevailed by a considerable number over the white patients. The manifestation and prevalence of chondrolysis as a complication in females and non-whites are some of the unclarified points in the study as of yet.\nRegarding symptomatology, classification in previous studies assigns to chronic group patients the worst prognosis in relation to chondrolysis [6,7,17,18]. In this sample, the record of chondrolysis incidence in this type of group was in accordance with the literature.\nConcerning the degree of epiphysis displacement in relation to the femoral neck, in chondrolysis, bad results are proportional to the severity of the slip degree [6,17,18]. In this study, seven patients classified as mild degree presented chondrolysis, five classified as moderate presented the complication, with none of the nine severe cases displaying it. This finding is contrary to the general condition.\nNevertheless, concerning chondrolysis, there was an inexplicable finding with one female patient who was treated for bilateral slipping by 1 1/2 spica cast. While her right hip was normal, the left one deteriorated to chondrolysis. Necrosis of articular cartilage is an entity that represents an auto-immune disease in genetically-susceptible individuals [19]. Still in relation to a chondrolysis complication, some authors affirm that excessive immobilization also favors articular cartilage necrosis [13,16,20]. It was observed, in this work, that five hips out of 12 were attacked by the disease when cast immobilization was used for over 12 weeks (apprehension curve).\nWaldenström mentioned that the collum produces new vessels, which attempt to heal rupture continuity [20]. The period of immobilization (12 weeks) was observed as providing stability of the epiphysis to metaphysis, thus avoiding displacement continuity. Ponseti and Barta ascertained that growth plate obliteration process happens between 5 and 12 months, with a 9-month average after the beginning of the treatment with cast immobilization [16]. In this work, growth plate ossification time was 16.5 months.\nGreen found a 5% average progression of slipping after the cast had been discontinued (one of 18 hips; this patient's hip had been immobilized for only 8 weeks) [21]. Jerre found definite redisplacement in 20 (10%) hips in his series [13]. For prevention of additional slip of chronic SCFE groups, Betz, Steel, Emper, Huss and Clancy have shown effective treatment in 12 weeks, with a spica cast [7]. They reported one progression (8 weeks in a cast only) out of 37 hips. The range of time in which a redisplacement is possible is claimed by Waldenström to be approximately 1 year [22]. Wilson observed redisplacement occurring within 2 to 33 months (average, 11.8 months) from the start of the treatment [23]. 
In the present series, out of 106 hips, six (5.6%) were recorded with redisplacement (on average, 11 months after the cast had been removed), four following a traumatic episode.\nKing presented the use of bilateral short-leg cast immobilization as a form of treatment without chondrolysis [9]. In his work, 52 affected hips were recorded with satisfactory results. In the article, 33 short/long-leg casts in abduction and internal rotation were fixed with a stick; four chondrolysis were found, and, in 73 plaster spica casts, eight cases.\nThe disadvantages of immobilization in a spica cast include potential skin and pulmonary problems, ileus, and the difficulty in handling an obese child, in addition to problems involving education [7]. These disadvantages should be taken into consideration because of the risks of pinning by means of wires or screws, and the serious sequelae which include pin penetration, fracture, infection, pin breakage, growth disturbance, wound problems, subsequent slippage, difficulty in pin extraction during hardware removal, nail slipping into the joint, nail extruding, nails bending, avascular necrosis, as well as chondrolysis [7,14,15,24,25]. The global incidence of chondrolysis is 7% with all forms of treatment [26]. Chondrolysis can appear spontaneously after the slipping of the femoral epiphysis without any treatment, and may follow either a slight or a severe slip. It may occur after any type of treatment, whether conservative or operative [12].\nThese results show why some methods are in favor, and others are in disfavor, in the clinic where these patients were treated and where as, in all hospitals the facilities and limitations must be evaluated by every surgeon (Clarence H. Heyman, M D) [27].", "After analyzing the nonoperative treatment in slipped capital femoral epiphysis and chondrolysis, we concluded that the employment of the treatment revealed that the method was functional, efficient, valid, and reproducible; it can also be used as an alternative therapeutic procedure regarding to this specific disease.\nThis manuscript is faced with the fact that the orthopaedic surgeons employ and evaluate a little-adopted treatment technique by musculoskeletal studies in the treatment of SCFE. The success or failure of treatment intervention is determined based on the outcomes [28]. The presented work was evaluated and tested on its contents, methodology and clinical usefulness. Modern medicine is based on evidence, and outcomes have to have their importance proven. The instrument of quality employed (plaster cast method) was assessed not only by the surgeon, but also by the patient, through his descriptions. The patient was always given the option, upon the first appointment, to choose from the conservative or surgical treatment. The nonoperative management of SCFE was accepted by relatives. The interest demonstrated by the patients in method reliability has shown the possibility of analyzing the difference between the patients' reports, and those from the professionals and their studies, with the possibility of varied outcomes. Evaluation in modern medicine must be based on evidences of the result and on the functional radiographic measurements, in addition to being statistically analyzed and including the patients' reports. The present work showed an optional method for the treatment of slipped capital femoral epiphysis.", "Written informed consent was obtained from all patients and relevant parents/guardians for publication of this report and accompanying images. 
A copy of the written consent is available for review by the Editor-in-Chief of this journal.", "The author has not received any outside funding or grants in support of, or in preparation for, this research. Neither he nor any member of his immediate family received payments, benefits or agreements to provide the research for financial reasons.", "The author certifies that he has no commercial associations (e.g., consultancies, stock holdings, equity interest, patent/licensing arrangements, etc.) which might pose a conflict of interest in connection with the submitted article." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Background", "Patients and Methods", "Treatment Protocol", "Results", "Statistical Analysis", "Discussion", "Conclusions", "Consent", "Competing interests", "Authors' Information" ]
[ "The contributions and reasons for the use of the non-operative management of Slipped Capital Femoral Epiphysis (SCFE) are as follows:\n- applicability: non-operative treatment of SCFE allows the use of this method at any hospital, even for surgeons who have very little hands-on experience with this specific disease;\n- elucidation: the work elucidates the employment of a principle and the method of treatment little exploited by world literature;\n- knowledge: this research offers the opportunity for orthopedic surgeons to employ a method based on biology, contributing to further knowledge of SCFE, thereby also promoting the possibility of a wide debate on the subject;\n- reproducible: the easy use of this method allows the treatment to be repeated in other innovating medicine centers by an execution of a general procedure to a widespread application, adding value to knowledge;\n- results: the work has proven its effectiveness based on statistical data obtained, thereby demonstrating its importance and feasibility;\n- therapeutic: the use of the plaster cast method revealed the possibility of obtaining favorable results for its use;\n- prognosis: early diagnosis, associated with the simplicity of the SCFE method, favors a good prognosis and low morbidity for the disease.\nThis work posits that the benefits and application of the therapeutic criteria based on biology comprise a valid method of treatment, considering disease prognostic uncertainty.", "The Committee of Ethics of the Jesus Children's Hospital in Brazil, Rio de Janeiro, have analyzed and approved the Research Project entitled, Nonoperative treatment of slipped capital femoral epiphysis, which was also evaluated by the Ethics Committee for Research of the Federal University of Rio de Janeiro (UFRJ), Brazil.\nThe typology of the design employed in this sample was a study of a single cohort with observational, longitudinal and retrospective characteristics. In this research, chondrolysis was the dependent variable. A consecutive series of 106 hip joints in eighty-four patients affected, the great majority of them obese, displaying SCFE, were treated by means of plaster cast (Table 1 and Table 2). Patients' age varied at the time of diagnosis, ranging from 7.6 to 15.8 years. The duration of the follow-up ranged from 12 months, with the complete growth-plate closure, to 146 months, an average of 51 months. Thirteen patients were younger than eleven, 55 patients were between the ages of 11 and 13; and 16 patients were between the ages of 13 and 16. The average age was 12.5, males having the average age of 14.5 and females 10.5. Forty-four patients were males, and 40 females. Regarding race, 43 were white, and 41 were non-white. Unilateral involvement was present in 62 left hips and 44 right hips. Bilateral displacement (simultaneous involvement) of hips was present in 19 patients. 
Three patients were detected as displaying involvement of the contra lateral hip in different periods (sequential bilaterality), comprising, in total, 22 bilateral slip patients.\nData on the Patients\n*M = male and F = female; # W = white and N-W = non-white; ¥ R = right and L = left\nData on the Patients\n*M = male and F = female;\n#W = white and N-W = non-white;\n¥ R = right and L = left.\nThe methods used were evaluated based on symptomatology, and categorized as acute, chronic, or acute on chronic, according to Fahey and O'Brien [1]; also, slip degrees were documented by the standard method of thirds and classified as mild, moderate, or severe, according to Wilson, Jacobs, Schecter [2]; MacEwen and Ramsey who use the three grades of slip percentage [3]. The hips were systematically evaluated roentgenographically, as well as functionally, according to Heyman and Herndon's criteria [4], being also categorized as satisfactory and unsatisfactory by means of Aadalen, Weiner, Hoyt, Herdon and Herdon's criteria [5]. The radiographic methods used to analyze joint cartilage and detect chondrolysis were based on Ingram, Clarke, Clark and Marshall's criteria [6].\n[SUBTITLE] Treatment Protocol [SUBSECTION] The main objective of the SCFE treatment is to avoid progressive displacement, with the use of the safest and the most effective technique to arrest growth plate. The routine methodology employed was based on the conservative principle with the use of spicas (earlier cases) and bilateral short/long leg casts in abduction, and a slight internal rotation (15°) with antirotational bars (later cases), aiming at immobilizing the patient's hip for 12 weeks.\nSkin traction was used in order to avoid slip progression pre-casting in those patients displaying muscle spasms. Traction was also used to limit the patient's motion in order to reduce pain, and to prevent irritability (pain when moved through passive or active range of motion) [7]. Skeletal traction was also applied. This type of traction was used in these patients in an attempt to improve the neck-femoral head relationship. Reduction of the degree of slip by skeletal traction was not found in this series. For this reason, this type of traction was abandoned in SCFE pre-treatment.\nAnaesthesia was administered as needed in the presence of pain and/or discomfort during plaster hip spica and short/long-leg cast application, in preparation for resting the hip.\nManipulation under anesthesia was performed as an alternative procedure to improve epiphysis position. In very few cases, Leadbetter's maneuver was gently applied prior to cast application, with the intention of improving the displacement of the neck/femoral head relationship, this being carefully carried out in chosen hips [8].\nCast immobilization was carried out for 12 weeks, in accordance with the casting protocol. No weightbearing was permitted during the \"casting period\". A hip spica was used in earlier cases; as time went on, and we gained more \"experience\" in the matter, choice was made of changing the method of plastering to short leg casts, on account of this being an easier application, allowing the patients to set hips and knees into motion in flexion and extension, thus performing muscle exercises (dynamic method). 
This type of immobilization was based on King's work, being also used to facilitate the patient's movement in a wheelchair [9].\nThe criteria adopted for interruption of the plaster cast use were based on the physeal stability of the head with the femoral neck in the affected hip. Stability, which is the ability to walk without hip pain, was reached regardless of the progress and stage of the growth-plate closure (12 weeks). Follow-up was performed every three months to monitor the growth plate closure (Figure 1).\nEarly slipping of the femoral epiphysis of the left hip. A cast after twelve weeks was applied. The image of the left hip shows growth arrest andno progression with conservative management. (A and B) Anteroposterior and frog-leg lateral radiographs of the pelvis made before treatment, showing the zone of rarefaction on the metaphyseal side in the left hip of the growth plate in Chronic/Mild SCFE, in a ten and half year old boy. (C and D) Anteroposterior and frog-leg lateral radiographs eight months after spica cast had been discontinued. The rarefaction zone has diminished and persists in the left hip. (E and F) Final result. The growth-plate has completely closed on both radiographs of the left hip.\nFor patients who developed chondrolysis, the treatment protocol for the hip was as follows: analgesics, skin traction, bed rest, gentle active range-of-motion exercises, hydrotherapeutic/physiotherapeutic program, and the use of crutches (prolonged and nonweightbearing). The patients who presented chondrolysis underwent an observation period which took from 3 (three) to 12 (twelve) months; the criterion to stop the treatment for chondrolysis was opted for when irreversible clinical range of motion and deformation of both the femoral head and acetabulum were detected.\nThe main objective of the SCFE treatment is to avoid progressive displacement, with the use of the safest and the most effective technique to arrest growth plate. The routine methodology employed was based on the conservative principle with the use of spicas (earlier cases) and bilateral short/long leg casts in abduction, and a slight internal rotation (15°) with antirotational bars (later cases), aiming at immobilizing the patient's hip for 12 weeks.\nSkin traction was used in order to avoid slip progression pre-casting in those patients displaying muscle spasms. Traction was also used to limit the patient's motion in order to reduce pain, and to prevent irritability (pain when moved through passive or active range of motion) [7]. Skeletal traction was also applied. This type of traction was used in these patients in an attempt to improve the neck-femoral head relationship. Reduction of the degree of slip by skeletal traction was not found in this series. For this reason, this type of traction was abandoned in SCFE pre-treatment.\nAnaesthesia was administered as needed in the presence of pain and/or discomfort during plaster hip spica and short/long-leg cast application, in preparation for resting the hip.\nManipulation under anesthesia was performed as an alternative procedure to improve epiphysis position. In very few cases, Leadbetter's maneuver was gently applied prior to cast application, with the intention of improving the displacement of the neck/femoral head relationship, this being carefully carried out in chosen hips [8].\nCast immobilization was carried out for 12 weeks, in accordance with the casting protocol. No weightbearing was permitted during the \"casting period\". 
A hip spica was used in earlier cases; as time went on, and we gained more \"experience\" in the matter, choice was made of changing the method of plastering to short leg casts, on account of this being an easier application, allowing the patients to set hips and knees into motion in flexion and extension, thus performing muscle exercises (dynamic method). This type of immobilization was based on King's work, being also used to facilitate the patient's movement in a wheelchair [9].\nThe criteria adopted for interruption of the plaster cast use were based on the physeal stability of the head with the femoral neck in the affected hip. Stability, which is the ability to walk without hip pain, was reached regardless of the progress and stage of the growth-plate closure (12 weeks). Follow-up was performed every three months to monitor the growth plate closure (Figure 1).\nEarly slipping of the femoral epiphysis of the left hip. A cast after twelve weeks was applied. The image of the left hip shows growth arrest andno progression with conservative management. (A and B) Anteroposterior and frog-leg lateral radiographs of the pelvis made before treatment, showing the zone of rarefaction on the metaphyseal side in the left hip of the growth plate in Chronic/Mild SCFE, in a ten and half year old boy. (C and D) Anteroposterior and frog-leg lateral radiographs eight months after spica cast had been discontinued. The rarefaction zone has diminished and persists in the left hip. (E and F) Final result. The growth-plate has completely closed on both radiographs of the left hip.\nFor patients who developed chondrolysis, the treatment protocol for the hip was as follows: analgesics, skin traction, bed rest, gentle active range-of-motion exercises, hydrotherapeutic/physiotherapeutic program, and the use of crutches (prolonged and nonweightbearing). The patients who presented chondrolysis underwent an observation period which took from 3 (three) to 12 (twelve) months; the criterion to stop the treatment for chondrolysis was opted for when irreversible clinical range of motion and deformation of both the femoral head and acetabulum were detected.", "The main objective of the SCFE treatment is to avoid progressive displacement, with the use of the safest and the most effective technique to arrest growth plate. The routine methodology employed was based on the conservative principle with the use of spicas (earlier cases) and bilateral short/long leg casts in abduction, and a slight internal rotation (15°) with antirotational bars (later cases), aiming at immobilizing the patient's hip for 12 weeks.\nSkin traction was used in order to avoid slip progression pre-casting in those patients displaying muscle spasms. Traction was also used to limit the patient's motion in order to reduce pain, and to prevent irritability (pain when moved through passive or active range of motion) [7]. Skeletal traction was also applied. This type of traction was used in these patients in an attempt to improve the neck-femoral head relationship. Reduction of the degree of slip by skeletal traction was not found in this series. For this reason, this type of traction was abandoned in SCFE pre-treatment.\nAnaesthesia was administered as needed in the presence of pain and/or discomfort during plaster hip spica and short/long-leg cast application, in preparation for resting the hip.\nManipulation under anesthesia was performed as an alternative procedure to improve epiphysis position. 
In very few cases, Leadbetter's maneuver was gently applied prior to cast application, with the intention of improving the displacement of the neck/femoral head relationship, this being carefully carried out in chosen hips [8].\nCast immobilization was carried out for 12 weeks, in accordance with the casting protocol. No weightbearing was permitted during the \"casting period\". A hip spica was used in earlier cases; as time went on, and we gained more \"experience\" in the matter, choice was made of changing the method of plastering to short leg casts, on account of this being an easier application, allowing the patients to set hips and knees into motion in flexion and extension, thus performing muscle exercises (dynamic method). This type of immobilization was based on King's work, being also used to facilitate the patient's movement in a wheelchair [9].\nThe criteria adopted for interruption of the plaster cast use were based on the physeal stability of the head with the femoral neck in the affected hip. Stability, which is the ability to walk without hip pain, was reached regardless of the progress and stage of the growth-plate closure (12 weeks). Follow-up was performed every three months to monitor the growth plate closure (Figure 1).\nEarly slipping of the femoral epiphysis of the left hip. A cast after twelve weeks was applied. The image of the left hip shows growth arrest andno progression with conservative management. (A and B) Anteroposterior and frog-leg lateral radiographs of the pelvis made before treatment, showing the zone of rarefaction on the metaphyseal side in the left hip of the growth plate in Chronic/Mild SCFE, in a ten and half year old boy. (C and D) Anteroposterior and frog-leg lateral radiographs eight months after spica cast had been discontinued. The rarefaction zone has diminished and persists in the left hip. (E and F) Final result. The growth-plate has completely closed on both radiographs of the left hip.\nFor patients who developed chondrolysis, the treatment protocol for the hip was as follows: analgesics, skin traction, bed rest, gentle active range-of-motion exercises, hydrotherapeutic/physiotherapeutic program, and the use of crutches (prolonged and nonweightbearing). The patients who presented chondrolysis underwent an observation period which took from 3 (three) to 12 (twelve) months; the criterion to stop the treatment for chondrolysis was opted for when irreversible clinical range of motion and deformation of both the femoral head and acetabulum were detected.", "The results of the spica treatment (69%) and bilateral short/long leg casts (31%) in abduction and internal rotation with anti-rotational bars were evaluated functionally as well as roentgenographically according to Heyman, Herdon [4], Aadalen, Weiner, Hoyt, Herdon and Herdon's methods and criteria [5]. A 70.5% satisfactory result was obtained in the acute group, 94% in the chronic group (chronic + acute-on-chronic). 
Regarding the degree of the slipping, a satisfactory result was obtained in 90.5% of hips with a mild slip, 76% of hips with a moderate slip and 73% of hips with a severe slip.\nIt became necessary to reapply a new cast (re-displacement), after the established protocol (12 weeks), in six (5.6%) patients (Cases 25, 27, 63, 64, 74, and 75), who presented a second slip (average: 11 months after cast was discontinued) (Table 3).\nDistribution of the results of the six patients who presented a re-displacement (Progression cases after cast discontinued)\nIn 106 analyzed hips, 12 (11.3%) were detected with chondrolysis, clinically diagnosed by pain, limp, muscle spasms, stiffness, mobility limitations and narrowing of the hip joints' space, as radiographically determined. Among 44 males, only two (Cases 54 and 82) presented chondrolysis, and, in 40 females, eight (Cases 1,2,5,6,13,18,53 and 67) also displayed the same problem (Table 4). Among twelve hips with chondrolysis, four (33% [Cases 2, 5, 6, and 82]) presented transient chondrolysis, joints had widened close to normal, osteopenia had improved and pain and stiffness had decreased during the follow-up period (Figure 2).\nChondrolysis incidence correlated to the following variables: sex, race, side, cast type, symptomatology and slip degree\nNecrosis of the joint cartilage (Waldenström disease) of the right hip after cast period. The functional value of mobility of the affected hip was reached. Reversible clinical range of motion and deformation of both the femoral head and acetabulum were detected. (A and B)-Anteroposterior and frog-leg lateral radiographs of the pelvis made before treatment, showing bilateral chronic SCFE, being moderate slip in the right hip and mild in the left. (C) Anteroposterior roentgenogram of both hips after cast treatment with bilateral leg casts in abduction and an internal rotation. We may observe narrowing and irregularity of the right hip joint with demineralization of the surrounding bone = chondrolysis of the right hip. (D and E) Anteroposterior and frog-leg lateral radiographs of the hips showing closure of the growth-plate in the right hip, further demineralization with obliteration of the joint space and irregularity of the head of the femur and acetabulum and also decrease in cartilage thickness. (F and G) Anteroposterior and frog-leg lateral radiographs observing in the right hip some restoration of cartilage, with irregular contour of the femoral head. (H and I) Anteroposterior and frog-leg lateral radiographs observing in the right hip joint, the articular space is now widened compared to the initials X-rays. The femoral head presents mild deformity and limited range-of-motion in the right hip.\nRegarding race types, there were 43 white SCFE patients. Only two (Cases 54 and 82) displayed chondrolysis. Among 41 non-white patients, eight (Cases 1, 2, 5, 6, 13, 18, 54 and 67) also presented chondrolysis. Seven of these (Cases 1, 2, 5, 6, 13, 18, and 67) were female patients, and one was a male (Case 54).\nIn 19 patients (38 hips) with simultaneous involvement displacement, only two patient cases, 18 and 67, developed complications. 
In 44 hips with the right side affected, only three (Cases 1, 13 and 82) presented chondrolysis; in 62 cases on the left side, five (Cases 2, 5, 6, 53 and 54) presented the same complication.\nRegarding the type of plaster cast used and chondrolysis, the following was observed: 1 1/2 spica - four chondrolysis hips, cases, (1, 2, 13 and 54); double short leg casts-three chondrolysis hips, cases (67 [both hips] and 82); double spica - three chondrolysis hips (18 [both hips] and 53); and double long leg casts-one chondrolysis hip (Case 5).\nThere were 17 hips with symptoms classified as acute, two (Cases 5 and 13), displaying chondrolysis, only ten hips (Cases 1, 2, 6, 18 [both hips], 54, 67 [both hips], 53 and 82) from 85 pertaining to the chronic group developed chondrolysis.\nSeventy-four displacements were observed in the mild-degree group. Seven hips (Cases 1, 2, 6, 18, 54, and 67 [both hips]) presented chondrolysis; in the moderate degree, 5 out of 21 hips (Cases 5, 13, 18, 53 and 82) presented chondrolysis, and none of the nine hips with a severe degree developed it. Avascular necrosis was not detected in none of the hips manipulated, by the Leadbetter maneuver [8] (Figure 3). Two patients with SCFE (Cases 85 and 86) were excluded from the study as these had the epiphyseal line already closured in the first appointment. Both patients had chondrolysis without any previous kind of treatment.\nYoung female patient with severe slip of the left hip, treated by immobilization (anti-rotation plasters) after hip manipulation. The range of motion of the left hip was normal at the final follow-up. (A and B) Anteroposterior radiograph of the pelvis and spot film before treatment, in a nine-year-old girl who had an acute/severe slip SCFE in the left hip. (C and D) Patient under general anesthesia submitted to gentle Leadbetter manipulation. Bilateral toe-to-groin casts had been applied. (E and F) Anteroposterior and Frog-leg lateral radiographs showing the physis beginning the closure process in AP and lateral views. (G and H) Anteroposterior and frog-leg lateral radiographs of the left hip, showing complete closure of the growth-plate.\nOne case of pseudoarthrosis (0.9%) with necrosis of the head was detected after a repeated slip. This complication was classified as severe, of the traumatic displacement type, in the patient's hip (Case 75), due to a prolonged heavy femoral and tibia skeletal traction time employed simultaneously; avascular necrosis also was observed as a complication.\n[SUBTITLE] Statistical Analysis [SUBSECTION] One of the objectives of the statistical analysis was to specify whether a significant variation existed in hip mobility measures (in degrees) before or after treatment. The absolute variation (in degrees) between pre-and post-treatment is given by the following formula: Absolute variation of flexion = flexion in post-treatment-flexion in pre-treatment. Statistical analysis was accomplished by Wilcoxon's marked positions test [10]. According to hip flexion analysis, significant variations (p = 0.0001) were found, i. e., there was an increase of 29.5° on average after treatment. With regard to hip abduction, a significant variation (p = 0.0001) was found, i. e., there was an increase of 12.5°. As for hip internal rotation, there were significant variations (p = 0.0001), i. e., an increase of 11.8°. 
Concerning hip external rotation, significant variations (p = 0.02) were also observed, i.e., there was an increase of 5.1°.\nThe other objective of the statistical analysis was to specify whether there existed a significant variation between age, sex, race, and type of immobilization versus chondrolysis. Statistical analysis was performed by means of Fisher's exact test, at the 5% level [11]. Chondrolysis was present in 11.3% of the hips tested. There was no significant variation between age and chondrolysis (p = 1.00). Concerning gender, statistically significant variations were observed (p = 0.031). In the race analysis, there was also a statistically significant difference (p = 0.037). No causal association between plaster cast and chondrolysis was observed (p = 0.60). Regarding the symptomatology group and the slip degree versus chondrolysis, the p value was not statistically significant in either analysis, respectively p = 0.61 and p = 0.085.\nOne of the objectives of the statistical analysis was to specify whether a significant variation existed in hip mobility measures (in degrees) before and after treatment. The absolute variation (in degrees) between pre- and post-treatment is given by the following formula: absolute variation of flexion = post-treatment flexion - pre-treatment flexion. Statistical analysis was performed with the Wilcoxon signed-rank test [10]. In the hip flexion analysis, significant variations (p = 0.0001) were found, i.e., there was an increase of 29.5° on average after treatment. With regard to hip abduction, a significant variation (p = 0.0001) was found, i.e., there was an increase of 12.5°. As for hip internal rotation, there were significant variations (p = 0.0001), i.e., an increase of 11.8°. Concerning hip external rotation, significant variations (p = 0.02) were also observed, i.e., there was an increase of 5.1°.\nThe other objective of the statistical analysis was to specify whether there existed a significant variation between age, sex, race, and type of immobilization versus chondrolysis. Statistical analysis was performed by means of Fisher's exact test, at the 5% level [11]. Chondrolysis was present in 11.3% of the hips tested. There was no significant variation between age and chondrolysis (p = 1.00). Concerning gender, statistically significant variations were observed (p = 0.031). In the race analysis, there was also a statistically significant difference (p = 0.037). No causal association between plaster cast and chondrolysis was observed (p = 0.60). Regarding the symptomatology group and the slip degree versus chondrolysis, the p value was not statistically significant in either analysis, respectively p = 0.61 and p = 0.085.", "One of the objectives of the statistical analysis was to specify whether a significant variation existed in hip mobility measures (in degrees) before and after treatment. The absolute variation (in degrees) between pre- and post-treatment is given by the following formula: absolute variation of flexion = post-treatment flexion - pre-treatment flexion. Statistical analysis was performed with the Wilcoxon signed-rank test [10]. In the hip flexion analysis, significant variations (p = 0.0001) were found, i.e., there was an increase of 29.5° on average after treatment. With regard to hip abduction, a significant variation (p = 0.0001) was found, i.e., there was an increase of 12.5°. As for hip internal rotation, there were significant variations (p = 0.0001), i.
e., an increase of 11.8°. Concerning hip external rotation, significant variations (p = 0.02) were also observed, i.e., there was an increase of 5.1°.\nThe other objective regarding statistical analysis was to specify whether there existed a significant variation between age, sex, race, and type of immobilization versus chondrolysis. Statistical analysis was preformed by means of Fisher's accurate test, at 5% level [11]. Chondrolysis was present in 11.3% of the hips tested. There was no significant variation between age and chondrolysis (p = 1.00). Concerning gender analysis, statistically significant variations were observed (p = 0.031). In race analysis, there was also a statistically significant difference (p = 0.037). No causal association between plaster cast and chondrolysis was observed (p = 0.60). Regarding the symptomatology group and the slip degree versus chondrolysis, the p value was not statistically significant in either analysis, respectively p = 0.61 and p = 0.085.", "The cause of articular cartilage necrosis after slipped capital femoral epiphysis still remains obscure [12]. Betz, Steel, Emper, Huss and Clancy found 13.5% of chondrolysis in their trials [7]. Ingram, Clarke, Clark and Marshall mentioned that the incidence of chondrolysis varies from 2% to 55% [6]. Jerre, in a series of 200 slipped femoral epiphyses treated mainly by closed reduction and plaster immobilization, found nine hips (4.5%) with articular cartilage necrosis [13]; in this study, chondrolysis affected 12 hips (11.3%): four presented a temporary form of chondrolysis (7.5%), with eight being permanent. Writings on this subject have shown a predominance of females over males [14,15]; in this series, chondrolysis was also predominant in females over males.\nAccording to published works [2,14,16,17]; chondrolysis in non-white patients (16%- 66%) is more common than in white patients (2.5%). In this study, regarding articular cartilage necrosis, it was ascertained that non-white patients prevailed by a considerable number over the white patients. The manifestation and prevalence of chondrolysis as a complication in females and non-whites are some of the unclarified points in the study as of yet.\nRegarding symptomatology, classification in previous studies assigns to chronic group patients the worst prognosis in relation to chondrolysis [6,7,17,18]. In this sample, the record of chondrolysis incidence in this type of group was in accordance with the literature.\nConcerning the degree of epiphysis displacement in relation to the femoral neck, in chondrolysis, bad results are proportional to the severity of the slip degree [6,17,18]. In this study, seven patients classified as mild degree presented chondrolysis, five classified as moderate presented the complication, with none of the nine severe cases displaying it. This finding is contrary to the general condition.\nNevertheless, concerning chondrolysis, there was an inexplicable finding with one female patient who was treated for bilateral slipping by 1 1/2 spica cast. While her right hip was normal, the left one deteriorated to chondrolysis. Necrosis of articular cartilage is an entity that represents an auto-immune disease in genetically-susceptible individuals [19]. Still in relation to a chondrolysis complication, some authors affirm that excessive immobilization also favors articular cartilage necrosis [13,16,20]. 
It was observed, in this work, that five hips out of 12 were attacked by the disease when cast immobilization was used for over 12 weeks (apprehension curve).\nWaldenström mentioned that the collum produces new vessels, which attempt to heal rupture continuity [20]. The period of immobilization (12 weeks) was observed as providing stability of the epiphysis to metaphysis, thus avoiding displacement continuity. Ponseti and Barta ascertained that growth plate obliteration process happens between 5 and 12 months, with a 9-month average after the beginning of the treatment with cast immobilization [16]. In this work, growth plate ossification time was 16.5 months.\nGreen found a 5% average progression of slipping after the cast had been discontinued (one of 18 hips; this patient's hip had been immobilized for only 8 weeks) [21]. Jerre found definite redisplacement in 20 (10%) hips in his series [13]. For prevention of additional slip of chronic SCFE groups, Betz, Steel, Emper, Huss and Clancy have shown effective treatment in 12 weeks, with a spica cast [7]. They reported one progression (8 weeks in a cast only) out of 37 hips. The range of time in which a redisplacement is possible is claimed by Waldenström to be approximately 1 year [22]. Wilson observed redisplacement occurring within 2 to 33 months (average, 11.8 months) from the start of the treatment [23]. In the present series, out of 106 hips, six (5.6%) were recorded with redisplacement (on average, 11 months after the cast had been removed), four following a traumatic episode.\nKing presented the use of bilateral short-leg cast immobilization as a form of treatment without chondrolysis [9]. In his work, 52 affected hips were recorded with satisfactory results. In the article, 33 short/long-leg casts in abduction and internal rotation were fixed with a stick; four chondrolysis were found, and, in 73 plaster spica casts, eight cases.\nThe disadvantages of immobilization in a spica cast include potential skin and pulmonary problems, ileus, and the difficulty in handling an obese child, in addition to problems involving education [7]. These disadvantages should be taken into consideration because of the risks of pinning by means of wires or screws, and the serious sequelae which include pin penetration, fracture, infection, pin breakage, growth disturbance, wound problems, subsequent slippage, difficulty in pin extraction during hardware removal, nail slipping into the joint, nail extruding, nails bending, avascular necrosis, as well as chondrolysis [7,14,15,24,25]. The global incidence of chondrolysis is 7% with all forms of treatment [26]. Chondrolysis can appear spontaneously after the slipping of the femoral epiphysis without any treatment, and may follow either a slight or a severe slip. It may occur after any type of treatment, whether conservative or operative [12].\nThese results show why some methods are in favor, and others are in disfavor, in the clinic where these patients were treated and where as, in all hospitals the facilities and limitations must be evaluated by every surgeon (Clarence H. 
Heyman, MD) [27].", "After analyzing the nonoperative treatment of slipped capital femoral epiphysis and chondrolysis, we concluded that the method was functional, efficient, valid, and reproducible; it can also be used as an alternative therapeutic procedure for this specific disease.\nThis manuscript addresses the fact that orthopaedic surgeons seldom employ or evaluate this treatment technique, which has received little attention in musculoskeletal studies of SCFE. The success or failure of a treatment intervention is determined based on its outcomes [28]. The presented work was evaluated and tested with respect to its contents, methodology and clinical usefulness. Modern medicine is based on evidence, and the importance of outcomes has to be proven. The quality of the instrument employed (the plaster cast method) was assessed not only by the surgeon, but also by the patient, through his or her own account. At the first appointment, the patient was always given the option to choose between conservative and surgical treatment. The nonoperative management of SCFE was accepted by relatives. The interest demonstrated by the patients in the reliability of the method made it possible to analyze the differences between the patients' reports and those of the professionals and their studies, allowing for varied outcomes. Evaluation in modern medicine must be based on evidence of the results and on functional and radiographic measurements, in addition to being statistically analyzed and including the patients' reports. The present work showed an alternative method for the treatment of slipped capital femoral epiphysis.", "Written informed consent was obtained from all patients and relevant parents/guardians for publication of this report and accompanying images.
[ null, "methods", null, null, null, null, null, null, null, null ]
[]
Sensorineural hearing loss after concurrent chemoradiotherapy in nasopharyngeal cancer patients.
21333025
Sensorineural hearing loss (SNHL) is one of the major long term side effects from radiation therapy (RT) in nasopharyngeal cancer (NPC) patients. This study aims to review the incidences of SNHL when treating with different radiation techniques. The additional objective is to determine the relationship of the SNHL with the radiation doses delivered to the inner ear.
BACKGROUND
A retrospective cohort study of 134 individual ears from 68 NPC patients, treated with conventional RT and IMRT in combination with chemotherapy from 2004-2008 was performed. Dosimetric data of the cochlea were analyzed. Significant SNHL was defined as >15 dB increase in bone conduction threshold at 4 kHz and PTA (pure tone average of 0.5, 1, 2 kHz). Relative risk (RR) was used to determine the associated factors with the hearing threshold changes at 4 kHz and PTA.
METHODS
Median audiological follow up time was 14 months. The incidence of high frequency (4 kHz) SNHL was 44% for the whole group (48.75% in the conventional RT, 37% with IMRT). Internal auditory canal mean dose of >50 Gy had shown a trend to increase the risk of high frequency SNHL (RR 2.02 with 95% CI 1.01-4.03, p=0.047).
RESULTS
IMRT and radiation dose limitation to the inner ear appeared to decrease SNHL.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Algorithms", "Antineoplastic Combined Chemotherapy Protocols", "Carcinoma", "Cohort Studies", "Combined Modality Therapy", "Ear, Inner", "Hearing Loss, Sensorineural", "Humans", "Incidence", "Middle Aged", "Nasopharyngeal Neoplasms", "Radiotherapy Dosage", "Radiotherapy, Adjuvant", "Retrospective Studies", "Young Adult" ]
3048471
null
null
Methods
The medical records, including radiation dosimetric data and audiological assessment of the 507 NPC patients receiving definitive RT at the division of Radiation Oncology, Siriraj Hospital from January 2004 to December 2008 were retrospectively reviewed under the approval of the Siriraj institutional review board. Two hundred and four NPC patients with T1-T4, N0-N3, M0 diseases (according to AJCC 1997 staging system) who completed RT courses with either conventional RT or IMRT with baseline pre RT audiograms were included. Patients were excluded from the study when they had no medical records, no post RT audiograms, or not completed RT. Patients who had tumor invasion into the inner ear or had a recurrent disease were also excluded. No patients were excluded because of a hearing impairment during RT. Patients who had severe hearing impairment (pure tone average: PTA, at 0.5, 1, 2 kHz > 50 dB in both ears) on pre RT audiograms were excluded. Each individual ear was evaluated independently for radiation doses and hearing status. Ultimately, 134 individual ears with intact hearing status were included for data analysis (Figure 1). Patients flow diagram. [SUBTITLE] Radiation therapy [SUBSECTION] The radiation technique has changed from conventional RT (before 2007) to IMRT (since 2007) due to machine evolution at our institute. After the start of the IMRT era, all patients with a curative aim were treated with IMRT. Therefore, 41 patients were treated with conventional RT and 27 patients were treated with IMRT. For the conventional RT, radiation was prescribed to a total dose of 66-70 Gy at 2 Gy per fraction, 5 fractions per week. All patients were treated with a Cobalt 60 teletherapy unit. Parallel opposed portals were used for the primary tumor site and the upper neck. Spinal cord and brainstem were mostly shielded at the dose of 46 Gy. This conventional field generally included the base of the skull, for which the inner ear was not intentionally protected by the posterior fossa block. The lower neck was routinely treated with the anterior split field. For IMRT, the target volumes and normal tissue structures were defined by using CT images. The gross target volume (GTV) consisted of the gross primary tumor and involved lymph nodes as defined by contrast enhancement CT. Generally, clinical target volume (CTV) high risk was defined by adding a 5-mm margin to GTV. A smaller margin (3 mm) was accepted for the margin that was in close proximity to the critical structures, such as brainstem, optic nerves and optic chiasm. CTV intermediate and low risk regions were contoured according to the RTOG recommendation [8]. Planning target volume (PTV) was defined by adding a 5-mm margin to the CTVs in all dimensions to include setup uncertainties. Radiation doses were prescribed simultaneously to total doses of 66-70 Gy to the high risk region, 59.4-63 Gy to the intermediate-risk region, and 50.4-57 Gy to the low-risk region, in 33-35 fractions. The primary tumor and the upper neck were treated with IMRT. For the lower neck region, either continuing IMRT with the upper neck part or with the anterior spilt field was allowed. The radiation technique has changed from conventional RT (before 2007) to IMRT (since 2007) due to machine evolution at our institute. After the start of the IMRT era, all patients with a curative aim were treated with IMRT. Therefore, 41 patients were treated with conventional RT and 27 patients were treated with IMRT. 
For the conventional RT, radiation was prescribed to a total dose of 66-70 Gy at 2 Gy per fraction, 5 fractions per week. All patients were treated with a Cobalt 60 teletherapy unit. Parallel opposed portals were used for the primary tumor site and the upper neck. Spinal cord and brainstem were mostly shielded at the dose of 46 Gy. This conventional field generally included the base of the skull, for which the inner ear was not intentionally protected by the posterior fossa block. The lower neck was routinely treated with the anterior split field. For IMRT, the target volumes and normal tissue structures were defined by using CT images. The gross target volume (GTV) consisted of the gross primary tumor and involved lymph nodes as defined by contrast enhancement CT. Generally, clinical target volume (CTV) high risk was defined by adding a 5-mm margin to GTV. A smaller margin (3 mm) was accepted for the margin that was in close proximity to the critical structures, such as brainstem, optic nerves and optic chiasm. CTV intermediate and low risk regions were contoured according to the RTOG recommendation [8]. Planning target volume (PTV) was defined by adding a 5-mm margin to the CTVs in all dimensions to include setup uncertainties. Radiation doses were prescribed simultaneously to total doses of 66-70 Gy to the high risk region, 59.4-63 Gy to the intermediate-risk region, and 50.4-57 Gy to the low-risk region, in 33-35 fractions. The primary tumor and the upper neck were treated with IMRT. For the lower neck region, either continuing IMRT with the upper neck part or with the anterior spilt field was allowed. [SUBTITLE] Dose Calculation of the Inner Ear [SUBSECTION] Dose calculation of the inner ear was not accessible for patients who received conventional RT. For the patients who were treated with IMRT, dose calculations to the inner ear were evaluated. Initially, the inner ears (cochlea and vestibule) were contoured and constrained (mean doses constraint of 35 Gy with doses accepted at 50 Gy) at the time of radiation treatment planning. Each of the inner ear structures was re-contoured (using bone window; window width = 2000 HU, window level = 400 HU) and reviewed by the authors (JP and AS) as in Figure 2. We defined the inner ear as a combination of the cochlea and vestibule. The purpose of the inner ear delineation was to compare its' dose with the prior studies that defined the inner ear as a cochlea for SNHL evaluation [6,7]. The IAC was contoured to evaluate the radiation doses to the cochlea nerve, which can be affected by radiation. The minimum dose, maximum dose and mean dose were recalculated for each part of the auditory pathway. Inner ear contouring. C: Cochlea, V: Vestibule, IAC: internal auditory canal Inner ear = cochlea(C) + vestibule (V). Dose calculation of the inner ear was not accessible for patients who received conventional RT. For the patients who were treated with IMRT, dose calculations to the inner ear were evaluated. Initially, the inner ears (cochlea and vestibule) were contoured and constrained (mean doses constraint of 35 Gy with doses accepted at 50 Gy) at the time of radiation treatment planning. Each of the inner ear structures was re-contoured (using bone window; window width = 2000 HU, window level = 400 HU) and reviewed by the authors (JP and AS) as in Figure 2. We defined the inner ear as a combination of the cochlea and vestibule. 
The purpose of the inner ear delineation was to compare its' dose with the prior studies that defined the inner ear as a cochlea for SNHL evaluation [6,7]. The IAC was contoured to evaluate the radiation doses to the cochlea nerve, which can be affected by radiation. The minimum dose, maximum dose and mean dose were recalculated for each part of the auditory pathway. Inner ear contouring. C: Cochlea, V: Vestibule, IAC: internal auditory canal Inner ear = cochlea(C) + vestibule (V). [SUBTITLE] Chemotherapy [SUBSECTION] Patients with locally advanced disease received concurrent intravenous platinum-based chemotherapy (Cisplatin or Carboplatin). Cisplatin was given with 100 mg/m2 every 3 weeks during the radiation course, followed by Cisplatin 80 mg/m2 at day 1 and 5-FU 1000 mg/m2 at day 1-4 every 3 weeks. Carboplatin was allowed in patients with poor renal function or were intolerant to Cisplatin. Carboplatin was given in a weekly fashion (AUC 2) during the radiation course, followed by adjuvant Carboplatin (AUC 5 at day 1) in combination with 5-FU (1000 mg/m2 at day 1-4) every 3 weeks. Type (Cisplatin or Carboplatin), doses, and cycles of chemotherapy were recorded. Patients with locally advanced disease received concurrent intravenous platinum-based chemotherapy (Cisplatin or Carboplatin). Cisplatin was given with 100 mg/m2 every 3 weeks during the radiation course, followed by Cisplatin 80 mg/m2 at day 1 and 5-FU 1000 mg/m2 at day 1-4 every 3 weeks. Carboplatin was allowed in patients with poor renal function or were intolerant to Cisplatin. Carboplatin was given in a weekly fashion (AUC 2) during the radiation course, followed by adjuvant Carboplatin (AUC 5 at day 1) in combination with 5-FU (1000 mg/m2 at day 1-4) every 3 weeks. Type (Cisplatin or Carboplatin), doses, and cycles of chemotherapy were recorded. [SUBTITLE] Audiological assessment [SUBSECTION] Pre and post RT audiological data were reviewed. The audiograms were ordered routinely for all patients at pre RT and post RT periods by ENT physicians per our hospital's policy. The bone conduction (BC) threshold was measured at 0.5-4 kHz to detect the early SNHL from the cochlea and/or IAC damages. BC threshold at 4 kHz was selected to represent the high frequency loss. The pure tone average (PTA), an average of threshold levels at 0.5 kHz, 1 kHz and 2 kHz, was chosen to reflect the threshold in the low frequency speech range [9,10]. Post RT audiograms (at least 6 months after completion of RT) were obtained at various intervals. The most recently performed audiograms were used for the analysis. Hearing threshold changes were determined by comparing with their pre RT baselines. As per the American Speech and Hearing Association guidelines, significant SNHL was defined as a ≥ 10 dB increase at two consecutive frequencies or ≥ 15 dB at one frequency. Hence, the cut-off point of ≥ 15 dB increase from baseline in BC threshold at 4 kHz was used as a criterion for SNHL in this study. The incidences of otitis media effusion (OME) and tympanic membrane perforation were documented at baseline and follow up. Influences from age, chemotherapy, OME, co-morbidities (DM and hypertension), radiation techniques and the radiation doses on the change of BC thresholds were assessed. Pre and post RT audiological data were reviewed. The audiograms were ordered routinely for all patients at pre RT and post RT periods by ENT physicians per our hospital's policy. 
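The per-structure dose statistics mentioned in the dose-calculation passage above (minimum, maximum and mean dose to the cochlea, inner ear and IAC) can be illustrated with a minimal sketch. The dose grid, the structure mask and the function name below are hypothetical stand-ins rather than the treatment-planning system actually used in the study; the 35 Gy mean-dose objective and the 50 Gy accepted limit are the planning constraints stated in the text.

```python
# Minimal sketch: per-structure dose statistics from a planning dose grid (toy data).
import numpy as np

def structure_dose_stats(dose_grid, mask):
    """Return (min, mean, max) dose in Gy over the voxels of a contoured structure."""
    voxels = dose_grid[mask]
    return voxels.min(), voxels.mean(), voxels.max()

# Toy 3D dose grid (Gy) and a boolean mask standing in for a contoured cochlea.
rng = np.random.default_rng(0)
dose_grid = rng.uniform(20.0, 70.0, size=(40, 40, 40))
cochlea_mask = np.zeros_like(dose_grid, dtype=bool)
cochlea_mask[18:22, 18:22, 18:22] = True

d_min, d_mean, d_max = structure_dose_stats(dose_grid, cochlea_mask)
print("Cochlea dose: min %.1f Gy, mean %.1f Gy, max %.1f Gy" % (d_min, d_mean, d_max))

# Planning constraint described above: mean-dose objective 35 Gy, accepted up to 50 Gy.
if d_mean <= 35.0:
    print("Mean-dose objective met")
elif d_mean <= 50.0:
    print("Within the accepted 50 Gy limit")
else:
    print("Constraint exceeded")
```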
The bone conduction (BC) threshold was measured at 0.5-4 kHz to detect early SNHL arising from cochlea and/or IAC damage. The BC threshold at 4 kHz was selected to represent high frequency loss. The pure tone average (PTA), an average of the threshold levels at 0.5 kHz, 1 kHz and 2 kHz, was chosen to reflect the threshold in the low frequency speech range [9,10]. Post RT audiograms (at least 6 months after completion of RT) were obtained at various intervals. The most recently performed audiograms were used for the analysis. Hearing threshold changes were determined by comparison with the pre RT baselines. As per the American Speech and Hearing Association guidelines, significant SNHL was defined as a ≥ 10 dB increase at two consecutive frequencies or ≥ 15 dB at one frequency. Hence, the cut-off point of a ≥ 15 dB increase from baseline in the BC threshold at 4 kHz was used as the criterion for SNHL in this study. The incidences of otitis media with effusion (OME) and tympanic membrane perforation were documented at baseline and follow up. The influences of age, chemotherapy, OME, co-morbidities (DM and hypertension), radiation technique and radiation dose on the change of BC thresholds were assessed. [SUBTITLE] Statistical methods [SUBSECTION] The statistical program STATA, version 8, was employed for data analysis. Relative risk (RR) with a 95% confidence interval (CI) was used to determine the relationship between the possible associated factors and the threshold changes at 4 kHz and PTA. We tested the null hypothesis that the relative risk was equal to 1 by calculating the chi-square test statistic. The statistical program STATA, version 8, was employed for data analysis. Relative risk (RR) with a 95% confidence interval (CI) was used to determine the relationship between the possible associated factors and the threshold changes at 4 kHz and PTA. We tested the null hypothesis that the relative risk was equal to 1 by calculating the chi-square test statistic.
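A hedged sketch of the outcome definition and risk estimate described in this Methods section is given below. The threshold values and the 2x2 counts are illustrative only (the study analysed 134 individual ears in STATA 8), and the confidence interval uses the standard log-normal approximation rather than the exact STATA output, so the printed numbers are not the published estimates.

```python
# Illustrative sketch of the PTA, SNHL criterion and relative-risk calculation (toy data).
import numpy as np
from scipy.stats import chi2_contingency

def pta(thresholds_db):
    """Pure tone average of the 0.5, 1 and 2 kHz bone-conduction thresholds (dB)."""
    return float(np.mean(thresholds_db))

def significant_snhl(pre_db, post_db, cutoff=15.0):
    """SNHL criterion used above: >= 15 dB increase over the pre-RT baseline."""
    return (post_db - pre_db) >= cutoff

# Example ear: hypothetical baseline and follow-up thresholds at 0.5/1/2 kHz.
pre_pta, post_pta = pta([10, 15, 20]), pta([20, 30, 40])
print("PTA change: %.1f dB, significant SNHL: %s"
      % (post_pta - pre_pta, significant_snhl(pre_pta, post_pta)))

def relative_risk(exposed_events, exposed_total, unexposed_events, unexposed_total):
    """Relative risk with a 95% CI from the usual log-normal approximation."""
    rr = (exposed_events / exposed_total) / (unexposed_events / unexposed_total)
    se_log = np.sqrt(1 / exposed_events - 1 / exposed_total
                     + 1 / unexposed_events - 1 / unexposed_total)
    lower, upper = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log)
    return rr, lower, upper

# Hypothetical 2x2 table: SNHL at 4 kHz by IAC mean dose (> 50 Gy vs <= 50 Gy).
rr, lower, upper = relative_risk(20, 40, 24, 94)
chi2, p, _, _ = chi2_contingency([[20, 20], [24, 70]], correction=False)
print("RR %.2f (95%% CI %.2f-%.2f), chi-square p = %.3f" % (rr, lower, upper, p))
```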
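Similarly, the audiological criteria described above (the PTA over 0.5-2 kHz, and the ≥ 15 dB increase in the 4 kHz bone-conduction threshold used as the SNHL criterion) can be expressed as a short sketch. The Audiogram structure, function names and threshold values below are hypothetical examples, not patient data.

```python
from dataclasses import dataclass

@dataclass
class Audiogram:
    """Bone-conduction thresholds (dB HL) keyed by frequency in Hz, for one ear."""
    bc: dict  # e.g. {500: 20, 1000: 25, 2000: 30, 4000: 35}

def pta(a: Audiogram) -> float:
    """Pure tone average: mean of the 0.5, 1 and 2 kHz thresholds."""
    return (a.bc[500] + a.bc[1000] + a.bc[2000]) / 3.0

def snhl_at_4khz(pre: Audiogram, post: Audiogram, cutoff_db: float = 15.0) -> bool:
    """Study criterion: >= 15 dB increase from baseline in the 4 kHz BC threshold."""
    return (post.bc[4000] - pre.bc[4000]) >= cutoff_db

# Hypothetical example ear (values for illustration only)
pre_rt = Audiogram(bc={500: 15, 1000: 20, 2000: 20, 4000: 25})
post_rt = Audiogram(bc={500: 20, 1000: 20, 2000: 25, 4000: 45})

print(f"PTA change: {pta(post_rt) - pta(pre_rt):.1f} dB")   # small change in the speech range
print(f"SNHL at 4 kHz: {snhl_at_4khz(pre_rt, post_rt)}")    # True: 20 dB increase at 4 kHz
```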
[ "Background", "Radiation therapy", "Dose Calculation of the Inner Ear", "Chemotherapy", "Audiological assessment", "Statistical methods", "Results", "Radiation doses to the inner ear", "Chemotherapy", "Treatment outcomes", "Audiological assessment and the incidences of post radiation SNHL", "Factors associated with the incidences of SNHL", "Radiation techniques", "Radiation doses to the cochlea, inner ear, and IAC", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Authors' information" ]
[ "Radiation therapy (RT) is the standard treatment for nasopharyngeal cancer (NPC) patients as a result of the relative radiosensitivity, deep location and the close proximity to the normal critical structures. High dose RT of ≥ 66 Gy in combination with chemotherapy has yielded a 5-year locoregional control for more than 80% of the patients with locally advanced disease [1-3]. Consequently, RT produces undesirable side effects on the adjacent organs. In addition to xerostomia, sensorineural hearing loss (SNHL), resulting from the cochlea damage, is one of the major long term side effects which impacts the patients' quality of life. With modern conformal radiation techniques, the incidence of radiation induced SNHL is expected to decline, due to a better visualization of the organs on the planning CT images and a better capability to spare the cochlea with a mean dose < 40-50 Gy [4-7].\nThis retrospective analytic study aims to report the incidences of SNHL of NPC patients receiving chemoradiotherapy with conventional RT comparing with intensity modulated radiation therapy (IMRT). To our knowledge, this study is the first one to compare hearing status between conventional RT and IMRT for NPC patients. As most earlier studies had some disagreement about the cochlea contouring for dose volume analysis, the further aim of this study is to evaluate radiation doses in each specific part of the inner ear [cochlea, inner ear (cochlea and vestibule) and internal auditory canal (IAC)] in correlation with the incidences of SNHL.", "The radiation technique has changed from conventional RT (before 2007) to IMRT (since 2007) due to machine evolution at our institute. After the start of the IMRT era, all patients with a curative aim were treated with IMRT. Therefore, 41 patients were treated with conventional RT and 27 patients were treated with IMRT. For the conventional RT, radiation was prescribed to a total dose of 66-70 Gy at 2 Gy per fraction, 5 fractions per week. All patients were treated with a Cobalt 60 teletherapy unit. Parallel opposed portals were used for the primary tumor site and the upper neck. Spinal cord and brainstem were mostly shielded at the dose of 46 Gy. This conventional field generally included the base of the skull, for which the inner ear was not intentionally protected by the posterior fossa block. The lower neck was routinely treated with the anterior split field.\nFor IMRT, the target volumes and normal tissue structures were defined by using CT images. The gross target volume (GTV) consisted of the gross primary tumor and involved lymph nodes as defined by contrast enhancement CT. Generally, clinical target volume (CTV) high risk was defined by adding a 5-mm margin to GTV. A smaller margin (3 mm) was accepted for the margin that was in close proximity to the critical structures, such as brainstem, optic nerves and optic chiasm. CTV intermediate and low risk regions were contoured according to the RTOG recommendation [8]. Planning target volume (PTV) was defined by adding a 5-mm margin to the CTVs in all dimensions to include setup uncertainties. Radiation doses were prescribed simultaneously to total doses of 66-70 Gy to the high risk region, 59.4-63 Gy to the intermediate-risk region, and 50.4-57 Gy to the low-risk region, in 33-35 fractions. The primary tumor and the upper neck were treated with IMRT. 
For the lower neck region, either continuing IMRT with the upper neck part or with the anterior spilt field was allowed.", "Dose calculation of the inner ear was not accessible for patients who received conventional RT. For the patients who were treated with IMRT, dose calculations to the inner ear were evaluated. Initially, the inner ears (cochlea and vestibule) were contoured and constrained (mean doses constraint of 35 Gy with doses accepted at 50 Gy) at the time of radiation treatment planning. Each of the inner ear structures was re-contoured (using bone window; window width = 2000 HU, window level = 400 HU) and reviewed by the authors (JP and AS) as in Figure 2. We defined the inner ear as a combination of the cochlea and vestibule. The purpose of the inner ear delineation was to compare its' dose with the prior studies that defined the inner ear as a cochlea for SNHL evaluation [6,7]. The IAC was contoured to evaluate the radiation doses to the cochlea nerve, which can be affected by radiation. The minimum dose, maximum dose and mean dose were recalculated for each part of the auditory pathway.\nInner ear contouring. C: Cochlea, V: Vestibule, IAC: internal auditory canal Inner ear = cochlea(C) + vestibule (V).", "Patients with locally advanced disease received concurrent intravenous platinum-based chemotherapy (Cisplatin or Carboplatin). Cisplatin was given with 100 mg/m2 every 3 weeks during the radiation course, followed by Cisplatin 80 mg/m2 at day 1 and 5-FU 1000 mg/m2 at day 1-4 every 3 weeks. Carboplatin was allowed in patients with poor renal function or were intolerant to Cisplatin. Carboplatin was given in a weekly fashion (AUC 2) during the radiation course, followed by adjuvant Carboplatin (AUC 5 at day 1) in combination with 5-FU (1000 mg/m2 at day 1-4) every 3 weeks. Type (Cisplatin or Carboplatin), doses, and cycles of chemotherapy were recorded.", "Pre and post RT audiological data were reviewed. The audiograms were ordered routinely for all patients at pre RT and post RT periods by ENT physicians per our hospital's policy. The bone conduction (BC) threshold was measured at 0.5-4 kHz to detect the early SNHL from the cochlea and/or IAC damages. BC threshold at 4 kHz was selected to represent the high frequency loss. The pure tone average (PTA), an average of threshold levels at 0.5 kHz, 1 kHz and 2 kHz, was chosen to reflect the threshold in the low frequency speech range [9,10]. Post RT audiograms (at least 6 months after completion of RT) were obtained at various intervals. The most recently performed audiograms were used for the analysis. Hearing threshold changes were determined by comparing with their pre RT baselines. As per the American Speech and Hearing Association guidelines, significant SNHL was defined as a ≥ 10 dB increase at two consecutive frequencies or ≥ 15 dB at one frequency. Hence, the cut-off point of ≥ 15 dB increase from baseline in BC threshold at 4 kHz was used as a criterion for SNHL in this study.\nThe incidences of otitis media effusion (OME) and tympanic membrane perforation were documented at baseline and follow up. Influences from age, chemotherapy, OME, co-morbidities (DM and hypertension), radiation techniques and the radiation doses on the change of BC thresholds were assessed.", "The statistics program STATA, version 8 was employed for data analysis. Relative risk (RR) with 95% confidential interval (CI) was used to determine the relationship between the possible associated factors and the threshold changes at 4 kHz and PTA. 
We tested the null hypothesis as to whether the relative risk was equal to 1 by calculating the chi-square test statistics.", "From January 2004 to December 2008, 68 patients (41 patients with conventional RT, 27 patients with IMRT) were enrolled for the hearing analysis. The patients' characteristics were shown in Table 1. Sixty six patients (97.1%) received concurrent chemoradiotherapy and only 2 patients (2.9%) received RT alone.\nPatient characteristics (Total 134 individual ears, 68 patients)\n[SUBTITLE] Radiation doses to the inner ear [SUBSECTION] For 41 patients who received conventional RT, dosimetric data were not available. For 27 patients who received IMRT, 54 ears were re-analyzed. Mean doses to the cochlea, inner ear and IAC were 51.02 Gy (range 25.09 - 75.54), 45.32 Gy (range 19.86-75.55) and 50.51 Gy (range 27.75-73.29), respectively.\nFor 41 patients who received conventional RT, dosimetric data were not available. For 27 patients who received IMRT, 54 ears were re-analyzed. Mean doses to the cochlea, inner ear and IAC were 51.02 Gy (range 25.09 - 75.54), 45.32 Gy (range 19.86-75.55) and 50.51 Gy (range 27.75-73.29), respectively.\n[SUBTITLE] Chemotherapy [SUBSECTION] Chemotherapy was given to 97% of the patients (66/68 patients). Most of the patients received concurrent chemoradiotherapy followed by adjuvant chemotherapy. Sixty two patients received Cisplatin, while 4 patients received Carboplatin. The total accumulative doses of Cisplatin ranged from 120 mg to 980 mg (median dose 689 mg, mean dose 639 ± 233 mg). Carboplatin accumulative doses ranged from 200 mg to 2100 mg (median dose 980 mg, mean 988 ± 670 mg).\nChemotherapy was given to 97% of the patients (66/68 patients). Most of the patients received concurrent chemoradiotherapy followed by adjuvant chemotherapy. Sixty two patients received Cisplatin, while 4 patients received Carboplatin. The total accumulative doses of Cisplatin ranged from 120 mg to 980 mg (median dose 689 mg, mean dose 639 ± 233 mg). Carboplatin accumulative doses ranged from 200 mg to 2100 mg (median dose 980 mg, mean 988 ± 670 mg).\n[SUBTITLE] Treatment outcomes [SUBSECTION] Median follow up time for all patients was 27.5 months (range 8-65 months). At the end of the study, 13 out of 68 patients were lost to follow up. The 2 year-progression free survival of this study group was 76.4% with a 2 year locoregional control of 88.5%.\nMedian follow up time for all patients was 27.5 months (range 8-65 months). At the end of the study, 13 out of 68 patients were lost to follow up. The 2 year-progression free survival of this study group was 76.4% with a 2 year locoregional control of 88.5%.\n[SUBTITLE] Audiological assessment and the incidences of post radiation SNHL [SUBSECTION] Pre RT audiograms demonstrated that 65.5% of the ears (88/134 ears) were normal or had mild BC hearing losses (16-25 dB) at 4 kHz. At PTA, 91% of the ears (122/134 ears) were normal or had mild hearing losses.\nPost RT audiograms were performed at different follow up intervals. The median follow up time of audiological assessment for all 68 patients was 14 months (range 6-43 months). Median audiological follow up times for conventional RT and IMRT groups were 15 months (range 6-43 months) and 13 months (range 6-29 months), respectively. For total of 68 patients (134 ears), the incidence of SNHL at high frequency (4 kHz) was 52.9% (unilateral loss 13/68 patients, bilateral loss 23/68 patients). 
At PTA, the incidence of SNHL was 10.3% (unilateral loss 6/68 patients, bilateral loss 1/68 patients). For individual ear evaluation, the incidences of SNHL were 44% (59/134 ears) and 6% (8/134 ears) at 4 kHz and PTA, respectively.\nPre RT audiograms demonstrated that 65.5% of the ears (88/134 ears) were normal or had mild BC hearing losses (16-25 dB) at 4 kHz. At PTA, 91% of the ears (122/134 ears) were normal or had mild hearing losses.\nPost RT audiograms were performed at different follow up intervals. The median follow up time of audiological assessment for all 68 patients was 14 months (range 6-43 months). Median audiological follow up times for conventional RT and IMRT groups were 15 months (range 6-43 months) and 13 months (range 6-29 months), respectively. For total of 68 patients (134 ears), the incidence of SNHL at high frequency (4 kHz) was 52.9% (unilateral loss 13/68 patients, bilateral loss 23/68 patients). At PTA, the incidence of SNHL was 10.3% (unilateral loss 6/68 patients, bilateral loss 1/68 patients). For individual ear evaluation, the incidences of SNHL were 44% (59/134 ears) and 6% (8/134 ears) at 4 kHz and PTA, respectively.\n[SUBTITLE] Factors associated with the incidences of SNHL [SUBSECTION] [SUBTITLE] Radiation techniques [SUBSECTION] With conventional RT, the incidences of SNHL were 48.75% (39/80 ears) at 4 kHz and 5% (4/80 ears) at PTA, respectively. With IMRT, the incidences of SNHL were 37% (20/54 ears) at 4 kHz and 7.4% (4/54 ears) at PTA, respectively.\nWith conventional RT, the incidences of SNHL were 48.75% (39/80 ears) at 4 kHz and 5% (4/80 ears) at PTA, respectively. With IMRT, the incidences of SNHL were 37% (20/54 ears) at 4 kHz and 7.4% (4/54 ears) at PTA, respectively.\n[SUBTITLE] Radiation doses to the cochlea, inner ear, and IAC [SUBSECTION] Mean radiation doses to the cochlea, inner ear and IAC in this study were about 50 Gy, 45 Gy and 50 Gy, respectively. The authors then evaluated the incidences of SNHL based upon the mean radiation doses to each inner ear structure as shown in Table 2.\nThe incidences of SNHL and the inner ear mean radiation dose (IMRT)\nOn univariate analysis; IMRT, cochlea mean dose ≤ 50 Gy, inner ear mean doses ≤ 45 Gy and IAC mean dose ≤ 50 Gy appeared to have lower incidences of SNHL at high frequency (4 kHz). The other associated factors, including Cisplatin doses, OME, age and co-morbidities of the patients were not demonstrated to affect the incidence of SNHL (Figure 3). At PTA, there was no significant factor affecting the incidence of SNHL (Figure 4).\nForest plot for relative risk of SNHL at 4 kHz.\nForest plot for relative risk of SNHL at PTA.\nBased on the literatures reviewed, most studies reported that the incidences of SNHL were impacted by mean cochlea doses in the range of 45-50 Gy [4,5,7,11]. We therefore did not re-explore the data as quantitative continuous variables. Instead, we re-validated the known cut-off point starting at 50 Gy, which was actually the mean cochlea dose in our study. The lowermost cut-off level at 45 Gy was chosen for analysis [7]. The data showed that the RR for mean dose of > 45 Gy was 1.77 (95% CI 0.82-4.24) at 4 kHz, compared to dose ≤ 45 Gy. This analysis showed that the incidence of SNHL was not significantly changed when mean cut-off doses to the cochlea decreased from 50 to 45 Gy.\nBased on the cochlea nerve tolerance of 54 Gy, we then explored the optimal radiation threshold to the IAC by creating a hypothesis with a cut-off point of 54 Gy. 
The data showed that the RR for a mean dose > 54 Gy was 2.25 (95% CI 1.14-4.17) compared to a mean dose ≤ 54 Gy at 4 Hz.\nAs radiation techniques had a potential effect on SNHL, we had analyzed the effect of IMRT on different variables with bivariate analysis. The bivariate analysis demonstrated that IMRT tended to decrease SNHL in younger patients (≤ 50 years old) or healthy patients without medical co-morbidities (DM and/or hypertension) (Table 3).\nBivariate analysis (Effect of IMRT on different variables)\nAs NPC patients required Cisplatin chemotherapy to ensure the local and distant control, SNHL was potentially worse when combined with high radiation doses to the hearing structures. The authors then performed a bivariate analysis to evaluate the effects of Cisplatin on radiation dose levels for each inner ear structure. An accumulative Cisplatin dose of > 600 mg was used in this analysis since the mean Cisplatin dose delivered was about 600 mg in our study. The data demonstrated that in patients who received higher dose of Cisplatin (> 600 mg), the incidences of SNHL tended to be higher if they received mean radiation dose of > 50 Gy to the cochlea and > 45 Gy to the inner ear (Table 4).\nBivariate analysis of Cisplatin effect on radiation dose levels.\nMean radiation doses to the cochlea, inner ear and IAC in this study were about 50 Gy, 45 Gy and 50 Gy, respectively. The authors then evaluated the incidences of SNHL based upon the mean radiation doses to each inner ear structure as shown in Table 2.\nThe incidences of SNHL and the inner ear mean radiation dose (IMRT)\nOn univariate analysis; IMRT, cochlea mean dose ≤ 50 Gy, inner ear mean doses ≤ 45 Gy and IAC mean dose ≤ 50 Gy appeared to have lower incidences of SNHL at high frequency (4 kHz). The other associated factors, including Cisplatin doses, OME, age and co-morbidities of the patients were not demonstrated to affect the incidence of SNHL (Figure 3). At PTA, there was no significant factor affecting the incidence of SNHL (Figure 4).\nForest plot for relative risk of SNHL at 4 kHz.\nForest plot for relative risk of SNHL at PTA.\nBased on the literatures reviewed, most studies reported that the incidences of SNHL were impacted by mean cochlea doses in the range of 45-50 Gy [4,5,7,11]. We therefore did not re-explore the data as quantitative continuous variables. Instead, we re-validated the known cut-off point starting at 50 Gy, which was actually the mean cochlea dose in our study. The lowermost cut-off level at 45 Gy was chosen for analysis [7]. The data showed that the RR for mean dose of > 45 Gy was 1.77 (95% CI 0.82-4.24) at 4 kHz, compared to dose ≤ 45 Gy. This analysis showed that the incidence of SNHL was not significantly changed when mean cut-off doses to the cochlea decreased from 50 to 45 Gy.\nBased on the cochlea nerve tolerance of 54 Gy, we then explored the optimal radiation threshold to the IAC by creating a hypothesis with a cut-off point of 54 Gy. The data showed that the RR for a mean dose > 54 Gy was 2.25 (95% CI 1.14-4.17) compared to a mean dose ≤ 54 Gy at 4 Hz.\nAs radiation techniques had a potential effect on SNHL, we had analyzed the effect of IMRT on different variables with bivariate analysis. 
The bivariate analysis demonstrated that IMRT tended to decrease SNHL in younger patients (≤ 50 years old) or healthy patients without medical co-morbidities (DM and/or hypertension) (Table 3).\nBivariate analysis (Effect of IMRT on different variables)\nAs NPC patients required Cisplatin chemotherapy to ensure the local and distant control, SNHL was potentially worse when combined with high radiation doses to the hearing structures. The authors then performed a bivariate analysis to evaluate the effects of Cisplatin on radiation dose levels for each inner ear structure. An accumulative Cisplatin dose of > 600 mg was used in this analysis since the mean Cisplatin dose delivered was about 600 mg in our study. The data demonstrated that in patients who received higher dose of Cisplatin (> 600 mg), the incidences of SNHL tended to be higher if they received mean radiation dose of > 50 Gy to the cochlea and > 45 Gy to the inner ear (Table 4).\nBivariate analysis of Cisplatin effect on radiation dose levels.\n[SUBTITLE] Radiation techniques [SUBSECTION] With conventional RT, the incidences of SNHL were 48.75% (39/80 ears) at 4 kHz and 5% (4/80 ears) at PTA, respectively. With IMRT, the incidences of SNHL were 37% (20/54 ears) at 4 kHz and 7.4% (4/54 ears) at PTA, respectively.\nWith conventional RT, the incidences of SNHL were 48.75% (39/80 ears) at 4 kHz and 5% (4/80 ears) at PTA, respectively. With IMRT, the incidences of SNHL were 37% (20/54 ears) at 4 kHz and 7.4% (4/54 ears) at PTA, respectively.\n[SUBTITLE] Radiation doses to the cochlea, inner ear, and IAC [SUBSECTION] Mean radiation doses to the cochlea, inner ear and IAC in this study were about 50 Gy, 45 Gy and 50 Gy, respectively. The authors then evaluated the incidences of SNHL based upon the mean radiation doses to each inner ear structure as shown in Table 2.\nThe incidences of SNHL and the inner ear mean radiation dose (IMRT)\nOn univariate analysis; IMRT, cochlea mean dose ≤ 50 Gy, inner ear mean doses ≤ 45 Gy and IAC mean dose ≤ 50 Gy appeared to have lower incidences of SNHL at high frequency (4 kHz). The other associated factors, including Cisplatin doses, OME, age and co-morbidities of the patients were not demonstrated to affect the incidence of SNHL (Figure 3). At PTA, there was no significant factor affecting the incidence of SNHL (Figure 4).\nForest plot for relative risk of SNHL at 4 kHz.\nForest plot for relative risk of SNHL at PTA.\nBased on the literatures reviewed, most studies reported that the incidences of SNHL were impacted by mean cochlea doses in the range of 45-50 Gy [4,5,7,11]. We therefore did not re-explore the data as quantitative continuous variables. Instead, we re-validated the known cut-off point starting at 50 Gy, which was actually the mean cochlea dose in our study. The lowermost cut-off level at 45 Gy was chosen for analysis [7]. The data showed that the RR for mean dose of > 45 Gy was 1.77 (95% CI 0.82-4.24) at 4 kHz, compared to dose ≤ 45 Gy. This analysis showed that the incidence of SNHL was not significantly changed when mean cut-off doses to the cochlea decreased from 50 to 45 Gy.\nBased on the cochlea nerve tolerance of 54 Gy, we then explored the optimal radiation threshold to the IAC by creating a hypothesis with a cut-off point of 54 Gy. 
The data showed that the RR for a mean dose > 54 Gy was 2.25 (95% CI 1.14-4.17) compared to a mean dose ≤ 54 Gy at 4 Hz.\nAs radiation techniques had a potential effect on SNHL, we had analyzed the effect of IMRT on different variables with bivariate analysis. The bivariate analysis demonstrated that IMRT tended to decrease SNHL in younger patients (≤ 50 years old) or healthy patients without medical co-morbidities (DM and/or hypertension) (Table 3).\nBivariate analysis (Effect of IMRT on different variables)\nAs NPC patients required Cisplatin chemotherapy to ensure the local and distant control, SNHL was potentially worse when combined with high radiation doses to the hearing structures. The authors then performed a bivariate analysis to evaluate the effects of Cisplatin on radiation dose levels for each inner ear structure. An accumulative Cisplatin dose of > 600 mg was used in this analysis since the mean Cisplatin dose delivered was about 600 mg in our study. The data demonstrated that in patients who received higher dose of Cisplatin (> 600 mg), the incidences of SNHL tended to be higher if they received mean radiation dose of > 50 Gy to the cochlea and > 45 Gy to the inner ear (Table 4).\nBivariate analysis of Cisplatin effect on radiation dose levels.\nMean radiation doses to the cochlea, inner ear and IAC in this study were about 50 Gy, 45 Gy and 50 Gy, respectively. The authors then evaluated the incidences of SNHL based upon the mean radiation doses to each inner ear structure as shown in Table 2.\nThe incidences of SNHL and the inner ear mean radiation dose (IMRT)\nOn univariate analysis; IMRT, cochlea mean dose ≤ 50 Gy, inner ear mean doses ≤ 45 Gy and IAC mean dose ≤ 50 Gy appeared to have lower incidences of SNHL at high frequency (4 kHz). The other associated factors, including Cisplatin doses, OME, age and co-morbidities of the patients were not demonstrated to affect the incidence of SNHL (Figure 3). At PTA, there was no significant factor affecting the incidence of SNHL (Figure 4).\nForest plot for relative risk of SNHL at 4 kHz.\nForest plot for relative risk of SNHL at PTA.\nBased on the literatures reviewed, most studies reported that the incidences of SNHL were impacted by mean cochlea doses in the range of 45-50 Gy [4,5,7,11]. We therefore did not re-explore the data as quantitative continuous variables. Instead, we re-validated the known cut-off point starting at 50 Gy, which was actually the mean cochlea dose in our study. The lowermost cut-off level at 45 Gy was chosen for analysis [7]. The data showed that the RR for mean dose of > 45 Gy was 1.77 (95% CI 0.82-4.24) at 4 kHz, compared to dose ≤ 45 Gy. This analysis showed that the incidence of SNHL was not significantly changed when mean cut-off doses to the cochlea decreased from 50 to 45 Gy.\nBased on the cochlea nerve tolerance of 54 Gy, we then explored the optimal radiation threshold to the IAC by creating a hypothesis with a cut-off point of 54 Gy. The data showed that the RR for a mean dose > 54 Gy was 2.25 (95% CI 1.14-4.17) compared to a mean dose ≤ 54 Gy at 4 Hz.\nAs radiation techniques had a potential effect on SNHL, we had analyzed the effect of IMRT on different variables with bivariate analysis. 
The bivariate analysis demonstrated that IMRT tended to decrease SNHL in younger patients (≤ 50 years old) or healthy patients without medical co-morbidities (DM and/or hypertension) (Table 3).\nBivariate analysis (Effect of IMRT on different variables)\nAs NPC patients required Cisplatin chemotherapy to ensure the local and distant control, SNHL was potentially worse when combined with high radiation doses to the hearing structures. The authors then performed a bivariate analysis to evaluate the effects of Cisplatin on radiation dose levels for each inner ear structure. An accumulative Cisplatin dose of > 600 mg was used in this analysis since the mean Cisplatin dose delivered was about 600 mg in our study. The data demonstrated that in patients who received higher dose of Cisplatin (> 600 mg), the incidences of SNHL tended to be higher if they received mean radiation dose of > 50 Gy to the cochlea and > 45 Gy to the inner ear (Table 4).\nBivariate analysis of Cisplatin effect on radiation dose levels.", "For 41 patients who received conventional RT, dosimetric data were not available. For 27 patients who received IMRT, 54 ears were re-analyzed. Mean doses to the cochlea, inner ear and IAC were 51.02 Gy (range 25.09 - 75.54), 45.32 Gy (range 19.86-75.55) and 50.51 Gy (range 27.75-73.29), respectively.", "Chemotherapy was given to 97% of the patients (66/68 patients). Most of the patients received concurrent chemoradiotherapy followed by adjuvant chemotherapy. Sixty two patients received Cisplatin, while 4 patients received Carboplatin. The total accumulative doses of Cisplatin ranged from 120 mg to 980 mg (median dose 689 mg, mean dose 639 ± 233 mg). Carboplatin accumulative doses ranged from 200 mg to 2100 mg (median dose 980 mg, mean 988 ± 670 mg).", "Median follow up time for all patients was 27.5 months (range 8-65 months). At the end of the study, 13 out of 68 patients were lost to follow up. The 2 year-progression free survival of this study group was 76.4% with a 2 year locoregional control of 88.5%.", "Pre RT audiograms demonstrated that 65.5% of the ears (88/134 ears) were normal or had mild BC hearing losses (16-25 dB) at 4 kHz. At PTA, 91% of the ears (122/134 ears) were normal or had mild hearing losses.\nPost RT audiograms were performed at different follow up intervals. The median follow up time of audiological assessment for all 68 patients was 14 months (range 6-43 months). Median audiological follow up times for conventional RT and IMRT groups were 15 months (range 6-43 months) and 13 months (range 6-29 months), respectively. For total of 68 patients (134 ears), the incidence of SNHL at high frequency (4 kHz) was 52.9% (unilateral loss 13/68 patients, bilateral loss 23/68 patients). At PTA, the incidence of SNHL was 10.3% (unilateral loss 6/68 patients, bilateral loss 1/68 patients). For individual ear evaluation, the incidences of SNHL were 44% (59/134 ears) and 6% (8/134 ears) at 4 kHz and PTA, respectively.", "[SUBTITLE] Radiation techniques [SUBSECTION] With conventional RT, the incidences of SNHL were 48.75% (39/80 ears) at 4 kHz and 5% (4/80 ears) at PTA, respectively. With IMRT, the incidences of SNHL were 37% (20/54 ears) at 4 kHz and 7.4% (4/54 ears) at PTA, respectively.\nWith conventional RT, the incidences of SNHL were 48.75% (39/80 ears) at 4 kHz and 5% (4/80 ears) at PTA, respectively. 
With IMRT, the incidences of SNHL were 37% (20/54 ears) at 4 kHz and 7.4% (4/54 ears) at PTA, respectively.\n[SUBTITLE] Radiation doses to the cochlea, inner ear, and IAC [SUBSECTION] Mean radiation doses to the cochlea, inner ear and IAC in this study were about 50 Gy, 45 Gy and 50 Gy, respectively. The authors then evaluated the incidences of SNHL based upon the mean radiation doses to each inner ear structure as shown in Table 2.\nThe incidences of SNHL and the inner ear mean radiation dose (IMRT)\nOn univariate analysis; IMRT, cochlea mean dose ≤ 50 Gy, inner ear mean doses ≤ 45 Gy and IAC mean dose ≤ 50 Gy appeared to have lower incidences of SNHL at high frequency (4 kHz). The other associated factors, including Cisplatin doses, OME, age and co-morbidities of the patients were not demonstrated to affect the incidence of SNHL (Figure 3). At PTA, there was no significant factor affecting the incidence of SNHL (Figure 4).\nForest plot for relative risk of SNHL at 4 kHz.\nForest plot for relative risk of SNHL at PTA.\nBased on the literatures reviewed, most studies reported that the incidences of SNHL were impacted by mean cochlea doses in the range of 45-50 Gy [4,5,7,11]. We therefore did not re-explore the data as quantitative continuous variables. Instead, we re-validated the known cut-off point starting at 50 Gy, which was actually the mean cochlea dose in our study. The lowermost cut-off level at 45 Gy was chosen for analysis [7]. The data showed that the RR for mean dose of > 45 Gy was 1.77 (95% CI 0.82-4.24) at 4 kHz, compared to dose ≤ 45 Gy. This analysis showed that the incidence of SNHL was not significantly changed when mean cut-off doses to the cochlea decreased from 50 to 45 Gy.\nBased on the cochlea nerve tolerance of 54 Gy, we then explored the optimal radiation threshold to the IAC by creating a hypothesis with a cut-off point of 54 Gy. The data showed that the RR for a mean dose > 54 Gy was 2.25 (95% CI 1.14-4.17) compared to a mean dose ≤ 54 Gy at 4 Hz.\nAs radiation techniques had a potential effect on SNHL, we had analyzed the effect of IMRT on different variables with bivariate analysis. The bivariate analysis demonstrated that IMRT tended to decrease SNHL in younger patients (≤ 50 years old) or healthy patients without medical co-morbidities (DM and/or hypertension) (Table 3).\nBivariate analysis (Effect of IMRT on different variables)\nAs NPC patients required Cisplatin chemotherapy to ensure the local and distant control, SNHL was potentially worse when combined with high radiation doses to the hearing structures. The authors then performed a bivariate analysis to evaluate the effects of Cisplatin on radiation dose levels for each inner ear structure. An accumulative Cisplatin dose of > 600 mg was used in this analysis since the mean Cisplatin dose delivered was about 600 mg in our study. The data demonstrated that in patients who received higher dose of Cisplatin (> 600 mg), the incidences of SNHL tended to be higher if they received mean radiation dose of > 50 Gy to the cochlea and > 45 Gy to the inner ear (Table 4).\nBivariate analysis of Cisplatin effect on radiation dose levels.\nMean radiation doses to the cochlea, inner ear and IAC in this study were about 50 Gy, 45 Gy and 50 Gy, respectively. 
The authors then evaluated the incidences of SNHL based upon the mean radiation doses to each inner ear structure as shown in Table 2.\nThe incidences of SNHL and the inner ear mean radiation dose (IMRT)\nOn univariate analysis; IMRT, cochlea mean dose ≤ 50 Gy, inner ear mean doses ≤ 45 Gy and IAC mean dose ≤ 50 Gy appeared to have lower incidences of SNHL at high frequency (4 kHz). The other associated factors, including Cisplatin doses, OME, age and co-morbidities of the patients were not demonstrated to affect the incidence of SNHL (Figure 3). At PTA, there was no significant factor affecting the incidence of SNHL (Figure 4).\nForest plot for relative risk of SNHL at 4 kHz.\nForest plot for relative risk of SNHL at PTA.\nBased on the literatures reviewed, most studies reported that the incidences of SNHL were impacted by mean cochlea doses in the range of 45-50 Gy [4,5,7,11]. We therefore did not re-explore the data as quantitative continuous variables. Instead, we re-validated the known cut-off point starting at 50 Gy, which was actually the mean cochlea dose in our study. The lowermost cut-off level at 45 Gy was chosen for analysis [7]. The data showed that the RR for mean dose of > 45 Gy was 1.77 (95% CI 0.82-4.24) at 4 kHz, compared to dose ≤ 45 Gy. This analysis showed that the incidence of SNHL was not significantly changed when mean cut-off doses to the cochlea decreased from 50 to 45 Gy.\nBased on the cochlea nerve tolerance of 54 Gy, we then explored the optimal radiation threshold to the IAC by creating a hypothesis with a cut-off point of 54 Gy. The data showed that the RR for a mean dose > 54 Gy was 2.25 (95% CI 1.14-4.17) compared to a mean dose ≤ 54 Gy at 4 Hz.\nAs radiation techniques had a potential effect on SNHL, we had analyzed the effect of IMRT on different variables with bivariate analysis. The bivariate analysis demonstrated that IMRT tended to decrease SNHL in younger patients (≤ 50 years old) or healthy patients without medical co-morbidities (DM and/or hypertension) (Table 3).\nBivariate analysis (Effect of IMRT on different variables)\nAs NPC patients required Cisplatin chemotherapy to ensure the local and distant control, SNHL was potentially worse when combined with high radiation doses to the hearing structures. The authors then performed a bivariate analysis to evaluate the effects of Cisplatin on radiation dose levels for each inner ear structure. An accumulative Cisplatin dose of > 600 mg was used in this analysis since the mean Cisplatin dose delivered was about 600 mg in our study. The data demonstrated that in patients who received higher dose of Cisplatin (> 600 mg), the incidences of SNHL tended to be higher if they received mean radiation dose of > 50 Gy to the cochlea and > 45 Gy to the inner ear (Table 4).\nBivariate analysis of Cisplatin effect on radiation dose levels.", "With conventional RT, the incidences of SNHL were 48.75% (39/80 ears) at 4 kHz and 5% (4/80 ears) at PTA, respectively. With IMRT, the incidences of SNHL were 37% (20/54 ears) at 4 kHz and 7.4% (4/54 ears) at PTA, respectively.", "Mean radiation doses to the cochlea, inner ear and IAC in this study were about 50 Gy, 45 Gy and 50 Gy, respectively. 
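As a companion to the cut-off analyses reported above, the sketch below shows one way per-ear records could be dichotomized at a mean-dose threshold and tabulated into the 2x2 counts behind a relative-risk estimate. The ears list, the function name and the 54 Gy threshold are illustrative assumptions, not the study data.

```python
# Hypothetical per-ear records: (mean IAC dose in Gy, SNHL at 4 kHz yes/no)
ears = [
    (62.0, True), (58.5, True), (55.2, False), (49.0, False),
    (47.3, True), (41.8, False), (36.4, False), (70.1, True),
]

def counts_for_cutoff(records, cutoff_gy):
    """Dichotomize the mean dose at cutoff_gy and return the 2x2 counts (a, b, c, d)."""
    a = sum(1 for dose, snhl in records if dose > cutoff_gy and snhl)       # exposed, event
    b = sum(1 for dose, snhl in records if dose > cutoff_gy and not snhl)   # exposed, no event
    c = sum(1 for dose, snhl in records if dose <= cutoff_gy and snhl)      # unexposed, event
    d = sum(1 for dose, snhl in records if dose <= cutoff_gy and not snhl)  # unexposed, no event
    return a, b, c, d

a, b, c, d = counts_for_cutoff(ears, cutoff_gy=54.0)
rr = (a / (a + b)) / (c / (c + d))
print(f"> 54 Gy: {a}/{a + b} ears with SNHL; <= 54 Gy: {c}/{c + d} ears with SNHL; RR = {rr:.2f}")
```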
[SUBTITLE] Discussion [SUBSECTION]
Radiation induced SNHL has been recognized as an important adverse effect which generally develops 6 to 24 months after radiation treatment and may progress to complete deafness [10,12]. The inner ear is the organ most susceptible to durable long term SNHL. The etiologies of RT induced SNHL are vascular insufficiency, a reduced number of capillaries, degeneration of endotheliocytes in vessels, loss of cells in the organ of Corti, atrophy and degeneration of the stria vascularis, and atrophy of the spiral ganglion cells and the cochlear nerve [13,14].
This damage is most prominent in the outer hair cells in the basal turn of the cochlea, which are responsible for transduction of higher frequency sound, so clinically significant SNHL at higher frequencies (> 2 kHz) may occur.
The reported incidences of radiation induced SNHL range from 0-65% with various radiation techniques (Table 5). Our study demonstrated incidences of SNHL of 44% (59/134 ears) at high frequency (4 kHz) and 6% (8/134 ears) at PTA for the whole population. Each study, however, was performed and evaluated with different criteria and follow up times.
Criteria and radiation doses to the cochlea in correlation with the incidences of SNHL
Conv RT = conventional radiation therapy; Conf RT = conformal radiation therapy
The median follow up time for audiological assessment in this study (14 months) was rather shorter than in the other studies. Nonetheless, radiation induced ototoxicity is typically evident 6-12 months after completion of radiation therapy [4]. Transient SNHL may occur in up to 41% of patients, as reported by Ho et al [12]. This study was not able to evaluate transient hearing loss because of its retrospective design, which was based upon different follow up times.
The hospital's policy is to routinely perform audiological assessment for all NPC patients. However, a number of patients in our study did not complete the audiological tests. We recognized that complete audiometric evaluation for every patient would be challenging in the absence of a prospective clinical trial. In this study, we compared the differences between pre and post RT audiograms rather than using a specific hearing threshold to define SNHL. We also excluded patients who had only post RT audiograms. This should diminish the bias from patients who underwent audiological exams because of hearing impairment after RT.
The prior studies included only patients who completed the audiological assessment. This would potentially alter the incidence of SNHL, since a certain number of patients never underwent the audiological exams. Also, the use of the contralateral ear as a baseline could create some inconsistency in the results [4,15,16]. Thus, we assumed that the results of our study would be adequate to report the incidence of SNHL, although we understood the weakness of retrospective data.
With regard to radiation technique, IMRT was found to have a lower incidence of SNHL than conventional RT (37% vs 48.75%). There was a trend towards a decreased incidence of SNHL with IMRT in our study (RR 0.76, 95% CI 0.5-1.15, favouring IMRT). The earlier studies of NPC treatment did not directly compare the incidences of SNHL between the conventional technique and conformal techniques. By indirect comparison among studies, the incidences of SNHL with the conformal techniques were not consistently lower than with the conventional technique, as shown in Table 5. Delineation of the normal structures and radiation dose constraints are very important in IMRT planning. IMRT can potentially deliver higher radiation doses to the cochlea than three dimensional conformal radiation therapy, or even than conventional RT, if the cochlea is not intentionally avoided [17].
In this study, we delineated both the cochlea and the inner ear (cochlea and vestibule), as there was some disagreement about cochlea delineation among the earlier studies [5,7]. Because of the tiny volume of the cochlea, target delineation is essential for dose-volume analysis, especially as the cochlea lies in the high dose gradient region of the IMRT plan.
The results from our study demonstrated that the incidence of high frequency SNHL tended to increase when the mean dose delivered to the cochlea was > 50 Gy. Earlier studies suggested that the incidence of SNHL increased with mean cochlea doses > 45-50 Gy [4,5,7,11]. However, our exploratory analysis showed that the incidence of SNHL did not change significantly when the mean cut-off dose to the cochlea was decreased from 50 to 45 Gy. Therefore, our study suggests that a mean cochlea dose of 50 Gy is reasonable, since an excessive dose constraint to the cochlea would potentially compromise coverage of the nearby targets.
Apart from the cochlea, the IAC should be considered, as the cochlear nerve traverses the canal to enter the brainstem. SNHL due to retro-cochlear (cochlear nerve) damage may occur, although this is relatively rare compared with cochlear damage [18]. IMRT could deliver doses of up to 66 Gy to the IAC if the IAC is not specified as an organ at risk [19]. This study demonstrated that IAC mean doses > 50 Gy showed a trend towards an increased incidence of high frequency (4 kHz) SNHL (RR 2.02, 95% CI 0.99-4.13). IAC dose limitation is also crucial, as patients who develop SNHL from cochlear nerve damage would not obtain any benefit from a hearing aid or even a cochlear implant.
Our study revealed a lower incidence (10.3%) of low frequency SNHL (PTA). This concurs with the other series, as high frequency loss (> 4 kHz) is the earliest sign of damage to the outer hair cells in the basal turn of the cochlea [19].
Another coexisting factor for high frequency SNHL is the combination of Cisplatin chemotherapy with RT, owing to a synergistic effect on the cochlea [4,5,15,20]. Some series reported that 600 mg/m2 [21] or a total dose of 1,050 mg [22] of Cisplatin increased the incidence of high frequency SNHL. As most of the patients in this study had locally advanced disease and received combination chemotherapy, the effect of Cisplatin on SNHL could not be evaluated directly. There was no apparent increase in the incidence of SNHL with a total cumulative dose of > 600 mg in this study. However, a further bivariate analysis revealed that dose limitation to the cochlea (< 50 Gy) and inner ear (< 45 Gy) would potentially protect against SNHL in patients who received Cisplatin chemotherapy to a cumulative dose of > 600 mg.
For the other associated factors, Kwong et al reported age, sex and post RT serous otitis media as significant prognostic factors for persistent SNHL on multivariate analysis [10]. Nonetheless, this study could not demonstrate any relationship between age, evidence of otitis media and/or medical co-morbidities and the incidence of SNHL.
Lastly, inter-fraction setup uncertainties for very small structures are crucial. Radiation dose evaluation on the computer plan is only an estimation of the actual dose delivered to the tiny inner ear during the radiation course.
[SUBTITLE] Conclusion [SUBSECTION]
Radiation therapy produces relatively high incidences of high frequency SNHL. The severity of the damage increases with higher radiation doses delivered to the inner ear structures. A mean radiation dose constraint of 50 Gy to the cochlea and IAC showed a trend towards decreasing the incidence of SNHL, especially in patients who received combination Cisplatin chemotherapy. Normal structure delineation and radiation dose constraints with modern radiation techniques are crucial to diminish long term SNHL and enhance quality of life, in addition to ensuring survival from the cancer.
[SUBTITLE] Competing interests [SUBSECTION]
The authors declare that they have no competing interests.
[SUBTITLE] Authors' contributions [SUBSECTION]
JP designed the study, analysed the data and prepared the manuscript. AS carried out data collection, data analysis and helped to draft the manuscript. KT participated in the design of the study and performed the statistical analysis. PK participated in its design and hearing data evaluation. KT participated in hearing data evaluation. YC participated in its design and helped to draft the manuscript. PP participated in its design and helped to draft the manuscript. All authors read and approved the final manuscript.
[SUBTITLE] Authors' information [SUBSECTION]
JP is an assistant professor in radiation oncology, focusing on head and neck cancer treatment. AS is a former radiation oncology resident and is now a radiation oncologist at the regional cancer centre. KT is an instructor in radiation oncology and a research facilitator in the radiation oncology field. PK is an assistant professor in oto-rhino-laryngology, focusing on head and neck cancer surgery. KT is an instructor in oto-rhino-laryngology, specialized in otology. YC is an associate professor in radiation oncology. PP is a professor in radiation oncology and head of the radiation oncology division.
[ "Background", "Methods", "Radiation therapy", "Dose Calculation of the Inner Ear", "Chemotherapy", "Audiological assessment", "Statistical methods", "Results", "Radiation doses to the inner ear", "Chemotherapy", "Treatment outcomes", "Audiological assessment and the incidences of post radiation SNHL", "Factors associated with the incidences of SNHL", "Radiation techniques", "Radiation doses to the cochlea, inner ear, and IAC", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Authors' information" ]
[ "Radiation therapy (RT) is the standard treatment for nasopharyngeal cancer (NPC) patients as a result of the relative radiosensitivity, deep location and the close proximity to the normal critical structures. High dose RT of ≥ 66 Gy in combination with chemotherapy has yielded a 5-year locoregional control for more than 80% of the patients with locally advanced disease [1-3]. Consequently, RT produces undesirable side effects on the adjacent organs. In addition to xerostomia, sensorineural hearing loss (SNHL), resulting from the cochlea damage, is one of the major long term side effects which impacts the patients' quality of life. With modern conformal radiation techniques, the incidence of radiation induced SNHL is expected to decline, due to a better visualization of the organs on the planning CT images and a better capability to spare the cochlea with a mean dose < 40-50 Gy [4-7].\nThis retrospective analytic study aims to report the incidences of SNHL of NPC patients receiving chemoradiotherapy with conventional RT comparing with intensity modulated radiation therapy (IMRT). To our knowledge, this study is the first one to compare hearing status between conventional RT and IMRT for NPC patients. As most earlier studies had some disagreement about the cochlea contouring for dose volume analysis, the further aim of this study is to evaluate radiation doses in each specific part of the inner ear [cochlea, inner ear (cochlea and vestibule) and internal auditory canal (IAC)] in correlation with the incidences of SNHL.", "The medical records, including radiation dosimetric data and audiological assessment of the 507 NPC patients receiving definitive RT at the division of Radiation Oncology, Siriraj Hospital from January 2004 to December 2008 were retrospectively reviewed under the approval of the Siriraj institutional review board.\nTwo hundred and four NPC patients with T1-T4, N0-N3, M0 diseases (according to AJCC 1997 staging system) who completed RT courses with either conventional RT or IMRT with baseline pre RT audiograms were included. Patients were excluded from the study when they had no medical records, no post RT audiograms, or not completed RT. Patients who had tumor invasion into the inner ear or had a recurrent disease were also excluded. No patients were excluded because of a hearing impairment during RT. Patients who had severe hearing impairment (pure tone average: PTA, at 0.5, 1, 2 kHz > 50 dB in both ears) on pre RT audiograms were excluded. Each individual ear was evaluated independently for radiation doses and hearing status. Ultimately, 134 individual ears with intact hearing status were included for data analysis (Figure 1).\nPatients flow diagram.\n[SUBTITLE] Radiation therapy [SUBSECTION] The radiation technique has changed from conventional RT (before 2007) to IMRT (since 2007) due to machine evolution at our institute. After the start of the IMRT era, all patients with a curative aim were treated with IMRT. Therefore, 41 patients were treated with conventional RT and 27 patients were treated with IMRT. For the conventional RT, radiation was prescribed to a total dose of 66-70 Gy at 2 Gy per fraction, 5 fractions per week. All patients were treated with a Cobalt 60 teletherapy unit. Parallel opposed portals were used for the primary tumor site and the upper neck. Spinal cord and brainstem were mostly shielded at the dose of 46 Gy. 
This conventional field generally included the base of the skull, for which the inner ear was not intentionally protected by the posterior fossa block. The lower neck was routinely treated with the anterior split field.\nFor IMRT, the target volumes and normal tissue structures were defined by using CT images. The gross target volume (GTV) consisted of the gross primary tumor and involved lymph nodes as defined by contrast enhancement CT. Generally, clinical target volume (CTV) high risk was defined by adding a 5-mm margin to GTV. A smaller margin (3 mm) was accepted for the margin that was in close proximity to the critical structures, such as brainstem, optic nerves and optic chiasm. CTV intermediate and low risk regions were contoured according to the RTOG recommendation [8]. Planning target volume (PTV) was defined by adding a 5-mm margin to the CTVs in all dimensions to include setup uncertainties. Radiation doses were prescribed simultaneously to total doses of 66-70 Gy to the high risk region, 59.4-63 Gy to the intermediate-risk region, and 50.4-57 Gy to the low-risk region, in 33-35 fractions. The primary tumor and the upper neck were treated with IMRT. For the lower neck region, either continuing IMRT with the upper neck part or with the anterior spilt field was allowed.\nThe radiation technique has changed from conventional RT (before 2007) to IMRT (since 2007) due to machine evolution at our institute. After the start of the IMRT era, all patients with a curative aim were treated with IMRT. Therefore, 41 patients were treated with conventional RT and 27 patients were treated with IMRT. For the conventional RT, radiation was prescribed to a total dose of 66-70 Gy at 2 Gy per fraction, 5 fractions per week. All patients were treated with a Cobalt 60 teletherapy unit. Parallel opposed portals were used for the primary tumor site and the upper neck. Spinal cord and brainstem were mostly shielded at the dose of 46 Gy. This conventional field generally included the base of the skull, for which the inner ear was not intentionally protected by the posterior fossa block. The lower neck was routinely treated with the anterior split field.\nFor IMRT, the target volumes and normal tissue structures were defined by using CT images. The gross target volume (GTV) consisted of the gross primary tumor and involved lymph nodes as defined by contrast enhancement CT. Generally, clinical target volume (CTV) high risk was defined by adding a 5-mm margin to GTV. A smaller margin (3 mm) was accepted for the margin that was in close proximity to the critical structures, such as brainstem, optic nerves and optic chiasm. CTV intermediate and low risk regions were contoured according to the RTOG recommendation [8]. Planning target volume (PTV) was defined by adding a 5-mm margin to the CTVs in all dimensions to include setup uncertainties. Radiation doses were prescribed simultaneously to total doses of 66-70 Gy to the high risk region, 59.4-63 Gy to the intermediate-risk region, and 50.4-57 Gy to the low-risk region, in 33-35 fractions. The primary tumor and the upper neck were treated with IMRT. For the lower neck region, either continuing IMRT with the upper neck part or with the anterior spilt field was allowed.\n[SUBTITLE] Dose Calculation of the Inner Ear [SUBSECTION] Dose calculation of the inner ear was not accessible for patients who received conventional RT. For the patients who were treated with IMRT, dose calculations to the inner ear were evaluated. 
Initially, the inner ears (cochlea and vestibule) were contoured and constrained (mean doses constraint of 35 Gy with doses accepted at 50 Gy) at the time of radiation treatment planning. Each of the inner ear structures was re-contoured (using bone window; window width = 2000 HU, window level = 400 HU) and reviewed by the authors (JP and AS) as in Figure 2. We defined the inner ear as a combination of the cochlea and vestibule. The purpose of the inner ear delineation was to compare its' dose with the prior studies that defined the inner ear as a cochlea for SNHL evaluation [6,7]. The IAC was contoured to evaluate the radiation doses to the cochlea nerve, which can be affected by radiation. The minimum dose, maximum dose and mean dose were recalculated for each part of the auditory pathway.\nInner ear contouring. C: Cochlea, V: Vestibule, IAC: internal auditory canal Inner ear = cochlea(C) + vestibule (V).\nDose calculation of the inner ear was not accessible for patients who received conventional RT. For the patients who were treated with IMRT, dose calculations to the inner ear were evaluated. Initially, the inner ears (cochlea and vestibule) were contoured and constrained (mean doses constraint of 35 Gy with doses accepted at 50 Gy) at the time of radiation treatment planning. Each of the inner ear structures was re-contoured (using bone window; window width = 2000 HU, window level = 400 HU) and reviewed by the authors (JP and AS) as in Figure 2. We defined the inner ear as a combination of the cochlea and vestibule. The purpose of the inner ear delineation was to compare its' dose with the prior studies that defined the inner ear as a cochlea for SNHL evaluation [6,7]. The IAC was contoured to evaluate the radiation doses to the cochlea nerve, which can be affected by radiation. The minimum dose, maximum dose and mean dose were recalculated for each part of the auditory pathway.\nInner ear contouring. C: Cochlea, V: Vestibule, IAC: internal auditory canal Inner ear = cochlea(C) + vestibule (V).\n[SUBTITLE] Chemotherapy [SUBSECTION] Patients with locally advanced disease received concurrent intravenous platinum-based chemotherapy (Cisplatin or Carboplatin). Cisplatin was given with 100 mg/m2 every 3 weeks during the radiation course, followed by Cisplatin 80 mg/m2 at day 1 and 5-FU 1000 mg/m2 at day 1-4 every 3 weeks. Carboplatin was allowed in patients with poor renal function or were intolerant to Cisplatin. Carboplatin was given in a weekly fashion (AUC 2) during the radiation course, followed by adjuvant Carboplatin (AUC 5 at day 1) in combination with 5-FU (1000 mg/m2 at day 1-4) every 3 weeks. Type (Cisplatin or Carboplatin), doses, and cycles of chemotherapy were recorded.\nPatients with locally advanced disease received concurrent intravenous platinum-based chemotherapy (Cisplatin or Carboplatin). Cisplatin was given with 100 mg/m2 every 3 weeks during the radiation course, followed by Cisplatin 80 mg/m2 at day 1 and 5-FU 1000 mg/m2 at day 1-4 every 3 weeks. Carboplatin was allowed in patients with poor renal function or were intolerant to Cisplatin. Carboplatin was given in a weekly fashion (AUC 2) during the radiation course, followed by adjuvant Carboplatin (AUC 5 at day 1) in combination with 5-FU (1000 mg/m2 at day 1-4) every 3 weeks. Type (Cisplatin or Carboplatin), doses, and cycles of chemotherapy were recorded.\n[SUBTITLE] Audiological assessment [SUBSECTION] Pre and post RT audiological data were reviewed. 
[SUBTITLE] Chemotherapy [SUBSECTION] Patients with locally advanced disease received concurrent intravenous platinum-based chemotherapy (Cisplatin or Carboplatin). Cisplatin was given at 100 mg/m2 every 3 weeks during the radiation course, followed by Cisplatin 80 mg/m2 on day 1 and 5-FU 1000 mg/m2 on days 1-4 every 3 weeks. Carboplatin was allowed in patients with poor renal function or intolerance to Cisplatin; it was given weekly (AUC 2) during the radiation course, followed by adjuvant Carboplatin (AUC 5 on day 1) in combination with 5-FU (1000 mg/m2 on days 1-4) every 3 weeks. The type (Cisplatin or Carboplatin), doses and cycles of chemotherapy were recorded.
[SUBTITLE] Audiological assessment [SUBSECTION] Pre- and post-RT audiological data were reviewed. Audiograms were ordered routinely for all patients before and after RT by ENT physicians, per our hospital's policy. The bone conduction (BC) threshold was measured at 0.5-4 kHz to detect early SNHL from cochlear and/or IAC damage. The BC threshold at 4 kHz was selected to represent high-frequency loss. The pure tone average (PTA), the average of the threshold levels at 0.5, 1 and 2 kHz, was chosen to reflect the threshold in the low-frequency speech range [9,10]. Post-RT audiograms (at least 6 months after completion of RT) were obtained at various intervals, and the most recently performed audiograms were used for the analysis. Hearing threshold changes were determined by comparison with the pre-RT baselines. As per the American Speech and Hearing Association guidelines, significant SNHL was defined as a ≥ 10 dB increase at two consecutive frequencies or a ≥ 15 dB increase at one frequency; hence, a ≥ 15 dB increase from baseline in the BC threshold at 4 kHz was used as the criterion for SNHL in this study.
The incidences of otitis media with effusion (OME) and tympanic membrane perforation were documented at baseline and follow-up. The influence of age, chemotherapy, OME, co-morbidities (DM and hypertension), radiation technique and radiation dose on the change in BC thresholds was assessed.
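To make the hearing endpoints concrete, the short sketch below computes the PTA as defined above and applies the 4 kHz criterion (a ≥ 15 dB increase over the pre-RT baseline). The threshold values are invented for illustration and are not patient data.

def pure_tone_average(thresholds_db):
    """PTA = mean bone-conduction threshold at 0.5, 1 and 2 kHz (dB HL)."""
    return sum(thresholds_db[f] for f in (500, 1000, 2000)) / 3.0

def snhl_at_4khz(pre_db, post_db, cutoff_db=15):
    """Study criterion: a >= 15 dB increase from baseline in the 4 kHz BC threshold."""
    return (post_db[4000] - pre_db[4000]) >= cutoff_db

# Hypothetical pre- and post-RT bone-conduction thresholds for one ear (dB HL).
pre = {500: 10, 1000: 10, 2000: 15, 4000: 20}
post = {500: 15, 1000: 15, 2000: 20, 4000: 40}

print("PTA before RT:", round(pure_tone_average(pre), 1))   # 11.7 dB
print("PTA after RT:", round(pure_tone_average(post), 1))   # 16.7 dB
print("SNHL at 4 kHz:", snhl_at_4khz(pre, post))            # True (20 dB increase)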
[SUBTITLE] Statistical methods [SUBSECTION] The statistics program STATA, version 8, was employed for data analysis. The relative risk (RR) with a 95% confidence interval (CI) was used to determine the relationship between the possible associated factors and the threshold changes at 4 kHz and PTA. We tested the null hypothesis that the relative risk was equal to 1 by calculating the chi-square test statistic.
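The relative-risk analysis can be sketched for a generic 2 x 2 table of exposed versus unexposed ears and SNHL versus no SNHL. This is a plain re-implementation of the standard formulas (a log-normal Wald interval for the RR and a Pearson chi-square statistic), not the STATA commands actually used, and the counts are placeholders.

import math

def relative_risk(a, b, c, d):
    """a/b: SNHL / no SNHL in exposed ears; c/d: SNHL / no SNHL in unexposed ears."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

def chi_square(a, b, c, d):
    """Pearson chi-square statistic (1 df) testing RR = 1."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Placeholder counts: ears with/without SNHL above vs below a dose cut-off.
rr, lo, hi = relative_risk(20, 10, 15, 25)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), chi2 = {chi_square(20, 10, 15, 25):.2f}")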
Results
From January 2004 to December 2008, 68 patients (41 treated with conventional RT, 27 with IMRT) were enrolled for the hearing analysis. The patients' characteristics are shown in Table 1. Sixty-six patients (97.1%) received concurrent chemoradiotherapy and only 2 patients (2.9%) received RT alone.
Patient characteristics (total of 134 individual ears, 68 patients)
[SUBTITLE] Radiation doses to the inner ear [SUBSECTION] For the 41 patients who received conventional RT, dosimetric data were not available. For the 27 patients who received IMRT, 54 ears were re-analyzed. Mean doses to the cochlea, inner ear and IAC were 51.02 Gy (range 25.09-75.54), 45.32 Gy (range 19.86-75.55) and 50.51 Gy (range 27.75-73.29), respectively.
[SUBTITLE] Chemotherapy [SUBSECTION] Chemotherapy was given to 97% of the patients (66/68). Most patients received concurrent chemoradiotherapy followed by adjuvant chemotherapy. Sixty-two patients received Cisplatin, while 4 patients received Carboplatin. The total cumulative doses of Cisplatin ranged from 120 mg to 980 mg (median 689 mg, mean 639 ± 233 mg); Carboplatin cumulative doses ranged from 200 mg to 2100 mg (median 980 mg, mean 988 ± 670 mg).
[SUBTITLE] Treatment outcomes [SUBSECTION] The median follow-up time for all patients was 27.5 months (range 8-65 months). At the end of the study, 13 of the 68 patients had been lost to follow-up. The 2-year progression-free survival of this study group was 76.4%, with a 2-year locoregional control of 88.5%.
[SUBTITLE] Audiological assessment and the incidences of post-radiation SNHL [SUBSECTION] Pre-RT audiograms demonstrated that 65.5% of the ears (88/134) were normal or had mild BC hearing loss (16-25 dB) at 4 kHz. At PTA, 91% of the ears (122/134) were normal or had mild hearing loss.
Post-RT audiograms were performed at different follow-up intervals. The median audiological follow-up time for all 68 patients was 14 months (range 6-43 months); for the conventional RT and IMRT groups it was 15 months (range 6-43 months) and 13 months (range 6-29 months), respectively. For the 68 patients (134 ears), the incidence of SNHL at high frequency (4 kHz) was 52.9% (unilateral loss in 13/68 patients, bilateral loss in 23/68 patients). At PTA, the incidence of SNHL was 10.3% (unilateral loss in 6/68 patients, bilateral loss in 1/68 patients). For individual-ear evaluation, the incidences of SNHL were 44% (59/134 ears) and 6% (8/134 ears) at 4 kHz and PTA, respectively.
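The patient-level and ear-level incidences quoted above follow directly from the reported counts; the small check below, using only those counts, reproduces them.

# Counts reported above: 68 patients, 134 evaluable ears.
patients, ears = 68, 134
unilateral_4khz, bilateral_4khz = 13, 23
unilateral_pta, bilateral_pta = 6, 1
ears_4khz, ears_pta = 59, 8

print(f"Patient-level SNHL at 4 kHz: {(unilateral_4khz + bilateral_4khz) / patients:.1%}")  # 52.9%
print(f"Patient-level SNHL at PTA: {(unilateral_pta + bilateral_pta) / patients:.1%}")      # 10.3%
print(f"Ear-level SNHL at 4 kHz: {ears_4khz / ears:.1%}")                                   # 44.0%
print(f"Ear-level SNHL at PTA: {ears_pta / ears:.1%}")                                      # 6.0%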
[SUBTITLE] Factors associated with the incidences of SNHL [SUBSECTION]
[SUBTITLE] Radiation techniques [SUBSECTION] With conventional RT, the incidences of SNHL were 48.75% (39/80 ears) at 4 kHz and 5% (4/80 ears) at PTA. With IMRT, the incidences of SNHL were 37% (20/54 ears) at 4 kHz and 7.4% (4/54 ears) at PTA.
[SUBTITLE] Radiation doses to the cochlea, inner ear, and IAC [SUBSECTION] Mean radiation doses to the cochlea, inner ear and IAC in this study were about 50 Gy, 45 Gy and 50 Gy, respectively. The authors then evaluated the incidences of SNHL according to the mean radiation dose to each inner ear structure, as shown in Table 2.
The incidences of SNHL and the inner ear mean radiation dose (IMRT)
On univariate analysis, IMRT, a cochlea mean dose ≤ 50 Gy, an inner ear mean dose ≤ 45 Gy and an IAC mean dose ≤ 50 Gy appeared to be associated with lower incidences of SNHL at high frequency (4 kHz). The other factors examined, including Cisplatin dose, OME, age and patient co-morbidities, were not shown to affect the incidence of SNHL (Figure 3). At PTA, no factor significantly affected the incidence of SNHL (Figure 4).
Forest plot of relative risk of SNHL at 4 kHz.
Forest plot of relative risk of SNHL at PTA.
Based on the literature reviewed, most studies reported that the incidence of SNHL was affected by mean cochlea doses in the range of 45-50 Gy [4,5,7,11]. We therefore did not re-explore the dose data as quantitative continuous variables. Instead, we re-validated the known cut-off point starting at 50 Gy, which was in fact the mean cochlea dose in our study; the lowest reported cut-off level, 45 Gy, was also chosen for analysis [7]. The RR for a mean dose > 45 Gy was 1.77 (95% CI 0.82-4.24) at 4 kHz, compared with a dose ≤ 45 Gy. This analysis showed that the incidence of SNHL did not change significantly when the mean cut-off dose to the cochlea was decreased from 50 to 45 Gy.
Based on the cochlear nerve tolerance of 54 Gy, we then explored the optimal radiation threshold for the IAC using a cut-off point of 54 Gy. The RR for a mean dose > 54 Gy was 2.25 (95% CI 1.14-4.17) at 4 kHz, compared with a mean dose ≤ 54 Gy.
As radiation technique had a potential effect on SNHL, we analyzed the effect of IMRT across different variables with a bivariate analysis. This analysis demonstrated that IMRT tended to decrease SNHL in younger patients (≤ 50 years old) and in patients without medical co-morbidities (DM and/or hypertension) (Table 3).
Bivariate analysis (effect of IMRT on different variables)
As NPC patients require Cisplatin chemotherapy to ensure local and distant control, SNHL was potentially worse when Cisplatin was combined with high radiation doses to the hearing structures. The authors therefore performed a bivariate analysis to evaluate the effect of Cisplatin at different radiation dose levels for each inner ear structure. A cumulative Cisplatin dose of > 600 mg was used in this analysis, since the mean Cisplatin dose delivered in our study was about 600 mg. The data demonstrated that in patients who received a higher dose of Cisplatin (> 600 mg), the incidence of SNHL tended to be higher if they received a mean radiation dose of > 50 Gy to the cochlea and > 45 Gy to the inner ear (Table 4).
Bivariate analysis of Cisplatin effect on radiation dose levels.
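The bivariate analyses above amount to computing the relative risk within each stratum (for example, the effect of a cochlea dose cut-off separately in ears from patients who received more or less than 600 mg of cumulative Cisplatin). A minimal self-contained sketch with invented counts could look like this; it is not the analysis code used in the study.

def risk_ratio(events_hi, n_hi, events_lo, n_lo):
    """Relative risk of SNHL in the high-dose versus low-dose group."""
    return (events_hi / n_hi) / (events_lo / n_lo)

# Invented ear counts, stratified by cumulative Cisplatin dose.
strata = {
    "Cisplatin > 600 mg": {"events_hi": 18, "n_hi": 30, "events_lo": 8, "n_lo": 28},
    "Cisplatin <= 600 mg": {"events_hi": 7, "n_hi": 20, "events_lo": 6, "n_lo": 22},
}

for name, s in strata.items():
    rr = risk_ratio(s["events_hi"], s["n_hi"], s["events_lo"], s["n_lo"])
    print(f"{name}: RR for cochlea mean dose > 50 Gy = {rr:.2f}")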
Discussion
Radiation-induced SNHL has been recognized as an important adverse effect that generally develops 6 to 24 months after radiation treatment and may progress to complete deafness [10,12]. The inner ear is the organ most susceptible to durable long-term SNHL. The etiologies of RT-induced SNHL are vascular insufficiency, a reduced number of capillaries, degeneration of vessel endothelial cells, loss of cells in the organ of Corti, atrophy and degeneration of the stria vascularis, and atrophy of the spiral ganglion cells and the cochlear nerve [13,14]. This damage is more prominent in the outer hair cells of the basal turn of the cochlea, which are responsible for transduction of higher-frequency sound, so clinically significant SNHL at higher frequencies (> 2 kHz) may occur.
The incidence of radiation-induced SNHL has been reported in the range of 0-65% with various radiation techniques (Table 5). Our study demonstrated incidences of SNHL of 44% (59/134 ears) at high frequency (4 kHz) and 6% (8/134 ears) at PTA for the whole population. Each study, however, was performed and evaluated with different criteria and follow-up times.
Criteria and radiation doses to the cochlea in correlation with the incidences of SNHL
Conv RT = conventional radiation therapy; Conf RT = conformal radiation therapy
The median follow-up time for audiological assessment in this study (14 months) was rather shorter than in other studies. Nonetheless, radiation-induced ototoxicity is typically evident 6-12 months after completion of radiation therapy [4]. Transient SNHL may occur in up to 41% of patients, as reported by Ho et al [12]. This study was not able to evaluate transient hearing loss because of its retrospective design, which was based on varying follow-up times.
The hospital's policy is to routinely perform audiological assessment for all NPC patients. However, a number of patients in our study did not complete the audiological tests. We recognized that complete audiometric evaluation for every patient would be challenging in the absence of a prospective clinical trial. In this study, we compared the differences between pre- and post-RT audiograms rather than using a specific hearing threshold to define SNHL, and we excluded patients who had only post-RT audiograms.
This should diminish the bias from patients who underwent audiological exams because of hearing impairment after RT.
Prior studies included only patients who had completed audiological assessment, which could alter the reported incidence of SNHL because a certain number of patients never underwent audiological exams; the use of the contralateral ear as a baseline could also create some inconsistency in the results [4,15,16]. Thus, we considered the results of our study adequate to report the incidence of SNHL, although we acknowledge the weaknesses of retrospective data.
With regard to radiation technique, IMRT was associated with a lower incidence of SNHL than conventional RT (37% vs 48.75%), and there was a trend toward a decreased incidence of SNHL with IMRT in our study (RR 0.76, 95% CI 0.5-1.15, favouring IMRT). Earlier studies of NPC treatment did not directly compare the incidence of SNHL between conventional and conformal techniques, and by indirect comparison among studies the incidences of SNHL with conformal techniques were not consistently lower than with the conventional technique, as shown in Table 5. Delineation of the normal structures and radiation dose constraints are very important in IMRT planning; IMRT can deliver higher radiation doses to the cochlea than three-dimensional conformal radiation therapy, or even than conventional RT, if the cochlea is not intentionally avoided [17].
In this study, we delineated both the cochlea and the inner ear, as there was some disagreement about cochlea delineation among earlier studies [5,7]. Because of the tiny volume of the cochlea, target delineation is essential for dose-volume analysis, especially since the cochlea lies in the high-dose gradient of IMRT.
The results of our study demonstrated that the incidence of high-frequency SNHL tended to increase when the mean dose delivered to the cochlea was > 50 Gy. Earlier studies suggested that the incidence of SNHL increased with mean cochlea doses of > 45-50 Gy [4,5,7,11]. However, our exploratory analysis showed that the incidence of SNHL did not change significantly when the mean cut-off dose to the cochlea was decreased from 50 to 45 Gy. Therefore, our study suggests that a mean cochlea dose constraint of 50 Gy is reasonable, since an excessively tight dose constraint to the cochlea could compromise coverage of the nearby targets.
Apart from the cochlea, the IAC deserves attention, as the cochlear nerve traverses the canal before entering the brainstem. SNHL due to retro-cochlear (cochlear nerve) damage may occur, although it is relatively rare compared with cochlear damage [18]. IMRT can deliver doses of up to 66 Gy to the IAC if the IAC is not specified as an organ at risk [19]. This study demonstrated that IAC mean doses > 50 Gy showed a trend toward an increased incidence of high-frequency (4 kHz) SNHL (RR 2.02, 95% CI 0.99-4.13). IAC dose limitation is also crucial, as patients who develop SNHL from cochlear nerve damage would not benefit from a hearing aid or even cochlear implantation.
Our study revealed a lower incidence (10.3%) of low-frequency SNHL (PTA).
This concurred with other series, as high-frequency loss (> 4 kHz) is the earliest sign of damage to the outer hair cells in the basal turn of the cochlea [19].
Another coexisting factor for high-frequency SNHL is the combination of Cisplatin chemotherapy with RT, owing to a synergistic effect on the cochlea [4,5,15,20]. Some series reported that 600 mg/m2 [21] or a total dose of 1,050 mg [22] of Cisplatin increased the incidence of high-frequency SNHL. As most of the patients in this study had locally advanced disease and received combined chemotherapy, the effect of Cisplatin on SNHL could not be evaluated directly. There was no apparent increase in the incidence of SNHL with a total cumulative dose of > 600 mg in this study. However, a further bivariate analysis suggested that dose limitation to the cochlea (< 50 Gy) and inner ear (< 45 Gy) could protect against SNHL in patients who received Cisplatin to a cumulative dose of > 600 mg.
For the other associated factors, Kwong et al reported age, sex and post-RT serous otitis media as significant prognostic factors for persistent SNHL on multivariate analysis [10]. Nonetheless, this study could not demonstrate any relationship between age, evidence of otitis media, or medical co-morbidities and the incidence of SNHL.
Lastly, inter-fraction setup uncertainty for very small structures is crucial: the radiation dose evaluated on the treatment plan is only an estimate of the actual dose delivered to the tiny inner ear during the radiation course.
Conclusions
Radiation therapy produces a relatively high incidence of high-frequency SNHL, and the severity of the damage increases with higher radiation doses delivered to the inner ear structures. A mean radiation dose constraint of 50 Gy to the cochlea and IAC showed a trend toward decreasing the incidence of SNHL, especially in patients who also received Cisplatin chemotherapy. Delineation of normal structures and radiation dose constraints with modern radiation techniques are crucial to diminish long-term SNHL and enhance quality of life, in addition to ensuring survival from the cancer.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
JP designed the study, analysed the data and prepared the manuscript. AS carried out data collection and data analysis and helped to draft the manuscript. KT participated in the design of the study and performed the statistical analysis. PK participated in its design and in hearing data evaluation. KT participated in hearing data evaluation. YC participated in its design and helped to draft the manuscript. PP participated in its design and helped to draft the manuscript. All authors read and approved the final manuscript.
JP is an assistant professor in radiation oncology, focusing on head and neck cancer treatment. AS is a former radiation oncology resident and is now a radiation oncologist at the regional cancer centre. KT is an instructor in radiation oncology and a research facilitator in the radiation oncology field. PK is an assistant professor in oto-rhino-laryngology, focusing on head and neck cancer surgery. KT is an instructor in oto-rhino-laryngology, specialized in otology. YC is an associate professor in radiation oncology. PP is a professor in radiation oncology and head of the radiation oncology division.
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Combined subsegmentectomy: postoperative pulmonary function compared to multiple segmental resection.
21333026
For small peripheral c-T1N0M0 non-small cell lung cancers involving multiple segments, we have conducted a resection of subsegments belonging to different segments, i.e. combined subsegmentectomy (CSS), to avoid resection of multiple segments or lobectomy. Tumor size, location of tumor, and forced expiratory volume in 1 second (FEV1) of each preserved lobe were compared among the CSS, resection of single segment, and that of multiple segments.
BACKGROUND
FEV1 of each preserved lobe was examined in 17 patients who underwent CSS, 56 who underwent resection of a single segment, and 41 who underwent resection of multiple segments, by measuring pulmonary function and lung-perfusion single-photon-emission computed tomography and computed tomography before and after surgery.
METHODS
Tumor size in the CSS group was significantly smaller than in the multiple-segment resection group (1.4±0.5 vs. 2.0±0.8 cm, p=0.002). Tumors in the CSS group were located in the right upper lobe more frequently than those in the multiple-segment resection group (53% vs. 5%, p<0.001). Postoperative FEV1 of each lobe after CSS was higher than that after resection of multiple segments (0.3±0.2 vs. 0.2±0.2 l, p=0.07). Mean FEV1 of each preserved lobe per subsegment after CSS was significantly higher than that after resection of multiple segments (0.05±0.03 vs. 0.03±0.02 l, p=0.02). There was no significant difference in these factors between CSS and single-segment resection.
RESULTS
The CSS is effective for preserving pulmonary function of each lobe, especially for small sized lung cancer involving multiple segments in the right upper lobe, which has fewer segments than other lobes.
CONCLUSIONS
[ "Aged", "Carcinoma, Non-Small-Cell Lung", "Female", "Follow-Up Studies", "Forced Expiratory Flow Rates", "Humans", "Lung", "Lung Neoplasms", "Male", "Middle Aged", "Pneumonectomy", "Postoperative Period", "Respiratory Function Tests", "Tomography, Emission-Computed, Single-Photon", "Treatment Outcome" ]
3050688
null
null
Methods
[SUBTITLE] Eligibility [SUBSECTION] The Ethics Committees of Kumamoto University Hospital approved the study protocol for sublobar resection in patients with c-T1N0M0 NSCLC. Informed consent was obtained from all patients after a comprehensive discussion of the risks and benefits of the proposed procedures [8,9].
[SUBTITLE] Indications for Segmentectomy and Subsegmentectomy [SUBSECTION] The criteria for segmentectomy were the following: (1) peripheral c-T1N0M0 NSCLC less than 3 cm in diameter; (2) intraoperative frozen sections of lymph nodes showing no metastasis; and (3) a surgical margin of at least 2 cm from the tumor obtainable with CSS. CSS or multiple segmentectomy was further indicated for tumors involving multiple segments, which were identified on serial sections of the axial, sagittal and coronal views of multidetector CT images using Digital Imaging and Communications in Medicine data.
[SUBTITLE] Combined Segmentectomy Procedure [SUBSECTION] Segmentectomy, including CSS, was performed via open thoracotomy under one-lung ventilation as follows: (1) pulmonary arteries and bronchi with tumor involvement were identified; (2) after the entire lung had been inflated, the bronchi of the involved segment or subsegment were ligated and cut to clarify the boundary between the subsegments to be preserved and those to be resected, according to a previously reported technique [10]; (3) one-lung ventilation was restarted, which caused the lung tissue designated for preservation to lose gas and collapse while keeping the segments or subsegments designated for resection inflated, thereby clarifying the border between the lung tissue to be resected and that to be preserved; (4) the lung was cut with electrocautery between the inflated lung tissue to be resected and the deflated tissue to be preserved, enabling resection of the targeted segments or subsegments; and (5) the cut surface of the lung was covered with a polyglycolic acid sheet (Neoveil; Gunze Ltd., Kyoto, Japan) and fibrin glue to prevent postoperative air leakage.
[SUBTITLE] Patients [SUBSECTION] From April 2005 to March 2009, 248 patients with c-T1N0M0 NSCLC were treated with surgery. Of these, 198 patients (79%) were treated by segmentectomy. Of the 198 patients, CSS was conducted in 32 (16%); the other 166 patients were treated by resection of a single segment (single segmentectomy, n = 97) or resection of multiple segments (multiple segmentectomy, n = 69).
[SUBTITLE] Pulmonary Function Tests [SUBSECTION] Vital capacity (VC), forced vital capacity (FVC) and FEV1 were measured before and more than 6 months after surgery with the patient in a seated position, using a dry rolling-seal spirometer (CHESTAC-9800DN; Chest Inc., Tokyo, Japan) according to American Thoracic Society standards [11].
[SUBTITLE] SPECT/CT [SUBSECTION] The SPECT/CT system was composed of a commercially available gantry-free SPECT unit with dual-head detectors (Skylight; ADAC Laboratories, Milpitas, Calif) and an 8-multidetector-row CT scanner (Light-Speed Ultra Instrument; General Electric, Milwaukee, Wis). A dose of 185 MBq of 99mTc-macroaggregated human serum albumin (Daiichi Radioisotope Laboratories, Ltd, Tokyo, Japan) was administered intravenously. The two scans were performed sequentially, and the SPECT images were manually fused with the CT images on a workstation (AZE Virtual Place; AZE Co Ltd, Tokyo, Japan) [5,7]. Postoperative SPECT/CT was conducted together with the pulmonary function test more than 6 months after surgery.
[SUBTITLE] Measurement of Pulmonary Function of Each Lobe [SUBSECTION] Images of the lobe before segmentectomy and of the lobe remaining after segmentectomy were traced on the CT image with a region of interest, within which the radioisotope (RI) counts were measured on the SPECT image (Figure 2).
Images before and after segmentectomy. (a) Coronal CT image before surgery, showing a lung cancer (arrow) in segment 2b of the right upper lobe. (b) Coronal perfusion SPECT/CT image of the right upper lobe before the operation. (c) Coronal perfusion SPECT/CT image of the remaining right upper lobe after resection of S2b and S3a.
The FEV1 of the lobe before (A) and after (B) segmentectomy was calculated from the preoperative or postoperative SPECT/CT according to the following formulae:
A = preoperative FEV1 × [RI counts of the lobe / RI counts of the whole lung]
B = postoperative FEV1 × [RI counts of the lobe / RI counts of the whole lung]
The postoperative FEV1 of the lobe per preserved subsegment (C) was calculated as:
C = B / number of preserved subsegments of the lobe
[SUBTITLE] Statistical Analysis [SUBSECTION] Student's t-test was used to compare tumor size, the number of resected or preserved subsegments, and preserved FEV1 among the CSS, single segmentectomy and multiple segmentectomy groups. Pearson's χ2 test was used to compare the location of tumors among the three groups. SPSS software (SPSS Inc., Chicago, Illinois) was used for these analyses. Values of p < 0.05 were accepted as significant. All values in the text and tables are given as mean ± SD.
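The per-lobe FEV1 formulae (A, B and C) in the Measurement of Pulmonary Function of Each Lobe subsection can be written as a short sketch. The FEV1 values, RI counts and subsegment number below are placeholders, not patient data.

def lobe_fev1(global_fev1_l, lobe_counts, whole_lung_counts):
    """FEV1 attributed to one lobe = global FEV1 x (lobe RI counts / whole-lung RI counts)."""
    return global_fev1_l * lobe_counts / whole_lung_counts

# Placeholder values for one patient.
pre_fev1, post_fev1 = 2.4, 2.1            # litres
pre_lobe_counts, post_lobe_counts = 150_000, 90_000
pre_lung_counts, post_lung_counts = 900_000, 850_000
preserved_subsegments = 4                 # subsegments remaining in the operated lobe

A = lobe_fev1(pre_fev1, pre_lobe_counts, pre_lung_counts)     # lobe FEV1 before surgery
B = lobe_fev1(post_fev1, post_lobe_counts, post_lung_counts)  # lobe FEV1 after surgery
C = B / preserved_subsegments                                 # FEV1 per preserved subsegment

print(f"A = {A:.2f} L, B = {B:.2f} L, C = {C:.3f} L per subsegment")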
null
null
null
null
[ "Background", "Eligibility", "Indications for Segmentectomy and Subsegmentectomy", "Combined Segmentectomy Procedure", "Patients", "Pulmonary Function Tests", "SPECT/CT", "Measurement of Pulmonary Function of Each Lobe", "Statistical Analysis", "Results", "Discussion", "Abbreviations", "Competing interests", "Authors' contributions" ]
[ "Advances in high-resolution CT scanning have led to frequent detection of peripheral T1N0M0 non-small cell lung cancers (NSCLCs). Several studies have demonstrated the effectiveness of segmentectomy, regarding not only preservation of pulmonary function but also prognosis [1-4]. However, for small peripheral c-T1N0M0 NSCLCs involving multiple segments, resection of entire segments damages pulmonary function to the same extent as lobectomy. To evaluate local pulmonary function, a lung-perfusion single-photon-emission computed tomography (SPECT) and computed tomography (SPECT/CT) is a reliable tool [5,6]. We recently examined the forced expiratory volume in 1 second (FEV1) of each lobe after segmentectomy by using a lung-perfusion SPECT/CT. The results showed that the FEV1 of the preserved lobes after resection of 1, 2, and 3 segments were decreased, respectively, to 50%, 35%, and 17% of the preoperative value [7]. Especially, the resection of 2 segments in the right upper lobe, which has only 3 segments, can only preserve one segment. Therefore, for patients with small peripheral c-T1N0M0 NSCLCs involving multiple segments, we attempted the resection of only subsegments involved by tumor, i.e. combined subsegmentectomy (CSS), to preserve pulmonary function by avoiding the resection of multiple segments. For example, if the tumor involved the subsegment 2b and 3a of the right upper lobe (Figure 1), we performed the resection of S2b and S3a subsegments. This study examined the results of CSS in patients with peripheral c-T1N0M0 NSCLCs, with special reference to tumor size, location of tumor, and postoperative pulmonary function, which were compared with that after the resection of multiple segments.\nSagittal image of CT. The tumor located between the right subsegment 2b and 3a.", "The Ethics Committees of Kumamoto University Hospital approved the study protocol for sublobar resection in patients with c-T1N0M0 NSCLC. Informed consent was obtained from all patients after a comprehensive discussion of the risks and benefits of the proposed procedures [8,9].", "The criteria for segmentectomy was the followings: (1) peripheral c-T1N0M0 NSCLCs less than 3 cm diameter; (2) intraoperative frozen section of lymph nodes showed no metastasis; and (3) surgical margin of at least 2 cm from the tumor can be taken using CSS. 
The CSS or multiple segmentectomy was further indicated for tumors involving multiple segments, which were identified on serial sections of the axial, sagittal, and coronal views of multidetector CT images using Digital Imaging and Communications in Medicine data.", "Segmentectomy including CSS was performed via open thoracotomy under one-lung ventilation as follows: (1) Pulmonary arteries and bronchi with tumor involvement were identified; (2) After the entire lung had been inflated, bronchi of the involved segment or subsegment were ligated and cut to clarify the boundary between the subsegments to be preserved versus resected, according to a previously reported technique [10]; (3) One lung ventilation was restarted, which made the lung tissues designated for preservation lose gas and collapse while retaining the segments subsegments designated for resection to be inflated, thereby allowing the border between the segments or subsegments of the resecting versus preserving lung tissue to be clarified; (4) The lung was cut using electrocautery between the inflated lung tissue to be resected and the deflated one to be preserved, thereby enabling the resection of targeted segments or subsegments; and (5) Cut plane of the lung was covered with a polyglycol acid sheet (Neoveil: Gunze Ltd., Kyoto, Japan) and fibrin glue to prevent postoperative air leakage.", "During April 2005 - March 2009, 248 patients with c-T1N0M0 NSCLC were treated with surgery. Of them, 198 patients (79%) were treated by segmentectomy. Of the 198 patients, CSS was conducted in 32 patients (16%). The other 166 patients were treated by the resection of single segment (single segmentectomy, n = 97) or the resection of multiple segments (multiple segmentectomy, n = 69).", "Vital capacity (VC), forced vital capacity (FVC), and FEV1 were measured before and more than 6 months after surgery with the patient in a seated position using a dry rolling-seal spirometer (CHESTAC-9800DN; Chest Inc. Tokyo, Japan) according to American Thoracic Society standards [11].", "SPECT/CT system was composed of a commercially available gantry-free SPECT with dual-head detectors (Skylight; ADAC Laboratories, Milpitas, Calif) and an 8-multidetector-row CT scanner (Light-Speed Ultra Instrument; General Electric, Milwaukee, Wis). Each 185 MBq of 99mTc-macroaggregated human serum albumin (Daiichi Radioisotope Laboratories, Ltd, Tokyo, Japan) was administered intravenously. The two scans were performed sequentially. The SPECT images were manually fused with the CT images on the workstation(AZE Virtual Place; AZE Co Ltd, Tokyo, Japan) [5,7].\nPostoperative SPECT/CT was conducted with the pulmonary function test more than 6 months after surgery.", "Images of the lobe before segmentectomy and of the lobe remaining after segmentectomy were traced on the CT image with a region of interest, of which radioisotope (RI) was counted on the SPECT image (Figure 2).\nImages of before and after segmentectomy. (a) Coronal image of CT before surgery, showing a lung cancer (arrow) in the segment 2b of right upper lobe. (b) Coronal image of the perfusion SPECT/CT of the right upper lobe before operation. (c) Coronal image of the perfusion SPECT/CT of the remaining right upper lobe after the resection of S2b and S3a.\nThe FEV1 of the lobe before (A) and after (B) segmentectomy was measured from the preoperative or postoperative SPECT/CT according to the following formulae. 
A = Preoperative FEV1 × [RI counts of the lobe/RI counts of the whole lung]\nB = Postoperative FEV1 × [RI counts of the lobe/RI counts of the whole lung]\nThe postoperative FEV1 of the lobe per preserved subsegment (C) was measured according to the following formula.\nC = B/number of preserved subsegments of the lobe", "Student's t-test was used to compare the tumor size, number of the resected or preserved subsegments, and preserved FEV1 among the CSS, single segmentectomy, and multiple segmentectomy. Pearson's χ2 test was used to compare the location of tumors among the three groups. The SPSS software (SPSS Inc., Chicago, Illinois) was used for these analyses. Values of p < 0.05 were accepted as significant. All values in the text and table are given as mean ± SD.", "Of the 32 patients who underwent the CSS, 17 patients who underwent both the pulmonary function test and lung-perfusion SPECT/CT both before and after surgery. Table 1 presents their resected sites and the number of resected subsegments. Mean number of the resected subsegments was 2.9 ± 1.1. If the entire segments involved by tumors were resected, the mean number of resected subsegments would be 5.0 ± 1.2, i.e. the CSS could save 2.2 ± 1.2 subsegments compared with the resection of entire segments.\nSites of combined subsegmentectomy and the number of resected subsegments\nSS: subsegment, CSS: combined subsegmentectomy\nOf the 97 patients who underwent the single segmentectomy, 56 patients who underwent both the pulmonary function test and a lung-perfusion SPECT/CT both before and after surgery (Table 2). Of the 69 patients who underwent the multiple segmentectomy, 41 patients who underwent both the pulmonary function test and a lung-perfusion SPECT/CT both before and after surgery (Table 3).\nSites of single segmentectomy and the number of resected subsegments\nSS: subsegment\nSites of multiple segmentectomy and the number of resected subsegments\nSS: subsegment\nTable 4 presents a comparison of preoperative pulmonary function, tumor size, location of tumor, and the numbers of resected and preserved subsegments among the CSS, single segmentectomy, and multiple segmentectomy. No significant difference of preoperative pulmonary function tests was found among these groups. Mean tumor size was 1.4 ± 0.5 cm in the CSS group, which was significantly smaller than the 2.0 ± 0.8 cm in multiple segmentectomy (p = 0.002). Location of tumor in the right upper lobe was 9 of the 17 (53%) patients who underwent the CSS, which was more frequent than 2 of the 41 (5%) who underwent the multiple segmentectomy (p < 0.001). Mean number of the resected subsegments in the CSS group was 2.9 ± 1.1, which was significantly less than 5.3 ± 1.4 of the multiple segmentectomy group (p < 0.001). However, the mean numbers of preserved subsegments of each lobe were not significantly different between the CSS and multiple segmentectomy (5.4 ± 2.5 vs. 5.0 ± 1.5). This discrepancy of the numbers of resected and preserved subsegments was caused by the difference of location of tumor, i.e. (1) Although the right upper lobe has only 6 subsegments, the right lower lobe, the left upper lobe, and the left lower lobe have 12, 10, and 10 subsegments, respectively, which makes the segmentectomy for the right upper lobe to preserve fewer subsegments than that for other lobes; and (2) Right upper lobe was the resected site more frequent in the CSS than in the multiple segmentectomy (53 vs. 
5%, p < 0.001), causing the discrepancy of numbers of resected and preserved subsegments between the two groups.\nPatients' characteristics of combined subsegmentectomy, single segmentectomy, and multiple segmentectomy\nCSS: combined subsegmentectomy, Single S: single segmentectomy, Multiple S: multiple segmentectomy, VC: vital capacity, FVC: functional vital capacity, FEV1: forced expiratory volume in 1 second\n†: p = 0.002 between the CSS and the multiple segmentectomy, ††: p < 0.001 between the CSS and the multiple segmentectomy.\nFigure 3 shows the mean percentage of preserved FEV1 of whole lung after surgery in the three groups. In the CSS group, the mean values of FEV1of the whole lung before and after surgery were 2.4±0.6 and 2.2 ± 0.5 l, respectively, of which the mean percentage of FEV1 preserved, was 91 ± 7%. In the single segmentectomy group, the mean values of FEV1 of the whole lung before and after surgery were 2.2 ± 0.6 and 2.0 ± 0.5 l, respectively, of which the mean percentage of FEV1 preserved was 92 ± 8%. In the multiple segmentectomy group, the mean values of FEV1 of the whole lung before and after surgery were 2.0 ± 0.6 and 1.8 ± 0.6 l, respectively, of which the mean percentage of FEV1 preserved was 88 ± 10%. No significant difference of the mean percentage of FEV1 preserved was found among these three groups.\nForced expiratory volume in 1 second examined by pulmonary function tests before and after surgery.\nFigure 4 shows the FEV1 of each preserved lobe after surgery in the three groups. In the CSS group, the mean values of FEV1 of each lobe before and after surgery were 0.6 ± 0.2 and 0.3 ± 0.2 l, respectively. In the single segmentectomy group, the mean values of FEV1 of each lobe before and after surgery were 0.5 ± 0.2 and 0.3 ± 0.1 l, respectively. In the multiple segmentectomy group, the mean values of FEV1 of each lobe before and after surgery were 0.5 ± 0.2 and 0.2 ± 0.2 l, respectively. While there was no significant difference of the postoperative FEV1 of each lobe between the CSS and single segmentectomy, the value of the CSS was higher than that of the multiple segmentectomy with marginal significance (p = 0.07).\nForced expiratory volume in 1 second of each lobe after surgery.\nFigure 5 shows the FEV1 of each preserved lobe per subsegment after surgery, which were 0.05 ± 0.03, 0.04 ± 0.03, 0.04 ± 0.03 l in the CSS, single segmentectomy, and multiple segmentectomy, respectively. The value was significantly higher in the CSS than in the multiple segmentectomy (p = 0.02).\nPreserved forced expiratory volume in 1 second of each lobe per subsegment after surgery.\nAll of the 198 patients who underwent CSS, single segmentectomy, or multiple segmentectomy were discharged from the hospital without major complications. All of the tumors were pathologically N0 stage. With the mean follow-up period after surgery was 31 ± 10 months (range: 12-60 month), 5 of the 166 patients (2%) who underwent single or multiple segmentectomy suffered postoperative recurrence, but there was no recurrence at the surgical margin. 
All 32 patients who underwent CSS are alive without recurrence.", "Results of this study elucidated the following points: (1) The CSS could save 2.2 ± 1.2 subsegments compared with the resection of entire segments involved by tumors; and (2) Both the preserved FEV1 of each lobe and that value per subsegment were higher in the CSS than in the multiple segmentectomy, whereas there was no significant difference of preserved % of FEV1 of whole lung between the two groups.\nThe reason for no significant difference in pulmonary function between the CSS and multiple segmentectomy could be caused by the difference of frequency of right upper lobe between the two. The CSS was conducted for right upper lobe more frequently than the multiple segmentectomy, because the right upper lobe has fewer subsegments than the other lobes. To preserve sufficient lung tissue for tumors involving multiple segments in the right upper lobe, we conducted the CSS frequently, for example the resection of S2b and S3a rather than the resection of both the S2 and S3. Contrary to the right upper lobe, other lobes can preserve sufficient lung tissue even after multiple segmentectomy, because they have more subsegments than the right upper lobe. Therefore, our data show that the CSS could preserve the pulmonary function of each lobe by avoiding the multiple segmentectomy, especially for tumors in the right upper lobe.\nThe mean values of postoperative FEV1 after CSS, single segmentectomy, and multiple segmentectomy were approximately 90% of the preoperative values, which were comparable to values in the previous reports of general segmentectomy [7,12,13] and were higher than that after lobectomy [12]. We previously reported that the postoperative FEV1 of each lobe after the resection of 1, 2, and 3 segments was decreased to 50%, 35%, and 17%, respectively [7]. The use of CSS can obviate the resection of multiple segments that a tumor involves. Therefore, to preserve a pulmonary function after segmentectomy in patients with small peripheral c-T1N0M0 NSCLC involving multiple segments, CSS would be preferable to the resection of multiple segments with tumor involvement, especially for small tumors located in the right upper lobe.\nThis study revealed that the mean tumor size in the CSS was significantly smaller than that in multiple segmentectomy. The mean tumor size in the CSS group was 1.4 ± 0.5 cm, ranging from 0.8 to 2.4 cm. To take the surgical margin of at least 2 cm from the tumor by the CSS, tumors larger than 2 cm involving multiple segments would be out of the indication for CSS.\nThe disadvantage of the CSS might be that the lymph node dissection at hilum of resected subsegments would be less sufficient than the conventional segmentectomy. Therefore, we recommend it for likely pathological N0 tumors, such as bronchioloalveolar carcinoma, carcinoid, and metastatic pulmonary tumors.\nThe preserved pulmonary functions after CSS, single segmentectomy, and multiple segmentectomy are shown herein. 
Our data indicate that the CSS is useful for preservation of pulmonary function of each lobe by avoiding the multiple segmentectomy especially in patients with small sized tumors with likely pathological N0 involving multiple segments of the right upper lobe.", "NSCLCs: non-small cell lung cancers; CSS: combined subsegmentectomy; FEV1: forced expiratory volume in 1 second; SPECT/CT: lung-perfusion single-photon-emission computed tomography and computed tomography; RI: radioisotope.", "The authors declare that they have no competing interests.", "This report reflects the opinion of the authors and does not represent the official position of any institution or sponsor. The contributions of each of the authors were as follows: KY was responsible for reviewing previous research, journal handsearching, drafting report. HN was responsible for quality checking and data processing. HN was responsible for project coordination. All authors have read and approved the final manuscript." ]
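The group comparisons described in the Statistical Analysis subsection (Student's t-test for continuous measures and Pearson's χ2 test for tumor location) can be illustrated with a brief Python sketch. The arrays below are placeholders rather than the study data, SciPy stands in for the SPSS software named in the text, and only the right-upper-lobe counts for the CSS (9 of 17) and multiple-segmentectomy (2 of 41) groups follow figures quoted in the Results; the single-segmentectomy column is assumed.

```python
# Illustrative sketch of the reported statistical comparisons (not the original SPSS analysis).
import numpy as np
from scipy import stats

# Hypothetical preserved-FEV1 percentages for two of the surgical groups
css_fev1_preserved = np.array([91.0, 88.0, 94.0, 90.0, 93.0])        # CSS group (placeholder values)
multiple_fev1_preserved = np.array([85.0, 89.0, 87.0, 90.0, 86.0])   # multiple segmentectomy (placeholder values)

t_stat, p_val = stats.ttest_ind(css_fev1_preserved, multiple_fev1_preserved)
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")  # accepted as significant if p < 0.05

# 2x3 contingency table: tumor in the right upper lobe (yes/no) by surgical group
# columns: CSS, single segmentectomy (assumed split), multiple segmentectomy
table = np.array([[9, 10, 2],
                  [8, 46, 39]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi:.3f}")
```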
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Eligibility", "Indications for Segmentectomy and Subsegmentectomy", "Combined Segmentectomy Procedure", "Patients", "Pulmonary Function Tests", "SPECT/CT", "Measurement of Pulmonary Function of Each Lobe", "Statistical Analysis", "Results", "Discussion", "Abbreviations", "Competing interests", "Authors' contributions" ]
[ "Advances in high-resolution CT scanning have led to frequent detection of peripheral T1N0M0 non-small cell lung cancers (NSCLCs). Several studies have demonstrated the effectiveness of segmentectomy, regarding not only preservation of pulmonary function but also prognosis [1-4]. However, for small peripheral c-T1N0M0 NSCLCs involving multiple segments, resection of entire segments damages pulmonary function to the same extent as lobectomy. To evaluate local pulmonary function, a lung-perfusion single-photon-emission computed tomography (SPECT) and computed tomography (SPECT/CT) is a reliable tool [5,6]. We recently examined the forced expiratory volume in 1 second (FEV1) of each lobe after segmentectomy by using a lung-perfusion SPECT/CT. The results showed that the FEV1 of the preserved lobes after resection of 1, 2, and 3 segments were decreased, respectively, to 50%, 35%, and 17% of the preoperative value [7]. Especially, the resection of 2 segments in the right upper lobe, which has only 3 segments, can only preserve one segment. Therefore, for patients with small peripheral c-T1N0M0 NSCLCs involving multiple segments, we attempted the resection of only subsegments involved by tumor, i.e. combined subsegmentectomy (CSS), to preserve pulmonary function by avoiding the resection of multiple segments. For example, if the tumor involved the subsegment 2b and 3a of the right upper lobe (Figure 1), we performed the resection of S2b and S3a subsegments. This study examined the results of CSS in patients with peripheral c-T1N0M0 NSCLCs, with special reference to tumor size, location of tumor, and postoperative pulmonary function, which were compared with that after the resection of multiple segments.\nSagittal image of CT. The tumor located between the right subsegment 2b and 3a.", "[SUBTITLE] Eligibility [SUBSECTION] The Ethics Committees of Kumamoto University Hospital approved the study protocol for sublobar resection in patients with c-T1N0M0 NSCLC. Informed consent was obtained from all patients after a comprehensive discussion of the risks and benefits of the proposed procedures [8,9].\nThe Ethics Committees of Kumamoto University Hospital approved the study protocol for sublobar resection in patients with c-T1N0M0 NSCLC. Informed consent was obtained from all patients after a comprehensive discussion of the risks and benefits of the proposed procedures [8,9].\n[SUBTITLE] Indications for Segmentectomy and Subsegmentectomy [SUBSECTION] The criteria for segmentectomy was the followings: (1) peripheral c-T1N0M0 NSCLCs less than 3 cm diameter; (2) intraoperative frozen section of lymph nodes showed no metastasis; and (3) surgical margin of at least 2 cm from the tumor can be taken using CSS. The CSS or multiple segmentectomy was further indicated for tumors involving multiple segments, which were identified on serial sections of the axial, sagittal, and coronal views of multidetector CT images using Digital Imaging and Communications in Medicine data.\nThe criteria for segmentectomy was the followings: (1) peripheral c-T1N0M0 NSCLCs less than 3 cm diameter; (2) intraoperative frozen section of lymph nodes showed no metastasis; and (3) surgical margin of at least 2 cm from the tumor can be taken using CSS. 
The CSS or multiple segmentectomy was further indicated for tumors involving multiple segments, which were identified on serial sections of the axial, sagittal, and coronal views of multidetector CT images using Digital Imaging and Communications in Medicine data.\n[SUBTITLE] Combined Segmentectomy Procedure [SUBSECTION] Segmentectomy including CSS was performed via open thoracotomy under one-lung ventilation as follows: (1) Pulmonary arteries and bronchi with tumor involvement were identified; (2) After the entire lung had been inflated, bronchi of the involved segment or subsegment were ligated and cut to clarify the boundary between the subsegments to be preserved versus resected, according to a previously reported technique [10]; (3) One lung ventilation was restarted, which made the lung tissues designated for preservation lose gas and collapse while retaining the segments subsegments designated for resection to be inflated, thereby allowing the border between the segments or subsegments of the resecting versus preserving lung tissue to be clarified; (4) The lung was cut using electrocautery between the inflated lung tissue to be resected and the deflated one to be preserved, thereby enabling the resection of targeted segments or subsegments; and (5) Cut plane of the lung was covered with a polyglycol acid sheet (Neoveil: Gunze Ltd., Kyoto, Japan) and fibrin glue to prevent postoperative air leakage.\nSegmentectomy including CSS was performed via open thoracotomy under one-lung ventilation as follows: (1) Pulmonary arteries and bronchi with tumor involvement were identified; (2) After the entire lung had been inflated, bronchi of the involved segment or subsegment were ligated and cut to clarify the boundary between the subsegments to be preserved versus resected, according to a previously reported technique [10]; (3) One lung ventilation was restarted, which made the lung tissues designated for preservation lose gas and collapse while retaining the segments subsegments designated for resection to be inflated, thereby allowing the border between the segments or subsegments of the resecting versus preserving lung tissue to be clarified; (4) The lung was cut using electrocautery between the inflated lung tissue to be resected and the deflated one to be preserved, thereby enabling the resection of targeted segments or subsegments; and (5) Cut plane of the lung was covered with a polyglycol acid sheet (Neoveil: Gunze Ltd., Kyoto, Japan) and fibrin glue to prevent postoperative air leakage.\n[SUBTITLE] Patients [SUBSECTION] During April 2005 - March 2009, 248 patients with c-T1N0M0 NSCLC were treated with surgery. Of them, 198 patients (79%) were treated by segmentectomy. Of the 198 patients, CSS was conducted in 32 patients (16%). The other 166 patients were treated by the resection of single segment (single segmentectomy, n = 97) or the resection of multiple segments (multiple segmentectomy, n = 69).\nDuring April 2005 - March 2009, 248 patients with c-T1N0M0 NSCLC were treated with surgery. Of them, 198 patients (79%) were treated by segmentectomy. Of the 198 patients, CSS was conducted in 32 patients (16%). 
The other 166 patients were treated by the resection of single segment (single segmentectomy, n = 97) or the resection of multiple segments (multiple segmentectomy, n = 69).\n[SUBTITLE] Pulmonary Function Tests [SUBSECTION] Vital capacity (VC), forced vital capacity (FVC), and FEV1 were measured before and more than 6 months after surgery with the patient in a seated position using a dry rolling-seal spirometer (CHESTAC-9800DN; Chest Inc. Tokyo, Japan) according to American Thoracic Society standards [11].\nVital capacity (VC), forced vital capacity (FVC), and FEV1 were measured before and more than 6 months after surgery with the patient in a seated position using a dry rolling-seal spirometer (CHESTAC-9800DN; Chest Inc. Tokyo, Japan) according to American Thoracic Society standards [11].\n[SUBTITLE] SPECT/CT [SUBSECTION] SPECT/CT system was composed of a commercially available gantry-free SPECT with dual-head detectors (Skylight; ADAC Laboratories, Milpitas, Calif) and an 8-multidetector-row CT scanner (Light-Speed Ultra Instrument; General Electric, Milwaukee, Wis). Each 185 MBq of 99mTc-macroaggregated human serum albumin (Daiichi Radioisotope Laboratories, Ltd, Tokyo, Japan) was administered intravenously. The two scans were performed sequentially. The SPECT images were manually fused with the CT images on the workstation(AZE Virtual Place; AZE Co Ltd, Tokyo, Japan) [5,7].\nPostoperative SPECT/CT was conducted with the pulmonary function test more than 6 months after surgery.\nSPECT/CT system was composed of a commercially available gantry-free SPECT with dual-head detectors (Skylight; ADAC Laboratories, Milpitas, Calif) and an 8-multidetector-row CT scanner (Light-Speed Ultra Instrument; General Electric, Milwaukee, Wis). Each 185 MBq of 99mTc-macroaggregated human serum albumin (Daiichi Radioisotope Laboratories, Ltd, Tokyo, Japan) was administered intravenously. The two scans were performed sequentially. The SPECT images were manually fused with the CT images on the workstation(AZE Virtual Place; AZE Co Ltd, Tokyo, Japan) [5,7].\nPostoperative SPECT/CT was conducted with the pulmonary function test more than 6 months after surgery.\n[SUBTITLE] Measurement of Pulmonary Function of Each Lobe [SUBSECTION] Images of the lobe before segmentectomy and of the lobe remaining after segmentectomy were traced on the CT image with a region of interest, of which radioisotope (RI) was counted on the SPECT image (Figure 2).\nImages of before and after segmentectomy. (a) Coronal image of CT before surgery, showing a lung cancer (arrow) in the segment 2b of right upper lobe. (b) Coronal image of the perfusion SPECT/CT of the right upper lobe before operation. (c) Coronal image of the perfusion SPECT/CT of the remaining right upper lobe after the resection of S2b and S3a.\nThe FEV1 of the lobe before (A) and after (B) segmentectomy was measured from the preoperative or postoperative SPECT/CT according to the following formulae. A = Preoperative FEV1 × [RI counts of the lobe/RI counts of the whole lung]\nB = Postoperative FEV1 × [RI counts of the lobe/RI counts of the whole lung]\nThe postoperative FEV1 of the lobe per preserved subsegment (C) was measured according to the following formula.\nC = B/number of preserved subsegments of the lobe\nImages of the lobe before segmentectomy and of the lobe remaining after segmentectomy were traced on the CT image with a region of interest, of which radioisotope (RI) was counted on the SPECT image (Figure 2).\nImages of before and after segmentectomy. 
(a) Coronal image of CT before surgery, showing a lung cancer (arrow) in the segment 2b of right upper lobe. (b) Coronal image of the perfusion SPECT/CT of the right upper lobe before operation. (c) Coronal image of the perfusion SPECT/CT of the remaining right upper lobe after the resection of S2b and S3a.\nThe FEV1 of the lobe before (A) and after (B) segmentectomy was measured from the preoperative or postoperative SPECT/CT according to the following formulae. A = Preoperative FEV1 × [RI counts of the lobe/RI counts of the whole lung]\nB = Postoperative FEV1 × [RI counts of the lobe/RI counts of the whole lung]\nThe postoperative FEV1 of the lobe per preserved subsegment (C) was measured according to the following formula.\nC = B/number of preserved subsegments of the lobe\n[SUBTITLE] Statistical Analysis [SUBSECTION] Student's t-test was used to compare the tumor size, number of the resected or preserved subsegments, and preserved FEV1 among the CSS, single segmentectomy, and multiple segmentectomy. Pearson's χ2 test was used to compare the location of tumors among the three groups. The SPSS software (SPSS Inc., Chicago, Illinois) was used for these analyses. Values of p < 0.05 were accepted as significant. All values in the text and table are given as mean ± SD.\nStudent's t-test was used to compare the tumor size, number of the resected or preserved subsegments, and preserved FEV1 among the CSS, single segmentectomy, and multiple segmentectomy. Pearson's χ2 test was used to compare the location of tumors among the three groups. The SPSS software (SPSS Inc., Chicago, Illinois) was used for these analyses. Values of p < 0.05 were accepted as significant. All values in the text and table are given as mean ± SD.", "The Ethics Committees of Kumamoto University Hospital approved the study protocol for sublobar resection in patients with c-T1N0M0 NSCLC. Informed consent was obtained from all patients after a comprehensive discussion of the risks and benefits of the proposed procedures [8,9].", "The criteria for segmentectomy was the followings: (1) peripheral c-T1N0M0 NSCLCs less than 3 cm diameter; (2) intraoperative frozen section of lymph nodes showed no metastasis; and (3) surgical margin of at least 2 cm from the tumor can be taken using CSS. 
The CSS or multiple segmentectomy was further indicated for tumors involving multiple segments, which were identified on serial sections of the axial, sagittal, and coronal views of multidetector CT images using Digital Imaging and Communications in Medicine data.", "Segmentectomy including CSS was performed via open thoracotomy under one-lung ventilation as follows: (1) Pulmonary arteries and bronchi with tumor involvement were identified; (2) After the entire lung had been inflated, bronchi of the involved segment or subsegment were ligated and cut to clarify the boundary between the subsegments to be preserved versus resected, according to a previously reported technique [10]; (3) One lung ventilation was restarted, which made the lung tissues designated for preservation lose gas and collapse while retaining the segments subsegments designated for resection to be inflated, thereby allowing the border between the segments or subsegments of the resecting versus preserving lung tissue to be clarified; (4) The lung was cut using electrocautery between the inflated lung tissue to be resected and the deflated one to be preserved, thereby enabling the resection of targeted segments or subsegments; and (5) Cut plane of the lung was covered with a polyglycol acid sheet (Neoveil: Gunze Ltd., Kyoto, Japan) and fibrin glue to prevent postoperative air leakage.", "During April 2005 - March 2009, 248 patients with c-T1N0M0 NSCLC were treated with surgery. Of them, 198 patients (79%) were treated by segmentectomy. Of the 198 patients, CSS was conducted in 32 patients (16%). The other 166 patients were treated by the resection of single segment (single segmentectomy, n = 97) or the resection of multiple segments (multiple segmentectomy, n = 69).", "Vital capacity (VC), forced vital capacity (FVC), and FEV1 were measured before and more than 6 months after surgery with the patient in a seated position using a dry rolling-seal spirometer (CHESTAC-9800DN; Chest Inc. Tokyo, Japan) according to American Thoracic Society standards [11].", "SPECT/CT system was composed of a commercially available gantry-free SPECT with dual-head detectors (Skylight; ADAC Laboratories, Milpitas, Calif) and an 8-multidetector-row CT scanner (Light-Speed Ultra Instrument; General Electric, Milwaukee, Wis). Each 185 MBq of 99mTc-macroaggregated human serum albumin (Daiichi Radioisotope Laboratories, Ltd, Tokyo, Japan) was administered intravenously. The two scans were performed sequentially. The SPECT images were manually fused with the CT images on the workstation(AZE Virtual Place; AZE Co Ltd, Tokyo, Japan) [5,7].\nPostoperative SPECT/CT was conducted with the pulmonary function test more than 6 months after surgery.", "Images of the lobe before segmentectomy and of the lobe remaining after segmentectomy were traced on the CT image with a region of interest, of which radioisotope (RI) was counted on the SPECT image (Figure 2).\nImages of before and after segmentectomy. (a) Coronal image of CT before surgery, showing a lung cancer (arrow) in the segment 2b of right upper lobe. (b) Coronal image of the perfusion SPECT/CT of the right upper lobe before operation. (c) Coronal image of the perfusion SPECT/CT of the remaining right upper lobe after the resection of S2b and S3a.\nThe FEV1 of the lobe before (A) and after (B) segmentectomy was measured from the preoperative or postoperative SPECT/CT according to the following formulae. 
A = Preoperative FEV1 × [RI counts of the lobe/RI counts of the whole lung]\nB = Postoperative FEV1 × [RI counts of the lobe/RI counts of the whole lung]\nThe postoperative FEV1 of the lobe per preserved subsegment (C) was measured according to the following formula.\nC = B/number of preserved subsegments of the lobe", "Student's t-test was used to compare the tumor size, number of the resected or preserved subsegments, and preserved FEV1 among the CSS, single segmentectomy, and multiple segmentectomy. Pearson's χ2 test was used to compare the location of tumors among the three groups. The SPSS software (SPSS Inc., Chicago, Illinois) was used for these analyses. Values of p < 0.05 were accepted as significant. All values in the text and table are given as mean ± SD.", "Of the 32 patients who underwent the CSS, 17 patients who underwent both the pulmonary function test and lung-perfusion SPECT/CT both before and after surgery. Table 1 presents their resected sites and the number of resected subsegments. Mean number of the resected subsegments was 2.9 ± 1.1. If the entire segments involved by tumors were resected, the mean number of resected subsegments would be 5.0 ± 1.2, i.e. the CSS could save 2.2 ± 1.2 subsegments compared with the resection of entire segments.\nSites of combined subsegmentectomy and the number of resected subsegments\nSS: subsegment, CSS: combined subsegmentectomy\nOf the 97 patients who underwent the single segmentectomy, 56 patients who underwent both the pulmonary function test and a lung-perfusion SPECT/CT both before and after surgery (Table 2). Of the 69 patients who underwent the multiple segmentectomy, 41 patients who underwent both the pulmonary function test and a lung-perfusion SPECT/CT both before and after surgery (Table 3).\nSites of single segmentectomy and the number of resected subsegments\nSS: subsegment\nSites of multiple segmentectomy and the number of resected subsegments\nSS: subsegment\nTable 4 presents a comparison of preoperative pulmonary function, tumor size, location of tumor, and the numbers of resected and preserved subsegments among the CSS, single segmentectomy, and multiple segmentectomy. No significant difference of preoperative pulmonary function tests was found among these groups. Mean tumor size was 1.4 ± 0.5 cm in the CSS group, which was significantly smaller than the 2.0 ± 0.8 cm in multiple segmentectomy (p = 0.002). Location of tumor in the right upper lobe was 9 of the 17 (53%) patients who underwent the CSS, which was more frequent than 2 of the 41 (5%) who underwent the multiple segmentectomy (p < 0.001). Mean number of the resected subsegments in the CSS group was 2.9 ± 1.1, which was significantly less than 5.3 ± 1.4 of the multiple segmentectomy group (p < 0.001). However, the mean numbers of preserved subsegments of each lobe were not significantly different between the CSS and multiple segmentectomy (5.4 ± 2.5 vs. 5.0 ± 1.5). This discrepancy of the numbers of resected and preserved subsegments was caused by the difference of location of tumor, i.e. (1) Although the right upper lobe has only 6 subsegments, the right lower lobe, the left upper lobe, and the left lower lobe have 12, 10, and 10 subsegments, respectively, which makes the segmentectomy for the right upper lobe to preserve fewer subsegments than that for other lobes; and (2) Right upper lobe was the resected site more frequent in the CSS than in the multiple segmentectomy (53 vs. 
5%, p < 0.001), causing the discrepancy of numbers of resected and preserved subsegments between the two groups.\nPatients' characteristics of combined subsegmentectomy, single segmentectomy, and multiple segmentectomy\nCSS: combined subsegmentectomy, Single S: single segmentectomy, Multiple S: multiple segmentectomy, VC: vital capacity, FVC: functional vital capacity, FEV1: forced expiratory volume in 1 second\n†: p = 0.002 between the CSS and the multiple segmentectomy, ††: p < 0.001 between the CSS and the multiple segmentectomy.\nFigure 3 shows the mean percentage of preserved FEV1 of whole lung after surgery in the three groups. In the CSS group, the mean values of FEV1of the whole lung before and after surgery were 2.4±0.6 and 2.2 ± 0.5 l, respectively, of which the mean percentage of FEV1 preserved, was 91 ± 7%. In the single segmentectomy group, the mean values of FEV1 of the whole lung before and after surgery were 2.2 ± 0.6 and 2.0 ± 0.5 l, respectively, of which the mean percentage of FEV1 preserved was 92 ± 8%. In the multiple segmentectomy group, the mean values of FEV1 of the whole lung before and after surgery were 2.0 ± 0.6 and 1.8 ± 0.6 l, respectively, of which the mean percentage of FEV1 preserved was 88 ± 10%. No significant difference of the mean percentage of FEV1 preserved was found among these three groups.\nForced expiratory volume in 1 second examined by pulmonary function tests before and after surgery.\nFigure 4 shows the FEV1 of each preserved lobe after surgery in the three groups. In the CSS group, the mean values of FEV1 of each lobe before and after surgery were 0.6 ± 0.2 and 0.3 ± 0.2 l, respectively. In the single segmentectomy group, the mean values of FEV1 of each lobe before and after surgery were 0.5 ± 0.2 and 0.3 ± 0.1 l, respectively. In the multiple segmentectomy group, the mean values of FEV1 of each lobe before and after surgery were 0.5 ± 0.2 and 0.2 ± 0.2 l, respectively. While there was no significant difference of the postoperative FEV1 of each lobe between the CSS and single segmentectomy, the value of the CSS was higher than that of the multiple segmentectomy with marginal significance (p = 0.07).\nForced expiratory volume in 1 second of each lobe after surgery.\nFigure 5 shows the FEV1 of each preserved lobe per subsegment after surgery, which were 0.05 ± 0.03, 0.04 ± 0.03, 0.04 ± 0.03 l in the CSS, single segmentectomy, and multiple segmentectomy, respectively. The value was significantly higher in the CSS than in the multiple segmentectomy (p = 0.02).\nPreserved forced expiratory volume in 1 second of each lobe per subsegment after surgery.\nAll of the 198 patients who underwent CSS, single segmentectomy, or multiple segmentectomy were discharged from the hospital without major complications. All of the tumors were pathologically N0 stage. With the mean follow-up period after surgery was 31 ± 10 months (range: 12-60 month), 5 of the 166 patients (2%) who underwent single or multiple segmentectomy suffered postoperative recurrence, but there was no recurrence at the surgical margin. 
All 32 patients who underwent CSS are alive without recurrence.", "Results of this study elucidated the following points: (1) The CSS could save 2.2 ± 1.2 subsegments compared with the resection of entire segments involved by tumors; and (2) Both the preserved FEV1 of each lobe and that value per subsegment were higher in the CSS than in the multiple segmentectomy, whereas there was no significant difference of preserved % of FEV1 of whole lung between the two groups.\nThe reason for no significant difference in pulmonary function between the CSS and multiple segmentectomy could be caused by the difference of frequency of right upper lobe between the two. The CSS was conducted for right upper lobe more frequently than the multiple segmentectomy, because the right upper lobe has fewer subsegments than the other lobes. To preserve sufficient lung tissue for tumors involving multiple segments in the right upper lobe, we conducted the CSS frequently, for example the resection of S2b and S3a rather than the resection of both the S2 and S3. Contrary to the right upper lobe, other lobes can preserve sufficient lung tissue even after multiple segmentectomy, because they have more subsegments than the right upper lobe. Therefore, our data show that the CSS could preserve the pulmonary function of each lobe by avoiding the multiple segmentectomy, especially for tumors in the right upper lobe.\nThe mean values of postoperative FEV1 after CSS, single segmentectomy, and multiple segmentectomy were approximately 90% of the preoperative values, which were comparable to values in the previous reports of general segmentectomy [7,12,13] and were higher than that after lobectomy [12]. We previously reported that the postoperative FEV1 of each lobe after the resection of 1, 2, and 3 segments was decreased to 50%, 35%, and 17%, respectively [7]. The use of CSS can obviate the resection of multiple segments that a tumor involves. Therefore, to preserve a pulmonary function after segmentectomy in patients with small peripheral c-T1N0M0 NSCLC involving multiple segments, CSS would be preferable to the resection of multiple segments with tumor involvement, especially for small tumors located in the right upper lobe.\nThis study revealed that the mean tumor size in the CSS was significantly smaller than that in multiple segmentectomy. The mean tumor size in the CSS group was 1.4 ± 0.5 cm, ranging from 0.8 to 2.4 cm. To take the surgical margin of at least 2 cm from the tumor by the CSS, tumors larger than 2 cm involving multiple segments would be out of the indication for CSS.\nThe disadvantage of the CSS might be that the lymph node dissection at hilum of resected subsegments would be less sufficient than the conventional segmentectomy. Therefore, we recommend it for likely pathological N0 tumors, such as bronchioloalveolar carcinoma, carcinoid, and metastatic pulmonary tumors.\nThe preserved pulmonary functions after CSS, single segmentectomy, and multiple segmentectomy are shown herein. 
Our data indicate that the CSS is useful for preservation of pulmonary function of each lobe by avoiding the multiple segmentectomy especially in patients with small sized tumors with likely pathological N0 involving multiple segments of the right upper lobe.", "NSCLCs: non-small cell lung cancers; CSS: combined subsegmentectomy; FEV1: forced expiratory volume in 1 second; SPECT/CT: lung-perfusion single-photon-emission computed tomography and computed tomography; RI: radioisotope.", "The authors declare that they have no competing interests.", "This report reflects the opinion of the authors and does not represent the official position of any institution or sponsor. The contributions of each of the authors were as follows: KY was responsible for reviewing previous research, journal handsearching, drafting report. HN was responsible for quality checking and data processing. HN was responsible for project coordination. All authors have read and approved the final manuscript." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
The evolution and long-term results of laparoscopic antireflux surgery for the treatment of gastroesophageal reflux disease.
21333184
For nearly 2 decades, the laparoscopic correction of gastroesophageal reflux disease (GERD) has demonstrated its utility. However, the surgical technique has evolved over time, with mixed long-term results. We briefly review the evolution of antireflux surgery for the treatment of GERD, provide an update specific to the long-term efficacy of laparoscopic antireflux surgery (LARS), and analyze the factors predictive of a desirable outcome.
BACKGROUND
PubMed and Medline database searches were performed to identify articles regarding the laparoscopic treatment of GERD. Emphasis was placed on randomized control trials (RCTs) and reports with follow-up >1 year. Specific parameters addressed included operative technique, resolution of symptoms, complications, quality of life, division of short gastric vessels (SGVs), mesh repair, and approximation of the crura. Those studies specifically addressing follow-up of <1 year, the pediatric or elderly population, redo fundoplication, and repair of paraesophageal hernia and short esophagus were excluded.
MATERIALS AND METHODS
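The database search summarized in the methods abstract above can be sketched programmatically. The snippet below is a hedged illustration using Biopython's Entrez interface to PubMed; the query terms, result limit, and e-mail address are assumptions for illustration and do not reproduce the authors' actual search strategy.

```python
# Hedged sketch of a PubMed search along the lines described in the review methods.
# Query terms and filters are illustrative assumptions, not the authors' protocol.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # placeholder; NCBI requires a contact address

query = (
    '("gastroesophageal reflux"[MeSH Terms] OR GERD[Title/Abstract]) '
    'AND (laparoscopy[MeSH Terms] OR laparoscopic[Title/Abstract]) '
    'AND fundoplication[Title/Abstract] '
    'AND randomized controlled trial[Publication Type]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PMIDs, e.g. for manual screening against the inclusion criteria
```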
LARS has varied in technical approach through the years. Not until recently have more long-term, objective studies become available to allow for evidenced-based appraisals. Our review of the literature found no long-term difference in the rates of heartburn, gas-bloat, antacid use, or patient satisfaction between laparoscopic Nissen and Toupet fundoplication. In addition, several studies have shown that more patients had an abnormal pH profile following laparoscopic partial as opposed to total fundoplication. Conversely, dysphagia was more common following laparoscopic total versus partial fundoplication in 50% of RCTs at 12-month follow-up, though this resolved over time, being present in only 20% with follow-up >24 months. We confirmed that preoperative factors, such as hiatal hernia, atypical symptoms, poor antacid response, body mass index (BMI), and postoperative vomiting, are potential predictors of an unsatisfactory long-term outcome. Last, no trial disfavored division of the short gastric vessels (SGVs), closure of the crura, or mesh repair for hiatal defects.
RESULTS
LARS has significantly evolved over time. The laparoscopic total fundoplication appears to provide more durable long-term results than the partial approach, as long as the technical elements of the operation are respected. Division of the SGVs, closure of the crura, and the use of mesh for large hiatal defects positively impacts long-term outcome. Hiatal hernia, atypical symptoms, poor antacid response, body mass index (BMI), and postoperative vomiting are potential predictors of failure in LARS.
CONCLUSION
[ "Evaluation Studies as Topic", "Follow-Up Studies", "Fundoplication", "Gastroesophageal Reflux", "Humans", "Laparoscopy", "Time Factors" ]
3041027
null
null
null
null
RESULTS
[SUBTITLE] Evolution of Antireflux Surgery and Paradigm Shifts in LARS [SUBSECTION] The initial surgical approach to the refluxing patient began with Philip Allison in 1951, and his attempt at hiatal hernia repair.17 His thoracic approach, posterior crural repair, and left phrenic nerve crush would fall into disfavor because of the poor patient outcome and high recurrence of hiatal hernia. The most proper beginnings, therefore, began with Nissen's report on fundoplication in 1956.1 This technique of fundoplication, involving a 360° wrap and gastropexy for the treatment of hiatal hernias, was adopted shortly thereafter as an acceptable anti-reflux procedure within the surgical community. Later, variations of the technique were described by others, such as Dor from Marseille, who posited a partial anterior fundoplication for the treatment of achalasia in 1962.3 Although Nissen's procedure improved reflux symptoms, it soon became evident that some patients who underwent a total fundoplication were troubled by dysphagia, bloating, and the inability to belch, the so-called “gas-bloat syndrome.” To avoid these side effects, in 1963 André Toupet advocated for the creation of a posterior partial (270°) fundoplication, termed a “semi-fundoplicative maneuver.”2 Baue and Belsey18 and Hill19 would follow in 1967, with their approaches aimed at restoring the normal physiology of the lower esophageal sphincter (LES). In 1975, Vicente Guarner from Mexico described a posterior fundoplasty in which the fundus of the stomach was passed behind the esophagus, thus forming between the esophagus and the right aspect of the fundus a 120° angle in the left upper quadrant on an imaginary circle. This procedure did not require division of the SGVs.4 Subsequently, Rossetti5 proposed in 1977 a revision that included a modified total fundoplication with minimal-dissection of the cardia and no division of the SGVs. Unfortunately, these modifications still did not address recognized issues of postoperative dysphagia and the “gas-bloat” syndrome, for which Donahue and Bombeck pursued, with success, a “floppy Nissen.”20 DeMeester would soon recognize the benefits of this approach and publish his modifications and successful outcomes in 1986.12 In the 1990s, the advent of laparoscopic surgery revolutionized the surgical approach to the patient with GERD when the laparoscopic Nissen fundoplication was described by Dallemagne in 1991 (Figure 1).6 Soon thereafter, many modifications of the laparoscopic Nissen fundoplication were developed, by replicating laparoscopically the original modifications of the open techniques. Evolution of fundoplication. From 1991, then, the application of laparoscopic Nissen fundoplication for the treatment of GERD would undergo its own paradigm shifts. During the early stages of LARS, many agreed that those patients with esophageal dysmotility were at risk for dysphagia, and a “tailored approach” came into vogue.21 However, the report in 1999 by Horvath et al14 would dispel this myth, as they demonstrated reflux in 46% after partial fundoplication. The studies by Fibbe et al in 200122 and Patti et al in 200423 would confirm these findings, and the “tailored approach” would become disfavored. 
Finally, in an attempt to address an elevated prevalence of early dysphagia following total fundoplication, the “floppy” laparoscopic Nissen fundoplication seems to have become favored by many as the standard surgical procedure regardless of preoperative esophageal function, with a partial fundoplication reserved for those with achalasia and scleroderma without esophageal peristalsis (Figure 2).24 Paradigm shifts in laparoscopic antireflux surgery. The initial surgical approach to the refluxing patient began with Philip Allison in 1951, and his attempt at hiatal hernia repair.17 His thoracic approach, posterior crural repair, and left phrenic nerve crush would fall into disfavor because of the poor patient outcome and high recurrence of hiatal hernia. The most proper beginnings, therefore, began with Nissen's report on fundoplication in 1956.1 This technique of fundoplication, involving a 360° wrap and gastropexy for the treatment of hiatal hernias, was adopted shortly thereafter as an acceptable anti-reflux procedure within the surgical community. Later, variations of the technique were described by others, such as Dor from Marseille, who posited a partial anterior fundoplication for the treatment of achalasia in 1962.3 Although Nissen's procedure improved reflux symptoms, it soon became evident that some patients who underwent a total fundoplication were troubled by dysphagia, bloating, and the inability to belch, the so-called “gas-bloat syndrome.” To avoid these side effects, in 1963 André Toupet advocated for the creation of a posterior partial (270°) fundoplication, termed a “semi-fundoplicative maneuver.”2 Baue and Belsey18 and Hill19 would follow in 1967, with their approaches aimed at restoring the normal physiology of the lower esophageal sphincter (LES). In 1975, Vicente Guarner from Mexico described a posterior fundoplasty in which the fundus of the stomach was passed behind the esophagus, thus forming between the esophagus and the right aspect of the fundus a 120° angle in the left upper quadrant on an imaginary circle. This procedure did not require division of the SGVs.4 Subsequently, Rossetti5 proposed in 1977 a revision that included a modified total fundoplication with minimal-dissection of the cardia and no division of the SGVs. Unfortunately, these modifications still did not address recognized issues of postoperative dysphagia and the “gas-bloat” syndrome, for which Donahue and Bombeck pursued, with success, a “floppy Nissen.”20 DeMeester would soon recognize the benefits of this approach and publish his modifications and successful outcomes in 1986.12 In the 1990s, the advent of laparoscopic surgery revolutionized the surgical approach to the patient with GERD when the laparoscopic Nissen fundoplication was described by Dallemagne in 1991 (Figure 1).6 Soon thereafter, many modifications of the laparoscopic Nissen fundoplication were developed, by replicating laparoscopically the original modifications of the open techniques. Evolution of fundoplication. From 1991, then, the application of laparoscopic Nissen fundoplication for the treatment of GERD would undergo its own paradigm shifts. During the early stages of LARS, many agreed that those patients with esophageal dysmotility were at risk for dysphagia, and a “tailored approach” came into vogue.21 However, the report in 1999 by Horvath et al14 would dispel this myth, as they demonstrated reflux in 46% after partial fundoplication. 
The studies by Fibbe et al in 200122 and Patti et al in 200423 would confirm these findings, and the “tailored approach” would become disfavored. Finally, in an attempt to address an elevated prevalence of early dysphagia following total fundoplication, the “floppy” laparoscopic Nissen fundoplication seems to have become favored by many as the standard surgical procedure regardless of preoperative esophageal function, with a partial fundoplication reserved for those with achalasia and scleroderma without esophageal peristalsis (Figure 2).24 Paradigm shifts in laparoscopic antireflux surgery. [SUBTITLE] Long-Term Results of LARS [SUBSECTION] From 1997 through 2009, 13 randomized control trials (RCTs) were identified that assessed the outcome of LARS (Table 1).25–37 Four26,28,32,36 of these 13 RCTs demonstrated follow-up of 60 months or more. The surgical approach varied considerably: Nissen versus anterior fundoplication, (3) Nissen-Rossetti versus anterior fundoplication, (3) Nissen versus Toupet, (6) Toupet versus anterior fundoplication, (2) and Nissen versus Nissen-Rossetti. (1) Two studies included were longer-term assessments of the same patient population that had been previously reported at least 12 months postoperatively. Initial sample size ranged from 39 to 200 patients. Long-term Outcome of Laparoscopic Antireflux Surgery for Gastroesophageal Reflux Disease Based on Randomized Control Trials Indicates significance, P≤0.05. Dysphagia based on visual analogue scale. Heartburn based on visual analogue scale. Not stratified by operative technique. Distinction between dysphagia to solids or liquids not made. Trend toward significance. Significance not reported. RCT=Randomized control trial; NS=No significance between groups; Dashes=no data recorded. There was no long-term difference for relief of heartburn in any study that evaluated total versus partial fundoplication, suggesting that both techniques equally resolve typical symptoms of GERD. Hagedorn et al25 and Engström et al26 found a higher prevalence of postoperative heartburn in patients after the anterior fundoplication as opposed to the posterior fundoplication, both at 12 months (P<0.001) and 60 months (60% versus 24%, P<0.0001). All trials reported on the long-term incidence of dysphagia following LARS. Assessment for dysphagia was performed in a varied fashion, though most often by visual analogue scale (VAS) or questionnaire. Not all studies reported dysphagia to solids and liquids independently. In 2 of 3 studies assessing Nissen versus anterior fundoplication and in the study assessing Nissen-Rossetti versus anterior fundoplication, patients at long-term follow-up displayed an increased prevalence of dysphagia to solids.27–29 This phenomenon was mirrored in some respects by those trials evaluating Nissen fundoplication versus Toupet, whereby 2 of 6 studies30,31 again demonstrated an increased prevalence of dysphagia in the Nissen group. No difference was demonstrated between anterior fundoplication and Toupet, or Nissen and Nissen-Rossetti. Of note, in only 1 of 5 trials of longer than 2-year follow-up comparing total versus partial fundoplication was there a significant difference in dysphagia.28 Nine of the 13 trials reported on gas-bloat, with none demonstrating a difference between groups. Postoperative pH testing was not commonly reported, as data were reported in only 3 of the 13 trials. 
In 2 of these, there appeared to be a protective effect against reflux for total rather than partial fundoplication.29,30 Both of these studies had only shorter follow-up, thus further long-term results are difficult to discern. None of these studies reporting on postoperative pH testing reported postoperative use of histamine blockers or proton pump inhibitors (PPIs), though 6 others did. Of these, only the study by Engström et al26 noted any difference in antisecretory use, which occurred at 60 months with a higher prevalence in anterior fundoplication versus Toupet, coinciding with a higher prevalence of heartburn. No difference was found for total versus partial fundoplication in the postoperative use of antacid therapy. Last, 10 of the 13 randomized control trials reported on patient satisfaction, either by percentage satisfied or score. Again, Engström et al26 noted a difference in their study at 60 months, with more patients having undergone anterior fundoplication versus Toupet demonstrating their dissatisfaction. However, in all trials comparing total versus partial fundoplication, there was no difference in patient satisfaction, which remained high at long-term follow-up regardless of the approach. LARS appears to be associated with minimal morbidity, most often in less than 5% of cases. The group studied by Spence et al29experienced the highest rate of morbidity in any single category; however, the 7% morbidity in the Nissen-Rossetti group did not reach significance compared with the 2% experienced in the anterior fundoplication group. The necessity of reoperation was similarly infrequent throughout the studies, with the exception being the study of Strate et al.31 This group reported 19 reoperations for patient dissatisfaction (10% of study group, 15 who underwent Nissen and 4 who underwent Toupet, P<0.05). All patients were found to have a wrap herniation from disrupted hiatoplasty. From 1997 through 2009, 13 randomized control trials (RCTs) were identified that assessed the outcome of LARS (Table 1).25–37 Four26,28,32,36 of these 13 RCTs demonstrated follow-up of 60 months or more. The surgical approach varied considerably: Nissen versus anterior fundoplication, (3) Nissen-Rossetti versus anterior fundoplication, (3) Nissen versus Toupet, (6) Toupet versus anterior fundoplication, (2) and Nissen versus Nissen-Rossetti. (1) Two studies included were longer-term assessments of the same patient population that had been previously reported at least 12 months postoperatively. Initial sample size ranged from 39 to 200 patients. Long-term Outcome of Laparoscopic Antireflux Surgery for Gastroesophageal Reflux Disease Based on Randomized Control Trials Indicates significance, P≤0.05. Dysphagia based on visual analogue scale. Heartburn based on visual analogue scale. Not stratified by operative technique. Distinction between dysphagia to solids or liquids not made. Trend toward significance. Significance not reported. RCT=Randomized control trial; NS=No significance between groups; Dashes=no data recorded. There was no long-term difference for relief of heartburn in any study that evaluated total versus partial fundoplication, suggesting that both techniques equally resolve typical symptoms of GERD. Hagedorn et al25 and Engström et al26 found a higher prevalence of postoperative heartburn in patients after the anterior fundoplication as opposed to the posterior fundoplication, both at 12 months (P<0.001) and 60 months (60% versus 24%, P<0.0001). 
Factors Predictive of LARS Failure
Various pre- and postoperative features are commonly implicated in the failure of LARS (Table 2).14,31,38–50 In those studies identifying such potential predictors, the laparoscopic approach to antireflux intervention was varied.
Preoperative disorders of peristalsis appeared to be the weakest predictor of LARS failure, reaching statistical significance in only 1 of 5 studies.38 By contrast, the preoperative presence of atypical symptoms, poor response to preoperative antacid therapy, and postoperative vomiting showed more pronounced predictive value, proving significant in over 60% of the included studies. Body mass index (BMI) >30 or 35 was correlated with poor operative outcome in 33% of studies. Finally, the presence of a hiatal hernia >3 cm was a predictor of failure in 38% of the studies.
Table 2. Predictors of Laparoscopic Antireflux Surgery Failure. Footnotes: significant difference at least P<0.05; hiatal hernia >3 cm; size of hiatal hernia not mentioned, or <2 cm; 100% of patients with BMI >38 had poor outcome; significance not reported; study based on postoperative complications. Predictors are defined as failure of physiologic improvement, patient dissatisfaction, or need for reoperation. BMI=body mass index; NS=not significant; dashes=no data recorded.
Over the years, specific details of operative technique that are independent of the surgeon's skill and that may affect outcome have been identified (Table 3).37,38,40,51–61 The studies we reviewed had large sample sizes, adequate follow-up, and addressed each technical variable specifically. Although some differences were noted in the predictive value of each technical variable, no study disfavored division of the SGVs, closure of the crura, or mesh repair of hiatal defects.37,38,40,51–55
Table 3. Operative Technique and Laparoscopic Antireflux Surgery. Footnotes: statistically significant. N=number of patients; ND=no difference; dashes=no data recorded.
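To make the relative weight of these predictors easier to scan, the significance rates quoted above can be collected in one place. The short sketch below simply restates the figures given in the text; it is illustrative only and is not a re-analysis of the primary trials, and the variable names are the author's own labels rather than terms from the cited studies.

# Significance rates for predictors of LARS failure, as quoted in the text above.
# Values are the review's summary figures, not new data.
reported_rates = {
    "preoperative peristaltic disorder": 1 / 5,        # significant in 1 of 5 studies
    "atypical preoperative symptoms": 0.60,            # >60% of included studies
    "poor response to preoperative antacids": 0.60,    # >60% of included studies
    "postoperative vomiting": 0.60,                    # >60% of included studies
    "BMI > 30-35": 0.33,                               # 33% of studies
    "hiatal hernia > 3 cm": 0.38,                      # 38% of studies
}

# Print the predictors from strongest to weakest reported rate.
for predictor, rate in sorted(reported_rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{predictor:40s} significant in ~{rate:.0%} of studies")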
CONCLUSION
Since its beginnings, LARS has been slow to assume an evidence-based understanding of fundoplication technique, and the approach has meandered from variation to variation. Although we are now afforded more objectivity by way of esophageal testing and a handful of long-term RCTs, considerable inconsistency remains in the laparoscopic approach, in pre- and postoperative evaluation, and in the reporting of study findings for the patient with GERD. Nevertheless, it appears that laparoscopic total fundoplication affords more durable long-term results than partial fundoplication, provided that potential predictors of failure are identified early and that the technical elements of the operation are respected.
[ "INTRODUCTION", "Evolution of Antireflux Surgery and Paradigm Shifts in LARS", "Long-Term Results of LARS", "Factors Predictive of LARS Failure" ]
[ "Rudolph Nissen was the first to pioneer antireflux surgery in 1956.1 His initial 360° approach would subsequently come to be modified by André Toupet,2 Jacques Dor,3 Vicente Guarner,4 and Mario Rossetti.5 The premise for the modification of Nissen's original technique was to address various complications unique to antireflux surgery (such as dysphagia and gas-bloat syndrome) and disease processes of the esophagus (such as disorders of motility).\nIn 1991, the laparoscopic approach to antireflux surgery was moved forward by Bernard Dallemagne.6 By the time that Dallemagne and others were publishing their first reports of laparoscopic antireflux surgery (LARS) in the early 1990s, the utility in controlling gastroesophageal reflux disease (GERD) by open fundoplication had been established.7–9 However, even after 3 decades of antireflux surgery, much contention persisted as to which variation of fundoplication to use.10–12 Unfortunately, this contention has carried over to the laparoscopic era of antireflux surgery, and seems to persist to this day.13–15 Varin et al,16 in their recent metaanalysis of total versus partial fundoplication for GERD, concluded that many trials are of insufficient quality, are lacking in objectivity, and are too heterogeneous to reliably come to a serious consensus on the best operative technique and their predictors of success. Until larger, more objective studies come forth, Varin et al furthermore concluded that the laparoscopic antireflux approach be tailored to the surgeon's comfort level, based on evidence that still remains limited.\nThe goal of this review is to provide an additional avenue of clarity regarding paradigm shifts in LARS, long-term outcomes of laparoscopic fundoplication, and predictors of a successful surgical outcome.", "The initial surgical approach to the refluxing patient began with Philip Allison in 1951, and his attempt at hiatal hernia repair.17 His thoracic approach, posterior crural repair, and left phrenic nerve crush would fall into disfavor because of the poor patient outcome and high recurrence of hiatal hernia. The most proper beginnings, therefore, began with Nissen's report on fundoplication in 1956.1 This technique of fundoplication, involving a 360° wrap and gastropexy for the treatment of hiatal hernias, was adopted shortly thereafter as an acceptable anti-reflux procedure within the surgical community. Later, variations of the technique were described by others, such as Dor from Marseille, who posited a partial anterior fundoplication for the treatment of achalasia in 1962.3\nAlthough Nissen's procedure improved reflux symptoms, it soon became evident that some patients who underwent a total fundoplication were troubled by dysphagia, bloating, and the inability to belch, the so-called “gas-bloat syndrome.” To avoid these side effects, in 1963 André Toupet advocated for the creation of a posterior partial (270°) fundoplication, termed a “semi-fundoplicative maneuver.”2 Baue and Belsey18 and Hill19 would follow in 1967, with their approaches aimed at restoring the normal physiology of the lower esophageal sphincter (LES). In 1975, Vicente Guarner from Mexico described a posterior fundoplasty in which the fundus of the stomach was passed behind the esophagus, thus forming between the esophagus and the right aspect of the fundus a 120° angle in the left upper quadrant on an imaginary circle. 
This procedure did not require division of the SGVs.4 Subsequently, Rossetti5 proposed in 1977 a revision that included a modified total fundoplication with minimal-dissection of the cardia and no division of the SGVs.\nUnfortunately, these modifications still did not address recognized issues of postoperative dysphagia and the “gas-bloat” syndrome, for which Donahue and Bombeck pursued, with success, a “floppy Nissen.”20 DeMeester would soon recognize the benefits of this approach and publish his modifications and successful outcomes in 1986.12 In the 1990s, the advent of laparoscopic surgery revolutionized the surgical approach to the patient with GERD when the laparoscopic Nissen fundoplication was described by Dallemagne in 1991 (Figure 1).6 Soon thereafter, many modifications of the laparoscopic Nissen fundoplication were developed, by replicating laparoscopically the original modifications of the open techniques.\nEvolution of fundoplication.\nFrom 1991, then, the application of laparoscopic Nissen fundoplication for the treatment of GERD would undergo its own paradigm shifts. During the early stages of LARS, many agreed that those patients with esophageal dysmotility were at risk for dysphagia, and a “tailored approach” came into vogue.21 However, the report in 1999 by Horvath et al14 would dispel this myth, as they demonstrated reflux in 46% after partial fundoplication. The studies by Fibbe et al in 200122 and Patti et al in 200423 would confirm these findings, and the “tailored approach” would become disfavored. Finally, in an attempt to address an elevated prevalence of early dysphagia following total fundoplication, the “floppy” laparoscopic Nissen fundoplication seems to have become favored by many as the standard surgical procedure regardless of preoperative esophageal function, with a partial fundoplication reserved for those with achalasia and scleroderma without esophageal peristalsis (Figure 2).24\nParadigm shifts in laparoscopic antireflux surgery.", "From 1997 through 2009, 13 randomized control trials (RCTs) were identified that assessed the outcome of LARS (Table 1).25–37 Four26,28,32,36 of these 13 RCTs demonstrated follow-up of 60 months or more. The surgical approach varied considerably: Nissen versus anterior fundoplication, (3) Nissen-Rossetti versus anterior fundoplication, (3) Nissen versus Toupet, (6) Toupet versus anterior fundoplication, (2) and Nissen versus Nissen-Rossetti. (1) Two studies included were longer-term assessments of the same patient population that had been previously reported at least 12 months postoperatively. Initial sample size ranged from 39 to 200 patients.\nLong-term Outcome of Laparoscopic Antireflux Surgery for Gastroesophageal Reflux Disease Based on Randomized Control Trials\nIndicates significance, P≤0.05.\nDysphagia based on visual analogue scale.\nHeartburn based on visual analogue scale.\nNot stratified by operative technique.\nDistinction between dysphagia to solids or liquids not made.\nTrend toward significance.\nSignificance not reported.\nRCT=Randomized control trial; NS=No significance between groups; Dashes=no data recorded.\nThere was no long-term difference for relief of heartburn in any study that evaluated total versus partial fundoplication, suggesting that both techniques equally resolve typical symptoms of GERD. 
Hagedorn et al25 and Engström et al26 found a higher prevalence of postoperative heartburn in patients after the anterior fundoplication as opposed to the posterior fundoplication, both at 12 months (P<0.001) and 60 months (60% versus 24%, P<0.0001).\nAll trials reported on the long-term incidence of dysphagia following LARS. Assessment for dysphagia was performed in a varied fashion, though most often by visual analogue scale (VAS) or questionnaire. Not all studies reported dysphagia to solids and liquids independently. In 2 of 3 studies assessing Nissen versus anterior fundoplication and in the study assessing Nissen-Rossetti versus anterior fundoplication, patients at long-term follow-up displayed an increased prevalence of dysphagia to solids.27–29 This phenomenon was mirrored in some respects by those trials evaluating Nissen fundoplication versus Toupet, whereby 2 of 6 studies30,31 again demonstrated an increased prevalence of dysphagia in the Nissen group. No difference was demonstrated between anterior fundoplication and Toupet, or Nissen and Nissen-Rossetti. Of note, in only 1 of 5 trials of longer than 2-year follow-up comparing total versus partial fundoplication was there a significant difference in dysphagia.28\nNine of the 13 trials reported on gas-bloat, with none demonstrating a difference between groups. Postoperative pH testing was not commonly reported, as data were reported in only 3 of the 13 trials. In 2 of these, there appeared to be a protective effect against reflux for total rather than partial fundoplication.29,30 Both of these studies had only shorter follow-up, thus further long-term results are difficult to discern. None of these studies reporting on postoperative pH testing reported postoperative use of histamine blockers or proton pump inhibitors (PPIs), though 6 others did. Of these, only the study by Engström et al26 noted any difference in antisecretory use, which occurred at 60 months with a higher prevalence in anterior fundoplication versus Toupet, coinciding with a higher prevalence of heartburn. No difference was found for total versus partial fundoplication in the postoperative use of antacid therapy.\nLast, 10 of the 13 randomized control trials reported on patient satisfaction, either by percentage satisfied or score. Again, Engström et al26 noted a difference in their study at 60 months, with more patients having undergone anterior fundoplication versus Toupet demonstrating their dissatisfaction. However, in all trials comparing total versus partial fundoplication, there was no difference in patient satisfaction, which remained high at long-term follow-up regardless of the approach.\nLARS appears to be associated with minimal morbidity, most often in less than 5% of cases. The group studied by Spence et al29experienced the highest rate of morbidity in any single category; however, the 7% morbidity in the Nissen-Rossetti group did not reach significance compared with the 2% experienced in the anterior fundoplication group. The necessity of reoperation was similarly infrequent throughout the studies, with the exception being the study of Strate et al.31 This group reported 19 reoperations for patient dissatisfaction (10% of study group, 15 who underwent Nissen and 4 who underwent Toupet, P<0.05). 
All patients were found to have a wrap herniation from disrupted hiatoplasty.", "Various pre- and postoperative features are commonly implicated in the failure of LARS (Table 2).14,31,38–50 In those studies identifying such potential predictors, the laparoscopic approach to antireflux intervention was varied. Preoperative disorders of peristalsis appeared to be the weaker predictor of LARS failure, reaching statistical significance in only 1 of 5 studies.38 On the contrary, the presence preoperatively of atypical symptoms, poor response to preoperative antacids, and postoperative vomiting, indicated a more pronounced predictive value, occurring in over 60% of included studies. Body mass index (BMI) >30 or 35 was correlated with poor operative outcome in 33% of studies. Finally, the presence of hiatal hernias >3cm was a predictor of failure in 38% of the studies.\nPredictors of Laparoscopic Antireflux Surgery Failure\nSignificant difference at least P<0.05.\nHiatal hernia >3 cm.\nSize of hiatal hernia not mentioned, or <2 cm.\n100% of patients with BMI >38 had poor outcome.\nSignificance not reported.\nStudy based on postoperative complications.\nPredictors are defined as demonstrating failure of physiologic improvement, patient dissatisfaction, or necessity of reoperation.\nBMI=Body Mass Index; NS=Not significant; Dashes=no data recorded.\nOver the years, specific details of the operative techniques that are independent of the skills of the surgeon and that may impact outcome have been identified (Table 3).37,38,40,51–61 The studies we reviewed had a large sample size, adequate follow-up, and addressed each technical variable specifically. Although some differences were noted as to the predictive value of each technical variable, no study disfavored division of the SGVs, closure of the crura, or mesh repair for hiatal defects.37,38,40,51–55\nOperative Technique and Laparoscopic Antireflux Surgery\nStatistically significant.\nN=Number of patients, ND=No difference.\nDashes=no data recorded." ]
[ null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "RESULTS", "Evolution of Antireflux Surgery and Paradigm Shifts in LARS", "Long-Term Results of LARS", "Factors Predictive of LARS Failure", "DISCUSSION", "CONCLUSION" ]
[ "Rudolph Nissen was the first to pioneer antireflux surgery in 1956.1 His initial 360° approach would subsequently come to be modified by André Toupet,2 Jacques Dor,3 Vicente Guarner,4 and Mario Rossetti.5 The premise for the modification of Nissen's original technique was to address various complications unique to antireflux surgery (such as dysphagia and gas-bloat syndrome) and disease processes of the esophagus (such as disorders of motility).\nIn 1991, the laparoscopic approach to antireflux surgery was moved forward by Bernard Dallemagne.6 By the time that Dallemagne and others were publishing their first reports of laparoscopic antireflux surgery (LARS) in the early 1990s, the utility in controlling gastroesophageal reflux disease (GERD) by open fundoplication had been established.7–9 However, even after 3 decades of antireflux surgery, much contention persisted as to which variation of fundoplication to use.10–12 Unfortunately, this contention has carried over to the laparoscopic era of antireflux surgery, and seems to persist to this day.13–15 Varin et al,16 in their recent metaanalysis of total versus partial fundoplication for GERD, concluded that many trials are of insufficient quality, are lacking in objectivity, and are too heterogeneous to reliably come to a serious consensus on the best operative technique and their predictors of success. Until larger, more objective studies come forth, Varin et al furthermore concluded that the laparoscopic antireflux approach be tailored to the surgeon's comfort level, based on evidence that still remains limited.\nThe goal of this review is to provide an additional avenue of clarity regarding paradigm shifts in LARS, long-term outcomes of laparoscopic fundoplication, and predictors of a successful surgical outcome.", "PubMed and Medline database searches were performed to obtain articles regarding both open and laparoscopic anti-reflux surgery for the treatment of GERD with follow-up >1 year. The searched articles appeared in print between January 1951 and December 2009. All articles appeared in peer-reviewed journals. The key words utilized in the search are as follows: “surgery/gastroesophageal reflux disease,” “laparoscopic surgery/gastroesophageal reflux disease,” “laparoscopic surgery/gastroesophageal reflux disease/long-term results,” “laparoscopic surgery/gastroesophageal reflux disease/failure,” and “laparoscopic surgery/gastroesophageal reflux disease/randomized trial.” Additional articles were obtained via a manual search of the references included in the essential articles. Articles related to the following were excluded: short esophagus, paraesophageal hernia, redo-fundoplication, those converted to open, trials with <1 year of follow-up, and those specific to either elderly or pediatric patients. The following were parameters that were analyzed: criteria for patient selection, partial versus total fundoplication, changes in the standard of practice, and long-term results or effectiveness with respect to the amelioration of symptoms, recurrence of symptoms, and anatomic failure.", "[SUBTITLE] Evolution of Antireflux Surgery and Paradigm Shifts in LARS [SUBSECTION] The initial surgical approach to the refluxing patient began with Philip Allison in 1951, and his attempt at hiatal hernia repair.17 His thoracic approach, posterior crural repair, and left phrenic nerve crush would fall into disfavor because of the poor patient outcome and high recurrence of hiatal hernia. 
DISCUSSION
Antireflux surgery has been an evolving process for over half a century. Changes in technique were aimed at improving patient outcome and satisfaction. Starting with the laparoscopic age of antireflux surgery in 1991, paradigms in technique and patient approach shifted along the lines of technological advance. Oftentimes, information gleaned from postoperative follow-up covered <1 year, limiting the surgeon's perspective on the true outcome of a particular modification in technique. Unfortunately, this methodology carried over from the previous 30 or so years of antireflux surgery, during which objective evidence in the form of RCTs was scarce. Presently, we are on the cusp of 2 full decades of experience with LARS. Though the “tailored approach” has fallen into disfavor, there is still no true international consensus on the basic technique of total versus partial fundoplication; however, some evidence suggests that a partial fundoplication ought to be reserved for those with achalasia and GERD secondary to scleroderma.24 Indeed, LARS has come full circle to the same concept of a “floppy” Nissen proposed by Donahue8 and DeMeester12 before laparoscopy was even introduced.
Our review of RCTs of long-term outcome in LARS was hampered by poor standardization in the reporting of morbidity and reoperation, as well as in outcome assessment.
Studies relied heavily on subjective determinations of postoperative dysphagia, gas-bloat, heartburn, and overall patient satisfaction, making comparison of these parameters between trials arduous. The most consistent objectivity we found was in the analysis of postoperative pH testing, manometry, and use of H2-blockers or PPIs, though the intertrial consistency of these objective parameters is questionable. Nonetheless, particular patterns are apparent and consistent.14,24,62 First, laparoscopic total and partial fundoplication result in equivalently low rates of heartburn at long-term follow-up. Only one group undertook a long-term randomized comparison of 2 partial approaches to fundoplication: comparing anterior versus posterior fundoplication, Engström et al26 demonstrated more heartburn after the anterior approach, which persisted to 60 months. Not surprisingly, patients in the anterior fundoplication group were more likely to require medical management of their recurrent disease and to be significantly dissatisfied with their results. Second, most RCTs demonstrated a higher incidence of postoperative dysphagia with total fundoplication than with partial fundoplication, yet this persisted in only 1 of 5 trials with greater than 24 months of follow-up.28 Third, the few studies that reported postoperative esophageal pH testing data showed that partial fundoplication resulted in a higher prevalence of acid reflux than total fundoplication at long-term follow-up.29,30 This is in line with the current understanding that dysphagia after total fundoplication is more common, yet diminishes over time, affording the patient a greater likelihood of freedom from acid reflux with no difference in heartburn or gas-bloat.
We also sought to identify the factors predictive of LARS failure (Tables 2 and 3). Many of the studies suggested that large hiatal hernias, atypical symptoms, poor response to medical reflux management, an elevated BMI, and postoperative vomiting are potential predictors of long-term failure. Interestingly, only the Bell et al study,38 which used a posterior fundoplication, noted an increased rate of failure for those with disorders of peristalsis, in line with the notion that a tailored approach is unnecessary. In addition, the DeMeester score, the presence of Barrett's esophagus and esophagitis, and a defective LES have also been implicated as independent predictors in some series, though this is not uniform across studies.14,37,41,48 Similarly, variations in technique, including division of the SGVs, closure of the crura, and prosthetic repair of the hiatus, may influence long-term outcomes. Indeed, no study identified a benefit to leaving the SGVs intact, and Bell et al38 and Soper et al40 found statistically significant differences favoring their division. Shorter-term studies and the metaanalysis by Catarci et al uphold this conclusion.63,64 Last, some reports have focused on the utility of prosthetic closure of the hiatus, with a consensus in favor of using mesh for this purpose.51–55 However, a recent case series by Stadlhuber et al65 places a caveat on the use of mesh, and given their findings the authors propose multicenter prospective studies to further validate its use.
Finally, it is the authors' position that tailoring of technique to weak peristalsis should not be routinely practiced except in the face of achalasia or GERD secondary to scleroderma.
In addition, we favor complete mobilization of the gastric fundus and meticulous approximation of the crural pillars. Our approach reflects our best attempt to interpret the limited body of consistent, objective literature on long-term outcomes of LARS.", "Since its beginnings, LARS has been slow to adopt an evidence-based understanding of fundoplication technique, and the approach has meandered from variation to variation. Although we are now afforded more objectivity by way of esophageal testing and a handful of long-term RCTs, clear discrepancies remain in the laparoscopic approach, in pre- and postoperative analysis, and in the reporting of study findings regarding the patient with GERD. Nevertheless, it appears that laparoscopic total fundoplication affords more durable long-term results than partial fundoplication, provided that potential predictors of failure are identified early and that the technical elements of the operation are respected." ]
[ null, "materials|methods", "results", null, null, null, "discussion", "conclusions" ]
[ "GERD", "Gastroesophageal reflux disease", "LARS", "Laparoscopic antireflux surgery", "Randomized controlled trials", "RCT", "Nissen fundoplication" ]
Outcomes of minimally invasive myotomy for the treatment of achalasia in the elderly.
21333185
An increasing number of elderly patients diagnosed with achalasia are being referred for minimally invasive myotomy. Little data are available about the operative outcomes in this population. The objective of this study was to review our experience with this procedure in an elderly population.
BACKGROUND
A retrospective review was performed of 51 consecutive patients, 65 years of age or older, diagnosed with achalasia who underwent a minimally invasive myotomy at our institution. Prior therapies, perioperative outcomes, and postoperative interventions were also analyzed.
METHODS
Of the 51 patients, 28 (55%) had undergone prior endoscopic therapy, and 2 patients (7%) had undergone a prior myotomy. Mean duration of symptoms was 10.9 years (range, 0.5 to 50). No perioperative mortality occurred, and the median hospital stay was 3 days. Two patients (3.8%) had complications: one gastric mucosal injury and one case of atelectasis. Eleven patients (21%) required additional therapy postoperatively. Symptom improvement was described in all patients.
RESULTS
Laparoscopic Heller myotomy can safely be performed in elderly patients, providing significant symptom relief. No evidence suggests that surgery should not be considered a first-line treatment. Advanced age does not appear to adversely affect outcomes of laparoscopic Heller myotomy.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Esophageal Achalasia", "Esophagus", "Female", "Follow-Up Studies", "Humans", "Laparoscopy", "Male", "Minimally Invasive Surgical Procedures", "Muscle, Smooth", "Retrospective Studies", "Treatment Outcome" ]
3041028
null
null
null
null
RESULTS
Fifty-one patients, 29 women and 22 men with a mean age of 73.14 years (range 65 to 89), underwent laparoscopic Heller myotomy at our institution. Twenty-eight patients (55%) had undergone prior therapy (P=0.075). Eight patients (29%) received prior pneumatic dilation, 7 (25%) received prior Botox, and 13 (46%) had both, with the interval between this therapy and the laparoscopic myotomy averaging 12 months (range, 18 to 36). Two (7%) patients had undergone previous myotomies more than 40 years earlier with no improvement, and both patients had been treated with multiple pneumatic dilations and Botox injections (Table 1). Demographic and Clinical Characteristics of the Study Population All operations were begun laparoscopically, with one conversion to open myotomy. The decision for conversion was independent of the myotomy. In addition to an esophageal myotomy, this patient was having a malpositioned Nissen fundoplication taken down. The surgeon's choice to convert to open was based on the discovery of dense scar tissue that prevented the release of the posterior portion of the wrap from the esophagus. The median hospital stay was 3 days (range, 1 to 26). Complications occurred in 2 (3.8%) patients; 1 patient had an intraoperative gastric perforation, which was recognized and immediately repaired, and 1 had postoperative atelectasis that required discharge on supplemental oxygen. There was no perioperative mortality. Forty-eight patients (92.3%) had a partial fundoplication procedure. Thirteen (27%) received a Dor fundoplication, while 35 (73%) received a Toupet fundoplication. The judgment not to complete a fundoplication was based on the finding of dense scarring from a previous fundoplication or complicated anatomy. There was no difference in surgical outcome or patient satisfaction between patients who had a Dor or a Toupet fundoplication (Table 2). Perioperative Outcomes Of the 42 patients with documented follow-up (mean time 42 months, range 24 to 53), all claimed overall symptom improvement, although a minority had identifiable foods that they restricted. The most common limitations involved solid foods; meat and hot dogs were specifically noted, which caused significant “sticking.” Despite these notations, each patient claimed overall symptom improvement. Eleven patients (22%) required additional therapy. Five patients (46%) underwent additional pneumatic dilations, 4 (36%) received pneumatic dilation and Botox injections, and 1 (9%) received Botox alone for recurrent dysphagia, at a mean of 30 months (range, 6 to 53) after surgery. One patient (9%) underwent further surgical intervention, receiving an esophagectomy 4 years later. This patient had an extremely complicated history of achalasia since adolescence with severe progression of his symptoms. Of those 11 patients, 8 (73%) had undergone therapy before the laparoscopic myotomy as well (P<0.001) (Table 3). Postoperative Outcomes
CONCLUSION
Our data suggest that laparoscopic Heller myotomy should be offered as a first-line treatment for elderly patients with achalasia who are fit for surgery. The procedure has proven to be safe and effective in this population. Prior endoscopic therapy does not appear to adversely affect postoperative symptom improvement, but it may help predict those patients who might require further therapy postoperatively.
[ "INTRODUCTION", "Operative Technique", "Statistical Analysis" ]
[ "Although achalasia is the most common functional disorder of the esophageal body and lower esophageal sphincter (LES), it occurs rarely, with a prevalence of 1/10 000 and an incidence between 0.03 and 1/100 000 per year.1–7 Achalasia affects both sexes equally and may occur at any age. However, the incidence peaks in the third and seventh decade of life.1,4 Although the cause remains unknown, the disease results from progressive degeneration of the plexus myentericus, resulting in a lack of inhibitory neurons needed for coordination of lower esophageal sphincter relaxation and peristaltic contractions of the esophagus.4 While investigators suggest genetic, autoimmune, or infectious origin of the neural damage, the exact cause remains to be determined.4,5\nClinical symptoms including dysphagia, chest pain, and regurgitation are not specific to achalasia, which may result in a 2-year to 3-year delay in diagnosis from the beginning of symptoms.4 Severity of the disease has also not been found to be linked to symptoms. Further diagnostic tests, including radiographic and manometric findings are used to confirm the clinical diagnosis.6 Esophageal manometry remains the primary diagnostic tool for achalasia. An abnormal pressure measurement pattern is found in patients with achalasia.6 Left untreated, most patients will eventually develop a dilated “mega-esophagus” with severe bolus transit impairment. Therefore, the goal in the management of achalasia is early diagnosis and treatment before reaching this end-stage phase when dysphagia may not be amenable to treatment other than esophagectomy.7–10\nCuring achalasia and reinstating esophageal peristalsis implies restoring the neurons of the myenteric plexi. Until such treatment becomes available, all interventions currently aim at facilitating bolus transit across the LES. Therapies include pharmacotherapy, chemical paralysis through Botox (botulinum toxin A) injection, mechanical dilation, and surgical myotomy. The order in which these therapies are recommended or performed is the subject of debate.11–12 Pharmacotherapy provides short-lived results, incomplete relief, and efficacy that decreases with time. Therefore, it is generally not considered a good treatment option for long-term relief of symptoms.11,12 Surgical myotomy produces the most durable long-term results.7 Minimally invasive myotomies offer increasingly less morbidity, postoperative pain, and facilitate an expeditious return to daily activities.7,11 Prior to the laparoscopic myotomy, operations for achalasia were done either through laparotomy or thoracotomy. 
The higher risks associated with open surgery dissuaded many from recommending surgical intervention as first-line treatment, especially to the elderly with their associated comorbidities.3,13,14 In the relatively short time since the first laparoscopic myotomy, patients undergoing minimally invasive myotomy have demonstrated excellent symptomatic outcomes with low morbidity and mortality.7 This has resulted in an increased preference of surgery as the initial treatment strategy.7–14 While an increasing number of older patients are being referred for laparoscopic Heller myotomy as the first-line treatment, few studies have followed the impact age has on surgical treatment of achalasia.3,13,14 The goal of this investigation was to review the outcomes of minimally invasive myotomies for achalasia in the elderly at our institution.", "The laparoscopic Heller myotomy is performed using a 5-port technique, as previously described.7 Direct visualization is used to enter the abdominal cavity 3 inches above the umbilicus. Four additional ports are placed. The gastrohepatic ligament is opened widely using Harmonic shears. The dissection then is carried up and down the right and left crura and into the mediastinum for adequate mobilization of the esophagus into the peritoneal cavity. Care is taken to identify and preserve both the anterior and posterior vagus nerves throughout the procedure. The short gastric vessels are divided, freeing the fundus, and the gastroesophageal fat pad is removed.\nHook cautery on a low wattage is used to divide the longitudinal and then circular fibers of the esophagus, completing a myotomy approximately 10cm up the esophagus and 4cm down onto the anterior gastric wall. To confirm an adequate myotomy, intraoperative endoscopy is undertaken. An adequate myotomy is confirmed when the endoscope passes easily into the stomach, the gastroesophageal junction opens easily with endoscopic air insufflation, and transillumination of the myotomized segment confirms muscle division well above and below the Z-line. A partial fundoplication was performed in the majority of the patients.", "Data were maintained on an Excel spreadsheet (Microsoft, Redmond, WA, USA). Statistical analysis was performed using SPSS software (Version 10; SPSS, Inc, Chicago, IL) and included x2 and Student t tests. Data are presented as means and percentages for categorical data, means, and standard deviations for continuous data. P<0.05 was used to determine statistical significance." ]
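The Statistical Analysis passage above names the tests (chi-square and Student t) and the P<0.05 threshold but not the form of the calculations. As a rough, non-authoritative sketch only, the snippet below shows how such comparisons are commonly computed; the counts and measurements are hypothetical placeholders rather than the study's data, and SciPy stands in here for the SPSS software the authors actually used.

    # Minimal sketch, assuming hypothetical data: the kind of chi-square and
    # Student t tests described in the Statistical Analysis section above.
    # All numbers are placeholders, NOT the study's data; the authors used SPSS.
    from scipy import stats

    # Chi-square test on a 2x2 contingency table, e.g. prior endoscopic therapy
    # (rows) versus need for additional therapy after myotomy (columns).
    table = [[12, 8],   # hypothetical counts: prior therapy, further treatment yes/no
             [5, 15]]   # hypothetical counts: no prior therapy
    chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

    # Student t test comparing a continuous variable (e.g. preoperative LES
    # pressure in mm Hg) between two groups, again with placeholder values.
    group_a = [35.0, 42.0, 28.0, 39.0, 45.0]
    group_b = [24.0, 30.0, 27.0, 22.0, 29.0]
    t_stat, p_t = stats.ttest_ind(group_a, group_b)

    alpha = 0.05  # the significance threshold reported in the paper
    print(f"chi-square p = {p_chi2:.3f}; t test p = {p_t:.3f}; alpha = {alpha}")

With subgroups as small as those reported here, a Fisher exact test (stats.fisher_exact) would be the usual substitute for chi-square when expected cell counts are low; whether the authors made that substitution is not stated.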
[ null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Operative Technique", "Statistical Analysis", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Although achalasia is the most common functional disorder of the esophageal body and lower esophageal sphincter (LES), it occurs rarely, with a prevalence of 1/10 000 and an incidence between 0.03 and 1/100 000 per year.1–7 Achalasia affects both sexes equally and may occur at any age. However, the incidence peaks in the third and seventh decade of life.1,4 Although the cause remains unknown, the disease results from progressive degeneration of the plexus myentericus, resulting in a lack of inhibitory neurons needed for coordination of lower esophageal sphincter relaxation and peristaltic contractions of the esophagus.4 While investigators suggest genetic, autoimmune, or infectious origin of the neural damage, the exact cause remains to be determined.4,5\nClinical symptoms including dysphagia, chest pain, and regurgitation are not specific to achalasia, which may result in a 2-year to 3-year delay in diagnosis from the beginning of symptoms.4 Severity of the disease has also not been found to be linked to symptoms. Further diagnostic tests, including radiographic and manometric findings are used to confirm the clinical diagnosis.6 Esophageal manometry remains the primary diagnostic tool for achalasia. An abnormal pressure measurement pattern is found in patients with achalasia.6 Left untreated, most patients will eventually develop a dilated “mega-esophagus” with severe bolus transit impairment. Therefore, the goal in the management of achalasia is early diagnosis and treatment before reaching this end-stage phase when dysphagia may not be amenable to treatment other than esophagectomy.7–10\nCuring achalasia and reinstating esophageal peristalsis implies restoring the neurons of the myenteric plexi. Until such treatment becomes available, all interventions currently aim at facilitating bolus transit across the LES. Therapies include pharmacotherapy, chemical paralysis through Botox (botulinum toxin A) injection, mechanical dilation, and surgical myotomy. The order in which these therapies are recommended or performed is the subject of debate.11–12 Pharmacotherapy provides short-lived results, incomplete relief, and efficacy that decreases with time. Therefore, it is generally not considered a good treatment option for long-term relief of symptoms.11,12 Surgical myotomy produces the most durable long-term results.7 Minimally invasive myotomies offer increasingly less morbidity, postoperative pain, and facilitate an expeditious return to daily activities.7,11 Prior to the laparoscopic myotomy, operations for achalasia were done either through laparotomy or thoracotomy. 
The higher risks associated with open surgery dissuaded many from recommending surgical intervention as first-line treatment, especially to the elderly with their associated comorbidities.3,13,14 In the relatively short time since the first laparoscopic myotomy, patients undergoing minimally invasive myotomy have demonstrated excellent symptomatic outcomes with low morbidity and mortality.7 This has resulted in an increased preference of surgery as the initial treatment strategy.7–14 While an increasing number of older patients are being referred for laparoscopic Heller myotomy as the first-line treatment, few studies have followed the impact age has on surgical treatment of achalasia.3,13,14 The goal of this investigation was to review the outcomes of minimally invasive myotomies for achalasia in the elderly at our institution.", "After institutional review board consent was obtained, the medical records of 52 consecutive patients aged 65 years or older undergoing minimally invasive Heller myotomy for achalasia were retrospectively reviewed. The diagnosis of achalasia was confirmed by Barium esophagogram, demonstrating the classic appearance of achalasia (proximal esophageal dilation with a distal “bird's peak”), and esophageal manometry. The most common present symptom was dysphagia (90%). Mean duration of symptoms for the entire patient cohort was 10.9 years (minimum 0.5 years - maximum 50 years). Prior therapies for the treatment of achalasia were documented (pneumatic dilations, Botox, prior myotomy), and the postoperative clinical outcomes were analyzed. Demographic data were obtained. Outcome variables included perioperative morbidity and mortality, symptom improvement, and postoperative interventions.\n[SUBTITLE] Operative Technique [SUBSECTION] The laparoscopic Heller myotomy is performed using a 5-port technique, as previously described.7 Direct visualization is used to enter the abdominal cavity 3 inches above the umbilicus. Four additional ports are placed. The gastrohepatic ligament is opened widely using Harmonic shears. The dissection then is carried up and down the right and left crura and into the mediastinum for adequate mobilization of the esophagus into the peritoneal cavity. Care is taken to identify and preserve both the anterior and posterior vagus nerves throughout the procedure. The short gastric vessels are divided, freeing the fundus, and the gastroesophageal fat pad is removed.\nHook cautery on a low wattage is used to divide the longitudinal and then circular fibers of the esophagus, completing a myotomy approximately 10cm up the esophagus and 4cm down onto the anterior gastric wall. To confirm an adequate myotomy, intraoperative endoscopy is undertaken. An adequate myotomy is confirmed when the endoscope passes easily into the stomach, the gastroesophageal junction opens easily with endoscopic air insufflation, and transillumination of the myotomized segment confirms muscle division well above and below the Z-line. A partial fundoplication was performed in the majority of the patients.\nThe laparoscopic Heller myotomy is performed using a 5-port technique, as previously described.7 Direct visualization is used to enter the abdominal cavity 3 inches above the umbilicus. Four additional ports are placed. The gastrohepatic ligament is opened widely using Harmonic shears. The dissection then is carried up and down the right and left crura and into the mediastinum for adequate mobilization of the esophagus into the peritoneal cavity. 
Care is taken to identify and preserve both the anterior and posterior vagus nerves throughout the procedure. The short gastric vessels are divided, freeing the fundus, and the gastroesophageal fat pad is removed.\nHook cautery on a low wattage is used to divide the longitudinal and then circular fibers of the esophagus, completing a myotomy approximately 10cm up the esophagus and 4cm down onto the anterior gastric wall. To confirm an adequate myotomy, intraoperative endoscopy is undertaken. An adequate myotomy is confirmed when the endoscope passes easily into the stomach, the gastroesophageal junction opens easily with endoscopic air insufflation, and transillumination of the myotomized segment confirms muscle division well above and below the Z-line. A partial fundoplication was performed in the majority of the patients.\n[SUBTITLE] Statistical Analysis [SUBSECTION] Data were maintained on an Excel spreadsheet (Microsoft, Redmond, WA, USA). Statistical analysis was performed using SPSS software (Version 10; SPSS, Inc, Chicago, IL) and included x2 and Student t tests. Data are presented as means and percentages for categorical data, means, and standard deviations for continuous data. P<0.05 was used to determine statistical significance.\nData were maintained on an Excel spreadsheet (Microsoft, Redmond, WA, USA). Statistical analysis was performed using SPSS software (Version 10; SPSS, Inc, Chicago, IL) and included x2 and Student t tests. Data are presented as means and percentages for categorical data, means, and standard deviations for continuous data. P<0.05 was used to determine statistical significance.", "The laparoscopic Heller myotomy is performed using a 5-port technique, as previously described.7 Direct visualization is used to enter the abdominal cavity 3 inches above the umbilicus. Four additional ports are placed. The gastrohepatic ligament is opened widely using Harmonic shears. The dissection then is carried up and down the right and left crura and into the mediastinum for adequate mobilization of the esophagus into the peritoneal cavity. Care is taken to identify and preserve both the anterior and posterior vagus nerves throughout the procedure. The short gastric vessels are divided, freeing the fundus, and the gastroesophageal fat pad is removed.\nHook cautery on a low wattage is used to divide the longitudinal and then circular fibers of the esophagus, completing a myotomy approximately 10cm up the esophagus and 4cm down onto the anterior gastric wall. To confirm an adequate myotomy, intraoperative endoscopy is undertaken. An adequate myotomy is confirmed when the endoscope passes easily into the stomach, the gastroesophageal junction opens easily with endoscopic air insufflation, and transillumination of the myotomized segment confirms muscle division well above and below the Z-line. A partial fundoplication was performed in the majority of the patients.", "Data were maintained on an Excel spreadsheet (Microsoft, Redmond, WA, USA). Statistical analysis was performed using SPSS software (Version 10; SPSS, Inc, Chicago, IL) and included x2 and Student t tests. Data are presented as means and percentages for categorical data, means, and standard deviations for continuous data. P<0.05 was used to determine statistical significance.", "Fifty-one patients, 29 women and 22 men with a mean age of 73.14 years (range 65 to 89) underwent laparoscopic Heller myotomy at our institution. Twenty-eight patients (55%) had undergone prior therapy (P=0.075). 
Eight patients (29%) received prior pneumatic dilation, 7 (25%) received prior Botox, and 13 (46%) had both, with a mean period of time between this strategy and the laparoscopic myotomy averaging 12 months (range, 18 to 36). Two (7%) patients had undergone previous myotomies more than 40 years earlier with no improvement, and both patients had been treated with multiple pneumatic dilations and Botox injections (Table 1).\nDemographic and Clinical of the Study Population\nAll operations were begun laparoscopically with one conversion to open myotomy. The decision for conversion was independent of the myotomy. In addition to an esophageal myotomy, this patient was having a malpositioned Nissen fundoplication taken down. The surgeon's choice to convert to open was based on the discovery of dense scar tissue that prevented the release of the posterior portion of the wrap from the esophagus. The median hospital stay was 3 days (range, 1 to 26). Complications occurred in 2 (3.8%) patients; 1 patient had an intraoperative gastric perforation, which was recognized intraoperatively and immediately repaired, and 1 patient with postoperative atelectasis that required discharge on supplemental oxygen. There was no perioperative mortality.\nForty-eight patients (92.3%) had a partial fundoplication procedure. Thirteen (27%) received a Dor fundoplication, while 35 (73%) received Toupet fundoplication. The judgment not to complete a fundoplication was based on the finding of dense scarring from a previous fundoplication or complicated anatomy. There was no difference in surgical outcome or patient satisfaction in those patients who had a Dor or Toupet fundoplication (Table 2).\nPerioperative Outcomes\nOf the 42 patients with documented follow-up (mean time 42 months, range 24 to 53), all claimed overall symptom improvement, although a minority had identifiable foods that they restricted. The most common limitations involved solid foods; meat and hot dogs were specifically noted, which caused significant “sticking.” Despite these notations, each patient claimed overall symptom improvement. Eleven patients (22%) required additional therapy. Five patients (46%) underwent additional pneumatic dilations, 4 (36%) received pneumatic dilation and Botox injections, one (9%) patient received Botox for recurrent dysphagia and at a mean of 30 months (range, 6 to 53) following the surgery. One patient (9%) underwent further surgical intervention, receiving an esophagectomy 4 years later. This patient had an extremely complicated history of achalasia since adolescence with severe progression of his symptoms. Of those 11 patients, 8 (73%) had undergone therapy before the laparoscopic fundoplication as well (P<0.001) (Table 3).\nPostoperative Outcomes", "Achalasia is a chronic disorder of the esophagus that significantly impacts the quality of life of the patients. Because the cause remains elusive, no specific therapy is available for managing the underlying disease process. Several available therapies have been developed to alleviate the symptoms of the disease. However, none of the treatment options re-establish normal muscle activity of the esophageal body and lower esophageal sphincter (LES). Instead, all relieve the functional obstruction caused by the failure of LES to relax upon deglutition. 
Most treatment options are tailored to the patient's overall health status and underlying comorbidities.\nOccurring in 1 of 100 000 people, achalasia has a bimodal distribution, with a smaller peak in persons between 25 years and 40 years of age, and a larger peak occurring in the seventh decade. Significant differences in the clinical presentation between these 2 populations have been reported.1,3–5,7,13,14 Older patients have been shown to experience significantly less frequent dysphagia, regurgitation, and choking episodes premyotomy.14–17 Younger patients have more severe symptoms, or less tolerance of their symptoms, regardless of duration.15,17 Older patients tend to present with more complaints of heartburn than younger patients do.16\nLasch15 and colleagues performed a study with healthy volunteers, which suggested that elderly subjects have a reduced sensitivity to esophageal balloon distention compared with younger subjects. Moreover, acid sensitivity of the inflamed and uninflamed esophageal mucosa has been shown to be largely dependent on the patient's age.16 Because aging notably decreases the frequency of episodic chest pain, some patients will even lose this symptom over a period of several years.17 Rakita et al3 examined a cohort of 262 patients (142 men and 120 women), with an average age of 49 years, who had undergone laparoscopic Heller myotomy. Results supported the previous findings that older patients were more likely to have a longer duration and less severe symptoms before myotomy consistent with a more indolent course of disease. Fortunately, postmyotomy results indicated that neither age nor duration of symptoms influenced long-term patient results3 and we could confirm that in our population (Table 4). Whether distinct variants of achalasia exist in the 2 age distributions or whether these differences merely represent a spectrum of the same disease remains to be elucidated.\nPatients That Require Additional Therapy\nThe increased proficiency of surgeons in minimally invasive techniques over the past 15 years has resulted in the laparoscopic modified Heller myotomy becoming the gold-standard treatment for achalasia in younger individuals (<50 years old).7–10 However, some individuals suggest that older patients should have a different approach to treatment, and surgery should not be offered as the first line of therapy. Several series favor botulinum toxin injections or pneumatic balloon dilation (PD), but none have concluded a definitive treatment approach.18–23 Most suggest that specific treatments work better under certain circumstances such as the early stages of the disease.19,21 However, these nonsurgical options usually have to be repeated to achieve long-term effects. As revealed in our study, most of our patients had undergone nonsurgical treatment without satisfactory long-term results before being referred for surgery.\nPneumatic dilation is recommended by many, because it is associated with an initial success rate of 70%. However, approximately 40% of patients experience symptom recurrence with extended follow-up in some series.18–20 Csendes and colleagues20 found better long-term success in surgical patients compared with those treated by balloon dilatation alone. Farhoomand18 found a 37% recurrence of symptoms within 3 months in patients treated initially with a 3.0-cm balloon. 
Moreover Karamanolis et al19 found that even with a clinical remission for more than 15 years after the initial pneumatic dilation in 51.4% of their patients, the long-term success rate dropped progressively with time, the need for additional esophageal balloon dilation increased, as well as the risk of perforation related to each procedure, and the symptoms were less likely to abate. Our experience indicates that pneumatic dilations offer relief; however, they often require repeat dilations, each with decreasing efficacy.\nBotox is usually reserved for physiologically compromised individuals who cannot undergo PD or minimally invasive myotomy (MIM). Vaezi et al,21 in a randomized trial comparing Botox and PD, found that pneumatic dilatation resulted in a significantly (P=0.02) higher cumulative remission rate. At 12 months, 14/20 (70%) pneumatic dilatation and 7/22 (32%) Botox-treated patients were in symptomatic remission (P=0.017). PD resulted in significant (P<0.001) reduction in symptom scores, lower esophageal sphincter pressure measurements, esophageal barium column heights, and esophageal diameters. Botox produced a significant reduction in symptom scores (P<0.001), but no reduction in objective parameters. Failure rates were similar initially, but failure over time was significantly (P=0.01) higher after Botox (50%) than after pneumatic dilatation (7%). Zaninotto et al22 published the results of a randomized trial comparing 2 Botox injections 1 month apart (100 units each) with laparoscopic Heller myotomy and fundoplication. There were 40 patients in each group, no mortality in either group, and only 1 minor complication in the surgery group. At 2-year follow-up, nearly 66% of the Botox group was again symptomatic compared with only 13.5% of the surgery group. An initial resistance to Botox, caused by antibody formation, is present in up to 26% of patients and likely contributes to the method's high primary failure rate. Moreover, in patients undergoing Botox injection, intramural fibrosis has been reported that could interfere with subsequent surgical treatment. Neubrand23 reported a 70% success rate with Botox injections. Unfortunately this result only lasts for 6 months to 9 months and only half of them benefit for more than 1 year. He also found that younger patients (<55 years) usually have a higher LES pressure than older patients, and they did not seem to benefit from botulinum toxin injection.\nFew publications of outcomes of MIM specifically in the older population have been published. Kilic et al13 performed MIM on 57 patients 70 years of age or older. This group represented 25% of all achalasia patients on their service. Symptom improvement was achieved in the vast majority as assessed by dysphagia score (96.5%) as was freedom from further surgical intervention (93.0%) at a mean follow-up of approximately 2 years. The complication rate of 19.3% was managed without significant morbidity and nonperioperative mortality. The authors state that one of the best advantages of using MIM for older achalasia patients is the avoidance of repeated procedures, because only 7% of their patient's required further intervention. Similar findings were reported by Kala et al24 in the 8-year follow-up of 115 patients, with a senior subgroup of 26 (average 69.7). Postoperative decrease of LES tone showed good motility results of Heller myotomy, and 24-hour pH-metry revealed good antireflux effects of Toupet fundoplication. 
In our study, all the patients with a completed follow-up of 32 months claimed overall symptom improvement, and only 11 patients required further interventions.\nThe influence of prior nonsurgical therapy in the laparoscopic Heller myotomy outcome is still controversial. In an analysis of 200 patients undergoing myotomy, predictors of failure were prior therapy, duration of the symptoms, and sigmoid esophagus.25 In our study, prior therapy and preoperative LESP, mm Hg had statistical significance between patients who required additional therapy and those who did not (P<0.01 and P=0.004, respectively). Smith et al12 in their study of 209 patients undergoing Heller myotomy for achalasia found that intraoperative complications were more common in patients with previous therapies and postoperative complications like primarily severe dysphagia or pulmonary complications were more common after endoscopic treatment (10.4% versus 5.4%), also myotomy failure was higher in these patients (19.5% versus 10.1%). On the other hand, Ferulano and colleagues2 assert that age and previous treatment do not influence the outcome. They did not find evidence that repeated dilations render surgery more difficult, even if mucosal tears had occurred in their patients who had undergone previous treatments. While prior therapy may make the actual surgery more demanding, our experience indicates that the long-term results remain unaffected.\nEleven of our 51 patients had additional therapy after surgical treatment. Eight of those 11 (73%) had also received preoperative therapy. Twenty-three patients underwent a laparoscopic Heller myotomy as the first-line treatment with only 3 (13%) requiring further therapy. Our data suggest that although preoperative therapy does not negatively affect the outcome of the Heller myotomy, it makes a patient more susceptible to additional therapy postoperatively. Schuchert25 concluded in his 200-patient study that prior endoscopic therapy was associated with a higher risk of failure after myotomy as well as reoperation rate. Surgery as a first-line treatment is therefore advantageous in that it may prevent additional procedures, which is also potentially advantageous to the elderly population. In addition, patients receiving pneumatic dilations often seek surgery eventually. In the event of a failed myotomy, re-do myotomy is generally considered a formidable task, because of the adhesions and dense fibrosis that destroy the planes within the LES.7 However, with the continually progressing techniques and improvements in advanced laparoscopy, re-do myotomy has also been shown to be safe and effective.25\nThe majority of our patients had a partial fundoplication in addition to the myotomy. Other authors have demonstrated in randomized trials that the addition of a partial fundoplication decreased the incidence of postoperative gastroesophageal reflux from 46.6% with Heller alone to 9.1% with Heller/Dor, with no significant change in dysphagia.26,27 Twenty-seven percent of our patients undergoing fundoplication received a Dor fundoplication, while 73% received a Toupet fundoplication. Our study showed favorable results with the addition of fundoplication, regardless of the technique. Follow-up records report overall improvement in deglutition symptoms, with a handful of patients citing incidences of dysphagia with select foods. No patient complained of reflux at the time of follow-up, suggesting that both Dor and Toupet are effective fundoplication procedures. 
Ultimately, surgeon experience and preferences dictated procedure choice.\nAll patients reported an improvement in symptoms at follow-up. A minority of patients noted specific foods or incidences that caused dysphagia. However, relative to initial symptoms, all patients claimed improvement. Our data are consistent with that of other studies that also indicate an overwhelming degree of symptom improvement.3,9,11,13,25 Heller myotomy provides high patient satisfaction and significant symptom improvement, both of which play critical roles in determining a treatment.", "Our data suggest that laparoscopic Heller myotomy should be offered as a first-line treatment for elderly patients with achalasia who are fit for surgery. The procedure has proven to be safe and effective in this population. Prior endoscopic therapy does not appear to adversely effect postoperative symptom improvement, but may help predict those patients who might require further therapy postoperatively." ]
[ null, "materials|methods", null, null, "results", "discussion", "conclusions" ]
[ "Achalasia", "Laparoscopic", "Myotomy" ]
Prophylactic laparoscopic gastrectomy for hereditary diffuse gastric cancer: a case series in a single family.
21333186
Hereditary diffuse gastric carcinomas (HDGCs) are particularly troubling because of autosomal dominant inheritance, high penetrance, early age of onset, and a lack of effective treatment once patients become symptomatic. HDGC is further complicated by the difficulty of effective screening. Gastrectomy provides definitive treatment for CDH1 mutation-positive patients. Attempting to minimize the morbidity and mortality of this procedure via a laparoscopic approach is appropriate.
BACKGROUND
Six consanguineous patients, 21 to 51 years of age, were identified as carriers of the CDH1 gene mutation. In all patients, the gastric mucosa was normal on endoscopic appearance and biopsy. After appropriate multispecialty counseling, all patients elected to undergo a laparoscopic total gastrectomy. Demographics, genealogy, operative approach, outcomes, and pathology were reviewed.
METHODS
All gastrectomies were completed using a laparoscopic approach. Gross examination of the resected stomachs was unremarkable. Histological examination demonstrated multiple foci of invasive signet ring adenocarcinoma in all patients. There were no anastomotic leaks; there was one small bowel obstruction requiring reoperation and one esophageal stricture requiring dilation.
RESULTS
This series demonstrates the utility and safety of the laparoscopic approach for prophylactic total gastrectomy in carriers of the CDH1 gene mutation. It also highlights that patients with CDH1 mutations may be more likely to undergo gastrectomy if they are offered the lower-risk laparoscopic approach.
CONCLUSIONS
[ "Adult", "Family", "Female", "Follow-Up Studies", "Gastrectomy", "Genetic Predisposition to Disease", "Humans", "Male", "Middle Aged", "Pedigree", "Stomach Neoplasms", "Treatment Outcome", "Young Adult" ]
3041029
null
null
METHODS
This is a case series of 6 consanguineous patients identified as carriers of the CDH1 gene mutation. After appropriate multispecialty counseling, all patients elected to undergo a laparoscopic total gastrectomy. After Institutional Review Board approval, demographics, genealogy, operative approach, outcomes, and pathology were reviewed.
RESULTS
[SUBTITLE] Demographics [SUBSECTION] The series comprised 6 patients, 3 males and 3 females. The age range was 21 to 51 years old with an average of 38.2 years. All patients had their surgeries performed at the Karmanos Cancer Institute of Detroit, by a team of minimally invasive surgeons. All of the patients were residents of the state of Michigan at the time of their surgeries. The series comprised 6 patients, 3 males and 3 females. The age range was 21 to 51 years old with an average of 38.2 years. All patients had their surgeries performed at the Karmanos Cancer Institute of Detroit, by a team of minimally invasive surgeons. All of the patients were residents of the state of Michigan at the time of their surgeries. [SUBTITLE] Genealogy [SUBSECTION] The patients were suspected of having hereditary gastric cancer when 2 relatives died of gastric cancer in their early thirties. The disease affected 2 separate generations of the family with the first affected family member (individual II-5, Figure 1) dying of gastric cancer at age 32. Individual II-5 had 5 siblings, 4 of whom were alive and could be tested. Three of the 4 agreed to genetic testing and were found to carry a CDH1 mutation. Individual II-4 was not tested, and her CDH1 status is currently unknown. She has 2 young girls, ages 11 and 8, who also have not been tested. Several children of the 3 CDH1 positive siblings (II-1, II-2, II-6) have been tested and have been found to carry the CDH1 gene (III-4, III-9, III-10). The CDH1 status of the remaining family members is unknown. Individual III-1 died in her early thirties, and although her CDH1 status was unknown, given her age, disease course, and the presence of the mutation in her mother, it is highly likely that she was CDH1 positive. Of the family members known to have undergone genetic testing, 6 of 6 have tested positive for the CDH1 mutation. It is presumed that the 2 who died prior to testing indeed also carried the gene defect. Family genealogy of hereditary gastric cancer. The patients were suspected of having hereditary gastric cancer when 2 relatives died of gastric cancer in their early thirties. The disease affected 2 separate generations of the family with the first affected family member (individual II-5, Figure 1) dying of gastric cancer at age 32. Individual II-5 had 5 siblings, 4 of whom were alive and could be tested. Three of the 4 agreed to genetic testing and were found to carry a CDH1 mutation. Individual II-4 was not tested, and her CDH1 status is currently unknown. She has 2 young girls, ages 11 and 8, who also have not been tested. Several children of the 3 CDH1 positive siblings (II-1, II-2, II-6) have been tested and have been found to carry the CDH1 gene (III-4, III-9, III-10). The CDH1 status of the remaining family members is unknown. Individual III-1 died in her early thirties, and although her CDH1 status was unknown, given her age, disease course, and the presence of the mutation in her mother, it is highly likely that she was CDH1 positive. Of the family members known to have undergone genetic testing, 6 of 6 have tested positive for the CDH1 mutation. It is presumed that the 2 who died prior to testing indeed also carried the gene defect. Family genealogy of hereditary gastric cancer. [SUBTITLE] Surgery [SUBSECTION] All patients were asymptomatic at the time of their testing and surgery and had no abnormal abdominal findings on physical examination. All patients had genetic counseling prior to testing and surgery. 
All patients were screened endoscopically and were free of any gross gastric disease. Random gastric biopsies were performed in all cases. The biopsies were unremarkable. All patients underwent a laparoscopic Roux-en-Y esophagojejunostomy. In all cases, the Roux limb was brought up to the esophagus in a retrocolic orientation. The esophageal anastomosis was performed with an Orvil (25-mm, 3.5-mm Covidien, Norwalk, CT) that was passed transorally. The jejunojejunostomy was performed with a linear stapler (EndoUniversal GIA, Covidien). All hernia-site defects were closed including Peterson's defect, the jejunal mesentery, and the mesocolic window. A drain was left near the site of the esophagojejunostomy and was removed before the patient was discharge from hospital. No intraoperative endoscopy was used to test the integrity of the anastomosis, and no radiologic leak tests were performed postoperatively. All cases were done using 5 ports, 4 of which were 5-mm ports (Figure 2). A fifth 12-mm port placed in the left anterior axillary line at the costal margin was used as the stapling port. This incision was extended at the end of the case to extract the stomach via an endocatch appliance. Trocar location for laparoscopic gastrectomy. Operative time ranged from 287 minutes to 372 minutes with an average time of 292 minutes. The mean length of stay (LOS) was 7.8 days (median 5 days, range 3 to 23). Postoperatively, the patients were placed on regular surgical wards and advanced in diet starting with clear liquids. All patients were seen back in the clinic and were found to be doing well. One patient developed a stricture at the esophageal anastomosis that was treated successfully by endoscopic balloon dilatation. One patient developed nausea and vomiting on postoperative day 5 and was subsequently diagnosed with a small bowel obstruction secondary to adhesions around the mesocolic window. Operative repair, including lysis of adhesions and revision of the mesocolic window, was completed on postoperative day 10. The jejunojejunostomy anastomosis appeared healthy and was without leaks at the time of reoperation. Subsequently, the patient developed a hydropneumothorax, drained once by ultrasound-guided aspiration and once by CT-guided aspiration. The remainder of the hospital course was uncomplicated, and this patient was discharged on hospital day 23. All patients were asymptomatic at the time of their testing and surgery and had no abnormal abdominal findings on physical examination. All patients had genetic counseling prior to testing and surgery. All patients were screened endoscopically and were free of any gross gastric disease. Random gastric biopsies were performed in all cases. The biopsies were unremarkable. All patients underwent a laparoscopic Roux-en-Y esophagojejunostomy. In all cases, the Roux limb was brought up to the esophagus in a retrocolic orientation. The esophageal anastomosis was performed with an Orvil (25-mm, 3.5-mm Covidien, Norwalk, CT) that was passed transorally. The jejunojejunostomy was performed with a linear stapler (EndoUniversal GIA, Covidien). All hernia-site defects were closed including Peterson's defect, the jejunal mesentery, and the mesocolic window. A drain was left near the site of the esophagojejunostomy and was removed before the patient was discharge from hospital. No intraoperative endoscopy was used to test the integrity of the anastomosis, and no radiologic leak tests were performed postoperatively. 
All cases were done using 5 ports, 4 of which were 5-mm ports (Figure 2). A fifth 12-mm port placed in the left anterior axillary line at the costal margin was used as the stapling port. This incision was extended at the end of the case to extract the stomach via an endocatch appliance. Trocar location for laparoscopic gastrectomy. Operative time ranged from 287 minutes to 372 minutes with an average time of 292 minutes. The mean length of stay (LOS) was 7.8 days (median 5 days, range 3 to 23). Postoperatively, the patients were placed on regular surgical wards and advanced in diet starting with clear liquids. All patients were seen back in the clinic and were found to be doing well. One patient developed a stricture at the esophageal anastomosis that was treated successfully by endoscopic balloon dilatation. One patient developed nausea and vomiting on postoperative day 5 and was subsequently diagnosed with a small bowel obstruction secondary to adhesions around the mesocolic window. Operative repair, including lysis of adhesions and revision of the mesocolic window, was completed on postoperative day 10. The jejunojejunostomy anastomosis appeared healthy and was without leaks at the time of reoperation. Subsequently, the patient developed a hydropneumothorax, drained once by ultrasound-guided aspiration and once by CT-guided aspiration. The remainder of the hospital course was uncomplicated, and this patient was discharged on hospital day 23. [SUBTITLE] Pathology [SUBSECTION] All of the stomachs were examined after removal from the body and were found to be grossly normal. Microscopic examination of the specimens showed that each had multiple microscopic foci of invasive signet ring cell carcinoma. The mean number of foci was 8.2 (median 4, range 2 to 19). The number of foci was reported for 5 of 6 patients; the sixth patient was noted to have multiple foci but no count was provided by the pathologist. All of the foci of invasive carcinoma were confined to the lamina propria, and the largest focus noted was 0.5cm. All of the lymph nodes were negative. The mean lymph node harvest was 12.3 (median 10.5, range 5 to 25). All patients were staged as pT1N0MX. All of the stomachs were examined after removal from the body and were found to be grossly normal. Microscopic examination of the specimens showed that each had multiple microscopic foci of invasive signet ring cell carcinoma. The mean number of foci was 8.2 (median 4, range 2 to 19). The number of foci was reported for 5 of 6 patients; the sixth patient was noted to have multiple foci but no count was provided by the pathologist. All of the foci of invasive carcinoma were confined to the lamina propria, and the largest focus noted was 0.5cm. All of the lymph nodes were negative. The mean lymph node harvest was 12.3 (median 10.5, range 5 to 25). All patients were staged as pT1N0MX.
CONCLUSION
Minimally invasive surgery is playing an increasing role in the management of disease. This is highlighted in our case series, where we demonstrate the utility and safety of the laparoscopic approach to prophylactic total gastrectomy in a family with HDGC. All of the patients in this series treated with gastrectomy demonstrated microscopic foci of invasive adenocarcinoma with no evidence of local or distant spread. Thus, these surgeries were curative, rather than prophylactic. While current screening techniques are inadequate to make the distinction between prophylactic and curative surgery for these early-stage patients, pre- and perioperative patient counseling should highlight the realities inherent in the data. Laparoscopic total gastrectomy not only provides a cure, but also satisfies oncologic staging requirements and provides for quicker recovery and decreased morbidity. We feel it is well suited for asymptomatic CDH1-positive patients in HDGC families. This recommendation assumes screening by family history, genetic counseling, consultation with a multispecialty team, and the caveat that the surgery should be performed only by a surgeon experienced in laparoscopic total gastrectomy. Moreover, this series demonstrates that gastrectomies can be performed successfully using a laparoscopic approach, minimizing morbidity compared with open gastrectomy. We believe that the laparoscopic approach should be offered as first-line therapy to CDH1-positive patients.
[ "INTRODUCTION", "Demographics", "Genealogy", "Surgery", "Pathology" ]
[ "An estimated 21,130 new cases of gastric cancer occurred in the United States in 2009, and this represents a decrease in incidence.1 This decline was accompanied by a decrease in mortality; however, it is unlikely that familial gastric cancers share similar fortunes. Hereditary Diffuse Gastric Cancers (HDGCs) represent 1% to 3% of all gastric cancers, and of those, up to 53% can be attributed to mutations in the CDH1 gene.2–4 These familial cancers demonstrate autosomal dominant heritance and high penetrance. The 5-year survival is <20% if the diagnosis is made after the patient is symptomatic.5 This is compounded by the lack of reliable screening for early stage resectable tumors.6\nHeuristics like the number and age of onset for family clusters of diffuse gastric cancer can identify those patients at highest risk, and testing of affected individuals and relatives can identify the presence of CDH1 mutations. After appropriate counseling, asymptomatic members of that family can then be screened for a CDH1 mutation. For patients with a CDH1 mutation of unknown significance, or in familial gastric cancer without an identifiable CDH1 mutation, the literature does not support any recommendations at this time. For patients with an identified CDH1 mutation, prophylactic surgery is a logical solution.\nHowever, it is estimated that the risk of fatal HDGC exceeds the approximate 1% mortality risk of a gastrectomy for patients over 20 years of age.6 Physicians should present to patients the strong evidence supporting prophylactic gastrectomies and help them understand the range of surgical and nonsurgical options available. Knowledge that a laparoscopic gastrectomy can minimize the morbidity and mortality associated with the surgery may help ease the decision-making process.", "The series comprised 6 patients, 3 males and 3 females. The age range was 21 to 51 years old with an average of 38.2 years. All patients had their surgeries performed at the Karmanos Cancer Institute of Detroit, by a team of minimally invasive surgeons. All of the patients were residents of the state of Michigan at the time of their surgeries.", "The patients were suspected of having hereditary gastric cancer when 2 relatives died of gastric cancer in their early thirties. The disease affected 2 separate generations of the family with the first affected family member (individual II-5, Figure 1) dying of gastric cancer at age 32. Individual II-5 had 5 siblings, 4 of whom were alive and could be tested. Three of the 4 agreed to genetic testing and were found to carry a CDH1 mutation. Individual II-4 was not tested, and her CDH1 status is currently unknown. She has 2 young girls, ages 11 and 8, who also have not been tested. Several children of the 3 CDH1 positive siblings (II-1, II-2, II-6) have been tested and have been found to carry the CDH1 gene (III-4, III-9, III-10). The CDH1 status of the remaining family members is unknown. Individual III-1 died in her early thirties, and although her CDH1 status was unknown, given her age, disease course, and the presence of the mutation in her mother, it is highly likely that she was CDH1 positive. Of the family members known to have undergone genetic testing, 6 of 6 have tested positive for the CDH1 mutation. It is presumed that the 2 who died prior to testing indeed also carried the gene defect.\nFamily genealogy of hereditary gastric cancer.", "All patients were asymptomatic at the time of their testing and surgery and had no abnormal abdominal findings on physical examination. 
All patients had genetic counseling prior to testing and surgery. All patients were screened endoscopically and were free of any gross gastric disease. Random gastric biopsies were performed in all cases. The biopsies were unremarkable.\nAll patients underwent a laparoscopic Roux-en-Y esophagojejunostomy. In all cases, the Roux limb was brought up to the esophagus in a retrocolic orientation. The esophageal anastomosis was performed with an Orvil (25-mm, 3.5-mm Covidien, Norwalk, CT) that was passed transorally. The jejunojejunostomy was performed with a linear stapler (EndoUniversal GIA, Covidien). All hernia-site defects were closed including Peterson's defect, the jejunal mesentery, and the mesocolic window. A drain was left near the site of the esophagojejunostomy and was removed before the patient was discharge from hospital. No intraoperative endoscopy was used to test the integrity of the anastomosis, and no radiologic leak tests were performed postoperatively. All cases were done using 5 ports, 4 of which were 5-mm ports (Figure 2). A fifth 12-mm port placed in the left anterior axillary line at the costal margin was used as the stapling port. This incision was extended at the end of the case to extract the stomach via an endocatch appliance.\nTrocar location for laparoscopic gastrectomy.\nOperative time ranged from 287 minutes to 372 minutes with an average time of 292 minutes. The mean length of stay (LOS) was 7.8 days (median 5 days, range 3 to 23). Postoperatively, the patients were placed on regular surgical wards and advanced in diet starting with clear liquids. All patients were seen back in the clinic and were found to be doing well.\nOne patient developed a stricture at the esophageal anastomosis that was treated successfully by endoscopic balloon dilatation. One patient developed nausea and vomiting on postoperative day 5 and was subsequently diagnosed with a small bowel obstruction secondary to adhesions around the mesocolic window. Operative repair, including lysis of adhesions and revision of the mesocolic window, was completed on postoperative day 10. The jejunojejunostomy anastomosis appeared healthy and was without leaks at the time of reoperation. Subsequently, the patient developed a hydropneumothorax, drained once by ultrasound-guided aspiration and once by CT-guided aspiration. The remainder of the hospital course was uncomplicated, and this patient was discharged on hospital day 23.", "All of the stomachs were examined after removal from the body and were found to be grossly normal. Microscopic examination of the specimens showed that each had multiple microscopic foci of invasive signet ring cell carcinoma. The mean number of foci was 8.2 (median 4, range 2 to 19). The number of foci was reported for 5 of 6 patients; the sixth patient was noted to have multiple foci but no count was provided by the pathologist. All of the foci of invasive carcinoma were confined to the lamina propria, and the largest focus noted was 0.5cm. All of the lymph nodes were negative. The mean lymph node harvest was 12.3 (median 10.5, range 5 to 25). All patients were staged as pT1N0MX." ]
[ null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "RESULTS", "Demographics", "Genealogy", "Surgery", "Pathology", "DISCUSSION", "CONCLUSION" ]
[ "An estimated 21,130 new cases of gastric cancer occurred in the United States in 2009, and this represents a decrease in incidence.1 This decline was accompanied by a decrease in mortality; however, it is unlikely that familial gastric cancers share similar fortunes. Hereditary Diffuse Gastric Cancers (HDGCs) represent 1% to 3% of all gastric cancers, and of those, up to 53% can be attributed to mutations in the CDH1 gene.2–4 These familial cancers demonstrate autosomal dominant heritance and high penetrance. The 5-year survival is <20% if the diagnosis is made after the patient is symptomatic.5 This is compounded by the lack of reliable screening for early stage resectable tumors.6\nHeuristics like the number and age of onset for family clusters of diffuse gastric cancer can identify those patients at highest risk, and testing of affected individuals and relatives can identify the presence of CDH1 mutations. After appropriate counseling, asymptomatic members of that family can then be screened for a CDH1 mutation. For patients with a CDH1 mutation of unknown significance, or in familial gastric cancer without an identifiable CDH1 mutation, the literature does not support any recommendations at this time. For patients with an identified CDH1 mutation, prophylactic surgery is a logical solution.\nHowever, it is estimated that the risk of fatal HDGC exceeds the approximate 1% mortality risk of a gastrectomy for patients over 20 years of age.6 Physicians should present to patients the strong evidence supporting prophylactic gastrectomies and help them understand the range of surgical and nonsurgical options available. Knowledge that a laparoscopic gastrectomy can minimize the morbidity and mortality associated with the surgery may help ease the decision-making process.", "This is a case series of 6 consanguineous patients identified as carriers of the CDH1 gene mutation. After appropriate multispecialty counseling, all patients elected to undergo a laparoscopic total gastrectomy. After Institutional Review Board approval, demographics, genealogy, operative approach, outcomes, and pathology were reviewed.", "[SUBTITLE] Demographics [SUBSECTION] The series comprised 6 patients, 3 males and 3 females. The age range was 21 to 51 years old with an average of 38.2 years. All patients had their surgeries performed at the Karmanos Cancer Institute of Detroit, by a team of minimally invasive surgeons. All of the patients were residents of the state of Michigan at the time of their surgeries.\nThe series comprised 6 patients, 3 males and 3 females. The age range was 21 to 51 years old with an average of 38.2 years. All patients had their surgeries performed at the Karmanos Cancer Institute of Detroit, by a team of minimally invasive surgeons. All of the patients were residents of the state of Michigan at the time of their surgeries.\n[SUBTITLE] Genealogy [SUBSECTION] The patients were suspected of having hereditary gastric cancer when 2 relatives died of gastric cancer in their early thirties. The disease affected 2 separate generations of the family with the first affected family member (individual II-5, Figure 1) dying of gastric cancer at age 32. Individual II-5 had 5 siblings, 4 of whom were alive and could be tested. Three of the 4 agreed to genetic testing and were found to carry a CDH1 mutation. Individual II-4 was not tested, and her CDH1 status is currently unknown. She has 2 young girls, ages 11 and 8, who also have not been tested. 
Several children of the 3 CDH1 positive siblings (II-1, II-2, II-6) have been tested and have been found to carry the CDH1 gene (III-4, III-9, III-10). The CDH1 status of the remaining family members is unknown. Individual III-1 died in her early thirties, and although her CDH1 status was unknown, given her age, disease course, and the presence of the mutation in her mother, it is highly likely that she was CDH1 positive. Of the family members known to have undergone genetic testing, 6 of 6 have tested positive for the CDH1 mutation. It is presumed that the 2 who died prior to testing indeed also carried the gene defect.\nFamily genealogy of hereditary gastric cancer.\nThe patients were suspected of having hereditary gastric cancer when 2 relatives died of gastric cancer in their early thirties. The disease affected 2 separate generations of the family with the first affected family member (individual II-5, Figure 1) dying of gastric cancer at age 32. Individual II-5 had 5 siblings, 4 of whom were alive and could be tested. Three of the 4 agreed to genetic testing and were found to carry a CDH1 mutation. Individual II-4 was not tested, and her CDH1 status is currently unknown. She has 2 young girls, ages 11 and 8, who also have not been tested. Several children of the 3 CDH1 positive siblings (II-1, II-2, II-6) have been tested and have been found to carry the CDH1 gene (III-4, III-9, III-10). The CDH1 status of the remaining family members is unknown. Individual III-1 died in her early thirties, and although her CDH1 status was unknown, given her age, disease course, and the presence of the mutation in her mother, it is highly likely that she was CDH1 positive. Of the family members known to have undergone genetic testing, 6 of 6 have tested positive for the CDH1 mutation. It is presumed that the 2 who died prior to testing indeed also carried the gene defect.\nFamily genealogy of hereditary gastric cancer.\n[SUBTITLE] Surgery [SUBSECTION] All patients were asymptomatic at the time of their testing and surgery and had no abnormal abdominal findings on physical examination. All patients had genetic counseling prior to testing and surgery. All patients were screened endoscopically and were free of any gross gastric disease. Random gastric biopsies were performed in all cases. The biopsies were unremarkable.\nAll patients underwent a laparoscopic Roux-en-Y esophagojejunostomy. In all cases, the Roux limb was brought up to the esophagus in a retrocolic orientation. The esophageal anastomosis was performed with an Orvil (25-mm, 3.5-mm Covidien, Norwalk, CT) that was passed transorally. The jejunojejunostomy was performed with a linear stapler (EndoUniversal GIA, Covidien). All hernia-site defects were closed including Peterson's defect, the jejunal mesentery, and the mesocolic window. A drain was left near the site of the esophagojejunostomy and was removed before the patient was discharge from hospital. No intraoperative endoscopy was used to test the integrity of the anastomosis, and no radiologic leak tests were performed postoperatively. All cases were done using 5 ports, 4 of which were 5-mm ports (Figure 2). A fifth 12-mm port placed in the left anterior axillary line at the costal margin was used as the stapling port. This incision was extended at the end of the case to extract the stomach via an endocatch appliance.\nTrocar location for laparoscopic gastrectomy.\nOperative time ranged from 287 minutes to 372 minutes with an average time of 292 minutes. 
The mean length of stay (LOS) was 7.8 days (median 5 days, range 3 to 23). Postoperatively, the patients were placed on regular surgical wards and advanced in diet starting with clear liquids. All patients were seen back in the clinic and were found to be doing well.\nOne patient developed a stricture at the esophageal anastomosis that was treated successfully by endoscopic balloon dilatation. One patient developed nausea and vomiting on postoperative day 5 and was subsequently diagnosed with a small bowel obstruction secondary to adhesions around the mesocolic window. Operative repair, including lysis of adhesions and revision of the mesocolic window, was completed on postoperative day 10. The jejunojejunostomy anastomosis appeared healthy and was without leaks at the time of reoperation. Subsequently, the patient developed a hydropneumothorax, drained once by ultrasound-guided aspiration and once by CT-guided aspiration. The remainder of the hospital course was uncomplicated, and this patient was discharged on hospital day 23.\nAll patients were asymptomatic at the time of their testing and surgery and had no abnormal abdominal findings on physical examination. All patients had genetic counseling prior to testing and surgery. All patients were screened endoscopically and were free of any gross gastric disease. Random gastric biopsies were performed in all cases. The biopsies were unremarkable.\nAll patients underwent a laparoscopic Roux-en-Y esophagojejunostomy. In all cases, the Roux limb was brought up to the esophagus in a retrocolic orientation. The esophageal anastomosis was performed with an Orvil (25-mm, 3.5-mm Covidien, Norwalk, CT) that was passed transorally. The jejunojejunostomy was performed with a linear stapler (EndoUniversal GIA, Covidien). All hernia-site defects were closed including Peterson's defect, the jejunal mesentery, and the mesocolic window. A drain was left near the site of the esophagojejunostomy and was removed before the patient was discharge from hospital. No intraoperative endoscopy was used to test the integrity of the anastomosis, and no radiologic leak tests were performed postoperatively. All cases were done using 5 ports, 4 of which were 5-mm ports (Figure 2). A fifth 12-mm port placed in the left anterior axillary line at the costal margin was used as the stapling port. This incision was extended at the end of the case to extract the stomach via an endocatch appliance.\nTrocar location for laparoscopic gastrectomy.\nOperative time ranged from 287 minutes to 372 minutes with an average time of 292 minutes. The mean length of stay (LOS) was 7.8 days (median 5 days, range 3 to 23). Postoperatively, the patients were placed on regular surgical wards and advanced in diet starting with clear liquids. All patients were seen back in the clinic and were found to be doing well.\nOne patient developed a stricture at the esophageal anastomosis that was treated successfully by endoscopic balloon dilatation. One patient developed nausea and vomiting on postoperative day 5 and was subsequently diagnosed with a small bowel obstruction secondary to adhesions around the mesocolic window. Operative repair, including lysis of adhesions and revision of the mesocolic window, was completed on postoperative day 10. The jejunojejunostomy anastomosis appeared healthy and was without leaks at the time of reoperation. Subsequently, the patient developed a hydropneumothorax, drained once by ultrasound-guided aspiration and once by CT-guided aspiration. 
The remainder of the hospital course was uncomplicated, and this patient was discharged on hospital day 23.\n[SUBTITLE] Pathology [SUBSECTION] All of the stomachs were examined after removal from the body and were found to be grossly normal. Microscopic examination of the specimens showed that each had multiple microscopic foci of invasive signet ring cell carcinoma. The mean number of foci was 8.2 (median 4, range 2 to 19). The number of foci was reported for 5 of 6 patients; the sixth patient was noted to have multiple foci but no count was provided by the pathologist. All of the foci of invasive carcinoma were confined to the lamina propria, and the largest focus noted was 0.5cm. All of the lymph nodes were negative. The mean lymph node harvest was 12.3 (median 10.5, range 5 to 25). All patients were staged as pT1N0MX.\nAll of the stomachs were examined after removal from the body and were found to be grossly normal. Microscopic examination of the specimens showed that each had multiple microscopic foci of invasive signet ring cell carcinoma. The mean number of foci was 8.2 (median 4, range 2 to 19). The number of foci was reported for 5 of 6 patients; the sixth patient was noted to have multiple foci but no count was provided by the pathologist. All of the foci of invasive carcinoma were confined to the lamina propria, and the largest focus noted was 0.5cm. All of the lymph nodes were negative. The mean lymph node harvest was 12.3 (median 10.5, range 5 to 25). All patients were staged as pT1N0MX.", "The series comprised 6 patients, 3 males and 3 females. The age range was 21 to 51 years old with an average of 38.2 years. All patients had their surgeries performed at the Karmanos Cancer Institute of Detroit, by a team of minimally invasive surgeons. All of the patients were residents of the state of Michigan at the time of their surgeries.", "The patients were suspected of having hereditary gastric cancer when 2 relatives died of gastric cancer in their early thirties. The disease affected 2 separate generations of the family with the first affected family member (individual II-5, Figure 1) dying of gastric cancer at age 32. Individual II-5 had 5 siblings, 4 of whom were alive and could be tested. Three of the 4 agreed to genetic testing and were found to carry a CDH1 mutation. Individual II-4 was not tested, and her CDH1 status is currently unknown. She has 2 young girls, ages 11 and 8, who also have not been tested. Several children of the 3 CDH1 positive siblings (II-1, II-2, II-6) have been tested and have been found to carry the CDH1 gene (III-4, III-9, III-10). The CDH1 status of the remaining family members is unknown. Individual III-1 died in her early thirties, and although her CDH1 status was unknown, given her age, disease course, and the presence of the mutation in her mother, it is highly likely that she was CDH1 positive. Of the family members known to have undergone genetic testing, 6 of 6 have tested positive for the CDH1 mutation. It is presumed that the 2 who died prior to testing indeed also carried the gene defect.\nFamily genealogy of hereditary gastric cancer.", "All patients were asymptomatic at the time of their testing and surgery and had no abnormal abdominal findings on physical examination. All patients had genetic counseling prior to testing and surgery. All patients were screened endoscopically and were free of any gross gastric disease. Random gastric biopsies were performed in all cases. 
The biopsies were unremarkable.\nAll patients underwent a laparoscopic Roux-en-Y esophagojejunostomy. In all cases, the Roux limb was brought up to the esophagus in a retrocolic orientation. The esophageal anastomosis was performed with an Orvil (25-mm, 3.5-mm Covidien, Norwalk, CT) that was passed transorally. The jejunojejunostomy was performed with a linear stapler (EndoUniversal GIA, Covidien). All hernia-site defects were closed including Peterson's defect, the jejunal mesentery, and the mesocolic window. A drain was left near the site of the esophagojejunostomy and was removed before the patient was discharge from hospital. No intraoperative endoscopy was used to test the integrity of the anastomosis, and no radiologic leak tests were performed postoperatively. All cases were done using 5 ports, 4 of which were 5-mm ports (Figure 2). A fifth 12-mm port placed in the left anterior axillary line at the costal margin was used as the stapling port. This incision was extended at the end of the case to extract the stomach via an endocatch appliance.\nTrocar location for laparoscopic gastrectomy.\nOperative time ranged from 287 minutes to 372 minutes with an average time of 292 minutes. The mean length of stay (LOS) was 7.8 days (median 5 days, range 3 to 23). Postoperatively, the patients were placed on regular surgical wards and advanced in diet starting with clear liquids. All patients were seen back in the clinic and were found to be doing well.\nOne patient developed a stricture at the esophageal anastomosis that was treated successfully by endoscopic balloon dilatation. One patient developed nausea and vomiting on postoperative day 5 and was subsequently diagnosed with a small bowel obstruction secondary to adhesions around the mesocolic window. Operative repair, including lysis of adhesions and revision of the mesocolic window, was completed on postoperative day 10. The jejunojejunostomy anastomosis appeared healthy and was without leaks at the time of reoperation. Subsequently, the patient developed a hydropneumothorax, drained once by ultrasound-guided aspiration and once by CT-guided aspiration. The remainder of the hospital course was uncomplicated, and this patient was discharged on hospital day 23.", "All of the stomachs were examined after removal from the body and were found to be grossly normal. Microscopic examination of the specimens showed that each had multiple microscopic foci of invasive signet ring cell carcinoma. The mean number of foci was 8.2 (median 4, range 2 to 19). The number of foci was reported for 5 of 6 patients; the sixth patient was noted to have multiple foci but no count was provided by the pathologist. All of the foci of invasive carcinoma were confined to the lamina propria, and the largest focus noted was 0.5cm. All of the lymph nodes were negative. The mean lymph node harvest was 12.3 (median 10.5, range 5 to 25). All patients were staged as pT1N0MX.", "Prophylactic gastrectomy in HDGC is a definitive treatment. Patients with a CDH1 mutation have a 70% to 80% risk of developing DGC over their lifetime, and the 5-year survival for symptomatic DGC is <20%. Genetic counseling and genetic testing are mandatory before offering a prophylactic total gastrectomy to any patient. Families suspected of possessing a CDH1 gene mutation should be screened by using the modified criteria as proposed by Suriano et al,7 and genetic testing can be offered to appropriate patients. 
In those found to carry a CDH1 mutation, prophylactic gastrectomy is advised.\nBecause of incomplete penetrance, some 20% to 30% of CDH1 mutation carriers will not develop DGC. Screening techniques, while contributing to the decline in overall gastric cancer mortality, are insufficient for early DGC, because of its tendency to underlie normal gastric mucosa. Two modalities, PET-CT and chromo-endoscopy have shown some potential for identifying early stage DGC, but both have limitations that prevent them from being implemented as an alternative to prophylactic gastrectomies.6\nTotal gastrectomy is not without complications. Overall, the 30-day mortality for total gastrectomy ranges from 3% to 6%.8 This is not surprising considering that gastrectomies are usually performed in older, sicker patients. In younger, healthier populations undergoing total gastrectomy, such as those with HDGC, morbidity and mortality has been estimated to be in the 1% to 2% range.9 One recent retrospective study of laparoscopic-assisted total gastrectomy patients (not specifically HDGC) showed no perioperative mortality among 131 patients and a postoperative morbidity rate of 19%.10 Postoperative complications included a range from ileus (2.3%) to anastomotic leak requiring reoperation (0.8%), with the most common being wound complications (5.3%). Even these results may overestimate perioperative morbidity for prophylactic gastrectomies, as complications were not reported by tumor stage, and 24 patients (18%) were TNM stage II or greater.\nLong-term morbidity after total gastrectomy includes alterations in eating habits, dumping syndrome, diarrhea and weight loss, and should be independent of surgery method. There is typically a 10% to 15% decrease in body weight, which is principally body fat. Dumping syndrome occurs in 20% to 30% of patients but tends to improve with time.11 Overall, laparoscopic gastrectomies have demonstrated decreased perioperative morbidity and mortality over open12; patients had early return of bowel function, tolerated oral intake early, and were discharged earlier than those undergoing open procedures. The patients in this study were on average started on clear liquid diets by postoperative day 5.8 (median 4, range 2 to 20) and had an average LOS of 7.8 days as previously noted. This is in comparison with open gastrectomies where the average length of stay ranged from 10 days to 18 days.13,14 Earlier discharge and less morbidity may be significant factors in patients' decision to have a laparoscopic prophylactic gastrectomy.\nThe patients in this series selected the surgeon for his prior work in performing a hand-assisted laparoscopic total gastrectomy, which was reported as the first laparoscopic-assisted gastrectomy for HDGC.15 There is at least one additional report of prophylactic laparoscopic gastrectomies for CDH1 mutation carriers.16 Most literature comments on the dilemma of incomplete penetrance when prophylactic gastrectomies are offered to CDH1 positive patients. What is less clear is whether the outcomes of previously reported prophylactic surgeries have been discussed. Over 92% of prophylactic surgeries reported in the literature have shown one or more foci of signet ring cell adenocarcinoma.17 This makes these surgeries curative, rather than prophylactic. Considering the expected 5-year survival difference, the distinction between curative (in T1N0M0) and prophylactic surgeries may be irrelevant, though no data have been published to support this conclusion. 
However, the semantics may help patients decide. Being cured of cancer is arguably a more powerful motivator for surgery than is prevention of cancer by that same surgery. That does raise the issue of oncologic staging. When performed as prophylaxis, there is little concern for oncologic staging, because there is no cancer to stage. When surgery is therapeutic, staging becomes an issue. In the past, laparoscopic total gastrectomy has been criticized, because doubts remain concerning its ability to satisfy oncologic staging criteria met during more conventional open surgery. In general, most CDH1 positive patients undergoing a therapeutic (nee prophylactic) gastrectomy have early carcinoma, either in-situ or confined to the lamina propria. Guidelines given by the International Gastric Cancer Linkage Consortium (IGCLC) indicate that it is essential to document the complete removal of gastric mucosa by histologically identifying esophageal and duodenal mucosa at the 2 ends of the surgical specimen.9 Additionally, the American Joint Committee on Cancer (AJCC) recommends that at least 15 lymph nodes be examined for staging purposes, but does allow for pN0 staging to be based on examination of resected nodes.18 Multiple studies of gastric cancer have indicated that for early stages of disease, the laparoscopic approach allows appropriate resection of gastric tissue and adequate harvest of D1 lymph nodes.19 Extension to D2 lymph nodes for early stage cancer presented additional risk without significantly altering the 5-year survival.20 There is no reason to believe these results cannot be extrapolated to early DGC.", "Minimally invasive surgery is playing an increasing role in the management of disease. This is highlighted in our case series, where we demonstrate the utility and safety of the laparoscopic approach toward prophylactic total gastrectomy in a family with HDGC. All of the patients in this series treated with gastrectomy demonstrated microscopic foci of invasive adenocarcinoma with no evidence of local or distant spread. Thus, these surgeries were curative, rather than prophylactic.\nWhile current screening techniques are inadequate to make the distinction between prophylactic and curative surgery for these early stage patients, pre- and perioperative patient counseling should highlight the realities inherent in the data. Laparoscopic total gastrectomy not only provides a cure, but also satisfies oncologic staging requirements and provides for quicker recovery and decreased morbidity. We feel it is well suited for asymptomatic CDH1 positive patients in HDGC families. This recommendation comes with screening by family history, genetic counseling, consultation with a multi-specialty team, and the recommendation that the surgery should only be performed by a surgeon experienced in performing laparoscopic total gastrectomies. Moreover, this series demonstrates that gastrectomies can be performed successfully utilizing a laparoscopic approach minimizing morbidity compared with open gastrectomies. We believe that the laparoscopic approach should be offered as first-line therapy to CDH1-positive patients." ]
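As a rough illustration of the counselling trade-off laid out in the discussion above, the figures quoted there (70% to 80% lifetime penetrance of DGC in CDH1 carriers, <20% five-year survival once the disease is symptomatic, an operative mortality of roughly 1% in young patients, and the 50% transmission probability of an autosomal dominant mutation) can be combined as in the sketch below. Treating the quoted five-year survival as a proxy for overall survival is an assumption made purely for illustration; the result is a back-of-the-envelope estimate, not a published risk figure.

```python
# Back-of-the-envelope counselling arithmetic using only the figures quoted above.
# Illustrative only; not a published risk model.
penetrance = (0.70, 0.80)    # lifetime risk of DGC for a CDH1 carrier
five_year_survival = 0.20    # upper bound once the patient is symptomatic
operative_mortality = 0.01   # approximate mortality of gastrectomy in young patients
transmission = 0.50          # chance that a child of a carrier inherits the mutation

# Approximate risk of dying of DGC for an unscreened, unoperated carrier
fatal_dgc_risk = tuple(p * (1 - five_year_survival) for p in penetrance)
print(f"Risk of fatal DGC for a carrier: {fatal_dgc_risk[0]:.0%}-{fatal_dgc_risk[1]:.0%}")

# Lifetime DGC risk for an untested child of a known carrier
child_risk = tuple(transmission * p for p in penetrance)
print(f"DGC risk for an untested first-degree relative: {child_risk[0]:.0%}-{child_risk[1]:.0%}")

print(f"Operative mortality, for comparison: {operative_mortality:.0%}")
# -> roughly 56%-64% and 35%-40%, respectively, versus about 1% operative mortality
```

Even at the conservative end of each range, the expected mortality from untreated HDGC is well over an order of magnitude above the operative risk, which is the quantitative core of the argument for prophylactic gastrectomy in confirmed carriers.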
[ null, "methods", "results", null, null, null, null, "discussion", "conclusions" ]
[ "Total laparoscopic gastrectomy" ]
Transitioning to single-incision laparoscopic inguinal herniorrhaphy.
21333187
Laparoendoscopic single-site surgery (LESS) offers cosmetic benefits and may represent further progress towards reducing the invasiveness of surgical interventions. We report our initial experience with LESS totally extraperitoneal (TEP) inguinal herniorrhaphy.
BACKGROUND
Beginning in March 2009, we transitioned from a multiport laparoscopic TEP (MLH) technique to a single-incision TEP (SITE) technique. The first 52 consecutive patients who underwent SITE at our institution were compared with the 52 patients who underwent MLH immediately prior.
MATERIALS AND METHODS
Of the first 52 patients undergoing SITE, there were no conversions to either open or multiport surgery. The mean operative time for the SITE cases did not differ significantly from that of MLH. Complications were equivalent between the 2 groups and included postoperative seroma and urinary retention.
RESULTS
Transitioning from MLH to SITE was readily accomplished without significantly altering operative time or morbidity.
CONCLUSIONS
[ "Adolescent", "Adult", "Female", "Follow-Up Studies", "Hernia, Inguinal", "Humans", "Laparoscopy", "Male", "Middle Aged", "Retrospective Studies", "Treatment Outcome", "Young Adult" ]
3041030
null
null
null
null
RESULTS
SITE was completed in all 52 patients with no conversions to multiport or open surgery. The mean age of the patients was 35.6 (range, 18 to 61) and mean BMI was 25.2 (range, 16.3 to 35.0); 92% were men; 90 (86.5%) of the total cases were performed on primary hernias and 15 on recurrent (14.4%). Forty-eight percent of the hernias were on the right side, and 31% were on the left. Twenty-one percent were bilateral. Seventy percent were indirect, 30% direct, and 4.9% pantaloon. No femoral hernias were identified. The 2 groups were not significantly different in demographics or type of hernia (Table 1). Demographic Data MLH=multiport laparoscopic totally extraperitoneal herniorrhaphy; SITE=single incision totally extraperitoneal inguinal herniorrhaphy; NS=not statistically significant. Mean multiport operative time was 48.2±10.8 minutes for unilateral hernia and 85.9±8.2 minutes for bilateral. Mean SITE operative time was 51.7±15.1 minutes for unilateral and 85.8±16.5 minutes for bilateral (Table 2). Perioperative, Postoperative, and Morbidity Data MLH = multiport laparoscopic totally extraperitoneal inguinal herniorrhaphy; SITE = single-incision totally extraperitoneal inguinal herniorrhaphy; NS=not statistically significant. Mean time to discharge from the ambulatory unit was 4.5 hours and not significantly different between the 2 groups. At home oral analgesia was used for a mean of 2.5 days, and the mean time to return to full ADL and work was 11.4 days. These parameters were not significantly different between the 2 groups (Table 2). All patients were discharged the same day as their surgical procedure, and all but one were seen at least once in the postoperative period. Only 71% (74/104) of the patients kept their one-month appointment. In this short-term follow-up, no recurrences were identified. Only minor complications were reported in our study group and included postoperative delayed return of bladder function and swelling of the hernia site secondary to a seroma/hematoma (Table 2). No major complications or deaths were identified in our study population.
CONCLUSION
Whether any LESS procedure in contrast to standard laparoscopy bestows any patient benefit other than improved cosmesis remains a matter of speculation and awaits larger trials and more sensitive evaluation tools.16 This study supports the theory that SITE is not significantly more difficult than MLH and with the added benefit of scarlessness, SITE may finally provide the impetus that launches laparoscopic herniorrhaphy past open IH and into the mainstream.
[ "INTRODUCTION", "Technique" ]
[ "Although originally described in 1992,1,2,3 laparoendoscopic single-site surgery (LESS) went largely unexplored until becoming reinvigorated by the NOTES revolution. Resurrected initially as a stepping stone to NOTES,4 LESS quickly began generating interest as a worthy innovation in and of itself. Single-incision surgery provides surgeons with the opportunity to offer scarlesss surgery to their patients today, without having to significantly change the laparoscopic paradigm or wait for tomorrow's innovations. No viscerotmoy is needed with its inherent closure and contamination problems, the instrumentation is readily available, no novel skills or multispecialty conglomerate is needed, and the procedures are readily coded and billed to insurance carriers. It is therefore little surprise that LESS has caught the attention of surgeons and patients alike and has lead to an exponential growth of centers and surgeons offering this option to their patients.5\nWith over 800 000 inguinal hernia repairs performed in the United States6 and 20 million performed worldwide annually,7,8 this type of hernia is a significant public health concern. Open, tension free, mesh repair has long been the gold standard for inguinal herniorrhaphy (IH). This technique requires a sizeable skin incision and dissection, leading to poor cosmetic results, postoperative pain, and delayed return to activities of daily life (ADL). These concerns are not only vexing to patients but also are quite costly from a societal standpoint.\nIn contrast to virtually all other intraabdominal procedures, surgeons have been resistant to the adoption of laparoscopic inguinal herniorrhaphy (LIH). This has been primarily due to an extremely steep learning curve for an operation that yields only modest improvements in postoperative pain and disability9 and is not associated with improved cosmesis. Add to this the increased cost of LIH10 and open surgery is likely to remain the standard of care.\nA major contributing factor to the perpendicular learning curve of LIH is the loss of traditional triangulation engendered by working in the tight preperitoneal space. This diminutive operating window allows only small incremental movements with instruments often held in-line to the scope's view. Because the major drawback of single-incision surgery is loss of triangulation,5 we postulated that surgeons who have already perfected these maneuvers in multiport LIH (MLH) may be able to transition rapidly to a single-incision approach without significantly increasing operative time or morbidity.\nThis report describes our initial experience with LESS totally extraperitoneal (TEP) inguinal herniorrhaphy (SITE).", "Other than port placement, both MLH and SITE used identical TEP approaches. In summary, all patients had a urinary catheter placed at the commencement of the case and received a single dose of perioperative antibiotics. All patients were maintained under general anesthesia with endotrachial intubation. Patients were placed supine with both arms tucked comfortably at their sides in a mild Trendelenburg position. Each case started with a lateral curvilinear incision within the umbilical fold. The subcutaneous fat was distracted and the anterior fascia of the rectus sheath visualized. The midline and an umbilical hernial defect, if present, were avoided, and the anterior rectus sheath entered sharply. 
A dissecting balloon system (Spacemaker, Covidien, Norwalk, CT) was used to create the preperitoneal space.\nInsufflation of the preperitoneal space with 12mm Hg CO2 was maintained while the dissection and subsequent mesh deployment (Parietex anatomical mesh, Covidien) was carried out. The mesh was secured with a tacking device (ProTack, Covidien). Once the procedure was completed, 0.25% bupivacaine with epinephrine was sprayed into the preperitoneal space and the space dessuflated under direct vision. All trocars were removed and the fascial defect closed with braided absorbable suture.\nThe multiport cases were performed with the Spacemaker at the umbilicus and additional 11-mm and 5-mm trocars placed in the midline through separate skin incisions. The single-incision cases were performed based on availability either with a single-port system (SILS port, Covidien) or 3 separate ports through individual fascial incisions through a single skin incision at the umbilicus.\nAll patients received postoperative pain medications and were discharged home when they were awake, could tolerate liquids, and had adequate pain control. Urination prior to discharge was not a requirement.\nPatients were instructed to return to full activities and their employment as soon as they felt able to do so. They were given a prescription for acetaminophen with codeine but encouraged to use it only sparingly. Patients were advised to return to the office for a 1- week and 1-month follow-up appointment." ]
[ null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Technique", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "Although originally described in 1992,1,2,3 laparoendoscopic single-site surgery (LESS) went largely unexplored until becoming reinvigorated by the NOTES revolution. Resurrected initially as a stepping stone to NOTES,4 LESS quickly began generating interest as a worthy innovation in and of itself. Single-incision surgery provides surgeons with the opportunity to offer scarlesss surgery to their patients today, without having to significantly change the laparoscopic paradigm or wait for tomorrow's innovations. No viscerotmoy is needed with its inherent closure and contamination problems, the instrumentation is readily available, no novel skills or multispecialty conglomerate is needed, and the procedures are readily coded and billed to insurance carriers. It is therefore little surprise that LESS has caught the attention of surgeons and patients alike and has lead to an exponential growth of centers and surgeons offering this option to their patients.5\nWith over 800 000 inguinal hernia repairs performed in the United States6 and 20 million performed worldwide annually,7,8 this type of hernia is a significant public health concern. Open, tension free, mesh repair has long been the gold standard for inguinal herniorrhaphy (IH). This technique requires a sizeable skin incision and dissection, leading to poor cosmetic results, postoperative pain, and delayed return to activities of daily life (ADL). These concerns are not only vexing to patients but also are quite costly from a societal standpoint.\nIn contrast to virtually all other intraabdominal procedures, surgeons have been resistant to the adoption of laparoscopic inguinal herniorrhaphy (LIH). This has been primarily due to an extremely steep learning curve for an operation that yields only modest improvements in postoperative pain and disability9 and is not associated with improved cosmesis. Add to this the increased cost of LIH10 and open surgery is likely to remain the standard of care.\nA major contributing factor to the perpendicular learning curve of LIH is the loss of traditional triangulation engendered by working in the tight preperitoneal space. This diminutive operating window allows only small incremental movements with instruments often held in-line to the scope's view. Because the major drawback of single-incision surgery is loss of triangulation,5 we postulated that surgeons who have already perfected these maneuvers in multiport LIH (MLH) may be able to transition rapidly to a single-incision approach without significantly increasing operative time or morbidity.\nThis report describes our initial experience with LESS totally extraperitoneal (TEP) inguinal herniorrhaphy (SITE).", "A prospective database of all patients undergoing LIH was retrospectively reviewed. The first 52 patients who underwent 61 (25 right, 18 left, 9 bilateral) single-incision repairs were included in this review. We then compared them with the 52 patients who underwent standard multiport LIH immediately prior, for a total of 104 consecutive patients who underwent 126 repairs. Institutional review board approval was obtained.\n[SUBTITLE] Technique [SUBSECTION] Other than port placement, both MLH and SITE used identical TEP approaches. In summary, all patients had a urinary catheter placed at the commencement of the case and received a single dose of perioperative antibiotics. All patients were maintained under general anesthesia with endotrachial intubation. 
Patients were placed supine with both arms tucked comfortably at their sides in a mild Trendelenburg position. Each case started with a lateral curvilinear incision within the umbilical fold. The subcutaneous fat was distracted and the anterior fascia of the rectus sheath visualized. The midline and an umbilical hernial defect, if present, were avoided, and the anterior rectus sheath entered sharply. A dissecting balloon system (Spacemaker, Covidien, Norwalk, CT) was used to create the preperitoneal space.\nInsufflation of the preperitoneal space with 12mm Hg CO2 was maintained while the dissection and subsequent mesh deployment (Parietex anatomical mesh, Covidien) was carried out. The mesh was secured with a tacking device (ProTack, Covidien). Once the procedure was completed, 0.25% bupivacaine with epinephrine was sprayed into the preperitoneal space and the space dessuflated under direct vision. All trocars were removed and the fascial defect closed with braided absorbable suture.\nThe multiport cases were performed with the Spacemaker at the umbilicus and additional 11-mm and 5-mm trocars placed in the midline through separate skin incisions. The single-incision cases were performed based on availability either with a single-port system (SILS port, Covidien) or 3 separate ports through individual fascial incisions through a single skin incision at the umbilicus.\nAll patients received postoperative pain medications and were discharged home when they were awake, could tolerate liquids, and had adequate pain control. Urination prior to discharge was not a requirement.\nPatients were instructed to return to full activities and their employment as soon as they felt able to do so. They were given a prescription for acetaminophen with codeine but encouraged to use it only sparingly. Patients were advised to return to the office for a 1- week and 1-month follow-up appointment.\nOther than port placement, both MLH and SITE used identical TEP approaches. In summary, all patients had a urinary catheter placed at the commencement of the case and received a single dose of perioperative antibiotics. All patients were maintained under general anesthesia with endotrachial intubation. Patients were placed supine with both arms tucked comfortably at their sides in a mild Trendelenburg position. Each case started with a lateral curvilinear incision within the umbilical fold. The subcutaneous fat was distracted and the anterior fascia of the rectus sheath visualized. The midline and an umbilical hernial defect, if present, were avoided, and the anterior rectus sheath entered sharply. A dissecting balloon system (Spacemaker, Covidien, Norwalk, CT) was used to create the preperitoneal space.\nInsufflation of the preperitoneal space with 12mm Hg CO2 was maintained while the dissection and subsequent mesh deployment (Parietex anatomical mesh, Covidien) was carried out. The mesh was secured with a tacking device (ProTack, Covidien). Once the procedure was completed, 0.25% bupivacaine with epinephrine was sprayed into the preperitoneal space and the space dessuflated under direct vision. All trocars were removed and the fascial defect closed with braided absorbable suture.\nThe multiport cases were performed with the Spacemaker at the umbilicus and additional 11-mm and 5-mm trocars placed in the midline through separate skin incisions. 
The single-incision cases were performed based on availability either with a single-port system (SILS port, Covidien) or 3 separate ports through individual fascial incisions through a single skin incision at the umbilicus.\nAll patients received postoperative pain medications and were discharged home when they were awake, could tolerate liquids, and had adequate pain control. Urination prior to discharge was not a requirement.\nPatients were instructed to return to full activities and their employment as soon as they felt able to do so. They were given a prescription for acetaminophen with codeine but encouraged to use it only sparingly. Patients were advised to return to the office for a 1- week and 1-month follow-up appointment.", "Other than port placement, both MLH and SITE used identical TEP approaches. In summary, all patients had a urinary catheter placed at the commencement of the case and received a single dose of perioperative antibiotics. All patients were maintained under general anesthesia with endotrachial intubation. Patients were placed supine with both arms tucked comfortably at their sides in a mild Trendelenburg position. Each case started with a lateral curvilinear incision within the umbilical fold. The subcutaneous fat was distracted and the anterior fascia of the rectus sheath visualized. The midline and an umbilical hernial defect, if present, were avoided, and the anterior rectus sheath entered sharply. A dissecting balloon system (Spacemaker, Covidien, Norwalk, CT) was used to create the preperitoneal space.\nInsufflation of the preperitoneal space with 12mm Hg CO2 was maintained while the dissection and subsequent mesh deployment (Parietex anatomical mesh, Covidien) was carried out. The mesh was secured with a tacking device (ProTack, Covidien). Once the procedure was completed, 0.25% bupivacaine with epinephrine was sprayed into the preperitoneal space and the space dessuflated under direct vision. All trocars were removed and the fascial defect closed with braided absorbable suture.\nThe multiport cases were performed with the Spacemaker at the umbilicus and additional 11-mm and 5-mm trocars placed in the midline through separate skin incisions. The single-incision cases were performed based on availability either with a single-port system (SILS port, Covidien) or 3 separate ports through individual fascial incisions through a single skin incision at the umbilicus.\nAll patients received postoperative pain medications and were discharged home when they were awake, could tolerate liquids, and had adequate pain control. Urination prior to discharge was not a requirement.\nPatients were instructed to return to full activities and their employment as soon as they felt able to do so. They were given a prescription for acetaminophen with codeine but encouraged to use it only sparingly. Patients were advised to return to the office for a 1- week and 1-month follow-up appointment.", "SITE was completed in all 52 patients with no conversions to multiport or open surgery.\nThe mean age of the patients was 35.6 (range, 18 to 61) and mean BMI was 25.2 (range, 16.3 to 35.0); 92% were men; 90 (86.5%) of the total cases were performed on primary hernias and 15 on recurrent (14.4%). Forty-eight percent of the hernias were on the right side, and 31% were on the left. Twenty-one percent were bilateral. Seventy percent were indirect, 30% direct, and 4.9% pantaloon. No femoral hernias were identified. 
The 2 groups were not significantly different in demographics or type of hernia (Table 1).\nDemographic Data\nMLH=multiport laparoscopic totally extraperitoneal herniorrhaphy; SITE=single incision totally extraperitoneal inguinal herniorrhaphy; NS=not statistically significant.\nMean multiport operative time was 48.2±10.8 minutes for unilateral hernia and 85.9±8.2 minutes for bilateral. Mean SITE operative time was 51.7±15.1 minutes for unilateral and 85.8±16.5 minutes for bilateral (Table 2).\nPerioperative, Postoperative, and Morbidity Data\nMLH = multiport laparoscopic totally extraperitoneal inguinal herniorrhaphy; SITE = single-incision totally extraperitoneal inguinal herniorrhaphy; NS=not statistically significant.\nMean time to discharge from the ambulatory unit was 4.5 hours and not significantly different between the 2 groups. At home oral analgesia was used for a mean of 2.5 days, and the mean time to return to full ADL and work was 11.4 days. These parameters were not significantly different between the 2 groups (Table 2).\nAll patients were discharged the same day as their surgical procedure, and all but one were seen at least once in the postoperative period. Only 71% (74/104) of the patients kept their one-month appointment. In this short-term follow-up, no recurrences were identified.\nOnly minor complications were reported in our study group and included postoperative delayed return of bladder function and swelling of the hernia site secondary to a seroma/hematoma (Table 2).\nNo major complications or deaths were identified in our study population.", "Laparoscopic TEP IH is a procedure with an extremely steep learning curve. One report suggests completion of as many as 250 cases before the recurrence rate approaches that of open surgery.11 This gives LIH the distinction of being the most demanding endoscopic operation. Laparoscopic inguinal hernia's level of complexity arises from 2 sources: first is the difficulty in recognizing the 3-dimensional anatomical landmarks of the groin from the inside-out12; second is the host of technical difficulties unique to LIH, specifically:\nLIH is performed in the claustrophobic preperitoneal space, which almost eliminates triangulation;The visual field is limited;The surgeon is often forced to operate with only one working hand.\nLIH is performed in the claustrophobic preperitoneal space, which almost eliminates triangulation;\nThe visual field is limited;\nThe surgeon is often forced to operate with only one working hand.\nThese technical hurtles closely match those identified as the basis for the steepness of the learning curve of all LESS procedures.5 Because of this, surgeons who have mastered the technical challenges of standard LIH should find it easier transitioning to SITE. Our findings of equivalent mean operative times in both the SITE and MLH groups support this contention. Figures 1 and 2 illustrate the similarity in instrument positioning encountered in both SITE and MLH.\nMultiport totally extraperitoneal inguinal herniorrhaphy schematic (thick line-camera, thin lines=instruments).\nSingle-incision totally extraperitoneal inguinal herniorrhaphy schematic (thick line=camera, thin lines=instruments).\nIn contrast to LIH, the multiple access points of standard laparoscopic cholecystectomy provide unmatched triangulation, visualization, and instrument maneuverability (Figure 3). 
Losing these advantages because of a single access point (Figure 4) requires the acquisition of a new skill set specific to LESS and leads to a significant increase in operative times.13\nMultiport laparoscopic cholecystectomy schematic (thick line=camera, thin lines=instruments).\nSingle-incision laparoscopic cholecystectomy schematic (thick line=camera, thin lines=instruments).\nOne other reason IH may be an especially good application for LESS is that IH is a common operation in the younger demographic, a group that places more emphasis on cosmesis. As in the early days of laparoscopic cholecystectomy, LESS will be consumer driven, making the target population an important factor in procedure adoption. For the first time, SITE will offer patients the option of essentially scarless inguinal herniorrhaphy (Figure 5).\nPostoperative photo of a patient 1 week after undergoing single-incision totally extraperitoneal inguinal herniorrhaphy (SITE).\nIn our practice, patients are carefully selected before being offered any form of laparoscopic herniorrhaphy. This is reflected in the low BMI and age of the patients in both groups. There was no significant difference in the demographics of the patients chosen or in the type of hernia repaired (recurrent, side, or bilaterality). The consecutive nature of the patients chosen also minimized the impact of selection bias.\nThe mean operative times for both MLH and SITE in this study were similar to those described in a large series comparing TEP to Lichtenstein repair.14 Rates of minor morbidity, including delayed return of bladder function and seroma/hematoma formation, were also equivalent to those reported in other series of standard MLH repairs.14,15\nThis study has a number of limitations. It is a retrospective review with only short-term follow-up, precluding any conclusion regarding recurrence rate, the most essential issue in hernia repair. Also, only one surgeon's experience is reported. Perhaps others, with varied techniques and skills, will have a more difficult time transitioning to SITE.", "Whether any LESS procedure in contrast to standard laparoscopy bestows any patient benefit other than improved cosmesis remains a matter of speculation and awaits larger trials and more sensitive evaluation tools.16 This study supports the theory that SITE is not significantly more difficult than MLH and with the added benefit of scarlessness, SITE may finally provide the impetus that launches laparoscopic herniorrhaphy past open IH and into the mainstream." ]
[ null, "materials|methods", null, "results", "discussion", "conclusions" ]
[ "Laparoscopy", "Inguinal hernia", "Single Incision", "Totally extraperitoneal" ]
Reinforced circular stapler in bariatric surgery.
21333188
Roux-en-Y gastric bypass (RYGBP) is the most common procedure for weight loss surgery but has multiple complications. This study evaluates the use of reinforced circular staplers (RCS) and their effects on reducing gastrojejunal anastomotic complications.
BACKGROUND
We conducted a retrospective chart review from January 2007 to November 2008, during which laparoscopic RYGBP was performed in 287 patients. Gastrojejunal anastomotic complications were compared between a nonreinforced circular stapler (NRCS) group comprising 182 patients and an RCS group comprising 105 patients.
METHODS
Complications at the gastrojejunal anastomosis occurred in 15.3% of patients overall: 9.5% of the RCS group and 18.7% of the NRCS group (P=0.026). Neither group had anastomotic leaks. The bleeding rate was 4.8% in the RCS group vs. 6.6% in the NRCS group. Ulcers occurred in 2.9% of the RCS group vs. 6.0% of the NRCS group. The stricture rate was 1.9% in the RCS group vs. 6.6% in the NRCS group.
RESULTS
The application of RCS reduced the incidence of gastrojejunal anastomotic complications. Patients were approximately twice as likely to develop complications when no RCS device was used (odds ratio 2.18; 95% CI, 1.03 to 4.62). Therefore, it is beneficial to utilize RCS for the gastrojejunal anastomosis in RYGBP procedures.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Equipment Design", "Female", "Gastric Bypass", "Humans", "Male", "Middle Aged", "Obesity, Morbid", "Retrospective Studies", "Surgical Staplers", "Suture Techniques", "Treatment Outcome", "Young Adult" ]
3041031
null
null
METHODS
A retrospective chart review was conducted of all the patients who underwent laparoscopic RYGBP procedures from January 2007 to November 2008; 287 laparoscopic RYGBP procedures were performed by the same 2 surgeons. Two consecutive series of patients who underwent laparoscopic RYGBP were compared. Group 1 consisted of the nonreinforced gastrojejunostomies (NRCS), and group 2 consisted of gastrojejunostomies reinforced with bovine pericardium (RCS). The NRCS group comprised 182 patients and the RCS group comprised 105 patients. From January 2007 to March 2008, the majority of gastrojejunal anastomoses were created with nonreinforced circular staples. All gastrojejunal anastomoses between April 2008 and November 2008 were created with bovine pericardial reinforced circular staples (Peri-Strips Dry, Synovis Surgical Innovations, St. Paul, MN). All data were collected into a Microsoft Access template (Microsoft Corp, Redmond, WA) and analyzed with the SAS system. All patients were selected for laparoscopic RYGBP according to the 1991 NIH guidelines.27 Mean age was 40.7±10 years, mean BMI was 47.3±10 kg/m2 (mean weight 278.4 lb), and 89.2% of patients were female (Table 1). Before the operation, a multidisciplinary team evaluated and informed patients of the risks, benefits, complications, and realistic expectations of the procedure. Patient Demographics Standard 75-cm Roux limbs were constructed, and the gastrojejunostomy was performed using a 25-mm EEA circular stapler (Autosuture, US Surgical, Norwalk, CT). The operations were executed with identical techniques and equipment. All the patients had intraoperative endoscopy performed by the same surgeon. Patients were followed up postoperatively at 2 weeks, 3 months, 6 months, and 1 year. All patients received fractionated low-molecular-weight heparin preoperatively and postoperatively for deep venous thrombosis prophylaxis. On postoperative day one, patients were started on prophylactic doses of low-molecular-weight heparin (Fragmin 5000 units subcutaneously every day). Perioperative and postoperative complications assessed included bleeding, stricture, anastomotic leak, and ulcer formation. Bleeding was defined as evidence of active bleeding at the gastrojejunostomy during the intraoperative endoscopy, requiring endoscopic clip application. Bleeding was additionally defined as hematemesis and a decrease in hemoglobin >2 g/dL, or evidence of intraluminal clot on postoperative endoscopy or Gastrografin UGI. Ulcer formation was defined by endoscopic evidence of gastrointestinal tissue erosion. A stricture was defined as a gastrojejunal anastomotic opening <9 mm seen on endoscopy that could not be traversed by the adult endoscope and that required dilatation. Anastomotic leaks were evaluated by air insufflation during intraoperative endoscopy and Gastrografin upper gastrointestinal study on postoperative day one (Figure 1). Roux-en-Y gastric bypass created with the reinforced staplers. [SUBTITLE] Statistical Analysis [SUBSECTION] Statistical analysis compared the RCS and NRCS groups by using the SAS system, version 9.2 (SAS Institute, Cary, NC). Univariate analysis was performed. The Fisher exact test was utilized. Simple logistic regression was used to assess the benefit of reinforced staples in bariatric surgery. Multiple procedures for variable selection were performed to validate the final model. 
Modeling the log odds as a function of the covariates age, sex, BMI, weight, and RCS, we have the following model, where I(·) = 1 if its argument is true and zero otherwise (i.e., the indicator function). Using PROC LOGISTIC, we performed the analysis with each variable entered into the model using forward, backward, stepwise, and best two-variable subset selection. In each case, the only variable showing statistical significance was the RCS variable. This results in the final model: Statistical analysis compared the RCS and NRCS groups by using the SAS system, version 9.2 (SAS Institute, Cary, NC). Univariate analysis was performed. The Fisher exact test was utilized. Simple logistic regression was used to assess the benefit of reinforced staples in bariatric surgery. Multiple procedures for variable selection were performed to validate the final model. Modeling the log odds as a function of the covariates age, sex, BMI, weight, and RCS, we have the following model, where I(·) = 1 if its argument is true and zero otherwise (i.e., the indicator function). Using PROC LOGISTIC, we performed the analysis with each variable entered into the model using forward, backward, stepwise, and best two-variable subset selection. In each case, the only variable showing statistical significance was the RCS variable. This results in the final model:
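The two model displays referred to in the passage above (the full covariate model and the final model) do not appear in the extracted text. A plausible reconstruction of their general form, inferred only from the covariates named in the text, is sketched below; the coefficients are symbolic placeholders and the coding of the indicator terms is an assumption for illustration, not a reproduction of the original article's equations.

```latex
% Sketch of the missing model displays (general form only; coefficient values
% and exact variable coding were not recoverable from the extracted text).
\[
\log\!\left(\frac{p}{1-p}\right)
  = \beta_0 + \beta_1\,\mathrm{Age} + \beta_2\, I(\mathrm{Sex}=\mathrm{female})
  + \beta_3\,\mathrm{BMI} + \beta_4\,\mathrm{Weight} + \beta_5\, I(\mathrm{RCS\ used})
\]
% After variable selection, only the RCS term remained significant,
% giving a final model of the form
\[
\log\!\left(\frac{p}{1-p}\right) = \beta_0 + \beta_1\, I(\mathrm{RCS\ used}),
\]
% where p is the probability of a gastrojejunal anastomotic complication and
% I(.) equals 1 when its argument is true and 0 otherwise.
```

With this (assumed) coding, a negative coefficient on the RCS indicator would correspond to the protective effect of reinforcement reported in the Results.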
RESULTS
Complications at the gastrojejunal site were experienced by 44 (15.3%) of 287 patients. Based on the statistical analysis, age, BMI, and sex did not appear to play a role in the presence of a complication. No incidences of bovine pericardium infection, migration, or erosion occurred. In the NRCS group, 34 (18.7%) of the 182 patients had anastomotic complications. In the RCS group, 10 (9.5%) of the 105 patients had anastomotic complications that required perioperative or postoperative intervention. The difference in complication rates between the groups was statistically significant (P=0.026). The odds ratio was 2.18 with a 95% confidence interval of (1.03, 4.62). Logistic regression showed that patients were 1.96 times more likely to develop complications when reinforced staplers were not used. Neither group had anastomotic leaks or deaths, but both RCS and NRCS cohorts developed bleeding, ulcer formation, and stricture formation (Table 2). Gastrojejunal Anastomotic Complications
[SUBTITLE] Bleeding [SUBSECTION] The bleeding rate was 4.8% in the RCS group vs. 6.6% in the NRCS group (P=0.36). In the RCS cohort, 2 patients had evidence of bleeding at the gastrojejunal anastomosis during the intraoperative endoscopy, which mandated clip application in the operating room. One patient bled 2 days postoperatively; the bleeding was diagnosed endoscopically, and treatment required 2 clips and transfusion of 4 units of packed red blood cells (pRBC). One patient presented with hematemesis and a 4-gram drop in hematocrit on postoperative day 2; upper endoscopy on postoperative day 3 revealed no evidence of ongoing bleeding, and the hemoglobin remained stable. Another patient bled 3 days after surgery and presented to an outside institution with hematemesis; endoscopy there controlled the bleeding with electrocautery, and the patient was transferred to our institution once hemodynamically stable. In the NRCS cohort, 9 patients had evidence of bleeding during the intraoperative upper endoscopy, and all required clips for hemostasis. In one of these 9 patients, the hemoglobin fell 3 g/dL on postoperative day 1, requiring transfusion of 2 units of pRBC. One patient presented with hematemesis on postoperative day 1; a barium swallow identified a large clot in the gastric pouch, but hemoglobin and hemodynamic stability were unchanged after 24 hours of observation. Another patient had a 2 g/dL drop in hemoglobin and presented with tachycardia that responded to fluids; endoscopy found clots but no evidence of active bleeding. One patient had bleeding from an ulcer 5 months after surgery; no active bleeding was seen at endoscopy, and the patient was treated conservatively with a proton pump inhibitor. This episode of bleeding may not be related to the perioperative use of Peri-Strips but was included in the NRCS statistical analysis.
[SUBTITLE] Ulcers [SUBSECTION] Ulcer formation occurred in 2.9% of the RCS group vs. 6.0% of the NRCS group (P=0.18). In the RCS cohort, 3 patients were diagnosed with ulcers, including one with a positive preoperative H. pylori test. On endoscopic confirmation, 1 patient was found to have a simultaneous stricture and ulcer. All of these patients were managed medically with proton pump inhibitors. Two patients had ulcer formation within 2 months. Time of presentation of the ulcer was the same in both groups. In the NRCS group, 11 patients were diagnosed with anastomotic ulcers, including 2 who were H. pylori positive preoperatively. Seven patients had ulcer formation within 2 months, and 4 others presented more than 3 months postoperatively. One patient presented with a perforated ulcer at the gastrojejunal anastomosis about 13 months after the gastric bypass; in this patient, ulcer formation may not have been related to the lack of Peri-Strip reinforcement and might instead have been related to the use of NSAIDs.
[SUBTITLE] Strictures [SUBSECTION] The stricture rate was 1.9% in the RCS group vs. 6.6% in the NRCS group (P=0.062). Two patients presented with strictures in the RCS cohort, both within 2 months postoperatively, and both required 2 separate endoscopic dilatation treatments. In the NRCS cohort, 12 patients developed strictures diagnosed by upper endoscopy (EGD). The strictures were identified an average of 40 days (range, 22 to 121) postoperatively. Four of these patients required dilatation only once; the rest needed 2 to 3 dilatations each.
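For readers who want to check the headline comparison, the univariate 2x2 analysis reported above can be reproduced from the published counts alone. The following is a minimal sketch in Python using scipy (not the authors' SAS code); the exact P value depends on the test variant used, so it should land near, though not necessarily exactly on, the reported P=0.026.

```python
from scipy.stats import fisher_exact

# 2x2 table built from the counts reported above:
# rows = NRCS, RCS; columns = complication, no complication
table = [[34, 182 - 34],   # NRCS: 34 of 182 patients had anastomotic complications
         [10, 105 - 10]]   # RCS:  10 of 105 patients had anastomotic complications

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}")   # ~2.18, matching the reported odds ratio
print(f"P (Fisher exact) = {p_value:.3f}")
```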
CONCLUSION
Our study showed a statistically significant decrease in overall complications (P=0.0261) with the use of bovine pericardium reinforcement at the gastrojejunostomy. The differences in individual complication rates for bleeding, ulcer formation, and stricture formation were not statistically significant. The use of bovine pericardium for reinforcement of the gastrojejunostomy appears to be safe. Prospective studies comparing the 2 techniques are indicated. Based on our study, we recommend the use of reinforced circular staplers for the gastrojejunal anastomosis, because their use roughly halved the overall complication rate.
[ "INTRODUCTION", "Statistical Analysis", "Bleeding", "Ulcers", "Strictures" ]
[ "Sixty-seven percent of the American population is overweight or obese. Approximately 5% of the U.S. adult population struggles with morbid obesity and obesity related diseases.1 Bariatric surgery effectively provides permanent weight loss and remission of weight-related comorbidities in the majority of patients.2,3 Due to the increase in demand for minimally invasive surgery and decreased complication rates, laparoscopic Roux-en-Y gastric bypass (RYGBP) has become the gold standard in weight loss surgery.4 However, complications at the gastrojejunostomy including bleeding, strictures, ulcers, and leaks add to the morbidity of the procedure.5\nVariation in the technique for the gastrojejunal anastomosis exists. Hand-sewn, linear staplers, linear reinforced staplers, circular staplers,6 and, recently, reinforced circular staplers have been described in the literature. Previous reports confirm that the application of linear bioabsorbable reinforced staplers reduce anastomotic complications (including bleeding, leaks, fistulas, and strictures).7–12 Jones et al7 performed a retrospective comparison of 2 consecutive case series with a stapled 25 EEA anastomosis with and without GORE Seamguard reinforcement. Tissue reinforcement decreased complications by 85%. Bleeding, leaks, and strictures were lower in the reinforced group compared with those in the nonreinforced group by 0.7% to 1.1%, 0.7% to 1.9%, and 0.7% to 9.3%, respectively. Other reviews have shown that no difference in leak rates exists with the use of reinforced staplers.13 Reports of increased complications with tissue reinforcement also exist. Ibele et al14 performed a retrospective chart review comparing 419 patients with a 25 EEA stapled anastomosis without reinforcement with 69 patients with a 25 EEA anastomosis with bovine pericardium reinforcement. The no reinforcement group had a 0.7% leak rate compared with a 4.9% leak rate in the bovine pericardium reinforced group. To date, little data are available that evaluate the use of reinforced nonabsorbable circular staplers (RCS) with conventional nonreinforced circular staplers (NRCS) in laparoscopic RYGBP.\nThis study compared the incidence of complications at the gastrojejunal anastomosis, such as bleeding, stricture, leaks, and ulcer formation, with the use of a bovine pericardial reinforced circular stapler with a nonreinforced circular stapler at the gastrojejunal anastomosis in the laparoscopic RYGBP.", "Statistical analysis compared the RCS and NRCS groups by using SAS system, version 9.2 (SAS Institute, Cary, NC). Univariate analysis was performed. Fisher exact test was utilized. Simple logistic regression was used to decide the benefit of reinforced staples in bariatric surgery. Multiple procedures for variable selection were performed to validate the final model. Modeling the log odds as a function of the covariates age, sex, BMI, weight, and RCS, we have the following model: Where I (*) = 1 if the input is true and zero otherwise (ie, the indicator function). Using proc logistic, we performed the analysis having each variable entered into the model using forward, backward, stepwise, and best 2 variable subset section. In each case, the only variable showing statistical significance was the RCS variable. This results in the final model: ", "The bleeding rate was 4.8% in the RCS group vs. 6.6% in the NRCS group (P=0.36). 
In the RCS cohort, during the intraoperative endoscopy, 2 patients had evidence of bleeding at the gastrojejunal anastomosis, which mandated clip application in the operating room. One patient bled 2 days postoperatively. This was diagnosed endoscopically. Treatment required 2 clips and transfusion of 4 units of packed red blood cells (pRBC). One patient presented with hematemesis and a drop in the hematocrit of 4 grams on postoperative day 2. An upper endoscopy performed on postoperative day 3 revealed no evidence of ongoing bleeding, and the patient maintained a stable hemoglobin count. Another patient bled 3 days after surgery and presented at an outside institution with hematemesis. Endoscopy done at the outside institution reported the bleeding was controlled with electrocautery. The patient was then transferred to our institution once hemodynamically stable.\nIn the NRCS cohort, 9 patients were found to have evidence of bleeding during the intraoperative upper endoscopy. They all required the use of clips for hemostasis. In one of these 9 patients, on postoperative day 1, the hemoglobin fell 3g/dL, and this required 2 units for pRBC transfusion. One patient presented with hematemesis on postoperative day 1. A barium swallow identified a large clot in the gastric pouch, but there were no changes in hemoglobin or hemodynamic stability after 24 hours of observation. Another patient had a drop in the hemoglobin of 2g/dL and presented with tachycardia that responded to fluids. Endoscopy found clots but no evidence of active bleeding. One patient had bleeding from an ulcer 5 months after surgery. At the time of endoscopy, no active bleeding was seen, and therefore the patient was treated conservatively with a proton pump inhibitor. This episode of bleeding may not be related to the perioperative use of Peri-Strips but was included in the NRCS statistical analysis.", "Ulcer formation occurred in 2.9% of the RCS group vs. 6.0% of the NRCS group (P=0.18). In the RCS cohort, 3 patients were diagnosed with ulcers, including one with a preoperative H. pylori positive test. By using endoscopic confirmation, 1 patient was found to have a simultaneous stricture and an ulcer. All of these patients were medically managed with proton pump inhibitors. Two patients had ulcer formation in <2 months. Time of presentation of the ulcer was the same in both groups.\nEleven patients were diagnosed with anastomotic ulcers in the NRCS group, including 2 who were H. pylori positive preoperatively. Seven patients had ulcer formation in <2 months and 4 others presented more than 3 months postoperatively. One patient presented with a perforated ulcer at the gastrojejunal anastomosis about 13 months after the gastric bypass. In this patient, ulcer formation may not have been related to the lack of Peri-Strip reinforcement and might have been related to the use of NSAIDs", "The stricture rate was 1.9% in the RCS group vs. 6.6% in the NRCS group (P=0.062). Two patients presented with strictures in the RCS cohort. They both occurred <2 months postoperatively. Both patients required 2 separate endoscopic dilatation treatments.\nIn the NRCS cohort, 12 patients developed strictures diagnosed by EGJ. The strictures were identified an average of 40 days (range, 22 to 121) postoperatively. Four of these patients required dilatation only once. The rest of the group needed between 2 to 3 dilatations each." ]
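The logistic-regression step described in the Statistical Analysis text above can likewise be illustrated outside SAS. The sketch below is not the authors' code: it rebuilds a per-patient dataset from the aggregate counts reported in the Results and fits only the reduced model containing the reinforcement indicator, using pandas and statsmodels (column names are illustrative, not taken from the authors' database).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Rebuild a per-patient dataset from the aggregate counts:
# 34/182 complications without reinforcement, 10/105 with reinforcement.
df = pd.DataFrame({
    "rcs":          [0] * 182 + [1] * 105,
    "complication": [1] * 34 + [0] * 148 + [1] * 10 + [0] * 95,
})

fit = smf.logit("complication ~ rcs", data=df).fit(disp=False)
print(np.exp(fit.params["rcs"]))    # odds ratio for reinforcement, ~0.46
print(np.exp(-fit.params["rcs"]))   # inverted: ~2.18 higher odds of a complication without reinforcement
```

Exponentiating the negative of the reinforcement coefficient recovers the roughly twofold increase in the odds of a complication without reinforcement described in the Results.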
[ null, null, null, null, null ]
[ "INTRODUCTION", "METHODS", "Statistical Analysis", "RESULTS", "Bleeding", "Ulcers", "Strictures", "DISCUSSION", "CONCLUSION" ]
[ "Sixty-seven percent of the American population is overweight or obese. Approximately 5% of the U.S. adult population struggles with morbid obesity and obesity related diseases.1 Bariatric surgery effectively provides permanent weight loss and remission of weight-related comorbidities in the majority of patients.2,3 Due to the increase in demand for minimally invasive surgery and decreased complication rates, laparoscopic Roux-en-Y gastric bypass (RYGBP) has become the gold standard in weight loss surgery.4 However, complications at the gastrojejunostomy including bleeding, strictures, ulcers, and leaks add to the morbidity of the procedure.5\nVariation in the technique for the gastrojejunal anastomosis exists. Hand-sewn, linear staplers, linear reinforced staplers, circular staplers,6 and, recently, reinforced circular staplers have been described in the literature. Previous reports confirm that the application of linear bioabsorbable reinforced staplers reduce anastomotic complications (including bleeding, leaks, fistulas, and strictures).7–12 Jones et al7 performed a retrospective comparison of 2 consecutive case series with a stapled 25 EEA anastomosis with and without GORE Seamguard reinforcement. Tissue reinforcement decreased complications by 85%. Bleeding, leaks, and strictures were lower in the reinforced group compared with those in the nonreinforced group by 0.7% to 1.1%, 0.7% to 1.9%, and 0.7% to 9.3%, respectively. Other reviews have shown that no difference in leak rates exists with the use of reinforced staplers.13 Reports of increased complications with tissue reinforcement also exist. Ibele et al14 performed a retrospective chart review comparing 419 patients with a 25 EEA stapled anastomosis without reinforcement with 69 patients with a 25 EEA anastomosis with bovine pericardium reinforcement. The no reinforcement group had a 0.7% leak rate compared with a 4.9% leak rate in the bovine pericardium reinforced group. To date, little data are available that evaluate the use of reinforced nonabsorbable circular staplers (RCS) with conventional nonreinforced circular staplers (NRCS) in laparoscopic RYGBP.\nThis study compared the incidence of complications at the gastrojejunal anastomosis, such as bleeding, stricture, leaks, and ulcer formation, with the use of a bovine pericardial reinforced circular stapler with a nonreinforced circular stapler at the gastrojejunal anastomosis in the laparoscopic RYGBP.", "A retrospective chart review was conducted of all the patients who underwent laparoscopic RYGBP procedures from January 2007 to November 2008; 287 laparoscopic RYGBP procedures were performed by the same 2 surgeons. Two consecutive series of patients who underwent laparoscopic RYGBP were compared. Group 1 consisted of the nonreinforced gastrojejunostomies (NRCS), and group 2 consisted of bovine pericardium reinforcement of the stapled gastrojejunostomy (RCS). The NRCS group comprised 182 patients and the RCS group comprised 105 patients. From January 2007 to March 2008, the majority of gastrojejunal anastomoses were created with nonreinforced circular staples. All gastrojejunal anastomoses between April 2008 and November 2008 were created with bovine pericardial reinforced circular staples (Peri-Strips Dry, Synovis Surgical Innovations, St. Paul, MN). 
All data were collected into a Microsoft Access template (Microsoft Corp, Redmond, WA) and analyzed with the SAS system.\nAll patients were selected for laparoscopic RYGBP according to the 1991 NIH guidelines.27 Mean age was 40.7±10 years, mean BMI was 47.3±10kg/m2 (mean weight 278.4lb), and 89.2% of patients were female (Table 1). Before the operation, a multidisciplinary team evaluated and informed patients of the risks, benefits, complications, and realistic expectations of the procedure.\nPatient Demographics\nStandard 75-cm Roux limbs were constructed, and the gastrojejunostomy was performed using a 25-mm EEA circular stapler (Autosuture, US Surgical, Norwalk, CT). The operations were executed with identical techniques and equipment. All the patients had intraoperative endoscopy performed by the same surgeon. Patients were followed up postoperatively at 2 weeks, 3 months, 6 months, and 1 year. All patients received fractionated weight heparin preoperatively and postoperatively for deep venous thrombosis prophylaxis. On postoperative day one, patients were started on prophylactic doses of low molecular weight heparin (Fragmin 5000 units subcutaneous every day).\nPerioperative and postoperative complications assessed included bleeding, stricture, anastomotic leak, and ulcer formation. Bleeding is defined as evidence of active bleeding at the gastrojejunostomy during the intraoperative endoscopy, requiring endoscopic clip application. Bleeding was additionally defined as hematemesis and a decrease in hemoglobin >2 g/dL, or evidence of intraluminal clot on postoperative endoscopy or Gastrografin UGI. Ulcer formation was defined by endoscopic evidence of gastrointestinal tissue erosion. The definition of stricture was a gastrojejunal anastomotic opening <9mm seen on endoscopy that could not be reversed by the adult endoscope and required dilatation. Anastomotic leaks were evaluated by air insufflation during intraoperative endoscopy and Gastrografin upper gastrointestinal study on postoperative day one (Figure 1).\nRoux-en-Y gastric bypass created with the reinforced staplers.\n[SUBTITLE] Statistical Analysis [SUBSECTION] Statistical analysis compared the RCS and NRCS groups by using SAS system, version 9.2 (SAS Institute, Cary, NC). Univariate analysis was performed. Fisher exact test was utilized. Simple logistic regression was used to decide the benefit of reinforced staples in bariatric surgery. Multiple procedures for variable selection were performed to validate the final model. Modeling the log odds as a function of the covariates age, sex, BMI, weight, and RCS, we have the following model: Where I (*) = 1 if the input is true and zero otherwise (ie, the indicator function). Using proc logistic, we performed the analysis having each variable entered into the model using forward, backward, stepwise, and best 2 variable subset section. In each case, the only variable showing statistical significance was the RCS variable. This results in the final model: \nStatistical analysis compared the RCS and NRCS groups by using SAS system, version 9.2 (SAS Institute, Cary, NC). Univariate analysis was performed. Fisher exact test was utilized. Simple logistic regression was used to decide the benefit of reinforced staples in bariatric surgery. Multiple procedures for variable selection were performed to validate the final model. 
Modeling the log odds as a function of the covariates age, sex, BMI, weight, and RCS, we have the following model: Where I (*) = 1 if the input is true and zero otherwise (ie, the indicator function). Using proc logistic, we performed the analysis having each variable entered into the model using forward, backward, stepwise, and best 2 variable subset section. In each case, the only variable showing statistical significance was the RCS variable. This results in the final model: ", "Statistical analysis compared the RCS and NRCS groups by using SAS system, version 9.2 (SAS Institute, Cary, NC). Univariate analysis was performed. Fisher exact test was utilized. Simple logistic regression was used to decide the benefit of reinforced staples in bariatric surgery. Multiple procedures for variable selection were performed to validate the final model. Modeling the log odds as a function of the covariates age, sex, BMI, weight, and RCS, we have the following model: Where I (*) = 1 if the input is true and zero otherwise (ie, the indicator function). Using proc logistic, we performed the analysis having each variable entered into the model using forward, backward, stepwise, and best 2 variable subset section. In each case, the only variable showing statistical significance was the RCS variable. This results in the final model: ", "Complications at the gastrojejunal site were experienced by 44 (15.3%) of 287 patients. Based on the statistical analysis, age, BMI, or sex did not appear to play a role in the presence of a complication. No incidences of bovine pericardium infection, migration, or erosion occurred.\nFrom the NRCS group, 34 (18.7%) of the 182 patients had anastomotic complications. In the RCS group, 10 (9.5%) of the 105 patients had anastomotic complications that required perioperative or postoperative intervention. The difference between complications for each group is statistically significant (P=0.026). The odds ratio was 2.18 with a 95% confidence interval (1.03, 4.62). We found through logistic regression that patients were 1.96 times more likely to develop complications when no reinforced staplers were used. Neither group presented with anastomotic leaks or deaths, but both RCS and NRCS cohorts developed bleeding, ulcer formation, and stricture formation (Table 2).\nGastrojejunal Anastomotic Complications\n[SUBTITLE] Bleeding [SUBSECTION] The bleeding rate was 4.8% in the RCS group vs. 6.6% in the NRCS group (P=0.36). In the RCS cohort, during the intraoperative endoscopy, 2 patients had evidence of bleeding at the gastrojejunal anastomosis, which mandated clip application in the operating room. One patient bled 2 days postoperatively. This was diagnosed endoscopically. Treatment required 2 clips and transfusion of 4 units of packed red blood cells (pRBC). One patient presented with hematemesis and a drop in the hematocrit of 4 grams on postoperative day 2. An upper endoscopy performed on postoperative day 3 revealed no evidence of ongoing bleeding, and the patient maintained a stable hemoglobin count. Another patient bled 3 days after surgery and presented at an outside institution with hematemesis. Endoscopy done at the outside institution reported the bleeding was controlled with electrocautery. The patient was then transferred to our institution once hemodynamically stable.\nIn the NRCS cohort, 9 patients were found to have evidence of bleeding during the intraoperative upper endoscopy. They all required the use of clips for hemostasis. 
In one of these 9 patients, on postoperative day 1, the hemoglobin fell 3g/dL, and this required 2 units for pRBC transfusion. One patient presented with hematemesis on postoperative day 1. A barium swallow identified a large clot in the gastric pouch, but there were no changes in hemoglobin or hemodynamic stability after 24 hours of observation. Another patient had a drop in the hemoglobin of 2g/dL and presented with tachycardia that responded to fluids. Endoscopy found clots but no evidence of active bleeding. One patient had bleeding from an ulcer 5 months after surgery. At the time of endoscopy, no active bleeding was seen, and therefore the patient was treated conservatively with a proton pump inhibitor. This episode of bleeding may not be related to the perioperative use of Peri-Strips but was included in the NRCS statistical analysis.\nThe bleeding rate was 4.8% in the RCS group vs. 6.6% in the NRCS group (P=0.36). In the RCS cohort, during the intraoperative endoscopy, 2 patients had evidence of bleeding at the gastrojejunal anastomosis, which mandated clip application in the operating room. One patient bled 2 days postoperatively. This was diagnosed endoscopically. Treatment required 2 clips and transfusion of 4 units of packed red blood cells (pRBC). One patient presented with hematemesis and a drop in the hematocrit of 4 grams on postoperative day 2. An upper endoscopy performed on postoperative day 3 revealed no evidence of ongoing bleeding, and the patient maintained a stable hemoglobin count. Another patient bled 3 days after surgery and presented at an outside institution with hematemesis. Endoscopy done at the outside institution reported the bleeding was controlled with electrocautery. The patient was then transferred to our institution once hemodynamically stable.\nIn the NRCS cohort, 9 patients were found to have evidence of bleeding during the intraoperative upper endoscopy. They all required the use of clips for hemostasis. In one of these 9 patients, on postoperative day 1, the hemoglobin fell 3g/dL, and this required 2 units for pRBC transfusion. One patient presented with hematemesis on postoperative day 1. A barium swallow identified a large clot in the gastric pouch, but there were no changes in hemoglobin or hemodynamic stability after 24 hours of observation. Another patient had a drop in the hemoglobin of 2g/dL and presented with tachycardia that responded to fluids. Endoscopy found clots but no evidence of active bleeding. One patient had bleeding from an ulcer 5 months after surgery. At the time of endoscopy, no active bleeding was seen, and therefore the patient was treated conservatively with a proton pump inhibitor. This episode of bleeding may not be related to the perioperative use of Peri-Strips but was included in the NRCS statistical analysis.\n[SUBTITLE] Ulcers [SUBSECTION] Ulcer formation occurred in 2.9% of the RCS group vs. 6.0% of the NRCS group (P=0.18). In the RCS cohort, 3 patients were diagnosed with ulcers, including one with a preoperative H. pylori positive test. By using endoscopic confirmation, 1 patient was found to have a simultaneous stricture and an ulcer. All of these patients were medically managed with proton pump inhibitors. Two patients had ulcer formation in <2 months. Time of presentation of the ulcer was the same in both groups.\nEleven patients were diagnosed with anastomotic ulcers in the NRCS group, including 2 who were H. pylori positive preoperatively. 
Seven patients had ulcer formation in <2 months and 4 others presented more than 3 months postoperatively. One patient presented with a perforated ulcer at the gastrojejunal anastomosis about 13 months after the gastric bypass. In this patient, ulcer formation may not have been related to the lack of Peri-Strip reinforcement and might have been related to the use of NSAIDs\nUlcer formation occurred in 2.9% of the RCS group vs. 6.0% of the NRCS group (P=0.18). In the RCS cohort, 3 patients were diagnosed with ulcers, including one with a preoperative H. pylori positive test. By using endoscopic confirmation, 1 patient was found to have a simultaneous stricture and an ulcer. All of these patients were medically managed with proton pump inhibitors. Two patients had ulcer formation in <2 months. Time of presentation of the ulcer was the same in both groups.\nEleven patients were diagnosed with anastomotic ulcers in the NRCS group, including 2 who were H. pylori positive preoperatively. Seven patients had ulcer formation in <2 months and 4 others presented more than 3 months postoperatively. One patient presented with a perforated ulcer at the gastrojejunal anastomosis about 13 months after the gastric bypass. In this patient, ulcer formation may not have been related to the lack of Peri-Strip reinforcement and might have been related to the use of NSAIDs\n[SUBTITLE] Strictures [SUBSECTION] The stricture rate was 1.9% in the RCS group vs. 6.6% in the NRCS group (P=0.062). Two patients presented with strictures in the RCS cohort. They both occurred <2 months postoperatively. Both patients required 2 separate endoscopic dilatation treatments.\nIn the NRCS cohort, 12 patients developed strictures diagnosed by EGJ. The strictures were identified an average of 40 days (range, 22 to 121) postoperatively. Four of these patients required dilatation only once. The rest of the group needed between 2 to 3 dilatations each.\nThe stricture rate was 1.9% in the RCS group vs. 6.6% in the NRCS group (P=0.062). Two patients presented with strictures in the RCS cohort. They both occurred <2 months postoperatively. Both patients required 2 separate endoscopic dilatation treatments.\nIn the NRCS cohort, 12 patients developed strictures diagnosed by EGJ. The strictures were identified an average of 40 days (range, 22 to 121) postoperatively. Four of these patients required dilatation only once. The rest of the group needed between 2 to 3 dilatations each.", "The bleeding rate was 4.8% in the RCS group vs. 6.6% in the NRCS group (P=0.36). In the RCS cohort, during the intraoperative endoscopy, 2 patients had evidence of bleeding at the gastrojejunal anastomosis, which mandated clip application in the operating room. One patient bled 2 days postoperatively. This was diagnosed endoscopically. Treatment required 2 clips and transfusion of 4 units of packed red blood cells (pRBC). One patient presented with hematemesis and a drop in the hematocrit of 4 grams on postoperative day 2. An upper endoscopy performed on postoperative day 3 revealed no evidence of ongoing bleeding, and the patient maintained a stable hemoglobin count. Another patient bled 3 days after surgery and presented at an outside institution with hematemesis. Endoscopy done at the outside institution reported the bleeding was controlled with electrocautery. The patient was then transferred to our institution once hemodynamically stable.\nIn the NRCS cohort, 9 patients were found to have evidence of bleeding during the intraoperative upper endoscopy. 
They all required the use of clips for hemostasis. In one of these 9 patients, on postoperative day 1, the hemoglobin fell 3g/dL, and this required 2 units for pRBC transfusion. One patient presented with hematemesis on postoperative day 1. A barium swallow identified a large clot in the gastric pouch, but there were no changes in hemoglobin or hemodynamic stability after 24 hours of observation. Another patient had a drop in the hemoglobin of 2g/dL and presented with tachycardia that responded to fluids. Endoscopy found clots but no evidence of active bleeding. One patient had bleeding from an ulcer 5 months after surgery. At the time of endoscopy, no active bleeding was seen, and therefore the patient was treated conservatively with a proton pump inhibitor. This episode of bleeding may not be related to the perioperative use of Peri-Strips but was included in the NRCS statistical analysis.", "Ulcer formation occurred in 2.9% of the RCS group vs. 6.0% of the NRCS group (P=0.18). In the RCS cohort, 3 patients were diagnosed with ulcers, including one with a preoperative H. pylori positive test. By using endoscopic confirmation, 1 patient was found to have a simultaneous stricture and an ulcer. All of these patients were medically managed with proton pump inhibitors. Two patients had ulcer formation in <2 months. Time of presentation of the ulcer was the same in both groups.\nEleven patients were diagnosed with anastomotic ulcers in the NRCS group, including 2 who were H. pylori positive preoperatively. Seven patients had ulcer formation in <2 months and 4 others presented more than 3 months postoperatively. One patient presented with a perforated ulcer at the gastrojejunal anastomosis about 13 months after the gastric bypass. In this patient, ulcer formation may not have been related to the lack of Peri-Strip reinforcement and might have been related to the use of NSAIDs", "The stricture rate was 1.9% in the RCS group vs. 6.6% in the NRCS group (P=0.062). Two patients presented with strictures in the RCS cohort. They both occurred <2 months postoperatively. Both patients required 2 separate endoscopic dilatation treatments.\nIn the NRCS cohort, 12 patients developed strictures diagnosed by EGJ. The strictures were identified an average of 40 days (range, 22 to 121) postoperatively. Four of these patients required dilatation only once. The rest of the group needed between 2 to 3 dilatations each.", "The advances in bariatric surgery with the use of laparascopic RYGBP have reduced the morbidity and mortality associated with weight loss surgery.15,16 However complications, such as leaks, bleeding, strictures, and ulcers, can occur at the gastrojejunal anastomosis and add to the cost and morbidity of the procedure. The use of reinforced staple lines is reported to increase the staple-line strength while allowing for natural healing to decrease the incidence of complications. At our institution, the bovine pericardium strips (Peri-Stripes Dry with Veritas, Synovis, St. Paul, MN) are used to reinforce the circular stapler line. Previous publications have shown a decreased risk of bleeding and leak rate when tissue reinforcement is used in staple lines. Our study showed an overall decrease in complication rates in the RCS group compared with the NRCS group. Without the use of RCS, patients are twice as likely to develop complications.\nA review of the literature shows the leak rate to be 0.8% to 5%. 
Lujan et al6 examined 350 patients who underwent LRYGBP in which a circular stapler was used at the gastrojejunal anastomosis and found the leak rate to be 0.8%. Also, Ibele et al14, in a retrospective review using reinforcement with bovine pericardium at the gastrojejunal anastomosis, found an increase in the incidence of leaks and staple-line failure. The learning curve with the application of new technology may have played a role in these findings. In our review, no leaks occurred in either the NRCS or the RCS group, which precludes us from making any conclusions with regard to this complication.\nMorbidly obese patients are at high risk for deep vein thrombosis and pulmonary embolus and are frequently treated with low-molecular-weight heparin in the perioperative period. This, in combination with the popularity of the laparoscopic approach, increases the potential for bleeding from luminal and extraluminal staple lines, as was demonstrated by Bahkos et al.17 A modality that would decrease the incidence of staple-line bleeding in laparoscopic Roux-en-Y gastric bypass would be welcome. It is felt that tissue reinforcement with bovine pericardium, through its buttressing effect, may be the mechanism by which staple-line bleeding is reduced.11 Studies have shown that reinforced staplers in LRYGBP diminish extraluminal bleeding.18 An article by Saber et al12 revealed a lower intraluminal bleeding rate in patients whose anastomosis was performed using a reinforced stapler. The evidence in our study also suggests that reinforced staplers decrease intraluminal bleeding at the gastrojejunal anastomosis. This finding was not statistically significant, but a decreased incidence of bleeding was present; most of this difference was seen in a decreased need for clip application during the intraoperative endoscopy, further decreasing the cost of each clip and the potential for delayed bleeding. Like others, we propose this may be due to the buttressing effect of bovine pericardium.11\nWith the development of reinforced staplers, stricture rates have decreased. Jones et al7 investigated the use of bioabsorbable reinforced staple lines. Although their study did not show a statistically significant decrease in leaks and bleeding, it did reveal a 93% decrease in stricture incidence. This is one of the few previously published studies evaluating circular reinforced staplers in LRYGBP surgery. Prior to the use of reinforced staplers, the stricture rate was 3.2% to 23%.19,20 Blackstone et al21 found that younger patients and those with GERD were more likely to form strictures. There are also studies showing that using a 25-mm stapler instead of a 21-mm stapler reduces the stricture rate.22 Our institution routinely uses the 25-mm stapler. In our study, the RCS group had a decreased stricture occurrence of 2% compared with 6.5% in the NRCS group. However, this did not reach statistical significance. The decrease in strictures may be due to a reduction in tension at the staple line by the even distribution of forces over the area of buttressing material and the decreased collagen formation seen when bovine pericardium reinforcement is used.7,23\nMarginal ulceration forms between 2 months and 6 months postoperatively, with the majority formed by 12 months. The literature reports that ulcer development occurs in 1% to 20% of patients after LRYGBP.24,25 The use of proton pump inhibitors (PPI) is the standard of care for management. Gumbs et al26 recommend prophylactic use of PPI to prevent anastomotic ulceration. 
In our study, all patients received PPI for 6 months postoperatively. They were also preoperatively screened for H. pylori and given appropriate therapy if H. pylori was present. Our study shows a decreased incidence of marginal ulcer formation from 6.0% to 2.9% with the use of a reinforced stapler; we hypothesize this may be due to a protective effect from the bovine pericardium that prevents gastric juices from interacting with jejunal mucosa. However, it did not reach statistical significance.", "Our study showed a statistically significant decrease in overall complications (P=0.0261) with the use of bovine pericardium reinforcement at the gastrojejunostomy. Individual complication rates for bleeding, ulcer formation, and stricture formation were not statistically significant. The use of bovine pericardium for the reinforcement of the gastrojejunostomy appears to be safe. Prospective studies comparing the 2 techniques are indicated. Based on our study, we recommend the use of reinforced circular staplers for the gastrojejunal anastomosis, because it halves the overall complication rate." ]
[ null, "methods", null, "results", null, null, null, "discussion", "conclusions" ]
[ "Bovine pericardium", "Gastrojejunostomy", "Complications" ]
Laparoscopic resection of large adrenal tumors.
21333189
Laparoscopic adrenalectomy has rapidly replaced open adrenalectomy as the procedure of choice for benign adrenal tumors. Whether laparoscopic resection of large (≥ 8 cm) or potentially malignant tumors is appropriate remains unclear because of technical difficulties and concern about local recurrence. The aim of this study was to evaluate the short- and long-term outcomes of 174 consecutive laparoscopic and open adrenalectomies performed in our surgical unit.
BACKGROUND
Our data come from a retrospective analysis of 174 consecutive adrenalectomies performed on 166 patients from May 1997 to December 2008. Fifteen patients with tumors ≥ 8 cm underwent laparoscopic adrenalectomy. Sixty-five patients were men and 101 were women, aged 16 years to 80 years. Nine patients underwent either synchronous or metachronous bilateral adrenalectomy. Tumor size ranged from 3.2 cm to 27 cm. The largest laparoscopically excised tumors were a ganglioneuroma with a mean diameter of 13 cm and a myelolipoma of 14 cm.
METHODS
In 135 patients, a laparoscopic procedure was completed successfully, whereas in 14 patients the laparoscopic procedure was converted to open. Seventeen patients were treated with an open approach from the start. There were no conversions in the group of patients with tumors > 8 cm. Operative time for laparoscopic adrenalectomies ranged from 65 minutes to 240 minutes. In the large adrenal tumor group, operative time for laparoscopic resection ranged from 150 minutes to 240 minutes. The postoperative hospital stay for laparoscopic adrenalectomy ranged from 1 day to 2 days (mean, 1.5) and from 5 days to 20 days for patients undergoing the open or converted procedure. The mean postoperative stay was 2 days for the group with large tumors resected by laparoscopy.
RESULTS
Laparoscopic resection of large (≥ 8 cm) adrenal tumors is feasible and safe. Short- and long-term results did not differ in the 2 groups.
CONCLUSION
[ "Adolescent", "Adrenal Gland Neoplasms", "Adrenalectomy", "Adult", "Aged", "Aged, 80 and over", "Female", "Humans", "Laparoscopy", "Magnetic Resonance Imaging", "Male", "Middle Aged", "Retrospective Studies", "Severity of Illness Index", "Tomography, X-Ray Computed", "Treatment Outcome", "Young Adult" ]
3041032
null
null
null
null
RESULTS
Of the 166 patients, 135 had their laparoscopic procedure completed successfully, 14 had the laparoscopic procedure converted to open, and 17 were treated with the open approach from the start. No conversions were necessary in the group of patients with tumors >8cm. Operative time for laparoscopic adrenalectomies ranged from 65 minutes to 240 minutes. In the large adrenal tumor group, operative time for laparoscopic resection ranged from 150 minutes to 240 minutes. The postoperative hospital stay for laparoscopic adrenalectomy ranged from 1 day to 2 days (mean, 1.5) and from 5 days to 20 days for patients undergoing the open or converted procedure. The mean postoperative stay was 2 days for the group with large tumors resected by laparoscopy. Blood loss was minimal in all patients undergoing laparoscopic resection of tumors >8cm, and no transfusions were needed. Mortality and major morbidity did not differ between patients with large tumors and patients with smaller tumors. Early in our experience, in a patient with morbid obesity, Cushing's syndrome, and bilateral macronodular adrenal hyperplasia (adrenal size 2.5cm to 4.5cm), the right laparoscopic adrenalectomy was uneventful, whereas the left laparoscopic adrenalectomy, converted to open, was complicated by a low-output pancreatic fistula that was treated conservatively with success. One patient with pheochromocytoma, who initially underwent a laparoscopic approach for an 8-cm tumor that was converted to open because of difficulty in mobilization and minor but troublesome bleeding obscuring the operative field, developed a pulmonary embolism 3 hours after the procedure and succumbed in the intensive care unit (ICU) one month later. A 70-year-old patient with Conn's syndrome and a 2.5-cm adenoma, who had a history of coronary heart disease, anticoagulant treatment that was not discontinued in a timely manner, and chronic renal failure, was reoperated on 24 hours postoperatively for diffuse retroperitoneal hemorrhage, which was treated successfully, but died from respiratory failure after 2.5 months of hospitalization in the ICU. One patient had a laparoscopically resected 10cm x 6.5cm adenoma, suspicious for malignancy on imaging and potentially malignant on histological investigation, that recurred after 5 years; the patient then underwent open resection that included the upper pole of the kidney. At a mean follow-up of 96 months after laparoscopic adrenalectomy (range, 8 to 150), resolution of hormonal activity and no evidence of benign tumor recurrence were documented, irrespective of tumor size.
CONCLUSION
Laparoscopic adrenalectomy can be considered the treatment of choice for all benign adrenal tumors up to 12cm to 14cm in size. Morbidity, mortality, and hospital stay are similar irrespective of tumor size, but experience in both laparoscopic and adrenal surgery is necessary. Large tumors suspected of being a primary malignancy based on imaging characteristics should be approached with the open technique from the start.
[ "INTRODUCTION", "Technical Aspects of Laparoscopic Adrenalectomy", "Right Adrenalectomy", "Left Adrenalectomy" ]
[ "The first laparoscopic adrenalectomy was performed by Michel Gagner in 1992.1 Since then, it has become the standard surgical approach for all benign adrenal tumors. Compared with traditional open resection, laparoscopic adrenalectomy is associated with less postoperative pain, shorter hospital stay and recovery time, and better patient satisfaction rates. These clear advantages of laparoscopic adrenalectomy did not encourage any prospective randomized controlled trials comparing the new technique with the classical “open,” either transabdominal or retroperitoneal technique. However, the laparoscopic approach has not been widely accepted as appropriate for the resection of larger adrenal tumors.2 A tumor size of 6cm and later 8cm has been considered as the upper limit for laparoscopic adrenalectomy.3 The risk of malignancy as well as technical difficulties were the main concerns in applying the laparoscopic approach in large adrenal tumors.4 In this article, we highlight the technical aspects and results of laparoscopic surgery for adrenal tumors >8cm to determine the feasibility and safety of the procedure.", "We prefer the transperitoneal lateral decubitus approach, as the best for maximal exposure of the gland and adjacent organs and vessels. On the right side, we use three 10-mm trocars and one 5-mm trocar. On the left side, we use two 10-mm trocars and two 5-mm trocars. We prefer to create pneumoperitoneum with the Hasson technique to avoid any relevant morbidity.", "The right triangular ligament and the retroperitoneal liver attachments are cauterized and divided to allow liver retraction and expose the upper limits of the tumor. Liver mobilization is necessary especially in large tumors. After dividing the retroperitoneum, the inferior vena cava (IVC) is identified and dissected from the tumor. The periadrenal fat is gently pushed upwards with endo-peanuts. The adrenal vein is subsequently identified, dissected, double-clipped, and divided. The inferior and superior adrenal vessels are cauterized or clipped. Ultrasonic scissors are occasionally used after ligation of the adrenal vein.", "The left colonic flexure is always mobilized in large tumors and the left upper renal pole exposed. The splenic attachments are cauterized and divided, and the tail of the pancreas identified. The spleen is further mobilized until the stomach is visualized. Gerota's fascia is then opened, the adrenal gland identified, and the adrenal vein dissected, double clipped, and divided. The renal vein is occasionally identified prior to adrenal vein clipping. The upper adrenal vessels are either cauterized or clipped. Ultrasonic scissors were often used after the division of the main adrenal vein.\nThe specimen was placed in a bag and extracted through an extension of the incision done for the Hasson technique. The length of the extension depends on the tumor size, and the tumor must be removed from the abdomen carefully to be kept intact for histology examination." ]
[ null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Technical Aspects of Laparoscopic Adrenalectomy", "Right Adrenalectomy", "Left Adrenalectomy", "RESULTS", "DISCUSSION", "CONCLUSION" ]
[ "The first laparoscopic adrenalectomy was performed by Michel Gagner in 1992.1 Since then, it has become the standard surgical approach for all benign adrenal tumors. Compared with traditional open resection, laparoscopic adrenalectomy is associated with less postoperative pain, shorter hospital stay and recovery time, and better patient satisfaction rates. These clear advantages of laparoscopic adrenalectomy did not encourage any prospective randomized controlled trials comparing the new technique with the classical “open,” either transabdominal or retroperitoneal technique. However, the laparoscopic approach has not been widely accepted as appropriate for the resection of larger adrenal tumors.2 A tumor size of 6cm and later 8cm has been considered as the upper limit for laparoscopic adrenalectomy.3 The risk of malignancy as well as technical difficulties were the main concerns in applying the laparoscopic approach in large adrenal tumors.4 In this article, we highlight the technical aspects and results of laparoscopic surgery for adrenal tumors >8cm to determine the feasibility and safety of the procedure.", "From May 1997 to December 2008, 166 patients with adrenal tumors were hospitalized in our Department, and 174 adrenalectomies were performed. Sixty-five patients were men and 101 women, aged from 16 to 80. Nine patients underwent either synchronous or metachronous bilateral adrenalectomy. Four of these subjects had Cushing's disease, one had bilateral micronodular pigmental hyperplasia, one patient had MEN IIA syndrome, one metachronous solitary metastasis from colorectal cancer, one bilateral adenoma, and one adrenocortical carcinoma with contralateral large adenoma. The most common adrenal masses were adenomas, pheochromocytomas, Cushing's syndrome, aldosteronomas, and malignant tumors (Table 1).\nIndications for Adrenalectomy\nIn 15 patients, adrenal tumor size, as measured on the uncut specimen in the Pathology Department, was ≥8cm (Table 2). All patients underwent laparoscopic adrenalectomy if their computerized tomography (CT) or magnetic resonance imaging (MRI) showed no invasion of periadrenal tissues or organs, and the adrenal tumor was <12cm to 13cm. Endocrinological evaluation and complete adrenal dynamic testing were performed to determine whether the tumor was functional. In all patients with pheochromocytoma alpha-adrenergic blockade was administered at least 10 days prior to surgery.\nTumors >8 cm Resected Laparoscopically\n[SUBTITLE] Technical Aspects of Laparoscopic Adrenalectomy [SUBSECTION] We prefer the transperitoneal lateral decubitus approach, as the best for maximal exposure of the gland and adjacent organs and vessels. On the right side, we use three 10-mm trocars and one 5-mm trocar. On the left side, we use two 10-mm trocars and two 5-mm trocars. We prefer to create pneumoperitoneum with the Hasson technique to avoid any relevant morbidity.\nWe prefer the transperitoneal lateral decubitus approach, as the best for maximal exposure of the gland and adjacent organs and vessels. On the right side, we use three 10-mm trocars and one 5-mm trocar. On the left side, we use two 10-mm trocars and two 5-mm trocars. We prefer to create pneumoperitoneum with the Hasson technique to avoid any relevant morbidity.\n[SUBTITLE] Right Adrenalectomy [SUBSECTION] The right triangular ligament and the retroperitoneal liver attachments are cauterized and divided to allow liver retraction and expose the upper limits of the tumor. 
Liver mobilization is necessary especially in large tumors. After dividing the retroperitoneum, the inferior vena cava (IVC) is identified and dissected from the tumor. The periadrenal fat is gently pushed upwards with endo-peanuts. The adrenal vein is subsequently identified, dissected, double-clipped, and divided. The inferior and superior adrenal vessels are cauterized or clipped. Ultrasonic scissors are occasionally used after ligation of the adrenal vein.\nThe right triangular ligament and the retroperitoneal liver attachments are cauterized and divided to allow liver retraction and expose the upper limits of the tumor. Liver mobilization is necessary especially in large tumors. After dividing the retroperitoneum, the inferior vena cava (IVC) is identified and dissected from the tumor. The periadrenal fat is gently pushed upwards with endo-peanuts. The adrenal vein is subsequently identified, dissected, double-clipped, and divided. The inferior and superior adrenal vessels are cauterized or clipped. Ultrasonic scissors are occasionally used after ligation of the adrenal vein.\n[SUBTITLE] Left Adrenalectomy [SUBSECTION] The left colonic flexure is always mobilized in large tumors and the left upper renal pole exposed. The splenic attachments are cauterized and divided, and the tail of the pancreas identified. The spleen is further mobilized until the stomach is visualized. Gerota's fascia is then opened, the adrenal gland identified, and the adrenal vein dissected, double clipped, and divided. The renal vein is occasionally identified prior to adrenal vein clipping. The upper adrenal vessels are either cauterized or clipped. Ultrasonic scissors were often used after the division of the main adrenal vein.\nThe specimen was placed in a bag and extracted through an extension of the incision done for the Hasson technique. The length of the extension depends on the tumor size, and the tumor must be removed from the abdomen carefully to be kept intact for histology examination.\nThe left colonic flexure is always mobilized in large tumors and the left upper renal pole exposed. The splenic attachments are cauterized and divided, and the tail of the pancreas identified. The spleen is further mobilized until the stomach is visualized. Gerota's fascia is then opened, the adrenal gland identified, and the adrenal vein dissected, double clipped, and divided. The renal vein is occasionally identified prior to adrenal vein clipping. The upper adrenal vessels are either cauterized or clipped. Ultrasonic scissors were often used after the division of the main adrenal vein.\nThe specimen was placed in a bag and extracted through an extension of the incision done for the Hasson technique. The length of the extension depends on the tumor size, and the tumor must be removed from the abdomen carefully to be kept intact for histology examination.", "We prefer the transperitoneal lateral decubitus approach, as the best for maximal exposure of the gland and adjacent organs and vessels. On the right side, we use three 10-mm trocars and one 5-mm trocar. On the left side, we use two 10-mm trocars and two 5-mm trocars. We prefer to create pneumoperitoneum with the Hasson technique to avoid any relevant morbidity.", "The right triangular ligament and the retroperitoneal liver attachments are cauterized and divided to allow liver retraction and expose the upper limits of the tumor. Liver mobilization is necessary especially in large tumors. 
After dividing the retroperitoneum, the inferior vena cava (IVC) is identified and dissected from the tumor. The periadrenal fat is gently pushed upwards with endo-peanuts. The adrenal vein is subsequently identified, dissected, double-clipped, and divided. The inferior and superior adrenal vessels are cauterized or clipped. Ultrasonic scissors are occasionally used after ligation of the adrenal vein.", "The left colonic flexure is always mobilized in large tumors and the left upper renal pole exposed. The splenic attachments are cauterized and divided, and the tail of the pancreas identified. The spleen is further mobilized until the stomach is visualized. Gerota's fascia is then opened, the adrenal gland identified, and the adrenal vein dissected, double clipped, and divided. The renal vein is occasionally identified prior to adrenal vein clipping. The upper adrenal vessels are either cauterized or clipped. Ultrasonic scissors were often used after the division of the main adrenal vein.\nThe specimen was placed in a bag and extracted through an extension of the incision done for the Hasson technique. The length of the extension depends on the tumor size, and the tumor must be removed from the abdomen carefully to be kept intact for histology examination.", "In 166 patients, 135 laparoscopic procedures were completed successfully, whereas in 14 patients the laparoscopic procedure was converted to open, and 17 patients were treated with the open approach from the start. No conversions were necessary in the group of patients with tumors >8cm.\nOperative time for laparoscopic adrenalectomies ranged from 65 minutes to 240 minutes. In the large adrenal tumor group, operative time for laparoscopic resection ranged from 150 minutes to 240 minutes. The postoperative hospital stay for laparoscopic adrenalectomy ranged from 1 day to 2 days (mean, 1.5) and from 5 days to 20 days for patients undergoing the open or converted procedure. The mean postoperative stay was 2 days for the group with large tumors resected by laparoscopy. There was minimal blood loss in all patients with laparoscopic resection for tumors >8cm and no need for transfusion.\nMortality and major morbidity did not differ in patients with large tumors when compared with patients with smaller tumors.\nEarly in our experience, in a patient with morbid obesity, Cushing's syndrome and bilateral macronodular adrenal hyperplasia (adrenal size 2.5cm to 4.5cm), the right laparoscopic adrenalectomy was uneventful, whereas the left laparoscopic adrenalectomy, converted to open, was complicated by a low output pancreatic fistula, treated conservatively with success.\nOne patient with pheochromocytoma developed pulmonary embolism 3 hours after the procedure and succumbed in the intensive care unit (ICU) one month later. The patient initially underwent a laparoscopic approach for an 8-cm tumor, which was converted to open due to difficulty in mobilization and minor but troublesome bleeding obscuring the operative field.\nA 70-year-old patient with Conn's syndrome and a 2.5-cm adenoma who had a previous medical history of coronary heart disease, anticoagulant treatment not discontinued in a timely manner, and chronic renal failure died from respiratory failure after 2.5 months of hospitalization in the ICU. 
The patient was reoperated on 24 hours postoperatively with diffuse retroperitoneal hemorrhaging that was treated successfully.\nOne patient had a 10cm x 6.5cm adenoma, laparoscopically resected, suspicious for malignancy on imaging and potentially malignant on histological investigation, that recurred after 5 years and hence the patient underwent open resection including the upper pole of the kidney.\nAt a mean follow-up interval of 96 months after laparoscopic adrenalectomy (range, 8 to 150), resolution of hormonal activity and no evidence of benign tumor recurrence were documented irrespective of tumor size.", "Laparoscopic adrenalectomy has become the gold standard in management of most adrenal masses.5,6 In fact, over the last 2 decades, retrospective comparison studies have illustrated the superiority of the laparoscopic approach over the conventional open procedure for the removal of benign functioning and nonfunctioning tumors of the adrenal gland. Laparoscopic procedures are associated with decreased hospitalization time; less operative blood loss; less postoperative discomfort, pain and need for analgesics; faster postoperative recovery; earlier return to everyday activities and diet; and lower overall costs.7–10 Based on these considerations, the indications for this technique have been vastly expanded,11 and laparoscopic adrenalectomy may even be performed, in select cases, on an outpatient basis.12\nIn the early stages, laparoscopic adrenalectomy was advocated only for the removal of small benign adrenal lesions. Despite the success of this procedure in patients with small adrenal tumors, there has been reluctance for several years to use this approach in patients with larger lesions. However, advantages in technology and experience gained in large institutions have extended the indications of laparoscopic adrenalectomy to include resection of larger adrenal tumors.13 Several authors support the laparoscopic approach in lesions <6cm, whereas others have performed laparoscopic adrenalectomy of tumors up to 15cm without discouraging morbidity rates.14–16 Indeed, several studies show that laparoscopic resection of large lesions can be safely performed regardless of the tumor size.17–20 Extensive experience in advanced laparoscopic techniques as well as in open adrenal surgery are mandatory to manipulate and laparoscopically excise large tumors.\nDespite the great experience gained in laparoscopic adrenalectomy, controversy remains in the management of adrenal tumors with high suspicion or evidence of malignancy.21\nThere are still concerns regarding the ability of the laparoscopic approach to totally remove primary malignant lesions that are supported by some cases of local recurrence and peritoneal tumor dissemination following laparoscopic approaches for primary malignancies.22–25 Recurrence may be due to incomplete resection or capsular disruption of the tumor during manipulation of the adrenal mass. However, several studies have clearly demonstrated that long-time survival rates with minimally invasive techniques are similar to these in the open approach, whereas there is a significant improvement in quality of life in the postoperative period.26–28\nThe adrenal gland is also a site of metastatic spread for many tumors, mainly because of the rich sinusoidal blood supply. 
Large metastatic lesions are usually confined within the adrenal gland on presentation, thus representing an ideal target for laparoscopic excision.29–35 Laparoscopic surgery for metastatic adrenal tumors up to 10cm is definitely a feasible, indicated technique.35,36\nThe interpretation of radiologic characteristics is a cornerstone of preoperative assessment of large masses, because open surgery remains the preferred procedure when malignancy is suspected. Tumor size is a good index but cannot be used as an absolute predictor of malignancy.37 It has been estimated that the risk for cancer in adrenal tumors >6cm is 1 in every 60 adrenalectomies performed, ie, 1.67%.38 On the other hand, 13.5% of adrenocortical carcinomas were diagnosed in patients with adrenal tumors <5cm.39 Moreover, computed tomography may be associated with approximately a 40% underestimation of adrenal tumor size compared with the actual size determined in the histological examination.40 Despite the improvement in imaging techniques, they lack enough accuracy to exclude primary malignancy. An initial laparoscopic approach can be used to establish a diagnosis, and conversion to the open technique is mandatory if curative resection cannot be performed. The sole widely accepted absolute contraindication for minimally invasive techniques in adrenal lesions is the presence of large primary carcinomas with or without local invasion of nearby structures and/or metastasis to periaortic lymph nodes.41 Large but well-encapsulated metastatic adrenal masses without evidence of local invasion can be removed laparoscopically, whereas giant benign tumors or tumors >12cm to 14cm are not an indication for the laparoscopic technique.42", "Laparoscopic adrenalectomy can be considered the treatment of choice for all benign adrenal tumors up to 12cm to 14cm in size. Morbidity, mortality, and hospital stay is similar, irrespective of tumor size, but experience in both laparoscopic and adrenal surgery is necessary.\nLarge tumors suspected of being a primary malignancy based on imaging characteristics should be approached with the open technique from the start." ]
[ null, "materials|methods", null, null, null, "results", "discussion", "conclusions" ]
[ "Large adrenal tumors", "Laparoscopic adrenalectomy", "Adrenalectomy" ]
Laparoscopic repair of a posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.
21333198
Posttraumatic diaphragmatic hernias (PDH) are serious complications of blunt and penetrating abdominal or thoracic trauma. Traditional thoracic or abdominal operations are usually performed in these cases.
BACKGROUND
We present 2 cases of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction. Both cases were successfully treated with laparoscopy.
METHODS
We found that laparoscopy is a safe, successful, and gentle procedure not only for diagnosis but also for treatment of complicated PDH. Strangulation and colon obstruction were not contraindications to performing laparoscopic procedures. The postoperative course was short and uneventful, and long-term follow-up (range, 12 to 30 months) was unremarkable. We expect the same good long-term results after laparoscopic repair as after conventional open surgery.
RESULTS
We recommend the use of a minimally invasive approach to treat posttraumatic diaphragmatic hernia complicated by strangulation and colon obstruction in hemodynamically stable patients.
CONCLUSION
[ "Adult", "Colonic Diseases", "Follow-Up Studies", "Hernia, Diaphragmatic, Traumatic", "Humans", "Intestinal Obstruction", "Laparoscopy", "Male", "Tomography, X-Ray Computed" ]
3041041
null
null
null
null
null
null
CONCLUSION
Our experience shows that laparoscopy is a safe, effective, minimally invasive method of treatment for PDH complicated by strangulation and colon obstruction. Strangulation and colon obstruction are not contraindications to the use of laparoscopic techniques in the treatment of PDH. Laparoscopic repair of complicated PDH is technically challenging and time consuming. However, we suppose that surgeons with sufficient experience in laparoscopy can use a minimally invasive approach in these cases for hemodynamically stable patients.
[ "INTRODUCTION", "CASE REPORTS", "Case #1", "Case #2" ]
[ "Posttraumatic diaphragmatic hernia (PDH) is a rare disease treated traditionally by using open thoracic or abdominal operations.1–3 Only a few studies have reported the use of the laparoscopic technique for PDH.2,4–6 However, this procedure has not been described for the treatment of PDH complicated by strangulation and bowel obstruction. Herein, we report on 2 cases of successful laparoscopic repair of posttraumatic left-side diaphragmatic hernia complicated by strangulation and colon obstruction.", "[SUBTITLE] Case #1 [SUBSECTION] A 40-year-old man was admitted to our hospital with complaints of cramps around the epigastric area and throughout the abdomen, nausea, a feeling of bloating, and constipation. The patient had a traffic accident 12 years earlier with fracture of the left 8 to 10 ribs and hemothorax that was resolved by pleural draining. Computed tomography scan (CT) showed migration of the large intestine and greater omentum into the left hemithorax and acute colon obstruction (Figure 1). On the same day, laparoscopic surgery was performed. A 3-trocar technique was used.\nComputed tomography scan of the chest and abdominal cavity in a case of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nDiagnostic laparoscopy had shown an insignificant amount of serous liquid in the lateral compartments of the peritoneal cavity and pelvis, dilated ileum, and the right hemicolon. Posttraumatic hernia with bowel obstruction and strangulated greater omentum and left transverse colonic segment, surrounded by cicatricial adhesions, was revealed in the left dome of the diaphragm (Figure 2). The transverse colonic segment and part of the greater omentum were dislocated into the left thoracic cavity. The following stage was laparoscopic limited to the zone of operative intervention. The strangle ring was dissected in a radial direction, and released colon and omentum were pulled down into the abdominal cavity. The colon was freed meticulously from adhesions and strangulation to avoid bowel wall injury. Because pneumothorax is the natural stage of this kind of surgery, pneumoperitoneum pressure was lowered to between 7mm Hg and 8mm Hg before the restrained organs was freed. A 25-mm x 30-mm rupture of the left diaphragm dome surrounded by scar tissue was found (Figure 2). The diaphragm defect was cleared from a cicatricial tissue and closed with separate intracorporeal nonabsorbable suture in 2 rows (Figure 3). The released part of the colon was examined for viability, with intracorporeal suturing of the strangulated zone, and then resection of the greater omentum was performed. The surgery was completed by draining the left subdiaphragmatic space and left pleural cavity. The operation time was 210 minutes, and estimated blood loss was 150mL. Drains were removed on the next day after surgery. The postoperative recovery was uneventful. Spontaneous bowel movements occurred on the second day after surgery. The patient returned to work on the eighth postoperative day. X-ray controls 2 months and 30 months after surgery showed no defects in the diaphragm.\nDiagnostic laparoscopic procedure of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nSuturing of the diaphragm defect.\nA 40-year-old man was admitted to our hospital with complaints of cramps around the epigastric area and throughout the abdomen, nausea, a feeling of bloating, and constipation. 
The patient had a traffic accident 12 years earlier with fracture of the left 8 to 10 ribs and hemothorax that was resolved by pleural draining. Computed tomography scan (CT) showed migration of the large intestine and greater omentum into the left hemithorax and acute colon obstruction (Figure 1). On the same day, laparoscopic surgery was performed. A 3-trocar technique was used.\nComputed tomography scan of the chest and abdominal cavity in a case of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nDiagnostic laparoscopy had shown an insignificant amount of serous liquid in the lateral compartments of the peritoneal cavity and pelvis, dilated ileum, and the right hemicolon. Posttraumatic hernia with bowel obstruction and strangulated greater omentum and left transverse colonic segment, surrounded by cicatricial adhesions, was revealed in the left dome of the diaphragm (Figure 2). The transverse colonic segment and part of the greater omentum were dislocated into the left thoracic cavity. The following stage was laparoscopic limited to the zone of operative intervention. The strangle ring was dissected in a radial direction, and released colon and omentum were pulled down into the abdominal cavity. The colon was freed meticulously from adhesions and strangulation to avoid bowel wall injury. Because pneumothorax is the natural stage of this kind of surgery, pneumoperitoneum pressure was lowered to between 7mm Hg and 8mm Hg before the restrained organs was freed. A 25-mm x 30-mm rupture of the left diaphragm dome surrounded by scar tissue was found (Figure 2). The diaphragm defect was cleared from a cicatricial tissue and closed with separate intracorporeal nonabsorbable suture in 2 rows (Figure 3). The released part of the colon was examined for viability, with intracorporeal suturing of the strangulated zone, and then resection of the greater omentum was performed. The surgery was completed by draining the left subdiaphragmatic space and left pleural cavity. The operation time was 210 minutes, and estimated blood loss was 150mL. Drains were removed on the next day after surgery. The postoperative recovery was uneventful. Spontaneous bowel movements occurred on the second day after surgery. The patient returned to work on the eighth postoperative day. X-ray controls 2 months and 30 months after surgery showed no defects in the diaphragm.\nDiagnostic laparoscopic procedure of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nSuturing of the diaphragm defect.\n[SUBTITLE] Case #2 [SUBSECTION] A 46-year-old man was delivered to our hospital complaining on periodic spasmodic epigastric pain accompanied by constipation, bloating, and nausea that resolved after conservative therapy. The patient had undergone surgery for a stab wound to the chest with injury to the heart 5 months before admission.\nThe frontal and lateral chest X-rays revealed the transverse colonic segment dislocated into the left thoracic cavity above the diaphragm (Figure 4). The left-sided posttraumatic diaphragmatic hernia was recognized. Surgery was proposed, but the patient refused it.\nChest x-ray of the second patient with posttraumatic left-sided diaphragmatic hernia.\nHe was rehospitalized 3 weeks later with complaints of abdominal cramps, nausea, vomiting, absence of stool and gas for 2 days. 
X-ray examination demonstrated a posttraumatic left-sided diaphragmatic hernia complicated by strangulation and acute colon obstruction (Figure 5). Conservative therapy for 4 hours was ineffective, and laparoscopic surgery was performed. The approaches, surgical findings, and laparoscopic technique were about the same as in the previous clinical case. The operation time was 90 minutes, and estimated blood loss was 100mL. Drains were removed on the next day after surgery. Spontaneous passage of stool occurred on the second postoperative day. The postoperative recovery was uneventful. The patient was discharged on the fourth day and returned to work on postoperative day 6. There were no defects in the diaphragm on X-ray controls 2, 7, and 12 months after treatment.\nX-ray contrast abdominal cavity examination of the second patient with posttraumatic left-side diaphragmatic hernia complicated by strangulation and colon obstruction.", "A 40-year-old man was admitted to our hospital with complaints of cramps around the epigastric area and throughout the abdomen, nausea, a feeling of bloating, and constipation. The patient had a traffic accident 12 years earlier with fracture of the left 8 to 10 ribs and hemothorax that was resolved by pleural draining. Computed tomography scan (CT) showed migration of the large intestine and greater omentum into the left hemithorax and acute colon obstruction (Figure 1). On the same day, laparoscopic surgery was performed. 
A 3-trocar technique was used.\nComputed tomography scan of the chest and abdominal cavity in a case of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nDiagnostic laparoscopy had shown an insignificant amount of serous liquid in the lateral compartments of the peritoneal cavity and pelvis, dilated ileum, and the right hemicolon. Posttraumatic hernia with bowel obstruction and strangulated greater omentum and left transverse colonic segment, surrounded by cicatricial adhesions, was revealed in the left dome of the diaphragm (Figure 2). The transverse colonic segment and part of the greater omentum were dislocated into the left thoracic cavity. The following stage was laparoscopic limited to the zone of operative intervention. The strangle ring was dissected in a radial direction, and released colon and omentum were pulled down into the abdominal cavity. The colon was freed meticulously from adhesions and strangulation to avoid bowel wall injury. Because pneumothorax is the natural stage of this kind of surgery, pneumoperitoneum pressure was lowered to between 7mm Hg and 8mm Hg before the restrained organs was freed. A 25-mm x 30-mm rupture of the left diaphragm dome surrounded by scar tissue was found (Figure 2). The diaphragm defect was cleared from a cicatricial tissue and closed with separate intracorporeal nonabsorbable suture in 2 rows (Figure 3). The released part of the colon was examined for viability, with intracorporeal suturing of the strangulated zone, and then resection of the greater omentum was performed. The surgery was completed by draining the left subdiaphragmatic space and left pleural cavity. The operation time was 210 minutes, and estimated blood loss was 150mL. Drains were removed on the next day after surgery. The postoperative recovery was uneventful. Spontaneous bowel movements occurred on the second day after surgery. The patient returned to work on the eighth postoperative day. X-ray controls 2 months and 30 months after surgery showed no defects in the diaphragm.\nDiagnostic laparoscopic procedure of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nSuturing of the diaphragm defect.", "A 46-year-old man was delivered to our hospital complaining on periodic spasmodic epigastric pain accompanied by constipation, bloating, and nausea that resolved after conservative therapy. The patient had undergone surgery for a stab wound to the chest with injury to the heart 5 months before admission.\nThe frontal and lateral chest X-rays revealed the transverse colonic segment dislocated into the left thoracic cavity above the diaphragm (Figure 4). The left-sided posttraumatic diaphragmatic hernia was recognized. Surgery was proposed, but the patient refused it.\nChest x-ray of the second patient with posttraumatic left-sided diaphragmatic hernia.\nHe was rehospitalized 3 weeks later with complaints of abdominal cramps, nausea, vomiting, absence of stool and gas for 2 days. X-ray examination demonstrated a posttraumatic left-sided diaphragmatic hernia complicated by strangulation and acute colon obstruction (Figure 5). Conservative therapy for 4 hours was ineffective, and laparoscopic surgery was performed. The approaches, surgical findings, and laparoscopic technique were about the same as in the previous clinical case. The operation time was 90 minutes, and estimated blood loss was 100mL. Drains were removed on the next day after surgery. 
Spontaneous passage of stool occurred on the second postoperative day. The postoperative recovery was uneventful. The patient was discharged on the fourth day and returned to work on postoperative day 6. There were no defects in the diaphragm on X-ray controls 2, 7, and 12 months after treatment.\nX-ray contrast abdominal cavity examination of the second patient with posttraumatic left-side diaphragmatic hernia complicated by strangulation and colon obstruction." ]
[ null, null, null, null ]
[ "INTRODUCTION", "CASE REPORTS", "Case #1", "Case #2", "DISCUSSION", "CONCLUSION" ]
[ "Posttraumatic diaphragmatic hernia (PDH) is a rare disease treated traditionally by using open thoracic or abdominal operations.1–3 Only a few studies have reported the use of the laparoscopic technique for PDH.2,4–6 However, this procedure has not been described for the treatment of PDH complicated by strangulation and bowel obstruction. Herein, we report on 2 cases of successful laparoscopic repair of posttraumatic left-side diaphragmatic hernia complicated by strangulation and colon obstruction.", "[SUBTITLE] Case #1 [SUBSECTION] A 40-year-old man was admitted to our hospital with complaints of cramps around the epigastric area and throughout the abdomen, nausea, a feeling of bloating, and constipation. The patient had a traffic accident 12 years earlier with fracture of the left 8 to 10 ribs and hemothorax that was resolved by pleural draining. Computed tomography scan (CT) showed migration of the large intestine and greater omentum into the left hemithorax and acute colon obstruction (Figure 1). On the same day, laparoscopic surgery was performed. A 3-trocar technique was used.\nComputed tomography scan of the chest and abdominal cavity in a case of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nDiagnostic laparoscopy had shown an insignificant amount of serous liquid in the lateral compartments of the peritoneal cavity and pelvis, dilated ileum, and the right hemicolon. Posttraumatic hernia with bowel obstruction and strangulated greater omentum and left transverse colonic segment, surrounded by cicatricial adhesions, was revealed in the left dome of the diaphragm (Figure 2). The transverse colonic segment and part of the greater omentum were dislocated into the left thoracic cavity. The following stage was laparoscopic limited to the zone of operative intervention. The strangle ring was dissected in a radial direction, and released colon and omentum were pulled down into the abdominal cavity. The colon was freed meticulously from adhesions and strangulation to avoid bowel wall injury. Because pneumothorax is the natural stage of this kind of surgery, pneumoperitoneum pressure was lowered to between 7mm Hg and 8mm Hg before the restrained organs was freed. A 25-mm x 30-mm rupture of the left diaphragm dome surrounded by scar tissue was found (Figure 2). The diaphragm defect was cleared from a cicatricial tissue and closed with separate intracorporeal nonabsorbable suture in 2 rows (Figure 3). The released part of the colon was examined for viability, with intracorporeal suturing of the strangulated zone, and then resection of the greater omentum was performed. The surgery was completed by draining the left subdiaphragmatic space and left pleural cavity. The operation time was 210 minutes, and estimated blood loss was 150mL. Drains were removed on the next day after surgery. The postoperative recovery was uneventful. Spontaneous bowel movements occurred on the second day after surgery. The patient returned to work on the eighth postoperative day. X-ray controls 2 months and 30 months after surgery showed no defects in the diaphragm.\nDiagnostic laparoscopic procedure of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nSuturing of the diaphragm defect.\nA 40-year-old man was admitted to our hospital with complaints of cramps around the epigastric area and throughout the abdomen, nausea, a feeling of bloating, and constipation. 
The patient had a traffic accident 12 years earlier with fracture of the left 8 to 10 ribs and hemothorax that was resolved by pleural draining. Computed tomography scan (CT) showed migration of the large intestine and greater omentum into the left hemithorax and acute colon obstruction (Figure 1). On the same day, laparoscopic surgery was performed. A 3-trocar technique was used.\nComputed tomography scan of the chest and abdominal cavity in a case of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nDiagnostic laparoscopy had shown an insignificant amount of serous liquid in the lateral compartments of the peritoneal cavity and pelvis, dilated ileum, and the right hemicolon. Posttraumatic hernia with bowel obstruction and strangulated greater omentum and left transverse colonic segment, surrounded by cicatricial adhesions, was revealed in the left dome of the diaphragm (Figure 2). The transverse colonic segment and part of the greater omentum were dislocated into the left thoracic cavity. The following stage was laparoscopic limited to the zone of operative intervention. The strangle ring was dissected in a radial direction, and released colon and omentum were pulled down into the abdominal cavity. The colon was freed meticulously from adhesions and strangulation to avoid bowel wall injury. Because pneumothorax is the natural stage of this kind of surgery, pneumoperitoneum pressure was lowered to between 7mm Hg and 8mm Hg before the restrained organs was freed. A 25-mm x 30-mm rupture of the left diaphragm dome surrounded by scar tissue was found (Figure 2). The diaphragm defect was cleared from a cicatricial tissue and closed with separate intracorporeal nonabsorbable suture in 2 rows (Figure 3). The released part of the colon was examined for viability, with intracorporeal suturing of the strangulated zone, and then resection of the greater omentum was performed. The surgery was completed by draining the left subdiaphragmatic space and left pleural cavity. The operation time was 210 minutes, and estimated blood loss was 150mL. Drains were removed on the next day after surgery. The postoperative recovery was uneventful. Spontaneous bowel movements occurred on the second day after surgery. The patient returned to work on the eighth postoperative day. X-ray controls 2 months and 30 months after surgery showed no defects in the diaphragm.\nDiagnostic laparoscopic procedure of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nSuturing of the diaphragm defect.\n[SUBTITLE] Case #2 [SUBSECTION] A 46-year-old man was delivered to our hospital complaining on periodic spasmodic epigastric pain accompanied by constipation, bloating, and nausea that resolved after conservative therapy. The patient had undergone surgery for a stab wound to the chest with injury to the heart 5 months before admission.\nThe frontal and lateral chest X-rays revealed the transverse colonic segment dislocated into the left thoracic cavity above the diaphragm (Figure 4). The left-sided posttraumatic diaphragmatic hernia was recognized. Surgery was proposed, but the patient refused it.\nChest x-ray of the second patient with posttraumatic left-sided diaphragmatic hernia.\nHe was rehospitalized 3 weeks later with complaints of abdominal cramps, nausea, vomiting, absence of stool and gas for 2 days. 
X-ray examination demonstrated a posttraumatic left-sided diaphragmatic hernia complicated by strangulation and acute colon obstruction (Figure 5). Conservative therapy for 4 hours was ineffective, and laparoscopic surgery was performed. The approaches, surgical findings, and laparoscopic technique were about the same as in the previous clinical case. The operation time was 90 minutes, and estimated blood loss was 100mL. Drains were removed on the next day after surgery. Spontaneous passage of stool occurred on the second postoperative day. The postoperative recovery was uneventful. The patient was discharged on the fourth day and returned to work on postoperative day 6. There were no defects in the diaphragm on X-ray controls 2, 7, and 12 months after treatment.\nX-ray contrast abdominal cavity examination of the second patient with posttraumatic left-side diaphragmatic hernia complicated by strangulation and colon obstruction.", "A 40-year-old man was admitted to our hospital with complaints of cramps around the epigastric area and throughout the abdomen, nausea, a feeling of bloating, and constipation. The patient had a traffic accident 12 years earlier with fracture of the left 8 to 10 ribs and hemothorax that was resolved by pleural draining. Computed tomography scan (CT) showed migration of the large intestine and greater omentum into the left hemithorax and acute colon obstruction (Figure 1). On the same day, laparoscopic surgery was performed. 
A 3-trocar technique was used.\nComputed tomography scan of the chest and abdominal cavity in a case of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nDiagnostic laparoscopy had shown an insignificant amount of serous liquid in the lateral compartments of the peritoneal cavity and pelvis, dilated ileum, and the right hemicolon. Posttraumatic hernia with bowel obstruction and strangulated greater omentum and left transverse colonic segment, surrounded by cicatricial adhesions, was revealed in the left dome of the diaphragm (Figure 2). The transverse colonic segment and part of the greater omentum were dislocated into the left thoracic cavity. The following stage was laparoscopic limited to the zone of operative intervention. The strangle ring was dissected in a radial direction, and released colon and omentum were pulled down into the abdominal cavity. The colon was freed meticulously from adhesions and strangulation to avoid bowel wall injury. Because pneumothorax is the natural stage of this kind of surgery, pneumoperitoneum pressure was lowered to between 7mm Hg and 8mm Hg before the restrained organs was freed. A 25-mm x 30-mm rupture of the left diaphragm dome surrounded by scar tissue was found (Figure 2). The diaphragm defect was cleared from a cicatricial tissue and closed with separate intracorporeal nonabsorbable suture in 2 rows (Figure 3). The released part of the colon was examined for viability, with intracorporeal suturing of the strangulated zone, and then resection of the greater omentum was performed. The surgery was completed by draining the left subdiaphragmatic space and left pleural cavity. The operation time was 210 minutes, and estimated blood loss was 150mL. Drains were removed on the next day after surgery. The postoperative recovery was uneventful. Spontaneous bowel movements occurred on the second day after surgery. The patient returned to work on the eighth postoperative day. X-ray controls 2 months and 30 months after surgery showed no defects in the diaphragm.\nDiagnostic laparoscopic procedure of posttraumatic left-sided diaphragmatic hernia complicated by strangulation and colon obstruction.\nSuturing of the diaphragm defect.", "A 46-year-old man was delivered to our hospital complaining on periodic spasmodic epigastric pain accompanied by constipation, bloating, and nausea that resolved after conservative therapy. The patient had undergone surgery for a stab wound to the chest with injury to the heart 5 months before admission.\nThe frontal and lateral chest X-rays revealed the transverse colonic segment dislocated into the left thoracic cavity above the diaphragm (Figure 4). The left-sided posttraumatic diaphragmatic hernia was recognized. Surgery was proposed, but the patient refused it.\nChest x-ray of the second patient with posttraumatic left-sided diaphragmatic hernia.\nHe was rehospitalized 3 weeks later with complaints of abdominal cramps, nausea, vomiting, absence of stool and gas for 2 days. X-ray examination demonstrated a posttraumatic left-sided diaphragmatic hernia complicated by strangulation and acute colon obstruction (Figure 5). Conservative therapy for 4 hours was ineffective, and laparoscopic surgery was performed. The approaches, surgical findings, and laparoscopic technique were about the same as in the previous clinical case. The operation time was 90 minutes, and estimated blood loss was 100mL. Drains were removed on the next day after surgery. 
Spontaneous passage of stool occurred on the second postoperative day. The postoperative recovery was uneventful. The patient was discharged on the fourth day and returned to work on postoperative day 6. There were no defects in the diaphragm on X-ray controls 2, 7, and 12 months after treatment.\nX-ray contrast abdominal cavity examination of the second patient with posttraumatic left-side diaphragmatic hernia complicated by strangulation and colon obstruction.", "PDH is a displacement of internal abdominal organs to a chest cavity through a pathological aperture of a diaphragm due to the trauma.1,3,6 Because PDH does not always have a hernial sack, some authors use the term “false hernia.” However, the presence or absence of a hernia sack has only a little impact on the clinical course and medical tactics, and the term “posttraumatic diaphragmatic hernia” is generally accepted in the medical literature.1,3\nDiagnosis of PDH often might be delayed, especially if the existence of diaphragm damage has not been established in the acute period of a trauma.1–3,7 The severe diagnostic problems are caused by development of PDH strangulation and acute bowel occlusion.7 Our experience clearly demonstrates a rare but identical pathology of the diaphragm that led to complications by strangulation and bowel obstruction.\nOperative interventions for diaphragm pathology are considered among the most difficult reconstructive surgeries.1,3 Results depend on the kind of disease, its complications, the intensity of pathological changes in a diaphragm and surrounding organs, the surgical approach, and the extent of operating trauma.1 For this reason, particular attention is paid to PDH repair.1,5,8\nThe evolution of minimally invasive surgery has allowed surgeons to challenge various traditional approaches.6,8 Although the laparoscopic treatment of hiatal hernia is a standard operation in daily surgical practice, the use of laparoscopic techniques for PDH is still rare.2,4–6,9 However, the possibility of laparoscopic surgery for complicated PDH remains disputable.3,4\nAll patients with possible diaphragm damage and recurrence of pulmonary dysfunction or intestinal symptoms, such as obstruction, nausea, and pain, should be investigated for PDH.\nLaparoscopy in addition to X-ray examination and CT scan is the final and most valuable diagnostic procedure for PDH, allowing development of a suitable individual surgical approach. Because pneumothorax is the natural stage of PDH laparoscopic repair, it is recommended that it be reduced to 7mm Hg to 8mm Hg to avoid thoracic organ compression. The surgeon developing laparoscopic procedures for strangulated PDH with bowel obstruction should be ready to convert to a traditional operation at any moment.", "Our experience shows that laparoscopy is a safe, effective, minimally invasive method of treatment for PDH complicated by strangulation and colon obstruction. Strangulation and colon obstruction are not contraindications to the use of laparoscopic techniques in the treatment of PDH. Laparoscopic repair of complicated PDH is technically challenging and time consuming. However, we suppose that surgeons with sufficient experience in laparoscopy can use a minimally invasive approach in these cases for hemodynamically stable patients." ]
[ null, null, null, null, "discussion", "conclusions" ]
[ "Laparoscopy", "Posttraumatic diaphragmatic hernia", "Strangulation", "Colon obstruction" ]
Laparoscopic resection of an undifferentiated pleomorphic splenic sarcoma.
21333202
Splenic tumors are rare. Malignant fibrous histiocytoma (MFH) of the spleen is one of the least common primary splenic tumors. A review of the literature shows that laparoscopic resection has not previously been attempted.
BACKGROUND
We discuss the case of a 76-year-old man with a 7-cm MFH in the spleen and present a review of splenic sarcomas.
METHOD
The patient underwent a successful laparoscopic splenectomy; pathology revealed a rare undifferentiated pleomorphic sarcoma of the spleen. A review of the international literature identified 15 additional cases of primary splenic MFH. Survival was rarely longer than 15 months.
RESULTS
Malignant fibrous histiocytoma of the spleen is an exceedingly rare tumor with a poor prognosis. In experienced hands, laparoscopic splenectomy is a feasible operative choice for primary splenic sarcoma.
CONCLUSION
[ "Aged", "Diagnosis, Differential", "Follow-Up Studies", "Histiocytoma, Malignant Fibrous", "Humans", "Laparoscopy", "Male", "Positron-Emission Tomography", "Sarcoma", "Splenectomy", "Tomography, X-Ray Computed" ]
3041045
null
null
null
null
null
null
CONCLUSION
Primary malignancies of the spleen are rare and are primarily treated with splenectomy. This case report shows that laparoscopic splenectomy is a feasible operative choice for patients with sarcomas, allowing the advantages of laparoscopic surgery and a quicker healing process permitting earlier institution of adjuvant treatment if necessary. The large size and increased risk of bleeding will likely be the greatest challenges for surgeons performing laparoscopic splenectomy for splenic sarcomas.
[ "INTRODUCTION", "CASE REPORT" ]
[ "Laparoscopic splenectomy (LS) is one of the many successful applications of minimally invasive surgical techniques. Since the first laparoscopic splenectomy performed in 1991 by Delaître, this approach has been adopted as the procedure of choice to treat benign splenic pathologies.1 However, data on laparoscopic splenic resection for malignant tumors is scarce because of the rarity of the occurrence of primary malignant tumors in the spleen. These tumors can be classified broadly as lymphoid and nonlymphoid. Non-Hodgkin's lymphoma is the most common primary lymphoid tumor, and angiosarcoma is the most common nonlymphoid malignant neoplasm. The remaining nonlymphoid tumors, such as hemangioendothelioma, malignant fibrous histiocytoma (MFH), fibrosarcoma, and leiomyosarcoma, are exceedingly rare and are only anecdotally reported. This report presents the first case of splenic malignant fibrous histiocytoma treated by LS reported in the international literature.", "A 76-year-old man presented to the Mayo Clinic with a 4-week history of left upper quadrant abdominal and flank pain. He had an intentional weight loss of 30 pounds over the past 18 months. A gastrointestinal review of systems was negative. He had no fever, sweats, or chills. His past medical history was significant for prostate cancer, recurrent kidney stones secondary to cystinuria, hypertension, coronary artery disease, and hyperlipidemia. His past surgical history included multiple stone extraction procedures, including bilateral open stone extractions. He underwent radical retropubic prostatectomy and pelvic lymphadenectomy in 1997.\nOn physical examination, he was well appearing, afebrile, and normotensive. His body mass index was 25kg/m2. His abdomen was soft, nondistended, slightly tender to palpation in the left upper quadrant; no masses or organomegaly were appreciated on deep palpation. Laboratory data showed a white blood cell count of 7,300/L, hemoglobin of 10.6g/dL, and platelets of 241,000/L. His creatinine was 1.4mg/dL.\nAn outside CT scan of the abdomen and pelvis with oral and intravenous contrast showed a 4.8 x 6.1 x 5-cm heterogenous splenic mass that was not present on a CT scan 10 months earlier (Figure 1). The mass had irregular borders and contained enhancing nodular and cystic components. It was reported as suspicious hemangioendothelioma, angiosarcoma, hemangiopericytoma, and other malignant processes, such as metastases or possibly infection. An incidental splenule was noted in the splenic hilum.\nCT scan of the abdomen showing a 4.8-cm AP x 6.1-cm width x 5-cm length nonhomogeneous mass in the spleen. This mass has an irregular outline and contains several enhancing nodules and cystic spaces.\nAn ultrasound-guided biopsy of the splenic mass revealed a high-grade malignant neoplasm, unable to be further classified on cytology. A PET scan showed a centrally hypometabolic and peripherally hypermetabolic lesion in the spleen with a small central hematoma consistent with the biopsy done the day before. (Multiple foci of skeletal malignant type FDG uptake with associated subtle sclerosis were seen on L1, L3, left sacrum left iliac bone, left anterior superior iliac spine, and subtrochanteric left femur (Figure 2). There was circumferential uptake within a couple of loops of small bowel in the left lower quadrant that was considered most likely physiological. A contrast enhanced CT enterography confirmed noninvolvement of the bowel but showed splenic mass enlargement to 7.1 x 5.3cm. 
Upper endoscopy and colonoscopy identified a cecal tubulovillous adenoma without evidence of malignancy. The carcinoembryonic antigen level was 0.6ng/mL (normal), and PSA was <0.10ng/mL.\nPET of the pelvis showing multiple foci of skeletal malignant-type FDG uptake, including left sacrum at the level of the S2 foramen and left iliac bone adjacent to the SI joint.\nLaparoscopic splenectomy, including the splenule, was performed without any intraoperative complications. The patient had an uneventful postoperative course and was discharged on postoperative day 2. The final pathology confirmed an undifferentiated pleomorphic sarcoma, high grade (4 of 4). Postoperatively, a CT-guided biopsy of the left iliac bone lesion was negative for malignancy. The patient declined any treatment that would be based on the presumption of residual disease." ]
[ null, null ]
[ "INTRODUCTION", "CASE REPORT", "DISCUSSION", "CONCLUSION" ]
[ "Laparoscopic splenectomy (LS) is one of the many successful applications of minimally invasive surgical techniques. Since the first laparoscopic splenectomy performed in 1991 by Delaître, this approach has been adopted as the procedure of choice to treat benign splenic pathologies.1 However, data on laparoscopic splenic resection for malignant tumors is scarce because of the rarity of the occurrence of primary malignant tumors in the spleen. These tumors can be classified broadly as lymphoid and nonlymphoid. Non-Hodgkin's lymphoma is the most common primary lymphoid tumor, and angiosarcoma is the most common nonlymphoid malignant neoplasm. The remaining nonlymphoid tumors, such as hemangioendothelioma, malignant fibrous histiocytoma (MFH), fibrosarcoma, and leiomyosarcoma, are exceedingly rare and are only anecdotally reported. This report presents the first case of splenic malignant fibrous histiocytoma treated by LS reported in the international literature.", "A 76-year-old man presented to the Mayo Clinic with a 4-week history of left upper quadrant abdominal and flank pain. He had an intentional weight loss of 30 pounds over the past 18 months. A gastrointestinal review of systems was negative. He had no fever, sweats, or chills. His past medical history was significant for prostate cancer, recurrent kidney stones secondary to cystinuria, hypertension, coronary artery disease, and hyperlipidemia. His past surgical history included multiple stone extraction procedures, including bilateral open stone extractions. He underwent radical retropubic prostatectomy and pelvic lymphadenectomy in 1997.\nOn physical examination, he was well appearing, afebrile, and normotensive. His body mass index was 25kg/m2. His abdomen was soft, nondistended, slightly tender to palpation in the left upper quadrant; no masses or organomegaly were appreciated on deep palpation. Laboratory data showed a white blood cell count of 7,300/L, hemoglobin of 10.6g/dL, and platelets of 241,000/L. His creatinine was 1.4mg/dL.\nAn outside CT scan of the abdomen and pelvis with oral and intravenous contrast showed a 4.8 x 6.1 x 5-cm heterogenous splenic mass that was not present on a CT scan 10 months earlier (Figure 1). The mass had irregular borders and contained enhancing nodular and cystic components. It was reported as suspicious hemangioendothelioma, angiosarcoma, hemangiopericytoma, and other malignant processes, such as metastases or possibly infection. An incidental splenule was noted in the splenic hilum.\nCT scan of the abdomen showing a 4.8-cm AP x 6.1-cm width x 5-cm length nonhomogeneous mass in the spleen. This mass has an irregular outline and contains several enhancing nodules and cystic spaces.\nAn ultrasound-guided biopsy of the splenic mass revealed a high-grade malignant neoplasm, unable to be further classified on cytology. A PET scan showed a centrally hypometabolic and peripherally hypermetabolic lesion in the spleen with a small central hematoma consistent with the biopsy done the day before. (Multiple foci of skeletal malignant type FDG uptake with associated subtle sclerosis were seen on L1, L3, left sacrum left iliac bone, left anterior superior iliac spine, and subtrochanteric left femur (Figure 2). There was circumferential uptake within a couple of loops of small bowel in the left lower quadrant that was considered most likely physiological. A contrast enhanced CT enterography confirmed noninvolvement of the bowel but showed splenic mass enlargement to 7.1 x 5.3cm. 
Upper endoscopy and colonoscopy identified a cecal tubulovillous adenoma without evidence of malignancy. The carcinoembryonic antigen level was 0.6ng/mL (normal), and PSA was <0.10ng/mL.\nPET of the pelvis showing multiple foci of skeletal malignant-type FDG uptake, including left sacrum at the level of the S2 foramen and left iliac bone adjacent to the SI joint.\nLaparoscopic splenectomy, including the splenule, was performed without any intraoperative complications. The patient had an uneventful postoperative course and was discharged on postoperative day 2. The final pathology confirmed an undifferentiated pleomorphic sarcoma, high grade (4 of 4). Postoperatively, a CT-guided biopsy of the left iliac bone lesion was negative for malignancy. The patient declined any treatment that would be based on the presumption of residual disease.", "In 1881, Theodor Billroth performed the first splenectomy for sarcoma in a 43-year-old woman with lymphosarcoma.2 She died 6 months later from recurrent disease. Since that time, splenectomy has remained the preferred treatment for splenic sarcomas. Malignant fibrous histiocytomas are rare and have been classified into 5 subtypes: pleomorphic, inflammatory, myxoid type, giant cell, and angiomatoid.3 They are most commonly found in the extremities. Intraabdominal locations account for approximately 20% of MFH.4–6\nWe have found only 15 cases of splenic MFHs reported in the international literature (Table 1).7–18 Thirteen of the patients were treated with splenectomy, one patient had the tumor found on autopsy, and one patient did not receive any treatment. Patient ages ranged from 11 years to 82 years, with a mean age of 53. The reported survival ranged from 9 days to 18 months. Radiation was given to one of these patients after splenic resection, and chemotherapy was given to one patient. One patient was treated with combined chemotherapy and radiation. The splenic mass ranged from 375g to 1850g (mean, 1136). The tumor size ranged from 2.5cm to 21.5cm (mean, 12.9). To our knowledge, this is the first reported case of a splenic pleomorphic sarcoma treated by laparoscopic splenectomy.\nCases of Splenic Malignant Fibrous Histiocytoma Reported in the International Literature\nLaparoscopic splenectomy has become progressively accepted as an advantageous and safe approach for splenectomy. It has been shown to have reduced morbidity and mortality rates compared with open splenectomy in many published series (morbidity 19% vs. 56%; mortality 2% vs. 18%).19 Most reports include patients with benign diseases and small spleens. LS for malignant disease can be a greater challenge, because of the size of the spleen and the general condition of the patient. Burch et al19 compared the outcomes of LS for benign versus malignant neoplasms. They showed that there was no statistically significant difference between those undergoing LS for benign versus malignant disease in terms of length of stay, complication rate, or mortality, although there were significant differences between the 2 groups in terms of operative time, splenic weight, and the need for an accessory incision for spleen retrieval.\nOne criticism of the laparoscopic approach has been the lack of tactile feedback and perhaps an inability to identify accessory splenic tissue. The incidence of accessory spleen is approximately 15%.20 Failure to detect and ablate accessory splenic tissue may lead to treatment failure in cases of malignancy. 
Studies show that the lack of the ability to palpate does not compromise the surgeon's ability to find and remove accessory splenic tissue when LS is compared to OS.21,22\nPatients with splenic sarcoma may be at risk for operative conversion to an open operation for bleeding. In fact, hemoperitoneum due to splenic rupture is seen in up to 13% to 30% of cases and is often the first manifestation of the disease.23\nMaintaining the integrity of the capsule is advisable for both oncological and hemostatic reasons to minimize the chance of tumor dissemination or bleeding from the parenchyma. It is critical to avoid iatrogenic splenic rupture. Therefore, splenic size and surgeon experience are key determining factors when deciding on an open, a hand-assisted, or a total laparoscopic splenectomy.", "Primary malignancies of the spleen are rare and are primarily treated with splenectomy. This case report shows that laparoscopic splenectomy is a feasible operative choice for patients with sarcomas, allowing the advantages of laparoscopic surgery and a quicker healing process permitting earlier institution of adjuvant treatment if necessary. The large size and increased risk of bleeding will likely be the greatest challenges for surgeons performing laparoscopic splenectomy for splenic sarcomas." ]
[ null, null, "discussion", "conclusions" ]
[ "Undifferentiated pleomorphic sarcoma", "Malignant fibrous histiocytoma", "Laparoscopic splenectomy" ]
Robot-assisted excision of a retroperitoneal mass between the left renal artery and vein.
21333207
Extra-adrenal pheochromocytomas are rare. Minimally invasive techniques have been utilized for incidentally discovered masses with successful results.
BACKGROUND
We present a case of a 64-year-old female with a 3.5-cm mass located between her left renal artery and vein, treated by a 4-port robot-assisted transperitoneal laparoscopic approach.
METHODS
Careful dissection of the tumor away from the renal hilum was accomplished without major vascular injury. A pedicle to the tumor was identified and ligated. The pathology demonstrated a benign pheochromocytoma. To our knowledge, this is the first report of a peri-hilar excision of a pheochromocytoma using this approach.
RESULTS
Extra-adrenal pheochromocytomas are rare and can present in difficult locations. While surgical excision may be challenging, the da Vinci Robot may be used effectively and safely for the treatment of these perihilar masses.
CONCLUSION
[ "Adrenal Gland Neoplasms", "Biopsy", "Diagnosis, Differential", "Female", "Humans", "Laparoscopy", "Middle Aged", "Pheochromocytoma", "Renal Artery", "Renal Veins", "Retroperitoneal Space", "Robotics", "Tomography, X-Ray Computed" ]
3041050
null
null
null
null
null
null
CONCLUSION
To our knowledge, this is the first reported case of robot-assisted laparoscopic excision of a peri-hilar pheochromocytoma. The benefits of the da Vinci Surgical System, including wristed instrumentation and 3-dimensional vision, contributed to the successful completion of this case without renal loss or conversion to open excision.
[ "INTRODUCTION", "CASE REPORT", "Surgical Technique" ]
[ "Extra-adrenal pheochromocytomas are rare tumors found in the general population. When discovered, they are usually within the Organ of Zuckerkandl, a conglomerate of neuroendocrine tissue located along the aorta. We present the management of a 64-year-old female with an extra-adrenal pheochromocytoma found in an unusual location between the left renal artery and vein.", "The patient is a 64-year-old, white female with a recent history of cardiac catheterization who presented with a retroperitoneal mass located between her left renal artery and vein (Figure 1). Her medical history includes mild hypertension controlled with amlodipine. Her recent cardiac stents necessitated daily aspirin and clopidogrel. The mass was diagnosed on an abdominal CT performed one year earlier for unexplained abdominal pain. The well-circumscribed mass was initially 2.0cm in diameter and enhanced with IV contrast administration. There was no lymphadenopathy noted. On serial CTs, the mass grew to a size of 3.5cm (Figure 1). This prompted a percutaneous biopsy that was inconclusive but complicated by a retroperitoneal hematoma. Postbiopsy hypertension was not noted, and the patient did not require a blood transfusion. Due to the increasing size of the mass, however, a decision was made to perform an excisional biopsy of the mass using a robot-assisted laparoscopic approach. The patient exhibited none of the usual signs or symptoms of a pheochromocytoma.\nContrast-enhanced CT in the axial plane, showing the mass at the left renal hilum.\n[SUBTITLE] Surgical Technique [SUBSECTION] No significant medical modifications were performed, because the patient had no stigmata of a pheochromocytoma. The procedure was performed with the patient in the right lateral decubitus position. A transperitoneal approach was used, and insufflation was obtained using a Veress needle technique. A 12-mm camera port was placed lateral to the rectus muscle at the level of the umbilicus. Two 8-mm robotic trocars were placed: one in the left lower quadrant above the anterior superior iliac spine and the other in the upper left quadrant midline between the xiphoid and the umbilicus, approximately 14cm apart. A 5-mm assistant port was placed midline just below the umbilicus.\nThe left colon was mobilized medially. The gonadal vein was traced to the level of the renal vein where the mass was clearly visible. Careful dissection enabled excellent visualization of the gonadal, ascending lumbar, and adrenal vein branches of the left renal vein. The mass was trapped behind the left renal vein and invested by venous tributaries (Figure 2). The gonadal vein was ligated and divided to create a window where the mass could be removed from behind the vein. Two other small veins were divided. During the dissection, one of these branches was avulsed from the main renal vein and repaired with a figure of eight 4-0 Prolene stitch. The mass was then mobilized from behind the vein. A small vascular pedicle was noted and clipped with titanium clips, and the mass was excised en bloc (Figure 3). Hemostasis was ensured, and SugiFLO hemostatic matrix (Johnson & Johnson, Somerville, NJ) was applied to the surgical bed. The mass was placed into a laparoscopic entrapment sac and sent for pathologic examination.\nView of mass posterior to the left renal vein. A posterior lumbar vessel was encountered and controlled accordingly.\nThe mass completely mobilized.\nThe patient tolerated the procedure very well and had an uneventful postoperative course. 
She was discharged home on postoperative day one. Final surgical pathology demonstrated a 3.4x2.9x2.0-cm pheochromocytoma without evidence of malignancy.", "No significant medical modifications were performed, because the patient had no stigmata of a pheochromocytoma. The procedure was performed with the patient in the right lateral decubitus position. A transperitoneal approach was used, and insufflation was obtained using a Veress needle technique. A 12-mm camera port was placed lateral to the rectus muscle at the level of the umbilicus. Two 8-mm robotic trocars were placed: one in the left lower quadrant above the anterior superior iliac spine and the other in the upper left quadrant midline between the xiphoid and the umbilicus, approximately 14cm apart. A 5-mm assistant port was placed midline just below the umbilicus.\nThe left colon was mobilized medially. The gonadal vein was traced to the level of the renal vein where the mass was clearly visible. Careful dissection enabled excellent visualization of the gonadal, ascending lumbar, and adrenal vein branches of the left renal vein. The mass was trapped behind the left renal vein and invested by venous tributaries (Figure 2). The gonadal vein was ligated and divided to create a window where the mass could be removed from behind the vein. Two other small veins were divided. 
During the dissection, one of these branches was avulsed from the main renal vein and repaired with a figure of eight 4-0 Prolene stitch. The mass was then mobilized from behind the vein. A small vascular pedicle was noted and clipped with titanium clips, and the mass was excised en bloc (Figure 3). Hemostasis was ensured, and SugiFLO hemostatic matrix (Johnson & Johnson, Somerville, NJ) was applied to the surgical bed. The mass was placed into a laparoscopic entrapment sac and sent for pathologic examination.\nView of mass posterior to the left renal vein. A posterior lumbar vessel was encountered and controlled accordingly.\nThe mass completely mobilized.\nThe patient tolerated the procedure very well and had an uneventful postoperative course. She was discharged home on postoperative day one. Final surgical pathology demonstrated a 3.4x2.9x2.0-cm pheochromocytoma without evidence of malignancy." ]
[ null, null, null ]
[ "INTRODUCTION", "CASE REPORT", "Surgical Technique", "DISCUSSION", "CONCLUSION" ]
[ "Extra-adrenal pheochromocytomas are rare tumors found in the general population. When discovered, they are usually within the Organ of Zuckerkandl, a conglomerate of neuroendocrine tissue located along the aorta. We present the management of a 64-year-old female with an extra-adrenal pheochromocytoma found in an unusual location between the left renal artery and vein.", "The patient is a 64-year-old, white female with a recent history of cardiac catheterization who presented with a retroperitoneal mass located between her left renal artery and vein (Figure 1). Her medical history includes mild hypertension controlled with amlodipine. Her recent cardiac stents necessitated daily aspirin and clopidogrel. The mass was diagnosed on an abdominal CT performed one year earlier for unexplained abdominal pain. The well-circumscribed mass was initially 2.0cm in diameter and enhanced with IV contrast administration. There was no lymphadenopathy noted. On serial CTs, the mass grew to a size of 3.5cm (Figure 1). This prompted a percutaneous biopsy that was inconclusive but complicated by a retroperitoneal hematoma. Postbiopsy hypertension was not noted, and the patient did not require a blood transfusion. Due to the increasing size of the mass, however, a decision was made to perform an excisional biopsy of the mass using a robot-assisted laparoscopic approach. The patient exhibited none of the usual signs or symptoms of a pheochromocytoma.\nContrast-enhanced CT in the axial plane, showing the mass at the left renal hilum.\n[SUBTITLE] Surgical Technique [SUBSECTION] No significant medical modifications were performed, because the patient had no stigmata of a pheochromocytoma. The procedure was performed with the patient in the right lateral decubitus position. A transperitoneal approach was used, and insufflation was obtained using a Veress needle technique. A 12-mm camera port was placed lateral to the rectus muscle at the level of the umbilicus. Two 8-mm robotic trocars were placed: one in the left lower quadrant above the anterior superior iliac spine and the other in the upper left quadrant midline between the xiphoid and the umbilicus, approximately 14cm apart. A 5-mm assistant port was placed midline just below the umbilicus.\nThe left colon was mobilized medially. The gonadal vein was traced to the level of the renal vein where the mass was clearly visible. Careful dissection enabled excellent visualization of the gonadal, ascending lumbar, and adrenal vein branches of the left renal vein. The mass was trapped behind the left renal vein and invested by venous tributaries (Figure 2). The gonadal vein was ligated and divided to create a window where the mass could be removed from behind the vein. Two other small veins were divided. During the dissection, one of these branches was avulsed from the main renal vein and repaired with a figure of eight 4-0 Prolene stitch. The mass was then mobilized from behind the vein. A small vascular pedicle was noted and clipped with titanium clips, and the mass was excised en bloc (Figure 3). Hemostasis was ensured, and SugiFLO hemostatic matrix (Johnson & Johnson, Somerville, NJ) was applied to the surgical bed. The mass was placed into a laparoscopic entrapment sac and sent for pathologic examination.\nView of mass posterior to the left renal vein. A posterior lumbar vessel was encountered and controlled accordingly.\nThe mass completely mobilized.\nThe patient tolerated the procedure very well and had an uneventful postoperative course. 
She was discharged home on postoperative day one. Final surgical pathology demonstrated a 3.4x2.9x2.0-cm pheochromocytoma without evidence of malignancy.", "No significant medical modifications were performed, because the patient had no stigmata of a pheochromocytoma. The procedure was performed with the patient in the right lateral decubitus position. A transperitoneal approach was used, and insufflation was obtained using a Veress needle technique. A 12-mm camera port was placed lateral to the rectus muscle at the level of the umbilicus. Two 8-mm robotic trocars were placed: one in the left lower quadrant above the anterior superior iliac spine and the other in the upper left quadrant midline between the xiphoid and the umbilicus, approximately 14cm apart. A 5-mm assistant port was placed midline just below the umbilicus.\nThe left colon was mobilized medially. The gonadal vein was traced to the level of the renal vein where the mass was clearly visible. Careful dissection enabled excellent visualization of the gonadal, ascending lumbar, and adrenal vein branches of the left renal vein. The mass was trapped behind the left renal vein and invested by venous tributaries (Figure 2). The gonadal vein was ligated and divided to create a window where the mass could be removed from behind the vein. Two other small veins were divided. 
During the dissection, one of these branches was avulsed from the main renal vein and repaired with a figure of eight 4-0 Prolene stitch. The mass was then mobilized from behind the vein. A small vascular pedicle was noted and clipped with titanium clips, and the mass was excised en bloc (Figure 3). Hemostasis was ensured, and SugiFLO hemostatic matrix (Johnson & Johnson, Somerville, NJ) was applied to the surgical bed. The mass was placed into a laparoscopic entrapment sac and sent for pathologic examination.\nView of mass posterior to the left renal vein. A posterior lumbar vessel was encountered and controlled accordingly.\nThe mass completely mobilized.\nThe patient tolerated the procedure very well and had an uneventful postoperative course. She was discharged home on postoperative day one. Final surgical pathology demonstrated a 3.4x2.9x2.0-cm pheochromocytoma without evidence of malignancy.", "Pheochromocytomas are rare tumors consisting of catecholamine-producing chromaffin cells usually arising from the adrenal medulla. In 15% to 25% of cases, they may arise from the embryological adrenal remnants referred to as extra-adrenal paragangliomas.1–4 These are found approximately 4 times as often in children as in adults.5 In adults, 85% of these occur in the abdomen,6 mainly along the para-spinal sympathetic ganglion and organ of Zuckerkandl.\nClinically, pheochromocytomas may occur along with hypertension, tachycardia, pallor, and headache.4,7 Hypertension is classically paroxysmal, with or without sustained hypertension. Conversely, the patient may be normotensive without a history of hypertension, particularly in incidentalomas.8 Given the exponential increase in imaging tests ordered in recent years, 25% of all discovered pheochromocytomas are incidental findings. The number of normotensive patients with pheochromocytomas has increased significantly as well.8–11\nWhile most pheochromocytomas occur sporadically, there is significant evidence of a genetic component, particularly within familial syndromes, such as von Hippel-Lindau, neurofibromatosis type 1, and multiple endocrine neoplasia 2A and 2B.12 Recent advances in genetic analysis have demonstrated hereditary causes of paragangliomas as well, such as mutations in the enzyme succinate dehydrogenase.13,14\nThere is growing interest in minimally invasive approaches for the treatment of small tumors discovered incidentally on imaging. Laparoscopic adrenalectomy has been reported for pheochromocytoma.15 Because it is a much less common entity, significantly less data are available on such techniques for extra-adrenal disease. Given the difficult location of this mass, a robot-assisted laparoscopic approach was used. The minor vascular repair was performed without difficulty. This may have proven to be a challenge had a standard laparoscopic procedure been used.\nOur port configuration enabled easy access to the mass as well as the lower pole of the kidney. Additionally, the 3-dimensional vision afforded clear delineation of the vascular anatomy investing the tumor. The renal vein and all tributaries caused very little difficulty during the dissection.\nA consideration was made to excise this mass from a retroperitoneal approach. This may have potentially avoided the venous tributaries encountered during the dissection. 
However, the increased working room afforded by the transperitoneal approach proved to be both safe and efficacious.", "This is to our knowledge the first reported case of a robot-assisted laparoscopic excision of a peri-hilar pheochromocytoma. The benefits of the da Vinci Surgical System including the wristed instrumentation and 3-dimensional vision were felt to be useful in the successful completion of this case without renal loss or conversion to open excision." ]
[ null, null, null, "discussion", "conclusions" ]
[ "Pheochromocytoma", "Extra-adrenal", "Retroperitoneal mass", "da Vinci", "Robotic", "Laparoscopic", "Renal hilum" ]
Laparoscopic retrieval of intrauterine device perforating the sigmoid colon.
21333209
The intrauterine device (IUD) is a well-tolerated, widely used contraceptive. A major but infrequent complication of the IUD is perforation of the uterus or cervix and migration of the device into the abdomen. Our case of laparoscopic retrieval of an IUD perforating the sigmoid colon illustrates this rare complication.
INTRODUCTION
A 36-year-old woman with a history of IUD placement 4 years earlier presented with complaints of abdominal pain and bright red blood per rectum. She had conceived 9 months after IUD placement and suffered a spontaneous abortion requiring an evacuation of the retained products of conception. At presentation, she was afebrile with normal vital signs. Physical examination was significant for tenderness to palpation over the left lower quadrant.
METHODS
Computed tomography (CT) scans of the abdomen and pelvis showed a foreign body through the wall of the uterus and entering the colon. Colonoscopy revealed an IUD penetrating the sigmoid wall, and multiple failed attempts were made to remove the IUD colonoscopically. Diagnostic laparoscopy was performed that revealed an IUD perforating the uterus and entering the sigmoid. The IUD was manipulated free and removed, and a suture closed the sigmoid defect. The patient was discharged home on the first postoperative day without complication.
RESULTS
The IUD is one of the most effective, safe, and economic contraceptive methods. Uterine perforation and intraperitoneal translocation is an unusual complication of an IUD. Perforation of a hollow viscus is likely even less common. Confirmation of a "missing" IUD is mandatory if pregnancy occurs after IUD placement. Removal of a translocated IUD is recommended, and operative laparoscopy is the preferred method.
CONCLUSIONS
[ "Adult", "Colonoscopy", "Device Removal", "Female", "Humans", "Intestinal Perforation", "Intrauterine Device Migration", "Laparoscopy", "Sigmoid Diseases", "Tomography, X-Ray Computed", "Uterine Perforation" ]
3041052
null
null
null
null
null
null
CONCLUSION
The intrauterine device (IUD) is generally a well-tolerated, effective contraceptive. A serious but infrequent complication of the IUD is perforation of the uterus and migration of the device into the abdominal cavity or adjacent organs. If pregnancy occurs after IUD placement, clinicians should confirm the presence and location of the IUD with radiographs of the abdomen and pelvis and subsequent workup as indicated by symptoms. Endoscopy may be both informative and therapeutic. Computed tomography often aids in operative planning. Removal of a translocated IUD is recommended and operative laparoscopy is the preferred method.
[ "INTRODUCTION", "CASE REPORT" ]
[ "The intrauterine device (IUD) is a highly effective, economic, usually well-tolerated, widely used reversible contraceptive. A major but infrequent complication of the IUD is perforation of the uterus or cervix and migration of the device into the retroperitoneum or abdomen. The following case of IUD perforation of the sigmoid colon highlights this rare complication.", "A 36-year-old woman presented with a 4-year history of epigastric and left abdominal pain with intermittent bright red blood in her stools attributed to hemorrhoids. Her symptoms had worsened over the preceding 8 weeks. The IUD had been placed 4 years prior. She became pregnant 9 months after IUD placement, suffered a spontaneous abortion, and underwent evacuation of retained products of conception. During this procedure, the IUD was not identified. No further radiographic evaluation was performed.\nAt presentation, she was afebrile with normal vital signs. Her physical examination was significant for tenderness to palpation over the left lower quadrant. Computed tomography (CT) scans of the abdomen and pelvis showed a foreign body through the posterior wall of the uterus and entering the colon (Figure 1). Colonoscopy revealed a yellow foreign body consistent with an IUD penetrating the sigmoid wall with surrounding granulation tissue (Figure 2). Multiple attempts were made to remove the IUD colonoscopically by an experienced endoscopist, but due to its T-shape and dense surrounding inflammation, it could not be removed without significant risk of perforation of the colon wall. After discussing alternative treatment options with the patient, we elected to pursue diagnostic laparoscopy.\nComputed tomographic scan of the abdomen and pelvis showed a foreign body invading the posterior wall of the uterus and entering the colon.\nColonoscopy revealing an IUD penetrating the sigmoid wall with surrounding granulation tissue.\nDuring the operation, careful use of cautery and sharp dissection of the inflammatory mass deep in the pelvis revealed an IUD perforating the uterus and entering the sigmoid (Figures 3 and 4). The IUD was carefully manipulated free and placed in an endobag. A 4–0 Maxon figure-of-eight suture closed the sigmoid defect. The uterine defect did not require repair. Inspection of the remainder of the abdomen and pelvis showed no gross abnormalities. No intraperitoneal spillage of bowel contents occurred. No drain was placed, and no postoperative antibiotic therapy was required. The T-shape of the IUD prevented colonoscopic retrieval and would have likely resulted in a much larger tear in the colon wall or free perforation without surgical control. The uterus abutted the repair of the colon, in effect sealing our repair nicely, which, due to chronic inflammatory changes, required only an absorbable, laparoscopically placed suture. Drainage was avoided in hopes of preventing a colocutaneous or colouterine fistula. Outpatient antibiotics were not prescribed, because no colonic spillage or free perforation was present. In doing so, we avoided selecting out resistant bacteria or causing a postoperative complication, such as Clostridium difficile colitis. The patient recovered uneventfully and was discharged home without complications on the first postoperative day. Her follow-up examination in the outpatient clinic was also without complication.\nLaparoscopy showing an IUD perforating the uterus and entering the sigmoid.\nLaparoscopy showing an IUD perforating the uterus and entering the sigmoid." ]
[ null, null ]
[ "INTRODUCTION", "CASE REPORT", "DISCUSSION", "CONCLUSION" ]
[ "The intrauterine device (IUD) is a highly effective, economic, usually well-tolerated, widely used reversible contraceptive. A major but infrequent complication of the IUD is perforation of the uterus or cervix and migration of the device into the retroperitoneum or abdomen. The following case of IUD perforation of the sigmoid colon highlights this rare complication.", "A 36-year-old woman presented with a 4-year history of epigastric and left abdominal pain with intermittent bright red blood in her stools attributed to hemorrhoids. Her symptoms had worsened over the preceding 8 weeks. The IUD had been placed 4 years prior. She became pregnant 9 months after IUD placement, suffered a spontaneous abortion, and underwent evacuation of retained products of conception. During this procedure, the IUD was not identified. No further radiographic evaluation was performed.\nAt presentation, she was afebrile with normal vital signs. Her physical examination was significant for tenderness to palpation over the left lower quadrant. Computed tomography (CT) scans of the abdomen and pelvis showed a foreign body through the posterior wall of the uterus and entering the colon (Figure 1). Colonoscopy revealed a yellow foreign body consistent with an IUD penetrating the sigmoid wall with surrounding granulation tissue (Figure 2). Multiple attempts were made to remove the IUD colonoscopically by an experienced endoscopist, but due to its T-shape and dense surrounding inflammation, it could not be removed without significant risk of perforation of the colon wall. After discussing alternative treatment options with the patient, we elected to pursue diagnostic laparoscopy.\nComputed tomographic scan of the abdomen and pelvis showed a foreign body invading the posterior wall of the uterus and entering the colon.\nColonoscopy revealing an IUD penetrating the sigmoid wall with surrounding granulation tissue.\nDuring the operation, careful use of cautery and sharp dissection of the inflammatory mass deep in the pelvis revealed an IUD perforating the uterus and entering the sigmoid (Figures 3 and 4). The IUD was carefully manipulated free and placed in an endobag. A 4–0 Maxon figure-of-eight suture closed the sigmoid defect. The uterine defect did not require repair. Inspection of the remainder of the abdomen and pelvis showed no gross abnormalities. No intraperitoneal spillage of bowel contents occurred. No drain was placed, and no postoperative antibiotic therapy was required. The T-shape of the IUD prevented colonoscopic retrieval and would have likely resulted in a much larger tear in the colon wall or free perforation without surgical control. The uterus abutted the repair of the colon, in effect sealing our repair nicely, which, due to chronic inflammatory changes, required only an absorbable, laparoscopically placed suture. Drainage was avoided in hopes of preventing a colocutaneous or colouterine fistula. Outpatient antibiotics were not prescribed, because no colonic spillage or free perforation was present. In doing so, we avoided selecting out resistant bacteria or causing a postoperative complication, such as Clostridium difficile colitis. The patient recovered uneventfully and was discharged home without complications on the first postoperative day. 
Her follow-up examination in the outpatient clinic was also without complication.\nLaparoscopy showing an IUD perforating the uterus and entering the sigmoid.\nLaparoscopy showing an IUD perforating the uterus and entering the sigmoid.", "The IUD is one of the most effective, safe, and economic contraceptive methods.1 Uterine perforation and translocation is an unusual complication of an IUD, occurring in 1.3/1000.2 Uterine perforation usually occurs during insertion and may be partial, with only a portion of the IUD piercing the uterine wall or cervix, or complete involving adjacent pelvic organs, such as the bladder, appendix, or rectosigmoid. Risk factors for perforation include clinician inexperience in IUD placement, an immobile or retroverted uterus, or insertion postpartum during lactation when the uterine wall is thin.2,3\nPerforations may be asymptomatic or may cause pelvic pain and abnormal vaginal bleeding. Since perforation may go unrecognized, many clinicians re-examine the patient 6 weeks after IUD insertion. Once perforation has been identified, the patient should be treated with antibiotics as for pelvic inflammatory disease and the IUD removed.4 Ultrasound or CT may be used to determine the location of a perforated IUD.\nRemoval of perforated IUDs is recommended due to risk of injury to neighboring organs and associated inflammatory reaction unless the surgical risk is excessive.5–8 Most frequently, it is found encased in adhesions, adherent to the sigmoid colon or omentum, or freely floating in the cul de sac.8–14 Operative laparoscopy is the preferred method of removal and can be performed electively in asymptomatic patients. If laparoscopy is unsuccessful due to extensive adhesions, the procedure should be converted to a laparotomy.3,9", "The intrauterine device (IUD) is generally a well-tolerated, effective contraceptive. A serious but infrequent complication of the IUD is perforation of the uterus and migration of the device into the abdominal cavity or adjacent organs. If pregnancy occurs after IUD placement, clinicians should confirm the presence and location of the IUD with radiographs of the abdomen and pelvis and subsequent workup as indicated by symptoms. Endoscopy may be both informative and therapeutic. Computed tomography often aids in operative planning. Removal of a translocated IUD is recommended and operative laparoscopy is the preferred method." ]
[ null, null, "discussion", "conclusions" ]
[ "Intrauterine device", "Laparoscopy", "Perforation", "Sigmoid colon", "Uterus" ]
Feasibility of assessing public health impacts of air pollution reduction programs on a local scale: New Haven case study.
21335318
New approaches to link health surveillance data with environmental and population exposure information are needed to examine the health benefits of risk management decisions.
BACKGROUND
Using a hybrid modeling approach that combines regional and local-scale air quality data, we estimated ambient concentrations for multiple air pollutants [e.g., PM2.5 (particulate matter ≤ 2.5 μm in aerodynamic diameter), NOx (nitrogen oxides)] for baseline year 2001 and projected emissions for 2010, 2020, and 2030. We assessed the feasibility of detecting health improvements in relation to reductions in air pollution for 26 different pollutant-health outcome linkages using both sample size and exploratory epidemiological simulations to further inform decision-making needs.
METHODS
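The "hybrid" exposure estimates referred to above combine a regional background term, regional CMAQ model output, and local AERMOD dispersion-model output at each census block group centroid, as described in the methods later in this record. The sketch below illustrates one simple way such terms could be combined (a plain sum); the summation, the input file, and the column names are assumptions for illustration only, not the study's actual data or code.

```python
# Illustrative sketch of a hybrid exposure estimate: total concentration at each
# block-group centroid = regional background + CMAQ contribution + AERMOD local
# contribution. File and column names are assumptions, not the study's data.
import pandas as pd

df = pd.read_csv("block_group_hourly_model_output.csv")  # hypothetical file
df["pm25_total"] = df["pm25_background"] + df["pm25_cmaq"] + df["pm25_aermod"]

# Annual average per census block group (the study mapped block-group concentrations).
annual_mean = df.groupby("block_group_id")["pm25_total"].mean()
print(annual_mean.describe())
```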
Model projections suggested decreases (~10-60%) in pollutant concentrations, mainly attributable to decreases in pollutants from local sources between 2001 and 2010. Models indicated considerable spatial variability in the concentrations of most pollutants. Sample size analyses supported the feasibility of identifying linkages between reductions in NOx and improvements in all-cause mortality, prevalence of asthma in children and adults, and cardiovascular and respiratory hospitalizations.
RESULTS
Substantial reductions in air pollution (e.g., ~60% for NOx) are needed to detect health impacts of environmental actions using traditional epidemiological study designs in small communities like New Haven. In contrast, exploratory epidemiological simulations suggest that it may be possible to demonstrate the health impacts of PM reductions by predicting intraurban pollution gradients within New Haven using coupled models.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Air Pollutants", "Air Pollution", "Cardiovascular Diseases", "Child", "Child, Preschool", "Cities", "Connecticut", "Conservation of Natural Resources", "Environmental Policy", "Feasibility Studies", "Health Status", "Humans", "Infant", "Infant, Newborn", "Linear Models", "Middle Aged", "Models, Chemical", "Public Health", "Respiratory Tract Diseases", "Young Adult" ]
3080930
null
null
Simulation-based epidemiological feasibility analysis
The average number of hospitalizations for CHD among those ≥ 65 years of age in each census block group decreased between 2001 and 2010, whereas average numbers of asthma hospitalizations increased (Table 3). We were unable to detect associations between small reductions in PM2.5 pollution concentrations and health outcomes, so we restricted our analysis to census block groups with PM2.5 reductions of > 4 μg/m3 (n = 30). For these census block groups, numbers of CHD hospitalizations were inversely associated with the estimated reduction in PM2.5 concentrations, indicating that numbers of hospitalizations decreased as the reduction in PM2.5 increased (p < 0.1; Table 4, Figure 3A). Asthma hospitalizations were also inversely associated with reductions in PM2.5 concentrations based on our simulations, suggesting that greater reductions in PM2.5 may slow the increase in asthma hospitalizations over time (Table 4, Figure 3B). However, the inverse association was weaker than for CHD hospitalizations. Finally, in an exploratory analysis in which we included additional surrogate variables in the regression models to capture neighborhood effects, model R2 values increased substantially, whereas the PM2.5 effect estimates were somewhat attenuated, depending on the outcome chosen (data not shown).
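A minimal sketch of the block-group regression described above, written in Python for illustration (the study's analyses were run in SAS 9.1); the input file and the column names pm25_reduction, delta_chd_hosp, and delta_asthma_hosp are assumed, not taken from the study's data.

```python
# Illustrative sketch only (not the study's SAS code): regress the simulated
# 2001-to-2010 change in hospitalizations on the modeled PM2.5 reduction across
# census block groups, restricted to block groups with reductions > 4 ug/m3.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed input: one row per census block group with the modeled PM2.5 reduction
# (2001 minus 2010) and the simulated change in hospitalization counts.
df = pd.read_csv("new_haven_block_groups.csv")  # hypothetical file name

subset = df[df["pm25_reduction"] > 4.0]  # about 30 block groups met this criterion

for outcome in ["delta_chd_hosp", "delta_asthma_hosp"]:
    fit = smf.ols(f"{outcome} ~ pm25_reduction", data=subset).fit()
    # A negative slope indicates fewer (or more slowly increasing) hospitalizations
    # in block groups with larger modeled PM2.5 reductions.
    print(outcome, fit.params["pm25_reduction"], fit.pvalues["pm25_reduction"])
```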
Results
[SUBTITLE] Air quality modeling [SUBSECTION] We produced combined CMAQ and AERMOD model results for the study area. Model estimates for NOx and PM2.5 were consistent with measured Air Quality System (AQS) monitoring data for the area (Johnson et al. 2010). Figure 1 shows maps of modeled PM2.5 and NOx concentrations for the baseline year (2001) and projections for 2010, 2020, and 2030. PM2.5 maps for all four time points show a wide range of concentrations in the study area, with high concentrations in the city center, near the port areas, and near major roadways such as I-95, whereas PM2.5 concentrations in suburban areas were much lower than in central parts of the study area. Finally, PM2.5 concentrations are projected to decrease over time, with the most pronounced decreases in areas with the highest estimated concentrations in 2001. Spatial patterns in ambient air quality concentrations for NOx relate strongly to the sources of mobile emissions such as major highways. NOx concentrations shown here depict strong spatial gradients because many of the locations for which we estimated concentrations (population-weighted centroids of the 318 census block groups) are near major roadways. Thus, contrasts between those areas and suburban areas are very pronounced. Finally, NOx concentrations are also projected to decrease considerably over time, particularly in locations close to roadways, because of the implementation of federal emission standards for mobile source emissions. Figure 2 shows distributions of annual and daily average modeled PM2.5 and NOx concentrations for the baseline model year 2001 and projections (2010, 2020, and 2030). We sorted the annual average concentrations for 2001 and divided the 318 study area locations (census block groups) into three groups according to average pollutant concentrations at baseline: low (locations in the lowest 25% of the distribution), medium (locations in the 2nd and 3rd quartiles), and high (locations in the highest quartile of the distribution). As expected, daily averages are more variable than annual averages for both PM2.5 and NOx, which indicates the importance of temporal variability in pollutant concentrations. Downward trends in PM2.5 concentrations were evident for areas with medium and high concentrations, but not for low-concentration areas. Declines in NOx concentrations were evident over time for all three groups, but the decrease was also much sharper in the high-concentration areas. The models predicted large percentage decreases for NOx between 2001 and 2010 (61%), with less pronounced decreases from 2001 to 2020 and 2030 (overall decreases of 78% and 81%, respectively). For PM2.5 the models predicted smaller percentage decreases from 2001 to 2010 (8%), 2020 (9%), and 2030 (9%). [SUBTITLE] Sample-size–based feasibility analysis [SUBSECTION] Table 1 exhibits the percent reduction in concentrations for a given pollutant that would be needed if we assumed a specific RR (1.01, 1.05, 1.10, 1.15, and 1.20) and an estimated reduction in adverse health outcome (2.5%, 5%, 10%, 15%, and 20%) using the assumptions stated above. Table 2 lists the health outcomes that we explored with corresponding minimum statistically detectable decreases in each outcome for the New Haven study population given the baseline rate of the outcome. The New Haven study population area is sufficient in size to examine reductions in adverse health outcomes ranging from a low of 2.5% (adult asthma prevalence) to a high of 10% (all-cause mortality and hospital discharge for cardiovascular diseases and respiratory causes). Based on the percentage decreases in air pollution projected for 2010, 2020, and 2030 within New Haven for NOx (61%, 2010; 78%, 2020; 81%, 2030) and PM2.5 (8%, 2010; 9%, 2020 and 2030), the percent reduction in air pollution needed to produce a given change in the outcome (Table 1), and the minimum statistically significant percent decrease in each health outcome that can be detected in the New Haven study population (Table 2), we can assess the feasibility of detecting beneficial health effects of air pollution reductions. Of the 26 different air pollution–health outcome linkages assessed, only five, all NOx related, are potentially feasible (Table 2, last column): all-cause mortality, cardiovascular disease hospitalization, respiratory disease hospitalization, current prevalence of asthma in children, and current prevalence of asthma in adults. [SUBTITLE] Simulation-based epidemiological feasibility analysis [SUBSECTION] The average number of hospitalizations for CHD among those ≥ 65 years of age in each census block group decreased between 2001 and 2010, whereas average numbers of asthma hospitalizations increased (Table 3). We were unable to detect associations between small reductions in PM2.5 pollution concentrations and health outcomes, so we restricted our analysis to census block groups with PM2.5 reductions of > 4 μg/m3 (n = 30). For these census block groups, numbers of CHD hospitalizations were inversely associated with the estimated reduction in PM2.5 concentrations, indicating that numbers of hospitalizations decreased as the reduction in PM2.5 increased (p < 0.1; Table 4, Figure 3A). Asthma hospitalizations were also inversely associated with reductions in PM2.5 concentrations based on our simulations, suggesting that greater reductions in PM2.5 may slow the increase in asthma hospitalizations over time (Table 4, Figure 3B). However, the inverse association was weaker than for CHD hospitalizations. Finally, in an exploratory analysis in which we included additional surrogate variables in the regression models to capture neighborhood effects, model R2 values increased substantially, whereas the PM2.5 effect estimates were somewhat attenuated, depending on the outcome chosen (data not shown).
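As a rough companion to the sample-size feasibility results above, the sketch below approximates the minimum detectable percentage decrease in an outcome for a study population of 367,173 (the 2007 estimate used in the study) with a normal-approximation two-proportion power calculation; the study used a one-sided likelihood ratio chi-square test at α = 0.05 and power = 0.80, so results will be similar but not identical, and the baseline outcome rate shown is an assumed illustrative value rather than a figure from Table 2.

```python
# Sketch: approximate minimum detectable percentage decrease in an outcome rate,
# given a fixed baseline rate and population size, using a normal-approximation
# two-proportion power calculation (an approximation to the study's approach).
import numpy as np
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

N = 367_173            # 2007 population estimate for the 318 block groups (from the study)
baseline_rate = 0.01   # assumed illustrative baseline outcome rate

power_calc = NormalIndPower()
for pct_drop in np.arange(0.5, 20.5, 0.5):  # candidate percentage decreases
    reduced_rate = baseline_rate * (1 - pct_drop / 100)
    es = proportion_effectsize(baseline_rate, reduced_rate)  # Cohen's h
    power = power_calc.power(effect_size=es, nobs1=N, alpha=0.05,
                             ratio=1.0, alternative="larger")
    if power >= 0.80:
        print(f"Minimum detectable decrease ≈ {pct_drop}% at baseline rate {baseline_rate}")
        break
```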
Conclusions
In this project we successfully applied, compared, and evaluated exposure assessment and epidemiological modeling tools in the context of observed public health status in a relatively small community, New Haven, Connecticut, and provided the U.S. EPA and local, state, and city organizations with a new modeling-based methodology to measure the impact of collective risk mitigation approaches and regulations. Furthermore, because no single regulation or program that affects air quality can be isolated to track its effect on health, this project provided critical findings on how regulatory agencies may better examine the complex interactions of cumulative impacts on air quality and health effects from multiple actions in other urban communities.
[ "Air quality modeling", "Sample-size–based feasibility analysis" ]
[ "We produced combined CMAQ and AERMOD model results for the study area. Model estimates for NOx and PM2.5 were consistent with measured Air Quality System (AQS) monitoring data for the area (Johnson et al. 2010). Figure 1 shows maps of modeled PM2.5 and NOx concentrations for the baseline year (2001) and projections for 2010, 2020, and 2030. PM2.5 maps for all four time points show a wide range of concentrations in the study area, with high concentrations in the city center, near the port areas, and near major roadways such as I-95, whereas PM2.5 concentrations in suburban areas were much lower than in central parts of the study area. Finally, PM2.5 concentrations are projected to decrease over time, with the most pronounced decreases in areas with the highest estimated concentrations in 2001.\nSpatial patterns in ambient air quality concentrations for NOx relates strongly to the sources of mobile emissions such as major highways. NOx concentrations shown here depict strong spatial gradients because many of the locations for which we estimated concentrations (population-weighted centroids of the 318 census block groups) are near major roadways. Thus, contrasts between those areas and suburban areas are very pronounced. Finally, NOx concentrations are also projected to decrease considerably over time, particularly in locations close to roadways, because of the implementation of federal emission standards for mobile source emissions.\nFigure 2 shows distributions of annual and daily average modeled PM2.5 and NOx concentrations for the baseline model year 2001 and projections (2010, 2020, and 2030). We sorted the annual average concentrations for 2001 in order to group the 318 study area locations (census block groups) for which we divided estimates into three groups according to average pollutant concentrations at baseline: low (locations in the lowest 25% of the distribution), medium (locations in the 2nd and 3rd quartiles), and high (locations in the highest quartile of the distribution). As expected, daily averages are more variable than annual averages for both PM2.5 and NOx, which indicates the importance of temporal variability in pollutant concentrations. Downward trends in PM2.5 concentrations were evident for areas with medium and high concentrations, but not for low-concentration areas. Declines in NOx concentrations were evident over time for all three groups, but the decrease was also much sharper in the high-concentration areas. The models predicted large percentage decreases for NOx between 2001 and 2010 (61%) and with less pronounced decreases from 2001 to 2020 and 2030 (overall decreases of 78% and 81%, respectively). For PM2.5 the models predicted smaller percentage decreases from 2001 to 2010 (8%), 2020 (9%), and 2030 (9%).", "Table 1 exhibits the percent reduction in concentrations for a given pollutant that would be needed if we assumed a specific RR (1.01, 1.05, 1.10, 1.15, and 1.20) and an estimated reduction in adverse health outcome (2.5%, 5%, 10%, 15%, and 20%) using the assumptions stated above. Table 2 lists the health outcomes that we explored with corresponding minimum statistically detectable decreases in each outcome for the New Haven study population given the baseline rate of the outcome. 
The New Haven study population area is sufficient in size to examine reductions in adverse health outcomes ranging from a low of 2.5% (adult asthma prevalence) to a high of 10% (all-cause mortality and hospital discharge for cardiovascular diseases and respiratory causes).\nBased on the percentage decreases in air pollution projected for 2010, 2020, and 2030 within New Haven for NOx (61%, 2010; 78%, 2020; 81%, 2030) and PM2.5 (8%, 2010; 9%, 2020 and 2030), the percent reduction in air pollution needed to produce a given change in the outcome (Table 1), and the minimum statistically significant percent decrease in each health outcome that can be detected in the New Haven study population (Table 2), we can assess the feasibility of detecting beneficial health effects of air pollution reductions. Of the 26 different air pollution–health outcome linkages assessed, only five, all NOx related, are potentially feasible (Table 2, last column): all-cause mortality, cardiovascular disease hospitalization, respiratory disease hospitalization, current prevalence of asthma in children, and current prevalence of asthma in adults." ]
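To make the Table 1 logic concrete, here is a small Python sketch that computes the percent reduction in exposure needed to reach a target percent reduction in an outcome, assuming a log-linear concentration-response function with β = ln(RR)/c as defined in the study's methods. The exact equation used to generate Table 1 is not reproduced in this excerpt, so the functional form below is an assumption; values of 100% or more indicate an unachievable target, matching the interpretation described in the text.

```python
# Sketch: percent reduction in exposure required for a target percent reduction
# in an outcome, under an assumed log-linear concentration-response function
# with beta = ln(RR) / c (c = mean ambient concentration). Illustrative only.
import math

def required_exposure_reduction(rr: float, target_outcome_drop: float) -> float:
    """Percent decrease in mean exposure needed so the modeled outcome rate falls
    by `target_outcome_drop` percent, for relative risk `rr` per increment equal
    to the mean ambient concentration. Values >= 100 mean the target is infeasible."""
    frac = -math.log(1 - target_outcome_drop / 100) / math.log(rr)
    return 100 * frac

# Grids of RR values and outcome reductions matching those stated in the text.
for rr in (1.01, 1.05, 1.10, 1.15, 1.20):
    for drop in (2.5, 5, 10, 15, 20):
        print(f"RR={rr:.2f}, outcome drop={drop:>4}% -> "
              f"exposure reduction needed ≈ {required_exposure_reduction(rr, drop):.0f}%")
```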
[ null, "methods" ]
[ "Materials and Methods", "Results", "Air quality modeling", "Sample-size–based feasibility analysis", "Simulation-based epidemiological feasibility analysis", "Discussion", "Conclusions" ]
[ "The New Haven Study Area is centered in the City of New Haven, Connecticut (population ~ 127,000), and extends to a 20-km radius, encompassing 318 census block groups in New Haven County with an estimated population in 2007 of more than 367,000 people. The City of New Haven is located on the southern coast of Connecticut on New Haven Harbor, which is fed by three rivers (the West, Mill, and Quinnipiac) that discharge into northern Long Island Sound. New Haven lies at the intersection of interstates I-91 and I-95, both major regional expressways that are often congested. In addition, several surface arteries pass through or around New Haven, including Routes 1, 10, 17, 34, and 63. Seaborne traffic passes through the Port of New Haven, a deep-water seaport that attracts a considerable number of barges and associated truck and rail traffic. In addition to several institutional power plants, one power generation facility serves the community. This wide range of emission source categories allows for testing of multipollutant emission control strategies.\nWe evaluated the overall feasibility of assessing the public health impact of air pollution reduction programs in the City of New Haven by linking projected emissions reductions from overall regulatory actions to estimated detectable health outcome changes. We began by identifying pollutants of interest for New Haven based on the local emissions inventory for the baseline year of 2001 (Weil 2004) and criteria air pollutants. For the present study, we focused on two air pollutants: NOx and PM2.5. We also identified health outcomes that have been associated with these pollutants: cardiovascular disease hospitalization and mortality; respiratory disease hospitalization and mortality; chronic obstructive pulmonary disease mortality and hospitalization; and asthma prevalence, diagnosis, and hospitalization.\nWe then evaluated existing data on ambient level air pollution, emission data, personal exposure data, and health outcome data for the New Haven area. As part of this data inventory evaluation, we assessed the relevance and completeness of data, as well as verification of locations and quantities of emissions from local sources. We then generated emission estimates for NOx and PM2.5 based on local emissions sources and the projected impacts of federal, state, and local regulatory reduction activities. We also applied an improved methodology to predict mobile source emissions (Cook et al. 2008).\nWe first estimated pollutant specific local-scale air concentrations using the U.S. EPA’s AERMOD dispersion model (Cimorelli et al. 2005). This model used information on local emission sources and local meteorological conditions to provide hourly and annual average concentrations at multiple locations corresponding to the weighted centroids of each of the 318 census block groups in the study area. We estimated total NOx and PM2.5 concentrations by combining regional background levels, chemically reactive pollutant estimates from the CMAQ (Community Multiscale Air Quality) model, and the AERMOD estimates. 
We estimated emissions using the baseline year (2001) emissions rates and projected emissions in 2010, 2020, and 2030 based on planned and anticipated pollution control programs.\nTo assess feasibility using a sample size approach, we first determined the minimum detectable decrease in each outcome relative to its baseline incidence rate [tests of two independent proportions for a (one-sided) likelihood ratio chi-square test with an α of 0.05 and power of 0.80] for a study population of 367,173 (i.e., the 2007 Census estimate for the New Haven population within the 318 block groups included in the study area). For some of the health outcomes, we made additional study area subpopulation calculations for different age groups (< 18 years, ≥ 18 years).\nThere is general consensus that RRs associated with air pollution exposure for a wide variety of health outcomes are typically less than 1.50, and usually within the range of 1.01–1.20, often for a 10-μg/m3 change in PM2.5 or an interquartile range change in gaseous pollutant concentrations. Jerrett et al. (2008) and Wellenius et al. (2005) found risk ratios or a RR in this range for NO2. RRs for PM2.5 and various outcomes in this range were found by Pope et al. (2002) and Sheppard et al. (1999), whereas Laden et al. (2006) found higher PM2.5 and mortality RRs of 1.16–1.28, and Peters et al. (2001) found an RR of 1.69 for PM2.5 and acute myocardial infarction.\nNext we determined the percent reduction in exposure that would be required to produce a given reduction in the outcome assuming a range of possible effect sizes for concentration–outcome associations. Specifically, we considered air pollution RR values of 1.01, 1.05, 1.10, 1.15, and 1.20 representing the increase in outcome (y) associated with an incremental increase in a given pollutant exposure equal to the level of the average value of the ambient pollution concentration (c) in the study population. The change in the outcome (Δy) associated with a change in exposure (Δc) is a function of the baseline incidence rate (y) and the risk coefficient (β) for a one-unit increase in exposure:\nwhere β = [ln(RR)]/c. The percent decrease in exposure (Δcreq) required to produce a particular reduction in the outcome for a given RR is calculated as\nValues of Δcreq < 100 indicate the percent reduction in exposure that would be required to produce a specific reduction in the outcome (Δy) assuming a given RR for the exposure–outcome association. Values of Δcreq ≥ 100 indicate that the corresponding value for Δy is not feasible, because exposure would have to be reduced by more than 100% to achieve it.\nFinally, we combined data on projected changes in mean annual ambient concentrations of air pollutants for 2010, 2020, and 2030 with the information on minimum detectable effect estimates and the percent reduction in exposure required to produce a given effect estimate to identify which air pollutant–health outcome associations (out of 26 possible combinations) would be most feasible for assessment.\nFor pollutants such as PM where projected reductions were relatively modest (~ 8%), we used an exploratory epidemiological methodology similar to that presented in Pope et al. (2009). 
Specifically, we used the simulated health data at census block group level (derived from county-specific health data and census information on demographics) to evaluate different strategies for demonstrating impacts of relatively small changes in ambient pollution (compared to NOx) over multiple years.\nThe outcomes for this analysis were the differences between the number of hospitalizations for 2001 and 2010, as illustrated by Equation 3:\nwhere ΔHC is the change in the number of hospitalizations at census block group C and H2010 and H2001 are the number of hospitalizations for the years 2010 and 2001, respectively.\nWe calculated the number of hospitalizations due to congestive heart disease [CHD; International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9CM; World Health Organization 2004), code 428] and asthma (ICD-9CM, code 493) for each census block group for the years 2010 and 2001. We chose these end points based on significant associations (1.28% increase in risk per 10-μg/m3 increase in same-day PM2.5) reported by Dominici et al. (2006) between PM and CHD hospitalizations for the Medicare cohort. We restricted hospitalizations due to CHD to the population > 65 years of age, and we calculated asthma hospitalizations separately for all ages and for the population < 25 years of age, because of known age-dependent differences [Connecticut Department of Public Health (CDPH) 2007]. We calculated the number of hospitalizations for the years 2001 and 2010 as\nwhere Hc is the number of hospitalizations for census block group C, Rate is the rate of hospitalization for females (F) and males (M) of race R (white, Hispanic, black), and PopC is the population of each subgroup (e.g., white female, black male, etc.).\nWe used 2000 U.S. Census Bureau (2001) data to estimate the size of each population subgroup in 2001. We used county-level population projections for 2010 to estimate the proportional change in each population subgroup from 2000 to 2010 and applied this to the 2000 census block group population to estimate 2010 census block populations for each subgroup.\nHospitalization rates for both outcomes are available for all of New Haven County for 2001 (CDPH 2001) and 2007 (CDPH 2007), and we used 2007 data for 2010 hospitalizations. The rates are broken down by age and sex, and age and race, but not by age, sex, and race. We therefore assumed constant ratios of rates of hospitalizations for males and females for all races. We calculated hospitalization rates according to sex, race, age (> 65 years of age for CHD, all ages, and < 25 years of age for asthma), health outcome (CHD or asthma), and year (2001 or 2010) as\nFor example, RateFR is the county-level hospitalization for females of race R; RateR is the county-level hospitalization rate for race R; Ratio of ratesM/F is the ratio of hospitalization rates between males and females; and PopFR and PopMR are county-level population sizes for females and males of race R, respectively. We then calculated RateMR by multiplying RateFR by Ratio of ratesM/F.\nReductions of PM2.5 at each census block group were then regressed against the changes in hospitalizations from 2001 to 2010 at each census block group. All regression analyses were performed using SAS (version 9.1; SAS Institute Inc., Cary, NC).\nOur analysis indirectly accounted for the effects due to changes in key ethnic/racial demographic profile by using group or sex relevant hospitalization rates as part of the health data and feasibility simulations. 
We explored the influence of introducing an additional explanatory variable in our health effects regressions to indirectly account for both neighborhood effects and the missing determinants of observed hospital admissions by computing average admissions either within a 3- or 4-km radius around each census tract (similarly considered by Özkaynak and Thurston 1987).", "[SUBTITLE] Air quality modeling [SUBSECTION] We produced combined CMAQ and AERMOD model results for the study area. Model estimates for NOx and PM2.5 were consistent with measured Air Quality System (AQS) monitoring data for the area (Johnson et al. 2010). Figure 1 shows maps of modeled PM2.5 and NOx concentrations for the baseline year (2001) and projections for 2010, 2020, and 2030. PM2.5 maps for all four time points show a wide range of concentrations in the study area, with high concentrations in the city center, near the port areas, and near major roadways such as I-95, whereas PM2.5 concentrations in suburban areas were much lower than in central parts of the study area. Finally, PM2.5 concentrations are projected to decrease over time, with the most pronounced decreases in areas with the highest estimated concentrations in 2001.\nSpatial patterns in ambient air quality concentrations for NOx relate strongly to the sources of mobile emissions such as major highways. NOx concentrations shown here depict strong spatial gradients because many of the locations for which we estimated concentrations (population-weighted centroids of the 318 census block groups) are near major roadways. Thus, contrasts between those areas and suburban areas are very pronounced. Finally, NOx concentrations are also projected to decrease considerably over time, particularly in locations close to roadways, because of the implementation of federal emission standards for mobile source emissions.\nFigure 2 shows distributions of annual and daily average modeled PM2.5 and NOx concentrations for the baseline model year 2001 and projections (2010, 2020, and 2030). We sorted the annual average concentrations for 2001 and divided the 318 study area locations (census block groups) into three groups according to average pollutant concentrations at baseline: low (locations in the lowest 25% of the distribution), medium (locations in the 2nd and 3rd quartiles), and high (locations in the highest quartile of the distribution). As expected, daily averages are more variable than annual averages for both PM2.5 and NOx, which indicates the importance of temporal variability in pollutant concentrations. Downward trends in PM2.5 concentrations were evident for areas with medium and high concentrations, but not for low-concentration areas. Declines in NOx concentrations were evident over time for all three groups, but the decrease was also much sharper in the high-concentration areas. The models predicted large percentage decreases for NOx between 2001 and 2010 (61%), with less pronounced further decreases from 2001 to 2020 and 2030 (overall decreases of 78% and 81%, respectively). For PM2.5 the models predicted smaller percentage decreases from 2001 to 2010 (8%), 2020 (9%), and 2030 (9%).\n[SUBTITLE] Sample-size–based feasibility analysis [SUBSECTION] Table 1 exhibits the percent reduction in concentrations for a given pollutant that would be needed if we assumed a specific RR (1.01, 1.05, 1.10, 1.15, and 1.20) and an estimated reduction in adverse health outcome (2.5%, 5%, 10%, 15%, and 20%), using the assumptions stated above. Table 2 lists the health outcomes that we explored, with corresponding minimum statistically detectable decreases in each outcome for the New Haven study population given the baseline rate of the outcome.
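One plausible way to reproduce the kind of screening summarized in Tables 1 and 2 is sketched below. It assumes a log-linear concentration–response relationship, so a relative risk defined per fixed concentration increment translates into the concentration reduction required to achieve a target percent reduction in the outcome; the increment and the baseline concentrations used here are illustrative assumptions, not values taken from the study.

```python
# Hypothetical feasibility screening: required % concentration reduction vs. projected.
# Assumes rate ~ RR**(conc / increment); increment and baselines are assumed values.
import math

def required_reduction_pct(rr, target_outcome_reduction, baseline_conc, increment=10.0):
    """Percent drop in ambient concentration needed for a target % drop in the outcome."""
    delta_c = increment * math.log(1.0 / (1.0 - target_outcome_reduction)) / math.log(rr)
    return 100.0 * delta_c / baseline_conc

projected = {"NOx": 61.0, "PM2.5": 8.0}    # modeled % decreases, 2001 -> 2010
baseline = {"NOx": 40.0, "PM2.5": 14.0}    # assumed 2001 annual means (illustrative)

for pollutant, projected_pct in projected.items():
    for rr in (1.01, 1.05, 1.10, 1.20):
        for target in (0.025, 0.05, 0.10):
            needed = required_reduction_pct(rr, target, baseline[pollutant])
            feasible = needed <= projected_pct
            print(f"{pollutant:5s} RR={rr:.2f} target={target:.1%} "
                  f"needs {needed:7.1f}% reduction -> feasible: {feasible}")
```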
The New Haven study population area is sufficient in size to examine reductions in adverse health outcomes ranging from a low of 2.5% (adult asthma prevalence) to a high of 10% (all-cause mortality and hospital discharge for cardiovascular diseases and respiratory causes).\nBased on the percentage decreases in air pollution projected for 2010, 2020, and 2030 within New Haven for NOx (61%, 2010; 78%, 2020; 81%, 2030) and PM2.5 (8%, 2010; 9%, 2020 and 2030), the percent reduction in air pollution needed to produce a given change in the outcome (Table 1), and the minimum statistically significant percent decrease in each health outcome that can be detected in the New Haven study population (Table 2), we can assess the feasibility of detecting beneficial health effects of air pollution reductions. Of the 26 different air pollution–health outcome linkages assessed, only five, all NOx related, are potentially feasible (Table 2, last column): all-cause mortality, cardiovascular disease hospitalization, respiratory disease hospitalization discharge, current prevalence of asthma in children, and current prevalence of asthma in adults.\n[SUBTITLE] Simulation-based epidemiological feasibility analysis [SUBSECTION] The average number of hospitalizations for CHD among those ≥ 65 years of age in each census block group decreased between 2001 and 2010, whereas average numbers of asthma hospitalizations increased (Table 3). We were unable to detect associations between small reductions in PM2.5 pollution concentrations and health outcomes, so we restricted our analysis to census block groups with PM2.5 reductions of > 4 μg/m3 (n = 30). For these census block groups, numbers of CHD hospitalizations were inversely associated with the estimated reduction in PM2.5 concentrations, indicating that numbers of hospitalizations decreased as the reduction in PM2.5 increased (p < 0.1; Table 4, Figure 3A).
Asthma hospitalizations were also inversely associated with reductions in PM2.5 concentrations in our simulations, suggesting that greater reductions in PM2.5 may slow the increase in asthma hospitalizations over time (Table 4, Figure 3B). However, the inverse association was weaker than for CHD hospitalizations. Finally, in an exploratory analysis, including additional surrogate variables intended to capture neighborhood effects in the regression models caused substantial increases in model R2 values, whereas the PM2.5 effect estimates were somewhat attenuated depending on the outcome chosen (data not shown).", "We used detailed information on local health and exposure-related data to assess the feasibility of identifying an impact of cumulative air pollution programs on environmental public health in New Haven for 26 different pollutant–health outcome linkages. Combined regional (CMAQ) and local-scale (AERMOD) air quality modeling showed a small overall decrease in mean PM2.5 concentrations (~ 8–9%), mostly from local sources, between 2001 and 2010; in contrast, we projected that NOx would decrease by > 60%. Most NOx reductions can be attributed to mobile source emission reduction programs. Thus, it is important to accurately characterize near-road impacts. Local reductions in PM2.5 are modest relative to high background PM concentrations. Statistical power calculations suggest that projected decreases in NOx may result in statistically significant improvements in health outcomes, including all-cause mortality, asthma prevalence in children and adults, and cardiovascular and respiratory hospitalizations. For other pollutants with more modest reductions, including PM, we determined the likelihood of a successful traditional analysis linking air pollution reductions to reductions in adverse health outcomes in New Haven to be poor. Alternative epidemiological study designs that use spatially and temporally resolved air quality and exposure models to characterize intraurban gradients were promising based on exploratory epidemiological simulations. However, health outcomes with low baseline rates would have to be strongly associated with air pollution exposures in order for exposure reductions to result in identifiable improvements and thus would not be ideal for examining risk management decisions.\nThis study illustrates the advantages of using air quality models over traditional epidemiological approaches using ambient measurements. For example, central-site data are especially problematic for certain PM components and species (e.g., elemental carbon, organic carbon, coarse and ultrafine PM) that exhibit significant spatial heterogeneity. Also, for many pollutants (e.g., toxic pollutants), ambient monitoring data are often nonexistent or limited. Appropriately verified air quality models, on the other hand, can provide the needed spatial and temporal resolution for multiple air pollutant concentrations at many locations. 
These same models can also be used to estimate the projected air quality and inputs for exposure models for future years, dependent on air pollution reduction activities, or due to the addition of new sources in a community (Isakov et al. 2006). For example, this model can address what happens if emissions from some specific stationary or mobile sources are reduced by certain amounts and what the associated impacts of these local controls versus regional controls may be. This model application helps determine which control options are most effective in reducing ambient concentrations.\nBoth the air quality modeling and feasibility analysis methodologies we used in this research have certain shortcomings. For instance, despite their advantages of being able to provide temporal (hourly) and spatial (at hundreds of locations) estimates, and having a long history of use by regulatory agencies in multipollutant mitigation strategies, models have uncertainties due to model inputs, algorithms, and model parameters (Sax and Isakov 2003). Therefore, in order to reduce uncertainty due to model inputs, detailed emissions and meteorological information should be provided for each model application. In the simulation-based epidemiological feasibility analyses we considered only single-pollutant models and did not include ecological covariates (e.g., income, poverty status, smoking) typically used in cross-sectional, ecological analysis (Özkaynak and Thurston 1987; Pope et al. 2009), because of a lack of complete information. Moreover, it is possible that some of the covariates may change over time, but presumably this may be less of an issue in local-scale assessments than in national-scale analyses. We did not perform joint optimizations with NOx and PM, which could be used to examine more complicated alternative study designs such as census block groups with low reduction levels in NOx but intermediate to high reductions in PM. Clearly, accounting for multipollutant strategies in future assessments will be important in implementing enhanced air pollution–health outcome risk management studies (Mauderly et al. 2010).\nThe linkages between air quality and exposure models (e.g., with the Stochastic Human Exposure and Dose Simulation Model and Hazardous Air Pollution Exposure Model) in the context of the New Haven study have been examined elsewhere (Isakov et al. 2009). Our biggest challenge has been with accessing geographically and temporally resolved health data in New Haven. Of course, this data gap is often a major challenge in other urban areas as well. Although there was strong local cooperation and local, state, and federal interest in working with the project, better research access to locally relevant health data should be both facilitated and encouraged. Given that the 2010 census has recently been collected and the air quality modeling for 2010 can be performed soon, we hope that the methodology we tested can be implemented in the near future using the actual 2010 local air quality modeling, census, and health data, in order to evaluate the results obtained from this feasibility study by using better databases and more robust models.\nBolstered by the findings from our study, the City of New Haven has been working to find better solutions for reducing air pollution burden and for understanding the impacts from air emissions. We presented the results from this analysis to the New Haven departments of Health, City Planning, and Economic Development and to the city chief executive officer. 
These results have been used by New Haven in finalizing their negotiations to obtain zero emissions from a proposed new power plant unit to meet peak demand operations, which will be achieved through offsets by the local power plant company and proposed retrofits of garbage trucks and some port operations and additional community benefits. Moreover, the city is also evaluating what can be done to reduce impacts from port operations and mitigate exposures at city schools located near busy roads and highways, in light of the detailed air quality modeling results and health risk evaluations presented here.", "In this project we successfully applied, compared, and evaluated exposure assessment and epidemiological modeling tools in the context of observed public health status in a relatively small community, New Haven, Connecticut, and provided the U.S. EPA and local, state, and city organizations with a new modeling-based methodology to measure the impact of collective risk mitigation approaches and regulations. Furthermore, because no single regulation or program that affects air quality can be isolated to track its effect on health, this project provided critical findings on how regulatory agencies may better examine the complex interactions of cumulative impacts on air quality and health effects from multiple actions in other urban communities." ]
[ "materials|methods", "results", null, "methods", "methods", "discussion", "conclusions" ]
[ "air pollution", "feasibility analysis", "health effects", "nitrogen oxides", "particulate matter" ]
Association of genetic variants of the histamine H1 and muscarinic M3 receptors with BMI and HbA1c values in patients on antipsychotic medication.
21336576
Antipsychotic affinity for the histamine H1 receptor and the muscarinic M3 receptor has been associated with the side effects of weight gain and development of diabetes, respectively.
RATIONALE
We included 430 Caucasian patients with a non-affective psychotic disorder using antipsychotics for at least 3 months. Primary endpoints of the study were cross-sectionally measured BMI and HbA1c; secondary endpoints were obesity and hyperglycaemia. Two single-nucleotide polymorphisms (SNPs) in the HRH1 gene, rs346074 and rs346070, and one SNP in the CHRM3 gene, rs3738435, were genotyped. Our primary hypothesis in this study was an interaction between genotype and antipsychotic affinity for the H1 and M3 receptors on BMI.
METHODS
A significant interaction between haplotype rs346074-rs346070 and antipsychotic H1 affinity was found for BMI (p value 0.025) and obesity (p value 0.005), comparing patients using high-H1 affinity antipsychotics with patients using low-H1 affinity antipsychotics. There was no association of CHRM3 gene variant rs3738435 with BMI, and we observed no association with HbA1c or hyperglycaemia for any of the variants.
RESULTS
This study, for the first time, demonstrates a significant association between HRH1 variants and BMI in patients with a psychotic disorder using antipsychotics. In the future, genotyping of HRH1 variants may help predict weight gain in patients using antipsychotics.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Antipsychotic Agents", "Body Mass Index", "Cross-Sectional Studies", "Female", "Genetic Variation", "Glycated Hemoglobin", "Humans", "Hyperglycemia", "Male", "Middle Aged", "Obesity", "Polymorphism, Single Nucleotide", "Psychotic Disorders", "Receptor, Muscarinic M3", "Receptors, Histamine H1", "Weight Gain", "Young Adult" ]
3121946
Introduction
The majority of patients with schizophrenia or other psychotic disorder use antipsychotic medication. Antipsychotic treatment, especially the use of clozapine and olanzapine, increases the risk of developing obesity (Allison et al. 1999; Lieberman et al. 2005; Parsons et al. 2009) and type 2 diabetes mellitus (T2DM) (Leslie and Rosenheck. 2004; Holt et al. 2005; Lieberman et al. 2005; Miller et al. 2005b; Newcomer. 2005; Gianfrancesco et al. 2006; Nasrallah. 2006). The underlying mechanisms of antipsychotic-induced weight gain and diabetes mellitus are unknown, and may involve different pathways. As in the general population, obesity may have an unfavorable impact on glucose homeostasis in patients using antipsychotics. However, several studies have shown elevated serum insulin levels following atypical antipsychotic medication independent of body mass index (BMI) (Melkersson et al. 2000; Arranz et al. 2004; Henderson et al. 2005). This finding suggests that antipsychotics may directly affect glucose homeostasis by mechanisms other than by weight gain alone. There is also a considerable variability among users of the same antipsychotic in weight gain and T2DM (e.g., not all patients on clozapine ultimately develop T2DM). It is plausible that this variability in patient propensity to these side effects is determined by a combination of genetic and environmental factors. Atypical antipsychotics may differ highly in their affinities for the dopaminergic, serotonergic, histaminergic, adrenergic, and muscarinic acetylcholine receptors (Roth et al. 2004). Combining receptor affinities and clinical data, several authors have concluded that histamine H1 antagonism showed the best correlation with drug-induced weight gain and diabetes mellitus (Wirshing et al. 1999; Kroeze et al. 2003; Matsui-Sakata et al. 2005). Likewise, antagonism of the muscarine acetylcholine receptor was suggested to play an important role, especially in the development of diabetes mellitus (Matsui-Sakata et al. 2005; Silvestre and Prous. 2005). Interactions with serotonergic (5-HT2C and 5-HT6) and adrenergic (alpha1A) receptors were also significantly correlated with metabolic parameters (Kroeze et al. 2003; Matsui-Sakata et al. 2005). To date, pharmacogenetic studies have shown the most consistent evidence for polymorphisms in the 5-HT2C receptor and leptin genes to be associated with antipsychotic-induced weight gain (Reynolds et al. 2003; Ellingrod et al. 2005; Miller et al. 2005a; Templeman et al. 2005; Zhang et al. 2007; Kang et al. 2008; Gregoor et al. 2009) and the metabolic syndrome (Mulder et al. 2007a; Yevtushenko et al. 2008; Mulder et al. 2009; Risselada et al. 2010). So far, only two studies (Basile et al. 2001; Hong et al. 2002) have reported on histamine H1 polymorphisms and antipsychotic-induced weight gain, both finding no association. Thus, the contribution of genetic variations of the histamine and muscarine acetylcholine receptors on the emergence of weight gain and diabetes in antipsychotic-treated patients remains to be elucidated. The ventromedial hypothalamus and the paraventricular nucleus of the brain, where H1 receptors are localized in high density (Sakata et al. 1995), play a central role in the development of obesity by regulating energy expenditure and food intake (Masaki et al. 2004). 
Clozapine, olanzapine, and quetiapine exhibit the highest affinities for the H1 receptor, whereas risperidone and aripiprazole exhibit lower, and ziprasidone and haloperidol exhibit hardly any affinity towards the H1 receptor (Roth et al. 2004; Nasrallah. 2008). Clozapine and olanzapine are also known to induce most weight gain, followed by quetiapine and risperidone. Aripiprazole, ziprasidone, and haloperidol are known to cause little or no weight gain at all (Wirshing et al. 1999; Nasrallah. 2008). Tricyclic antidepressants with a high antihistaminergic effect (e.g. amitriptyline) are found to induce weight gain as well (Zimmermann et al. 2003). The histamine H1 receptor may therefore play a role in the etiology of medication-induced weight gain. The M3 receptor is expressed on pancreatic β cells. These receptors seem to play a critical role in regulating insulin release and glucose homeostasis (Gautam et al. 2006). Impaired glucose tolerance and reduced levels of insulin were found in mice with targeted deletions in the CHRM3 gene (Gautam et al. 2006). This might indicate that antagonism of the β-cell M3 receptor leads to a higher risk of hyperglycemia and developing diabetes in humans. Olanzapine and clozapine, which have the highest binding affinities with the M3 receptor, have been associated with highest risk of developing T2DM (Citrome et al. 2004; Leslie and Rosenheck. 2004; Holt et al. 2005; Newcomer. 2005) and higher levels of glycated hemoglobin (HBA1c) and blood glucose (Lieberman et al. 2005; Gianfrancesco et al. 2006; Nasrallah. 2008). Risperidone, quetiapine, ziprasidone, haloperidol, and aripiprazole have weak to absent M3 receptor antagonistic activity (Roth et al. 2004; Nasrallah. 2008) and are associated with lower levels of HbA1c and blood glucose in patients (Lieberman et al. 2005; Nasrallah. 2008). Out of the known H1 receptor gene (HRH1) splice variants, we studied two polymorphisms in the B/K variant, which is by far the most prevalent (95%) in the brain (Swan et al. 2006). Rs346070 is a single-nucleotide polymorphism (SNP) and may be functional as it is located in the exonic splicing enhancer region. SNP rs346074 is located in the transcription factor binding sites of the HRH1 gene and may thus affect transcription rates. The muscarinic acetylcholine receptor M3 (CHRM3) variant rs3738435 is located in the 5′ untranslated region of the first exon. Its C allele was found to be associated with increased risk of early-onset type 2 diabetes and a reduced acute insulin response in a family-based sample of Pima Indians (Guo et al. 2006). This is, as far as we know, the first study to examine the pharmacogenetics of genetic variations in genes encoding for the histamine H1 (rs346074 and rs346070) and muscarine M3 receptors (rs3738435) in relation to BMI and HbA1c in Caucasian psychosis patients using antipsychotics. Our primary hypothesis in this cross-sectional study is an interaction between the mentioned variations on BMI and antipsychotic affinity for the H1 and M3 receptor.
null
null
Results
[SUBTITLE] Subjects [SUBSECTION] A total of 430 subjects met the inclusion criteria. Table 1 presents their demographic, genetic, and clinical characteristics. Approximately 95% of the patients had a diagnosis within the schizophrenia spectrum; the other patients had a psychotic disorder not otherwise specified (NOS).
Table 1. Demographic, genetic, and unadjusted clinical variables of the total study sample (n = 430)
 Age, mean (range): 38.4 (18–69)
 Gender: male 290 (67%); female 140 (33%)
 DSM-IV diagnosis: schizophrenia 333 (77%); schizoaffective disorder 77 (18%); psychotic disorder NOS 20 (5%)
 Antipsychotic medication: typical 68 (16%); atypical 362 (84%)
 BMI (kg/m2), mean (SD): 28.0 (5.2)
 Weight category: non-obese (BMI < 25) 135 (31%); overweight (BMI 25-30) 157 (37%); obese (BMI > 30) 138 (32%)
 HbA1c (%) (n = 221): mean (SD) 5.78 (1.25); hyperglycaemia (HbA1c ≥ 6.1% or antidiabetic medication) 30 (14%)
 Genotype counts: HRH1 rs346074 (GG/GA/AA) 182/189/55; HRH1 rs346070 (CC/CT/TT) 286/128/15; CHRM3 rs3738435 (TT/TC/CC) 276/137/17
[SUBTITLE] Medication [SUBSECTION] Patients used monotherapy with clozapine (21.9%), olanzapine (22.6%), risperidone (22.1%), aripiprazole (2.3%), quetiapine (4.2%), or typical antipsychotics (14.4%), or used a combination of more than one antipsychotic (12.6%). No substantial differences in BMI (range 27.4–29.3 kg/m2) were found between users of the various antipsychotics (p value ANOVA 0.58) or between different diagnoses. HbA1c values (range 5.5–6.8%) were significantly different between the various antipsychotics (p value ANOVA 0.033). Between users of typical and atypical antipsychotics, no differences in BMI and HbA1c were found (p values Student's t test 0.93 and 0.82, respectively). Of all antipsychotics used in our population, clozapine, olanzapine, and quetiapine were defined as high H1 receptor affinity antipsychotics, and clozapine and olanzapine as high M3 receptor affinity antipsychotics.
Table 2. Mean BMI values and obesity proportions per genotype group for SNPs rs346074, rs346070, and rs3738435 among 430 antipsychotic users
 HRH1 rs346074 (GG/GA/AA):
  BMI, all users (n = 182/189/55): 28.0 (5.2) / 27.8 (5.3) / 28.5 (5.0); p genotype 0.93; p interaction 0.046
  BMI, high affinity (83/97/28): 27.5 (4.2) / 27.7 (5.3) / 30.1 (5.3); p genotype 0.27
  BMI, low affinity (99/92/27): 28.4 (5.9) / 27.9 (5.2) / 26.8 (4.0); p genotype 0.10
  Obesity, all users: 34% / 30% / 31%; p genotype 0.58; p interaction 0.005
  Obesity, high affinity: 25% / 30% / 46%; p genotype 0.14
  Obesity, low affinity: 40% / 30% / 15%; p genotype 0.015
 HRH1 rs346070 (CC/CT/TT):
  BMI, all users (286/128/15): 28.0 (5.1) / 28.2 (5.6) / 27.4 (4.8); p genotype 0.74; p interaction 0.044
  BMI, high affinity (139/58/12): 27.6 (4.7) / 29.0 (5.9) / 28.5 (4.2); p genotype 0.10
  BMI, low affinity (147/70/3): 28.4 (5.5) / 27.5 (5.3) / 22.9 (4.9); p genotype 0.22
  Obesity, all users: 34% / 29% / 20%; p genotype 0.22; p interaction 0.009
  Obesity, high affinity: 28% / 38% / 25%; p genotype 0.36
  Obesity, low affinity: 39% / 21% / 0%; p genotype 0.006
 CHRM3 rs3738435 (TT/TC/CC):
  BMI, all users (276/137/17): 28.0 (5.2) / 27.6 (5.2) / 30.4 (5.5); p genotype 0.60; p interaction 0.88
  BMI, high affinity (127/57/7): 27.8 (4.9) / 27.8 (4.9) / 30.7 (6.1); p genotype 0.33
  BMI, low affinity (149/80/10): 28.3 (5.5) / 27.5 (5.4) / 30.2 (5.3); p genotype 0.90
  Obesity, all users: 31% / 32% / 53%; p genotype 0.15; p interaction 0.56
  Obesity, high affinity: 28% / 32% / 57%; p genotype 0.16
  Obesity, low affinity: 34% / 33% / 50%; p genotype 0.56
 Notes: BMI is given as mean (SD) in kg/m2 and obesity as a percentage, per genotype group, separately for users of antipsychotics with high and low affinity for the histamine H1 receptor (high affinity: clozapine, olanzapine, quetiapine; for rs346074 and rs346070) or the muscarinic M3 receptor (high affinity: clozapine, olanzapine; for rs3738435). p values are given for (1) the β of the variable genotype in linear and logistic regression and (2) the β of the interaction term genotype x affinity in linear and logistic regression. All results are adjusted for age, gender, and population group. Genotype was tested additively for rs346074 and dominantly for the minor allele for rs346070 and rs3738435. Significant p values were shown in bold in the original table.
[SUBTITLE] Association analyses [SUBSECTION] Genotype distributions were consistent with the Hardy–Weinberg equilibrium (p values 0.59, 0.88, and 1.00 for rs346074, rs346070, and rs3738435, respectively). Age (increase of 0.055 kg/m2 per year, p value 0.021) and gender (increase of 2.97 kg/m2 if female, p value <0.001) were significantly associated with BMI. Patient population was not associated with BMI. HbA1c was not associated with patient population, age, or gender. Demographic characteristics, DSM-IV diagnosis, and antipsychotic distributions did not differ between genotype groups for any of the three variants.
The genetic associations with BMI and obesity are depicted in Table 2. In users of antipsychotics with high H1 affinity, there was a non-significant increase in BMI per A allele of rs346074 and per T allele of rs346070. An opposite trend can be seen in users of a low H1 affinity antipsychotic (see Fig. 1). The increasing trend in BMI with minor alleles of rs346074 and rs346070 in high H1 affinity antipsychotic users was significantly different from the decreasing trend in BMI with minor alleles in low H1 affinity antipsychotic users. The interaction term genotype x affinity was significant when using an additive or recessive model for the A allele of rs346074 (p values 0.046 and 0.033, respectively), and when using a dominant model for the T allele of rs346070 (p value 0.044).
Fig. 1 HRH1 variants rs346074 and rs346070 and mean BMI values in users of antipsychotics with and without affinity for the H1 receptor: a significant opposite effect can be seen between genotype and BMI in users of antipsychotics with high versus low affinity for the H1 receptor.
Logistic regression showed similar results for genotype and obesity, but the associations were even stronger and more significant. The interaction terms genotype x affinity for rs346074 (OR 2.80, 95% CI 1.23–6.37, p value 0.015) and rs346070 (OR 2.51, 95% CI 1.33–4.74, p value 0.005) were both significant. Thus, for a patient, the risk of obesity per minor allele of rs346074 is more than two-and-a-half times higher when using a high H1 affinity antipsychotic than when using a low H1 affinity antipsychotic.
The two HRH1 SNPs were found to be in substantial LD (D′ = 1.00, r2 = 0.42). Haplotype analyses of the two polymorphisms showed similar opposite effects of haplotype on BMI and obesity in low and high H1 affinity antipsychotic users (see Table 3). For each A-T haplotype, the risk of obesity relative to the reference haplotype G-C was more than three times higher when using a high H1 affinity antipsychotic than when using a low H1 affinity antipsychotic (p value 0.005).
Table 3. Haplotype analysis on BMI and obesity for rs346074 and rs346070 of the HRH1 gene
 BMI (β of haplotype in linear regression):
  A-C vs G-C: high H1 affinity users +0.569 (p 0.38); low affinity users −0.129 (p 0.85); interaction haplotype x affinity 0.795 (p 0.39)
  A-T vs G-C: high affinity users +0.941 (p 0.10); low affinity users −1.093 (p 0.13); interaction 2.043 (p 0.025)
 Obesity (odds ratio, e^β, of haplotype in logistic regression):
  A-C vs G-C: high affinity users 1.672 (p 0.10); low affinity users 0.795 (p 0.43); interaction 2.110 (p 0.07)
  A-T vs G-C: high affinity users 1.256 (p 0.42); low affinity users 0.375 (p 0.004); interaction 3.331 (p 0.005)
 Notes: Haplotypes A-C and A-T are compared with the most frequent haplotype G-C as the reference; haplotype G-T was not prevalent. All results are adjusted for age, gender, and population group. Significant p values were shown in bold in the original table.
In the total sample of antipsychotic users, CHRM3 rs3738435 had no effect on BMI. There were no differences in genotype effect on BMI between users of antipsychotics with high and low affinity for the M3 receptor. None of the three SNPs showed any association with HbA1c or hyperglycaemia (see supplemental Table 1).
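A minimal sketch of the genotype x antipsychotic-affinity interaction models reported above is given below. The original analyses were run in SPSS 16.0; this Python/statsmodels version uses simulated data and hypothetical variable names and only illustrates the model form (BMI regressed on genotype, affinity, their interaction, and the covariates age, gender, and patient population, plus the analogous logistic model for obesity).

```python
# Illustrative sketch (not the authors' code): linear and logistic regressions with a
# genotype x antipsychotic-affinity interaction term, adjusted for age, gender, and
# patient population. The data frame below is simulated; column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 430
df = pd.DataFrame({
    "bmi": rng.normal(28.0, 5.2, n),
    "genotype": rng.integers(0, 3, n),        # copies of the minor allele (additive coding)
    "high_affinity": rng.integers(0, 2, n),   # 1 = high-H1-affinity antipsychotic, 0 = low
    "age": rng.normal(38, 11, n),
    "female": rng.integers(0, 2, n),
    "population": rng.integers(0, 3, n),      # recruitment site / patient population
})
df["obese"] = (df["bmi"] > 30).astype(int)

# Linear regression of BMI; the coefficient of interest is the interaction term.
ols_fit = smf.ols("bmi ~ genotype * high_affinity + age + female + C(population)", data=df).fit()
print("BMI interaction beta:", ols_fit.params["genotype:high_affinity"],
      "p =", ols_fit.pvalues["genotype:high_affinity"])

# Logistic regression of obesity; exponentiating the interaction beta gives an odds ratio.
logit_fit = smf.logit("obese ~ genotype * high_affinity + age + female + C(population)",
                      data=df).fit(disp=False)
print("Obesity interaction OR:", float(np.exp(logit_fit.params["genotype:high_affinity"])))
```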
null
null
[ "Setting", "Design and patients", "Outcome measures", "Determinants", "Genotyping", "Statistical analysis", "Subjects", "Medication", "Association analyses", "" ]
[ "For this study, three similar psychiatric patient populations from the Netherlands were pooled. The majority of patients were from the ongoing ‘Pharmacotherapy Monitoring and Outcome Survey’ (PHAMOUS). PHAMOUS is an initiative from the Rob Giel Research centre, including three Mental Health Care Institutions and the University Centre of Psychiatry of Groningen. It combines a yearly somatic screening with routine outcome assessment in patients using antipsychotics included. Subjects included in this study originated from the northern part of the Netherlands. The two other study populations have been described in detail elsewhere (Cohen et al. 2006; Mulder et al. 2007a; Mulder et al. 2009). In brief, these populations consisted of patients from a Department of Psychiatric Disorders of a general hospital in the North of the Netherlands (Mulder et al. 2007a; Mulder et al. 2009), and patients from a Mental Health Care Organisation in the West of the Netherlands (Cohen et al. 2006).", "A cross-sectional design was used to assess the association between the variants with BMI and HbA1c. Caucasian patients (northern European ancestry) were eligible for inclusion in this study when they met Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria for a non-affective psychotic disorder (schizophrenia, schizophreniform disorder, schizoaffective disorder, delusional disorder, psychotic disorder not otherwise specified (NOS)), were 18 years or older, and used one or more antipsychotics for at least 3 months.", "The primary endpoints of the study were BMI, calculated as body weight (kilogram) divided by height squared (square meter), and the proportion glycated hemoglobin HbA1c (percent). BMI was measured in all patients; HbA1c values were available only in the PHAMOUS population.", "Primary determinants were the genotypes of the two SNPs in the HRH1 gene, rs346074 (G/A) and rs346070 (C/T), and one SNP in the CHRM3 gene, rs3738435 (C/T). Other clinical and demographic (co)variables that were measured in the study were gender, age, patient population, DSM-IV-diagnosis, and antipsychotic medication used at the day of assessment.", "The study protocol was approved by the local university hospital medical ethics committee and all participants gave their written informed consent. Genomic DNA was extracted from EDTA whole blood according to standard protocols. Genotyping of rs3738435, rs346070, and rs346074 was conducted blind to the clinical status of the patients. Fluorogenic 5′-exonuclease TaqMan® assays were applied for the genotyping (Made-To-Order assays obtained from Applied Biosystems; C2747428510, C60474110, and C2685588510, respectively). Genotyping success rates were 99% for rs346074 and 100% for rs346070 and rs3738435.", "To compare BMI and HbA1c values among various users of antipsychotics (i.e., BMI in users of clozapine versus olanzapine versus risperidon versus aripiprazole versus quetiapine versus users of more than one antipsychotic) and between patients using typical versus atypical antipsychotics we applied analysis of variance (ANOVA) and Student's t test, respectively. We used linear regression to explore the relationship of BMI and HbA1c with the independent variables age, gender, and patient population.\nDeparture from Hardy–Weinberg Equilibrium was calculated by a χ\n2 test with 1df. 
We initially considered an additive model for rs346074 (HRH1), and, due to the low numbers of the recessive genotype, a dominant model for rs3738435 (CHRM3) and rs346070 (HRH1).\nWe first compared demographic characteristics between the genotypes of the three variants. To test our primary hypothesis, we applied linear regression to test whether genotype in users of high-affinity antipsychotics has a significantly different effect on BMI and HbA1c than in users of low-affinity antipsychotics. We used the interaction term affinity x genotype in our model to test this association, where affinity was coded as 1 or 0 when the patient used a high- or a low-affinity antipsychotic, respectively. A pKi > 7 defined a high-affinity antipsychotic for a certain receptor (Nasrallah. 2008); the other antipsychotics were considered to have low affinity. We adjusted for age, gender, and patient population in our analyses. Similarly, logistic regression was used to analyze the associations with obesity (BMI > 30 kg/m2) and hyperglycemia (HbA1c ≥ 6.1% or the use of antidiabetics). Additionally, for the two HRH1 variants, haplotype analysis using the haplotype trend regression approach (Zaykin et al. 2002) was performed, with haplotypes inferred by the software package PHASE (Stephens et al. 2001; Stephens and Donnelly. 2003). Pairwise linkage disequilibrium (LD) was tested by calculating D′ as well as r2. All of the analyses were performed using standard software (SPSS 16.0 for Windows). The level of significance was set at 0.05, two-sided.", "Below is the link to the electronic supplementary material.\nSupplemental Table 1 (DOC 52 kb)" ]
Introduction

The majority of patients with schizophrenia or another psychotic disorder use antipsychotic medication. Antipsychotic treatment, especially with clozapine and olanzapine, increases the risk of developing obesity (Allison et al. 1999; Lieberman et al. 2005; Parsons et al. 2009) and type 2 diabetes mellitus (T2DM) (Leslie and Rosenheck 2004; Holt et al. 2005; Lieberman et al. 2005; Miller et al. 2005b; Newcomer 2005; Gianfrancesco et al. 2006; Nasrallah 2006). The mechanisms underlying antipsychotic-induced weight gain and diabetes mellitus are unknown and may involve several pathways. As in the general population, obesity may have an unfavorable impact on glucose homeostasis in patients using antipsychotics. However, several studies have shown elevated serum insulin levels during atypical antipsychotic treatment independent of body mass index (BMI) (Melkersson et al. 2000; Arranz et al. 2004; Henderson et al. 2005), which suggests that antipsychotics may also affect glucose homeostasis directly, by mechanisms other than weight gain alone. There is also considerable variability in weight gain and T2DM among users of the same antipsychotic (e.g., not all patients on clozapine ultimately develop T2DM). It is plausible that this variability in patient propensity to these side effects is determined by a combination of genetic and environmental factors.

Atypical antipsychotics differ considerably in their affinities for dopaminergic, serotonergic, histaminergic, adrenergic, and muscarinic acetylcholine receptors (Roth et al. 2004). Combining receptor affinities with clinical data, several authors have concluded that histamine H1 antagonism correlates best with drug-induced weight gain and diabetes mellitus (Wirshing et al. 1999; Kroeze et al. 2003; Matsui-Sakata et al. 2005). Likewise, antagonism of the muscarinic acetylcholine receptor has been suggested to play an important role, especially in the development of diabetes mellitus (Matsui-Sakata et al. 2005; Silvestre and Prous 2005). Interactions with serotonergic (5-HT2C and 5-HT6) and adrenergic (alpha1A) receptors have also been significantly correlated with metabolic parameters (Kroeze et al. 2003; Matsui-Sakata et al. 2005). To date, pharmacogenetic studies have provided the most consistent evidence for associations of polymorphisms in the 5-HT2C receptor and leptin genes with antipsychotic-induced weight gain (Reynolds et al. 2003; Ellingrod et al. 2005; Miller et al. 2005a; Templeman et al. 2005; Zhang et al. 2007; Kang et al. 2008; Gregoor et al. 2009) and the metabolic syndrome (Mulder et al. 2007a; Yevtushenko et al. 2008; Mulder et al. 2009; Risselada et al. 2010). So far, only two studies (Basile et al. 2001; Hong et al. 2002) have reported on histamine H1 polymorphisms and antipsychotic-induced weight gain, and both found no association. Thus, the contribution of genetic variation in the histamine and muscarinic acetylcholine receptor genes to the emergence of weight gain and diabetes in antipsychotic-treated patients remains to be elucidated.

The ventromedial hypothalamus and the paraventricular nucleus, where H1 receptors are expressed at high density (Sakata et al. 1995), play a central role in the development of obesity by regulating energy expenditure and food intake (Masaki et al. 2004).
Clozapine, olanzapine, and quetiapine exhibit the highest affinities for the H1 receptor, whereas risperidone and aripiprazole exhibit lower affinity, and ziprasidone and haloperidol hardly any (Roth et al. 2004; Nasrallah 2008). Clozapine and olanzapine are also known to induce the most weight gain, followed by quetiapine and risperidone, whereas aripiprazole, ziprasidone, and haloperidol cause little or no weight gain (Wirshing et al. 1999; Nasrallah 2008). Tricyclic antidepressants with a strong antihistaminergic effect (e.g., amitriptyline) also induce weight gain (Zimmermann et al. 2003). The histamine H1 receptor may therefore play a role in the etiology of medication-induced weight gain.

The M3 receptor is expressed on pancreatic β cells, where it appears to play a critical role in regulating insulin release and glucose homeostasis (Gautam et al. 2006). Impaired glucose tolerance and reduced insulin levels were found in mice with targeted deletions of the CHRM3 gene (Gautam et al. 2006), which might indicate that antagonism of the β-cell M3 receptor increases the risk of hyperglycaemia and diabetes in humans. Olanzapine and clozapine, which have the highest binding affinities for the M3 receptor, have been associated with the highest risk of developing T2DM (Citrome et al. 2004; Leslie and Rosenheck 2004; Holt et al. 2005; Newcomer 2005) and with higher levels of glycated hemoglobin (HbA1c) and blood glucose (Lieberman et al. 2005; Gianfrancesco et al. 2006; Nasrallah 2008). Risperidone, quetiapine, ziprasidone, haloperidol, and aripiprazole have weak to absent M3 receptor antagonistic activity (Roth et al. 2004; Nasrallah 2008) and are associated with lower levels of HbA1c and blood glucose in patients (Lieberman et al. 2005; Nasrallah 2008).

Of the known H1 receptor gene (HRH1) splice variants, we studied two polymorphisms in the B/K variant, which is by far the most prevalent (95%) in the brain (Swan et al. 2006). Rs346070 is a single-nucleotide polymorphism (SNP) that may be functional, as it is located in an exonic splicing enhancer region. SNP rs346074 is located in a transcription factor binding site of the HRH1 gene and may thus affect transcription rates. The muscarinic acetylcholine receptor M3 (CHRM3) variant rs3738435 is located in the 5′ untranslated region of the first exon; its C allele has been associated with an increased risk of early-onset type 2 diabetes and a reduced acute insulin response in a family-based sample of Pima Indians (Guo et al. 2006).

To our knowledge, this is the first study to examine variants in the genes encoding the histamine H1 receptor (rs346074 and rs346070) and the muscarinic M3 receptor (rs3738435) in relation to BMI and HbA1c in Caucasian psychosis patients using antipsychotics. Our primary hypothesis in this cross-sectional study was that the effect of these variants on BMI depends on the antipsychotic's affinity for the H1 and M3 receptors.

Materials and methods

Setting

For this study, three similar psychiatric patient populations from the Netherlands were pooled. The majority of patients came from the ongoing ‘Pharmacotherapy Monitoring and Outcome Survey’ (PHAMOUS), an initiative of the Rob Giel Research centre that includes three Mental Health Care Institutions and the University Centre of Psychiatry of Groningen.
It combines a yearly somatic screening with routine outcome assessment in patients using antipsychotics. Subjects included in this study originated from the northern part of the Netherlands. The two other study populations have been described in detail elsewhere (Cohen et al. 2006; Mulder et al. 2007a; Mulder et al. 2009). In brief, they consisted of patients from a Department of Psychiatric Disorders of a general hospital in the north of the Netherlands (Mulder et al. 2007a; Mulder et al. 2009) and patients from a Mental Health Care Organisation in the west of the Netherlands (Cohen et al. 2006).

Design and patients

A cross-sectional design was used to assess the association of the variants with BMI and HbA1c. Caucasian patients (northern European ancestry) were eligible for inclusion when they met Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria for a non-affective psychotic disorder (schizophrenia, schizophreniform disorder, schizoaffective disorder, delusional disorder, or psychotic disorder not otherwise specified (NOS)), were 18 years or older, and had used one or more antipsychotics for at least 3 months.

Outcome measures

The primary endpoints of the study were BMI, calculated as body weight (kilograms) divided by height squared (square metres), and the proportion of glycated hemoglobin, HbA1c (percent). BMI was measured in all patients; HbA1c values were available only in the PHAMOUS population.
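As a small illustration of these endpoint definitions, the sketch below computes BMI and the weight categories used in Table 1; the function names and the example values are ours and purely illustrative.

```python
# Illustrative only: BMI and the weight categories used in this study.
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / height_m ** 2

def weight_category(bmi_value: float) -> str:
    """Weight categories as reported in Table 1 (BMI in kg/m2)."""
    if bmi_value < 25:
        return "non-obese"
    if bmi_value <= 30:
        return "overweight"
    return "obese"  # obesity was defined as BMI > 30 kg/m2

# Hypothetical example: 85 kg and 1.74 m gives a BMI of about 28.1 kg/m2 ("overweight").
print(round(bmi(85, 1.74), 1), weight_category(bmi(85, 1.74)))
```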
Determinants

The primary determinants were the genotypes of two SNPs in the HRH1 gene, rs346074 (G/A) and rs346070 (C/T), and one SNP in the CHRM3 gene, rs3738435 (C/T). Other clinical and demographic (co)variables recorded in the study were gender, age, patient population, DSM-IV diagnosis, and the antipsychotic medication used on the day of assessment.

Genotyping

The study protocol was approved by the local university hospital medical ethics committee, and all participants gave written informed consent. Genomic DNA was extracted from EDTA whole blood according to standard protocols. Genotyping of rs3738435, rs346070, and rs346074 was conducted blind to the clinical status of the patients. Fluorogenic 5′-exonuclease TaqMan® assays were used for genotyping (Made-To-Order assays obtained from Applied Biosystems; C2747428510, C60474110, and C2685588510, respectively). Genotyping success rates were 99% for rs346074 and 100% for rs346070 and rs3738435.

Statistical analysis

To compare BMI and HbA1c values among users of the various antipsychotics (clozapine versus olanzapine versus risperidone versus aripiprazole versus quetiapine versus users of more than one antipsychotic) and between patients using typical versus atypical antipsychotics, we applied analysis of variance (ANOVA) and Student's t test, respectively. We used linear regression to explore the relationship of BMI and HbA1c with the independent variables age, gender, and patient population.

Departure from Hardy–Weinberg equilibrium was assessed with a χ2 test with 1 df. We initially considered an additive genetic model for rs346074 (HRH1) and, because of the low numbers of the recessive genotype, a dominant model for rs3738435 (CHRM3) and rs346070 (HRH1).
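To make the genetic quality control concrete, the sketch below shows a Hardy–Weinberg goodness-of-fit test with 1 df in Python. It is an illustration of the test described here, not the software actually used, and the function name is ours.

```python
# Illustrative sketch (the study used SPSS): chi-square goodness-of-fit test for
# Hardy-Weinberg equilibrium with 1 degree of freedom.
from scipy.stats import chi2

def hwe_p_value(n_hom_common: int, n_het: int, n_hom_rare: int) -> float:
    """HWE p value from observed genotype counts of a biallelic SNP."""
    n = n_hom_common + n_het + n_hom_rare
    p = (2 * n_hom_common + n_het) / (2 * n)      # frequency of the common allele
    q = 1 - p
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    observed = [n_hom_common, n_het, n_hom_rare]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    # df = 3 genotype classes - 1 - 1 estimated allele frequency = 1
    return chi2.sf(stat, df=1)

# Using the rs346074 genotype counts from Table 1 (GG/GA/AA = 182/189/55)
# gives p of roughly 0.59, in line with the value reported in the Results.
print(round(hwe_p_value(182, 189, 55), 2))
```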
We first compared demographic characteristics between the genotype groups of the three variants. To test our primary hypothesis, we used linear regression to test whether genotype has a significantly different effect on BMI and HbA1c in users of high-affinity antipsychotics than in users of low-affinity antipsychotics. We used the interaction term affinity x genotype in our model to test this association, where affinity was coded as 1 or 0 when the patient used a high- or a low-affinity antipsychotic, respectively. A pKi > 7 defined a high-affinity antipsychotic for a given receptor (Nasrallah 2008); the other antipsychotics were considered to have low affinity. We adjusted for age, gender, and patient population in all analyses. Similarly, logistic regression was used to analyze the associations with obesity (BMI > 30 kg/m2) and hyperglycaemia (HbA1c ≥ 6.1% or the use of antidiabetic medication). Additionally, for the two HRH1 variants, haplotype analysis was performed using the haplotype trend regression approach (Zaykin et al. 2002), with haplotypes inferred by the software package PHASE (Stephens et al. 2001; Stephens and Donnelly 2003). Pairwise linkage disequilibrium (LD) was assessed by calculating D′ as well as r2. All analyses were performed with standard software (SPSS 16.0 for Windows). The level of significance was set at 0.05, two-sided.
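The regression models described in this section can be illustrated as follows. This is a sketch in Python/statsmodels rather than the SPSS analysis actually performed, and the data frame and column names (bmi, obese, genotype, high_affinity, age, female, population) are hypothetical placeholders for the study variables.

```python
# Illustrative sketch of the genotype x affinity interaction models; not the
# authors' SPSS code. `genotype` counts risk alleles (0/1/2 additive, 0/1
# dominant) and `high_affinity` is 1 for high- and 0 for low-affinity drugs.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_interaction_models(df: pd.DataFrame) -> dict:
    # Linear regression of BMI with the interaction term, adjusted for age,
    # gender and patient population.
    lin = smf.ols("bmi ~ genotype * high_affinity + age + female + C(population)",
                  data=df).fit()
    # Logistic regression for obesity (BMI > 30 kg/m2) with the same terms.
    logit = smf.logit("obese ~ genotype * high_affinity + age + female + C(population)",
                      data=df).fit(disp=False)
    return {
        "bmi_interaction_p": float(lin.pvalues["genotype:high_affinity"]),
        # Exponentiating the logistic interaction coefficient gives the odds
        # ratio for obesity per risk allele in high- versus low-affinity users.
        "obesity_interaction_OR": float(np.exp(logit.params["genotype:high_affinity"])),
    }
```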
Results

Subjects

A total of 430 subjects met the inclusion criteria. Table 1 presents their demographic, genetic, and clinical characteristics. Approximately 95% of the patients had a diagnosis within the schizophrenia spectrum; the remaining patients had a psychotic disorder not otherwise specified (NOS).

Table 1. Demographic, genetic, and unadjusted clinical variables of the total study sample (n = 430)
  Age, mean (range): 38.4 (18–69)
  Gender: male 290 (67%); female 140 (33%)
  DSM-IV diagnosis: schizophrenia 333 (77%); schizoaffective disorder 77 (18%); psychotic disorder NOS 20 (5%)
  Antipsychotic medication: typical 68 (16%); atypical 362 (84%)
  BMI (kg/m2), mean (SD): 28.0 (5.2)
  Weight category: non-obese (BMI < 25) 135 (31%); overweight (BMI 25–30) 157 (37%); obesity (BMI > 30) 138 (32%)
  HbA1c (%) (n = 221): mean (SD) 5.78 (1.25); hyperglycaemia (HbA1c ≥ 6.1% or antidiabetic medication) 30 (14%)
  Genotype counts: HRH1 rs346074 (GG/GA/AA) 182/189/55; HRH1 rs346070 (CC/CT/TT) 286/128/15; CHRM3 rs3738435 (TT/TC/CC) 276/137/17

Medication

Patients used monotherapy with clozapine (21.9%), olanzapine (22.6%), risperidone (22.1%), aripiprazole (2.3%), quetiapine (4.2%), or a typical antipsychotic (14.4%), or used a combination of more than one antipsychotic (12.6%). No substantial differences in BMI (range 27.4–29.3 kg/m2) were found between users of the various antipsychotics (ANOVA, p = 0.58) or between the different diagnoses.
HbA1c values (range 5.5–6.8%) differed significantly between the various antipsychotics (ANOVA, p = 0.033). Between users of typical and atypical antipsychotics, no differences in BMI or HbA1c were found (Student's t test, p = 0.93 and 0.82, respectively). Of all antipsychotics used in our population, clozapine, olanzapine, and quetiapine were defined as high H1 receptor affinity antipsychotics, and clozapine and olanzapine as high M3 receptor affinity antipsychotics.

Table 2. Mean BMI values and obesity proportions per genotype group for SNPs rs346074, rs346070, and rs3738435 among 430 antipsychotic users

HRH1 rs346074 (genotypes GG/GA/AA)
  BMI, all users (n = 182/189/55): 28.0 (5.2) / 27.8 (5.3) / 28.5 (5.0); p genotype = 0.93; p interaction genotype x affinity = 0.046
    High H1 affinity (n = 83/97/28): 27.5 (4.2) / 27.7 (5.3) / 30.1 (5.3); p = 0.27
    Low H1 affinity (n = 99/92/27): 28.4 (5.9) / 27.9 (5.2) / 26.8 (4.0); p = 0.10
  Obesity, all users (n = 182/189/55): 34% / 30% / 31%; p genotype = 0.58; p interaction = 0.005
    High H1 affinity (n = 83/97/28): 25% / 30% / 46%; p = 0.14
    Low H1 affinity (n = 99/92/27): 40% / 30% / 15%; p = 0.015

HRH1 rs346070 (genotypes CC/CT/TT)
  BMI, all users (n = 286/128/15): 28.0 (5.1) / 28.2 (5.6) / 27.4 (4.8); p genotype = 0.74; p interaction = 0.044
    High H1 affinity (n = 139/58/12): 27.6 (4.7) / 29.0 (5.9) / 28.5 (4.2); p = 0.10
    Low H1 affinity (n = 147/70/3): 28.4 (5.5) / 27.5 (5.3) / 22.9 (4.9); p = 0.22
  Obesity, all users (n = 286/128/15): 34% / 29% / 20%; p genotype = 0.22; p interaction = 0.009
    High H1 affinity (n = 139/58/12): 28% / 38% / 25%; p = 0.36
    Low H1 affinity (n = 147/70/3): 39% / 21% / 0%; p = 0.006

CHRM3 rs3738435 (genotypes TT/TC/CC)
  BMI, all users (n = 276/137/17): 28.0 (5.2) / 27.6 (5.2) / 30.4 (5.5); p genotype = 0.60; p interaction = 0.88
    High M3 affinity (n = 127/57/7): 27.8 (4.9) / 27.8 (4.9) / 30.7 (6.1); p = 0.33
    Low M3 affinity (n = 149/80/10): 28.3 (5.5) / 27.5 (5.4) / 30.2 (5.3); p = 0.90
  Obesity, all users (n = 276/137/17): 31% / 32% / 53%; p genotype = 0.15; p interaction = 0.56
    High M3 affinity (n = 127/57/7): 28% / 32% / 57%; p = 0.16
    Low M3 affinity (n = 149/80/10): 34% / 33% / 50%; p = 0.56

BMI (kg/m2, mean and standard deviation) and obesity (%) are given per genotype group, separately for users of antipsychotics with low and high affinity for the histamine H1 receptor (for rs346074 and rs346070; high affinity: clozapine, olanzapine, and quetiapine) and the muscarinic M3 receptor (for rs3738435; high affinity: clozapine and olanzapine). p values are given for (1) the β of the variable genotype in linear and logistic regression and (2) the β of the interaction term genotype x affinity in linear and logistic regression. All results are adjusted for age, gender, and population group. Genotype was tested additively for rs346074 and dominantly for the minor allele for rs346070 and rs3738435. p values below 0.05 were considered significant.
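For illustration, the affinity grouping used in these analyses can be coded as a simple lookup. The sets below reproduce the classification stated in the text (pKi > 7 counted as high affinity) and are not an exhaustive pharmacological table.

```python
# Illustrative sketch of the receptor-affinity grouping described in the text.
HIGH_H1_AFFINITY = {"clozapine", "olanzapine", "quetiapine"}
HIGH_M3_AFFINITY = {"clozapine", "olanzapine"}

def affinity_group(drug: str, receptor: str) -> int:
    """Return 1 for a high-affinity antipsychotic, 0 for low affinity."""
    drug = drug.lower()
    if receptor == "H1":
        return int(drug in HIGH_H1_AFFINITY)
    if receptor == "M3":
        return int(drug in HIGH_M3_AFFINITY)
    raise ValueError("receptor must be 'H1' or 'M3'")

# Example: quetiapine counts as high affinity for H1 but not for M3.
print(affinity_group("quetiapine", "H1"), affinity_group("quetiapine", "M3"))
```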
Association analyses
Genotype distributions were consistent with Hardy–Weinberg equilibrium (p = 0.59, 0.88, and 1.00 for rs346074, rs346070, and rs3738435, respectively). Age (increase of 0.055 kg/m2 per year, p = 0.021) and gender (increase of 2.97 kg/m2 if female, p < 0.001) were significantly associated with BMI. Patient population was not associated with BMI. HbA1c was not associated with patient population, age, or gender. Demographic characteristics, DSM-IV diagnosis, and antipsychotic distributions did not differ between genotype groups for any of the three variants.

Table 2 shows the genetic associations with BMI and obesity. In users of antipsychotics with high H1 affinity, there was a non-significant increase in BMI per A allele of rs346074 and per T allele of rs346070, whereas an opposite trend was seen in users of low H1 affinity antipsychotics (Fig. 1). The increasing trend in BMI with the minor alleles of rs346074 and rs346070 in high H1 affinity antipsychotic users differed significantly from the decreasing trend in low H1 affinity antipsychotic users: the interaction term genotype x affinity was significant with an additive or a recessive model for the A allele of rs346074 (p = 0.046 and 0.033, respectively) and with a dominant model for the T allele of rs346070 (p = 0.044).

Fig. 1 HRH1 variants rs346074 and rs346070 and mean BMI values in users of antipsychotics with and without affinity for the H1 receptor: a significantly opposite effect of genotype on BMI is seen in users of antipsychotics with high versus low affinity for the H1 receptor.

Logistic regression showed similar, and even stronger, results for genotype and obesity. The interaction terms genotype x affinity for rs346074 (OR 2.80, 95% CI 1.23–6.37, p = 0.015) and rs346070 (OR 2.51, 95% CI 1.33–4.74, p = 0.005) were both significant. Thus, a patient's risk of obesity per minor allele of rs346074 is more than two and a half times higher when using a high H1 affinity antipsychotic than when using a low H1 affinity antipsychotic.

The two HRH1 SNPs were in substantial LD (D′ = 1.00, r2 = 0.42). Haplotype analyses of the two polymorphisms showed similarly opposite effects of haplotype on BMI and obesity in low and high H1 affinity antipsychotic users (Table 3).
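Before turning to the haplotype-specific results, the sketch below shows how the LD measures quoted above are computed from haplotype and allele frequencies. The example frequencies are derived from the genotype counts in Table 1 under the assumption of complete LD (every T haplotype carrying A) and approximately reproduce the reported values; the code itself is only an illustration, not part of the original analysis.

```python
# Illustrative sketch: pairwise LD measures D' and r^2 from haplotype frequencies.
def ld_measures(p_ab: float, p_a: float, p_b: float) -> tuple:
    """D' and r^2 for alleles A (SNP 1) and B (SNP 2).

    p_ab is the frequency of the A-B haplotype; p_a and p_b are allele frequencies.
    """
    d = p_ab - p_a * p_b
    d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b) if d >= 0 else min(p_a * p_b, (1 - p_a) * (1 - p_b))
    d_prime = abs(d) / d_max
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    return d_prime, r2

# Allele frequencies implied by Table 1: A of rs346074 ~0.351, T of rs346070 ~0.184.
# Assuming every T haplotype also carries A gives roughly D' = 1.00 and r^2 = 0.42.
print(ld_measures(p_ab=0.184, p_a=0.351, p_b=0.184))
```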
For each A-T haplotype, the risk of obesity relative to the reference haplotype G-C was more than three times higher when a high H1 affinity antipsychotic was used than when a low H1 affinity antipsychotic was used (p = 0.005).

Table 3. Haplotype analysis of rs346074 and rs346070 (HRH1) on BMI and obesity

BMI (unstandardized regression coefficients β)
  G-C: reference
  A-C: high affinity β = +0.569 (p = 0.38); low affinity β = -0.129 (p = 0.85); interaction haplotype x affinity β = 0.795 (p = 0.39)
  A-T: high affinity β = +0.941 (p = 0.10); low affinity β = -1.093 (p = 0.13); interaction haplotype x affinity β = 2.043 (p = 0.025)

Obesity (odds ratios, e^β)
  G-C: reference
  A-C: high affinity OR = 1.672 (p = 0.10); low affinity OR = 0.795 (p = 0.43); interaction haplotype x affinity OR = 2.110 (p = 0.07)
  A-T: high affinity OR = 1.256 (p = 0.42); low affinity OR = 0.375 (p = 0.004); interaction haplotype x affinity OR = 3.331 (p = 0.005)

The unstandardized coefficients (β) of haplotype in linear regression with BMI and the odds ratios (e^β) of haplotype in logistic regression with obesity are given for high and low H1 affinity antipsychotic (AP) users, respectively. Haplotypes A-C and A-T are compared with the most frequent haplotype, G-C, as the reference. Haplotype G-T was not prevalent. All results are adjusted for age, gender, and population group. p values below 0.05 were considered significant.

In the total sample of antipsychotic users, CHRM3 rs3738435 had no effect on BMI, and the genotype effect on BMI did not differ between users of antipsychotics with high and low affinity for the M3 receptor. None of the three SNPs showed any association with HbA1c or hyperglycaemia (see supplemental Table 1).
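The haplotype analysis summarized in Table 3 can be sketched as a haplotype trend regression in which each subject contributes the expected number of copies of each haplotype. The snippet below is an illustration under the assumption that PHASE output has been exported into columns named hap_AC and hap_AT (with G-C as the reference); it is not the workflow actually used, and the column names are hypothetical.

```python
# Illustrative sketch of haplotype trend regression (Zaykin et al. 2002) with a
# haplotype x affinity interaction; column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def haplotype_trend_regression(df: pd.DataFrame):
    # Linear model for BMI: expected haplotype dosages (0-2 copies), their
    # interaction with H1 affinity (1 = high, 0 = low), and covariates.
    bmi_model = smf.ols(
        "bmi ~ (hap_AC + hap_AT) * high_affinity + age + female + C(population)",
        data=df,
    ).fit()
    # Logistic model for obesity with the same structure; exponentiated
    # coefficients correspond to the odds ratios reported in Table 3.
    obesity_model = smf.logit(
        "obese ~ (hap_AC + hap_AT) * high_affinity + age + female + C(population)",
        data=df,
    ).fit(disp=False)
    return bmi_model, obesity_model
```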
Discussion

To the best of our knowledge, this is the first study to examine the pharmacogenetics of histamine H1 (rs346074 and rs346070) and muscarinic M3 (rs3738435) receptor variants in relation to weight gain and hyperglycaemia, as proxied by BMI and HbA1c, in Caucasian psychosis patients on antipsychotics. We demonstrated significant associations between the HRH1 gene variants rs346070 and rs346074 and BMI in Caucasian patients with a psychotic disorder when comparing users of high H1 affinity antipsychotics with users of low H1 affinity antipsychotics. We found no association between the CHRM3 gene variant rs3738435 and BMI, and no association with HbA1c for any of the variants.

Although it has been proposed that histamine H1 receptor antagonism causes weight gain (Kroeze et al. 2003; Matsui-Sakata et al. 2005), earlier studies on other histamine H1 receptor variants showed no relationship with clozapine-induced weight gain (Basile et al. 2001; Hong et al. 2002). Of note, post-hoc analysis in our study showed a similar direction and effect size of the risk alleles on BMI for all three high H1 affinity antipsychotics studied (clozapine, olanzapine, and quetiapine), emphasizing the role of the histamine receptor.

Regarding the metabolic consequences of antipsychotic treatment, several receptors other than the H1 receptor are of importance (Reynolds and Kirk 2010), especially the 5-HT2C receptor. Previously, we have shown a significant association between the 5-HT2C polymorphism rs1414334 and obesity (Mulder et al. 2007b) and the metabolic syndrome (Mulder et al. 2007a; Mulder et al. 2009; Risselada et al. 2010). The association of this polymorphism with obesity was also significant in the present population (data not shown). When we additionally included this polymorphism as a covariate in our regression analysis on obesity, the results for the H1 polymorphisms did not change, implying an additive effect of our H1 polymorphisms that is independent of 5-HT2C rs1414334.

Within the hypothalamus, histamine and the H1 receptor are part of the leptin-signaling pathway (Sakata et al. 1988; Masaki et al. 2001). Leptin is an adipocyte-specific hormone that regulates the mass of adipose tissue through hypothalamic effects on satiety and energy expenditure (Forbes et al. 2001). Polymorphisms in the leptin and leptin receptor genes have been associated with antipsychotic-induced weight gain (Templeman et al. 2005; Zhang et al. 2007; Kang et al. 2008; Gregoor et al. 2009). Templeman et al. (2005) demonstrated that a genetic variation in the 5-HT2C receptor resulted in different pre-treatment leptin levels. Of note, an interaction between two polymorphisms in the 5-HT2C receptor and leptin genes was shown to influence the risk of metabolic disturbances during antipsychotic treatment (Yevtushenko et al. 2008).
Future studies investigating gene–gene interactions between histamine H1, 5-HT2C and leptin genes may help unravel the exact role of the histamine system in antipsychotic-induced weight gain.\nSince the biological function of the studied polymorphisms is unknown, one can only speculate about the observed opposite genotype effects on BMI in low and high H1 affinity antipsychotic users. One possible explanation might lay in the LD status of our polymorphisms with one or more other functional polymorphisms. It might be that one of the polymorphisms in LD with our polymorphisms has a large, H1 affinity antipsychotic-induced effect, while another polymorphism in LD has a moderate opposite antipsychotic-independent effect. If our results are true-positive associations, then high H1 affinity antipsychotics should be avoided when possible in patients with risk alleles. It would be interesting for future studies to test whether these variants could predict food intake or energy expenditure as well. This might help to understand the pathways of histaminergic mechanisms for atypical antipsychotic-induced weight gain.\nNext to antipsychotics, several other risk factors for hyperglycaemia are overrepresented in psychotic patients, such as a positive family history, high BMI, and reduced physical activity. It has been hypothesized that patients with schizophrenia may already have β-cell defects prior to antipsychotic treatment (Bergman and Ader. 2005). Since several factors, involving multiple metabolic pathways, may contribute to hyperglycaemia in psychosis patients, examining genetic associations with antipsychotic-induced alterations in glucose homeostasis may be difficult to perform.\nThe present study has some limitations. First, we did not have complete quantitative information on the cumulative exposure to currently and previously used antipsychotics. Therefore, the relationship between BMI and users of antipsychotics with H1 affinity may be partly biased by earlier use of a previous other antipsychotic. However, since all patients used the antipsychotic for at least 3 months, we do not expect this limitation to be a serious deficit. Second, since this study is cross-sectional, we did not have information on BMI or HbA1c before antipsychotic treatment was started, suggesting that results might reflect non-antipsychotic-mediated pathways. However, this is very unlikely, since we decided to test the interaction between genotype and antipsychotic affinity for the certain receptor. We found significantly different genotype effects on BMI values between users of antipsychotics with high and low affinity for the H1 receptor. Since one would expect genotype effect on baseline BMI values to be similar between future users of low and high H1 affinity antipsychotics, non-antipsychotic-mediated effects of genotype would not lead to differences in genotype effect on BMI between users with high and low H1 affinity antipsychotics. Also, genotype distributions did not differ between users of low and high H1 affinity antipsychotics, ruling out the possibility of confounding by indication because of genotype. Despite its limitations this study has also several merits. First, compared to previous studies, we have a big sample size (more than 400 patients). 
Second, we have a very homogeneous group of Caucasian patients of Northern European ancestry, all diagnosed with a non-affective psychosis.\nIn conclusion, the HRH1 gene haplotype consisting of rs346074 and rs346070 might be associated with BMI and obesity in patients using antipsychotics with high affinity for the histamine H1 receptor. These findings need to be replicated in independent samples. None of the variants showed an association with HbA1c or hyperglycaemia. Genotyping for HRH1 variants may help predict weight gain in patients using atypical antipsychotics. Further longitudinal studies are warranted to investigate the potential role of the HRH1 gene in BMI.", "Below is the link to the electronic supplementary material.\nSupplemental Table 1 (DOC 52 kb)", "Below is the link to the electronic supplementary material.\nSupplemental Table 1 (DOC 52 kb)" ]
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", null, null, null, "discussion", "supplementary-material", null ]
[ "Antipsychotics", "BMI", "HbA1c", "Schizophrenia", "Polymorphism", "Histamine", "Muscarine", "Pharmacogenetics", "Weight gain", "Hyperglycaemia" ]
Effects of tryptophan depletion and tryptophan loading on the affective response to high-dose CO2 challenge in healthy volunteers.
21336580
It has been reported that in panic disorder (PD), tryptophan depletion enhances the vulnerability to experimentally induced panic, while the administration of serotonin precursors blunts the response to challenges.
RATIONALE
Eighteen healthy volunteers participated in a randomized, double-blind placebo-controlled study. Each subject received acute tryptophan depletion (ATD), acute tryptophan loading (ATL), and a balanced condition (BAL) on separate days, and underwent a double-breath 35% CO(2) inhalation 4.5 h after treatment. Tryptophan (Trp) manipulations were obtained by adding 0 g (ATD), 1.21 g (BAL), and 5.15 g (ATL) of l-tryptophan to a protein mixture lacking Trp. Assessments consisted of a visual analogue scale for affect (VAAS) and a panic symptom list. A separate analysis of a sample of 55 subjects in a separate-group design was also performed to study the relationship between plasma amino acid levels and the subjective response to CO(2).
METHODS
CO(2)-induced subjective distress and breathlessness were significantly lower after ATD compared to BAL and ATL (p < 0.05). In the separate-group analysis, ΔVAAS scores were positively correlated to the ratio Trp:ΣLNAA after treatment (r = 0.39; p < 0.05).
RESULTS
The present results are in line with preclinical data indicating a role for the serotonergic system in promoting the aversive respiratory sensations to hypercapnic stimuli (Richerson, Nat Rev Neurosci 5(6):449-461, 2004). The differences observed in our study, compared to previous findings in PD patients, might depend on an altered serotonergic modulatory function in patients compared to healthy subjects.
CONCLUSIONS
[ "Adult", "Amino Acids", "Carbon Dioxide", "Cross-Over Studies", "Double-Blind Method", "Female", "Humans", "Hypercapnia", "Male", "Panic Disorder", "Psychological Tests", "Serotonin", "Tryptophan" ]
3102203
Introduction
In patients affected by panic disorder (PD), acute tryptophan depletion (ATD) enhances the response to a number of panicogenic agents. This effect of ATD has been shown in studies which used inhalation of 35% and 5% carbon dioxide (CO2) (Miller et al. 2000; Schruers et al. 2000) and infusion of flumazenil (Bell et al. 2002; Davies et al. 2006) as challenges to induce panic, but no effect has been observed using infusion of cholecystokinin tetrapeptide (CCK-4) (Toru et al. 2006). Pre-treatment with 5-hydroxytryptophan (5-HTP), the immediate precursor of serotonin (5-HT), has been shown to blunt the response to 35% CO2 challenge, indicating a “protective” effect on experimentally induced panic (Schruers et al. 2002b). These findings seem to indicate that in PD patients, fear or anxiety provoked by some panicogenic challenges is negatively correlated to the availability of 5-HT precursors. Studies in healthy volunteers yielded inconclusive findings: reports showed that ATD failed to modify the panicogenic effects of CCK-4 (Koszycki et al. 1996) as well as the effects of 5% CO2 inhalation (Miller et al. 2000) or the Read rebreathing test (Struzik et al. 2002). Two studies have tested the effects of 35% CO2 in tryptophan (Trp) depleted subjects, one showing an increase of CO2-induced neurovegetative symptoms but not subjective anxiety (Klaassen et al. 1998), while the other did not find any significant difference between ATD condition and placebo with regard to CO2-provoked subjective effects (Hood et al. 2006). Administration of 5-HTP in healthy volunteers was shown to reduce CCK-4-induced panic attacks and panic-related cognitive symptoms, specifically in females (Maron et al. 2004), but had no effects on the response to CO2 (Schruers et al. 2002b). The discrepancies observed between PD patients and healthy controls, in terms of the effects of ATD on the subjective response to challenges might be due to the relative low sensitivity exhibited by healthy subjects to panicogenic agents. We have previously showed in healthy volunteers that the inhalation of CO2 dose-dependently induces panic-like symptoms, and that high doses of CO2 (double-breath of 35% CO2) in healthy subjects might be as effective as moderate doses of CO2 are in PD patients (Griez et al. 2007; Schruers et al. 2004b). Here, we intended to perform a study in healthy subjects using a high-dose CO2 challenge in order to test whether we can reproduce in a non-clinical population the same modulating effects of 5-HT manipulation observed in PD patients on experimental panic response; for this purpose, we investigated the effects of ATD on CO2-induced panic response in healthy volunteers. Additionally, since the Trp suppletion studies with panicogenic challenges conducted in healthy subjects gave inconclusive results (Maron et al. 2004; Schruers et al. 2002b), we also tested the effects of acute tryptophan loading (ATL) on subjective response to CO2 challenge.
null
null
Results
Eighteen healthy volunteers (10 male) completed the cross-over study (mean age 25 ± 5.5 years). One subject was excluded because she reported nausea and vomiting after the administration of GBM. [SUBTITLE] Amino acid levels [SUBSECTION] Plasma amino acid levels are presented in Fig. 1. Total Trp:ΣLNAA ratio at T0 did not significantly differ between treatment conditions (F = 0.88, p = 0.42). A significant time X condition interaction was found with Δ% Trp:ΣLNAA ratio (from T0 to T3) being different between conditions (F = 106.6; p < 0.0001). ATD resulted in a decrease of 61.35 ± 15.6% in Trp:ΣLNAA ratio compared to baseline, while ATL resulted in an increase of 361.53 ± 154.05%. A 17.76 ± 19% post-GBM increase was found in the BAL condition. Within-subject contrasts evidenced significant differences between changes in Trp:ΣLNAA ratio between ATD and BAL, between ATL and BAL, and between ATD and ATL (F = 106.6; p < 0.0001). In three cases of ATD condition, we observed a decrease in Trp:ΣLNAA ratio <50%, and in one case of BAL condition, an increase in Trp: ΣLNAA ratio >50% was observed. For all the other subjects, we found Trp:ΣLNAA changes >50% in ATD condition, <50% in balanced condition, >100% in the ATL condition. The analyses reported in the following sections have also been performed after exclusion of those three subjects, obtaining the same results as with the complete sample. Fig. 1Plasma Trp:ΣLNAA ratio at baseline and 4.5 h after treatment across conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); time X condition interaction: p < 0.0001; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale Plasma Trp:ΣLNAA ratio at baseline and 4.5 h after treatment across conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); time X condition interaction: p < 0.0001; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale Plasma amino acid levels are presented in Fig. 1. Total Trp:ΣLNAA ratio at T0 did not significantly differ between treatment conditions (F = 0.88, p = 0.42). A significant time X condition interaction was found with Δ% Trp:ΣLNAA ratio (from T0 to T3) being different between conditions (F = 106.6; p < 0.0001). ATD resulted in a decrease of 61.35 ± 15.6% in Trp:ΣLNAA ratio compared to baseline, while ATL resulted in an increase of 361.53 ± 154.05%. A 17.76 ± 19% post-GBM increase was found in the BAL condition. Within-subject contrasts evidenced significant differences between changes in Trp:ΣLNAA ratio between ATD and BAL, between ATL and BAL, and between ATD and ATL (F = 106.6; p < 0.0001). In three cases of ATD condition, we observed a decrease in Trp:ΣLNAA ratio <50%, and in one case of BAL condition, an increase in Trp: ΣLNAA ratio >50% was observed. For all the other subjects, we found Trp:ΣLNAA changes >50% in ATD condition, <50% in balanced condition, >100% in the ATL condition. The analyses reported in the following sections have also been performed after exclusion of those three subjects, obtaining the same results as with the complete sample. Fig. 1Plasma Trp:ΣLNAA ratio at baseline and 4.5 h after treatment across conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); time X condition interaction: p < 0.0001; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale Plasma Trp:ΣLNAA ratio at baseline and 4.5 h after treatment across conditions. 
Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); time X condition interaction: p < 0.0001; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale [SUBTITLE] Subjective measures [SUBSECTION] VAAS scores (fear/discomfort) between conditions at different time points, are presented in Fig. 2. There were no significant differences in VAAS scores at T0, T1, T2, and T3 between conditions. No effect of GBM administration per se was observed on VAAS scores, as no significant difference was found between any of pre-CO2 challenge time points (T3, T2, T1) and T0 in any condition. CO2 inhalation was followed by an increase in VAAS scores in all the conditions (F = 57.49; p < 0.0001). A significant time X condition interaction was found, indicating that ΔVAAS scores were significantly lower in ATD compared to BAL and ATL (33.89 ± 26.18 vs 43.78 ± 25.5 and 44.17 ± 23.88, respectively; F = 5.79 and F = 6.58, p < 0.05). No significant differences were found between BAL and ATL conditions. The findings remained identical after controlling for gender and no effects of time X gender X condition interaction have been found. Fig. 2VAAS scores for fear/discomfort at 0, 1.5, 3, 4.5 h after treatment, and after CO2 inhalation, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in VAAS scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05) VAAS scores for fear/discomfort at 0, 1.5, 3, 4.5 h after treatment, and after CO2 inhalation, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in VAAS scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05) There were no significant differences in PSL scores between conditions at any time point. Total PSL scores did not change at T1, T2, and T3 compared to T0, but significantly increased after CO2 inhalation, relative to T3 (F = 97.51; p < 0.0001). ΔPSL scores were similar between conditions (ATD, 11.11 ± 6.35; BAL, 11.06 ± 4.87; TL, 11.22 ± 4.32; NS). Analyzing individual PSL items separately, the only significant effect of treatment conditions on ΔPSL scores was evident for the item “sensation of shortness of breath”, indicating that ΔPSL scores after CO2 were lower in ATD condition than in BAL and ATL conditions (F = 4.11; p < 0.05) (Fig. 3). Fig. 3Change in PSL scores for shortness of breath after CO2 inhalation relative to T3, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in PSL scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05) Change in PSL scores for shortness of breath after CO2 inhalation relative to T3, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in PSL scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05) No order effect was found and ΔVAAS and ΔPSL scores were not affected by the order of administration of GBM conditions. There was a significant effect of time on POMS-vigor (p < 0.0001) and POMS-tension (p < 0.05) scores, and within-subjects contrasts indicated that POMS-vigor scores were significantly higher at T0 relative to T1 (p < 0.01), and significantly lower at T2 compared to T3 (p < 0.01). 
POMS-tension scores were higher at T0 compared to T1 (p < 0.05). There was no significant effect of the time X treatment condition interaction on POMS scores. VAAS scores (fear/discomfort) between conditions at different time points, are presented in Fig. 2. There were no significant differences in VAAS scores at T0, T1, T2, and T3 between conditions. No effect of GBM administration per se was observed on VAAS scores, as no significant difference was found between any of pre-CO2 challenge time points (T3, T2, T1) and T0 in any condition. CO2 inhalation was followed by an increase in VAAS scores in all the conditions (F = 57.49; p < 0.0001). A significant time X condition interaction was found, indicating that ΔVAAS scores were significantly lower in ATD compared to BAL and ATL (33.89 ± 26.18 vs 43.78 ± 25.5 and 44.17 ± 23.88, respectively; F = 5.79 and F = 6.58, p < 0.05). No significant differences were found between BAL and ATL conditions. The findings remained identical after controlling for gender and no effects of time X gender X condition interaction have been found. Fig. 2VAAS scores for fear/discomfort at 0, 1.5, 3, 4.5 h after treatment, and after CO2 inhalation, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in VAAS scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05) VAAS scores for fear/discomfort at 0, 1.5, 3, 4.5 h after treatment, and after CO2 inhalation, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in VAAS scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05) There were no significant differences in PSL scores between conditions at any time point. Total PSL scores did not change at T1, T2, and T3 compared to T0, but significantly increased after CO2 inhalation, relative to T3 (F = 97.51; p < 0.0001). ΔPSL scores were similar between conditions (ATD, 11.11 ± 6.35; BAL, 11.06 ± 4.87; TL, 11.22 ± 4.32; NS). Analyzing individual PSL items separately, the only significant effect of treatment conditions on ΔPSL scores was evident for the item “sensation of shortness of breath”, indicating that ΔPSL scores after CO2 were lower in ATD condition than in BAL and ATL conditions (F = 4.11; p < 0.05) (Fig. 3). Fig. 3Change in PSL scores for shortness of breath after CO2 inhalation relative to T3, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in PSL scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05) Change in PSL scores for shortness of breath after CO2 inhalation relative to T3, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in PSL scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05) No order effect was found and ΔVAAS and ΔPSL scores were not affected by the order of administration of GBM conditions. There was a significant effect of time on POMS-vigor (p < 0.0001) and POMS-tension (p < 0.05) scores, and within-subjects contrasts indicated that POMS-vigor scores were significantly higher at T0 relative to T1 (p < 0.01), and significantly lower at T2 compared to T3 (p < 0.01). POMS-tension scores were higher at T0 compared to T1 (p < 0.05). 
There was no significant effect of the time X treatment condition interaction on POMS scores. [SUBTITLE] Separate-group study [SUBSECTION] Fifty-five volunteers were enrolled in the separate-group study. Treatment groups consisted in 19 subjects (8 male; age: 24.84 ± 5.33 years) in ATD condition, 19 subjects (10 male; age: 23.32 ± 4.46 years) in BAL condition, and 17 subjects (10 male; 24.65 ± 5.32) in ATL condition. Age, gender distribution, and weight did not significantly differ between groups. Trp:ΣLNAA ratio at T0 was similar between conditions and baseline VAAS scores and PSL scores did not significantly differ between groups either. The relationship between Trp:ΣLNAA ratio and ΔVAAS scores is presented in Figs. 4 and 5. ΔVAAS scores were positively correlated to Δ% Trp:ΣLNAA ratio and Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.395 (p < 0.005) and 0.381 (p < 0.005), respectively], indicating that higher VAAS scores were associated with larger increases in the ratio Trp:ΣLNAA. Fig. 4Relationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005] Fig. 5Relationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale Relationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005] Relationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale No correlation was found between total PSL scores and Δ% Trp:ΣLNAA ratio or Trp:ΣLNAA ratio at T3. Fifty-five volunteers were enrolled in the separate-group study. Treatment groups consisted in 19 subjects (8 male; age: 24.84 ± 5.33 years) in ATD condition, 19 subjects (10 male; age: 23.32 ± 4.46 years) in BAL condition, and 17 subjects (10 male; 24.65 ± 5.32) in ATL condition. Age, gender distribution, and weight did not significantly differ between groups. Trp:ΣLNAA ratio at T0 was similar between conditions and baseline VAAS scores and PSL scores did not significantly differ between groups either. The relationship between Trp:ΣLNAA ratio and ΔVAAS scores is presented in Figs. 4 and 5. ΔVAAS scores were positively correlated to Δ% Trp:ΣLNAA ratio and Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.395 (p < 0.005) and 0.381 (p < 0.005), respectively], indicating that higher VAAS scores were associated with larger increases in the ratio Trp:ΣLNAA. Fig. 4Relationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005] Fig. 
5Relationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale Relationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005] Relationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale No correlation was found between total PSL scores and Δ% Trp:ΣLNAA ratio or Trp:ΣLNAA ratio at T3.
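As an illustration of the separate-group correlation analysis reported above, the following sketch shows how the Δ% Trp:ΣLNAA ratio and ΔVAAS scores could be derived and correlated with Spearman's rho in Python. It is a minimal example under assumed column names (trp_lnaa_t0, trp_lnaa_t3, vaas_t3, vaas_postco2) and an assumed data file; it is not the authors' analysis code.

```python
# Hypothetical sketch of the Spearman correlation between tryptophan availability
# and the CO2-induced affective response; file and column names are assumptions.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("separate_group.csv")  # assumed: one row per subject

# Percentage change in the Trp:sum(LNAA) ratio from baseline (T0) to 4.5 h post-treatment (T3)
df["delta_pct_ratio"] = (df["trp_lnaa_t3"] - df["trp_lnaa_t0"]) / df["trp_lnaa_t0"] * 100

# CO2-induced change in subjective fear/discomfort (VAAS), post-challenge minus T3
df["delta_vaas"] = df["vaas_postco2"] - df["vaas_t3"]

rho, p = spearmanr(df["delta_pct_ratio"], df["delta_vaas"])
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")  # value reported in the paper: rho = 0.395
```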
null
null
[ "Subjects", "Design", "Procedure", "Gelatin-based mixtures", "Carbon dioxide challenge", "Amino acid analysis", "Assessments", "Data analysis", "Amino acid levels", "Subjective measures", "Separate-group study", "Tryptophan depletion/suppletion studies in healthy volunteers and PD patients", "Panic, 5-HT, and CO2: a complex relationship", "Methodological limitations", "Conclusions" ]
[ "Healthy volunteers were recruited among students or staff, through advertisements from within the Vijverdal Psychiatric Hospital Maastricht (Mondriaan Zorggroep) and throughout the Maastricht University locations. The Medical Ethics Committee of the Academic Hospital Maastricht and Maastricht University approved the study, and the subjects were paid for their participation in the experiment. After a complete description of the study, a written informed consent was obtained from the subjects. The volunteers underwent collection of medical history and physical examination. Inclusion criteria were age between 18 and 65 years and a good present and past physical and mental condition. The latter was established using a structured psychiatric interview (Mini International Neuropsychiatric Interview) performed by a physician. Exclusion criteria were current psychopharmacological or psychological treatment, recent alcohol intake, substance or caffeine-related disorders, excessive smoking (>10 cigarettes a day), respiratory or cardiovascular disease, hypertension (diastolic >100 mmHg; systolic >170 mmHg), personal or family history of cerebral aneurysm, pregnancy, and epilepsy. Individuals were also excluded if they reported common specific fears or if they had a history of panic attacks, or a history of PD in a first-degree relative.", "Subjects were randomized in double-blind placebo-controlled cross-over study design. The subjects received three different gelatin-based mixtures (GBM) on three different days in a randomized order, to induce respectively the ATD condition, a balanced condition (BAL) and the ATL condition. On each day, after GBM administration, they underwent a double-breath CO2 challenge.\nA separate analysis has been conducted on a larger sample of subjects in a randomized separate-group design, in order to investigate the relationship between the ratio Trp to the sum of large neutral amino acids in plasma (Trp:ΣLNAA) and the subjective response to CO2. Subjects were randomly assigned to one of the three groups: ATD condition, BAL condition, and ATL condition. This sample also included subjects who were enrolled in the cross-over study for which only the first study day was taken into consideration in the analysis.", "The subjects arrived at the clinic after an overnight fast. Blood was drawn to measure plasma Trp and large neutral amino acids (LNAA) levels. GBM were administered in the morning at about 9:30. After drinking the GBM, the subjects remained on the ward and were allowed to read or watch a nature documentary on video. Subjects had ad libitum access to mineral water, but they were asked to refrain from eating and drinking any xanthine beverages. At 4.5 h after GBM administration, other blood samples were collected to monitor plasma Trp and LNAA levels. Ten minutes later, the subjects underwent a double-breath inhalation of a gas mixture containing 35% CO2 and 65% O2.", "The gelatin consists of a hydrolysate collagen-protein comprising the entire range of amino acids in the form of peptides, but completely lacking Trp. After administration, these peptides are decomposed into amino acids, and the mechanism of depletion is identical to that of the “classic” amino acid mixture (Sambeth et al. 2009). The GBM was kindly provided by PB Gelatins (Tessenderlo Group, Belgium) in form of powder. Amino acid composition of the GBM can be found in Table 1. The drink was prepared mixing 100 g of the powder with 200 ml water at 50–70°C. 
The drink was kept refrigerated at 4°C and then kept at room temperature for 30 min before administration. The three GBM were identical in composition, except that 1.15 g of l-tryptophan and 5.15 g of l-tryptophan were added to the mixtures for the BAL and ATL conditions, respectively. No l-tryptophan was added for the ATD condition. The three GBM had the same color and taste.\nTable 1 Amino acid spectrum (typical weight% on ds protein)
Alanine 8.4
Arginine 7.7
Aspartic acid/asparagine 4.5
Cysteine 0.0
Glutamic acid/glutamine 10.0
Glycine 23.3
Histidine 0.9
Hydroxylysine 1.5
Hydroxyproline 12.3
Isoleucine 1.2
Leucine 2.6
Lysine 3.3
Methionine 0.9
Phenylalanine 1.6
Proline 13.7
Serine 3.4
Threonine 1.9
Tryptophan 0.0
Tyrosine 0.6
Valine 2.2", "The 35% CO2-inhalation procedure was performed in accordance with a standardized protocol developed at the Maastricht Academic Anxiety Center (Griez et al. 1987; Griez and Schruers 1998). A gas mixture containing 35% CO2/65% O2 was delivered through a nasal–oral exercise self-administration facemask, using a double vital capacity inhalation technique. Before the challenge, the inspired vital capacity of every subject was measured using an analogue respirometer (Wright respirometer Mark 20) connected to the self-administration mask. The same respirometer measured the gas volume delivered at each inhalation. The inspired vital capacity with a double breath of air was measured on each occasion, and a challenge was considered adequate if it was more than 80% of the baseline vital capacity. The subjects were then given the self-administration mask and asked to exhale as deeply as possible. They were asked to take a maximal inspiration through the mask and to make a complete expiration outside the mask, immediately followed by a second maximal inspiration. At the end of the second inhalation, the subjects were asked to hold their breath for 4 s to enhance the alveolar gas exchange, and finally to make a complete expiration outside the mask again.", "Samples for determination of plasma amino acid levels were taken at baseline (T0) and 4.5 h after GBM administration (T1). Blood (10 ml) was collected by venepuncture in sodium heparin tubes at each time point immediately after the rating of subjective assessments. After collection, the blood samples were immediately centrifuged at 4°C (10 min at 4,000 rpm). Subsequently, 100 μl of plasma was mixed with 8 mg of sulphasalicyl acid and frozen at −80°C until the amino acid analysis was performed (van Eijk et al. 1993). Plasma amino acids were determined using a fully automated high-performance liquid chromatography system after precolumn derivatization with o-phthaldialdehyde (OPA). OPA-amino acid derivatives were quantified with fluorescence detection. The concentrations of plasma amino acids were expressed as micromoles per liter (μmol/l) (van Eijk et al. 1993). The ratios of total Trp:ΣLNAA (LNAA, i.e., tyrosine, phenylalanine, leucine, isoleucine, and valine) at baseline and 4.5 h after GBM were used as endpoints to monitor changes in Trp availability.", "Rating scales to assess the panicogenic effects of the CO2 challenge were chosen with reference to the definition of a panic attack in the DSM-IV TR diagnostic criteria (APA 2000). We used a visual analogue scale for affect (VAAS) labeled “fear or discomfort”, ranging from 0 (no fear/discomfort at all) to 100 (the worst imaginable fear/discomfort).
The participants were instructed to indicate the amount of subjective disturbance in case they felt either fear or discomfort, following an established procedure (Colasanti et al. 2008).\nPanic symptoms were evaluated using the Panic Symptom List (PSL-IV) (Schruers et al. 2000). This consists of a questionnaire listing 13 items, each representing one of the DSM-IV TR symptoms (i.e., palpitations; sweating; trembling; sensations of shortness of breath or smothering; feeling of choking; chest discomfort; nausea or abdominal distress; feeling dizzy, lightheaded, or faint; derealization or depersonalization; fear of losing control; fear of dying; paresthesias; chills or hot flushes). The participants were asked to rate the intensity of each symptom from 0 (absent) to 4 (very intense). The total scores thus ranged from 0 to 52.\nVAAS and PSL-IV were administered at baseline (T0; pre-GBM administration), 1.5 h (T1), 3 h (T2), 4.5 h post-GBM administration (T3), and after the CO2 challenge (Post-CO2). Post-CO2 scores indicated the worst moment experienced by the subjects after inhaling the gas mixture.\nMood states were measured with the shortened 32-item validated version of the Dutch translation of the Profile of Mood States Scale (POMS) (Wald and Mellenbergh 1990), which consists of five mood scales (depression, tension/anxiety, vigor, anger/hostility, and fatigue). The POMS was administered at T0, T1, T2, and T3. Subjects were asked to rate the scale according to how they felt at that moment.", "All the data are presented as mean ± standard deviation (SD). Percentage changes in the Trp:ΣLNAA ratio (Δ% Trp:ΣLNAA ratio) after GBM (T3) compared to baseline (T0) were calculated by the formula (T3 − T0)/T0 × 100. CO2-induced changes in VAAS and PSL scores were expressed as Δ scores (obtained by the formula Post-CO2 scores − T3 scores). In the cross-over sample, all data were analyzed with analysis of variance (ANOVA) for repeated measures with time and treatment condition as within-subjects factors. The effects of the time X treatment condition interaction were studied to investigate the influence of GBM condition (ATD, BAL, ATL) on the subjective response to CO2 measured with VAAS and PSL scores. The analysis of VAAS scores was repeated after controlling for gender, and the time X treatment condition X gender interaction was studied using ANOVA for repeated measures.\nTo investigate the influence of GBM conditions on POMS scores, the effects of time per se and the time X treatment condition interaction were studied with ANOVA for repeated measures.\nIf indicated by a significant ANOVA condition effect, time effect, or time X condition interaction effect, a subsequent evaluation of differences between individual conditions and between individual time points was done by within-subject repeated contrasts. The level of significance was set at 0.05. For the separate-group analysis, the principal statistical analysis consisted of Spearman's non-parametric correlation between ΔVAAS scores and Trp:ΣLNAA levels. A partial correlation analysis between these two variables was repeated after controlling for gender. Moreover, baseline differences between treatment groups were analyzed by one-way ANOVA for continuous variables (age, weight, Trp:ΣLNAA ratio, VAAS, and PSL scores) and chi-square for non-parametric variables (gender distribution).", "Plasma amino acid levels are presented in Fig. 1. Total Trp:ΣLNAA ratio at T0 did not significantly differ between treatment conditions (F = 0.88, p = 0.42).
A significant time X condition interaction was found with Δ% Trp:ΣLNAA ratio (from T0 to T3) being different between conditions (F = 106.6; p < 0.0001). ATD resulted in a decrease of 61.35 ± 15.6% in Trp:ΣLNAA ratio compared to baseline, while ATL resulted in an increase of 361.53 ± 154.05%. A 17.76 ± 19% post-GBM increase was found in the BAL condition. Within-subject contrasts evidenced significant differences between changes in Trp:ΣLNAA ratio between ATD and BAL, between ATL and BAL, and between ATD and ATL (F = 106.6; p < 0.0001). In three cases of ATD condition, we observed a decrease in Trp:ΣLNAA ratio <50%, and in one case of BAL condition, an increase in Trp: ΣLNAA ratio >50% was observed. For all the other subjects, we found Trp:ΣLNAA changes >50% in ATD condition, <50% in balanced condition, >100% in the ATL condition. The analyses reported in the following sections have also been performed after exclusion of those three subjects, obtaining the same results as with the complete sample.\nFig. 1Plasma Trp:ΣLNAA ratio at baseline and 4.5 h after treatment across conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); time X condition interaction: p < 0.0001; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale\n\nPlasma Trp:ΣLNAA ratio at baseline and 4.5 h after treatment across conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); time X condition interaction: p < 0.0001; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale", "VAAS scores (fear/discomfort) between conditions at different time points, are presented in Fig. 2. There were no significant differences in VAAS scores at T0, T1, T2, and T3 between conditions. No effect of GBM administration per se was observed on VAAS scores, as no significant difference was found between any of pre-CO2 challenge time points (T3, T2, T1) and T0 in any condition. CO2 inhalation was followed by an increase in VAAS scores in all the conditions (F = 57.49; p < 0.0001). A significant time X condition interaction was found, indicating that ΔVAAS scores were significantly lower in ATD compared to BAL and ATL (33.89 ± 26.18 vs 43.78 ± 25.5 and 44.17 ± 23.88, respectively; F = 5.79 and F = 6.58, p < 0.05). No significant differences were found between BAL and ATL conditions. The findings remained identical after controlling for gender and no effects of time X gender X condition interaction have been found.\nFig. 2VAAS scores for fear/discomfort at 0, 1.5, 3, 4.5 h after treatment, and after CO2 inhalation, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in VAAS scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05)\n\nVAAS scores for fear/discomfort at 0, 1.5, 3, 4.5 h after treatment, and after CO2 inhalation, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in VAAS scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05)\nThere were no significant differences in PSL scores between conditions at any time point. Total PSL scores did not change at T1, T2, and T3 compared to T0, but significantly increased after CO2 inhalation, relative to T3 (F = 97.51; p < 0.0001). ΔPSL scores were similar between conditions (ATD, 11.11 ± 6.35; BAL, 11.06 ± 4.87; TL, 11.22 ± 4.32; NS). 
Analyzing individual PSL items separately, the only significant effect of treatment conditions on ΔPSL scores was evident for the item “sensation of shortness of breath”, indicating that ΔPSL scores after CO2 were lower in ATD condition than in BAL and ATL conditions (F = 4.11; p < 0.05) (Fig. 3).\nFig. 3Change in PSL scores for shortness of breath after CO2 inhalation relative to T3, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in PSL scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05)\n\nChange in PSL scores for shortness of breath after CO2 inhalation relative to T3, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in PSL scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05)\nNo order effect was found and ΔVAAS and ΔPSL scores were not affected by the order of administration of GBM conditions.\nThere was a significant effect of time on POMS-vigor (p < 0.0001) and POMS-tension (p < 0.05) scores, and within-subjects contrasts indicated that POMS-vigor scores were significantly higher at T0 relative to T1 (p < 0.01), and significantly lower at T2 compared to T3 (p < 0.01). POMS-tension scores were higher at T0 compared to T1 (p < 0.05). There was no significant effect of the time X treatment condition interaction on POMS scores.", "Fifty-five volunteers were enrolled in the separate-group study. Treatment groups consisted in 19 subjects (8 male; age: 24.84 ± 5.33 years) in ATD condition, 19 subjects (10 male; age: 23.32 ± 4.46 years) in BAL condition, and 17 subjects (10 male; 24.65 ± 5.32) in ATL condition. Age, gender distribution, and weight did not significantly differ between groups. Trp:ΣLNAA ratio at T0 was similar between conditions and baseline VAAS scores and PSL scores did not significantly differ between groups either. The relationship between Trp:ΣLNAA ratio and ΔVAAS scores is presented in Figs. 4 and 5. ΔVAAS scores were positively correlated to Δ% Trp:ΣLNAA ratio and Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.395 (p < 0.005) and 0.381 (p < 0.005), respectively], indicating that higher VAAS scores were associated with larger increases in the ratio Trp:ΣLNAA.\nFig. 4Relationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005]\nFig. 
5Relationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale\n\nRelationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005]\nRelationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale\nNo correlation was found between total PSL scores and Δ% Trp:ΣLNAA ratio or Trp:ΣLNAA ratio at T3.", "A large body of evidence suggests that manipulation of 5-HT precursors availability has an influence in modulating panic and anxiety (see review by Maron et al. 2008). However, some of the findings from experimental studies, both in healthy volunteers and PD patients, are divisive and indicate that the role of 5-HT might not be unique.\nHood et al. (2006) and Klaassen et al. (1998) tested the effects of ATD on the response to single-breath 35% CO2 challenge in healthy subjects. In a study on 14 healthy volunteers, no significant differences were found in subjective measures between ATD and balanced conditions, although ATD resulted in an increased CO2-induced elevation of cortisol compared to the balanced condition (Hood et al. 2006). Klaassen et al. study (1998), including 15 volunteers, found a significant increase in CO2-induced neurovegetative symptoms after ATD. In that study sample, mean net increases in total PSL scores and VAAS scores after CO2 were <7 and <7, respectively (Klaassen et al. 1998). Subjective CO2-induced anxiety was, therefore, very mild, hence indicating that single-breath 35% CO2 in healthy subjects did not evoke “real” panic. In the present study, using double-breath CO2, we found a more than 30% higher increase in PSL scores and more than 500% higher increase in VAAS scores after CO2 compared to Klaassen et al. (1998) study. Average VAAS and PSL scores in the present double-breath 35% CO2 study are comparable to those normally found in PD patients after single-breath 35% CO2 (Verburg et al. 1998). Other challenges in healthy volunteers found no significant effect of ATD on responses to anxiogenic challenges. This has been showed using a 5% CO2 challenge (Miller et al. 2000) and the hypercapnic Read rebreathing technique (Struzik et al. 2002), a simulated public speaking challenge (Monteiro-dos-Santos et al. 2000), and CCK-4 (Koszycki et al. 1996). In contrast, a study by (Goddard et al. 1995) showed increased nervousness in response to yohimbine challenge after ATD compared to yohimbine alone.\nTo enhance 5-HT availability in healthy subjects, (Maron et al. 2004) administered 5-HTP (direct precursor of 5-HT) and used CCK-4 as panicogenic challenge in 32 healthy volunteers. They observed a significant reduction compared to placebo in CCK-4-induced panic attacks and cognitive symptoms only in females, whereas in males only a decrease in somatic symptoms was observed. In Schruers et al. (2002b) study, 5-HTP did not alter the response to 35% CO2 compared to placebo. 
Taken together, these data and our results indicate that the effects of Trp depletion in healthy volunteers largely depend on the type of challenge and its relative potency, and might be additionally confounded by gender effects.\nStudies in PD patients demonstrated that ATD increased the response to single-breath 35% CO2 (Schruers et al. 2000) and 5% CO2 anxiety (Miller et al. 2000), and administration of 5-HTP blunted the response to single-breath 35% CO2 (Schruers et al. 2002b)\nATD also increased the response to flumazenil infusion in PD patients successfully treated with selective serotonin reuptake inhibitors (SSRI) (Bell et al. 2002; Davies et al. 2006) or CBT (Bell et al. 2009). In contrast, subjective response to CCK-4 is not influenced by ATD (Toru et al. 2006) in SSRI-treated PD patients. Also, ATD did not have significant effects on the subjective response to anxiogenic challenges in OCD (Barr et al. 1994; Kulz et al. 2007) and in GAD SSRI-treated patients (Hood et al. 2010). It is interesting to note that in the latest study of Hood and colleagues (2010), using 7.5% CO2 challenge, some subjective measures of anxiety (“something bad is going to happen”, “anxious”, “secure”), seemed to indicate an anxiolytic effect of ATD rather than anxiogenic; however, pairways comparison was not significantly different between conditions.\nAs a further confirmation of the role of 5-HT in modulating experimental panic, serotonergic drugs that are effective in treating panic disorder, like SSRI and tryciclics, also reduce the fear that patients with PD experience when they inhale CO2 (Bertani et al. 2001; Bertani et al. 1997; Perna et al. 2004; Perna et al. 2002; Perna et al. 1997; Pols et al. 1996).\nIn summary, a number of studies in PD indicate that availability of 5-HT precursors is inversely related to vulnerability to CO2 challenges, which is at odds with the present results. However, overall findings in anxiety disorder patients suggest that the effects of Trp manipulation specifically depend on the diagnosis and the type of anxiogenic challenge.", "Accumulating evidence from clinical and experimental research and genetic studies suggest a substantial role for the 5-HT system on the neurobiology of PD (see Maron and Shlik 2006). The relationship between 5-HT and panic is complex, as exemplified by the notion that SSRI are effective in reducing panic, but they may exacerbate anxiety during the initial phase of treatment (Sinclair et al. 2009). It has been hypothesized that panic is associated to either 5-HT excess (Iversen 1984) and 5-HT deficit (Deakin and Graeff 1991). Deakin & Graeff proposed that the 5-HT system plays a dual role in the modulation of anxiety by inhibiting panic responses, but contributing to anticipatory or generalized anxiety. Our findings presented here are not in line with this theory, as in our experimental design increased 5-HT availability did not suppress CO2-induced panic responses in healthy volunteers. However, recent studies suggest that other factors should be taken into account in understanding the relationship between 5-HT, CO2 and anxiety: a subset of 5-HT neurons, located in the chemosensitive zone (ventrolateral medulla and raphe) and associated with large arteries, are intrinsically chemosensitive in vitro (Severson et al. 2003) and are stimulated by hypercapnia in vivo in unanaesthetized animals (Veasey et al. 1995, 1997). 
Furthermore, experiments using in vivo microdialysis showed that increasing inhaled CO2 causes an increase in 5-HT release (Kanamaru and Homma 2007). Interestingly, mice selectively lacking 5-HT neurons display a blunted respiratory response to CO2, indicating that 5-HT neurons are required for normal central chemoreception (Hodges and Richerson 2008). Richerson (2004) proposed that medullary 5-HT neurons control ventilatory responses to CO2 and project to areas like forebrain and limbic system that are involved in affective regulation. Taking into account the panicogenic properties of CO2, we have previously speculated that these neurons could be part of an adaptive protective mechanism alerting the organism against the risk of impending asphyxia (Griez et al. 2007). Taken all these preclinical findings together, it appears that acute hypercapnia stimulates serotonergic neurons and 5-HT release and that conversely, disruption of the 5-HT system blunts the neuronal response to CO2.\nOur data are in line with the above preclinical evidence; we have effectively reduced availability of 5-HT precursors and we have used CO2 as a panicogenic challenge. In agreement with Hodges and Richerson (2008) data, we found that depletion of 5-HT precursors blunted the subjective response to CO2, particularly the respiratory sensations, and Trp availability was positively correlated with the intensity of the subjective responses.", "Some methodological considerations regard the validity of Trp manipulations to alter 5-HT availability. Trp depletion and Trp loading are relatively easy procedures to rapidly and reversibly change, decrease and increase respectively, the levels of 5-HT precursors (Hood et al. 2005). The first step in 5-HT biosynthesis is the conversion of Trp to 5-HTP by Trp hydroxylase. In the brain, this enzyme is only 50% saturated and the rate at which 5-HT is synthesized is limited only by substrate (Trp) availability. The LNAA transport system at the blood–brain barrier has a high affinity for all the LNAAs, including Trp. Therefore, the ratio Trp:ΣLNAA in plasma is generally used to predict the availability of Trp to the brain. A large body of literature provides evidence that manipulations of the levels of Trp in plasma results in a substantial and parallel alteration of 5-HT synthesis in the brain and availability of 5-HT and its metabolite in humans and animals (Biggio et al. 1974; Carpenter et al. 1998; Gessa et al. 1974; Leathwood 1987; Lieben et al. 2004; Williams et al. 1999). However, an altered release of 5-HT efflux (thought to reflect synaptic release, i.e., 5-HT neuronal activity) was only reported after chronic Trp depletion (Fadda et al. 2000) or after ATD in combination with 5-HT reuptake inhibition (Bel and Artigas 1996), but not after ATD alone (van der Plasse et al. 2007). In vitro studies suggest that an increase in Trp availability determines dose-dependent changes in 5-HT release under conditions of increased serotonergic neuronal activity but not on basal output of 5-HT (Sharp et al. 1992; Wolf and Kuhn 1986).\nWe assume that the alterations in the Trp:ΣLNAA ratio of the present study were followed by changes in brain 5-HT availability and synthesis in the same direction. However, at present, we cannot assure that Trp depletion and Trp loading actually influenced 5-HT release. 
Nevertheless, the possibility exists that the manipulations in the present study did affect 5-HT neuronal release, based on the abovementioned notions that acute hypercapnia stimulates serotonergic neurons and induces 5-HT release, and that Trp manipulations seem to be effective in altering 5-HT release only in stimulated neurons.\nA number of other methodological issues need to be addressed. To manipulate brain Trp availability, the studies in healthy volunteers of both Klaassen et al. (1998) and Hood et al. (2006) included the use of the classic amino acid mixture (Young et al. 1985), the former including an addition of carbohydrates and fat. In these studies, ATD appeared to be as effective as it was in ours and resulted in a significant decrease of plasma Trp levels. However, the magnitude of the depletion seems difficult to compare with that in our study due to some methodological differences: in the study of Klaassen et al. (1998), no baseline Trp plasma levels were collected, and in the study of Hood et al. (2006), free Trp plasma levels instead of total TRP levels were used for calculation of the Trp:ΣLNAA ratio. It is still debated whether total Trp or free Trp levels in plasma are the most reliable indirect measures of brain Trp availability (Pardridge 1998). Therefore, methodological differences might account for the divergent findings in our study compared to previous studies in healthy volunteers.", "It is recognized that 5-HT system plays a complex role in the regulation of the panic responses to CO2, as different serotonergic mechanisms can coexist, either inhibiting panic (Deakin and Graeff 1991) or promoting the aversive respiratory sensations to hypercapnic stimuli. The differences observed in our study in healthy volunteers, compared to previous findings in PD patients, might depend on the different relative contribution of these mechanisms in different populations." ]
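Because the Trp:ΣLNAA ratio is the proxy for brain tryptophan availability used throughout this paper, a small worked computation may help readers applying it to their own plasma amino acid data. The function below follows the definition given in the Amino acid analysis section (LNAA = tyrosine, phenylalanine, leucine, isoleucine, and valine, all in μmol/l); the argument names and example concentrations are illustrative assumptions, not study data.

```python
# Plasma Trp:sum(LNAA) ratio as defined in the Amino acid analysis section.
# All concentrations in umol/l; argument names and example values are assumptions.
def trp_lnaa_ratio(trp: float, tyr: float, phe: float, leu: float, ile: float, val: float) -> float:
    return trp / (tyr + phe + leu + ile + val)

# Example with made-up plasma concentrations (not study data):
print(trp_lnaa_ratio(trp=55.0, tyr=60.0, phe=55.0, leu=120.0, ile=65.0, val=230.0))
```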
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Material and methods", "Subjects", "Design", "Procedure", "Gelatin-based mixtures", "Carbon dioxide challenge", "Amino acid analysis", "Assessments", "Data analysis", "Results", "Amino acid levels", "Subjective measures", "Separate-group study", "Discussion", "Tryptophan depletion/suppletion studies in healthy volunteers and PD patients", "Panic, 5-HT, and CO2: a complex relationship", "Methodological limitations", "Conclusions" ]
[ "In patients affected by panic disorder (PD), acute tryptophan depletion (ATD) enhances the response to a number of panicogenic agents. This effect of ATD has been shown in studies which used inhalation of 35% and 5% carbon dioxide (CO2) (Miller et al. 2000; Schruers et al. 2000) and infusion of flumazenil (Bell et al. 2002; Davies et al. 2006) as challenges to induce panic, but no effect has been observed using infusion of cholecystokinin tetrapeptide (CCK-4) (Toru et al. 2006). Pre-treatment with 5-hydroxytryptophan (5-HTP), the immediate precursor of serotonin (5-HT), has been shown to blunt the response to 35% CO2 challenge, indicating a “protective” effect on experimentally induced panic (Schruers et al. 2002b). These findings seem to indicate that in PD patients, fear or anxiety provoked by some panicogenic challenges is negatively correlated to the availability of 5-HT precursors.\nStudies in healthy volunteers yielded inconclusive findings: reports showed that ATD failed to modify the panicogenic effects of CCK-4 (Koszycki et al. 1996) as well as the effects of 5% CO2 inhalation (Miller et al. 2000) or the Read rebreathing test (Struzik et al. 2002). Two studies have tested the effects of 35% CO2 in tryptophan (Trp) depleted subjects, one showing an increase of CO2-induced neurovegetative symptoms but not subjective anxiety (Klaassen et al. 1998), while the other did not find any significant difference between ATD condition and placebo with regard to CO2-provoked subjective effects (Hood et al. 2006). Administration of 5-HTP in healthy volunteers was shown to reduce CCK-4-induced panic attacks and panic-related cognitive symptoms, specifically in females (Maron et al. 2004), but had no effects on the response to CO2 (Schruers et al. 2002b).\nThe discrepancies observed between PD patients and healthy controls, in terms of the effects of ATD on the subjective response to challenges might be due to the relative low sensitivity exhibited by healthy subjects to panicogenic agents. We have previously showed in healthy volunteers that the inhalation of CO2 dose-dependently induces panic-like symptoms, and that high doses of CO2 (double-breath of 35% CO2) in healthy subjects might be as effective as moderate doses of CO2 are in PD patients (Griez et al. 2007; Schruers et al. 2004b). Here, we intended to perform a study in healthy subjects using a high-dose CO2 challenge in order to test whether we can reproduce in a non-clinical population the same modulating effects of 5-HT manipulation observed in PD patients on experimental panic response; for this purpose, we investigated the effects of ATD on CO2-induced panic response in healthy volunteers.\nAdditionally, since the Trp suppletion studies with panicogenic challenges conducted in healthy subjects gave inconclusive results (Maron et al. 2004; Schruers et al. 2002b), we also tested the effects of acute tryptophan loading (ATL) on subjective response to CO2 challenge.", "[SUBTITLE] Subjects [SUBSECTION] Healthy volunteers were recruited among students or staff, through advertisements from within the Vijverdal Psychiatric Hospital Maastricht (Mondriaan Zorggroep) and throughout the Maastricht University locations. The Medical Ethics Committee of the Academic Hospital Maastricht and Maastricht University approved the study, and the subjects were paid for their participation in the experiment. After a complete description of the study, a written informed consent was obtained from the subjects. 
The volunteers underwent a medical history and physical examination. Inclusion criteria were age between 18 and 65 years and a good present and past physical and mental condition. The latter was established using a structured psychiatric interview (Mini International Neuropsychiatric Interview) performed by a physician. Exclusion criteria were current psychopharmacological or psychological treatment, recent alcohol intake, substance or caffeine-related disorders, excessive smoking (>10 cigarettes a day), respiratory or cardiovascular disease, hypertension (diastolic >100 mmHg; systolic >170 mmHg), personal or family history of cerebral aneurysm, pregnancy, and epilepsy. Individuals were also excluded if they reported common specific fears, had a history of panic attacks, or had a history of PD in a first-degree relative.\n[SUBTITLE] Design [SUBSECTION] Subjects were randomized in a double-blind, placebo-controlled cross-over design. The subjects received three different gelatin-based mixtures (GBM) on three different days in randomized order, to induce the ATD condition, a balanced condition (BAL), and the ATL condition, respectively. On each day, after GBM administration, they underwent a double-breath CO2 challenge.\nA separate analysis was conducted on a larger sample of subjects in a randomized separate-group design, in order to investigate the relationship between the ratio of Trp to the sum of large neutral amino acids in plasma (Trp:ΣLNAA) and the subjective response to CO2. Subjects were randomly assigned to one of three groups: ATD, BAL, or ATL. This sample also included subjects who were enrolled in the cross-over study, for whom only the first study day was taken into consideration in the analysis.
[SUBTITLE] Procedure [SUBSECTION] The subjects arrived at the clinic after an overnight fast. Blood was drawn to measure plasma Trp and large neutral amino acid (LNAA) levels. The GBM were administered in the morning at about 9:30. After drinking the GBM, the subjects remained on the ward and were allowed to read or watch a nature documentary on video. Subjects had ad libitum access to mineral water, but they were asked to refrain from eating and from drinking any xanthine-containing beverages. At 4.5 h after GBM administration, further blood samples were collected to monitor plasma Trp and LNAA levels. Ten minutes later, the subjects underwent a double-breath inhalation of a gas mixture containing 35% CO2 and 65% O2.\n[SUBTITLE] Gelatin-based mixtures [SUBSECTION] The gelatin consists of a collagen-protein hydrolysate comprising the entire range of amino acids in the form of peptides, but completely lacking Trp. After administration, these peptides are broken down into amino acids, and the mechanism of depletion is identical to that of the “classic” amino acid mixture (Sambeth et al. 2009). The GBM was kindly provided by PB Gelatins (Tessenderlo Group, Belgium) in the form of a powder. The amino acid composition of the GBM can be found in Table 1. The drink was prepared by mixing 100 g of the powder with 200 ml water at 50–70°C. The drink was kept refrigerated at 4°C and then kept at room temperature for 30 min before administration. The three GBM were identical in composition, except that 1.15 g and 5.15 g of l-tryptophan were added to the mixtures for the BAL and ATL conditions, respectively. No l-tryptophan was added for the ATD condition. The three GBM had the same color and taste.\nTable 1 Amino acid spectrum of the GBM (typical weight% on ds protein): Alanine 8.4; Arginine 7.7; Aspartic acid/asparagine 4.5; Cysteine 0.0; Glutamic acid/glutamine 10.0; Glycine 23.3; Histidine 0.9; Hydroxylysine 1.5; Hydroxyproline 12.3; Isoleucine 1.2; Leucine 2.6; Lysine 3.3; Methionine 0.9; Phenylalanine 1.6; Proline 13.7; Serine 3.4; Threonine 1.9; Tryptophan 0.0; Tyrosine 0.6; Valine 2.2\n[SUBTITLE] Carbon dioxide challenge [SUBSECTION] The 35% CO2-inhalation procedure was performed in accordance with a standardized protocol developed at the Maastricht Academic Anxiety Center (Griez et al. 1987; Griez and Schruers 1998). A gas mixture containing 35% CO2/65% O2 was delivered through a nasal–oral exercise self-administration facemask, using a double vital capacity inhalation technique. Before the challenge, the inspired vital capacity of every subject was measured using an analogue respirometer (Wright respirometer Mark 20) connected to the self-administration mask. The same respirometer measured the gas volume delivered at each inhalation. The inspired vital capacity with a double breath of air was measured on each occasion, and a challenge was considered adequate if it was more than 80% of the baseline vital capacity. The subjects were then given the self-administration mask and asked to exhale as deeply as possible. They were asked to take a maximal inspiration through the mask and to make a complete expiration outside the mask, immediately followed by a second maximal inspiration. At the end of the second inhalation, the subjects were asked to hold their breath for 4 s to enhance the alveolar gas exchange, and finally to make a complete expiration outside the mask again.\n[SUBTITLE] Amino acid analysis [SUBSECTION] Samples for the determination of plasma amino acid levels were taken at baseline (T0) and 4.5 h after GBM administration (T1). Blood (10 ml) was collected by venepuncture in sodium heparin tubes at each time point, immediately after the rating of the subjective assessments. After collection, the blood samples were immediately centrifuged at 4°C (10 min at 4,000 rpm). Subsequently, 100 μl of plasma was mixed with 8 mg of sulphasalicyl acid and frozen at −80°C until the amino acid analysis was performed (van Eijk et al. 1993). Plasma amino acids were determined using a fully automated high-performance liquid chromatography system after precolumn derivatization with o-phthaldialdehyde (OPA). The OPA derivatives of the amino acids were quantified with fluorescence detection. The concentrations of plasma amino acids were expressed as micromoles per liter (μmol/l) (van Eijk et al. 1993). The ratios of total Trp to the sum of LNAA (Trp:ΣLNAA; LNAA, i.e., tyrosine, phenylalanine, leucine, isoleucine, and valine) at baseline and at 4.5 h after GBM were used as endpoints to monitor changes in Trp availability.\n[SUBTITLE] Assessments [SUBSECTION] Rating scales to assess the panicogenic effects of the CO2 challenge were chosen with reference to the definition of a panic attack in the DSM-IV TR diagnostic criteria (APA 2000). We used a visual analogue scale for affect (VAAS) labeled “fear or discomfort”, ranging from 0 (no fear/discomfort at all) to 100 (the worst imaginable fear/discomfort). The participants were instructed to indicate the amount of subjective disturbance, in case of feeling either fear or discomfort, following an established procedure (Colasanti et al. 2008).\nPanic symptoms were evaluated using the Panic Symptom List (PSL-IV) (Schruers et al. 2000). This consists of a questionnaire listing 13 items, each representing one of the DSM-IV TR symptoms (i.e., palpitations; sweating; trembling; sensations of shortness of breath or smothering; feeling of choking; chest discomfort; nausea or abdominal distress; feeling dizzy, lightheaded, or faint; derealization or depersonalization; fear of losing control; fear of dying; paresthesias; chills or hot flushes). 
The participants were asked to rate the intensity of each symptom from 0 (absent) to 4 (very intense). The total scores thus ranged from 0 to 52.\nThe VAAS and PSL-IV were administered at baseline (T0; pre-GBM administration), 1.5 h (T1), 3 h (T2), and 4.5 h post-GBM administration (T3), and after the CO2 challenge (Post-CO2). Post-CO2 scores indicated the worst moment experienced by the subjects after inhaling the gas mixture.\nMood states were measured with the shortened 32-item validated version of the Dutch translation of the Profile of Mood States Scale (POMS) (Wald and Mellenbergh 1990), which consists of five mood scales (depression, tension/anxiety, vigor, anger/hostility, and fatigue). The POMS was administered at T0, T1, T2, and T3. Subjects were asked to rate the scale according to how they felt at that moment.\n[SUBTITLE] Data analysis [SUBSECTION] All data are presented as mean ± standard deviation (SD). Percentage changes in the Trp:ΣLNAA ratio (Δ% Trp:ΣLNAA ratio) after GBM (T3) compared to baseline (T0) were calculated as (T3 − T0)/T0 × 100. CO2-induced changes in VAAS and PSL scores were expressed as Δ scores (post-CO2 score minus T3 score). In the cross-over sample, all data were analyzed with repeated-measures analysis of variance (ANOVA) with time and treatment condition as within-subjects factors. The time X treatment condition interaction was examined to investigate the influence of GBM condition (ATD, BAL, ATL) on the subjective response to CO2 as measured with the VAAS and PSL scores. 
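These change-score definitions translate directly into code. The following minimal Python sketch (an illustrative aside, not the study's analysis script; the variable names and example values are assumptions) computes the Trp:ΣLNAA ratio and the two Δ scores defined above.

```python
# Minimal sketch of the change scores defined above (assumed names; illustrative values only).
LNAA = ("tyrosine", "phenylalanine", "leucine", "isoleucine", "valine")

def trp_lnaa_ratio(plasma):
    """Ratio of total tryptophan to the sum of the five competing LNAA (all in umol/l)."""
    return plasma["tryptophan"] / sum(plasma[aa] for aa in LNAA)

def pct_change(ratio_t0, ratio_t3):
    """Delta% Trp:LNAA ratio = (T3 - T0) / T0 x 100."""
    return (ratio_t3 - ratio_t0) / ratio_t0 * 100.0

def delta_score(post_co2, t3):
    """CO2-induced change in VAAS or PSL: post-CO2 score minus the pre-challenge (T3) score."""
    return post_co2 - t3

# Hypothetical example values, for illustration only:
t0 = {"tryptophan": 45.0, "tyrosine": 60.0, "phenylalanine": 55.0,
      "leucine": 120.0, "isoleucine": 65.0, "valine": 230.0}
t3 = {"tryptophan": 18.0, "tyrosine": 70.0, "phenylalanine": 65.0,
      "leucine": 140.0, "isoleucine": 75.0, "valine": 260.0}
print(pct_change(trp_lnaa_ratio(t0), trp_lnaa_ratio(t3)))   # negative under depletion
print(delta_score(post_co2=55.0, t3=10.0))                   # e.g. a DeltaVAAS of 45
```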
The analysis of VAAS scores was repeated after controlling for gender, and the time X treatment condition X gender interaction was examined with repeated-measures ANOVA.\nTo investigate the influence of GBM condition on POMS scores, the effect of time per se and the time X treatment condition interaction were studied with repeated-measures ANOVA.\nIf indicated by a significant ANOVA condition effect, time effect, or time X condition interaction, differences between individual conditions and between individual time points were subsequently evaluated with within-subject repeated contrasts. The level of significance was set at 0.05. For the separate-group analysis, the principal statistical analysis was Spearman's non-parametric correlation between ΔVAAS scores and Trp:ΣLNAA levels. A partial correlation analysis between these two variables was repeated after controlling for gender. Moreover, baseline differences between treatments/groups were analyzed by one-way ANOVA for continuous variables (age, weight, Trp:ΣLNAA ratio, VAAS, and PSL scores) and chi-square tests for non-parametric variables (gender distribution).", "Healthy volunteers were recruited among students or staff, through advertisements within the Vijverdal Psychiatric Hospital Maastricht (Mondriaan Zorggroep) and throughout the Maastricht University locations. The Medical Ethics Committee of the Academic Hospital Maastricht and Maastricht University approved the study, and the subjects were paid for their participation in the experiment. After a complete description of the study, written informed consent was obtained from the subjects. 
The volunteers underwent collection of medical history and physical examination. Inclusion criteria were age between 18 and 65 years and a good present and past physical and mental condition. The latter was established using a structured psychiatric interview (Mini International Neuropsychiatric Interview) performed by a physician. Exclusion criteria were current psychopharmacological or psychological treatment, recent alcohol intake, substance or caffeine-related disorders, excessive smoking (>10 cigarettes a day), respiratory or cardiovascular disease, hypertension (diastolic >100 mmHg; systolic >170 mmHg), personal or family history of cerebral aneurysm, pregnancy, and epilepsy. Individuals were also excluded if they reported common specific fears or if they had a history of panic attacks, or a history of PD in a first-degree relative.", "Subjects were randomized in double-blind placebo-controlled cross-over study design. The subjects received three different gelatin-based mixtures (GBM) on three different days in a randomized order, to induce respectively the ATD condition, a balanced condition (BAL) and the ATL condition. On each day, after GBM administration, they underwent a double-breath CO2 challenge.\nA separate analysis has been conducted on a larger sample of subjects in a randomized separate-group design, in order to investigate the relationship between the ratio Trp to the sum of large neutral amino acids in plasma (Trp:ΣLNAA) and the subjective response to CO2. Subjects were randomly assigned to one of the three groups: ATD condition, BAL condition, and ATL condition. This sample also included subjects who were enrolled in the cross-over study for which only the first study day was taken into consideration in the analysis.", "The subjects arrived at the clinic after an overnight fast. Blood was drawn to measure plasma Trp and large neutral amino acids (LNAA) levels. GBM were administered in the morning at about 9:30. After drinking the GBM, the subjects remained on the ward and were allowed to read or watch a nature documentary on video. Subjects had ad libitum access to mineral water, but they were asked to refrain from eating and drinking any xanthine beverages. At 4.5 h after GBM administration, other blood samples were collected to monitor plasma Trp and LNAA levels. Ten minutes later, the subjects underwent a double-breath inhalation of a gas mixture containing 35% CO2 and 65% O2.", "The gelatin consists of a hydrolysate collagen-protein comprising the entire range of amino acids in the form of peptides, but completely lacking Trp. After administration, these peptides are decomposed into amino acids, and the mechanism of depletion is identical to that of the “classic” amino acid mixture (Sambeth et al. 2009). The GBM was kindly provided by PB Gelatins (Tessenderlo Group, Belgium) in form of powder. Amino acid composition of the GBM can be found in Table 1. The drink was prepared mixing 100 g of the powder with 200 ml water at 50–70°C. The drink was kept refrigerated at 4°C and then kept at room temperature for 30 min before administration. The three GBM were identical in composition, except that 1.15 g of l-tryptophan and 5.15 g of l-tryptophan was added to the mixtures for the BAL and ATL conditions, respectively. No l-tryptophan was added for the ATD condition. 
The three GBM had the same color and taste.\nTable 1Amino acid spectrumAmino acid spectrum (typical weight% on ds protein)Alanine8.4Arginine7.7Aspartic acid/asparagine4.5Cysteine0.0Glutamic acid/glutamine10.0Glycine23.3Histidine0.9Hydroxylysine1.5Hydroxyproline12.3Isoleucine1.2Leucine2.6Lysine3.3Methionine0.9Phenylalanine1.6Proline13.7Serine3.4Threonine1.9Tryptophan0.0Tyrosine0.6Valine2.2\n\nAmino acid spectrum", "The 35% CO2-inhalation procedure was performed in accordance to a standardized protocol developed at the Maastricht Academic Anxiety Center (Griez et al. 1987; Griez and Schruers 1998). A gas mixture containing 35% CO2/65% O2 was delivered through a nasal–oral exercise self-administration facemask, using a double vital capacity inhalation technique. Before the challenge, the inspired vital capacity of every subject was measured using an analogue respirometer (Wright respirometer Mark 20) connected to the self-administration mask. The same respirometer measured the gas volume delivered at each inhalation. The inspired vital capacity with a double breath of air was measured on each occasion, and a challenge was considered adequate if it was more than 80% of the baseline vital capacity. The subjects were then given the self-administration mask and asked to exhale as deeply as possible. They were asked to take a maximal inspiration through the mask and to make a complete expiration outside the mask, immediately followed by a second maximal inspiration. At the end of the second inhalation, the subjects were asked to hold their breath for 4 s to enhance the alveolar gas exchange, and finally make a complete expiration outside the mask again.", "Samples for determination of plasma amino acid levels were taken at baseline (T0) and 4.5 h after GBM administration (T1). Blood (10 ml) was collected by venepuncture in sodium heparin tubes at each time point immediately after the rating of subjective assessments. After collection, the blood samples were immediately centrifuged at 4°C (10 min at 4,000 rpm). Subsequently, 100 μl of plasma was mixed with 8 mg of sulphasalicyl acid and frozen at −80°C until the amino acid analysis was performed (van Eijk et al. 1993). Plasma amino acids were determined using a fully automated high-performance liquid chromatography system after precolumn derivatization with o-phthaldialdehyde (OPA). OPA-AA derivates of the amino acids were quantified with fluorescence detection. The concentrations of plasma amino acids were expressed as micromoles per liter (μmol/l) (van Eijk et al. 1993). The ratio of total Trp:ΣLNAA (LNAA, i.e., tyrosine, phenylalanine, leucine, isoleucine, and valine) at baseline and 4.5 h after GBM were used as endpoints to monitor changes in Trp availability.", "Rating scales to assess panicogenic effects of CO2 challenge were chosen with reference to the definition of panic attack in DSM-IV TR diagnostic criteria (APA 2000). We used a visual analogue scale for affect (VAAS) labeled “fear or discomfort”, ranging from 0 (no fear/discomfort at all) to 100 (the worst imaginable fear/discomfort). The participants were instructed to indicate the amount of the subjective disturbance, in case of feeling either fear or discomfort following an established procedure (Colasanti et al. 2008).\nPanic symptoms were evaluated using the Panic Symptom List (PSL-IV) (Schruers et al. 2000). 
This consists of a questionnaire listing 13 items, each representing one of the DSM-IV TR symptoms (i.e., palpitations; sweating; trembling; sensations of shortness of breath or smothering; feeling of choking; chest discomfort; nausea or abdominal distress; feeling dizzy, lightheaded, or faint; derealization or depersonalization; fear of losing control; fear of dying; paresthesias; chills or hot flushes). The participants were asked to rate the intensity of each symptom from 0 (absent) to 4 (very intense). The total scores thus ranged from 0 to 52.\nVAAS and PSL-IV were administered at baseline (T0; pre-GBM administration), 1.5 h (T1), 3 h (T2), 4.5 h post-GBM administration (T3), and after CO2 challenge (Post-CO2). Post-CO2 scores indicated the worst moment experienced by the subjects after inhaling the gas mixture.\nMood states were measured with the shortened 32-item validated version of the Dutch translation of the Profile of Mood States Scale (POMS) (Wald and Mellenbergh 1990), which consists of five mood scales (depression, tension/anxiety, vigor, anger/hostility, and fatigue). The POMS was administered at T0, T1, T2, and T3. Subjects were asked to rate the scale according to how they felt at that moment.", "All the data are presented as mean ± standard deviation (SD). Percentage changes in Trp: ΣLNAA ratio (Δ% Trp:ΣLNAA ratio) after GBM (T3) compared to baseline (T0) were calculated by the formula T3-T0/T0 × 100. CO2-induced changes in VAAS and PSL scores were expressed as Δ scores (obtained by the formula POST-CO2 scores − T3 scores). In the cross-over sample, all data were analyzed with analysis of variance (ANOVA) for repeated measures with time and treatment condition as within-subjects factors. The effects of time X treatment condition interaction were studied to investigate the influence of GBM condition (ATD, BAL, ATL) on the subjective response to CO2 measured with VAAS and PSL scores. The analysis of VAAS scores has been repeated after controlling for gender, and the interaction time X treatment condition X gender has been studied using ANOVA per repeated measures.\nTo investigate the influence of GBM conditions on POMS scores, the effects of time per se and time X treatment condition interaction were studied with ANOVA per repeated measures.\nIf indicated by a significant ANOVA condition effect, time effect, or time X condition interaction effect, a subsequent evaluation on difference between individual conditions and between individual time points was done by within-subject repeated contrasts. The level of significance was set at 0.05. For the separate-group analysis the principal statistical analysis consisted of Spearman's non-parametric correlation between ΔVAAS scores and Trp:ΣLNAA levels. A partial correlation analysis between these two variables was repeated after controlling for gender. Moreover, baseline differences between treatments/groups were analyzed by one-way ANOVA for continuous variables (age, weight, Trp:ΣLNAA ratio, VAS, and PSL scores) and chi-square for non-parametric variables (gender distribution).", "Eighteen healthy volunteers (10 male) completed the cross-over study (mean age 25 ± 5.5 years).\nOne subject was excluded because she reported nausea and vomiting after the administration of GBM.\n[SUBTITLE] Amino acid levels [SUBSECTION] Plasma amino acid levels are presented in Fig. 1. Total Trp:ΣLNAA ratio at T0 did not significantly differ between treatment conditions (F = 0.88, p = 0.42). 
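For illustration only, the time X condition repeated-measures ANOVA described in the data-analysis section can be sketched as follows (hypothetical long-format data; the column names and input file are assumptions, and this is not the original analysis script).

```python
# Sketch of the time x condition repeated-measures ANOVA (illustrative, not the study code).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Long-format data: one row per subject x GBM condition (ATD/BAL/ATL) x time point,
# with the VAAS rating in the 'vaas' column ('vaas_long.csv' is a hypothetical file).
df = pd.read_csv("vaas_long.csv")

result = AnovaRM(df, depvar="vaas", subject="subject",
                 within=["time", "condition"]).fit()
print(result)  # the time x condition term indexes whether the CO2-induced VAAS change
               # differs between the ATD, BAL, and ATL conditions
```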
A significant time X condition interaction was found, with the Δ% Trp:ΣLNAA ratio (from T0 to T3) differing between conditions (F = 106.6; p < 0.0001). ATD resulted in a decrease of 61.35 ± 15.6% in the Trp:ΣLNAA ratio compared to baseline, while ATL resulted in an increase of 361.53 ± 154.05%. A 17.76 ± 19% post-GBM increase was found in the BAL condition. Within-subject contrasts showed that the change in Trp:ΣLNAA ratio differed significantly between ATD and BAL, between ATL and BAL, and between ATD and ATL (F = 106.6; p < 0.0001). In three cases in the ATD condition we observed a decrease in the Trp:ΣLNAA ratio of <50%, and in one case in the BAL condition an increase in the Trp:ΣLNAA ratio of >50% was observed. For all other subjects, Trp:ΣLNAA changes were >50% in the ATD condition, <50% in the balanced condition, and >100% in the ATL condition. The analyses reported in the following sections were also performed after exclusion of those three subjects, yielding the same results as with the complete sample.\nFig. 1 Plasma Trp:ΣLNAA ratio at baseline and 4.5 h after treatment across conditions. Acute tryptophan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); time X condition interaction: p < 0.0001; Trp:ΣLNAA ratio (μmol/l) is presented on a logarithmic scale\n[SUBTITLE] Subjective measures [SUBSECTION] VAAS scores (fear/discomfort) between conditions at the different time points are presented in Fig. 2. There were no significant differences in VAAS scores at T0, T1, T2, and T3 between conditions. 
No effect of GBM administration per se was observed on VAAS scores, as no significant difference was found between any of the pre-CO2 challenge time points (T1, T2, T3) and T0 in any condition. CO2 inhalation was followed by an increase in VAAS scores in all conditions (F = 57.49; p < 0.0001). A significant time X condition interaction was found, indicating that ΔVAAS scores were significantly lower in ATD compared to BAL and ATL (33.89 ± 26.18 vs 43.78 ± 25.5 and 44.17 ± 23.88, respectively; F = 5.79 and F = 6.58, p < 0.05). No significant differences were found between the BAL and ATL conditions. The findings remained identical after controlling for gender, and no time X gender X condition interaction was found.\nFig. 2 VAAS scores for fear/discomfort at 0, 1.5, 3, and 4.5 h after treatment, and after CO2 inhalation, across treatment conditions. Acute tryptophan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); the change in VAAS scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05)\nThere were no significant differences in PSL scores between conditions at any time point. Total PSL scores did not change at T1, T2, and T3 compared to T0, but significantly increased after CO2 inhalation relative to T3 (F = 97.51; p < 0.0001). ΔPSL scores were similar between conditions (ATD, 11.11 ± 6.35; BAL, 11.06 ± 4.87; ATL, 11.22 ± 4.32; NS). Analyzing the individual PSL items separately, the only significant effect of treatment condition on ΔPSL scores was evident for the item “sensation of shortness of breath”, indicating that ΔPSL scores after CO2 were lower in the ATD condition than in the BAL and ATL conditions (F = 4.11; p < 0.05) (Fig. 3).\nFig. 3 Change in PSL scores for shortness of breath after CO2 inhalation relative to T3, across treatment conditions. Acute tryptophan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); the change in PSL scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05)\nNo order effect was found, and ΔVAAS and ΔPSL scores were not affected by the order of administration of the GBM conditions.\nThere was a significant effect of time on POMS-vigor (p < 0.0001) and POMS-tension (p < 0.05) scores, and within-subjects contrasts indicated that POMS-vigor scores were significantly higher at T0 relative to T1 (p < 0.01), and significantly lower at T2 compared to T3 (p < 0.01). POMS-tension scores were higher at T0 compared to T1 (p < 0.05). There was no significant effect of the time X treatment condition interaction on POMS scores.\n[SUBTITLE] Separate-group study [SUBSECTION] Fifty-five volunteers were enrolled in the separate-group study. 
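The principal analysis in this separate-group sample, a Spearman correlation between the CO2-induced change in VAAS and tryptophan availability, can be sketched as follows (an illustrative aside with hypothetical values, not study data).

```python
# Sketch of the separate-group Spearman correlation (illustrative values only).
from scipy.stats import spearmanr

delta_vaas       = [12, 35, 60, 8, 44, 27]        # post-CO2 minus T3 VAAS score per subject
pct_change_ratio = [-62, 15, 340, -55, 20, 310]   # Delta% Trp:LNAA ratio per subject

rho, p = spearmanr(delta_vaas, pct_change_ratio)
print(f"Spearman rho = {rho:.3f}, p = {p:.4f}")
```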
Treatment groups consisted in 19 subjects (8 male; age: 24.84 ± 5.33 years) in ATD condition, 19 subjects (10 male; age: 23.32 ± 4.46 years) in BAL condition, and 17 subjects (10 male; 24.65 ± 5.32) in ATL condition. Age, gender distribution, and weight did not significantly differ between groups. Trp:ΣLNAA ratio at T0 was similar between conditions and baseline VAAS scores and PSL scores did not significantly differ between groups either. The relationship between Trp:ΣLNAA ratio and ΔVAAS scores is presented in Figs. 4 and 5. ΔVAAS scores were positively correlated to Δ% Trp:ΣLNAA ratio and Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.395 (p < 0.005) and 0.381 (p < 0.005), respectively], indicating that higher VAAS scores were associated with larger increases in the ratio Trp:ΣLNAA.\nFig. 4Relationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005]\nFig. 5Relationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale\n\nRelationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005]\nRelationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale\nNo correlation was found between total PSL scores and Δ% Trp:ΣLNAA ratio or Trp:ΣLNAA ratio at T3.\nFifty-five volunteers were enrolled in the separate-group study. Treatment groups consisted in 19 subjects (8 male; age: 24.84 ± 5.33 years) in ATD condition, 19 subjects (10 male; age: 23.32 ± 4.46 years) in BAL condition, and 17 subjects (10 male; 24.65 ± 5.32) in ATL condition. Age, gender distribution, and weight did not significantly differ between groups. Trp:ΣLNAA ratio at T0 was similar between conditions and baseline VAAS scores and PSL scores did not significantly differ between groups either. The relationship between Trp:ΣLNAA ratio and ΔVAAS scores is presented in Figs. 4 and 5. ΔVAAS scores were positively correlated to Δ% Trp:ΣLNAA ratio and Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.395 (p < 0.005) and 0.381 (p < 0.005), respectively], indicating that higher VAAS scores were associated with larger increases in the ratio Trp:ΣLNAA.\nFig. 4Relationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005]\nFig. 
5Relationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale\n\nRelationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005]\nRelationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale\nNo correlation was found between total PSL scores and Δ% Trp:ΣLNAA ratio or Trp:ΣLNAA ratio at T3.", "Plasma amino acid levels are presented in Fig. 1. Total Trp:ΣLNAA ratio at T0 did not significantly differ between treatment conditions (F = 0.88, p = 0.42). A significant time X condition interaction was found with Δ% Trp:ΣLNAA ratio (from T0 to T3) being different between conditions (F = 106.6; p < 0.0001). ATD resulted in a decrease of 61.35 ± 15.6% in Trp:ΣLNAA ratio compared to baseline, while ATL resulted in an increase of 361.53 ± 154.05%. A 17.76 ± 19% post-GBM increase was found in the BAL condition. Within-subject contrasts evidenced significant differences between changes in Trp:ΣLNAA ratio between ATD and BAL, between ATL and BAL, and between ATD and ATL (F = 106.6; p < 0.0001). In three cases of ATD condition, we observed a decrease in Trp:ΣLNAA ratio <50%, and in one case of BAL condition, an increase in Trp: ΣLNAA ratio >50% was observed. For all the other subjects, we found Trp:ΣLNAA changes >50% in ATD condition, <50% in balanced condition, >100% in the ATL condition. The analyses reported in the following sections have also been performed after exclusion of those three subjects, obtaining the same results as with the complete sample.\nFig. 1Plasma Trp:ΣLNAA ratio at baseline and 4.5 h after treatment across conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); time X condition interaction: p < 0.0001; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale\n\nPlasma Trp:ΣLNAA ratio at baseline and 4.5 h after treatment across conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); time X condition interaction: p < 0.0001; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale", "VAAS scores (fear/discomfort) between conditions at different time points, are presented in Fig. 2. There were no significant differences in VAAS scores at T0, T1, T2, and T3 between conditions. No effect of GBM administration per se was observed on VAAS scores, as no significant difference was found between any of pre-CO2 challenge time points (T3, T2, T1) and T0 in any condition. CO2 inhalation was followed by an increase in VAAS scores in all the conditions (F = 57.49; p < 0.0001). A significant time X condition interaction was found, indicating that ΔVAAS scores were significantly lower in ATD compared to BAL and ATL (33.89 ± 26.18 vs 43.78 ± 25.5 and 44.17 ± 23.88, respectively; F = 5.79 and F = 6.58, p < 0.05). No significant differences were found between BAL and ATL conditions. 
The findings remained identical after controlling for gender and no effects of time X gender X condition interaction have been found.\nFig. 2VAAS scores for fear/discomfort at 0, 1.5, 3, 4.5 h after treatment, and after CO2 inhalation, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in VAAS scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05)\n\nVAAS scores for fear/discomfort at 0, 1.5, 3, 4.5 h after treatment, and after CO2 inhalation, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in VAAS scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05)\nThere were no significant differences in PSL scores between conditions at any time point. Total PSL scores did not change at T1, T2, and T3 compared to T0, but significantly increased after CO2 inhalation, relative to T3 (F = 97.51; p < 0.0001). ΔPSL scores were similar between conditions (ATD, 11.11 ± 6.35; BAL, 11.06 ± 4.87; TL, 11.22 ± 4.32; NS). Analyzing individual PSL items separately, the only significant effect of treatment conditions on ΔPSL scores was evident for the item “sensation of shortness of breath”, indicating that ΔPSL scores after CO2 were lower in ATD condition than in BAL and ATL conditions (F = 4.11; p < 0.05) (Fig. 3).\nFig. 3Change in PSL scores for shortness of breath after CO2 inhalation relative to T3, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in PSL scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05)\n\nChange in PSL scores for shortness of breath after CO2 inhalation relative to T3, across treatment conditions. Acute tryptohan depletion (ATD), balanced condition (BAL), acute tryptophan loading (ATL); change in PSL scores was significantly lower in ATD compared to BAL and ATL (time X condition interaction: p < 0.05)\nNo order effect was found and ΔVAAS and ΔPSL scores were not affected by the order of administration of GBM conditions.\nThere was a significant effect of time on POMS-vigor (p < 0.0001) and POMS-tension (p < 0.05) scores, and within-subjects contrasts indicated that POMS-vigor scores were significantly higher at T0 relative to T1 (p < 0.01), and significantly lower at T2 compared to T3 (p < 0.01). POMS-tension scores were higher at T0 compared to T1 (p < 0.05). There was no significant effect of the time X treatment condition interaction on POMS scores.", "Fifty-five volunteers were enrolled in the separate-group study. Treatment groups consisted in 19 subjects (8 male; age: 24.84 ± 5.33 years) in ATD condition, 19 subjects (10 male; age: 23.32 ± 4.46 years) in BAL condition, and 17 subjects (10 male; 24.65 ± 5.32) in ATL condition. Age, gender distribution, and weight did not significantly differ between groups. Trp:ΣLNAA ratio at T0 was similar between conditions and baseline VAAS scores and PSL scores did not significantly differ between groups either. The relationship between Trp:ΣLNAA ratio and ΔVAAS scores is presented in Figs. 4 and 5. ΔVAAS scores were positively correlated to Δ% Trp:ΣLNAA ratio and Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.395 (p < 0.005) and 0.381 (p < 0.005), respectively], indicating that higher VAAS scores were associated with larger increases in the ratio Trp:ΣLNAA.\nFig. 
4Relationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005]\nFig. 5Relationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale\n\nRelationship between% change in Trp:ΣLNAA ratio after treatment and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to% change in Trp:ΣLNAA ratio [Spearman's rho = 0.395, p < 0.005]\nRelationship between Trp:ΣLNAA ratio at T3 (4.5 h post-treatment) and change in VAAS scores after CO2 inhalation; change in VAAS scores after CO2 inhalation was positively correlated to Trp:ΣLNAA ratio at T3 [Spearman's rho = 0.381, p < 0.005]; Trp:ΣLNAA ratio (μmol/l) is presented on logarithmic scale\nNo correlation was found between total PSL scores and Δ% Trp:ΣLNAA ratio or Trp:ΣLNAA ratio at T3.", "To investigate whether the amount of available 5-HT precursors influence the vulnerability to a panicogenic challenge, we tested the subjective response to a 35% CO2 inhalation after acute Trp depletion, Trp loading, and placebo in healthy volunteers. We found a mild, but significant effect of treatment condition on the subjective scores of fear/discomfort, indicating that ATD is associated to a reduced response to a CO2 inhalation relative to placebo and ATL. The same effect was also evident for scores on the panic symptom “shortness of breath”. The analysis of the other separate individual symptoms and the total composite score of panic symptoms induced by CO2 did not show any significant difference between conditions. In a separate analysis, we also investigated the relationship between Trp availability (Trp:ΣLNAA ratio) and subjective response to CO2, as measured by VAAS scores, in a separate-group design. We found a relatively weak, but significant positive correlation between the Trp:ΣLNAA ratio and CO2-induced changes in VAAS scores, indicating that lower availability of 5-HT precursors is associated to a blunted affective response to CO2.\nThese findings are in contrast with previous results found in studies by our laboratory (Klaassen et al. 1998; Schruers and Griez 2004a; Schruers et al. 2000; Schruers et al. 2002a, b) and in other CO2 challenge studies in healthy volunteers (Hood et al. 2006) and in PD patients (Miller et al. 2000). First, we will discuss potential explanations for the divergent findings observed in this study compared to other studies with inhaled CO2 and other panicogenic agents in both healthy volunteers and PD patients. Furthermore, we will try to interpret our findings in the context of the complex relationship between panic, 5-HT, and CO2, also in light of the recent data, which suggest that 5-HT neurons are sensitive to acute changes in CO2 concentration. Methodological issues and limitations relative to the technique used to manipulate 5-HT precursors will be finally addressed.\n[SUBTITLE] Tryptophan depletion/suppletion studies in healthy volunteers and PD patients [SUBSECTION] A large body of evidence suggests that manipulation of 5-HT precursors availability has an influence in modulating panic and anxiety (see review by Maron et al. 2008). 
However, some of the findings from experimental studies, both in healthy volunteers and in PD patients, are conflicting and indicate that the role of 5-HT might not be straightforward.\nHood et al. (2006) and Klaassen et al. (1998) tested the effects of ATD on the response to a single-breath 35% CO2 challenge in healthy subjects. In a study on 14 healthy volunteers, no significant differences were found in subjective measures between the ATD and balanced conditions, although ATD resulted in an increased CO2-induced elevation of cortisol compared to the balanced condition (Hood et al. 2006). The study by Klaassen et al. (1998), including 15 volunteers, found a significant increase in CO2-induced neurovegetative symptoms after ATD. In that study sample, mean net increases in total PSL scores and VAAS scores after CO2 were <7 and <7, respectively (Klaassen et al. 1998). Subjective CO2-induced anxiety was, therefore, very mild, indicating that single-breath 35% CO2 in healthy subjects did not evoke “real” panic. In the present study, using double-breath CO2, we found a more than 30% higher increase in PSL scores and a more than 500% higher increase in VAAS scores after CO2 compared to the Klaassen et al. (1998) study. Average VAAS and PSL scores in the present double-breath 35% CO2 study are comparable to those normally found in PD patients after single-breath 35% CO2 (Verburg et al. 1998). Other challenge studies in healthy volunteers found no significant effect of ATD on responses to anxiogenic challenges. This has been shown using a 5% CO2 challenge (Miller et al. 2000), the hypercapnic Read rebreathing technique (Struzik et al. 2002), a simulated public speaking challenge (Monteiro-dos-Santos et al. 2000), and CCK-4 (Koszycki et al. 1996). In contrast, a study by Goddard et al. (1995) showed increased nervousness in response to a yohimbine challenge after ATD compared to yohimbine alone.\nTo enhance 5-HT availability in healthy subjects, Maron et al. (2004) administered 5-HTP (the direct precursor of 5-HT) and used CCK-4 as a panicogenic challenge in 32 healthy volunteers. They observed a significant reduction, compared to placebo, in CCK-4-induced panic attacks and cognitive symptoms only in females, whereas in males only a decrease in somatic symptoms was observed. In the study by Schruers et al. (2002b), 5-HTP did not alter the response to 35% CO2 compared to placebo. Taken together, these data and our results indicate that the effects of Trp depletion in healthy volunteers largely depend on the type of challenge and its relative potency, and might be additionally confounded by gender effects.\nStudies in PD patients demonstrated that ATD increased the response to single-breath 35% CO2 (Schruers et al. 2000) and 5% CO2-induced anxiety (Miller et al. 2000), and that administration of 5-HTP blunted the response to single-breath 35% CO2 (Schruers et al. 2002b).\nATD also increased the response to flumazenil infusion in PD patients successfully treated with selective serotonin reuptake inhibitors (SSRI) (Bell et al. 2002; Davies et al. 2006) or CBT (Bell et al. 2009). In contrast, the subjective response to CCK-4 was not influenced by ATD in SSRI-treated PD patients (Toru et al. 2006). Also, ATD did not have significant effects on the subjective response to anxiogenic challenges in OCD patients (Barr et al. 1994; Kulz et al. 2007) or in SSRI-treated GAD patients (Hood et al. 2010). 
It is interesting to note that in the latest study of Hood and colleagues (2010), using a 7.5% CO2 challenge, some subjective measures of anxiety (“something bad is going to happen”, “anxious”, “secure”) seemed to indicate an anxiolytic rather than anxiogenic effect of ATD; however, pairwise comparisons were not significantly different between conditions.\nAs a further confirmation of the role of 5-HT in modulating experimental panic, serotonergic drugs that are effective in treating panic disorder, such as SSRIs and tricyclics, also reduce the fear that patients with PD experience when they inhale CO2 (Bertani et al. 2001; Bertani et al. 1997; Perna et al. 2004; Perna et al. 2002; Perna et al. 1997; Pols et al. 1996).\nIn summary, a number of studies in PD indicate that availability of 5-HT precursors is inversely related to vulnerability to CO2 challenges, which is at odds with the present results. However, overall findings in anxiety disorder patients suggest that the effects of Trp manipulation specifically depend on the diagnosis and the type of anxiogenic challenge.\n[SUBTITLE] Panic, 5-HT, and CO2: a complex relationship [SUBSECTION] Accumulating evidence from clinical and experimental research and genetic studies suggests a substantial role for the 5-HT system in the neurobiology of PD (see Maron and Shlik 2006). The relationship between 5-HT and panic is complex, as exemplified by the notion that SSRIs are effective in reducing panic, but they may exacerbate anxiety during the initial phase of treatment (Sinclair et al. 2009). It has been hypothesized that panic is associated with either 5-HT excess (Iversen 1984) or 5-HT deficit (Deakin and Graeff 1991). Deakin and Graeff proposed that the 5-HT system plays a dual role in the modulation of anxiety by inhibiting panic responses, but contributing to anticipatory or generalized anxiety. Our findings presented here are not in line with this theory, as in our experimental design increased 5-HT availability did not suppress CO2-induced panic responses in healthy volunteers. However, recent studies suggest that other factors should be taken into account in understanding the relationship between 5-HT, CO2, and anxiety: a subset of 5-HT neurons, located in the chemosensitive zone (ventrolateral medulla and raphe) and associated with large arteries, are intrinsically chemosensitive in vitro (Severson et al.
2003) and are stimulated by hypercapnia in vivo in unanaesthetized animals (Veasey et al. 1995, 1997). Furthermore, experiments using in vivo microdialysis showed that increasing inhaled CO2 causes an increase in 5-HT release (Kanamaru and Homma 2007). Interestingly, mice selectively lacking 5-HT neurons display a blunted respiratory response to CO2, indicating that 5-HT neurons are required for normal central chemoreception (Hodges and Richerson 2008). Richerson (2004) proposed that medullary 5-HT neurons control ventilatory responses to CO2 and project to areas such as the forebrain and limbic system that are involved in affective regulation. Taking into account the panicogenic properties of CO2, we have previously speculated that these neurons could be part of an adaptive protective mechanism alerting the organism against the risk of impending asphyxia (Griez et al. 2007). Taking all these preclinical findings together, it appears that acute hypercapnia stimulates serotonergic neurons and 5-HT release and that, conversely, disruption of the 5-HT system blunts the neuronal response to CO2.\nOur data are in line with the above preclinical evidence; we effectively reduced the availability of 5-HT precursors and used CO2 as a panicogenic challenge. In agreement with the data of Hodges and Richerson (2008), we found that depletion of 5-HT precursors blunted the subjective response to CO2, particularly the respiratory sensations, and that Trp availability was positively correlated with the intensity of the subjective responses.\n[SUBTITLE] Methodological limitations [SUBSECTION] Some methodological considerations concern the validity of Trp manipulations to alter 5-HT availability. Trp depletion and Trp loading are relatively easy procedures to rapidly and reversibly decrease and increase, respectively, the levels of 5-HT precursors (Hood et al. 2005). The first step in 5-HT biosynthesis is the conversion of Trp to 5-HTP by Trp hydroxylase. In the brain, this enzyme is only 50% saturated and the rate at which 5-HT is synthesized is limited only by substrate (Trp) availability. The LNAA transport system at the blood–brain barrier has a high affinity for all the LNAAs, including Trp. Therefore, the ratio Trp:ΣLNAA in plasma is generally used to predict the availability of Trp to the brain. A large body of literature provides evidence that manipulations of the levels of Trp in plasma result in a substantial and parallel alteration of 5-HT synthesis in the brain and of the availability of 5-HT and its metabolite in humans and animals (Biggio et al. 1974; Carpenter et al. 1998; Gessa et al. 1974; Leathwood 1987; Lieben et al. 2004; Williams et al. 1999). However, an alteration of 5-HT efflux (thought to reflect synaptic release, i.e., 5-HT neuronal activity) was only reported after chronic Trp depletion (Fadda et al. 2000) or after ATD in combination with 5-HT reuptake inhibition (Bel and Artigas 1996), but not after ATD alone (van der Plasse et al. 2007). In vitro studies suggest that an increase in Trp availability produces dose-dependent changes in 5-HT release under conditions of increased serotonergic neuronal activity but not in basal output of 5-HT (Sharp et al. 1992; Wolf and Kuhn 1986).\nWe assume that the alterations in the Trp:ΣLNAA ratio in the present study were followed by changes in brain 5-HT availability and synthesis in the same direction. However, at present, we cannot be certain that Trp depletion and Trp loading actually influenced 5-HT release. Nevertheless, the possibility exists that the manipulations in the present study did affect 5-HT neuronal release, based on the abovementioned notions that acute hypercapnia stimulates serotonergic neurons and induces 5-HT release, and that Trp manipulations seem to be effective in altering 5-HT release only in stimulated neurons.\nA number of other methodological issues need to be addressed. To manipulate brain Trp availability, the studies in healthy volunteers by both Klaassen et al. (1998) and Hood et al. (2006) used the classic amino acid mixture (Young et al. 1985), the former including an addition of carbohydrates and fat.
In these studies, ATD appeared to be as effective as it was in ours and resulted in a significant decrease of plasma Trp levels. However, the magnitude of the depletion seems difficult to compare with that in our study due to some methodological differences: in the study of Klaassen et al. (1998), no baseline Trp plasma levels were collected, and in the study of Hood et al. (2006), free Trp plasma levels instead of total Trp levels were used for calculation of the Trp:ΣLNAA ratio. It is still debated whether total Trp or free Trp levels in plasma are the most reliable indirect measures of brain Trp availability (Pardridge 1998). Therefore, methodological differences might account for the divergent findings in our study compared to previous studies in healthy volunteers.\n[SUBTITLE] Conclusions [SUBSECTION] It is recognized that the 5-HT system plays a complex role in the regulation of the panic responses to CO2, as different serotonergic mechanisms can coexist, either inhibiting panic (Deakin and Graeff 1991) or promoting the aversive respiratory sensations to hypercapnic stimuli. The differences observed in our study in healthy volunteers, compared to previous findings in PD patients, might depend on the different relative contribution of these mechanisms in different populations.", "A large body of evidence suggests that manipulation of 5-HT precursors availability has an influence in modulating panic and anxiety (see review by Maron et al. 2008). However, some of the findings from experimental studies, both in healthy volunteers and PD patients, are divisive and indicate that the role of 5-HT might not be unique.\nHood et al. (2006) and Klaassen et al. (1998) tested the effects of ATD on the response to single-breath 35% CO2 challenge in healthy subjects. In a study on 14 healthy volunteers, no significant differences were found in subjective measures between ATD and balanced conditions, although ATD resulted in an increased CO2-induced elevation of cortisol compared to the balanced condition (Hood et al. 2006). Klaassen et al. study (1998), including 15 volunteers, found a significant increase in CO2-induced neurovegetative symptoms after ATD. In that study sample, mean net increases in total PSL scores and VAAS scores after CO2 were <7 and <7, respectively (Klaassen et al. 1998). Subjective CO2-induced anxiety was, therefore, very mild, hence indicating that single-breath 35% CO2 in healthy subjects did not evoke “real” panic. In the present study, using double-breath CO2, we found a more than 30% higher increase in PSL scores and more than 500% higher increase in VAAS scores after CO2 compared to Klaassen et al. (1998) study. Average VAAS and PSL scores in the present double-breath 35% CO2 study are comparable to those normally found in PD patients after single-breath 35% CO2 (Verburg et al. 1998). Other challenges in healthy volunteers found no significant effect of ATD on responses to anxiogenic challenges. This has been showed using a 5% CO2 challenge (Miller et al. 2000) and the hypercapnic Read rebreathing technique (Struzik et al. 2002), a simulated public speaking challenge (Monteiro-dos-Santos et al. 2000), and CCK-4 (Koszycki et al.
1996). In contrast, a study by (Goddard et al. 1995) showed increased nervousness in response to yohimbine challenge after ATD compared to yohimbine alone.\nTo enhance 5-HT availability in healthy subjects, (Maron et al. 2004) administered 5-HTP (direct precursor of 5-HT) and used CCK-4 as panicogenic challenge in 32 healthy volunteers. They observed a significant reduction compared to placebo in CCK-4-induced panic attacks and cognitive symptoms only in females, whereas in males only a decrease in somatic symptoms was observed. In Schruers et al. (2002b) study, 5-HTP did not alter the response to 35% CO2 compared to placebo. Taken together, these data and our results indicate that the effects of Trp depletion in healthy volunteers largely depend on the type of challenge and its relative potency, and might be additionally confounded by gender effects.\nStudies in PD patients demonstrated that ATD increased the response to single-breath 35% CO2 (Schruers et al. 2000) and 5% CO2 anxiety (Miller et al. 2000), and administration of 5-HTP blunted the response to single-breath 35% CO2 (Schruers et al. 2002b)\nATD also increased the response to flumazenil infusion in PD patients successfully treated with selective serotonin reuptake inhibitors (SSRI) (Bell et al. 2002; Davies et al. 2006) or CBT (Bell et al. 2009). In contrast, subjective response to CCK-4 is not influenced by ATD (Toru et al. 2006) in SSRI-treated PD patients. Also, ATD did not have significant effects on the subjective response to anxiogenic challenges in OCD (Barr et al. 1994; Kulz et al. 2007) and in GAD SSRI-treated patients (Hood et al. 2010). It is interesting to note that in the latest study of Hood and colleagues (2010), using 7.5% CO2 challenge, some subjective measures of anxiety (“something bad is going to happen”, “anxious”, “secure”), seemed to indicate an anxiolytic effect of ATD rather than anxiogenic; however, pairways comparison was not significantly different between conditions.\nAs a further confirmation of the role of 5-HT in modulating experimental panic, serotonergic drugs that are effective in treating panic disorder, like SSRI and tryciclics, also reduce the fear that patients with PD experience when they inhale CO2 (Bertani et al. 2001; Bertani et al. 1997; Perna et al. 2004; Perna et al. 2002; Perna et al. 1997; Pols et al. 1996).\nIn summary, a number of studies in PD indicate that availability of 5-HT precursors is inversely related to vulnerability to CO2 challenges, which is at odds with the present results. However, overall findings in anxiety disorder patients suggest that the effects of Trp manipulation specifically depend on the diagnosis and the type of anxiogenic challenge.", "Accumulating evidence from clinical and experimental research and genetic studies suggest a substantial role for the 5-HT system on the neurobiology of PD (see Maron and Shlik 2006). The relationship between 5-HT and panic is complex, as exemplified by the notion that SSRI are effective in reducing panic, but they may exacerbate anxiety during the initial phase of treatment (Sinclair et al. 2009). It has been hypothesized that panic is associated to either 5-HT excess (Iversen 1984) and 5-HT deficit (Deakin and Graeff 1991). Deakin & Graeff proposed that the 5-HT system plays a dual role in the modulation of anxiety by inhibiting panic responses, but contributing to anticipatory or generalized anxiety. 
Our findings presented here are not in line with this theory, as in our experimental design increased 5-HT availability did not suppress CO2-induced panic responses in healthy volunteers. However, recent studies suggest that other factors should be taken into account in understanding the relationship between 5-HT, CO2 and anxiety: a subset of 5-HT neurons, located in the chemosensitive zone (ventrolateral medulla and raphe) and associated with large arteries, are intrinsically chemosensitive in vitro (Severson et al. 2003) and are stimulated by hypercapnia in vivo in unanaesthetized animals (Veasey et al. 1995, 1997). Furthermore, experiments using in vivo microdialysis showed that increasing inhaled CO2 causes an increase in 5-HT release (Kanamaru and Homma 2007). Interestingly, mice selectively lacking 5-HT neurons display a blunted respiratory response to CO2, indicating that 5-HT neurons are required for normal central chemoreception (Hodges and Richerson 2008). Richerson (2004) proposed that medullary 5-HT neurons control ventilatory responses to CO2 and project to areas like forebrain and limbic system that are involved in affective regulation. Taking into account the panicogenic properties of CO2, we have previously speculated that these neurons could be part of an adaptive protective mechanism alerting the organism against the risk of impending asphyxia (Griez et al. 2007). Taken all these preclinical findings together, it appears that acute hypercapnia stimulates serotonergic neurons and 5-HT release and that conversely, disruption of the 5-HT system blunts the neuronal response to CO2.\nOur data are in line with the above preclinical evidence; we have effectively reduced availability of 5-HT precursors and we have used CO2 as a panicogenic challenge. In agreement with Hodges and Richerson (2008) data, we found that depletion of 5-HT precursors blunted the subjective response to CO2, particularly the respiratory sensations, and Trp availability was positively correlated with the intensity of the subjective responses.", "Some methodological considerations regard the validity of Trp manipulations to alter 5-HT availability. Trp depletion and Trp loading are relatively easy procedures to rapidly and reversibly change, decrease and increase respectively, the levels of 5-HT precursors (Hood et al. 2005). The first step in 5-HT biosynthesis is the conversion of Trp to 5-HTP by Trp hydroxylase. In the brain, this enzyme is only 50% saturated and the rate at which 5-HT is synthesized is limited only by substrate (Trp) availability. The LNAA transport system at the blood–brain barrier has a high affinity for all the LNAAs, including Trp. Therefore, the ratio Trp:ΣLNAA in plasma is generally used to predict the availability of Trp to the brain. A large body of literature provides evidence that manipulations of the levels of Trp in plasma results in a substantial and parallel alteration of 5-HT synthesis in the brain and availability of 5-HT and its metabolite in humans and animals (Biggio et al. 1974; Carpenter et al. 1998; Gessa et al. 1974; Leathwood 1987; Lieben et al. 2004; Williams et al. 1999). However, an altered release of 5-HT efflux (thought to reflect synaptic release, i.e., 5-HT neuronal activity) was only reported after chronic Trp depletion (Fadda et al. 2000) or after ATD in combination with 5-HT reuptake inhibition (Bel and Artigas 1996), but not after ATD alone (van der Plasse et al. 2007). 
In vitro studies suggest that an increase in Trp availability determines dose-dependent changes in 5-HT release under conditions of increased serotonergic neuronal activity but not on basal output of 5-HT (Sharp et al. 1992; Wolf and Kuhn 1986).\nWe assume that the alterations in the Trp:ΣLNAA ratio of the present study were followed by changes in brain 5-HT availability and synthesis in the same direction. However, at present, we cannot assure that Trp depletion and Trp loading actually influenced 5-HT release. Nevertheless, the possibility exists that the manipulations in the present study did affect 5-HT neuronal release, based on the abovementioned notions that acute hypercapnia stimulates serotonergic neurons and induces 5-HT release, and that Trp manipulations seem to be effective in altering 5-HT release only in stimulated neurons.\nA number of other methodological issues need to be addressed. To manipulate brain Trp availability, the studies in healthy volunteers of both Klaassen et al. (1998) and Hood et al. (2006) included the use of the classic amino acid mixture (Young et al. 1985), the former including an addition of carbohydrates and fat. In these studies, ATD appeared to be as effective as it was in ours and resulted in a significant decrease of plasma Trp levels. However, the magnitude of the depletion seems difficult to compare with that in our study due to some methodological differences: in the study of Klaassen et al. (1998), no baseline Trp plasma levels were collected, and in the study of Hood et al. (2006), free Trp plasma levels instead of total TRP levels were used for calculation of the Trp:ΣLNAA ratio. It is still debated whether total Trp or free Trp levels in plasma are the most reliable indirect measures of brain Trp availability (Pardridge 1998). Therefore, methodological differences might account for the divergent findings in our study compared to previous studies in healthy volunteers.", "It is recognized that 5-HT system plays a complex role in the regulation of the panic responses to CO2, as different serotonergic mechanisms can coexist, either inhibiting panic (Deakin and Graeff 1991) or promoting the aversive respiratory sensations to hypercapnic stimuli. The differences observed in our study in healthy volunteers, compared to previous findings in PD patients, might depend on the different relative contribution of these mechanisms in different populations." ]
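Editorial note: the Trp:ΣLNAA ratio and its percentage change discussed above are simple arithmetic quantities, and the reported associations with the CO2 response are rank correlations. The sketch below is only an illustration of how such values could be computed; it is not the authors' analysis code, and all concentrations, VAAS changes, and variable names are invented placeholders.

```python
# Illustrative sketch only (not the study's analysis code); all values are invented.
from scipy.stats import spearmanr

OTHER_LNAAS = ("valine", "leucine", "isoleucine", "phenylalanine", "tyrosine")

def trp_lnaa_ratio(plasma_umol_l):
    """Plasma Trp divided by the sum of the competing large neutral amino acids."""
    return plasma_umol_l["tryptophan"] / sum(plasma_umol_l[aa] for aa in OTHER_LNAAS)

def pct_change(baseline, post):
    """Percentage change of the ratio from baseline to a post-treatment time point."""
    return 100.0 * (post - baseline) / baseline

# One hypothetical baseline sample, only to show the ratio calculation.
baseline_sample = {"tryptophan": 45.0, "valine": 230.0, "leucine": 140.0,
                   "isoleucine": 70.0, "phenylalanine": 55.0, "tyrosine": 60.0}
print(f"Baseline Trp:LNAA ratio = {trp_lnaa_ratio(baseline_sample):.3f}")

# Hypothetical per-subject ratios at T0 and T3 and CO2-induced VAAS changes
# (depletion lowers the ratio, loading raises it).
ratio_t0 = [0.11, 0.10, 0.12, 0.09, 0.13, 0.10]
ratio_t3 = [0.02, 0.03, 0.25, 0.09, 0.30, 0.04]
delta_vaas = [5, 8, 40, 22, 55, 10]

delta_ratio_pct = [pct_change(t0, t3) for t0, t3 in zip(ratio_t0, ratio_t3)]
rho, p_value = spearmanr(delta_ratio_pct, delta_vaas)
print(f"Spearman's rho = {rho:.3f}, p = {p_value:.3g}")
```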
[ "introduction", "materials|methods", null, null, null, null, null, null, null, null, "results", null, null, null, "discussion", null, null, null, null ]
[ "Anxiety", "Panic", "Stress", "5-HT", "Hypercapnia", "Tryptophan", "Carbon dioxide", "CO2 challenge", "Fear", "Respiration" ]
Direct-acting antiviral therapy for hepatitis C: attitudes regarding future use.
21336604
Response to current therapy of hepatitis C virus (HCV) is suboptimal. Direct-acting antiviral therapies (DAA) are expected to improve treatment outcomes. Additional treatments for HCV will invariably make therapeutic choices and patient management more complex. We hypothesize that current perceptions regarding the complexity of DAA therapy will influence attitudes towards future use by practitioners who are currently treating HCV.
INTRODUCTION
An Internet-based survey was sent to 10,082 AASLD and AGA members to determine if they treat HCV infection, their knowledge of DAA therapies, attitudes towards current and future HCV treatments, and if they participated in clinical trials using DAA agents.
METHODS
Out of a total of 1,757 individuals responding to the survey, 75% treat HCV; 79% were MDs, 67% were Gastroenterologists, and 24% were Hepatologists. Of the respondents, 77% indicated they were "very aware" or "aware" of DAA therapies, 20% participated in clinical trials, and 3% had minimal knowledge of DAA agents. Comparing treatment "today" versus in the future when DAAs were available, 85 vs. 81% would treat (p = 0.0054), 6 vs. 10% would refer to an "HCV expert" (p = 0.016), and 1% would refer to an ID specialist. Of respondents with "minimal knowledge" of DAA, 52% stated that they would use them in the future.
RESULTS
Although the majority of respondents appear ready to utilize DAA agents in the future, referrals to "hepatitis C experts" will increase. More than half of respondents with "minimal knowledge" of DAA therapies also appear to be willing to utilize these compounds, raising concerns regarding their inappropriate use. Broad education of healthcare providers to prevent inappropriate use of these agents will be critical.
CONCLUSIONS
[ "Antiviral Agents", "Clinical Protocols", "Data Collection", "Health Knowledge, Attitudes, Practice", "Hepatitis C", "Humans", "Surveys and Questionnaires" ]
3082020
Introduction
Hepatitis C virus (HCV) represents the most common chronic blood-borne viral infection in the United States [1, 2]. At present, response to currently available therapy remains suboptimal as a significant number of patients fail to achieve a sustained virologic response to therapy [3–10]. Recent discoveries related to the life cycle and pathobiology of HCV have led to the development of novel therapies that directly inhibit viral replication. These compounds, characterized as “specifically targeted antiviral therapies against HCV” (STAT-C) or “direct-acting antiviral agents” (DAA) have been investigated in naive as well as previously treated patients, and preliminary data from these studies have been encouraging [11–14]. Along with these encouraging results, these and other publications which describe experience with DAA agents have documented the emergence of HCV resistance [15, 16], as well as significant treatment-related adverse events including rash, gastrointestinal side-effects, and anemia [11–14]. As health care providers with interest and experience in treating HCV become aware of emerging data using novel therapeutic agents, we hypothesize that current perceptions regarding the complexity and side-effects of DAA therapies will influence decisions regarding the future use of these agents. To evaluate attitudes regarding the future use of these agents by individuals who are currently treating HCV, we sent an Internet-based survey to all United States-based members of the American Gastroenterology Association (AGA) and American Association for the Study of Liver Disease (AASLD). Members of these societies were chosen because they represent both the vast majority of HCV treaters in the United States, and a population of clinicians likely to have knowledge of DAA therapies. Recipients of the survey were queried regarding their primary professional affiliation and focus of practice, attitudes towards current and future HCV therapy, as well as participation in clinical trials using DAA agents. We determined if any of these parameters affected attitudes related to current and future treatment decisions regarding HCV.
null
null
Results
Of the 10,082 surveys sent, 8,449 were deliverable. The most common reasons for inability to deliver a survey included use of an e-mail filter or vacation message by the intended recipient. A total of 1,757 individuals responded to the survey, representing a 21% response rate. A recent analysis performed by supersurvey.com revealed a mean response rate of 18% to online surveys of similar question size and target audience (www.supersurvey.com). The survey questions and responses appear in Table 1. If a respondent stated that they did not treat HCV, their participation in the survey ended after question 1.\nTable 1 Responses to questions by respondents who treat HCV:\n- Highest level of education: 79% MD; 10% PA; 8% MD-PhD; 2% DO; 1% PhD\n- Year of graduation from the last school you attended: 32% within the last 10 years; 25% 11–20 years ago; 21% 21–30 years ago; 12% >30 years ago\n- Primary focus of practice: 67% Gastroenterology; 24% Hepatology and/or liver transplantation; 2% Infectious disease; 7% “other”\n- Location of primary practice: 46% private practice not associated with a medical school; 42% medical school/hospital associated with a medical school; 8% private practice associated with a medical school; 4% “other”\n- Primary professional affiliation: 55% AGA; 26% AASLD; 10% “other”; 9% ASGE\n- “If I saw a patient with HCV today”: 85% would treat; 6% would refer them to a “hepatitis C expert”; 4% would refer them to an NP or PA in their practice; 4% would refer to another MD in their practice; 1% would refer them to an Infectious Disease specialist\n- Awareness of DAA therapy: 77% “aware” or “very aware” without participation in any clinical trials; 20% “very aware” with experience using these agents in clinical trials; 3% “minimal knowledge” of STAT-C agents\n- If DAA therapies were available today: 81% would evaluate and treat the patient; 10% would refer to a “hepatitis C expert”; 5% would refer to another physician in their group; 4% would refer the patient to an NP or PA in their practice; <1% would refer to an infectious disease specialist\nOf the respondents, 75% (1,320) stated that they treat HCV, and of these respondents, 79% were MDs, 10% were physician assistants or nurse practitioners, 8% MD-PhDs, 2% DO, and 1% PhD. Of the respondents, 32% graduated within the last 10 years, 25% 11–20 years ago, 21% 21–30 years ago, and 12% greater than 30 years ago.\nWhen analyzing results based on focus of practice, 67% of respondents stated that gastroenterology was the primary focus of their clinical practice, 24% selected hepatology and/or liver transplantation, 2% infectious disease, and 7% stated “other” as the primary focus of their practice. The “other” respondents included primary care doctors, surgeons, as well as Gastroenterologists who considered hepato-biliary disease, oncology, or pancreatic disease their primary specialty. Forty-six percent of the respondents were in a private practice not associated with a medical school, 42% at a medical school or hospital associated with a medical school, 8% in a private practice associated with a medical school, and the remaining 4% of respondents practiced in a multispecialty group practice, Veterans Administration hospital, or hospital not associated with a medical school.
Professional affiliations were assessed; 55% of respondents considered the American Gastroenterological Association (AGA) their primary professional affiliation, 26% the American Association for the Study of Liver Disease (AASLD), 9% the American Society for Gastrointestinal Endoscopy (ASGE), and 10% “other”, including the American College of Gastroenterology (ACG), American Medical Association (AMA), and the American Society for Transplantation (AST).\nWhen queried regarding what they would do when presented with an HCV-infected patient “today,” 85% of the respondents would treat them, 6% would refer them to a “hepatitis C expert,” 4% would refer them to a physician extender (PA, NP, or specially trained nurse) in their practice, 4% would refer to another MD in their practice, and 1% would refer them to an infectious disease specialist. Related to future therapies for HCV including DAAs in combination with interferon and ribavirin, 77% of the respondents were “aware” or “very aware” of this concept but had not participated in any clinical trials using DAA agents, 20% were “very aware” and had experience using these agents in clinical trials, and 3% had “minimal knowledge” of DAA agents.\nWhen queried regarding treatment approaches if a DAA agent were available “today,” 81% of respondents would evaluate and treat the patient, 10% would refer the patient to a “hepatitis C expert,” 5% would refer the patient to another physician in their group, and 4% would refer the patient to a physician extender (PA, NP, or specially trained nurse) in their practice. Less than 1% of respondents reported that they would refer the patient to an infectious disease specialist.\nAnalysis of survey results (Table 2):\nTable 2 Current and future therapy, and future referral to an HCV specialist. Columns for each respondent characteristic: would treat HCV today (%), would treat with DAA in the future (%), future referral to HCV specialist (%).\n- Overall: 85; 81; 10; p = 0.0054\n- AASLD member: 91 (A); 91 (B); 3 (C) vs. AGA member: 84 (A); 79 (B); 12 (C); p: A = 0.002; B, C = 0.001\n- Hepatologist: 93 (D); 90 (E); 1 (F) vs. Gastroenterologist: 86 (D); 81 (E); 8 (F); p: D = 0.0034; E, F = 0.001\n- Private practice: 91 (G); 89 (H); 12 (I) vs. Academic practice: 81 (G); 75 (H); 15 (I); p: G, H = 0.001; I = NS\n- Participated in DAA clinical trial: 91 (J); 90 (K); 4 (L) vs. Minimal knowledge of DAA: 59 (J); 52 (K); 22 (L); p: J, K, L = 0.0001\nThe p values refer to the statistical significance of the comparisons between groups sharing the same letter (A to A, B to B, C to C, etc.).\nOverall, a significant number of respondents who treat HCV today with currently available therapies indicated that they would not prescribe DAA therapies when they became available; 85% of respondents to the survey would evaluate and treat an HCV-infected patient “today” versus 81% when DAA agents became available (p = 0.0054). Similarly, more respondents indicated that they would refer their patients to a “hepatitis C specialist” after DAA agents became available (6% currently vs. 10% after DAA availability; p = 0.016).\nSignificant differences existed based on primary professional affiliation in attitudes and experience of survey respondents related to present and future HCV therapy.
Ninety-one percent of respondents who considered the AASLD their primary affiliation compared to 84% of AGA members would treat their HCV-infected patient today (p = 0.002), 48% of AASLD members versus 9% of AGA members participated in clinical trials with a DAA agent (p = 0001), and 91% of AASLD members versus 79% of AGA members would use a DAA agent to treat their HCV-infected patient in the future (p = 0.001). A greater percentage of AGA members (12%) stated that they would refer their patient to an “HCV expert” when DAA agents became available compared to AASLD members (3%) (p = 0.001). When analyzing attitudes related to current and future therapy of HCV based on focus of clinical practice, a comparison of Hepatologists/liver transplantation physicians to Gastroenterologists revealed that 93% versus 86% would treat their HCV-infected patient today (p = 0.0034), 48% compared to 8% participated in clinical trials using DAA agents (p = 0.001), 90% versus 81% would treat their patient with a DAA agent (p = 0.001), and 1% versus 8% would refer their patient to an “HCV expert” when DAA therapies became available (p = 0.001). Practice type was also associated with differences in attitudes related to current and future therapy of HCV. Comparison of respondents in private practice versus those who practiced at a medical school or hospital associated with a medical school revealed that respectively, 91% versus 81% would treat their HCV-infected patient today (p = 0.001), 9% versus 30% participated in a clinical trial with a DAA agent (p = 0.001), 89% versus 75% would use DAA therapies when they became available (p = 0.001), and 12% compared to 15% would refer to an “HCV expert” when DAA therapies became available (P = NS). Current awareness and participation in clinical trials with DAA agents influenced attitudes regarding future use of these therapies, as 90% of respondents who participated in clinical trials with DAAs, 81% of those who were “very aware” or “aware” of DAA agents but did not participate in clinical trials, and 59% of those with minimal knowledge of DAA therapies would prescribe these agents in the future (p = 0.0001 clinical trial participant/very aware or aware of DAA agents vs. minimal knowledge). Respondents with “minimal knowledge” of DAA agents also reported that when these compounds were available in the future, 25% would refer their HCV-infected patient to a PA, NP, or other MD in their practice, 22% would refer to an “HCV expert,” and 1% would refer to an ID specialist. Respondents who reported “minimal knowledge of DAA agents” were more likely to consider the AMA their primary professional organization (20 vs. 1% AGA/AASLD (p = 0.001)), or be in a private practice not associated with a medical school (83 vs. 17% practice or hospital associated with a medical school p = 0.0001).
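Editorial note: the response-rate figure and the affiliation comparisons above are straightforward proportion arithmetic. The sketch below illustrates the kind of calculation involved; because the paper reports only percentages and p values, the per-group counts used here are rough assumptions (estimated from the 26% and 55% shares of the 1,320 treaters), and the 2x2 chi-square shown is a generic example rather than the authors' exact statistical method.

```python
# Illustrative sketch only: counts below are assumptions, not the paper's raw data.
from scipy.stats import chi2_contingency

surveys_sent, deliverable, responded = 10_082, 8_449, 1_757
print(f"Response rate: {responded / deliverable:.1%} of deliverable surveys")  # ~21%

# Approximate counts of respondents who would / would not treat an HCV patient today.
aasld = [312, 31]    # ~91% of an assumed 343 AASLD-affiliated treaters
aga = [610, 116]     # ~84% of an assumed 726 AGA-affiliated treaters

chi2, p_value, dof, _expected = chi2_contingency([aasld, aga])
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```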
null
null
[]
[]
[]
[ "Introduction", "Methods", "Results", "Discussion" ]
[ "Hepatitis C virus (HCV) represents the most common chronic blood-borne viral infection in the United States [1, 2]. At present, response to currently available therapy remains suboptimal as a significant number of patients fail to achieve a sustained virologic response to therapy [3–10]. Recent discoveries related to the life cycle and pathobiology of HCV have led to the development of novel therapies that directly inhibit viral replication. These compounds, characterized as “specifically targeted antiviral therapies against HCV” (STAT-C) or “direct-acting antiviral agents” (DAA) have been investigated in naive as well as previously treated patients, and preliminary data from these studies have been encouraging [11–14]. Along with these encouraging results, these and other publications which describe experience with DAA agents have documented the emergence of HCV resistance [15, 16], as well as significant treatment-related adverse events including rash, gastrointestinal side-effects, and anemia [11–14].\nAs health care providers with interest and experience in treating HCV become aware of emerging data using novel therapeutic agents, we hypothesize that current perceptions regarding the complexity and side-effects of DAA therapies will influence decisions regarding the future use of these agents. To evaluate attitudes regarding the future use of these agents by individuals who are currently treating HCV, we sent an Internet-based survey to all United States-based members of the American Gastroenterology Association (AGA) and American Association for the Study of Liver Disease (AASLD). Members of these societies were chosen because they represent both the vast majority of HCV treaters in the United States, and a population of clinicians likely to have knowledge of DAA therapies. Recipients of the survey were queried regarding their primary professional affiliation and focus of practice, attitudes towards current and future HCV therapy, as well as participation in clinical trials using DAA agents. We determined if any of these parameters affected attitudes related to current and future treatment decisions regarding HCV.", "This study was reviewed and approved by the institutional review board at the Albert Einstein College of Medicine/Montefiore Medical Center. The e-mail addresses of the 10,082 US-based members of the AGA and AASLD were compiled from the 2009 member directories for both organizations. AGA and AASLD members were selected to be surveyed as they represent the majority of HCV treaters in the United States. Prescriber ProfilerTM data provided to the authors by IMS Health Incorporated, reflecting a US-based database of retail pharmacies and total dispensed prescriptions of CopegusTM, Intron-ATM, InfergenTM, PegasysTM, Peg-IntronTM, RebetolTM, RebetronTM, RibasphereTM, and RibavirinTM indicated that between December 2008 and November 2009, 177,300 prescriptions were written. Gastroenterologists or Hepatologists wrote approximately 94,000 or 55% of these, validating our hypothesis that the targeted survey recipients were a group which treated HCV most frequently. Internal medicine physicians (11%) and nurse practitioners (8%) were the next most common prescribers [17]. Using an Internet-based survey engine (“SurveyMonkey” (surveymonkey.com)) a nine-question survey was sent to each AGA or AASLD member; only one questionnaire was sent to individuals who are members of both organizations. 
If an individual did not respond to the survey, the survey was re-sent with a second request to complete the survey. All results were tabulated, and statistical analysis was performed using Stata version 9.2, Statcorp LP, College Station, Texas.\n", "Of the 10,082 surveys sent, 8,449 were deliverable. The most common reasons for inability to deliver a survey included use of an e-mail filter or vacation message by the intended recipient. A total 1,757 individuals responded to the survey, representing a 21% response rate. A recent analysis performed by supersurvey.com revealed a mean response rate of 18% to online surveys of similar question size and target audience (www.supersurvey.com). The survey questions and responses appear in Table 1. If a respondent stated that they did not treat HCV, their participation in the survey ended after question 1.Table 1Responses to questions by respondents who treat HCVHighest level of education79% MD10% PA8% MD-PhDs2% DO1% PhDYear of graduation from the last school you attended32% within the last 10 years25% 11–20 years ago21% 21–30 years ago12% > 30 years agoPrimary focus of practice67% Gastroenterology24% Hepatology and/or liver transplantation2% Infectious disease7% “other”Location of primary practice46% private practice not assoc with a medical school42% medical school/hospital assoc with a medical school8% private practice associated with a medical school4% “other”Primary professional affiliation55% AGA26% AASLD10% “other”9% ASGE“If I saw a patient with HCV today”…85% would treat6% would refer them to a “hepatitis C expert”4% would refer them to NP, PA in their practice4% would refer to another MD in their practice1% would refer them to an Infectious Disease specialist.Awareness of DAA therapy77% “aware” or “very aware” without participation in any clinical trials20% were “very aware” and had experience using these agents in clinical trials3% had “minimal knowledge” of STAT-C agentsIf DAA therapies were available today81% would evaluate and treat the patient10% would refer to a “Hepatitis C expert”5% would refer to another physician in their group4% would refer the patient to NP, PA in their practice<1% would refer to an infectious disease specialist\n\nResponses to questions by respondents who treat HCV\nOf the respondents, 75% (1,320) stated that they treat HCV, and of these respondents, 79% were MDs, 10% were physician assistants or nurse practitioners, 8% MD-PhDs, 2% DO, and 1% PhD. Of the respondents, 32% graduated within the last 10 years, 25% 11–20 years ago, 21% 21–30 years ago, and 12% greater than 30 years ago.\nWhen analyzing results based on focus of practice, 67% of respondents stated that gastroenterology was the primary focus of their clinical practice, 24% selected hepatology and/or liver transplantation, 2% infectious disease, and 7% stated “other” as the primary focus of their practice. The “other” respondents included primary care doctors, surgeons, as well as Gastroenterologists who considered hepato-biliary disease, oncology, or pancreatic disease their primary specialty. 
Forty-six percent of the respondents were in a private practice not associated with a medical school, 42% at a medical school or hospital associated with a medical school, 8% in a private practice associated with a medical school, and the remaining 4% of respondents practiced in a multispecialty group practice, Veterans Administration hospital, or hospital not associated with a medical school.\nProfessional affiliations were assessed; 55% of respondents considered the American Gastroenterological Association (AGA) their primary professional affiliation, 26% the American Association for the Study of Liver Disease (AASLD), 9% the American Society for Gastrointestinal Endoscopy (ASGE), and 10% “other” including the American College of Gastroenterology (ACG), American Medical Association (AMA), and the American Society for Transplantation (AST).\nWhen queried regarding what they would do when presented with an HCV-infected patient “today,” 85% of the respondents would treat them, 6% would refer them to a “hepatitis C expert,” 4% would refer them to a physician extender (PA, NP, or specially trained nurse) in their practice, 4% would refer to another MD in their practice, and 1% would refer them to an infectious disease specialist. Related to future therapies for HCV including DAAs in combination with interferon and ribavirin, 77% of the respondents were “aware” or “very aware” of this concept but had not participated in any clinical trials using DAA agents, 20% were “very aware” and had experience using these agents in clinical trials, and 3% had “minimal knowledge” of DAA agents.\nWhen queried regarding treatment approaches if a DAA agent were available “today,” 81% of respondents would evaluate and treat the patient, 10% would refer the patient to a “hepatitis C expert,” 5% would refer the patient to another physician in their group, and 4% would refer the patient to a physician extender (PA, NP, or specially trained nurse) in their practice. Less than 1% of respondents reported that they would refer the patient to an infectious disease specialist.\nAnalysis of survey results (Table 2):Table 2Current and future therapy, and future referral to an HCV specialistRespondent characteristicWould treat HCV todayWould treat with DAA in the futureFuture referral to HCV specialist\np valueOverall85 (%)81 (%)10 (%)0.0054AASLD member\nA91\nB91\nC3\nA0.002\nB,C0.001AGA member\nA84\nB79\nC12Hepatologist\nD93\nE90\nF1\nD.0034\nE,F0.001Gastroenterologist\nD86\nE81\nF8Private practice\nG91\nH89\nI12\nG,H0.001\nINSAcademic practice\nG81\nH75\nI15Participated in DAA Clinical trial\nJ91\nK90\nL4\nJ,K,L0.0001Minimal knowledge of DAA\nJ59\nK52\nL22The p value related to the statistical significance of the comparisons of groups A to A, B to B, C to C, etc are reported\n\nCurrent and future therapy, and future referral to an HCV specialist\n\nA0.002\n\nB,C0.001\n\nD.0034\n\nE,F0.001\n\nG,H0.001\n\nINS\nThe p value related to the statistical significance of the comparisons of groups A to A, B to B, C to C, etc are reported\nOverall, a significant number of respondents who treat HCV today with currently available therapies indicated that they would not prescribe DAA therapies when they became available; 85% of respondents to the survey would evaluate and treat an HCV-infected patient “today” versus 81% when DAA agents became available (p = 0.0054). 
Similarly, more respondents indicated that they would refer their patients to a “hepatitis C specialist” after DAA agents became available (6% current, 10% after DAA availability (p = 0.016).\nSignificant differences existed based on primary professional affiliation in attitudes and experience of survey respondents related to present and future HCV therapy. Ninety-one percent of respondents who considered the AASLD their primary affiliation compared to 84% of AGA members would treat their HCV-infected patient today (p = 0.002), 48% of AASLD members versus 9% of AGA members participated in clinical trials with a DAA agent (p = 0001), and 91% of AASLD members versus 79% of AGA members would use a DAA agent to treat their HCV-infected patient in the future (p = 0.001). A greater percentage of AGA members (12%) stated that they would refer their patient to an “HCV expert” when DAA agents became available compared to AASLD members (3%) (p = 0.001). When analyzing attitudes related to current and future therapy of HCV based on focus of clinical practice, a comparison of Hepatologists/liver transplantation physicians to Gastroenterologists revealed that 93% versus 86% would treat their HCV-infected patient today (p = 0.0034), 48% compared to 8% participated in clinical trials using DAA agents (p = 0.001), 90% versus 81% would treat their patient with a DAA agent (p = 0.001), and 1% versus 8% would refer their patient to an “HCV expert” when DAA therapies became available (p = 0.001).\nPractice type was also associated with differences in attitudes related to current and future therapy of HCV. Comparison of respondents in private practice versus those who practiced at a medical school or hospital associated with a medical school revealed that respectively, 91% versus 81% would treat their HCV-infected patient today (p = 0.001), 9% versus 30% participated in a clinical trial with a DAA agent (p = 0.001), 89% versus 75% would use DAA therapies when they became available (p = 0.001), and 12% compared to 15% would refer to an “HCV expert” when DAA therapies became available (P = NS).\nCurrent awareness and participation in clinical trials with DAA agents influenced attitudes regarding future use of these therapies, as 90% of respondents who participated in clinical trials with DAAs, 81% of those who were “very aware” or “aware” of DAA agents but did not participate in clinical trials, and 59% of those with minimal knowledge of DAA therapies would prescribe these agents in the future (p = 0.0001 clinical trial participant/very aware or aware of DAA agents vs. minimal knowledge). Respondents with “minimal knowledge” of DAA agents also reported that when these compounds were available in the future, 25% would refer their HCV-infected patient to a PA, NP, or other MD in their practice, 22% would refer to an “HCV expert,” and 1% would refer to an ID specialist. Respondents who reported “minimal knowledge of DAA agents” were more likely to consider the AMA their primary professional organization (20 vs. 1% AGA/AASLD (p = 0.001)), or be in a private practice not associated with a medical school (83 vs. 17% practice or hospital associated with a medical school p = 0.0001).", "Emerging data suggest that direct-acting antiviral therapies against HCV (DAA) will provide improved response rates when given in combination with currently available therapies. 
The enthusiasm for these new treatments must be tempered by realistic concerns including side-effects as well as the threat of viral resistance induced by these agents. We hypothesized that concern regarding these issues might affect attitudes of future use of DAA therapies by health care providers who currently treat HCV. The goal of the current study was to query a group of experienced HCV treaters using an Internet-based survey to assess attitudes regarding current and future treatment of HCV and correlate these responses related to focus of clinical practice, academic versus private practice, professional affiliation, and experience with DAA agents in clinical trials. US-based members of the AGA and AASLD were targeted for the survey as they represent the majority of HCV treaters in the US and a group most likely to have knowledge of DAA therapy. This manuscript represents the first description of attitudes regarding future use of DAA agents in a large group of experienced HCV treaters.\nBased on responses to this survey, although the majority of current HCV treaters would prescribe a DAA agent, a significant number of respondents who treat HCV today (85%) would not initiate therapy when these agents became available in the future (81%). The decreased use of DAA therapies may be offset somewhat by an increase in referrals to a more experienced HCV treater, as more respondents would refer their patients to a “hepatitis C expert” after DAA agents became available in the future compared to referring patients if they were diagnosed with HCV today (10% versus 6%). Not surprisingly, future use of DAA therapy appears to be significant in treaters who identify themselves as Hepatologists, and those with experience using these agents in clinical trials. Moreover, direct clinical experience with DAAs did not appear to dampen the enthusiasm for future use of these agents, as 90% of respondents who participated in clinical trials with a DAA agent would utilize these agents to treat an HCV-infected patient when these agents became available.\nWe hypothesized that as infectious disease specialists have extensive experience prescribing oral antiviral agents for HIV, referrals to these providers would increase when DAA agents became available for HCV. Our hypothesis was clearly inaccurate as respondents to the survey were unlikely to refer their HCV-infected patients to an infectious disease specialist when faced with an HCV-infected patient today (1%) and when DAA agents were available in the future (0.8%). This trend was maintained even in respondents with minimal knowledge of DAA agents. Finally, a concerning observation was identified when assessing responses to the survey stratified by past experience with DAA therapy. Although a significant percentage of respondents who participated in clinical trials with a DAA (90%) or who were “aware or very aware” of these agents but did not participate in a clinical trial (81%) would use these agents in the future, more than half of respondents (52%) who reported minimal knowledge of DAA agents stated that they would also use them in the future. Although providers with minimal knowledge of DAA agents represented a small percentage of respondents to the survey, concern exists regarding the inappropriate use of DAAs in the future including inexperience with side-effect management, and lack of recognition of both treatment failure as well as the emergence of viral resistance [11–16]. 
It is therefore apparent to the authors of this manuscript that extensive education of all future prescribers of DAA agents will be required to ensure successful use of these therapies.\nThere are limitations of this study that are unfortunately shared by other surveys utilizing similar data collection and analyses techniques. Any Internet-based survey is limited by response rate, which is affected both by the ability to deliver the survey and the willingness of the survey recipient to accurately respond to the queries posed to them. Although 21% of recipients of the survey responded, it is possible that the identified results would have been different if more individuals completed the survey. Unfortunately, the large volume of unsolicited e-mail has induced the use of blockers and filters, thus potentially limiting the response rates to the survey. However, we believe that these limitations were balanced by several strengths including the overall brevity of the survey (nine questions) increasing the likelihood that those who received and opened the survey fully completed it, and by the target audience which represented a significant number of current HCV treaters with the highest likelihood of having knowledge regarding DAA agents.\nIn summary, the responses to this Internet-based survey of more than 1,000 current HCV treaters indicated that although the majority of respondents appear ready to utilize DAA agents in the future, referrals to “hepatitis C experts” will increase when these agents become available. In addition, future referrals to ID specialists appear to be limited. Finally, as more than half of respondents to the survey with “minimal knowledge” of DAA therapies also appear to be willing to utilize these compounds in the future, significant provider education will be required to minimize inappropriate use of these agents." ]
[ "introduction", "materials|methods", "results", "discussion" ]
[ "Hepatitis C", "DAA", "STAT C", "Direct-acting antiviral therapy" ]
Expression of the stem cell marker ALDH1 in BRCA1 related breast cancer.
21336637
The BRCA1 protein makes mammary stem cells differentiate into mature luminal and myoepithelial cells. If a BRCA1 mutation results in a differentiation block, an enlarged stem cell component might be present in the benign tissue of BRCA1 mutation carriers, and these mammary stem cells could be the origin of BRCA1 related breast cancer. Since ALDH1 is a marker of both mammary stem cells and breast cancer stem cells, we compared ALDH1 expression in malignant tissue of BRCA1 mutation carriers to non-carriers.
INTRODUCTION
Forty-one BRCA1 related breast cancers and 41 age-matched sporadic breast cancers were immunohistochemically stained for ALDH1. Expression in epithelium and stroma was scored and compared.
METHODS
Epithelial (P = 0.001) and peritumoral (P = 0.001) ALDH1 expression was significantly higher in invasive BRCA1 related carcinomas compared to sporadic carcinomas. Intratumoral stromal ALDH1 expression was similarly high in both groups. ALDH1 tumor cell expression was an independent predictor of BRCA1 mutation status.
RESULTS
BRCA1 related breast cancers showed significantly more frequent epithelial ALDH1 expression, indicating that these hereditary tumors have an enlarged cancer stem cell component. In addition, (peritumoral) stromal ALDH1 expression was also more frequent in BRCA1 mutation carriers. ALDH1 may therefore be a diagnostic marker and a therapeutic target of BRCA1 related breast cancer.
CONCLUSION
[ "Aldehyde Dehydrogenase", "Aldehyde Dehydrogenase 1 Family", "BRCA1 Protein", "Biomarkers", "Breast Neoplasms", "Case-Control Studies", "Female", "Humans", "Isoenzymes", "Retinal Dehydrogenase", "Stem Cells" ]
3046359
Introduction
Germline mutation carriers of the BRCA1 gene locus harbor a high cumulative risk of developing breast and ovarian cancer of 57% and 40% by age 70, respectively [1]. BRCA1 related breast cancer shows a distinct histopathological and immunohistochemical phenotype. It has been shown to be more often of the ductal or medullary types, of high grade and to show a high mitotic activity index (MAI) and necrosis [2–4]. These tumors usually do not express the estrogen (ER) and progesterone receptors (PR) and are almost always HER-2/neu negative (“triple negative”) [2, 3]. At the gene-expression level these tumors cluster together with the basal-like subgroup [5]. BRCA1 seems to play an important role in DNA repair in a common pathway with BRCA2 [6]. Increasing evidence indicates that BRCA1 is necessary for mammary stem cell differentiation, a function that could explain its tissue-specificity [7–10]. Stem cells play a role in repopulating the breast at several points in the human female lifespan. These primitive cells facilitate rapid expansion and regression in puberty and pregnancy, and during the menstrual cycle. In recent studies mammary stem cells have been isolated, by evaluation of specific characteristics like multipotency, the ability to undergo both symmetrical and asymmetrical divisions and being long-lived, slow cycling cells [11, 12]. A hierarchy of epithelial cells does not only seem to be present in the normal mammary gland, but in tumors as well. Al-Hajj et al. showed that only a small subpopulation of all cells in a tumor could be serially passaged, indicative of their tumor initiating capacity. These cells share many characteristics with stem cells, and are therefore denoted cancer stem cells (CSC) [13]. These CSCs could be important therapy targets, due to their tumor initiating capacity and being therapy resistant. Several markers have been identified for the selection of human (cancer) stem cells, of which Aldehyde dehydrogenase 1 (ALDH1) is among the most widely studied ones. ALDH1 is a cytosolic detoxifying enzyme responsible for the oxidation of (retin)aldehydes into retinoids [14], which has been put forward as a marker of both normal human mammary stem cells and breast cancer stem cells. Human mammary cells selected for increased ALDH1 activity had the broadest lineage differentiation potential and highest growth capacity in a xenograft model, indicating that the ALDH1 positive cell population is enriched for mammary stem cells. Furthermore, it was shown that the ALDH1 positive population showed high tumorigenic capacity through serial passages, in contrast with the ALDH1 negative population [15]. The exact function of ALDH1 in (mammary) stem cells remains largely unknown, but it is thought to play a role in cellular differentiation, mainly through the retinoid signaling pathway [16]. As mentioned above, a novel function subscribed to BRCA1 is the regulation of mammary stem cell differentiation. An association between BRCA1 and stem cells was first suspected because of the basal phenotype of BRCA1 related tumors which resembles that of primitive mammary cells, implying that BRCA1 related tumors might originate in stem cells [17]. In vitro experiments have shown that ectopic overexpression of BRCA1 increases differentiation, whereas reduction of endogenous BRCA1 impairs differentiation [8]. Knockdown of BRCA1 in primary breast epithelial cells leads to accumulation of cells expressing ALDH1 and a decrease in ER positive cells expressing luminal epithelial markers. 
Furthermore, in the normal tissue of BRCA1 mutation carriers, clusters of ALDH1 positive cells have been described that were ER negative and showed loss of heterozygosity (LOH) of BRCA1. These results indicate that BRCA1 might indeed serve as a stem cell regulator in the mammary epithelium and that the stem cell pool in the normal tissue of BRCA1 mutation carriers might be enlarged [9], although our own results contradicted this [18]. If the origin of BRCA1 related cancer lies in this pool of stem cells, we would expect characteristics of this stem cell population like ALDH1 to be reflected in BRCA1 related breast cancers. In this study we therefore evaluated ALDH1 expression in invasive breast carcinomas of BRCA1 mutation carriers in comparison with cancers of non-carriers.
null
null
Results
[SUBTITLE] Baseline characteristics of the study cohort [SUBSECTION] The baseline characteristics of the cohort of patients with invasive carcinomas are summarized in Table 2. Ductal carcinoma was the most prominent histological subtype, accounting for 75.6% and 80.5% of tumors in the hereditary and sporadic groups, respectively. In the BRCA1 related group the frequency of medullary and metaplastic carcinomas was higher, whereas (ducto)lobular carcinomas were more frequent in the sporadic group. In the sporadic group tumors were most frequently of the luminal A type (73.2%). Basal-like subtype was present in only 19.5% of the non-carriers, in contrast with BRCA1 mutation carriers, among whom basal-like was the most prominent subtype (70.7%) (P < 0.0005; OR 9.97; 95% CI 3.58–27.80). In both groups tumors were most often of high grade, but grade 3 tumors were more frequent in BRCA1 related tumors (75.6%) compared to sporadic tumors (58.5%) (P = 0.10; OR 2.20; 95% CI 0.85–5.65). This is consistent with the median MAI, which was also significantly higher in the BRCA1 group (22.0) compared to the sporadic group (14) (P = 0.007). Median tumor size was slightly higher in the sporadic group (2.5 cm) compared to the group of mutation carriers (1.9 cm) (P = 0.04). A trend for more frequent negative nodal status was seen in the BRCA1 group (70.7%) compared to the sporadic group (46.9%) (P = 0.05). An expansive growth pattern was significantly more often present in BRCA1 related tumors (52.5%) compared to sporadic controls (23.7%) (P = 0.009). Further, BRCA1 related tumors were more frequently negative for HER-2/neu (n.s.), ER (P < 0.0005) and PR (P < 0.0005).
Table 2 Characteristics of BRCA1 related and sporadic breast carcinomas (sporadic group n = 41; BRCA1 group n = 41)
Characteristic | Sporadic N (%) | BRCA1 N (%) | P-value
Histologic type: Ductal | 33 (80.5%) | 31 (75.6%) | n.s.
Histologic type: (Ducto)lobular | 7 (17.1%) | 3 (7.3%) |
Histologic type: Medullary | 0 (0%) | 4 (9.8%) |
Histologic type: Metaplastic | 1 (2.4%) | 3 (7.3%) |
Molecular subtype: Luminal A | 30 (73.2%) | 11 (26.8%) | 0.0005
Molecular subtype: Luminal B | 2 (4.9%) | 0 (0%) |
Molecular subtype: HER2+ | 1 (2.4%) | 0 (0%) |
Molecular subtype: Basal-like | 8 (19.5%) | 29 (70.7%) |
Molecular subtype: Unclassified | 0 (0%) | 1 (2.4%) |
Histologic grade: 1 | 5 (12.2%) | 2 (4.9%) | n.s.
Histologic grade: 2 | 12 (29.3%) | 8 (19.5%) |
Histologic grade: 3 | 24 (58.5%) | 31 (75.6%) |
Tumour size: <2 cm | 15 (36.6%) | 21 (56.8%) | n.s.
Tumour size: 2–5 cm | 21 (51.2%) | 15 (40.5%) |
Tumour size: >5 cm | 5 (12.2%) | 1 (2.7%) |
Tumour size: Unknown | 0 | 4 |
Lymph node status: N0 | 15 (46.9%) | 29 (70.7%) | 0.05
Lymph node status: N1 | 10 (31.3%) | 11 (26.8%) |
Lymph node status: N2 | 5 (15.6%) | 0 (0%) |
Lymph node status: N3 | 2 (6.3%) | 1 (2.4%) |
Lymph node status: Unknown | 9 | 0 |
Growth pattern: Infiltrative | 29 (76.3%) | 19 (47.5%) | 0.009
Growth pattern: Expansive | 9 (23.7%) | 21 (52.5%) |
Growth pattern: Unknown | 3 | 1 |
HER-2/neu status: Negative | 38 (92.7%) | 41 (100%) | 0.24
HER-2/neu status: Positive | 3 (7.3%) | 0 (0%) |
ER status: Negative | 11 (26.8%) | 31 (75.6%) | 0.0005
ER status: Positive | 30 (73.2%) | 10 (24.4%) |
PR status: Negative | 14 (35.0%) | 32 (82.1%) | 0.0005
PR status: Positive | 26 (65.0%) | 7 (17.9%) |
PR status: Unknown | 1 | 2 |
[SUBTITLE] ALDH1 expression in invasive carcinomas [SUBSECTION] ALDH1 expression in tumors showed wide variation, ranging from weak to very strong expression and from only a few positive cells to a diffuse staining pattern in a high percentage of positive cells (Fig. 1). Similar to benign tissue, both stromal and epithelial cells expressed ALDH1 [18]. Epithelial ALDH1 expression was distributed randomly over the tumor. However, for stromal expression a specific peritumoral staining pattern was seen in some cases. Data on epithelial and stromal expression are shown in Table 3.
Fig. 1 Expression of ALDH in malignant breast tissues. Left: breast cancer in a BRCA1 mutated patient showing strong peritumoral stromal (solid arrow) and frequent intratumoral epithelial expression (dashed arrow). Right: sporadic breast cancer showing no peritumoral stromal and hardly any intratumoral epithelial expression.
Table 3 ALDH1 expression in BRCA1 related invasive breast carcinomas and sporadic controls (sporadic group n = 41; BRCA1 group n = 41)
ALDH1 expression | Sporadic N (%) | BRCA1 N (%) | P-value
Intratumoral epithelial: Negative | 24 (58.5%) | 9 (22.0%) | 0.001
Intratumoral epithelial: Positive | 17 (41.5%) | 32 (78.0%) |
Intensity of epithelial expression: Absent | 24 (58.5%) | 9 (22.0%) | 0.005
Intensity of epithelial expression: Weak | 8 (19.5%) | 7 (17.1%) |
Intensity of epithelial expression: Moderate | 9 (22.0%) | 17 (41.5%) |
Intensity of epithelial expression: Strong | 0 (0%) | 8 (19.5%) |
Intratumoral stromal: Absent | 2 (4.9%) | 2 (4.9%) | n.s.
Intratumoral stromal: Weak | 9 (22.0%) | 3 (7.3%) |
Intratumoral stromal: Moderate | 12 (29.3%) | 12 (29.3%) |
Intratumoral stromal: Strong | 18 (43.9%) | 24 (58.5%) |
Peritumoral: Absent | 37 (90.2%) | 26 (63.4%) | 0.001
Peritumoral: Present | 4 (9.8%) | 15 (36.6%) |
Significantly more tumors showed epithelial ALDH1 expression in the BRCA1 group (78.0%) compared to sporadic breast cancer (41.5%) (P = 0.001). Both the intensity and the percentage of epithelial cells with ALDH1 expression were significantly higher in BRCA1 related breast cancer. In this group, 19.5% showed strong ALDH1 expression, compared to none of the sporadic tumors (P = 0.005). Overall, the median percentage of positive cells was 0.0 in the sporadic group compared to 2.0 in the hereditary group (P = 0.01), and in the cases with ALDH1 expression, the median percentage of positive cells was 10% in the sporadic group compared to 5% in the hereditary group (P = 0.27). Stromal ALDH1 expression within the tumor was similarly high in both groups (strong expression in 43.9% of sporadic controls and 58.5% of hereditary cases; P = 0.14). However, the peritumoral stroma showed significantly more frequent overexpression in the BRCA1 related group (36.6%) compared to non-carriers (9.8%) (P = 0.001) (Fig. 1). The presence of peritumoral and epithelial ALDH1 expression did not correlate with each other (P = 0.73; OR 1.21; 95% CI 0.42–3.47), and in multivariate analysis both were independent predictors of BRCA1 mutation status.
[SUBTITLE] Correlation of ALDH1 with other characteristics [SUBSECTION] Epithelial ALDH1 expression in tumors correlated significantly with growth pattern (P = 0.02; OR 3.29; 95% CI 1.19–9.09) and younger age (P = 0.05; 95% CI 0.08–9.35). In addition, a trend for correlation with PR negativity (P = 0.06; OR 0.41; 95% CI 0.16–1.04), ER negativity (P = 0.08; OR 0.45; 95% CI 0.18–1.10), basal-like subtype (P = 0.08; OR 2.26; 95% CI 0.91–5.65) and larger tumor size (P = 0.08) was found. Intratumoral stromal ALDH1 expression did not correlate with other characteristics. Peritumoral ALDH1 overexpression correlated with PR negativity (P = 0.002; OR 0.11; 95% CI 0.02–0.52), ER negativity (P = 0.001; OR 0.13; 95% CI 0.04–0.50), basal-like subtype (P = 0.004; OR 4.87; 95% CI 1.55–15.27) and high MAI (P = 0.009). Since ER, PR, basal-like subtype and growth pattern were associated with both mutation status and the presence of epithelial ALDH1 expression, we performed stratified analysis for basal-like subtype and growth pattern, and estimated corrected ORs by the Mantel-Haenszel procedure. The OR adjusted for basal-like subtype was still significant (ORadjusted 5.11; 95% CI 1.64–15.97; P = 0.005) and hardly differed from the crude OR (ORcrude 5.02). The estimated OR adjusted for growth pattern was slightly lower than the crude OR (ORadjusted 3.88; 95% CI 1.41–10.69; P = 0.009), but still significant. ER and PR were not independently analyzed as possible confounders, because they were constituents of basal-like subtype. To correct for multiple confounders simultaneously, we performed multivariate analysis by including ALDH1 expression, basal-like subtype and growth pattern in a stepwise logistic regression model, thereby estimating the independent predictive value of these factors for mutation status. In multivariate analysis only basal-like subtype (P < 0.0005) and the presence of epithelial ALDH1 expression (P = 0.005) were independent predictors of BRCA1 mutation status, whereas growth pattern was no longer of additional predictive value (P = 0.25). Since PR, ER, basal-like subtype and MAI correlated significantly with both peritumoral ALDH1 expression and BRCA1 mutation status, we performed univariate and multivariate analysis to exclude confounders as above. The OR adjusted for basal-like phenotype was no longer significant (ORadjusted 2.65; 95% CI 0.77–9.04; P = 0.12), indicating that peritumoral ALDH1 expression and basal-like subtype are not of independent predictive value for BRCA1 mutation status. Because MAI was a continuous variable it was only included in multivariate analysis. In a logistic regression model only basal-like subclass (P < 0.0005), but not peritumoral ALDH1 expression (P = 0.08) and MAI (P = 0.55), was of independent predictive value for BRCA1 mutation status.
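As a reading aid for the confounder analysis above: the crude and Mantel-Haenszel-adjusted odds ratios were computed in SPSS 15.0 per the Methods, and the sketch below is only a minimal re-illustration in Python. The 2x2 counts for the crude odds ratio come from Table 3 (epithelial ALDH1 positivity in 32/41 BRCA1 related versus 17/41 sporadic tumors); the stratum-level counts needed for a real Mantel-Haenszel adjustment are not reported, so the stratified call uses hypothetical numbers that merely respect the published marginals. Table2x2 and fisher_exact are standard statsmodels/SciPy helpers, not tools from the study.

```python
# Illustrative sketch only; the study's statistics were run in SPSS 15.0, not with this code.
import numpy as np
from scipy.stats import fisher_exact
from statsmodels.stats.contingency_tables import Table2x2

# Rows: BRCA1 carriers, sporadic controls; columns: ALDH1-positive, ALDH1-negative (Table 3).
table = np.array([[32, 9],
                  [17, 24]])

crude = Table2x2(table)
print("crude OR:", round(crude.oddsratio, 2))  # ~5.02, the ORcrude quoted in the text
print("95% CI:", tuple(round(x, 2) for x in crude.oddsratio_confint()))
print("Fisher exact P:", fisher_exact(table)[1])

def mantel_haenszel_or(strata):
    """Pooled OR across 2x2 strata: sum(a*d/n) / sum(b*c/n), the Mantel-Haenszel estimator."""
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in strata)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in strata)
    return num / den

# HYPOTHETICAL split into basal-like / non-basal-like strata (only the marginals are
# published), shown purely to make the pooling formula concrete.
hypothetical_strata = [((24, 5), (5, 3)),    # basal-like: BRCA1 (pos, neg), sporadic (pos, neg)
                       ((8, 4), (12, 21))]   # non-basal-like
print("MH OR over hypothetical strata:", round(mantel_haenszel_or(hypothetical_strata), 2))
```

The adjusted ORs quoted in the text (5.11 and 3.88) depend on the true stratum-level counts, which are not published; the sketch only illustrates the mechanics of crude versus pooled estimates.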
null
null
[ "Study population", "Immunohistochemistry", "Scoring of immunohistochemistry", "Statistics", "Baseline characteristics of the study cohort", "ALDH1 expression in invasive carcinomas", "Correlation of ALDH1 with other characteristics" ]
[ "An invasive carcinoma group composed of 41 BRCA1 germline mutation carriers was age-matched with a control group, aiming at a maximum age difference of 5 years between case and control. We excluded all cases that mentioned a strong family history of breast cancer in the pathology report and all cases of which cumulative breast cancer risk exceeded 30% based on family history [19], but included patients that were referred to Clinical Genetics because of the young age of onset only and tested negative for BRCA1/2 mutations. This control group is further denoted “sporadic”. Anonymous use of redundant tissue for research purposes is part of the standard treatment agreement with patients in our hospitals [20]. Clinical data were retrieved from the pathology report and patient files. MAI was assessed as before [21]. Growth pattern was classified as expansive if pushing margins were observed in >50% of the tumor circumference, and otherwise as infiltrative [22].", "Immunohistochemical analysis was carried out on 4-μm sections. All stainings were performed on full slides to avoid false negatives due to tumor heterogeneity. For all stainings, slides were deparaffinized in xylene and rehydrated in decreasing ethanol dilutions. Endogenous peroxidase activity was blocked with a buffer containing peroxide, followed by antigen retrieval. A cooling off period of 30 min preceded incubation with the primary antibody. Primary antibodies used, incubation time and the method of antigen retrieval are summarized in Table 1. In the case of EGFR we followed the protocol of the Pharm Dx kit. For all other antibodies, detection was done with a poly HRP anti Mouse/Rabbit/Rat IgG (ready to use; Powervision, Immunologic, Immunovision Technologies, Brisbane, California, USA). Peroxidase activity was developed with diaminobenzidin, except for ALDH1 for which we used Nova Red (Vector laboratories, Burlingame, USA). Slides were finally lightly counter-stained with hematoxylin and mounted. In between steps, slides were washed in PBS. Appropriate negative and positive controls were used throughout.\nTable 1Primary antibodies, source, dilution, incubation times and methods of antigen retrieval used for immunohistochemistryPrimary antibodyCloneIsotypeCompanyIncubation timeDilutionAntigen retrievalALDH144/ALDHMouseBD transduction60 min1:100Citrate pH 6, 100°C, 20 min.CK5/6D5/16BuMouseChemicon60 min1:500EDTA pH 9, 100°C, 20 min.EGFR2-18C9MousePharm Dxovernight, 4°CPre-dilutedProtein K, 5 min.ER1D5MouseDAKO60 min1:80EDTA pH 9, 100°C, 20 min.HER2SP3RabbitNeomarkers60 min1:100EDTA pH 9, 100°C, 20 min.PRPgR636MouseDAKO60 min1:25Citrate pH 6, 100°C, 20 min.\n\nPrimary antibodies, source, dilution, incubation times and methods of antigen retrieval used for immunohistochemistry\nStaining for ER, PR, HER-2/neu, cytokeratin 5/6 (CK5/6) and epidermal growth factor receptor (EGFR) was performed to classify tumors into molecular subtypes as was described by Carey et al. This classification defines subtypes as follows: luminal A (ER+ and/or PR+ and HER2−), luminal B (ER+ and/or PR+ and HER2+), HER2 subtype (HER2+, ER− and PR−) and basal-like (ER−, PR−, HER2−, CK5/6+ and/or EGFR+). When a tumor was negative for all five markers it was denoted “unclassified” [23]. This sub-classification is used as the immunohistochemical surrogate of the subgroups detected by hierarchical clustering of gene-expression analysis of breast cancers [5].", "Scoring was performed by two observers (MRHvV, PJvD) blinded to BRCA1 mutation status. 
For ER and PR the percentage of positive nuclei was estimated, and cases with >10% of stained cells were considered positive, according to the Dutch guidelines [24]. HER-2/neu was scored according to the DAKO system, considering only 3+ cases as positive. EGFR and CK5/6 were reported positive if clear membranous [25], respectively cytoplasmic staining [26] was seen.\nSince no consensus exists for ALDH1 scoring, we scored both stromal and epithelial expression. Both intensity and percentage of intratumoral epithelial ALDH1 positive cells were evaluated in invasive carcinomas. Intensity was scored as 0 (absent), 1 (weak), 2 (moderate) or 3 (strong). H-scores reflecting the overall epithelial ALDH1 staining were calculated by multiplying the intensity score with the percentage of positive cells. A tumor was regarded positive if the H-score was 1 or above. The intensity of intratumoral stromal expression was scored from 0–4 as above in malignant tissues. The presence of peritumoral stromal overexpression was scored separately as present or absent.", "Discrete variables were compared by Chi-square test and Fisher’s exact test and odds ratios (OR) were calculated with 95% confidence intervals (95% CI). Normality of continuous data was tested by Kolomogorov-Smirnov test and groups were then compared by Students-T test or Mann-Whitney U test for normally distributed and non-parametric data, respectively.\nIn the case of comparisons between continuous and discrete variables, correlation coefficients were calculated. When features were associated (P < 0.10) with both BRCA1 mutation status and the presence of ALDH1 staining, bivariate statistical analysis of these possible confounders took place by calculating ORs stratified for specific subgroups (corrected by Mantel-Haenszel procedure). In addition, multivariate analysis by means of logistic regression for all significant features was performed. All statistical analyses were performed using SPSS 15.0.", "The baseline characteristics of the cohort of patients with invasive carcinomas are summarized in Table 2. Ductal carcinoma was the most prominent histological subtype accounting for 75.6% to 80.5% of tumors in the hereditary and sporadic groups, respectively. In the BRCA1 related group the frequency of medullary and metaplastic carcinomas was higher, whereas (ducto)lobular carcinoma were more frequent in the sporadic group. In the sporadic group tumors were most frequently of the luminal A type (73.2%). Basal-like subtype was only present in 19.5% of the non-carriers, in contrast with BRCA1 mutation carriers among whom basal-like subtype was the most prominent subtype (70.7%) (P < 0.0005; OR 9.97; 95% CI 3.58–27.80). In both groups tumors were most often of high grade, but grade 3 tumors were more frequent in BRCA1 related tumors (75.6%) compared to sporadic tumors (58.5%) (P = 0.10; OR 2.20; 95% CI 0.85–5.65). This is consistent with the median MAI, which was also significantly higher in the BRCA1 group (22.0) compared to the sporadic group (14) (P = 0.007). Median tumor size was slightly higher in the sporadic group (2.5 cm) compared to the group of mutation carriers (1.9 cm) (P = 0.04). A trend for more frequent negative nodal status was seen in the BRCA1 group (70.7%) compared to the sporadic group (46.9%)(P = 0.05). An expansive growth pattern was significantly more often present in BRCA1 related tumors (52.5%) compared to sporadic controls (23.7%) (P = 0.009). 
Further, BRCA1 related tumors were more frequently negative for HER-2/neu (n.s), ER (P < 0.0005) and PR (P < 0.0005).\nTable 2Characteristics of BRCA1 related and sporadic breast carcinomasSporadic group (n = 41)BRCA1 group (n = 41)P-valueNPercentageNPercentageHistologic typeDuctal3380.5%3175.6%n.s.(Ducto)lobular717.1%37.3%Medullary00%49.8%Metaplastic12.4%37.3%Molecular subtypeLuminal A3073.2%1126.8%0.0005Luminal B24.9%00%HER2+12.4%00%Basal-like819.5%2970.7%Unclassified00%12.4%Histologic grade1512.2%24.9%n.s.21229.3%819.5%32458.5%3175.6%Tumour size<2 cm1536.6%2156.8%n.s.2–5 cm2151.2%1540.5%>5 cm512.2%12.7%Unknown04Lymph node statusN01546.9%2970.7%0.05N11031.3%1126.8%N2515.6%00%N326.3%12.4%Unknown90Growth patternInfiltrative2976.3%1947.5%0.009Expansive923.7%2152.5%Unknown31HER-2/neu statusNegative3892.7%41100%0.24Positive37.3%00%ER statusNegative1126.8%3175.6%0.0005Positive3073.2%1024.4%PR statusNegative1435.0%3282.1%0.0005Positive2665.0%717.9%Unknown12\n\nCharacteristics of BRCA1 related and sporadic breast carcinomas", "ALDH1 expression in tumors showed wide variation, ranging from weak to very strong expression and from only a few positive cells, to a diffuse staining pattern in a high percentage of positive cells (Fig. 1). Similar to benign tissue, both stromal and epithelial cells expressed ALDH1[18]. Epithelial ALDH1 expression was distributed randomly over the tumor. However, for stromal expression a specific peritumoral staining pattern was seen in some cases. Data on epithelial and stromal expression are shown in Table 3.\nFig. 1Expression of ALDH in malignant breast tissues. Left: Breast cancer in a BRCA1 mutated patient showing strong peritumoral stromal (solid arrow) and frequent intratumoral epithelial expression (dashed arrow). Right: Sporadic breast cancer showing no peritumoral stromal and hardly intratumoral epithelial expression\nTable 3ALDH1 expression in BRCA1 related invasive breast carcinomas and sporadic controlsSporadic group (n = 41)BRCA1 group (n = 41)P-valueNPercentageNPercentageIntratumoral epithelial ALDH1 expressionNegative2458.5%922.0%0.001Positive1741.5%3278.0%Intensity of epithelial ALDH1 expressionAbsent2458.5%922.0%0.005Weak819.5%717.1%Moderate922.0%1741.5%Strong00%819.5%Intratumoral stromal ALDH1 expressionAbsent24.9%24.9%n.s.Weak922.0%37.3%Moderate1229.3%1229.3%Strong1843.9%2458.5%Peritumoral ALDH1 expressionAbsent3790.2%2663.4%0.001Present49.8%1536.6%\n\nExpression of ALDH in malignant breast tissues. Left: Breast cancer in a BRCA1 mutated patient showing strong peritumoral stromal (solid arrow) and frequent intratumoral epithelial expression (dashed arrow). Right: Sporadic breast cancer showing no peritumoral stromal and hardly intratumoral epithelial expression\nALDH1 expression in BRCA1 related invasive breast carcinomas and sporadic controls\nSignificantly more tumors showed epithelial ALDH1 expression in the BRCA1 group (78.0%) compared to sporadic breast cancer (41.5%) (P = 0.001). Both the intensity and the percentage of epithelial cells with ALDH1 expression were significantly higher in BRCA1 related breast cancer. In this group, 19.5% showed strong ALDH1 expression, compared to none of the sporadic tumors (P = 0.005). 
Overall, the median percentage of positive cells was 0.0 in the sporadic group compared to 2.0 in the hereditary group (P = 0.01), and in the cases with ALDH1 expression, the median percentage of positive cells was 10% in the sporadic group compared to 5% in the hereditary group (P = 0.27)\nStromal ALDH1 expression within the tumor was similarly high in both groups (strong expression in 43.9% of sporadic controls and 58.5% in hereditary cases; P = 0.14). However, the peritumoral stroma showed significantly more frequent overexpression in the BRCA1 related group (36.6%) compared to non-carriers (9.8%) (P = 0.001) (Fig. 1).\nThe presence of peritumoral and epithelial ALDH1 expression did not correlate with each other (P = 0.73; OR 1.21; 95% CI 0.42–3.47) and in multivariate analysis both were independent predictors of BRCA1 mutations status.", "Epithelial ALDH1 expression in tumors correlated significantly with growth pattern (P = 0.02; OR 3.29; 95% CI 1.19–9.09) and younger age (P = 0.05; 95% CI 0.08–9.35). In addition, a trend for correlation with PR negativity (P = 0.06; OR 0.41; 95% CI 0.16–1.04), ER negativity (P = 0.08; OR 0.45; 95% CI 0.18–1.10), basal-like subtype (P = 0.08; OR 2.26; 95% CI 0.91–5.65) and larger tumor size (P = 0.08) was found. Intratumoral stromal ALDH1 expression did not correlate with other characteristics.\nPeritumoral ALDH1 overexpression correlated with PR negativity (P = 0.002; OR 0.11; 95% CI 0.02–0.52), ER negativity (P = 0.001; OR 0.13; 95% CI 0.04–0.50), basal-like subtype (P = 0.004; OR 4.87; 95% CI 1.55–15.27) and high MAI (P = 0.009).\nSince ER, PR, basal-like subtype and growth pattern were associated with both mutation status and the presence of epithelial ALDH1 expression, we performed stratified analysis for basal-like subtype and growth pattern, and estimated corrected ORs by Mantel-Haenszel procedure. The OR, adjusted for basal-like subtype, was still significant (ORadjusted 5.11; 95% CI 1.64–15.97; P = 0.005) and hardly differed from the crude OR (ORcrude 5.02). The estimated OR adjusted for growth pattern was slightly lower than the crude OR (ORadjusted 3.88; 95% CI 1.41–10.69; P = 0.009), but still significant. ER and PR were not independently analyzed as possible confounders, because they were constituents of basal-like subtype.\nTo correct for multiple confounders simultaneously, we performed multivariate analysis, by including ALDH1 expression, basal-like subtype and growth pattern in a stepwise logistic regression model. Hereby we estimated the independent predictive value of these factors for mutation status. In multivariate analysis only basal-like subtype (P < 0.0005) and the presence of epithelial ALDH1 expression (P = 0.005) were independent predictors of BRCA1 mutation status, whereas growth pattern was no longer of additional predictive value (P = 0.25).\nSince PR, ER, basal-like subtype and MAI correlated significantly with both peritumoral ALDH1 expression and BRCA1 mutation status, we performed univariate and multivariate analysis to exclude confounders as above. The OR adjusted for basal-like phenotype was no longer significant (ORadjusted 2.65; 95% CI 0.77–9.04; P = 0.12), indicating that peritumoral ALDH1 expression and basal-like subtype are not of independent predictive value for BRCA1 mutation status. Because MAI was a continuous variable it was only included in multivariate analysis. 
In a logistic regression model only basal-like subclass (P < 0.0005), but not peritumoral ALDH1 expression (P = 0.08) and MAI (P = 0.55), was of independent predictive value for BRCA1 mutation status." ]
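The scoring and subtyping rules given in the methods text above (H-score = staining intensity 0–3 multiplied by the percentage of positive cells, with tumors called ALDH1-positive at an H-score of 1 or more, and the Carey et al. marker-based subtype definitions) reduce to a few lines of logic. The sketch below is a hypothetical rendering of those rules for illustration only; in the study the slides were scored visually by two blinded observers and no such code is described.

```python
# Hypothetical illustration of the Methods' scoring and subtyping rules; not study software.

def aldh1_h_score(intensity: int, percent_positive: float) -> float:
    """H-score = intensity (0 absent, 1 weak, 2 moderate, 3 strong) x % ALDH1-positive cells."""
    assert intensity in (0, 1, 2, 3) and 0 <= percent_positive <= 100
    return intensity * percent_positive

def aldh1_positive(intensity: int, percent_positive: float) -> bool:
    """A tumor is regarded ALDH1-positive if its H-score is 1 or above."""
    return aldh1_h_score(intensity, percent_positive) >= 1

def molecular_subtype(er: bool, pr: bool, her2: bool, ck56: bool, egfr: bool) -> str:
    """Immunohistochemical surrogate subtypes as defined by Carey et al. [23]."""
    if (er or pr) and not her2:
        return "luminal A"
    if (er or pr) and her2:
        return "luminal B"
    if her2 and not er and not pr:
        return "HER2"
    if not er and not pr and not her2 and (ck56 or egfr):
        return "basal-like"
    return "unclassified"  # negative for all five markers

# Illustrative values only: a triple-negative, CK5/6-positive tumor with moderate ALDH1
# staining in 5% of cells would be called basal-like and ALDH1-positive (H-score 10).
print(molecular_subtype(er=False, pr=False, her2=False, ck56=True, egfr=False))
print(aldh1_h_score(2, 5), aldh1_positive(2, 5))
```

The binary marker calls feeding the subtype rule follow the cut-offs stated above (ER/PR positive at more than 10% stained nuclei; HER-2/neu positive only at DAKO 3+).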
[ null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Study population", "Immunohistochemistry", "Scoring of immunohistochemistry", "Statistics", "Results", "Baseline characteristics of the study cohort", "ALDH1 expression in invasive carcinomas", "Correlation of ALDH1 with other characteristics", "Discussion" ]
[ "Germline mutation carriers of the BRCA1 gene locus harbor a high cumulative risk of developing breast and ovarian cancer of 57% and 40% by age 70, respectively [1]. BRCA1 related breast cancer shows a distinct histopathological and immunohistochemical phenotype. It has been shown to be more often of the ductal or medullary types, of high grade and to show a high mitotic activity index (MAI) and necrosis [2–4]. These tumors usually do not express the estrogen (ER) and progesterone receptors (PR) and are almost always HER-2/neu negative (“triple negative”) [2, 3]. At the gene-expression level these tumors cluster together with the basal-like subgroup [5]. BRCA1 seems to play an important role in DNA repair in a common pathway with BRCA2 [6]. Increasing evidence indicates that BRCA1 is necessary for mammary stem cell differentiation, a function that could explain its tissue-specificity [7–10].\nStem cells play a role in repopulating the breast at several points in the human female lifespan. These primitive cells facilitate rapid expansion and regression in puberty and pregnancy, and during the menstrual cycle. In recent studies mammary stem cells have been isolated, by evaluation of specific characteristics like multipotency, the ability to undergo both symmetrical and asymmetrical divisions and being long-lived, slow cycling cells [11, 12].\nA hierarchy of epithelial cells does not only seem to be present in the normal mammary gland, but in tumors as well. Al-Hajj et al. showed that only a small subpopulation of all cells in a tumor could be serially passaged, indicative of their tumor initiating capacity. These cells share many characteristics with stem cells, and are therefore denoted cancer stem cells (CSC) [13]. These CSCs could be important therapy targets, due to their tumor initiating capacity and being therapy resistant.\nSeveral markers have been identified for the selection of human (cancer) stem cells, of which Aldehyde dehydrogenase 1 (ALDH1) is among the most widely studied ones. ALDH1 is a cytosolic detoxifying enzyme responsible for the oxidation of (retin)aldehydes into retinoids [14], which has been put forward as a marker of both normal human mammary stem cells and breast cancer stem cells. Human mammary cells selected for increased ALDH1 activity had the broadest lineage differentiation potential and highest growth capacity in a xenograft model, indicating that the ALDH1 positive cell population is enriched for mammary stem cells. Furthermore, it was shown that the ALDH1 positive population showed high tumorigenic capacity through serial passages, in contrast with the ALDH1 negative population [15]. The exact function of ALDH1 in (mammary) stem cells remains largely unknown, but it is thought to play a role in cellular differentiation, mainly through the retinoid signaling pathway [16].\nAs mentioned above, a novel function subscribed to BRCA1 is the regulation of mammary stem cell differentiation. An association between BRCA1 and stem cells was first suspected because of the basal phenotype of BRCA1 related tumors which resembles that of primitive mammary cells, implying that BRCA1 related tumors might originate in stem cells [17]. In vitro experiments have shown that ectopic overexpression of BRCA1 increases differentiation, whereas reduction of endogenous BRCA1 impairs differentiation [8]. Knockdown of BRCA1 in primary breast epithelial cells leads to accumulation of cells expressing ALDH1 and a decrease in ER positive cells expressing luminal epithelial markers. 
Furthermore, in the normal tissue of BRCA1 mutation carriers, clusters of ALDH1 positive cells have been described that were ER negative and showed loss of heterozygosity (LOH) of BRCA1. These results indicate that BRCA1 might indeed serve as a stem cell regulator in the mammary epithelium and that the stem cell pool in the normal tissue of BRCA1 mutation carriers might be enlarged [9], although our own results contradicted this [18]. If the origin of BRCA1 related cancer lies in this pool of stem cells, we would expect characteristics of this stem cell population like ALDH1 to be reflected in BRCA1 related breast cancers. In this study we therefore evaluated ALDH1 expression in invasive breast carcinomas of BRCA1 mutation carriers in comparison with cancers of non-carriers.", "[SUBTITLE] Study population [SUBSECTION] An invasive carcinoma group composed of 41 BRCA1 germline mutation carriers was age-matched with a control group, aiming at a maximum age difference of 5 years between case and control. We excluded all cases that mentioned a strong family history of breast cancer in the pathology report and all cases of which cumulative breast cancer risk exceeded 30% based on family history [19], but included patients that were referred to Clinical Genetics because of the young age of onset only and tested negative for BRCA1/2 mutations. This control group is further denoted “sporadic”. Anonymous use of redundant tissue for research purposes is part of the standard treatment agreement with patients in our hospitals [20]. Clinical data were retrieved from the pathology report and patient files. MAI was assessed as before [21]. Growth pattern was classified as expansive if pushing margins were observed in >50% of the tumor circumference, and otherwise as infiltrative [22].\nAn invasive carcinoma group composed of 41 BRCA1 germline mutation carriers was age-matched with a control group, aiming at a maximum age difference of 5 years between case and control. We excluded all cases that mentioned a strong family history of breast cancer in the pathology report and all cases of which cumulative breast cancer risk exceeded 30% based on family history [19], but included patients that were referred to Clinical Genetics because of the young age of onset only and tested negative for BRCA1/2 mutations. This control group is further denoted “sporadic”. Anonymous use of redundant tissue for research purposes is part of the standard treatment agreement with patients in our hospitals [20]. Clinical data were retrieved from the pathology report and patient files. MAI was assessed as before [21]. Growth pattern was classified as expansive if pushing margins were observed in >50% of the tumor circumference, and otherwise as infiltrative [22].\n[SUBTITLE] Immunohistochemistry [SUBSECTION] Immunohistochemical analysis was carried out on 4-μm sections. All stainings were performed on full slides to avoid false negatives due to tumor heterogeneity. For all stainings, slides were deparaffinized in xylene and rehydrated in decreasing ethanol dilutions. Endogenous peroxidase activity was blocked with a buffer containing peroxide, followed by antigen retrieval. A cooling off period of 30 min preceded incubation with the primary antibody. Primary antibodies used, incubation time and the method of antigen retrieval are summarized in Table 1. In the case of EGFR we followed the protocol of the Pharm Dx kit. 
For all other antibodies, detection was done with a poly HRP anti Mouse/Rabbit/Rat IgG (ready to use; Powervision, Immunologic, Immunovision Technologies, Brisbane, California, USA). Peroxidase activity was developed with diaminobenzidin, except for ALDH1 for which we used Nova Red (Vector laboratories, Burlingame, USA). Slides were finally lightly counter-stained with hematoxylin and mounted. In between steps, slides were washed in PBS. Appropriate negative and positive controls were used throughout.\nTable 1Primary antibodies, source, dilution, incubation times and methods of antigen retrieval used for immunohistochemistryPrimary antibodyCloneIsotypeCompanyIncubation timeDilutionAntigen retrievalALDH144/ALDHMouseBD transduction60 min1:100Citrate pH 6, 100°C, 20 min.CK5/6D5/16BuMouseChemicon60 min1:500EDTA pH 9, 100°C, 20 min.EGFR2-18C9MousePharm Dxovernight, 4°CPre-dilutedProtein K, 5 min.ER1D5MouseDAKO60 min1:80EDTA pH 9, 100°C, 20 min.HER2SP3RabbitNeomarkers60 min1:100EDTA pH 9, 100°C, 20 min.PRPgR636MouseDAKO60 min1:25Citrate pH 6, 100°C, 20 min.\n\nPrimary antibodies, source, dilution, incubation times and methods of antigen retrieval used for immunohistochemistry\nStaining for ER, PR, HER-2/neu, cytokeratin 5/6 (CK5/6) and epidermal growth factor receptor (EGFR) was performed to classify tumors into molecular subtypes as was described by Carey et al. This classification defines subtypes as follows: luminal A (ER+ and/or PR+ and HER2−), luminal B (ER+ and/or PR+ and HER2+), HER2 subtype (HER2+, ER− and PR−) and basal-like (ER−, PR−, HER2−, CK5/6+ and/or EGFR+). When a tumor was negative for all five markers it was denoted “unclassified” [23]. This sub-classification is used as the immunohistochemical surrogate of the subgroups detected by hierarchical clustering of gene-expression analysis of breast cancers [5].\nImmunohistochemical analysis was carried out on 4-μm sections. All stainings were performed on full slides to avoid false negatives due to tumor heterogeneity. For all stainings, slides were deparaffinized in xylene and rehydrated in decreasing ethanol dilutions. Endogenous peroxidase activity was blocked with a buffer containing peroxide, followed by antigen retrieval. A cooling off period of 30 min preceded incubation with the primary antibody. Primary antibodies used, incubation time and the method of antigen retrieval are summarized in Table 1. In the case of EGFR we followed the protocol of the Pharm Dx kit. For all other antibodies, detection was done with a poly HRP anti Mouse/Rabbit/Rat IgG (ready to use; Powervision, Immunologic, Immunovision Technologies, Brisbane, California, USA). Peroxidase activity was developed with diaminobenzidin, except for ALDH1 for which we used Nova Red (Vector laboratories, Burlingame, USA). Slides were finally lightly counter-stained with hematoxylin and mounted. In between steps, slides were washed in PBS. 
Appropriate negative and positive controls were used throughout.\nTable 1Primary antibodies, source, dilution, incubation times and methods of antigen retrieval used for immunohistochemistryPrimary antibodyCloneIsotypeCompanyIncubation timeDilutionAntigen retrievalALDH144/ALDHMouseBD transduction60 min1:100Citrate pH 6, 100°C, 20 min.CK5/6D5/16BuMouseChemicon60 min1:500EDTA pH 9, 100°C, 20 min.EGFR2-18C9MousePharm Dxovernight, 4°CPre-dilutedProtein K, 5 min.ER1D5MouseDAKO60 min1:80EDTA pH 9, 100°C, 20 min.HER2SP3RabbitNeomarkers60 min1:100EDTA pH 9, 100°C, 20 min.PRPgR636MouseDAKO60 min1:25Citrate pH 6, 100°C, 20 min.\n\nPrimary antibodies, source, dilution, incubation times and methods of antigen retrieval used for immunohistochemistry\nStaining for ER, PR, HER-2/neu, cytokeratin 5/6 (CK5/6) and epidermal growth factor receptor (EGFR) was performed to classify tumors into molecular subtypes as was described by Carey et al. This classification defines subtypes as follows: luminal A (ER+ and/or PR+ and HER2−), luminal B (ER+ and/or PR+ and HER2+), HER2 subtype (HER2+, ER− and PR−) and basal-like (ER−, PR−, HER2−, CK5/6+ and/or EGFR+). When a tumor was negative for all five markers it was denoted “unclassified” [23]. This sub-classification is used as the immunohistochemical surrogate of the subgroups detected by hierarchical clustering of gene-expression analysis of breast cancers [5].\n[SUBTITLE] Scoring of immunohistochemistry [SUBSECTION] Scoring was performed by two observers (MRHvV, PJvD) blinded to BRCA1 mutation status. For ER and PR the percentage of positive nuclei was estimated, and cases with >10% of stained cells were considered positive, according to the Dutch guidelines [24]. HER-2/neu was scored according to the DAKO system, considering only 3+ cases as positive. EGFR and CK5/6 were reported positive if clear membranous [25], respectively cytoplasmic staining [26] was seen.\nSince no consensus exists for ALDH1 scoring, we scored both stromal and epithelial expression. Both intensity and percentage of intratumoral epithelial ALDH1 positive cells were evaluated in invasive carcinomas. Intensity was scored as 0 (absent), 1 (weak), 2 (moderate) or 3 (strong). H-scores reflecting the overall epithelial ALDH1 staining were calculated by multiplying the intensity score with the percentage of positive cells. A tumor was regarded positive if the H-score was 1 or above. The intensity of intratumoral stromal expression was scored from 0–4 as above in malignant tissues. The presence of peritumoral stromal overexpression was scored separately as present or absent.\nScoring was performed by two observers (MRHvV, PJvD) blinded to BRCA1 mutation status. For ER and PR the percentage of positive nuclei was estimated, and cases with >10% of stained cells were considered positive, according to the Dutch guidelines [24]. HER-2/neu was scored according to the DAKO system, considering only 3+ cases as positive. EGFR and CK5/6 were reported positive if clear membranous [25], respectively cytoplasmic staining [26] was seen.\nSince no consensus exists for ALDH1 scoring, we scored both stromal and epithelial expression. Both intensity and percentage of intratumoral epithelial ALDH1 positive cells were evaluated in invasive carcinomas. Intensity was scored as 0 (absent), 1 (weak), 2 (moderate) or 3 (strong). H-scores reflecting the overall epithelial ALDH1 staining were calculated by multiplying the intensity score with the percentage of positive cells. 
A tumor was regarded positive if the H-score was 1 or above. The intensity of intratumoral stromal expression was scored from 0–4 as above in malignant tissues. The presence of peritumoral stromal overexpression was scored separately as present or absent.\n[SUBTITLE] Statistics [SUBSECTION] Discrete variables were compared by Chi-square test and Fisher’s exact test and odds ratios (OR) were calculated with 95% confidence intervals (95% CI). Normality of continuous data was tested by Kolomogorov-Smirnov test and groups were then compared by Students-T test or Mann-Whitney U test for normally distributed and non-parametric data, respectively.\nIn the case of comparisons between continuous and discrete variables, correlation coefficients were calculated. When features were associated (P < 0.10) with both BRCA1 mutation status and the presence of ALDH1 staining, bivariate statistical analysis of these possible confounders took place by calculating ORs stratified for specific subgroups (corrected by Mantel-Haenszel procedure). In addition, multivariate analysis by means of logistic regression for all significant features was performed. All statistical analyses were performed using SPSS 15.0.\nDiscrete variables were compared by Chi-square test and Fisher’s exact test and odds ratios (OR) were calculated with 95% confidence intervals (95% CI). Normality of continuous data was tested by Kolomogorov-Smirnov test and groups were then compared by Students-T test or Mann-Whitney U test for normally distributed and non-parametric data, respectively.\nIn the case of comparisons between continuous and discrete variables, correlation coefficients were calculated. When features were associated (P < 0.10) with both BRCA1 mutation status and the presence of ALDH1 staining, bivariate statistical analysis of these possible confounders took place by calculating ORs stratified for specific subgroups (corrected by Mantel-Haenszel procedure). In addition, multivariate analysis by means of logistic regression for all significant features was performed. All statistical analyses were performed using SPSS 15.0.", "An invasive carcinoma group composed of 41 BRCA1 germline mutation carriers was age-matched with a control group, aiming at a maximum age difference of 5 years between case and control. We excluded all cases that mentioned a strong family history of breast cancer in the pathology report and all cases of which cumulative breast cancer risk exceeded 30% based on family history [19], but included patients that were referred to Clinical Genetics because of the young age of onset only and tested negative for BRCA1/2 mutations. This control group is further denoted “sporadic”. Anonymous use of redundant tissue for research purposes is part of the standard treatment agreement with patients in our hospitals [20]. Clinical data were retrieved from the pathology report and patient files. MAI was assessed as before [21]. Growth pattern was classified as expansive if pushing margins were observed in >50% of the tumor circumference, and otherwise as infiltrative [22].", "Immunohistochemical analysis was carried out on 4-μm sections. All stainings were performed on full slides to avoid false negatives due to tumor heterogeneity. For all stainings, slides were deparaffinized in xylene and rehydrated in decreasing ethanol dilutions. Endogenous peroxidase activity was blocked with a buffer containing peroxide, followed by antigen retrieval. A cooling off period of 30 min preceded incubation with the primary antibody. 
Primary antibodies used, incubation time and the method of antigen retrieval are summarized in Table 1. In the case of EGFR we followed the protocol of the Pharm Dx kit. For all other antibodies, detection was done with a poly HRP anti Mouse/Rabbit/Rat IgG (ready to use; Powervision, Immunologic, Immunovision Technologies, Brisbane, California, USA). Peroxidase activity was developed with diaminobenzidin, except for ALDH1 for which we used Nova Red (Vector laboratories, Burlingame, USA). Slides were finally lightly counter-stained with hematoxylin and mounted. In between steps, slides were washed in PBS. Appropriate negative and positive controls were used throughout.\nTable 1Primary antibodies, source, dilution, incubation times and methods of antigen retrieval used for immunohistochemistryPrimary antibodyCloneIsotypeCompanyIncubation timeDilutionAntigen retrievalALDH144/ALDHMouseBD transduction60 min1:100Citrate pH 6, 100°C, 20 min.CK5/6D5/16BuMouseChemicon60 min1:500EDTA pH 9, 100°C, 20 min.EGFR2-18C9MousePharm Dxovernight, 4°CPre-dilutedProtein K, 5 min.ER1D5MouseDAKO60 min1:80EDTA pH 9, 100°C, 20 min.HER2SP3RabbitNeomarkers60 min1:100EDTA pH 9, 100°C, 20 min.PRPgR636MouseDAKO60 min1:25Citrate pH 6, 100°C, 20 min.\n\nPrimary antibodies, source, dilution, incubation times and methods of antigen retrieval used for immunohistochemistry\nStaining for ER, PR, HER-2/neu, cytokeratin 5/6 (CK5/6) and epidermal growth factor receptor (EGFR) was performed to classify tumors into molecular subtypes as was described by Carey et al. This classification defines subtypes as follows: luminal A (ER+ and/or PR+ and HER2−), luminal B (ER+ and/or PR+ and HER2+), HER2 subtype (HER2+, ER− and PR−) and basal-like (ER−, PR−, HER2−, CK5/6+ and/or EGFR+). When a tumor was negative for all five markers it was denoted “unclassified” [23]. This sub-classification is used as the immunohistochemical surrogate of the subgroups detected by hierarchical clustering of gene-expression analysis of breast cancers [5].", "Scoring was performed by two observers (MRHvV, PJvD) blinded to BRCA1 mutation status. For ER and PR the percentage of positive nuclei was estimated, and cases with >10% of stained cells were considered positive, according to the Dutch guidelines [24]. HER-2/neu was scored according to the DAKO system, considering only 3+ cases as positive. EGFR and CK5/6 were reported positive if clear membranous [25], respectively cytoplasmic staining [26] was seen.\nSince no consensus exists for ALDH1 scoring, we scored both stromal and epithelial expression. Both intensity and percentage of intratumoral epithelial ALDH1 positive cells were evaluated in invasive carcinomas. Intensity was scored as 0 (absent), 1 (weak), 2 (moderate) or 3 (strong). H-scores reflecting the overall epithelial ALDH1 staining were calculated by multiplying the intensity score with the percentage of positive cells. A tumor was regarded positive if the H-score was 1 or above. The intensity of intratumoral stromal expression was scored from 0–4 as above in malignant tissues. The presence of peritumoral stromal overexpression was scored separately as present or absent.", "Discrete variables were compared by Chi-square test and Fisher’s exact test and odds ratios (OR) were calculated with 95% confidence intervals (95% CI). 
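As an illustration of the contingency-table statistics described here, the sketch below recomputes a crude odds ratio with a Wald 95% confidence interval, together with chi-square and Fisher's exact tests, using the epithelial ALDH1 counts reported in Table 3 of this paper (32 positive/9 negative in the BRCA1 group, 17 positive/24 negative in the sporadic group); the resulting crude OR of about 5.0 matches the crude OR quoted later in the results. The study itself used SPSS 15.0, so this Python/SciPy version only shows the calculation and its confidence-interval method may differ slightly from the SPSS output.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table: rows = BRCA1 carrier / sporadic, columns = ALDH1 positive / negative
# (intratumoral epithelial ALDH1 counts from Table 3 of this paper).
table = np.array([[32, 9],
                  [17, 24]])

chi2, p_chi2, _, _ = chi2_contingency(table, correction=False)
_, p_fisher = fisher_exact(table)

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"OR {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}); "
      f"chi-square p = {p_chi2:.4f}; Fisher exact p = {p_fisher:.4f}")
```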
Normality of continuous data was tested by Kolomogorov-Smirnov test and groups were then compared by Students-T test or Mann-Whitney U test for normally distributed and non-parametric data, respectively.\nIn the case of comparisons between continuous and discrete variables, correlation coefficients were calculated. When features were associated (P < 0.10) with both BRCA1 mutation status and the presence of ALDH1 staining, bivariate statistical analysis of these possible confounders took place by calculating ORs stratified for specific subgroups (corrected by Mantel-Haenszel procedure). In addition, multivariate analysis by means of logistic regression for all significant features was performed. All statistical analyses were performed using SPSS 15.0.", "[SUBTITLE] Baseline characteristics of the study cohort [SUBSECTION] The baseline characteristics of the cohort of patients with invasive carcinomas are summarized in Table 2. Ductal carcinoma was the most prominent histological subtype accounting for 75.6% to 80.5% of tumors in the hereditary and sporadic groups, respectively. In the BRCA1 related group the frequency of medullary and metaplastic carcinomas was higher, whereas (ducto)lobular carcinoma were more frequent in the sporadic group. In the sporadic group tumors were most frequently of the luminal A type (73.2%). Basal-like subtype was only present in 19.5% of the non-carriers, in contrast with BRCA1 mutation carriers among whom basal-like subtype was the most prominent subtype (70.7%) (P < 0.0005; OR 9.97; 95% CI 3.58–27.80). In both groups tumors were most often of high grade, but grade 3 tumors were more frequent in BRCA1 related tumors (75.6%) compared to sporadic tumors (58.5%) (P = 0.10; OR 2.20; 95% CI 0.85–5.65). This is consistent with the median MAI, which was also significantly higher in the BRCA1 group (22.0) compared to the sporadic group (14) (P = 0.007). Median tumor size was slightly higher in the sporadic group (2.5 cm) compared to the group of mutation carriers (1.9 cm) (P = 0.04). A trend for more frequent negative nodal status was seen in the BRCA1 group (70.7%) compared to the sporadic group (46.9%)(P = 0.05). An expansive growth pattern was significantly more often present in BRCA1 related tumors (52.5%) compared to sporadic controls (23.7%) (P = 0.009). Further, BRCA1 related tumors were more frequently negative for HER-2/neu (n.s), ER (P < 0.0005) and PR (P < 0.0005).\nTable 2Characteristics of BRCA1 related and sporadic breast carcinomasSporadic group (n = 41)BRCA1 group (n = 41)P-valueNPercentageNPercentageHistologic typeDuctal3380.5%3175.6%n.s.(Ducto)lobular717.1%37.3%Medullary00%49.8%Metaplastic12.4%37.3%Molecular subtypeLuminal A3073.2%1126.8%0.0005Luminal B24.9%00%HER2+12.4%00%Basal-like819.5%2970.7%Unclassified00%12.4%Histologic grade1512.2%24.9%n.s.21229.3%819.5%32458.5%3175.6%Tumour size<2 cm1536.6%2156.8%n.s.2–5 cm2151.2%1540.5%>5 cm512.2%12.7%Unknown04Lymph node statusN01546.9%2970.7%0.05N11031.3%1126.8%N2515.6%00%N326.3%12.4%Unknown90Growth patternInfiltrative2976.3%1947.5%0.009Expansive923.7%2152.5%Unknown31HER-2/neu statusNegative3892.7%41100%0.24Positive37.3%00%ER statusNegative1126.8%3175.6%0.0005Positive3073.2%1024.4%PR statusNegative1435.0%3282.1%0.0005Positive2665.0%717.9%Unknown12\n\nCharacteristics of BRCA1 related and sporadic breast carcinomas\nThe baseline characteristics of the cohort of patients with invasive carcinomas are summarized in Table 2. 
Ductal carcinoma was the most prominent histological subtype accounting for 75.6% to 80.5% of tumors in the hereditary and sporadic groups, respectively. In the BRCA1 related group the frequency of medullary and metaplastic carcinomas was higher, whereas (ducto)lobular carcinoma were more frequent in the sporadic group. In the sporadic group tumors were most frequently of the luminal A type (73.2%). Basal-like subtype was only present in 19.5% of the non-carriers, in contrast with BRCA1 mutation carriers among whom basal-like subtype was the most prominent subtype (70.7%) (P < 0.0005; OR 9.97; 95% CI 3.58–27.80). In both groups tumors were most often of high grade, but grade 3 tumors were more frequent in BRCA1 related tumors (75.6%) compared to sporadic tumors (58.5%) (P = 0.10; OR 2.20; 95% CI 0.85–5.65). This is consistent with the median MAI, which was also significantly higher in the BRCA1 group (22.0) compared to the sporadic group (14) (P = 0.007). Median tumor size was slightly higher in the sporadic group (2.5 cm) compared to the group of mutation carriers (1.9 cm) (P = 0.04). A trend for more frequent negative nodal status was seen in the BRCA1 group (70.7%) compared to the sporadic group (46.9%)(P = 0.05). An expansive growth pattern was significantly more often present in BRCA1 related tumors (52.5%) compared to sporadic controls (23.7%) (P = 0.009). Further, BRCA1 related tumors were more frequently negative for HER-2/neu (n.s), ER (P < 0.0005) and PR (P < 0.0005).\nTable 2Characteristics of BRCA1 related and sporadic breast carcinomasSporadic group (n = 41)BRCA1 group (n = 41)P-valueNPercentageNPercentageHistologic typeDuctal3380.5%3175.6%n.s.(Ducto)lobular717.1%37.3%Medullary00%49.8%Metaplastic12.4%37.3%Molecular subtypeLuminal A3073.2%1126.8%0.0005Luminal B24.9%00%HER2+12.4%00%Basal-like819.5%2970.7%Unclassified00%12.4%Histologic grade1512.2%24.9%n.s.21229.3%819.5%32458.5%3175.6%Tumour size<2 cm1536.6%2156.8%n.s.2–5 cm2151.2%1540.5%>5 cm512.2%12.7%Unknown04Lymph node statusN01546.9%2970.7%0.05N11031.3%1126.8%N2515.6%00%N326.3%12.4%Unknown90Growth patternInfiltrative2976.3%1947.5%0.009Expansive923.7%2152.5%Unknown31HER-2/neu statusNegative3892.7%41100%0.24Positive37.3%00%ER statusNegative1126.8%3175.6%0.0005Positive3073.2%1024.4%PR statusNegative1435.0%3282.1%0.0005Positive2665.0%717.9%Unknown12\n\nCharacteristics of BRCA1 related and sporadic breast carcinomas\n[SUBTITLE] ALDH1 expression in invasive carcinomas [SUBSECTION] ALDH1 expression in tumors showed wide variation, ranging from weak to very strong expression and from only a few positive cells, to a diffuse staining pattern in a high percentage of positive cells (Fig. 1). Similar to benign tissue, both stromal and epithelial cells expressed ALDH1[18]. Epithelial ALDH1 expression was distributed randomly over the tumor. However, for stromal expression a specific peritumoral staining pattern was seen in some cases. Data on epithelial and stromal expression are shown in Table 3.\nFig. 1Expression of ALDH in malignant breast tissues. Left: Breast cancer in a BRCA1 mutated patient showing strong peritumoral stromal (solid arrow) and frequent intratumoral epithelial expression (dashed arrow). 
Right: Sporadic breast cancer showing no peritumoral stromal and hardly intratumoral epithelial expression\nTable 3ALDH1 expression in BRCA1 related invasive breast carcinomas and sporadic controlsSporadic group (n = 41)BRCA1 group (n = 41)P-valueNPercentageNPercentageIntratumoral epithelial ALDH1 expressionNegative2458.5%922.0%0.001Positive1741.5%3278.0%Intensity of epithelial ALDH1 expressionAbsent2458.5%922.0%0.005Weak819.5%717.1%Moderate922.0%1741.5%Strong00%819.5%Intratumoral stromal ALDH1 expressionAbsent24.9%24.9%n.s.Weak922.0%37.3%Moderate1229.3%1229.3%Strong1843.9%2458.5%Peritumoral ALDH1 expressionAbsent3790.2%2663.4%0.001Present49.8%1536.6%\n\nExpression of ALDH in malignant breast tissues. Left: Breast cancer in a BRCA1 mutated patient showing strong peritumoral stromal (solid arrow) and frequent intratumoral epithelial expression (dashed arrow). Right: Sporadic breast cancer showing no peritumoral stromal and hardly intratumoral epithelial expression\nALDH1 expression in BRCA1 related invasive breast carcinomas and sporadic controls\nSignificantly more tumors showed epithelial ALDH1 expression in the BRCA1 group (78.0%) compared to sporadic breast cancer (41.5%) (P = 0.001). Both the intensity and the percentage of epithelial cells with ALDH1 expression were significantly higher in BRCA1 related breast cancer. In this group, 19.5% showed strong ALDH1 expression, compared to none of the sporadic tumors (P = 0.005). Overall, the median percentage of positive cells was 0.0 in the sporadic group compared to 2.0 in the hereditary group (P = 0.01), and in the cases with ALDH1 expression, the median percentage of positive cells was 10% in the sporadic group compared to 5% in the hereditary group (P = 0.27)\nStromal ALDH1 expression within the tumor was similarly high in both groups (strong expression in 43.9% of sporadic controls and 58.5% in hereditary cases; P = 0.14). However, the peritumoral stroma showed significantly more frequent overexpression in the BRCA1 related group (36.6%) compared to non-carriers (9.8%) (P = 0.001) (Fig. 1).\nThe presence of peritumoral and epithelial ALDH1 expression did not correlate with each other (P = 0.73; OR 1.21; 95% CI 0.42–3.47) and in multivariate analysis both were independent predictors of BRCA1 mutations status.\nALDH1 expression in tumors showed wide variation, ranging from weak to very strong expression and from only a few positive cells, to a diffuse staining pattern in a high percentage of positive cells (Fig. 1). Similar to benign tissue, both stromal and epithelial cells expressed ALDH1[18]. Epithelial ALDH1 expression was distributed randomly over the tumor. However, for stromal expression a specific peritumoral staining pattern was seen in some cases. Data on epithelial and stromal expression are shown in Table 3.\nFig. 1Expression of ALDH in malignant breast tissues. Left: Breast cancer in a BRCA1 mutated patient showing strong peritumoral stromal (solid arrow) and frequent intratumoral epithelial expression (dashed arrow). 
Right: Sporadic breast cancer showing no peritumoral stromal and hardly intratumoral epithelial expression\nTable 3ALDH1 expression in BRCA1 related invasive breast carcinomas and sporadic controlsSporadic group (n = 41)BRCA1 group (n = 41)P-valueNPercentageNPercentageIntratumoral epithelial ALDH1 expressionNegative2458.5%922.0%0.001Positive1741.5%3278.0%Intensity of epithelial ALDH1 expressionAbsent2458.5%922.0%0.005Weak819.5%717.1%Moderate922.0%1741.5%Strong00%819.5%Intratumoral stromal ALDH1 expressionAbsent24.9%24.9%n.s.Weak922.0%37.3%Moderate1229.3%1229.3%Strong1843.9%2458.5%Peritumoral ALDH1 expressionAbsent3790.2%2663.4%0.001Present49.8%1536.6%\n\nExpression of ALDH in malignant breast tissues. Left: Breast cancer in a BRCA1 mutated patient showing strong peritumoral stromal (solid arrow) and frequent intratumoral epithelial expression (dashed arrow). Right: Sporadic breast cancer showing no peritumoral stromal and hardly intratumoral epithelial expression\nALDH1 expression in BRCA1 related invasive breast carcinomas and sporadic controls\nSignificantly more tumors showed epithelial ALDH1 expression in the BRCA1 group (78.0%) compared to sporadic breast cancer (41.5%) (P = 0.001). Both the intensity and the percentage of epithelial cells with ALDH1 expression were significantly higher in BRCA1 related breast cancer. In this group, 19.5% showed strong ALDH1 expression, compared to none of the sporadic tumors (P = 0.005). Overall, the median percentage of positive cells was 0.0 in the sporadic group compared to 2.0 in the hereditary group (P = 0.01), and in the cases with ALDH1 expression, the median percentage of positive cells was 10% in the sporadic group compared to 5% in the hereditary group (P = 0.27)\nStromal ALDH1 expression within the tumor was similarly high in both groups (strong expression in 43.9% of sporadic controls and 58.5% in hereditary cases; P = 0.14). However, the peritumoral stroma showed significantly more frequent overexpression in the BRCA1 related group (36.6%) compared to non-carriers (9.8%) (P = 0.001) (Fig. 1).\nThe presence of peritumoral and epithelial ALDH1 expression did not correlate with each other (P = 0.73; OR 1.21; 95% CI 0.42–3.47) and in multivariate analysis both were independent predictors of BRCA1 mutations status.\n[SUBTITLE] Correlation of ALDH1 with other characteristics [SUBSECTION] Epithelial ALDH1 expression in tumors correlated significantly with growth pattern (P = 0.02; OR 3.29; 95% CI 1.19–9.09) and younger age (P = 0.05; 95% CI 0.08–9.35). In addition, a trend for correlation with PR negativity (P = 0.06; OR 0.41; 95% CI 0.16–1.04), ER negativity (P = 0.08; OR 0.45; 95% CI 0.18–1.10), basal-like subtype (P = 0.08; OR 2.26; 95% CI 0.91–5.65) and larger tumor size (P = 0.08) was found. Intratumoral stromal ALDH1 expression did not correlate with other characteristics.\nPeritumoral ALDH1 overexpression correlated with PR negativity (P = 0.002; OR 0.11; 95% CI 0.02–0.52), ER negativity (P = 0.001; OR 0.13; 95% CI 0.04–0.50), basal-like subtype (P = 0.004; OR 4.87; 95% CI 1.55–15.27) and high MAI (P = 0.009).\nSince ER, PR, basal-like subtype and growth pattern were associated with both mutation status and the presence of epithelial ALDH1 expression, we performed stratified analysis for basal-like subtype and growth pattern, and estimated corrected ORs by Mantel-Haenszel procedure. 
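The Mantel-Haenszel adjustment mentioned here can be reproduced with standard software; a small sketch using statsmodels' StratifiedTable is shown below. The per-stratum counts are placeholders (the stratified 2x2 tables are not given in the paper), so the printed numbers will not match the adjusted ORs reported in the following sentences; the sketch only illustrates the procedure.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# One 2x2 table (BRCA1 carrier vs. sporadic by epithelial ALDH1 positive vs. negative)
# per stratum of the potential confounder, here basal-like yes/no.
# The counts below are hypothetical placeholders, not the study data.
strata = [
    np.array([[21, 8], [5, 3]]),    # basal-like tumors
    np.array([[11, 1], [12, 21]]),  # non-basal-like tumors
]

st = StratifiedTable(strata)
or_mh = st.oddsratio_pooled                        # Mantel-Haenszel pooled OR
ci_low, ci_high = st.oddsratio_pooled_confint()    # 95% confidence interval
print(f"MH-adjusted OR {or_mh:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```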
The OR, adjusted for basal-like subtype, was still significant (ORadjusted 5.11; 95% CI 1.64–15.97; P = 0.005) and hardly differed from the crude OR (ORcrude 5.02). The estimated OR adjusted for growth pattern was slightly lower than the crude OR (ORadjusted 3.88; 95% CI 1.41–10.69; P = 0.009), but still significant. ER and PR were not independently analyzed as possible confounders, because they were constituents of basal-like subtype.\nTo correct for multiple confounders simultaneously, we performed multivariate analysis, by including ALDH1 expression, basal-like subtype and growth pattern in a stepwise logistic regression model. Hereby we estimated the independent predictive value of these factors for mutation status. In multivariate analysis only basal-like subtype (P < 0.0005) and the presence of epithelial ALDH1 expression (P = 0.005) were independent predictors of BRCA1 mutation status, whereas growth pattern was no longer of additional predictive value (P = 0.25).\nSince PR, ER, basal-like subtype and MAI correlated significantly with both peritumoral ALDH1 expression and BRCA1 mutation status, we performed univariate and multivariate analysis to exclude confounders as above. The OR adjusted for basal-like phenotype was no longer significant (ORadjusted 2.65; 95% CI 0.77–9.04; P = 0.12), indicating that peritumoral ALDH1 expression and basal-like subtype are not of independent predictive value for BRCA1 mutation status. Because MAI was a continuous variable it was only included in multivariate analysis. In a logistic regression model only basal-like subclass (P < 0.0005), but not peritumoral ALDH1 expression (P = 0.08) and MAI (P = 0.55), was of independent predictive value for BRCA1 mutation status.\nEpithelial ALDH1 expression in tumors correlated significantly with growth pattern (P = 0.02; OR 3.29; 95% CI 1.19–9.09) and younger age (P = 0.05; 95% CI 0.08–9.35). In addition, a trend for correlation with PR negativity (P = 0.06; OR 0.41; 95% CI 0.16–1.04), ER negativity (P = 0.08; OR 0.45; 95% CI 0.18–1.10), basal-like subtype (P = 0.08; OR 2.26; 95% CI 0.91–5.65) and larger tumor size (P = 0.08) was found. Intratumoral stromal ALDH1 expression did not correlate with other characteristics.\nPeritumoral ALDH1 overexpression correlated with PR negativity (P = 0.002; OR 0.11; 95% CI 0.02–0.52), ER negativity (P = 0.001; OR 0.13; 95% CI 0.04–0.50), basal-like subtype (P = 0.004; OR 4.87; 95% CI 1.55–15.27) and high MAI (P = 0.009).\nSince ER, PR, basal-like subtype and growth pattern were associated with both mutation status and the presence of epithelial ALDH1 expression, we performed stratified analysis for basal-like subtype and growth pattern, and estimated corrected ORs by Mantel-Haenszel procedure. The OR, adjusted for basal-like subtype, was still significant (ORadjusted 5.11; 95% CI 1.64–15.97; P = 0.005) and hardly differed from the crude OR (ORcrude 5.02). The estimated OR adjusted for growth pattern was slightly lower than the crude OR (ORadjusted 3.88; 95% CI 1.41–10.69; P = 0.009), but still significant. ER and PR were not independently analyzed as possible confounders, because they were constituents of basal-like subtype.\nTo correct for multiple confounders simultaneously, we performed multivariate analysis, by including ALDH1 expression, basal-like subtype and growth pattern in a stepwise logistic regression model. Hereby we estimated the independent predictive value of these factors for mutation status. 
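The multivariable logistic-regression step can be sketched as follows. The data frame is synthetic (generated so that basal-like subtype and epithelial ALDH1 expression both raise the probability of carrying a BRCA1 mutation); it stands in for the per-tumor data, which are not available here, and the model shown is a plain multivariable logit rather than the stepwise SPSS procedure used by the authors.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # synthetic tumors; the actual study comprised 82 invasive carcinomas

basal_like = rng.binomial(1, 0.45, n)
aldh1_epithelial = rng.binomial(1, 0.30 + 0.30 * basal_like)
expansive_growth = rng.binomial(1, 0.25 + 0.20 * basal_like)
lin = -1.5 + 1.6 * basal_like + 1.2 * aldh1_epithelial + 0.3 * expansive_growth
brca1 = rng.binomial(1, 1 / (1 + np.exp(-lin)))

df = pd.DataFrame(dict(brca1=brca1, basal_like=basal_like,
                       aldh1_epithelial=aldh1_epithelial,
                       expansive_growth=expansive_growth))

fit = smf.logit("brca1 ~ basal_like + aldh1_epithelial + expansive_growth",
                data=df).fit(disp=False)
print(np.exp(fit.params).round(2))      # adjusted odds ratios
print(np.exp(fit.conf_int()).round(2))  # corresponding 95% confidence intervals
print(fit.pvalues.round(3))
```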
In multivariate analysis only basal-like subtype (P < 0.0005) and the presence of epithelial ALDH1 expression (P = 0.005) were independent predictors of BRCA1 mutation status, whereas growth pattern was no longer of additional predictive value (P = 0.25).\nSince PR, ER, basal-like subtype and MAI correlated significantly with both peritumoral ALDH1 expression and BRCA1 mutation status, we performed univariate and multivariate analysis to exclude confounders as above. The OR adjusted for basal-like phenotype was no longer significant (ORadjusted 2.65; 95% CI 0.77–9.04; P = 0.12), indicating that peritumoral ALDH1 expression and basal-like subtype are not of independent predictive value for BRCA1 mutation status. Because MAI was a continuous variable it was only included in multivariate analysis. In a logistic regression model only basal-like subclass (P < 0.0005), but not peritumoral ALDH1 expression (P = 0.08) and MAI (P = 0.55), was of independent predictive value for BRCA1 mutation status.", "The baseline characteristics of the cohort of patients with invasive carcinomas are summarized in Table 2. Ductal carcinoma was the most prominent histological subtype accounting for 75.6% to 80.5% of tumors in the hereditary and sporadic groups, respectively. In the BRCA1 related group the frequency of medullary and metaplastic carcinomas was higher, whereas (ducto)lobular carcinoma were more frequent in the sporadic group. In the sporadic group tumors were most frequently of the luminal A type (73.2%). Basal-like subtype was only present in 19.5% of the non-carriers, in contrast with BRCA1 mutation carriers among whom basal-like subtype was the most prominent subtype (70.7%) (P < 0.0005; OR 9.97; 95% CI 3.58–27.80). In both groups tumors were most often of high grade, but grade 3 tumors were more frequent in BRCA1 related tumors (75.6%) compared to sporadic tumors (58.5%) (P = 0.10; OR 2.20; 95% CI 0.85–5.65). This is consistent with the median MAI, which was also significantly higher in the BRCA1 group (22.0) compared to the sporadic group (14) (P = 0.007). Median tumor size was slightly higher in the sporadic group (2.5 cm) compared to the group of mutation carriers (1.9 cm) (P = 0.04). A trend for more frequent negative nodal status was seen in the BRCA1 group (70.7%) compared to the sporadic group (46.9%)(P = 0.05). An expansive growth pattern was significantly more often present in BRCA1 related tumors (52.5%) compared to sporadic controls (23.7%) (P = 0.009). 
Further, BRCA1 related tumors were more frequently negative for HER-2/neu (n.s), ER (P < 0.0005) and PR (P < 0.0005).\nTable 2Characteristics of BRCA1 related and sporadic breast carcinomasSporadic group (n = 41)BRCA1 group (n = 41)P-valueNPercentageNPercentageHistologic typeDuctal3380.5%3175.6%n.s.(Ducto)lobular717.1%37.3%Medullary00%49.8%Metaplastic12.4%37.3%Molecular subtypeLuminal A3073.2%1126.8%0.0005Luminal B24.9%00%HER2+12.4%00%Basal-like819.5%2970.7%Unclassified00%12.4%Histologic grade1512.2%24.9%n.s.21229.3%819.5%32458.5%3175.6%Tumour size<2 cm1536.6%2156.8%n.s.2–5 cm2151.2%1540.5%>5 cm512.2%12.7%Unknown04Lymph node statusN01546.9%2970.7%0.05N11031.3%1126.8%N2515.6%00%N326.3%12.4%Unknown90Growth patternInfiltrative2976.3%1947.5%0.009Expansive923.7%2152.5%Unknown31HER-2/neu statusNegative3892.7%41100%0.24Positive37.3%00%ER statusNegative1126.8%3175.6%0.0005Positive3073.2%1024.4%PR statusNegative1435.0%3282.1%0.0005Positive2665.0%717.9%Unknown12\n\nCharacteristics of BRCA1 related and sporadic breast carcinomas", "ALDH1 expression in tumors showed wide variation, ranging from weak to very strong expression and from only a few positive cells, to a diffuse staining pattern in a high percentage of positive cells (Fig. 1). Similar to benign tissue, both stromal and epithelial cells expressed ALDH1[18]. Epithelial ALDH1 expression was distributed randomly over the tumor. However, for stromal expression a specific peritumoral staining pattern was seen in some cases. Data on epithelial and stromal expression are shown in Table 3.\nFig. 1Expression of ALDH in malignant breast tissues. Left: Breast cancer in a BRCA1 mutated patient showing strong peritumoral stromal (solid arrow) and frequent intratumoral epithelial expression (dashed arrow). Right: Sporadic breast cancer showing no peritumoral stromal and hardly intratumoral epithelial expression\nTable 3ALDH1 expression in BRCA1 related invasive breast carcinomas and sporadic controlsSporadic group (n = 41)BRCA1 group (n = 41)P-valueNPercentageNPercentageIntratumoral epithelial ALDH1 expressionNegative2458.5%922.0%0.001Positive1741.5%3278.0%Intensity of epithelial ALDH1 expressionAbsent2458.5%922.0%0.005Weak819.5%717.1%Moderate922.0%1741.5%Strong00%819.5%Intratumoral stromal ALDH1 expressionAbsent24.9%24.9%n.s.Weak922.0%37.3%Moderate1229.3%1229.3%Strong1843.9%2458.5%Peritumoral ALDH1 expressionAbsent3790.2%2663.4%0.001Present49.8%1536.6%\n\nExpression of ALDH in malignant breast tissues. Left: Breast cancer in a BRCA1 mutated patient showing strong peritumoral stromal (solid arrow) and frequent intratumoral epithelial expression (dashed arrow). Right: Sporadic breast cancer showing no peritumoral stromal and hardly intratumoral epithelial expression\nALDH1 expression in BRCA1 related invasive breast carcinomas and sporadic controls\nSignificantly more tumors showed epithelial ALDH1 expression in the BRCA1 group (78.0%) compared to sporadic breast cancer (41.5%) (P = 0.001). Both the intensity and the percentage of epithelial cells with ALDH1 expression were significantly higher in BRCA1 related breast cancer. In this group, 19.5% showed strong ALDH1 expression, compared to none of the sporadic tumors (P = 0.005). 
Overall, the median percentage of positive cells was 0.0 in the sporadic group compared to 2.0 in the hereditary group (P = 0.01), and in the cases with ALDH1 expression, the median percentage of positive cells was 10% in the sporadic group compared to 5% in the hereditary group (P = 0.27)\nStromal ALDH1 expression within the tumor was similarly high in both groups (strong expression in 43.9% of sporadic controls and 58.5% in hereditary cases; P = 0.14). However, the peritumoral stroma showed significantly more frequent overexpression in the BRCA1 related group (36.6%) compared to non-carriers (9.8%) (P = 0.001) (Fig. 1).\nThe presence of peritumoral and epithelial ALDH1 expression did not correlate with each other (P = 0.73; OR 1.21; 95% CI 0.42–3.47) and in multivariate analysis both were independent predictors of BRCA1 mutations status.", "Epithelial ALDH1 expression in tumors correlated significantly with growth pattern (P = 0.02; OR 3.29; 95% CI 1.19–9.09) and younger age (P = 0.05; 95% CI 0.08–9.35). In addition, a trend for correlation with PR negativity (P = 0.06; OR 0.41; 95% CI 0.16–1.04), ER negativity (P = 0.08; OR 0.45; 95% CI 0.18–1.10), basal-like subtype (P = 0.08; OR 2.26; 95% CI 0.91–5.65) and larger tumor size (P = 0.08) was found. Intratumoral stromal ALDH1 expression did not correlate with other characteristics.\nPeritumoral ALDH1 overexpression correlated with PR negativity (P = 0.002; OR 0.11; 95% CI 0.02–0.52), ER negativity (P = 0.001; OR 0.13; 95% CI 0.04–0.50), basal-like subtype (P = 0.004; OR 4.87; 95% CI 1.55–15.27) and high MAI (P = 0.009).\nSince ER, PR, basal-like subtype and growth pattern were associated with both mutation status and the presence of epithelial ALDH1 expression, we performed stratified analysis for basal-like subtype and growth pattern, and estimated corrected ORs by Mantel-Haenszel procedure. The OR, adjusted for basal-like subtype, was still significant (ORadjusted 5.11; 95% CI 1.64–15.97; P = 0.005) and hardly differed from the crude OR (ORcrude 5.02). The estimated OR adjusted for growth pattern was slightly lower than the crude OR (ORadjusted 3.88; 95% CI 1.41–10.69; P = 0.009), but still significant. ER and PR were not independently analyzed as possible confounders, because they were constituents of basal-like subtype.\nTo correct for multiple confounders simultaneously, we performed multivariate analysis, by including ALDH1 expression, basal-like subtype and growth pattern in a stepwise logistic regression model. Hereby we estimated the independent predictive value of these factors for mutation status. In multivariate analysis only basal-like subtype (P < 0.0005) and the presence of epithelial ALDH1 expression (P = 0.005) were independent predictors of BRCA1 mutation status, whereas growth pattern was no longer of additional predictive value (P = 0.25).\nSince PR, ER, basal-like subtype and MAI correlated significantly with both peritumoral ALDH1 expression and BRCA1 mutation status, we performed univariate and multivariate analysis to exclude confounders as above. The OR adjusted for basal-like phenotype was no longer significant (ORadjusted 2.65; 95% CI 0.77–9.04; P = 0.12), indicating that peritumoral ALDH1 expression and basal-like subtype are not of independent predictive value for BRCA1 mutation status. Because MAI was a continuous variable it was only included in multivariate analysis. 
In a logistic regression model only basal-like subclass (P < 0.0005), but not peritumoral ALDH1 expression (P = 0.08) and MAI (P = 0.55), was of independent predictive value for BRCA1 mutation status.", "This is the first study to evaluate ALDH1 expression in breast cancers of BRCA1 mutation carriers and to compare it to controls. We show that ALDH1 expression is significantly higher in the epithelium and in the peritumoral stroma in cancers of BRCA1 mutation carriers compared to sporadic controls.\nIntratumoral epithelial ALDH1 expression was clearly more present in BRCA1 mutation carriers, implying that this population indeed has an enlarged CSC component. We find a somewhat higher frequency of epithelial expression in the group of sporadic invasive carcinomas (40%), as was described elsewhere (5–26%) [15, 27–29]. This might be explained by the fact that we used age-matching to select our sporadic controls. Hereby we possibly select a more aggressive population then we would get by randomly selecting breast cancers. This is also reflected by the high frequency of grade 3 carcinomas (58.5%) in our sporadic group. An additional problem, when comparing frequencies between studies, is that there is no consensus on the scoring method and the cut-off used for positivity. Most studies score both intensity and percentage of positive cells and use a cut off H >1, like we did, but higher thresholds (likely leading to lower frequencies) have also been applied.\nIn both our study and literature a correlation between intratumoral epithelial ALDH1 expression and basal-like subtype and ER- and PR-negative receptor status was found [15, 27]. We were unable to verify correlations between ALDH1 in tumors and grade, HER2 overexpressing subtype, CK5/6, CK14, [15] EGFR, p53, TOP2A [28], Ki67 [29] and MAI [30] that were previously described. This is probably due to the fact that we evaluated these characteristics only in a small population (e.g.CK5/6, EGFR) or not at all (e.g.CK14, p53, TOP2A). Furthermore, we found a significant correlation between ALDH1 expression in tumors and expansive growth pattern and younger age of onset.\nThere is no general consensus on the immunohistochemical definition of threshold for CK5/6 positivity to use. Korsching, E. et al, however, conclude that although a variability of thresholds was used in different studies on CK5/6 expression, they have very similar findings [31]. Therefore, the chosen threshold for CK5/6 positivity seems of little importance for the outcome of our current study.\nRates of strong intratumoral stromal ALDH1 expression were similar to literature [28] and significantly higher in BRCA1 carriers than controls, as was peritumoral stromal expression. Stromal ALDH1 expression is interesting because of its association with BRCA1 mutation status and outcome [28], but is not likely stem cell related, because expression was also found in over 80% of normal stroma tissue of non-carriers. The role of ALDH1 in fibroblasts remains largely unexplored. Our results imply that either BRCA1 is a regulator of stromal ALDH1 expression, or that stromal ALDH1 expression is a physiologic response to (very early) carcinogenetic events.\nIt is plausible that stromal and epithelial ALDH1 expression reflect different processes in carcinogenesis, since we could not find an association between epithelial and peritumoral stromal ALDH1 expression. This hypothesis is also supported by the seemingly contradictory previous reports on the relation between ALDH1 expression and prognosis. 
Epithelial ALDH1 expression has been associated with significantly decreased overall [15] and disease free survival, grade and systemic metastasis [32]. In addition, patients with an ALDH1 positive core needle biopsy had a significantly lower rate of pathologic complete response (pCR) after neo-adjuvant chemotherapy [33]. Contradictory with the association between epithelial ALDH1 expression and poor prognosis, intratumoral stromal ALDH1 expression correlated with good prognosis. [28] An explanation for the different effects of stromal and epithelial ALDH1 expression might be found in the biological role ALDH1 plays in processes apart from its potential role in stem cells and cellular differentiation. Both a tumor suppressing and an oncogenic role have been ascribed to retinoic acids (RA), the product of enzymatic conversion of vitamin A by ALDH1. In the presence of functional RA-receptor α (RARα), RA exhibits a growth inhibitory pro-apoptotic role, whereas RA was shown to promote cell growth and survival in the absence of functional RARα [34]. These findings indicate that RA can cause a diversity of effects, both pro- and anti-apoptotic in specific situations, for instance different cell types (e.g. stromal and epithelial cells). This diversity of working mechanisms is in line with the independent associations for stromal and epithelial staining, we find in this study and the contradictory associations with prognosis, found by others.\nApart from its pure biological interest, our finding that ALDH1 expression characterizes BRCA1 related breast cancer might serve several clinical purposes. ALDH1 could serve as a biomarker for BRCA1 mutation carriers. Easily assessable biomarkers are necessary to recognize hereditary cancers, because they can help to trigger analyzing family history and to decide on mutation testing in patients at borderline risk based on family history only. Tools that help to select patients for screening are needed, since genetic screening is time-consuming and expensive. Further, an established phenotype can help to pin down the pathogenicity of so called “unclassified variant” mutations. Since intratumoral ALDH1 expression was the only characteristic with additive predictive value independent of basal-like phenotype for BRCA1 mutation status, adding ALDH1 to a panel of IHC markers might significantly increase the predictive value of the model.\nIn addition, ALDH1 might be a possible therapeutic target in breast cancer. CSCs are considered relatively therapy resistant [35]. Several promising ways to target the ALDH1 CSC population have been identified. ALDH1 was first identified to be a marker of hematopoietic stem cells (HSCs). In CD34+CD38- HSCs it was shown that inhibiting ALDH1 signaling increased the stem cell population. This effect could be reversed by administration of exogenous all-trans-retinoic acid (ATRA), which is used on a therapeutic base in patients with acute promyelocytic leukemia (APML) [36]. ATRA stimulates differentiation into mature cells in these patients and treatment with ATRA in addition to chemotherapy significantly improves survival [37]. Its use in the treatment of other (solid) cancers has been limited to date, due to systemic toxicity and the development of resistance during carcinogenesis [38]. Recently it was shown that ALDH1 regulates the differentiation of breast CSC through retinoid signaling in a similar manner as in HSCs [16]. 
Further, blocking the IL-8 receptor CXCR1 with repertaxin has been shown to deplete the ALDH1 positive CSC population in vitro and to decrease the CSC population in human breast cancer xenografts, retarding tumor growth and reducing metastasis [39]. Both ATRA and repartaxin might enhance the effect of conventional chemotherapy by specifically targeting CSCs and sensitizing these relatively therapy resistant cells. This therapy might be of specific use in BRCA1 related breast cancer, since our results imply that these cancers have an increased ALDH1 positive CSC population.\nIn conclusion, compared to sporadic controls, ALDH1 positive (cancer stem) cells and peritumoral expression were significantly more frequent in BRCA1 related breast cancer. ALDH1 tumor cell expression was an independent predictor of BRCA1 mutation status. Thereby, ALDH1 might serve as a BRCA1 biomarker and therapeutic target." ]
[ "introduction", "materials|methods", null, null, null, null, "results", null, null, null, "discussion" ]
[ "Breast cancer", "BRCA1", "ALDH1", "Stem cells", "Hereditary breast cancer" ]
A participatory return-to-work intervention for temporary agency workers and unemployed workers sick-listed due to musculoskeletal disorders: results of a randomized controlled trial.
21336673
Within the labour force, workers without an employment contract represent a vulnerable group. In most cases, when sick-listed, these workers have no workplace/employer to return to. Therefore, the aim of this study was to evaluate the effectiveness, with respect to return-to-work, of a participatory return-to-work program compared to usual care for unemployed workers and temporary agency workers sick-listed due to musculoskeletal disorders.
INTRODUCTION
The workers, sick-listed for 2-8 weeks due to musculoskeletal disorders, were randomly allocated to the participatory return-to-work program (n = 79) or to usual care (n = 84). The new program is a stepwise procedure aimed at making a consensus-based return-to-work plan, with the possibility of a temporary (therapeutic) workplace. Outcomes were measured at baseline, 3, 6, 9 and 12 months. The primary outcome measure was time to sustainable first return-to-work. Secondary outcome measures were duration of sickness benefit, functional status, pain intensity, and perceived health.
METHODS
The median duration until sustainable first return-to-work was 161 days in the intervention group, compared to 299 days in the usual care group. The new return-to-work program resulted in a non-significant delay in RTW during the first 90 days, followed by a significant advantage in RTW rate after 90 days (hazard ratio 2.24 [95% confidence interval 1.28-3.94]; P = 0.005). No significant differences were found for the measured secondary outcomes.
RESULTS
The newly developed participatory return-to-work program seems to be a promising intervention to facilitate work resumption and reduce work disability among temporary agency workers and unemployed workers, sick-listed due to musculoskeletal disorders.
CONCLUSIONS
[ "Adult", "Employment", "Female", "Humans", "Kaplan-Meier Estimate", "Male", "Middle Aged", "Musculoskeletal Diseases", "Netherlands", "Occupational Health Services", "Patient Participation", "Program Evaluation", "Proportional Hazards Models", "Referral and Consultation", "Sick Leave", "Time Factors", "Treatment Outcome", "Work Capacity Evaluation" ]
3173632
Introduction
Sickness absence and work disability are a common and substantial public health problem with major economic consequences worldwide [1, 2]. Given that long-term sickness absence accounts for a large share of annual work disability costs in Western countries [1], the development of effective return-to-work (RTW) interventions is considered an important public health (research) challenge [3]. To date, most RTW intervention research is aimed at sick-listed (established regular) employees, i.e. workers with relatively permanent employment relationships. In contrast, the development of effective RTW interventions for sick-listed workers without an employment contract is lagging [4, 5]. However, in view of the growing international trend towards labour market flexibility [6], the development of RTW interventions specifically aimed at sick-listed workers without an employment contract and at sick-listed workers with a flexible labour arrangement, e.g. temporary agency workers, is of crucial importance. These workers represent a vulnerable group within the working population. Various studies show a poorer health status and an increased risk of (long-term) work disability among these workers compared to regular employees [7–12]. In addition, they are burdened with a greater distance to the labour market [11, 13, 14]. When sick-listed, these workers have in most cases no workplace/employer to return to [15, 16]. Hence, tailor-made RTW interventions that provide a workplace for (therapeutic) RTW could be an important factor in the recovery and (vocational) rehabilitation process [15]. Therefore, a participatory RTW program was developed, based on a successful RTW intervention for regular employees sick-listed due to low back pain [17, 18]. This newly developed RTW program comprises a stepwise communication process to identify and solve obstacles to RTW, resulting in a consensus-based plan to facilitate (therapeutic) RTW. The three main stakeholders in this intervention are the sick-listed worker, the labour expert representing the Social Security Agency (SSA) who guides the worker with regard to vocational rehabilitation, and an independent RTW coordinator. The role of the RTW coordinator is to stimulate a high degree of involvement of both the sick-listed worker and the labour expert, and to reach consensus about the RTW plan. To offer a workplace for (therapeutic) RTW, a vocational rehabilitation agency was contracted to find a suitable (therapeutic) workplace matching the formulated RTW plan. The aim of this study was to assess the effectiveness of the new participatory RTW program compared to usual care for unemployed workers and temporary agency workers sick-listed due to musculoskeletal disorders (MSD). The primary outcome measure was time to sustainable first RTW. Duration of sickness benefit was a secondary outcome measure.
null
null
Results
[SUBTITLE] Recruitment of Participants [SUBSECTION] Recruitment of participants took place between March 2007 and September 2008. The returned screening questionnaires resulted in 784 potentially eligible workers who were interested in participation. After telephone contact 191 workers refused participation and 327 workers did not meet the inclusion criteria, resulting in 266 workers for whom intake meetings were planned. During the intake meeting 103 workers were not included due to several reasons (see Fig. 1). Finally, 163 workers who met all inclusion criteria were enrolled in the study and randomised to the participatory RTW program (n = 79) or usual care (n = 84). An overview of the recruitment flow is presented in Fig. 1.Fig. 1Flow of the workers in the study Flow of the workers in the study Recruitment of participants took place between March 2007 and September 2008. The returned screening questionnaires resulted in 784 potentially eligible workers who were interested in participation. After telephone contact 191 workers refused participation and 327 workers did not meet the inclusion criteria, resulting in 266 workers for whom intake meetings were planned. During the intake meeting 103 workers were not included due to several reasons (see Fig. 1). Finally, 163 workers who met all inclusion criteria were enrolled in the study and randomised to the participatory RTW program (n = 79) or usual care (n = 84). An overview of the recruitment flow is presented in Fig. 1.Fig. 1Flow of the workers in the study Flow of the workers in the study [SUBTITLE] Loss to Follow-Up [SUBSECTION] Data about RTW and sickness benefit were available for all workers for the whole 12-months follow-up period. The RTW data were collected from the SSA database, including the workers’ file, and the self-report questionnaires. Data about sickness benefit were collected from the SSA database. For the self-reported secondary outcomes complete follow-up data were available for 116 participants (=71.2%). Data about RTW and sickness benefit were available for all workers for the whole 12-months follow-up period. The RTW data were collected from the SSA database, including the workers’ file, and the self-report questionnaires. Data about sickness benefit were collected from the SSA database. For the self-reported secondary outcomes complete follow-up data were available for 116 participants (=71.2%). [SUBTITLE] Baseline Characteristics [SUBSECTION] Table 2 presents a summary of the measured baseline characteristics of the participants in the participatory RTW program group and the usual care group. For most of the baseline characteristics (i.e. worker-related, pain-related, health-related, work-related, and behavioural determinants) there were no or only minor (non-significant) differences between the two groups. All participants were fully work disabled at the time of enrolment. Approximately half of the workers in both groups (usual care group 52.4% and intervention group 54.4%, respectively) worked prior to reporting sick, i.e. the onset of work disability. For the participants who did not work before reporting sick the median duration between end of last job and first day of reporting sick was 13.0 months (interquartile range (IQR) 6.3–45.3 months) in the usual care group and 13.5 months (IQR 6.0–43.5 months) in the participatory RTW program group. 
However, despite randomisation, prognostic dissimilarities were present at baseline with worse physical role functioning (P = 0.052); more regular work schedule in last work (P = 0.031); and less intention to RTW despite symptoms (P = 0.024) in controls. If necessary, for these dissimilarities was adjusted in analyses.Table 2Baseline characteristics of the workers without employment contract, sick-listed due to musculoskeletal disorders (N = 163)Intervention group (N = 79)Control group (N = 84)Age (mean ± SD)44.0 ± 10.745.6 ± 9.0Gender (% male)57.063.1Level of education (% low)57.060.7Pain intensity (1–10 score) (mean ± SD) Back pain7.1 ± 2.06.8 ± 1.9 Neck pain7.1 ± 1.76.7 ± 2.0 Other pain6.5 ± 1.86.3 ± 1.9Functional status (0–100 score) (mean ± SD) Physical functioning46.0 ± 22.151.4 ± 21.3 Social functioning49.4 ± 25.451.2 ± 27.5Perceived health (0–100 score) (mean ± SD)56.3 ± 21.860.0 ± 20.3Type of worker (%) Temporary agency worker51.952.4 Unemployed worker48.147.6Type of last work (% physically and/or mentally demanding)74.775.0Work schedule (% day work)58.278.3Worker’s expectation regarding RTW at baseline (mean ± SD)2.22 ± 1.152.14 ± 1.12Intention to RTW despite symptoms (1–5) (mean ± SD)3.46 ± 1.103.05 ± 1.19 Baseline characteristics of the workers without employment contract, sick-listed due to musculoskeletal disorders (N = 163) Table 2 presents a summary of the measured baseline characteristics of the participants in the participatory RTW program group and the usual care group. For most of the baseline characteristics (i.e. worker-related, pain-related, health-related, work-related, and behavioural determinants) there were no or only minor (non-significant) differences between the two groups. All participants were fully work disabled at the time of enrolment. Approximately half of the workers in both groups (usual care group 52.4% and intervention group 54.4%, respectively) worked prior to reporting sick, i.e. the onset of work disability. For the participants who did not work before reporting sick the median duration between end of last job and first day of reporting sick was 13.0 months (interquartile range (IQR) 6.3–45.3 months) in the usual care group and 13.5 months (IQR 6.0–43.5 months) in the participatory RTW program group. However, despite randomisation, prognostic dissimilarities were present at baseline with worse physical role functioning (P = 0.052); more regular work schedule in last work (P = 0.031); and less intention to RTW despite symptoms (P = 0.024) in controls. 
If necessary, for these dissimilarities was adjusted in analyses.Table 2Baseline characteristics of the workers without employment contract, sick-listed due to musculoskeletal disorders (N = 163)Intervention group (N = 79)Control group (N = 84)Age (mean ± SD)44.0 ± 10.745.6 ± 9.0Gender (% male)57.063.1Level of education (% low)57.060.7Pain intensity (1–10 score) (mean ± SD) Back pain7.1 ± 2.06.8 ± 1.9 Neck pain7.1 ± 1.76.7 ± 2.0 Other pain6.5 ± 1.86.3 ± 1.9Functional status (0–100 score) (mean ± SD) Physical functioning46.0 ± 22.151.4 ± 21.3 Social functioning49.4 ± 25.451.2 ± 27.5Perceived health (0–100 score) (mean ± SD)56.3 ± 21.860.0 ± 20.3Type of worker (%) Temporary agency worker51.952.4 Unemployed worker48.147.6Type of last work (% physically and/or mentally demanding)74.775.0Work schedule (% day work)58.278.3Worker’s expectation regarding RTW at baseline (mean ± SD)2.22 ± 1.152.14 ± 1.12Intention to RTW despite symptoms (1–5) (mean ± SD)3.46 ± 1.103.05 ± 1.19 Baseline characteristics of the workers without employment contract, sick-listed due to musculoskeletal disorders (N = 163) [SUBTITLE] Compliance [SUBSECTION] In the usual care group 7 workers did not receive usual care as they reported full recovery of health complaints with subsequent ending of sickness benefit shortly after randomisation. Also 7 workers in the participatory RTW program group did not receive the allocated intervention, i.e. the participatory RTW program was not followed, due to several reasons (see Fig. 1). The remaining 72 workers in the intervention group all had the first consult with the insurance physician. One worker reported full recovery of health with ending of sickness benefit before the meeting with the RTW coordinator. For 23 workers the insurance physician established full work ability with ending of sickness benefit, i.e. claim closure, during the first consult. In case of claim closure without actual RTW, these workers were, in accordance with the usual care policy of the SSA, not referred to the RTW coordinator for making a RTW action plan. In addition, following the protocol, 10 workers were not referred to the RTW coordinator as the insurance physician established absence of work ability on medical grounds for at least 3 months during the first consult. The remaining 38 workers in the intervention group had the meetings with the labour expert and the RTW coordinator with the making of a consensus based RTW plan. Referral to a vocational rehabilitation agency for finding a suitable temporary workplace took place for 30 workers. Placement in a temporary (therapeutic) workplace was successfully achieved for 22 workers. In addition, four workers found a suitable workplace on own initiative. The median duration of working in a temporary (therapeutic) workplace was 90 days (IQR 41–147 days). During the 12-months follow-up 12 of the 22 workers with therapeutic work resumption were offered an employment contract. In the usual care group 7 workers did not receive usual care as they reported full recovery of health complaints with subsequent ending of sickness benefit shortly after randomisation. Also 7 workers in the participatory RTW program group did not receive the allocated intervention, i.e. the participatory RTW program was not followed, due to several reasons (see Fig. 1). The remaining 72 workers in the intervention group all had the first consult with the insurance physician. 
One worker reported full recovery of health with ending of sickness benefit before the meeting with the RTW coordinator. For 23 workers the insurance physician established full work ability with ending of sickness benefit, i.e. claim closure, during the first consult. In case of claim closure without actual RTW, these workers were, in accordance with the usual care policy of the SSA, not referred to the RTW coordinator for making a RTW action plan. In addition, following the protocol, 10 workers were not referred to the RTW coordinator as the insurance physician established absence of work ability on medical grounds for at least 3 months during the first consult. The remaining 38 workers in the intervention group had the meetings with the labour expert and the RTW coordinator with the making of a consensus based RTW plan. Referral to a vocational rehabilitation agency for finding a suitable temporary workplace took place for 30 workers. Placement in a temporary (therapeutic) workplace was successfully achieved for 22 workers. In addition, four workers found a suitable workplace on own initiative. The median duration of working in a temporary (therapeutic) workplace was 90 days (IQR 41–147 days). During the 12-months follow-up 12 of the 22 workers with therapeutic work resumption were offered an employment contract. [SUBTITLE] Usual Care [SUBSECTION] [SUBTITLE] Consults with the Occupational Health Care Professionals [SUBSECTION] In the participatory RTW program group 21 workers (total of 23 consults) had a consult with the case-manager of the SSA, compared to 41 workers (total of 49 consults) in the usual care group. However, the workers in the participatory RTW program group had more consults with the insurance physician (n = 70; 157 consults) and the labour expert (n = 36; 55 consults) of the SSA, compared to the usual care group, where 60 workers (total of 107 consults) reported a consult with the insurance physician and 19 workers (total of 26 consults) reported a consult with the labour expert. In the participatory RTW program group 21 workers (total of 23 consults) had a consult with the case-manager of the SSA, compared to 41 workers (total of 49 consults) in the usual care group. However, the workers in the participatory RTW program group had more consults with the insurance physician (n = 70; 157 consults) and the labour expert (n = 36; 55 consults) of the SSA, compared to the usual care group, where 60 workers (total of 107 consults) reported a consult with the insurance physician and 19 workers (total of 26 consults) reported a consult with the labour expert. [SUBTITLE] Received Occupational Health Care Interventions [SUBSECTION] In the participatory RTW program group 25 workers received a usual care intervention (total of 28 interventions) during follow-up with a median duration of 6.4 months (IQR 3.0–12.4 months), compared to 30 workers in the usual care group (total of 32 interventions) with a median duration of 7.4 months (IQR 2.9–11.2 months). Three workers in the participatory RTW program group and two workers in the usual care group received two occupational health care interventions. 
The received usual care interventions consisted of: (1) offering (short-term) education/training (participatory RTW program group (PWP) n = 11, usual care group (UC) n = 5); (2) referral to a vocational rehabilitation agency (PWP n = 4, UC n = 9); (3) referral to an employment agency for employment-finding (PWP n = 5, UC n = 4); (4) personal coaching (PWP n = 3, UC n = 3); (5) interview training (including writing a job application letter) (PWP n = 2, UC n = 4); (6) placement in a temporary workplace (on trial) (PWP n = 1, UC n = 0); (7) searching for a sheltered workplace (PWP n = 1, UC n = 3), (8) on-the-job training (PWP n = 1, UC n = 1); (9) referral to a graded activity program (PWP n = 0, UC n = 2); and (10) type of intervention unknown (PWP n = 0, UC n = 1). In the participatory RTW program group 25 workers received a usual care intervention (total of 28 interventions) during follow-up with a median duration of 6.4 months (IQR 3.0–12.4 months), compared to 30 workers in the usual care group (total of 32 interventions) with a median duration of 7.4 months (IQR 2.9–11.2 months). Three workers in the participatory RTW program group and two workers in the usual care group received two occupational health care interventions. The received usual care interventions consisted of: (1) offering (short-term) education/training (participatory RTW program group (PWP) n = 11, usual care group (UC) n = 5); (2) referral to a vocational rehabilitation agency (PWP n = 4, UC n = 9); (3) referral to an employment agency for employment-finding (PWP n = 5, UC n = 4); (4) personal coaching (PWP n = 3, UC n = 3); (5) interview training (including writing a job application letter) (PWP n = 2, UC n = 4); (6) placement in a temporary workplace (on trial) (PWP n = 1, UC n = 0); (7) searching for a sheltered workplace (PWP n = 1, UC n = 3), (8) on-the-job training (PWP n = 1, UC n = 1); (9) referral to a graded activity program (PWP n = 0, UC n = 2); and (10) type of intervention unknown (PWP n = 0, UC n = 1). [SUBTITLE] Consults with the Occupational Health Care Professionals [SUBSECTION] In the participatory RTW program group 21 workers (total of 23 consults) had a consult with the case-manager of the SSA, compared to 41 workers (total of 49 consults) in the usual care group. However, the workers in the participatory RTW program group had more consults with the insurance physician (n = 70; 157 consults) and the labour expert (n = 36; 55 consults) of the SSA, compared to the usual care group, where 60 workers (total of 107 consults) reported a consult with the insurance physician and 19 workers (total of 26 consults) reported a consult with the labour expert. In the participatory RTW program group 21 workers (total of 23 consults) had a consult with the case-manager of the SSA, compared to 41 workers (total of 49 consults) in the usual care group. However, the workers in the participatory RTW program group had more consults with the insurance physician (n = 70; 157 consults) and the labour expert (n = 36; 55 consults) of the SSA, compared to the usual care group, where 60 workers (total of 107 consults) reported a consult with the insurance physician and 19 workers (total of 26 consults) reported a consult with the labour expert. 
[SUBTITLE] Return-to-Work [SUBSECTION] The median time until sustainable first RTW was 161 days (IQR 88–365 days) in the participatory RTW program group and 299 days (IQR 71–365 days) in the usual care group (log rank test; P = 0.12). The median total number of days at work during follow-up was 128 days (IQR 0–247 days) in the participatory RTW program group and 46 days (IQR 0–246 days) in the usual care group. In Fig. 2 the Kaplan–Meier curves for time until sustainable first RTW are presented for both groups. The crude Cox regression analysis showed a violation of the proportional hazards assumption, with crossing of the survival curves at approximately 90 days of follow-up. Therefore, a time-dependent covariate (T > 90 days) was added to the Cox proportional hazards model (P = 0.011). To adjust for significant confounding, the baseline variables ‘work schedule in last work’ and ‘intention to RTW despite symptoms’ were included in the model (Table 2).
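For illustration only, the survival comparison described above could be set up along the following lines in Python with the lifelines package; the data frame `df` and its column names are hypothetical and are not taken from the study.

```python
# Minimal sketch (not the study code): Kaplan-Meier description and log-rank test
# for time until sustainable first RTW, censored at 365 days of follow-up.
# Assumed (hypothetical) columns of `df`:
#   'days_to_rtw' - days from randomization to sustainable first RTW (365 if none)
#   'rtw_event'   - 1 if sustainable RTW occurred within follow-up, 0 if censored
#   'group'       - 'PWP' (participatory RTW program) or 'UC' (usual care)
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_group(df: pd.DataFrame):
    fitters = {}
    for name, sub in df.groupby("group"):
        kmf = KaplanMeierFitter()
        kmf.fit(sub["days_to_rtw"], event_observed=sub["rtw_event"], label=name)
        fitters[name] = kmf
        # kmf.plot_survival_function() would draw curves of the kind shown in Fig. 2
        print(name, "median time to sustainable RTW:", kmf.median_survival_time_)
    pwp, uc = df[df.group == "PWP"], df[df.group == "UC"]
    res = logrank_test(pwp["days_to_rtw"], uc["days_to_rtw"],
                       event_observed_A=pwp["rtw_event"],
                       event_observed_B=uc["rtw_event"])
    print("log-rank P value:", res.p_value)
    return fitters
```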
The resulting adjusted HR (T ≤ 90 days) was 0.76 (95% CI 0.42–1.37; P = 0.36), and the adjusted HR (T > 90 days) was 2.24 (95% CI 1.28–3.94; P = 0.005). The per-protocol analysis showed an adjusted HR (T ≤ 90 days) of 0.93 (95% CI 0.49–1.87; P = 0.83), and an adjusted HR (T > 90 days) of 2.25 (95% CI 1.28–3.98; P = 0.005). In addition, the per-protocol analysis showed a median time until sustainable RTW of 157 days (IQR 89–365 days) in the participatory RTW program group and 330 days (IQR 87–365 days) in the usual care group (log rank test; P = 0.029). Significant clustering on the level of the insurance physicians and on the level of the couples of labour experts and RTW coordinators was not found in the analyses (Table 3).

Fig. 2 Kaplan–Meier curves for sustainable first return-to-work during the 12-months follow-up for the participatory return-to-work program group and the usual care group

Table 3 Differences in return-to-work (RTW) between the participatory RTW program group and the usual care group

Adjusted model(a)                                                Regression coefficient   SE     P value   HR     95% CI (lower–upper)
Intervention
  T ≤ 90 days                                                    −0.29                    0.30   0.34      0.75   0.42–1.34
  T > 90 days                                                     0.78                    0.28   0.01      2.19   1.26–3.80
Adjusted for work schedule
  T ≤ 90 days                                                    −0.23                    0.30   0.44      0.79   0.44–1.43
  T > 90 days                                                     0.84                    0.29   <0.005    2.32   1.32–4.10
Adjusted for intention to RTW despite symptoms
  T ≤ 90 days                                                    −0.33                    0.30   0.27      0.72   0.40–1.29
  T > 90 days                                                     0.74                    0.28   0.01      2.10   1.20–3.66
Adjusted for work schedule + intention to RTW despite symptoms
  T ≤ 90 days                                                    −0.27                    0.30   0.36      0.76   0.42–1.37
  T > 90 days                                                     0.81                    0.29   0.01      2.24   1.28–3.94
Clustering on level of insurance physician
  T ≤ 90 days                                                    −0.30                    0.28   0.42      0.74   0.35–1.55
  T > 90 days                                                     0.74                    0.47   <0.005    2.10   1.33–3.22
Clustering on level of labour expert + RTW coordinator
  T ≤ 90 days                                                    −0.25                    0.35   0.47      0.78   0.40–1.54
  T > 90 days                                                     0.73                    0.26   0.01      2.10   1.24–3.48

Cox proportional hazards models from the adjusted Cox regression analyses. Regression coefficients, standard errors (SE), P values, hazard ratios (HR) and 95% confidence intervals (CI) are presented.
(a) Results of the crude Cox regression model are not presented, due to violation of the proportional hazards assumption, i.e. crossing of the survival curves at approximately 90 days of follow-up.
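The piecewise hazard ratios in Table 3 follow from the episode-splitting approach described above, with one HR for T ≤ 90 days and one for T > 90 days. The sketch below is an illustration under assumptions, not the authors' Stata syntax; the worker-level columns are hypothetical and continue the ones assumed in the previous sketch.

```python
# Minimal sketch: Cox model with a time-dependent intervention effect, obtained by
# splitting each worker's follow-up at 90 days so that separate hazard ratios are
# estimated before and after the cut point. Assumes hypothetical columns
# 'worker_id', 'days_to_rtw', 'rtw_event', 'group'.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

def split_at_cut(df: pd.DataFrame, cut: int = 90) -> pd.DataFrame:
    rows = []
    for _, r in df.iterrows():
        stop, event, pwp = r["days_to_rtw"], r["rtw_event"], int(r["group"] == "PWP")
        if stop <= cut:
            rows.append(dict(id=r["worker_id"], start=0, stop=stop, event=event,
                             pwp_early=pwp, pwp_late=0))
        else:
            rows.append(dict(id=r["worker_id"], start=0, stop=cut, event=0,
                             pwp_early=pwp, pwp_late=0))
            rows.append(dict(id=r["worker_id"], start=cut, stop=stop, event=event,
                             pwp_early=0, pwp_late=pwp))
    return pd.DataFrame(rows)

def fit_timesplit_cox(df: pd.DataFrame, cut: int = 90) -> CoxTimeVaryingFitter:
    long_df = split_at_cut(df, cut)
    ctv = CoxTimeVaryingFitter()
    ctv.fit(long_df, id_col="id", start_col="start", stop_col="stop", event_col="event")
    # exp(coef) of pwp_early ~ HR for T <= cut; exp(coef) of pwp_late ~ HR for T > cut
    return ctv
```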
[SUBTITLE] Secondary Outcome Measures [SUBSECTION] [SUBTITLE] Duration of Sickness Benefit [SUBSECTION] The median claim duration until first sustainable ending of sickness benefit was 160 days (IQR 39–365 days) in the participatory RTW program group and 91 days (IQR 33–344 days) in the usual care group (Mann–Whitney U test; P = 0.14). The per-protocol analysis results differed slightly and showed a median duration of 168 days (IQR 45–365 days) and 109 days (IQR 35–365 days), respectively (Mann–Whitney U test; P = 0.18).
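As a minimal illustration only (not the study code), the group comparison of claim duration could be carried out as follows; the input arrays are hypothetical.

```python
# Sketch: medians with IQR and a two-sided Mann-Whitney U test for claim duration.
import numpy as np
from scipy.stats import mannwhitneyu

def compare_benefit_duration(pwp_days, uc_days):
    for label, days in (("PWP", pwp_days), ("UC", uc_days)):
        q1, med, q3 = np.percentile(days, [25, 50, 75])
        print(f"{label}: median {med:.0f} days (IQR {q1:.0f}-{q3:.0f})")
    stat, p = mannwhitneyu(pwp_days, uc_days, alternative="two-sided")
    print("Mann-Whitney U:", stat, "P =", round(p, 3))
```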
[SUBTITLE] Attitude, Social Influence, and Self-Efficacy (ASE) Determinants [SUBSECTION] Table 4 presents the results of the mixed model analyses for the attitude, social influence, and self-efficacy determinants, accounting for possible clustering on the level of the insurance physicians. After 3 months of follow-up both groups experienced more social influence to RTW, but developed a less positive attitude towards RTW compared to baseline. However, no statistically significant differences were found between the two groups.

Table 4 Results of the mixed model analyses

                                            Group   Baseline        3 months        6 months(a)     12 months(a)    Group*Time P value
Functional status (0–100 score) (RAND-36)
 Bodily pain                                PWP     27.7 (15.9)     48.8 (20.2)     47.4 (21.4)     51.4 (23.9)     0.22
                                            UC      29.4 (15.4)     45.7 (23.0)     50.0 (23.0)     53.9 (25.4)
 Physical functioning                       PWP     46.0 (22.1)     57.3 (23.4)     57.6 (23.2)     59.4 (23.6)     0.73
                                            UC      51.4 (21.3)     59.8 (25.2)     64.5 (24.2)     66.5 (26.2)
 Physical role functioning                  PWP     10.4 (20.6)     29.7 (38.8)     31.6 (41.1)     46.8 (44.0)     0.13
                                            UC      5.1 (13.3)      24.7 (36.7)     38.3 (41.7)     45.4 (43.6)
 Social functioning                         PWP     49.4 (25.4)     62.9 (24.0)     66.6 (25.1)     65.9 (26.0)     0.72
                                            UC      51.2 (27.5)     58.9 (26.1)     66.1 (25.3)     63.7 (28.8)
Health status (0–100 score) (RAND-36)
 Perceived present health                   PWP     56.3 (21.8)     52.4 (20.1)     56.6 (22.1)     58.5 (21.5)     0.70
                                            UC      60.0 (20.3)     55.0 (23.3)     55.9 (24.2)     59.0 (24.1)
 Change in health                           PWP     31.4 (25.6)     41.8 (26.0)     48.8 (28.3)     58.1 (29.6)     0.17
                                            UC      38.1 (25.3)     38.7 (30.3)     50.8 (28.4)     56.3 (31.3)
Pain intensity (1–10 score) (Von Korff)
 Back pain                                  PWP     7.2 (1.9)       6.0 (2.2)       5.6 (2.3)       5.4 (2.6)       0.92
                                            UC      6.8 (2.0)       5.6 (2.5)       5.0 (2.8)       4.9 (2.8)
 Neck pain                                  PWP     7.5 (1.5)       5.3 (2.3)       4.4 (3.0)       4.4 (3.2)       0.52
                                            UC      6.5 (1.9)       5.3 (2.9)       4.0 (3.2)       4.2 (3.1)
 Other pain                                 PWP     6.7 (1.8)       6.0 (2.2)       5.0 (2.7)       4.9 (3.0)       0.89
                                            UC      6.2 (1.9)       5.7 (2.3)       5.1 (2.5)       4.7 (3.0)
Attitude, social influence, self-efficacy determinants
 Attitude to RTW (−5 to 12)                 PWP     5.13 (4.27)     3.41 (5.21)     –               –               0.18
                                            UC      4.87 (3.96)     1.92 (5.81)     –               –
 Social influence to RTW (−26 to 18)        PWP     −5.16 (8.72)    −2.13 (9.26)    –               –               0.16
                                            UC      −3.39 (8.89)    −2.59 (9.20)    –               –
 Self-efficacy to RTW (−4 to 4)             PWP     0.42 (2.43)     0.44 (2.12)     –               –               0.79
                                            UC      0.06 (2.26)     0.19 (2.33)     –               –
 Intention to RTW despite symptoms (1–5)    PWP     3.46 (1.10)     3.65 (1.24)     –               –               0.32
                                            UC      3.05 (1.19)     3.53 (1.39)     –               –
Response rate questionnaires (%)                    100             85.3            77.9            81.6

Differences in health-related outcomes, and the attitude, social influence, and self-efficacy determinants between the participatory RTW program group (PWP) and the usual care group (UC), accounting for possible clustering on the level of the insurance physician. Unless indicated otherwise the observed mean and standard deviation are presented.
(a) Attitude, social influence, and self-efficacy determinants were only measured at baseline and 3 months.
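A linear mixed model of the kind summarised in Table 4 could, purely as a sketch under assumptions, be specified as below. The long-format data frame, its column names and the example outcome are hypothetical; the original analyses were run in SPSS and Stata rather than Python.

```python
# Sketch: group-by-time linear mixed model for one repeated-measures outcome,
# with a random intercept for insurance physician to account for clustering.
# Assumed (hypothetical) columns of `long_df`: the outcome value, 'group'
# (PWP/UC), 'time' (baseline/3m/6m/12m), 'physician_id', 'worker_id'.
import pandas as pd
import statsmodels.formula.api as smf

def fit_group_by_time(long_df: pd.DataFrame, outcome: str = "bodily_pain"):
    model = smf.mixedlm(f"{outcome} ~ C(group) * C(time)", data=long_df,
                        groups=long_df["physician_id"])
    result = model.fit(reml=True)
    # the group:time interaction terms correspond to the Group*Time P values in Table 4
    return result
```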
[SUBTITLE] Health-Related Outcomes [SUBSECTION] Table 4 also presents the results on the effectiveness of the participatory RTW program on health-related outcomes, accounting for possible clustering on the level of the insurance physicians.
No statistically significant differences were found between the improvements in functional status, pain intensity, and perceived health in the participatory RTW program group and the usual care group.
[ "Study Design and Setting", "Study Population and Recruitment", "Randomization and Blinding", "Interventions", "Usual Care", "Participatory RTW Program", "Outcome Measures", "Data Collection", "Primary Outcome Measure", "Secondary Outcome Measures", "Prognostic Measures", "Statistical Analyses", "Recruitment of Participants", "Loss to Follow-Up", "Baseline Characteristics", "Compliance", "Usual Care", "Consults with the Occupational Health Care Professionals", "Received Occupational Health Care Interventions", "Return-to-Work", "Secondary Outcome Measures", "Duration of Sickness Benefit", "Attitude, Social Influence, and Self-Efficacy (ASE) Determinants", "Health-Related Outcomes", "Main Findings", "Strengths of This Study", "Limitations of This Study", "Comparison with Other Studies", "Implications for Practice" ]
[ "The study is a randomized controlled trial carried out in collaboration with five front offices of the Dutch National Social Security Agency (SSA) and four large Dutch commercially operating vocational rehabilitation agencies (Olympia, Adeux, Capability, and Randstad Rentrée) in the eastern part of the Netherlands. The Medical Ethics Committee of the VU University Medical Centre (Amsterdam, the Netherlands) approved the study design, the protocols and procedures, and informed consent. The design of the study has been described in detail elsewhere [19].", "Between March 2007 and September 2008 all temporary agency workers and unemployed workers who were sick-listed between one and 2 weeks due to MSD and lived in the eastern part of the Netherlands received a letter with a screening questionnaire from the insurance physician of the SSA, on behalf of the researchers. The workers who returned the screening questionnaire indicating that they were still sick-listed and interested in participation, were contacted by the researchers by telephone to give additional information about the content of the study and to check eligibility. Temporary agency workers and unemployed workers sick-listed between 2 and 8 weeks with MSD as main health complaint for their sickness benefit claim were included. The main exclusion criteria were: (1) being sick-listed for more than 8 weeks; (2) not being able to complete questionnaires written in the Dutch language; (3) having a conflict with the Social Security Agency regarding a sickness benefit claim or a long-term disability claim; (4) having a legal conflict, e.g. an ongoing injury compensation claim; and (5) having had an episode of sickness absence due to MSD within 1 month before the current sickness benefit claim.\nThe insurance physician of the SSA was responsible for the identification of severe co-morbidity among the included workers; i.e. having a terminal disease, having a serious psychiatric disorder, or having a serious cardio-vascular disease. These participants remained in the intervention group, but were excluded from the participatory RTW program.", "Before randomization, to prevent unequal distribution of relevant prognostic baseline characteristics, the sick-listed workers were pre-stratified based on two important prognostic factors, namely type of worker [20–22], i.e. temporary agency worker or unemployed worker, and degree of mental or physical work demands (light or heavy) in last job held before the current sickness benefit claim [23, 24]. Next, block randomization (using blocks of four allocations) was applied to ensure equal group sizes within each stratum. A separate block randomization table was generated for each of the five participating SSA front offices. Allocation to the intervention group or the usual care group was performed after informed consent and completion of the baseline questionnaire.\nThe participants and occupational health care professionals were not blinded to the allocation result. Data regarding work resumption and sickness benefit claim duration were collected from the SSA database. Data entry of the self-reported data was performed by a research assistant using a unique research code for each participant, to ensure that analyses of the data by the researcher was blinded.", "[SUBTITLE] Usual Care [SUBSECTION] In the Netherlands, workers who are sick-listed and who have no (longer) an employment contract, i.e. 
[SUBTITLE] Interventions [SUBSECTION] [SUBTITLE] Usual Care [SUBSECTION] In the Netherlands, workers who are sick-listed and who do not, or no longer, have an employment contract, i.e. no employer/workplace to return to, are entitled to supportive income and occupational health care by the SSA during their sickness benefit period. Vocational rehabilitation is carried out by a team of occupational health care professionals from the SSA, consisting of an insurance physician, a labour expert, and a case-manager. The insurance physician of the SSA guides the worker according to the guidelines for occupational health care of the Netherlands Society of Occupational Medicine. He/she advises about recovery, e.g. health promotion and RTW options, and, if necessary, he/she can advise and refer to work disability oriented treatment/guidance, such as graded physical therapy or work-related psychological help. The labour expert is responsible for vocational rehabilitation support. Based on a personal examination of the work abilities of the worker (including the problem analysis performed by the insurance physician) and expert knowledge of the (regional) labour market, the labour expert advises the worker with respect to return-to-work options. When the chance of work resumption in regular work without additional vocational rehabilitation support is viewed as slim, interventions such as referral to a vocational rehabilitation agency, personal coaching or short-term education/training are offered to the worker. The case manager of the SSA monitors the vocational rehabilitation process and regularly keeps in contact with the worker to evaluate the progress. In case of an impeded (vocational) recovery/rehabilitation process the case manager consults with, and if necessary refers to, the insurance physician or the labour expert to identify and tackle the cause of this stagnation. This can lead to alterations in the vocational rehabilitation guidance, for instance offering more intensive personal guidance or referral to a graded activity program. The occupational health care by the SSA ends when the sickness benefit ends, i.e. when full recovery of health is present and/or when full recovery of work ability is established by the insurance physician. Both can occur without actual RTW of the worker.
[SUBTITLE] Participatory RTW Program [SUBSECTION] The intervention group received usual care. This did not differ from the vocational rehabilitation guidance offered to the workers in the usual care group, i.e. the earlier described roles of the OHC professionals. However, in addition, these sick-listed workers were referred by their insurance physician to a RTW coordinator for the new participatory RTW program. The aim of this new program was to make a consensus-based RTW plan. In this study the RTW coordinator was an employee of the SSA, in most cases with a labour expert background, with experience in process guidance, with sufficient knowledge and experience regarding (vocational) rehabilitation, and with no involvement in the usual care guidance of the sick-listed worker, to guarantee independence. All RTW coordinators received training prior to the start of the study.
The newly developed RTW program consisted of consecutive steps, starting with a combined consult with the insurance physician and the labour expert of the SSA. Next, two structured meetings took place, between the sick-listed worker and the RTW coordinator and between the labour expert of the SSA and the RTW coordinator, respectively. In the meeting with the sick-listed worker the RTW coordinator used a structured interview to identify and prioritise obstacles for RTW. The ranking of identified obstacles for RTW was performed based on frequency (how often do they occur?) and severity (how large is the perceived impact on functioning in daily life and/or work?). The meeting between the RTW coordinator and the labour expert was carried out in a comparable manner and resulted in a selection of prioritised obstacles for RTW from the perspective of the labour expert. Next, the RTW coordinator, the sick-listed worker, and the labour expert brainstormed about solutions to address the prioritised obstacles. The proposed solutions were judged on the basis of availability, feasibility and ability to solve the barrier. The final step resulted in the making of a consensus-based RTW plan describing the prioritised obstacles for RTW, the consensus-based solutions, the person(s) responsible for implementation of each selected solution, and a time-path for when it should be carried out.
Furthermore, to create a possibility for therapeutic work resumption, a commercially operating vocational rehabilitation agency could be contracted to find a temporary (therapeutic) workplace matching the formulated RTW plan and taking into account the worker’s (functional) limitations. Six weeks after the brainstorm session the RTW coordinator contacted the sick-listed worker and the labour expert by telephone to evaluate the actual implementation of the solutions, including the progress regarding placement in temporary (therapeutic) work. A more detailed description of the structured meetings with the RTW coordinator is presented in Table 1. The content of the entire new participatory RTW program has been described in detail elsewhere [15].

Table 1 Content of the structured meeting with the RTW coordinator

Introduction
- Check if the worker, the insurance physician and the labour expert agree with following the participatory program.
- Explain the independent role of the RTW coordinator.
- Explain that the main goal is to make a consensus-based RTW plan.

Inventory of obstacles for RTW
Meeting with the worker
- Starting point is the inventory of obstacles for RTW given by the insurance physician as a home assignment to the worker after the first consult.
- Identify (perceived) work- and non-work related obstacles for RTW from the perspective of the worker. Use the following categories as a framework: personal factors, social factors, physical environment demands (e.g. ergonomic obstacles at the workplace), dynamic action demands (e.g. repetitive work), static posture demands, work experience, commuting, remaining factors (e.g. financial problems).
- Rank the identified obstacles based on frequency and perceived severity.
Meeting with the labour expert
- Identify (perceived) work- and non-work related obstacles for RTW from the perspective of the labour expert.
- Rank the identified obstacles based on frequency and perceived severity.

Brainstorm session with the worker and the labour expert
- The 3 top-ranked obstacles for RTW from both the worker and the labour expert are the starting point.
- Think of solutions for all 6 prioritised obstacles, e.g. reduction of physical workload, graded return-to-work, improving the commuting distance, short-term education, help with debt repayment.
- Stimulate active involvement from the worker and the labour expert.
- Choose solutions based on availability, feasibility and ability to solve the obstacle.

Making of the consensus-based RTW plan
- Give a summary of the prioritised obstacles for RTW, the chosen (consensus-based) solutions, if possible a concrete work(place) profile, the person(s) responsible for implementation of the solution(s), and a time-path.
- Underline the importance of the worker's own initiative to achieve RTW.
- Send the report to the worker, the labour expert, and the insurance physician.
- If chosen for finding a suitable temporary (therapeutic) workplace, contact the case manager of the contracted vocational rehabilitation agency.
[SUBTITLE] Outcome Measures [SUBSECTION] [SUBTITLE] Data Collection [SUBSECTION] Prior to randomization the baseline measurement was performed. Follow-up measurements took place at 3, 6, 9 and 12 months after baseline. Data regarding RTW were obtained from both the SSA database, including the workers’ file, and the self-report questionnaires at 12-months follow-up. Data on sickness benefit were collected from the SSA database. Data regarding applied occupational health care interventions were obtained from the SSA database and the medical file of the worker at the SSA.

[SUBTITLE] Primary Outcome Measure [SUBSECTION] The primary outcome measure in this study was sustainable first RTW, which was defined as the duration in calendar days from the day of randomization until first sustainable return-to-work, i.e. return-to-work in any type of paid work or work resumption with ongoing benefits for at least 28 consecutive (calendar) days.
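As a sketch under assumptions, the operational definition above could be applied to per-worker work-episode records as follows; the data format and the function are hypothetical and only illustrate the "at least 28 consecutive days" criterion.

```python
# Sketch: deriving the primary outcome - days from randomization until the start of
# the first work episode lasting at least 28 consecutive calendar days - from a
# hypothetical per-worker list of (start_day, end_day) work episodes, with days
# counted from the day of randomization.
def days_to_sustainable_rtw(episodes, follow_up_days=365, min_duration=28):
    """Return (days, event): event is 1 if sustainable RTW occurred, else 0 (censored)."""
    for start_day, end_day in sorted(episodes):
        if end_day - start_day + 1 >= min_duration:
            return start_day, 1
    return follow_up_days, 0

# Example: work from day 40-50 (too short), then from day 120 for 60 days
print(days_to_sustainable_rtw([(40, 50), (120, 179)]))  # -> (120, 1)
```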
return-to-work in any type of paid work or work resumption with ongoing benefits for at least 28 consecutive (calendar) days.\nThe primary outcome measure in this study was sustainable first RTW, which was defined as the duration in calendar days from the day of randomization until first sustainable return-to-work, i.e. return-to-work in any type of paid work or work resumption with ongoing benefits for at least 28 consecutive (calendar) days.\n[SUBTITLE] Secondary Outcome Measures [SUBSECTION] Secondary outcome measures in the study were duration of sickness benefit, pain intensity, and functional status. Duration of sickness benefit was measured as a separate outcome measure because, contrary to regular employees, for sick-listed temporary agency workers and sick-listed unemployed workers recovery of health and/or functional limitations with ending of the sickness benefit does not necessarily coincide with actual RTW. First sustainable ending of sickness benefit was defined as the duration in calendar days from the day of randomization until ending of sickness benefit for at least 28 days. Recurrence of sickness absence with an accepted sickness benefit claim within 28 days after ending of the previous sickness benefit was considered as belonging to the preceding sickness benefit period, on condition that it was due to the same (or related) MSD. The total number of days of sickness benefit during the entire 12-months follow-up period was also calculated. Musculoskeletal pain intensity was measured using the Von Korff questionnaire [25]. Functional status, i.e. perceived functional impairments in daily life, and general health were assessed with the Dutch translation of the SF-36 [26, 27].\nSecondary outcome measures in the study were duration of sickness benefit, pain intensity, and functional status. Duration of sickness benefit was measured as a separate outcome measure because, contrary to regular employees, for sick-listed temporary agency workers and sick-listed unemployed workers recovery of health and/or functional limitations with ending of the sickness benefit does not necessarily coincide with actual RTW. First sustainable ending of sickness benefit was defined as the duration in calendar days from the day of randomization until ending of sickness benefit for at least 28 days. Recurrence of sickness absence with an accepted sickness benefit claim within 28 days after ending of the previous sickness benefit was considered as belonging to the preceding sickness benefit period, on condition that it was due to the same (or related) MSD. The total number of days of sickness benefit during the entire 12-months follow-up period was also calculated. Musculoskeletal pain intensity was measured using the Von Korff questionnaire [25]. Functional status, i.e. perceived functional impairments in daily life, and general health were assessed with the Dutch translation of the SF-36 [26, 27].\n[SUBTITLE] Prognostic Measures [SUBSECTION] All covariates were measured at baseline. Type of previous work (light or heavy demanding) and work status (working or not working) directly prior to reporting sick, i.e. before the onset of work disability, were collected, since findings in the international literature indicate that both items might be prognostic factors for the duration of sickness absence and work disability [20–22, 24]. Furthermore, behavioural determinants were included in the baseline measurement. Pain coping was assessed with the Pain Coping Inventory Scale (PCI) [28]. 
Behavioural determinants for RTW consisted of the workers’ attitude, social influence, and self-efficacy with regard to RTW, and the workers’ intention to RTW despite symptoms due to MSD. The Attitude, Social Influence and self-Efficacy (ASE) determinants were assessed using a questionnaire developed earlier by Van Oostrom and colleagues [29].\nAll covariates were measured at baseline. Type of previous work (light or heavy demanding) and work status (working or not working) directly prior to reporting sick, i.e. before the onset of work disability, were collected, since findings in the international literature indicate that both items might be prognostic factors for the duration of sickness absence and work disability [20–22, 24]. Furthermore, behavioural determinants were included in the baseline measurement. Pain coping was assessed with the Pain Coping Inventory Scale (PCI) [28]. Behavioural determinants for RTW consisted of the workers’ attitude, social influence, and self-efficacy with regard to RTW, and the workers’ intention to RTW despite symptoms due to MSD. The Attitude, Social Influence and self-Efficacy (ASE) determinants were assessed using a questionnaire developed earlier by Van Oostrom and colleagues [29].", "Prior to randomization the baseline measurement was performed. Follow-up measurements took place at 3, 6, 9 and 12 months after baseline. Data regarding RTW were obtained from both the SSA database, including the workers’ file, and the self-report questionnaires at 12-months follow-up. Data on sickness benefit were collected from the SSA database. Data regarding applied occupational health care interventions were obtained from the SSA database and the medical file of the worker at the SSA.", "The primary outcome measure in this study was sustainable first RTW, which was defined as the duration in calendar days from the day of randomization until first sustainable return-to-work, i.e. return-to-work in any type of paid work or work resumption with ongoing benefits for at least 28 consecutive (calendar) days.", "Secondary outcome measures in the study were duration of sickness benefit, pain intensity, and functional status. Duration of sickness benefit was measured as a separate outcome measure because, contrary to regular employees, for sick-listed temporary agency workers and sick-listed unemployed workers recovery of health and/or functional limitations with ending of the sickness benefit does not necessarily coincide with actual RTW. First sustainable ending of sickness benefit was defined as the duration in calendar days from the day of randomization until ending of sickness benefit for at least 28 days. Recurrence of sickness absence with an accepted sickness benefit claim within 28 days after ending of the previous sickness benefit was considered as belonging to the preceding sickness benefit period, on condition that it was due to the same (or related) MSD. The total number of days of sickness benefit during the entire 12-months follow-up period was also calculated. Musculoskeletal pain intensity was measured using the Von Korff questionnaire [25]. Functional status, i.e. perceived functional impairments in daily life, and general health were assessed with the Dutch translation of the SF-36 [26, 27].", "All covariates were measured at baseline. Type of previous work (light or heavy demanding) and work status (working or not working) directly prior to reporting sick, i.e. 
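To make the primary outcome definition concrete, the following minimal sketch (not part of the original study; the per-day work-status list and the function name are assumptions for illustration) computes, for one worker, the number of calendar days from randomization until the start of the first work spell of at least 28 consecutive days.

    # Sketch, not the authors' code: time to sustainable first RTW from a
    # hypothetical per-worker calendar of daily work status. 'worked' is assumed
    # to hold one boolean per calendar day since randomization, True meaning the
    # worker was in any type of paid work, or in work resumption with ongoing
    # benefits, on that day.
    def days_until_sustainable_rtw(worked, min_consecutive=28):
        """Days from randomization until the first work spell lasting at least
        `min_consecutive` consecutive calendar days; None if no such spell occurs."""
        run_start, run_length = None, 0
        for day, at_work in enumerate(worked):
            if at_work:
                if run_length == 0:
                    run_start = day
                run_length += 1
                if run_length >= min_consecutive:
                    return run_start
            else:
                run_length = 0
        return None

    # Example: 10 days off, a 5-day attempt, 3 days off, then 30 consecutive days at work.
    status = [False] * 10 + [True] * 5 + [False] * 3 + [True] * 30
    print(days_until_sustainable_rtw(status))  # -> 18

Workers for whom no such spell occurred within follow-up would then be treated as censored at the end of the 12-months follow-up in the survival analyses.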
Statistical Analyses
All statistical analyses were carried out at the workers' level and according to the intention-to-treat principle. To determine whether randomisation was performed successfully, descriptive statistics were used to compare the baseline measurements of both groups. The results of the intention-to-treat analyses were compared to per-protocol analyses to assess the presence of bias due to protocol deviations.
The Kaplan–Meier method was used to describe the duration until sustainable RTW in both groups. The Cox proportional hazards model was used to estimate hazard ratios (HR) for sustainable RTW and the corresponding 95% confidence intervals. First, unadjusted Cox regression analysis was carried out and, if necessary, adjusted Cox regression analysis was performed to adjust for prognostic dissimilarities at baseline, i.e. a confounder was added to the model when the regression coefficient changed by 10% or more. To account for clustering of participants within insurance physicians and within the couples of labour experts and RTW coordinators, the shared-frailty procedure was used [30]. Linear mixed models were used to assess differences in pain intensity, functional status and perceived health, i.e. the interaction between treatment group and measurement time (baseline, 3, 6 and 12 months), adjusted for baseline differences and taking into account clustering on the level of the insurance physician. Stata version 11.0 was used to test for clustering in the Cox regression analysis. All other analyses were performed with SPSS version 15.0. For all analyses a P value of 0.05 (two-tailed) was considered statistically significant.
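The survival analyses described above could be approximated along the following lines. This is a sketch in Python with the lifelines package rather than the Stata/SPSS syntax actually used; the file name and column names are assumptions, and, because lifelines offers no shared-frailty model, clustering of workers within insurance physicians is approximated here with cluster-robust standard errors.

    # Sketch with assumed file and column names (not the authors' Stata/SPSS code).
    import pandas as pd
    from lifelines import KaplanMeierFitter, CoxPHFitter
    from lifelines.statistics import logrank_test

    df = pd.read_csv("rtw_trial.csv")  # hypothetical file: one row per worker

    # Kaplan-Meier curve and median time to sustainable RTW per arm
    kmf = KaplanMeierFitter()
    for arm, sub in df.groupby("group"):           # group: 1 = program, 0 = usual care
        kmf.fit(sub["days_to_rtw"], event_observed=sub["rtw_event"], label=str(arm))
        print(arm, kmf.median_survival_time_)

    # Log-rank test between the arms
    a, b = df[df["group"] == 1], df[df["group"] == 0]
    print(logrank_test(a["days_to_rtw"], b["days_to_rtw"],
                       a["rtw_event"], b["rtw_event"]).p_value)

    # Cox model adjusted for the baseline confounders retained by the 10%
    # change-in-estimate rule (work schedule, intention to RTW despite symptoms),
    # with cluster-robust standard errors by insurance physician.
    cols = ["days_to_rtw", "rtw_event", "group", "work_schedule",
            "intention_rtw", "physician_id"]
    cph = CoxPHFitter()
    cph.fit(df[cols], duration_col="days_to_rtw", event_col="rtw_event",
            cluster_col="physician_id")
    cph.print_summary()

The 10% change-in-estimate rule would be applied by refitting the model with and without each candidate confounder and comparing the treatment coefficients.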
Results

Recruitment of Participants
Recruitment of participants took place between March 2007 and September 2008. The returned screening questionnaires resulted in 784 potentially eligible workers who were interested in participation. After telephone contact 191 workers refused participation and 327 workers did not meet the inclusion criteria, resulting in 266 workers for whom intake meetings were planned. During the intake meeting 103 workers were not included, for several reasons (see Fig. 1). Finally, 163 workers who met all inclusion criteria were enrolled in the study and randomised to the participatory RTW program (n = 79) or usual care (n = 84). An overview of the recruitment flow is presented in Fig. 1.

Fig. 1  Flow of the workers in the study

Loss to Follow-Up
Data about RTW and sickness benefit were available for all workers for the whole 12-months follow-up period. The RTW data were collected from the SSA database, including the workers' file, and the self-report questionnaires. Data about sickness benefit were collected from the SSA database. For the self-reported secondary outcomes, complete follow-up data were available for 116 participants (71.2%).

Baseline Characteristics
Table 2 presents a summary of the measured baseline characteristics of the participants in the participatory RTW program group and the usual care group. For most of the baseline characteristics (i.e. worker-related, pain-related, health-related, work-related, and behavioural determinants) there were no or only minor (non-significant) differences between the two groups. All participants were fully work disabled at the time of enrolment. Approximately half of the workers in both groups (52.4% in the usual care group and 54.4% in the intervention group) worked prior to reporting sick, i.e. the onset of work disability. For the participants who did not work before reporting sick, the median duration between the end of the last job and the first day of reporting sick was 13.0 months (interquartile range (IQR) 6.3–45.3 months) in the usual care group and 13.5 months (IQR 6.0–43.5 months) in the participatory RTW program group. However, despite randomisation, prognostic dissimilarities were present at baseline, with worse physical role functioning (P = 0.052), a more regular work schedule in the last job (P = 0.031), and less intention to RTW despite symptoms (P = 0.024) in controls. Where necessary, analyses were adjusted for these dissimilarities.

Table 2  Baseline characteristics of the workers without employment contract, sick-listed due to musculoskeletal disorders (N = 163)

Characteristic | Intervention group (N = 79) | Control group (N = 84)
Age (mean ± SD) | 44.0 ± 10.7 | 45.6 ± 9.0
Gender (% male) | 57.0 | 63.1
Level of education (% low) | 57.0 | 60.7
Pain intensity (1–10 score) (mean ± SD)
  Back pain | 7.1 ± 2.0 | 6.8 ± 1.9
  Neck pain | 7.1 ± 1.7 | 6.7 ± 2.0
  Other pain | 6.5 ± 1.8 | 6.3 ± 1.9
Functional status (0–100 score) (mean ± SD)
  Physical functioning | 46.0 ± 22.1 | 51.4 ± 21.3
  Social functioning | 49.4 ± 25.4 | 51.2 ± 27.5
Perceived health (0–100 score) (mean ± SD) | 56.3 ± 21.8 | 60.0 ± 20.3
Type of worker (%)
  Temporary agency worker | 51.9 | 52.4
  Unemployed worker | 48.1 | 47.6
Type of last work (% physically and/or mentally demanding) | 74.7 | 75.0
Work schedule (% day work) | 58.2 | 78.3
Worker's expectation regarding RTW at baseline (mean ± SD) | 2.22 ± 1.15 | 2.14 ± 1.12
Intention to RTW despite symptoms (1–5) (mean ± SD) | 3.46 ± 1.10 | 3.05 ± 1.19

Compliance
In the usual care group 7 workers did not receive usual care, as they reported full recovery of health complaints with subsequent ending of sickness benefit shortly after randomisation. Likewise, 7 workers in the participatory RTW program group did not receive the allocated intervention, i.e. the participatory RTW program was not followed, for several reasons (see Fig. 1). The remaining 72 workers in the intervention group all had the first consult with the insurance physician. One worker reported full recovery of health with ending of sickness benefit before the meeting with the RTW coordinator. For 23 workers the insurance physician established full work ability with ending of sickness benefit, i.e. claim closure, during the first consult. In case of claim closure without actual RTW, these workers were, in accordance with the usual care policy of the SSA, not referred to the RTW coordinator for making a RTW action plan. In addition, following the protocol, 10 workers were not referred to the RTW coordinator as the insurance physician established absence of work ability on medical grounds for at least 3 months during the first consult.
The remaining 38 workers in the intervention group had the meetings with the labour expert and the RTW coordinator, resulting in a consensus-based RTW plan. Referral to a vocational rehabilitation agency for finding a suitable temporary workplace took place for 30 workers. Placement in a temporary (therapeutic) workplace was successfully achieved for 22 workers. In addition, four workers found a suitable workplace on their own initiative. The median duration of working in a temporary (therapeutic) workplace was 90 days (IQR 41–147 days). During the 12-months follow-up, 12 of the 22 workers with therapeutic work resumption were offered an employment contract.

Usual Care

Consults with the Occupational Health Care Professionals
In the participatory RTW program group 21 workers (total of 23 consults) had a consult with the case manager of the SSA, compared to 41 workers (total of 49 consults) in the usual care group. However, the workers in the participatory RTW program group had more consults with the insurance physician (n = 70; 157 consults) and the labour expert (n = 36; 55 consults) of the SSA, compared to the usual care group, where 60 workers (total of 107 consults) reported a consult with the insurance physician and 19 workers (total of 26 consults) reported a consult with the labour expert.

Received Occupational Health Care Interventions
In the participatory RTW program group 25 workers received a usual care intervention (total of 28 interventions) during follow-up, with a median duration of 6.4 months (IQR 3.0–12.4 months), compared to 30 workers in the usual care group (total of 32 interventions) with a median duration of 7.4 months (IQR 2.9–11.2 months). Three workers in the participatory RTW program group and two workers in the usual care group received two occupational health care interventions.
The received usual care interventions consisted of: (1) offering (short-term) education/training (participatory RTW program group (PWP) n = 11, usual care group (UC) n = 5); (2) referral to a vocational rehabilitation agency (PWP n = 4, UC n = 9); (3) referral to an employment agency for employment-finding (PWP n = 5, UC n = 4); (4) personal coaching (PWP n = 3, UC n = 3); (5) interview training (including writing a job application letter) (PWP n = 2, UC n = 4); (6) placement in a temporary workplace (on trial) (PWP n = 1, UC n = 0); (7) searching for a sheltered workplace (PWP n = 1, UC n = 3); (8) on-the-job training (PWP n = 1, UC n = 1); (9) referral to a graded activity program (PWP n = 0, UC n = 2); and (10) type of intervention unknown (PWP n = 0, UC n = 1).

Return-to-Work
The median time until sustainable first RTW was 161 days (IQR 88–365 days) in the participatory RTW program group and 299 days (IQR 71–365 days) in the usual care group (log rank test; P = 0.12). The median total number of days at work during follow-up was 128 days (IQR 0–247 days) in the participatory RTW program group and 46 days (IQR 0–246 days) in the usual care group. In Fig. 2 the Kaplan–Meier curves for time until sustainable first RTW are presented for both groups. The crude Cox regression analysis showed a violation of the proportional hazards assumption, with crossing of the survival curves at approximately 90 days of follow-up. Therefore, a time-dependent covariate (T > 90 days) was added to the Cox proportional hazards model (P = 0.011). To adjust for significant confounding, the baseline variables 'work schedule in last work' and 'intention to RTW despite symptoms' were included in the model (Table 2). The resulting adjusted HR (T ≤ 90 days) was 0.76 (95% CI 0.42–1.37; P = 0.36), and the adjusted HR (T > 90 days) was 2.24 (95% CI 1.28–3.94; P = 0.005). The per-protocol analysis showed an adjusted HR (T ≤ 90 days) of 0.93 (95% CI 0.49–1.87; P = 0.83), and an adjusted HR (T > 90 days) of 2.25 (95% CI 1.28–3.98; P = 0.005). In addition, the per-protocol analysis showed a median time until sustainable RTW of 157 days (IQR 89–365 days) in the participatory RTW program group and 330 days (IQR 87–365 days) in the usual care group (log rank test; P = 0.029). Significant clustering on the level of the insurance physicians and on the level of the couples of labour experts and RTW coordinators was not found in the analyses (Table 3).

Fig. 2  Kaplan–Meier curves for sustainable first return-to-work during the 12-months follow-up for the participatory return-to-work program group and the usual care group

Table 3  Differences in return-to-work (RTW) between the participatory RTW program group and the usual care group

Adjusted model^a | Regression coefficient | SE | P value | HR | 95% CI lower | 95% CI upper
Intervention
  T ≤ 90 days | −0.29 | 0.30 | 0.34 | 0.75 | 0.42 | 1.34
  T > 90 days | 0.78 | 0.28 | 0.01 | 2.19 | 1.26 | 3.80
Adjusted for work schedule
  T ≤ 90 days | −0.23 | 0.30 | 0.44 | 0.79 | 0.44 | 1.43
  T > 90 days | 0.84 | 0.29 | <0.005 | 2.32 | 1.32 | 4.10
Adjusted for intention to RTW despite symptoms
  T ≤ 90 days | −0.33 | 0.30 | 0.27 | 0.72 | 0.40 | 1.29
  T > 90 days | 0.74 | 0.28 | 0.01 | 2.10 | 1.20 | 3.66
Adjusted for work schedule + intention to RTW despite symptoms
  T ≤ 90 days | −0.27 | 0.30 | 0.36 | 0.76 | 0.42 | 1.37
  T > 90 days | 0.81 | 0.29 | 0.01 | 2.24 | 1.28 | 3.94
Clustering on level insurance physician
  T ≤ 90 days | −0.30 | 0.28 | 0.42 | 0.74 | 0.35 | 1.55
  T > 90 days | 0.74 | 0.47 | <0.005 | 2.10 | 1.33 | 3.22
Clustering on level labour expert + RTW coordinator
  T ≤ 90 days | −0.25 | 0.35 | 0.47 | 0.78 | 0.40 | 1.54
  T > 90 days | 0.73 | 0.26 | 0.01 | 2.10 | 1.24 | 3.48

Cox proportional hazards models from the adjusted Cox regression analyses. Regression coefficients, standard errors (SE), P values, hazard ratios (HR) and 95% confidence intervals (CI) are presented.
^a Results of the crude Cox regression model are not presented, due to violation of the proportional hazards assumption, i.e. crossing of the survival curves at approximately 90 days of follow-up.
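The step-function treatment effect reported above (separate hazard ratios before and after day 90) can be obtained by splitting each worker's follow-up into episodes at day 90. The sketch below illustrates that idea with lifelines' CoxTimeVaryingFitter and is not the authors' Stata code; it reuses the hypothetical one-row-per-worker data frame from the earlier sketch, and the baseline confounders are omitted here but would be added as extra columns in the long data frame.

    # Sketch (hypothetical column names): a piecewise treatment effect obtained by
    # splitting each worker's follow-up at day 90.
    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    def split_at(df, cut=90):
        """Expand one row per worker into (start, stop] episodes split at `cut` days."""
        rows = []
        for _, r in df.iterrows():
            if r["days_to_rtw"] <= cut:
                rows.append(dict(id=r["id"], start=0, stop=r["days_to_rtw"],
                                 event=r["rtw_event"],
                                 group_early=r["group"], group_late=0))
            else:
                rows.append(dict(id=r["id"], start=0, stop=cut, event=0,
                                 group_early=r["group"], group_late=0))
                rows.append(dict(id=r["id"], start=cut, stop=r["days_to_rtw"],
                                 event=r["rtw_event"],
                                 group_early=0, group_late=r["group"]))
        return pd.DataFrame(rows)

    long_df = split_at(df)  # df: the one-row-per-worker frame from the earlier sketch
    ctv = CoxTimeVaryingFitter()
    ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
    ctv.print_summary()     # separate hazard ratios for the program before and after day 90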
Secondary Outcome Measures

Duration of Sickness Benefit
The median claim duration until first sustainable ending of sickness benefit was 160 days (IQR 39–365 days) in the participatory RTW program group and 91 days (IQR 33–344 days) in the usual care group (Mann–Whitney U test; P = 0.14). The per-protocol analysis results differed slightly and showed a median duration of 168 days (IQR 45–365 days) and 109 days (IQR 35–365 days), respectively (Mann–Whitney U test; P = 0.18).

Attitude, Social Influence, and Self-Efficacy (ASE) Determinants
Table 4 presents the results of the mixed model analyses for the Attitude, Social influence, and self-Efficacy determinants, accounting for possible clustering on the level of the insurance physicians. After 3 months of follow-up both groups experienced more social influence to RTW, but developed a less positive attitude towards RTW compared to baseline.
However, no statistically significant differences were found between the two groups.

Table 4  Results of the mixed model analyses

Outcome | Group | Baseline | 3 months | 6 months^a | 12 months^a | Group*Time P value
Functional status (0–100 score) (RAND-36)
  Bodily pain | PWP | 27.7 (15.9) | 48.8 (20.2) | 47.4 (21.4) | 51.4 (23.9) | 0.22
              | UC  | 29.4 (15.4) | 45.7 (23.0) | 50.0 (23.0) | 53.9 (25.4) |
  Physical functioning | PWP | 46.0 (22.1) | 57.3 (23.4) | 57.6 (23.2) | 59.4 (23.6) | 0.73
              | UC  | 51.4 (21.3) | 59.8 (25.2) | 64.5 (24.2) | 66.5 (26.2) |
  Physical role functioning | PWP | 10.4 (20.6) | 29.7 (38.8) | 31.6 (41.1) | 46.8 (44.0) | 0.13
              | UC  | 5.1 (13.3)  | 24.7 (36.7) | 38.3 (41.7) | 45.4 (43.6) |
  Social functioning | PWP | 49.4 (25.4) | 62.9 (24.0) | 66.6 (25.1) | 65.9 (26.0) | 0.72
              | UC  | 51.2 (27.5) | 58.9 (26.1) | 66.1 (25.3) | 63.7 (28.8) |
Health status (0–100 score) (RAND-36)
  Perceived present health | PWP | 56.3 (21.8) | 52.4 (20.1) | 56.6 (22.1) | 58.5 (21.5) | 0.70
              | UC  | 60.0 (20.3) | 55.0 (23.3) | 55.9 (24.2) | 59.0 (24.1) |
  Change in health | PWP | 31.4 (25.6) | 41.8 (26.0) | 48.8 (28.3) | 58.1 (29.6) | 0.17
              | UC  | 38.1 (25.3) | 38.7 (30.3) | 50.8 (28.4) | 56.3 (31.3) |
Pain intensity (1–10 score) (Von Korff)
  Back pain | PWP | 7.2 (1.9) | 6.0 (2.2) | 5.6 (2.3) | 5.4 (2.6) | 0.92
              | UC  | 6.8 (2.0) | 5.6 (2.5) | 5.0 (2.8) | 4.9 (2.8) |
  Neck pain | PWP | 7.5 (1.5) | 5.3 (2.3) | 4.4 (3.0) | 4.4 (3.2) | 0.52
              | UC  | 6.5 (1.9) | 5.3 (2.9) | 4.0 (3.2) | 4.2 (3.1) |
  Other pain | PWP | 6.7 (1.8) | 6.0 (2.2) | 5.0 (2.7) | 4.9 (3.0) | 0.89
              | UC  | 6.2 (1.9) | 5.7 (2.3) | 5.1 (2.5) | 4.7 (3.0) |
Attitude, social influence, self-efficacy determinants
  Attitude to RTW (−5 to 12) | PWP | 5.13 (4.27) | 3.41 (5.21) | – | – | 0.18
              | UC  | 4.87 (3.96) | 1.92 (5.81) | – | – |
  Social influence to RTW (−26 to 18) | PWP | −5.16 (8.72) | −2.13 (9.26) | – | – | 0.16
              | UC  | −3.39 (8.89) | −2.59 (9.20) | – | – |
  Self-efficacy to RTW (−4 to 4) | PWP | 0.42 (2.43) | 0.44 (2.12) | – | – | 0.79
              | UC  | 0.06 (2.26) | 0.19 (2.33) | – | – |
  Intention to RTW despite symptoms (1–5) | PWP | 3.46 (1.10) | 3.65 (1.24) | – | – | 0.32
              | UC  | 3.05 (1.19) | 3.53 (1.39) | – | – |
Response rate questionnaires (%) | | 100 | 85.3 | 77.9 | 81.6 |

Differences in health-related outcomes, and the attitude, social influence, and self-efficacy determinants between the participatory RTW program group (PWP) and the usual care group (UC), accounting for possible clustering on the level of the insurance physician. Unless indicated otherwise, the observed mean and standard deviation are presented.
^a Attitude, social influence, and self-efficacy determinants were only measured at baseline and 3 months.

Health-Related Outcomes
Table 4 also presents the results on the effectiveness of the participatory RTW program on health-related outcomes, accounting for possible clustering on the level of the insurance physicians. No statistically significant differences were found between the improvements in functional status, pain intensity, and perceived health in the participatory RTW program group and the usual care group.
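The repeated-measures analyses summarised in Table 4 could be specified roughly as follows. This is a sketch in Python/statsmodels rather than the authors' SPSS syntax; the long-format file and column names are hypothetical, and including the baseline measurement as the reference level of the time factor is only one reasonable reading of "adjusted for baseline differences".

    # Sketch (hypothetical long-format data): a linear mixed model for one repeated
    # secondary outcome (here back pain intensity) with a group-by-time interaction,
    # a random intercept per insurance physician and a variance component for
    # workers nested within physicians.
    import pandas as pd
    import statsmodels.formula.api as smf

    long = pd.read_csv("rtw_followup_long.csv")  # columns assumed: worker_id,
                                                 # physician_id, group, months, back_pain

    model = smf.mixedlm("back_pain ~ group * C(months)",
                        data=long,
                        groups=long["physician_id"],
                        re_formula="1",
                        vc_formula={"worker": "0 + C(worker_id)"})
    result = model.fit()
    print(result.summary())  # the group:C(months) terms test whether the course over
                             # follow-up differs between the program and usual care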
Discussion

Main Findings
This paper presents the effects of a newly developed participatory RTW program for temporary agency workers and unemployed workers, sick-listed due to MSD, compared to usual care. The main findings of this study are a non-significant trend towards delayed RTW in the intervention group in the first 90 days, followed by a significant advantage in RTW rate after 90 days (hazard ratio of 2.24). In addition, the median duration until sustainable first RTW was 161 days in the participatory RTW program group, compared to 299 days in the usual care group. The initial delay in RTW in the intervention group may be due to the more intensive involvement after enrolment in the new participatory RTW program. A similar finding has been described by others [31, 32]. The considerable gain in RTW rate after 90 days is mostly due to significantly more and earlier work resumption in the intervention group from 90 days onward until the end of the 12-months follow-up. Finally, no significant differences were found with regard to the measured secondary outcomes.

Strengths of This Study
A strength of this study is the focus on a vulnerable group within the working population, namely sick-listed workers without an employment contract or with a flexible labour arrangement. These workers are burdened with a 'labour market handicap', with the absence of a workplace/employer to return to when sick-listed being a major RTW obstacle [15, 16]. Therefore, creating an actual RTW perspective by offering the possibility of a temporary (therapeutic) workplace is also an important strength of this study.
Furthermore, our primary outcome measure, i.e. sustainable first RTW, should be considered a strength of this study. First RTW is commonly used as an outcome measure for RTW interventions, but does not capture possible recurrences of sickness absence shortly after work resumption. By defining sustainable RTW as RTW for at least 28 days without relapse, the results in this study can be considered more robust [33].

Limitations of This Study
A limitation of this pragmatic RCT is the absence of blinding of both the sick-listed workers and the occupational health care professionals of the SSA to the allocation outcome. Unfortunately, due to the nature of the participatory intervention program, blinding was not possible.
A second limitation is the duration of the follow-up period. The study population is characterised by a greater distance to the labour market and an increased risk of long-term work disability. To assess whether the beneficial effect of the participatory RTW program remains after the 12-months follow-up, an additional measurement after 2 years, with RTW data collected from the SSA database, could provide more insight and possibly increase the validity of the results found in this study.
A third limitation is the generalization of the results of this study to other contexts, e.g. other countries.
The participatory RTW program was specifically tailored for our study population and the Dutch context in which it was implemented [15]. Application of this intervention in a different setting should be preceded by tailoring of the program, taking into account the specific characteristics of the population as well as the social, political and cultural context in which the program will be implemented and used.

Comparison with Other Studies
Findings in the international literature show that workplace-based interventions are effective in reducing sickness absence among workers with musculoskeletal disorders [34]. More specifically, participatory RTW interventions including a workplace component have been shown to be effective on work-related outcomes for sick-listed employees with sub-acute low back pain, i.e. in the early stage of sickness absence [17, 35], as well as for chronic back pain patients in an advanced phase of work disability [18]. However, while the above-mentioned studies focused on regular employees, i.e. those with relatively permanent employment relationships, this study shows that a participatory RTW intervention with the possibility of a suitable (therapeutic) workplace is also effective on RTW for a more vulnerable group within the working population, i.e. sick-listed workers who no longer have an employer/workplace to return to. In addition, our study findings show that the participatory RTW program can also be applied for workers with all types of MSD, not merely for workers with low back pain.
The absence of beneficial or adverse effects on secondary health-related outcomes in this study is in line with recent findings of Lambeek and colleagues [18], and supports the work disability paradigm, i.e. recovery of health is not a necessary precondition for work resumption. The discrepancy between work-related outcomes and health outcomes has also been reported by others [34]. A possible explanation is the focus of the intervention on reducing barriers for RTW rather than on symptomatic recovery from MSDs.
In occupational health care research there is an increasing awareness of the importance of behavioural determinants in the field of RTW research and intervention development [36–38]. Work attitude, social support, self-efficacy, and intention to RTW have all been associated with time to RTW. In our study no statistically significant differences were found between the groups for changes in Attitude, Social support, and self-Efficacy (ASE) determinants. However, the ASE determinants were only measured at baseline and after 3 months of follow-up. In view of the significant gain in more rapid RTW after 90 days, it is possible that potentially favourable effects on behavioural determinants were present at a later stage during follow-up, but were not measured. Nevertheless, in line with the findings of van Oostrom and colleagues [38], the variable 'intention to RTW despite symptoms' proved to be a significant confounder for sustainable first RTW in the Cox regression analysis.

Implications for Practice
With markedly earlier work resumption (intention-to-treat: a median of 138 days earlier; per-protocol: a median of 173 days earlier) during one year of follow-up, the newly developed participatory RTW program seems to be a promising intervention to enhance work resumption and reduce work disability among temporary agency workers and unemployed workers, sick-listed due to MSD. However, although not statistically significant, the new RTW program had a negative impact on sickness benefit duration (intention-to-treat: a median of 69 days longer; per-protocol: a median of 59 days longer). This was mainly because in most cases the therapeutic workplaces were offered with ongoing sickness benefit, i.e. the total number of days working in these temporary workplaces represented 95% of the difference in total benefit duration between the groups. However, in our opinion, the gains in higher RTW rate and earlier RTW may counterbalance this added cost burden by enhancing social participation of vulnerable workers [39], and by generating an economic benefit in terms of productivity gain. Cost-effectiveness and cost-benefit analyses will be conducted to evaluate whether the effects indeed counterbalance the costs. Moreover, these results will be essential to convince policy makers that implementation of the new RTW program is a worthwhile and necessary investment to achieve a sustainable contribution of vulnerable workers to the labour force. This approach is supported by a recent study showing that application of work interventions and less strict compensation policies for eligibility for long-term benefits contributed to sustainable RTW [40]. Nevertheless, due to the relatively short follow-up in this study, our findings should be confirmed in future studies with a longer follow-up. Another possibility could be offering subsidised (temporary) workplaces. This kind of arrangement already exists in the Netherlands for young disabled workers [41]. One could argue that such temporary arrangements could be extended to other groups of vulnerable workers within the framework of an active labour market policy.
Furthermore, in our study the RTW coordinator played a key role in guaranteeing (perceived) safety and equality among all stakeholders and active involvement during the making of the consensus-based RTW plan. A systematic review also showed that an important key element in RTW interventions is the active involvement of an independent RTW coordinator [42]. For successful implementation we therefore recommend the use of a RTW coordinator competency profile, in line with the recommendation of Pransky and colleagues [43], who stated that identification of a core set of essential RTW coordinator competencies is essential.
Introduction
Sickness absence and work disability are a common and substantial public health problem with major economic consequences worldwide [1, 2]. Given that long-term sickness absence contributes largely to the total amount of annual work disability costs in Western countries [1], the development of effective return-to-work (RTW) interventions is considered an important public health (research) challenge [3].
To date, most RTW intervention research is aimed at sick-listed (established regular) employees, i.e. workers with relatively permanent employment relationships. In contrast, development of effective RTW interventions for sick-listed workers without an employment contract is lagging [4, 5]. However, in view of the growing international trend towards labour market flexibility [6], development of RTW interventions specifically aimed at sick-listed workers without an employment contract and sick-listed workers with a flexible labour arrangement, e.g. temporary agency workers, is of crucial importance. These workers represent a vulnerable group within the working population. Various studies show a poorer health status and an increased risk for (long-term) work disability among these workers, compared to regular employees [7–12]. In addition, they are burdened with a greater distance to the labour market [11, 13, 14]. When sick-listed, these workers have in most cases no workplace/employer to return to [15, 16]. Hence, tailor-made RTW interventions with the presence of a workplace for (therapeutic) RTW could be an important factor in the recovery and (vocational) rehabilitation process [15]. Therefore, a participatory RTW program was developed, based on a successful RTW intervention for regular employees sick-listed due to low back pain [17, 18]. This newly developed RTW program comprises a stepwise communication process to identify and solve obstacles for RTW, resulting in a consensus-based plan to facilitate (therapeutic) RTW. The three main stakeholders in this intervention are: the sick-listed worker, the labour expert representing the Social Security Agency (SSA) who guides the worker with regard to vocational rehabilitation, and an independent RTW coordinator. The role of the RTW coordinator is to stimulate a high degree of involvement of both the sick-listed worker and the labour expert, and to reach consensus about the RTW plan. To offer a workplace for (therapeutic) RTW, a vocational rehabilitation agency was contracted to find a suitable (therapeutic) workplace matching the formulated RTW plan.
The aim of this study was to assess the effectiveness of the new participatory RTW program compared to usual care for unemployed workers and temporary agency workers, sick-listed due to musculoskeletal disorders (MSD). The primary outcome measure was time to sustainable first RTW. Duration of sickness benefit was a secondary outcome measure.

Methods

Study Design and Setting
The study is a randomized controlled trial carried out in collaboration with five front offices of the Dutch National Social Security Agency (SSA) and four large Dutch commercially operating vocational rehabilitation agencies (Olympia, Adeux, Capability, and Randstad Rentrée) in the eastern part of the Netherlands. The Medical Ethics Committee of the VU University Medical Centre (Amsterdam, the Netherlands) approved the study design, the protocols and procedures, and informed consent.
The design of the study has been described in detail elsewhere [19].

Study Population and Recruitment
Between March 2007 and September 2008 all temporary agency workers and unemployed workers who were sick-listed between one and two weeks due to MSD and lived in the eastern part of the Netherlands received a letter with a screening questionnaire from the insurance physician of the SSA, on behalf of the researchers. The workers who returned the screening questionnaire, indicating that they were still sick-listed and interested in participation, were contacted by the researchers by telephone to give additional information about the content of the study and to check eligibility. Temporary agency workers and unemployed workers sick-listed between 2 and 8 weeks with MSD as the main health complaint for their sickness benefit claim were included. The main exclusion criteria were: (1) being sick-listed for more than 8 weeks; (2) not being able to complete questionnaires written in the Dutch language; (3) having a conflict with the Social Security Agency regarding a sickness benefit claim or a long-term disability claim; (4) having a legal conflict, e.g. an ongoing injury compensation claim; and (5) having had an episode of sickness absence due to MSD within 1 month before the current sickness benefit claim.
The insurance physician of the SSA was responsible for the identification of severe co-morbidity among the included workers, i.e. having a terminal disease, a serious psychiatric disorder, or a serious cardiovascular disease. These participants remained in the intervention group, but were excluded from the participatory RTW program.

Randomization and Blinding
Before randomization, to prevent unequal distribution of relevant prognostic baseline characteristics, the sick-listed workers were pre-stratified based on two important prognostic factors, namely type of worker [20–22], i.e. temporary agency worker or unemployed worker, and degree of mental or physical work demands (light or heavy) in the last job held before the current sickness benefit claim [23, 24]. Next, block randomization (using blocks of four allocations) was applied to ensure equal group sizes within each stratum. A separate block randomization table was generated for each of the five participating SSA front offices. Allocation to the intervention group or the usual care group was performed after informed consent and completion of the baseline questionnaire.
The participants and occupational health care professionals were not blinded to the allocation result. Data regarding work resumption and sickness benefit claim duration were collected from the SSA database. Data entry of the self-reported data was performed by a research assistant using a unique research code for each participant, to ensure that analyses of the data by the researcher were blinded.
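A pre-stratified, blocked allocation scheme of this kind can be generated as in the sketch below. This is an illustration only, not the trial's actual allocation software; the office labels, stratum names, block counts and seeds are assumptions.

    # Sketch: pre-stratified block randomization with blocks of four, one list per
    # stratum (worker type x work demands) within each of the five SSA front offices.
    import itertools, random

    ARMS = ["intervention", "usual care"]

    def block_randomization_list(n_blocks, block_size=4, seed=None):
        """Allocation sequence built from randomly permuted blocks of `block_size`."""
        rng = random.Random(seed)
        base_block = ARMS * (block_size // len(ARMS))  # two of each arm per block of 4
        sequence = []
        for _ in range(n_blocks):
            block = base_block[:]
            rng.shuffle(block)
            sequence.extend(block)
        return sequence

    offices = ["office_%d" % i for i in range(1, 6)]
    worker_types = ["temporary agency worker", "unemployed worker"]
    demands = ["light demands", "heavy demands"]
    allocation = {s: block_randomization_list(n_blocks=10, seed=i)
                  for i, s in enumerate(itertools.product(offices, worker_types, demands))}
    print(allocation[("office_1", "temporary agency worker", "light demands")][:8])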
Interventions

Usual Care
In the Netherlands, workers who are sick-listed and who no longer have an employment contract, i.e. no employer/workplace to return to, are entitled to supportive income and occupational health care by the SSA during their sickness benefit period. Vocational rehabilitation is carried out by a team of occupational health care professionals from the SSA, consisting of an insurance physician, a labour expert, and a case manager. The insurance physician of the SSA guides the worker according to the guidelines for occupational health care of the Netherlands Society of Occupational Medicine. He/she advises about recovery, e.g. health promotion and RTW options, and, if necessary, can advise on and refer to work disability oriented treatment/guidance, such as graded physical therapy or work-related psychological help. The labour expert is responsible for vocational rehabilitation support. Based on a personal examination of the work abilities of the worker (including the problem analysis performed by the insurance physician) and expert knowledge of the (regional) labour market, the labour expert advises the worker with respect to return-to-work options. When the chance of work resumption in regular work without additional vocational rehabilitation support is viewed as slim, interventions such as referral to a vocational rehabilitation agency, personal coaching or short-term education/training are offered to the worker. The case manager of the SSA monitors the vocational rehabilitation process and regularly keeps in contact with the worker to evaluate progress. In case of an impeded (vocational) recovery/rehabilitation process, the case manager consults with, and if necessary refers to, the insurance physician or the labour expert to identify and tackle the cause of this stagnation. This can lead to alterations in the vocational rehabilitation guidance, for instance offering more intensive personal guidance or referral to a graded activity program. The occupational health care by the SSA ends when the sickness benefit ends, i.e. when full recovery of health is present and/or when full recovery of work ability is established by the insurance physician. Both can occur without actual RTW of the worker.

Participatory RTW Program
The intervention group received usual care. This did not differ from the vocational rehabilitation guidance offered to the workers in the usual care group, i.e. the roles of the occupational health care professionals described above. In addition, however, these sick-listed workers were referred by their insurance physician to a RTW coordinator for the new participatory RTW program. The aim of this new program was to make a consensus-based RTW plan. In this study the RTW coordinator was an employee of the SSA, in most cases with a labour expert background, with experience in process guidance, with sufficient knowledge and experience regarding (vocational) rehabilitation, and with no involvement in the usual care guidance of the sick-listed worker, to guarantee independence. All RTW coordinators received training prior to the start of the study.
The newly developed RTW program consisted of consecutive steps, starting with a combined consult with the insurance physician and the labour expert of the SSA. Next, two structured meetings took place, between the sick-listed worker and the RTW coordinator, and between the labour expert of the SSA and the RTW coordinator, respectively. In the meeting with the sick-listed worker the RTW coordinator used a structured interview to identify and prioritise obstacles for RTW. The ranking of identified obstacles for RTW was based on frequency (how often do they occur?) and severity (how large is the perceived impact on functioning in daily life and/or work?). The meeting between the RTW coordinator and the labour expert was carried out in a comparable manner and resulted in a selection of prioritised obstacles for RTW from the perspective of the labour expert. Next, the RTW coordinator, the sick-listed worker, and the labour expert brainstormed about solutions to address the prioritised obstacles. The proposed solutions were judged on the basis of availability, feasibility and ability to solve the barrier. The final step resulted in the making of a consensus-based RTW plan describing the prioritised obstacles for RTW, the consensus-based solutions, the person(s) responsible for implementation of each selected solution, and a time-path for when it should be carried out. Furthermore, to create a possibility for therapeutic work resumption, a commercially operating vocational rehabilitation agency could be contracted to find a temporary (therapeutic) workplace matching the formulated RTW plan and taking into account the worker's (functional) limitations. Six weeks after the brainstorm session the RTW coordinator contacted the sick-listed worker and the labour expert by telephone to evaluate the actual implementation of the solutions, including the progress regarding placement in temporary (therapeutic) work.
The content of the structured meetings with the RTW coordinator is presented in more detail in Table 1; the entire participatory RTW program has been described in detail elsewhere [15].

Table 1 Content of the structured meeting with the RTW coordinator

Introduction
- Check whether the worker, the insurance physician and the labour expert agree with following the participatory program.
- Explain the independent role of the RTW coordinator.
- Explain that the main goal is to make a consensus-based RTW plan.

Inventory of obstacles for RTW
Meeting with the worker:
- The starting point is the inventory of obstacles for RTW given by the insurance physician to the worker as a home assignment after the first consult.
- Identify (perceived) work-related and non-work-related obstacles for RTW from the perspective of the worker, using the following categories as a framework: personal factors, social factors, physical environment demands (e.g. ergonomic obstacles at the workplace), dynamic action demands (e.g. repetitive work), static posture demands, work experience, commuting, and remaining factors (e.g. financial problems).
- Rank the identified obstacles based on frequency and perceived severity.
Meeting with the labour expert:
- Identify (perceived) work-related and non-work-related obstacles for RTW from the perspective of the labour expert.
- Rank the identified obstacles based on frequency and perceived severity.

Brainstorm session with the worker and the labour expert
- The three top-ranked obstacles for RTW from both the worker and the labour expert are the starting point.
- Think of solutions for all six prioritised obstacles, e.g. reduction of physical workload, graded return-to-work, improving the commuting distance, short-term education, help with debt repayment.
- Stimulate active involvement of the worker and the labour expert.
- Choose solutions based on availability, feasibility and ability to solve the obstacle.

Making of the consensus-based RTW plan
- Summarise the prioritised obstacles for RTW, the chosen (consensus-based) solutions, if possible a concrete work(place) profile, the person(s) responsible for implementation of the solution(s), and a time path.
- Underline the importance of the worker's own initiative in achieving RTW.
- Send the report to the worker, the labour expert and the insurance physician.
- If the option of finding a suitable temporary (therapeutic) workplace is chosen, contact the case manager of the contracted vocational rehabilitation agency.
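The prioritisation step in Table 1 (ranking obstacles on frequency and severity and carrying the three top-ranked obstacles from each perspective into the brainstorm session) can be made concrete with a small sketch. The sketch below is purely illustrative: the Obstacle fields, the 1 to 5 scoring scale and the example obstacles are assumptions and are not taken from the program protocol.

```python
# Illustrative sketch (not part of the published protocol): one way to rank
# obstacles for RTW by frequency and perceived severity and to select the
# three top-ranked obstacles from each perspective, as described in Table 1.
# The Obstacle fields and the 1-5 scoring scale are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Obstacle:
    description: str
    frequency: int   # hypothetical scale: 1 (rare) to 5 (constant)
    severity: int    # hypothetical scale: 1 (minor impact) to 5 (major impact)

def top_three(obstacles):
    """Rank obstacles by frequency, then severity, and keep the top three."""
    ranked = sorted(obstacles, key=lambda o: (o.frequency, o.severity), reverse=True)
    return ranked[:3]

worker_view = [
    Obstacle("Lifting heavy loads", 5, 4),
    Obstacle("Long commuting distance", 3, 2),
    Obstacle("Financial problems", 2, 4),
    Obstacle("Repetitive work", 4, 3),
]
labour_expert_view = [
    Obstacle("No suitable vacancies in previous occupation", 4, 5),
    Obstacle("Outdated qualifications", 3, 3),
    Obstacle("Static working postures", 4, 2),
]

# The six prioritised obstacles that feed into the brainstorm session.
prioritised = top_three(worker_view) + top_three(labour_expert_view)
for obstacle in prioritised:
    print(obstacle.description)
```

In the actual program this ranking is of course done in conversation rather than in code; the sketch only fixes the selection logic in one place.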
Outcome Measures

Data Collection

Prior to randomization the baseline measurement was performed. Follow-up measurements took place at 3, 6, 9 and 12 months after baseline. Data regarding RTW were obtained both from the SSA database, including the worker's file, and from the self-report questionnaires at the 12-month follow-up. Data on sickness benefit were collected from the SSA database. Data regarding the occupational health care interventions applied were obtained from the SSA database and the worker's medical file at the SSA.
Primary Outcome Measure

The primary outcome measure was sustainable first RTW, defined as the duration in calendar days from the day of randomization until first sustainable return to work, i.e. return to work in any type of paid work, or work resumption with ongoing benefits, lasting at least 28 consecutive calendar days.

Secondary Outcome Measures

Secondary outcome measures were duration of sickness benefit, pain intensity, and functional status. Duration of sickness benefit was measured as a separate outcome because, in contrast to regular employees, for sick-listed temporary agency workers and unemployed workers recovery of health and/or of functional ability, and thus the ending of the sickness benefit, does not necessarily coincide with actual RTW. First sustainable ending of sickness benefit was defined as the duration in calendar days from the day of randomization until the sickness benefit ended for at least 28 days. A recurrence of sickness absence with an accepted sickness benefit claim within 28 days after the end of the previous sickness benefit was counted as part of the preceding sickness benefit period, provided it was due to the same (or a related) MSD. The total number of days of sickness benefit during the entire 12-month follow-up period was also calculated. Musculoskeletal pain intensity was measured using the Von Korff questionnaire [25]. Functional status, i.e. perceived functional impairments in daily life, and general health were assessed with the Dutch translation of the SF-36 [26, 27].
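For readers who want to operationalise the 28-day sustainability rule used in both outcome definitions above, a minimal sketch is given below. It is purely illustrative: the episode data structure, the function name and the example dates are assumptions and are not part of the study protocol.

```python
# Illustrative sketch only: one way to operationalise "sustainable" RTW as
# defined above (first work resumption lasting at least 28 consecutive
# calendar days, counted from randomization). The episode data structure and
# the function name are hypothetical and not taken from the study protocol.
from datetime import date

SUSTAINABILITY_THRESHOLD_DAYS = 28

def days_to_sustainable_rtw(randomization_date, work_episodes):
    """Return calendar days from randomization until the start of the first
    work episode lasting >= 28 consecutive days, or None if none occurred.

    work_episodes: list of (start_date, end_date) tuples, end_date inclusive.
    """
    for start, end in sorted(work_episodes):
        episode_length = (end - start).days + 1
        if start >= randomization_date and episode_length >= SUSTAINABILITY_THRESHOLD_DAYS:
            return (start - randomization_date).days
    return None

# Hypothetical worker: a short failed work attempt followed by sustainable RTW.
episodes = [
    (date(2008, 3, 10), date(2008, 3, 20)),   # 11 days: not sustainable
    (date(2008, 4, 14), date(2008, 6, 30)),   # 78 days: sustainable
]
print(days_to_sustainable_rtw(date(2008, 2, 1), episodes))  # -> 73
```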
Prognostic Measures

All covariates were measured at baseline. The type of previous work (light or heavy demands) and the work status (working or not working) directly prior to reporting sick, i.e. before the onset of work disability, were recorded, since findings in the international literature indicate that both may be prognostic factors for the duration of sickness absence and work disability [20–22, 24]. Furthermore, behavioural determinants were included in the baseline measurement. Pain coping was assessed with the Pain Coping Inventory (PCI) [28]. Behavioural determinants for RTW consisted of the worker's attitude, social influence and self-efficacy with regard to RTW, and the worker's intention to RTW despite symptoms due to MSD. These Attitude, Social influence and self-Efficacy (ASE) determinants were assessed using a questionnaire developed earlier by Van Oostrom and colleagues [29].
Statistical Analyses

All statistical analyses were carried out at the level of the individual worker and according to the intention-to-treat principle. To determine whether randomisation had been performed successfully, descriptive statistics were used to compare the baseline measurements of the two groups. The results of the intention-to-treat analyses were compared with per-protocol analyses to assess the presence of bias due to protocol deviations.

The Kaplan–Meier method was used to describe the duration until sustainable RTW in both groups. The Cox proportional hazards model was used to estimate hazard ratios (HR) for sustainable RTW and the corresponding 95% confidence intervals. First, unadjusted Cox regression analysis was carried out; if necessary, adjusted Cox regression analysis was performed to correct for prognostic dissimilarities at baseline, i.e. a covariate was retained as a confounder when adding it changed the regression coefficient by 10% or more. To account for clustering of participants within insurance physicians and within the couples of labour experts and RTW coordinators, the shared-frailty procedure was used [30].
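As a rough illustration of this analysis strategy, the sketch below reproduces the Kaplan–Meier description, the unadjusted and adjusted Cox models and the 10% change-in-estimate rule using the Python lifelines package. It is not the original analysis: the study used Stata and SPSS, the column and variable names here are hypothetical, and cluster-robust standard errors stand in for the shared-frailty term, which lifelines does not provide.

```python
# Illustrative sketch only: a Kaplan-Meier description and (un)adjusted Cox
# models of the kind described above, written with the Python lifelines
# package rather than the Stata/SPSS software actually used in the study.
# Column names are hypothetical, and cluster-robust standard errors
# (cluster_col) replace the shared-frailty term, which lifelines lacks.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("rtw_followup.csv")  # hypothetical analysis file
# Expected (hypothetical) columns:
#   days_to_rtw   : days from randomization to sustainable RTW or censoring
#   rtw_event     : 1 = sustainable RTW observed, 0 = censored at 12 months
#   intervention  : 1 = participatory RTW program, 0 = usual care
#   physician_id  : insurance physician, used as the clustering unit

# Kaplan-Meier description of time to sustainable RTW per group.
km = KaplanMeierFitter()
for group, label in [(1, "intervention"), (0, "usual care")]:
    sub = df[df["intervention"] == group]
    km.fit(sub["days_to_rtw"], event_observed=sub["rtw_event"], label=label)
    print(label, "median days to sustainable RTW:", km.median_survival_time_)

# Unadjusted Cox model: intervention effect only.
unadjusted = CoxPHFitter().fit(df[["days_to_rtw", "rtw_event", "intervention"]],
                               duration_col="days_to_rtw", event_col="rtw_event")
crude_coef = unadjusted.params_["intervention"]

# Change-in-estimate rule: keep a baseline covariate as a confounder if adding
# it changes the intervention coefficient by 10% or more. The candidate
# covariates below are hypothetical, numerically coded baseline variables.
confounders = []
for covariate in ["temp_agency_worker", "heavy_work_demands", "age"]:
    cols = ["days_to_rtw", "rtw_event", "intervention", covariate]
    adjusted = CoxPHFitter().fit(df[cols], duration_col="days_to_rtw",
                                 event_col="rtw_event")
    change = abs(adjusted.params_["intervention"] - crude_coef) / abs(crude_coef)
    if change >= 0.10:
        confounders.append(covariate)

# Final model with retained confounders and cluster-robust variance by physician.
final_cols = ["days_to_rtw", "rtw_event", "intervention", "physician_id"] + confounders
final = CoxPHFitter().fit(df[final_cols], duration_col="days_to_rtw",
                          event_col="rtw_event", cluster_col="physician_id")
print("Hazard ratio for the intervention:", final.hazard_ratios_["intervention"])
```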
Linear mixed models were used to assess differences in pain intensity, functional status and perceived health, i.e. the interaction between treatment group and measurement time (baseline, 3, 6 and 12 months), adjusted for baseline differences and taking into account clustering at the level of the insurance physician. Stata version 11.0 was used to test for clustering in the Cox regression analysis; all other analyses were performed with SPSS version 15.0. For all analyses a P value of 0.05 (two-tailed) was considered statistically significant.

The study is a randomized controlled trial carried out in collaboration with five front offices of the Dutch National Social Security Agency (SSA) and four large Dutch commercially operating vocational rehabilitation agencies (Olympia, Adeux, Capability, and Randstad Rentrée) in the eastern part of the Netherlands. The Medical Ethics Committee of the VU University Medical Centre (Amsterdam, the Netherlands) approved the study design, the protocols and procedures, and the informed consent procedure. The design of the study has been described in detail elsewhere [19].

Between March 2007 and September 2008, all temporary agency workers and unemployed workers who had been sick-listed for one to two weeks due to MSD and lived in the eastern part of the Netherlands received a letter with a screening questionnaire from the insurance physician of the SSA, on behalf of the researchers. Workers who returned the screening questionnaire, indicating that they were still sick-listed and interested in participation, were contacted by the researchers by telephone to receive additional information about the study and to check eligibility. Temporary agency workers and unemployed workers sick-listed for 2 to 8 weeks with MSD as the main health complaint for their sickness benefit claim were included. The main exclusion criteria were: (1) being sick-listed for more than 8 weeks; (2) not being able to complete questionnaires written in the Dutch language; (3) having a conflict with the Social Security Agency regarding a sickness benefit claim or a long-term disability claim; (4) having a legal conflict, e.g. an ongoing injury compensation claim; and (5) having had an episode of sickness absence due to MSD within 1 month before the current sickness benefit claim.

The insurance physician of the SSA was responsible for identifying severe co-morbidity among the included workers, i.e. a terminal disease, a serious psychiatric disorder, or a serious cardiovascular disease. These participants remained in the intervention group but were excluded from the participatory RTW program.

Before randomization, to prevent an unequal distribution of relevant prognostic baseline characteristics, the sick-listed workers were pre-stratified on two important prognostic factors, namely type of worker [20–22], i.e. temporary agency worker or unemployed worker, and degree of mental or physical work demands (light or heavy) in the last job held before the current sickness benefit claim [23, 24]. Next, block randomization (using blocks of four allocations) was applied to ensure equal group sizes within each stratum. A separate block randomization table was generated for each of the five participating SSA front offices.
Allocation to the intervention group or the usual care group was performed after informed consent had been given and the baseline questionnaire had been completed.

The participants and the occupational health care professionals were not blinded to the allocation result. Data regarding work resumption and sickness benefit claim duration were collected from the SSA database. Data entry of the self-reported data was performed by a research assistant using a unique research code for each participant, so that the analyses of the data by the researcher were blinded.
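A minimal sketch of this allocation procedure is given below: permuted blocks of four allocations, with one allocation table per front office and stratum (worker type by work demands). It is an illustration only; the function names, block counts and random seed are assumptions and do not represent the randomization software actually used in the trial.

```python
# Illustrative sketch only: pre-stratified block randomization with blocks of
# four allocations, as described above. Strata are defined by worker type and
# work demands, and a separate allocation table is generated per SSA front
# office. Names, block counts and the random seed are illustrative assumptions.
import itertools
import random

def make_allocation_table(n_blocks, rng):
    """Return a randomized allocation list built from permuted blocks of four
    (two intervention and two usual-care allocations per block)."""
    table = []
    for _ in range(n_blocks):
        block = ["intervention", "intervention", "usual care", "usual care"]
        rng.shuffle(block)
        table.extend(block)
    return table

rng = random.Random(2007)  # fixed seed so the example is reproducible
front_offices = ["office_1", "office_2", "office_3", "office_4", "office_5"]
worker_types = ["temporary agency worker", "unemployed worker"]
work_demands = ["light", "heavy"]

# One allocation table per front office and stratum (worker type x demands).
tables = {
    (office, worker, demands): make_allocation_table(n_blocks=10, rng=rng)
    for office, worker, demands in itertools.product(front_offices, worker_types, work_demands)
}

# Allocate the next eligible worker within a given stratum at a given office.
next_index = {key: 0 for key in tables}

def allocate(office, worker_type, demands):
    key = (office, worker_type, demands)
    arm = tables[key][next_index[key]]
    next_index[key] += 1
    return arm

print(allocate("office_3", "unemployed worker", "heavy"))
```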
Data Collection

The baseline measurement was performed prior to randomization. Follow-up measurements took place at 3, 6, 9 and 12 months after baseline. Data regarding RTW were obtained from both the SSA database, including the workers' files, and the self-report questionnaires at 12-month follow-up. Data on sickness benefit were collected from the SSA database. Data regarding applied occupational health care interventions were obtained from the SSA database and the medical file of the worker at the SSA.

Primary Outcome Measure

The primary outcome measure in this study was sustainable first RTW, defined as the duration in calendar days from the day of randomization until first sustainable return-to-work, i.e. return-to-work in any type of paid work, or work resumption with ongoing benefits, for at least 28 consecutive (calendar) days.

Secondary Outcome Measures

Secondary outcome measures in the study were duration of sickness benefit, pain intensity, and functional status. Duration of sickness benefit was measured as a separate outcome because, in contrast to regular employees, for sick-listed temporary agency workers and unemployed workers the ending of the sickness benefit (on recovery of health and/or recovery of work ability) does not necessarily coincide with actual RTW. First sustainable ending of sickness benefit was defined as the duration in calendar days from the day of randomization until ending of sickness benefit for at least 28 days. Recurrence of sickness absence with an accepted sickness benefit claim within 28 days after ending of the previous sickness benefit was considered as belonging to the preceding sickness benefit period, on condition that it was due to the same (or a related) MSD. The total number of days of sickness benefit during the entire 12-month follow-up period was also calculated. Musculoskeletal pain intensity was measured using the Von Korff questionnaire [25]. Functional status, i.e. perceived functional impairments in daily life, and general health were assessed with the Dutch translation of the SF-36 [26, 27].
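Both 28-day rules are simple to operationalise. The following minimal Python sketch is illustrative only (it is not the trial's actual scoring code; the spell representation and function names are assumptions): work and benefit episodes are given as (start_day, end_day) pairs counted from the day of randomization, with end_day exclusive.

```python
# Illustrative sketch of the two 28-day rules; not the trial's scoring code.

def first_sustainable_rtw(work_spells, min_days=28):
    """Day of the first work spell lasting at least `min_days` consecutive calendar
    days (the primary outcome), or None if censored. Spells are (start_day, end_day)."""
    for start, end in sorted(work_spells):
        if end - start >= min_days:
            return start
    return None

def merge_benefit_spells(benefit_spells, gap_days=28):
    """Merge sickness-benefit spells whose gap is within `gap_days`: a recurrence
    within 28 days (due to the same or a related MSD) counts as the same period."""
    merged = []
    for start, end in sorted(benefit_spells):
        if merged and start - merged[-1][1] <= gap_days:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Example: a failed 15-day work attempt, then a lasting return on day 120.
print(first_sustainable_rtw([(35, 50), (120, 365)]))   # -> 120
# Example: a recurrence 20 days after the first benefit period ended is merged.
print(merge_benefit_spells([(0, 40), (60, 90)]))       # -> [(0, 90)]
```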
Prognostic Measures

All covariates were measured at baseline. Type of previous work (light or heavy demands) and work status (working or not working) directly prior to reporting sick, i.e. before the onset of work disability, were collected, since findings in the international literature indicate that both items might be prognostic factors for the duration of sickness absence and work disability [20–22, 24]. Furthermore, behavioural determinants were included in the baseline measurement. Pain coping was assessed with the Pain Coping Inventory Scale (PCI) [28]. Behavioural determinants for RTW consisted of the workers' attitude, social influence, and self-efficacy with regard to RTW, and the workers' intention to RTW despite symptoms due to MSD. The Attitude, Social influence and self-Efficacy (ASE) determinants were assessed using a questionnaire developed earlier by Van Oostrom and colleagues [29].

Statistical Analysis

All statistical analyses were carried out at the level of the individual worker and according to the intention-to-treat principle. To determine whether randomisation had been performed successfully, descriptive statistics were used to compare the baseline measurements of both groups. The results of the intention-to-treat analyses were compared with per-protocol analyses to assess the presence of bias due to protocol deviations.

The Kaplan–Meier method was used to describe the duration until sustainable RTW in both groups. The Cox proportional hazards model was used to estimate hazard ratios (HR) for sustainable RTW and the corresponding 95% confidence intervals. First, unadjusted Cox regression analysis was carried out and, if necessary, adjusted Cox regression analysis was performed to adjust for prognostic dissimilarities at baseline, i.e. a confounder was added to the model when the regression coefficient changed by 10% or more. To account for clustering of participants within insurance physicians and within pairs of labour experts and RTW coordinators, the shared-frailty procedure was used [30]. Linear mixed models were used to assess differences in pain intensity, functional status and perceived health, i.e. the interaction between treatment group and measurement time (baseline, 3, 6 and 12 months), adjusted for baseline differences and taking into account clustering on the level of the insurance physician. Stata version 11.0 was used to test for clustering in the Cox regression analysis; all other analyses were performed with SPSS version 15.0. For all analyses, P values below 0.05 (two-tailed) were considered statistically significant.
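As a rough illustration of this survival-analysis strategy, the sketch below sets up the Kaplan–Meier curves, the log-rank test and an adjusted Cox model in Python with the lifelines package. It is not the code used in the trial (Stata 11.0 and SPSS 15.0 were used); the data file, the column names, and the use of robust standard errors clustered on the insurance physician (in place of a shared-frailty term) are assumptions made for the example.

```python
# Minimal sketch, assuming one row per worker with columns:
#   time         - days from randomization to first sustainable RTW or censoring (max 365)
#   rtw          - 1 if sustainable RTW occurred, 0 if censored
#   group        - 1 = participatory RTW program, 0 = usual care
#   work_schedule, intention_rtw - baseline confounders
#   physician_id - insurance physician (cluster)
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("rtw_trial.csv")  # hypothetical file name

# Kaplan-Meier estimate of time to sustainable first RTW per group
kmf = KaplanMeierFitter()
for label, grp in df.groupby("group"):
    kmf.fit(grp["time"], event_observed=grp["rtw"], label=f"group={label}")
    print(label, kmf.median_survival_time_)

# Log-rank test comparing the two survival curves
lr = logrank_test(df.loc[df.group == 1, "time"], df.loc[df.group == 0, "time"],
                  df.loc[df.group == 1, "rtw"], df.loc[df.group == 0, "rtw"])
print(lr.p_value)

# Adjusted Cox model; cluster_col gives physician-clustered robust standard errors,
# an approximation of (not identical to) the shared-frailty correction used in the paper.
cph = CoxPHFitter()
cph.fit(df[["time", "rtw", "group", "work_schedule", "intention_rtw", "physician_id"]],
        duration_col="time", event_col="rtw", cluster_col="physician_id")
cph.print_summary()  # hazard ratios are exp(coef)
```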
Recruitment of Participants

Recruitment of participants took place between March 2007 and September 2008. The returned screening questionnaires resulted in 784 potentially eligible workers who were interested in participation. After telephone contact, 191 workers refused participation and 327 workers did not meet the inclusion criteria, resulting in 266 workers for whom intake meetings were planned. During the intake meeting, 103 workers were not included for several reasons (see Fig. 1). Finally, 163 workers who met all inclusion criteria were enrolled in the study and randomised to the participatory RTW program (n = 79) or usual care (n = 84). An overview of the recruitment flow is presented in Fig. 1.

Fig. 1 Flow of the workers in the study

Loss to Follow-Up

Data about RTW and sickness benefit were available for all workers for the whole 12-month follow-up period. The RTW data were collected from the SSA database, including the workers' files, and the self-report questionnaires. Data about sickness benefit were collected from the SSA database. For the self-reported secondary outcomes, complete follow-up data were available for 116 participants (71.2%).

Baseline Characteristics

Table 2 presents a summary of the measured baseline characteristics of the participants in the participatory RTW program group and the usual care group. For most of the baseline characteristics (i.e. worker-related, pain-related, health-related, work-related, and behavioural determinants) there were no or only minor (non-significant) differences between the two groups. All participants were fully work disabled at the time of enrolment. Approximately half of the workers in both groups (52.4% in the usual care group and 54.4% in the intervention group) had worked prior to reporting sick, i.e. the onset of work disability. For the participants who did not work before reporting sick, the median duration between the end of the last job and the first day of reporting sick was 13.0 months (interquartile range (IQR) 6.3–45.3 months) in the usual care group and 13.5 months (IQR 6.0–43.5 months) in the participatory RTW program group. However, despite randomisation, prognostic dissimilarities were present at baseline: controls had worse physical role functioning (P = 0.052), more often a regular (day) work schedule in their last job (P = 0.031), and less intention to RTW despite symptoms (P = 0.024). Where necessary, analyses were adjusted for these dissimilarities.

Table 2 Baseline characteristics of the workers without an employment contract, sick-listed due to musculoskeletal disorders (N = 163)

                                                             Intervention group (N = 79)   Control group (N = 84)
Age (mean ± SD)                                              44.0 ± 10.7                   45.6 ± 9.0
Gender (% male)                                              57.0                          63.1
Level of education (% low)                                   57.0                          60.7
Pain intensity (1–10 score) (mean ± SD)
  Back pain                                                  7.1 ± 2.0                     6.8 ± 1.9
  Neck pain                                                  7.1 ± 1.7                     6.7 ± 2.0
  Other pain                                                 6.5 ± 1.8                     6.3 ± 1.9
Functional status (0–100 score) (mean ± SD)
  Physical functioning                                       46.0 ± 22.1                   51.4 ± 21.3
  Social functioning                                         49.4 ± 25.4                   51.2 ± 27.5
Perceived health (0–100 score) (mean ± SD)                   56.3 ± 21.8                   60.0 ± 20.3
Type of worker (%)
  Temporary agency worker                                    51.9                          52.4
  Unemployed worker                                          48.1                          47.6
Type of last work (% physically and/or mentally demanding)   74.7                          75.0
Work schedule (% day work)                                   58.2                          78.3
Worker's expectation regarding RTW at baseline (mean ± SD)   2.22 ± 1.15                   2.14 ± 1.12
Intention to RTW despite symptoms (1–5) (mean ± SD)          3.46 ± 1.10                   3.05 ± 1.19

Compliance

In the usual care group, 7 workers did not receive usual care as they reported full recovery of health complaints with subsequent ending of sickness benefit shortly after randomisation. Likewise, 7 workers in the participatory RTW program group did not receive the allocated intervention, i.e. the participatory RTW program was not followed, for several reasons (see Fig. 1). The remaining 72 workers in the intervention group all had the first consult with the insurance physician. One worker reported full recovery of health with ending of sickness benefit before the meeting with the RTW coordinator. For 23 workers, the insurance physician established full work ability with ending of sickness benefit, i.e. claim closure, during the first consult. In case of claim closure without actual RTW, these workers were, in accordance with the usual care policy of the SSA, not referred to the RTW coordinator for making an RTW action plan. In addition, following the protocol, 10 workers were not referred to the RTW coordinator because the insurance physician established absence of work ability on medical grounds for at least 3 months during the first consult. The remaining 38 workers in the intervention group had the meetings with the labour expert and the RTW coordinator, resulting in a consensus-based RTW plan. Referral to a vocational rehabilitation agency for finding a suitable temporary workplace took place for 30 workers. Placement in a temporary (therapeutic) workplace was successfully achieved for 22 workers. In addition, four workers found a suitable workplace on their own initiative. The median duration of working in a temporary (therapeutic) workplace was 90 days (IQR 41–147 days). During the 12-month follow-up, 12 of the 22 workers with therapeutic work resumption were offered an employment contract.

Usual Care

Consults with the Occupational Health Care Professionals

In the participatory RTW program group, 21 workers (23 consults in total) had a consult with the case manager of the SSA, compared with 41 workers (49 consults in total) in the usual care group. However, the workers in the participatory RTW program group had more consults with the insurance physician (n = 70; 157 consults) and the labour expert (n = 36; 55 consults) of the SSA than the usual care group, in which 60 workers (107 consults in total) reported a consult with the insurance physician and 19 workers (26 consults in total) reported a consult with the labour expert.

Received Occupational Health Care Interventions

In the participatory RTW program group, 25 workers received a usual care intervention (28 interventions in total) during follow-up, with a median duration of 6.4 months (IQR 3.0–12.4 months), compared with 30 workers in the usual care group (32 interventions in total), with a median duration of 7.4 months (IQR 2.9–11.2 months). Three workers in the participatory RTW program group and two workers in the usual care group received two occupational health care interventions. The received usual care interventions consisted of: (1) offering (short-term) education/training (participatory RTW program group (PWP) n = 11, usual care group (UC) n = 5); (2) referral to a vocational rehabilitation agency (PWP n = 4, UC n = 9); (3) referral to an employment agency for employment-finding (PWP n = 5, UC n = 4); (4) personal coaching (PWP n = 3, UC n = 3); (5) interview training (including writing a job application letter) (PWP n = 2, UC n = 4); (6) placement in a temporary workplace (on trial) (PWP n = 1, UC n = 0); (7) searching for a sheltered workplace (PWP n = 1, UC n = 3); (8) on-the-job training (PWP n = 1, UC n = 1); (9) referral to a graded activity program (PWP n = 0, UC n = 2); and (10) type of intervention unknown (PWP n = 0, UC n = 1).

Return-to-Work

The median time until sustainable first RTW was 161 days (IQR 88–365 days) in the participatory RTW program group and 299 days (IQR 71–365 days) in the usual care group (log-rank test; P = 0.12). The median total number of days at work during follow-up was 128 days (IQR 0–247 days) in the participatory RTW program group and 46 days (IQR 0–246 days) in the usual care group. The Kaplan–Meier curves for time until sustainable first RTW in both groups are presented in Fig. 2.
The crude Cox regression analysis showed a violation of the proportional hazards assumption, with crossing of the survival curves at approximately 90 days of follow-up. Therefore, a time-dependent covariate (T > 90 days) was added to the Cox proportional hazards model (P = 0.011). To adjust for significant confounding, the baseline variables 'work schedule in last work' and 'intention to RTW despite symptoms' were included in the model (Table 2). The resulting adjusted HR (T ≤ 90 days) was 0.76 (95% CI 0.42–1.37; P = 0.36), and the adjusted HR (T > 90 days) was 2.24 (95% CI 1.28–3.94; P = 0.005). The per-protocol analysis showed an adjusted HR (T ≤ 90 days) of 0.93 (95% CI 0.49–1.87; P = 0.83) and an adjusted HR (T > 90 days) of 2.25 (95% CI 1.28–3.98; P = 0.005). In addition, the per-protocol analysis showed a median time until sustainable RTW of 157 days (IQR 89–365 days) in the participatory RTW program group and 330 days (IQR 87–365 days) in the usual care group (log-rank test; P = 0.029). Significant clustering on the level of the insurance physicians or on the level of the pairs of labour experts and RTW coordinators was not found in the analyses (Table 3).

Fig. 2 Kaplan–Meier curves for sustainable first return-to-work during the 12-month follow-up for the participatory return-to-work program group and the usual care group

Table 3 Differences in return-to-work (RTW) between the participatory RTW program group and the usual care group

Adjusted model(a)                                                Coef.    SE      P value   HR     95% CI
Intervention
  T ≤ 90 days                                                    −0.29    0.30    0.34      0.75   0.42–1.34
  T > 90 days                                                     0.78    0.28    0.01      2.19   1.26–3.80
Adjusted for work schedule
  T ≤ 90 days                                                    −0.23    0.30    0.44      0.79   0.44–1.43
  T > 90 days                                                     0.84    0.29    <0.005    2.32   1.32–4.10
Adjusted for intention to RTW despite symptoms
  T ≤ 90 days                                                    −0.33    0.30    0.27      0.72   0.40–1.29
  T > 90 days                                                     0.74    0.28    0.01      2.10   1.20–3.66
Adjusted for work schedule + intention to RTW despite symptoms
  T ≤ 90 days                                                    −0.27    0.30    0.36      0.76   0.42–1.37
  T > 90 days                                                     0.81    0.29    0.01      2.24   1.28–3.94
Clustering on level of insurance physician
  T ≤ 90 days                                                    −0.30    0.28    0.42      0.74   0.35–1.55
  T > 90 days                                                     0.74    0.47    <0.005    2.10   1.33–3.22
Clustering on level of labour expert + RTW coordinator
  T ≤ 90 days                                                    −0.25    0.35    0.47      0.78   0.40–1.54
  T > 90 days                                                     0.73    0.26    0.01      2.10   1.24–3.48

Cox proportional hazards models from the adjusted Cox regression analyses. Regression coefficients (Coef.), standard errors (SE), P values, hazard ratios (HR) and 95% confidence intervals (CI) are presented.
(a) Results of the crude Cox regression model are not presented due to violation of the proportional hazards assumption, i.e. crossing of the survival curves at approximately 90 days of follow-up.
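In model form, the time-dependent covariate simply allows the intervention to have a different hazard ratio before and after day 90. A sketch of the corresponding extended Cox model, in our own notation rather than the authors', is:

```latex
% Extended Cox model with a piecewise treatment effect split at t = 90 days.
% x = 1 for the participatory RTW program, 0 for usual care; z = baseline
% adjustment covariates (work schedule, intention to RTW despite symptoms).
\begin{equation*}
  h(t \mid x, z) = h_0(t)\,
  \exp\!\bigl(\beta_1\, x\,\mathbf{1}[t \le 90] + \beta_2\, x\,\mathbf{1}[t > 90] + \gamma^{\top} z\bigr)
\end{equation*}
```

Under this reading, the two reported effects are exp(β1) ≈ 0.76 for T ≤ 90 days and exp(β2) ≈ 2.24 for T > 90 days in the fully adjusted model of Table 3.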
Secondary Outcome Measures

Duration of Sickness Benefit

The median claim duration until first sustainable ending of sickness benefit was 160 days (IQR 39–365 days) in the participatory RTW program group and 91 days (IQR 33–344 days) in the usual care group (Mann–Whitney U test; P = 0.14). The per-protocol results differed slightly, with a median duration of 168 days (IQR 45–365 days) and 109 days (IQR 35–365 days), respectively (Mann–Whitney U test; P = 0.18).

Attitude, Social Influence, and Self-Efficacy (ASE) Determinants

Table 4 presents the results of the mixed model analyses for the Attitude, Social influence, and self-Efficacy determinants, accounting for possible clustering on the level of the insurance physicians. After 3 months of follow-up, both groups experienced more social influence to RTW but had developed a less positive attitude towards RTW compared with baseline. However, no statistically significant differences were found between the two groups.

Table 4 Results of the mixed model analyses

                                           Group  Baseline       3 months       6 months      12 months     Group*Time P value
Functional status (0–100 score) (RAND-36)
  Bodily pain                              PWP    27.7 (15.9)    48.8 (20.2)    47.4 (21.4)   51.4 (23.9)   0.22
                                           UC     29.4 (15.4)    45.7 (23.0)    50.0 (23.0)   53.9 (25.4)
  Physical functioning                     PWP    46.0 (22.1)    57.3 (23.4)    57.6 (23.2)   59.4 (23.6)   0.73
                                           UC     51.4 (21.3)    59.8 (25.2)    64.5 (24.2)   66.5 (26.2)
  Physical role functioning                PWP    10.4 (20.6)    29.7 (38.8)    31.6 (41.1)   46.8 (44.0)   0.13
                                           UC     5.1 (13.3)     24.7 (36.7)    38.3 (41.7)   45.4 (43.6)
  Social functioning                       PWP    49.4 (25.4)    62.9 (24.0)    66.6 (25.1)   65.9 (26.0)   0.72
                                           UC     51.2 (27.5)    58.9 (26.1)    66.1 (25.3)   63.7 (28.8)
Health status (0–100 score) (RAND-36)
  Perceived present health                 PWP    56.3 (21.8)    52.4 (20.1)    56.6 (22.1)   58.5 (21.5)   0.70
                                           UC     60.0 (20.3)    55.0 (23.3)    55.9 (24.2)   59.0 (24.1)
  Change in health                         PWP    31.4 (25.6)    41.8 (26.0)    48.8 (28.3)   58.1 (29.6)   0.17
                                           UC     38.1 (25.3)    38.7 (30.3)    50.8 (28.4)   56.3 (31.3)
Pain intensity (1–10 score) (Von Korff)
  Back pain                                PWP    7.2 (1.9)      6.0 (2.2)      5.6 (2.3)     5.4 (2.6)     0.92
                                           UC     6.8 (2.0)      5.6 (2.5)      5.0 (2.8)     4.9 (2.8)
  Neck pain                                PWP    7.5 (1.5)      5.3 (2.3)      4.4 (3.0)     4.4 (3.2)     0.52
                                           UC     6.5 (1.9)      5.3 (2.9)      4.0 (3.2)     4.2 (3.1)
  Other pain                               PWP    6.7 (1.8)      6.0 (2.2)      5.0 (2.7)     4.9 (3.0)     0.89
                                           UC     6.2 (1.9)      5.7 (2.3)      5.1 (2.5)     4.7 (3.0)
Attitude, social influence, self-efficacy determinants(a)
  Attitude to RTW (−5 to 12)               PWP    5.13 (4.27)    3.41 (5.21)    –             –             0.18
                                           UC     4.87 (3.96)    1.92 (5.81)    –             –
  Social influence to RTW (−26 to 18)      PWP    −5.16 (8.72)   −2.13 (9.26)   –             –             0.16
                                           UC     −3.39 (8.89)   −2.59 (9.20)   –             –
  Self-efficacy to RTW (−4 to 4)           PWP    0.42 (2.43)    0.44 (2.12)    –             –             0.79
                                           UC     0.06 (2.26)    0.19 (2.33)    –             –
  Intention to RTW despite symptoms (1–5)  PWP    3.46 (1.10)    3.65 (1.24)    –             –             0.32
                                           UC     3.05 (1.19)    3.53 (1.39)    –             –
Response rate questionnaires (%)                  100            85.3           77.9          81.6

Differences in health-related outcomes and the attitude, social influence, and self-efficacy determinants between the participatory RTW program group (PWP) and the usual care group (UC), accounting for possible clustering on the level of the insurance physician. Unless indicated otherwise, observed means and standard deviations are presented.
(a) Attitude, social influence, and self-efficacy determinants were only measured at baseline and 3 months.

Health-Related Outcomes

Table 4 also presents the results on the effectiveness of the participatory RTW program on health-related outcomes, accounting for possible clustering on the level of the insurance physicians. No statistically significant differences were found between the improvements in functional status, pain intensity, and perceived health in the participatory RTW program group and the usual care group.
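To make the mixed-model specification behind Table 4 concrete (a group-by-time interaction, adjustment for the baseline score, and a random intercept for the insurance physician), a minimal Python sketch using statsmodels is given below. The long-format data frame and its column names are assumptions for the example; the trial's own analyses were run in SPSS 15.0.

```python
# Minimal sketch, assuming a long-format table with one row per worker per follow-up
# measurement (3, 6 and 12 months) and columns:
#   score        - outcome at that measurement (e.g. RAND-36 bodily pain)
#   baseline     - the same scale measured at baseline
#   group        - 1 = participatory RTW program, 0 = usual care
#   months       - measurement time, treated as categorical
#   physician_id - insurance physician (random intercept to handle clustering)
# Illustrative only; not the trial's own code.
import pandas as pd
import statsmodels.formula.api as smf

long_df = pd.read_csv("rtw_followup_long.csv")  # hypothetical file name

model = smf.mixedlm(
    "score ~ C(months) * group + baseline",   # group-by-time interaction, baseline-adjusted
    data=long_df,
    groups=long_df["physician_id"],           # random intercept per insurance physician
)
result = model.fit(reml=True)
print(result.summary())   # the C(months):group terms correspond to the
                          # Group*Time P values reported in Table 4
```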
[SUBTITLE] Health-Related Outcomes [SUBSECTION] Table 4 also presents the results on the effectiveness of the participatory RTW program on health-related outcomes, accounting for possible clustering at the level of the insurance physicians. No statistically significant differences were found between the improvements in functional status, pain intensity, and perceived health in the participatory RTW program group and the usual care group.

Recruitment of participants took place between March 2007 and September 2008. The returned screening questionnaires yielded 784 potentially eligible workers who were interested in participation. After telephone contact, 191 workers declined participation and 327 workers did not meet the inclusion criteria, leaving 266 workers for whom intake meetings were planned. During the intake meetings, 103 workers were not included for various reasons (see Fig. 1). Finally, 163 workers who met all inclusion criteria were enrolled in the study and randomised to the participatory RTW program (n = 79) or usual care (n = 84). An overview of the recruitment flow is presented in Fig. 1.

Fig. 1 Flow of the workers in the study

Data about RTW and sickness benefit were available for all workers for the entire 12-month follow-up period. The RTW data were collected from the SSA database, including the workers' files, and from the self-report questionnaires. Data about sickness benefit were collected from the SSA database. For the self-reported secondary outcomes, complete follow-up data were available for 116 participants (71.2%).

Table 2 presents a summary of the measured baseline characteristics of the participants in the participatory RTW program group and the usual care group. For most of the baseline characteristics (i.e. worker-related, pain-related, health-related, work-related, and behavioural determinants) there were no or only minor (non-significant) differences between the two groups. All participants were fully work disabled at the time of enrolment. Approximately half of the workers in both groups (52.4% in the usual care group and 54.4% in the intervention group) worked prior to reporting sick, i.e. before the onset of work disability. For the participants who did not work before reporting sick, the median duration between the end of the last job and the first day of reporting sick was 13.0 months (interquartile range (IQR) 6.3–45.3 months) in the usual care group and 13.5 months (IQR 6.0–43.5 months) in the participatory RTW program group. However, despite randomisation, prognostic dissimilarities were present at baseline: controls had worse physical role functioning (P = 0.052), a more regular work schedule in their last job (P = 0.031), and less intention to RTW despite symptoms (P = 0.024).
Where necessary, these dissimilarities were adjusted for in the analyses.

Table 2 Baseline characteristics of the workers without employment contract, sick-listed due to musculoskeletal disorders (N = 163)

Variable | Intervention group (N = 79) | Control group (N = 84)
Age, years (mean ± SD) | 44.0 ± 10.7 | 45.6 ± 9.0
Gender (% male) | 57.0 | 63.1
Level of education (% low) | 57.0 | 60.7
Pain intensity (1–10 score) (mean ± SD)
  Back pain | 7.1 ± 2.0 | 6.8 ± 1.9
  Neck pain | 7.1 ± 1.7 | 6.7 ± 2.0
  Other pain | 6.5 ± 1.8 | 6.3 ± 1.9
Functional status (0–100 score) (mean ± SD)
  Physical functioning | 46.0 ± 22.1 | 51.4 ± 21.3
  Social functioning | 49.4 ± 25.4 | 51.2 ± 27.5
Perceived health (0–100 score) (mean ± SD) | 56.3 ± 21.8 | 60.0 ± 20.3
Type of worker (%)
  Temporary agency worker | 51.9 | 52.4
  Unemployed worker | 48.1 | 47.6
Type of last work (% physically and/or mentally demanding) | 74.7 | 75.0
Work schedule (% day work) | 58.2 | 78.3
Worker's expectation regarding RTW at baseline (mean ± SD) | 2.22 ± 1.15 | 2.14 ± 1.12
Intention to RTW despite symptoms (1–5) (mean ± SD) | 3.46 ± 1.10 | 3.05 ± 1.19

In the usual care group, 7 workers did not receive usual care because they reported full recovery of their health complaints, with subsequent ending of sickness benefit shortly after randomisation. Likewise, 7 workers in the participatory RTW program group did not receive the allocated intervention, i.e. the participatory RTW program was not followed, for several reasons (see Fig. 1). The remaining 72 workers in the intervention group all had the first consult with the insurance physician. One worker reported full recovery of health, with ending of sickness benefit, before the meeting with the RTW coordinator. For 23 workers the insurance physician established full work ability with ending of sickness benefit, i.e. claim closure, during the first consult. In case of claim closure without actual RTW, these workers were, in accordance with the usual care policy of the SSA, not referred to the RTW coordinator for making an RTW action plan. In addition, following the protocol, 10 workers were not referred to the RTW coordinator because the insurance physician established absence of work ability on medical grounds for at least 3 months during the first consult. The remaining 38 workers in the intervention group met with the labour expert and the RTW coordinator to draw up a consensus-based RTW plan. Referral to a vocational rehabilitation agency for finding a suitable temporary workplace took place for 30 workers. Placement in a temporary (therapeutic) workplace was successfully achieved for 22 workers. In addition, four workers found a suitable workplace on their own initiative. The median duration of working in a temporary (therapeutic) workplace was 90 days (IQR 41–147 days). During the 12-month follow-up, 12 of the 22 workers with therapeutic work resumption were offered an employment contract.

[SUBTITLE] Consults with the Occupational Health Care Professionals [SUBSECTION] In the participatory RTW program group, 21 workers (23 consults in total) had a consult with the case-manager of the SSA, compared to 41 workers (49 consults in total) in the usual care group.
However, the workers in the participatory RTW program group had more consults with the insurance physician (n = 70; 157 consults) and the labour expert (n = 36; 55 consults) of the SSA than the usual care group, in which 60 workers (107 consults) reported a consult with the insurance physician and 19 workers (26 consults) reported a consult with the labour expert.

[SUBTITLE] Received Occupational Health Care Interventions [SUBSECTION] In the participatory RTW program group, 25 workers received a usual care intervention (28 interventions in total) during follow-up, with a median duration of 6.4 months (IQR 3.0–12.4 months), compared to 30 workers in the usual care group (32 interventions in total), with a median duration of 7.4 months (IQR 2.9–11.2 months). Three workers in the participatory RTW program group and two workers in the usual care group received two occupational health care interventions. The received usual care interventions consisted of: (1) offering (short-term) education/training (participatory RTW program group (PWP) n = 11, usual care group (UC) n = 5); (2) referral to a vocational rehabilitation agency (PWP n = 4, UC n = 9); (3) referral to an employment agency for employment-finding (PWP n = 5, UC n = 4); (4) personal coaching (PWP n = 3, UC n = 3); (5) interview training (including writing a job application letter) (PWP n = 2, UC n = 4); (6) placement in a temporary workplace (on trial) (PWP n = 1, UC n = 0); (7) searching for a sheltered workplace (PWP n = 1, UC n = 3); (8) on-the-job training (PWP n = 1, UC n = 1); (9) referral to a graded activity program (PWP n = 0, UC n = 2); and (10) type of intervention unknown (PWP n = 0, UC n = 1).

The median time until sustainable first RTW was 161 days (IQR 88–365 days) in the participatory RTW program group and 299 days (IQR 71–365 days) in the usual care group (log rank test; P = 0.12). The median total number of days at work during follow-up was 128 days (IQR 0–247 days) in the participatory RTW program group and 46 days (IQR 0–246 days) in the usual care group. The Kaplan–Meier curves for time until sustainable first RTW in both groups are presented in Fig. 2.
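The time-to-RTW comparison above rests on Kaplan–Meier estimation and a log-rank test. A minimal sketch with the lifelines library, using an invented toy data frame rather than the study data (column names are assumptions), might look like this:

```python
# Hypothetical sketch of Kaplan-Meier curves (as in Fig. 2) and a log-rank test.
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "days": [60, 120, 161, 200, 365, 365, 90, 299, 310, 330, 365, 365],
    "rtw":  [1,  1,   1,   1,   0,   0,   1,  1,   1,   1,   0,   0],   # 1 = sustainable RTW, 0 = censored
    "pwp":  [1,  1,   1,   1,   1,   1,   0,  0,   0,   0,   0,   0],   # 1 = program, 0 = usual care
})

ax = plt.subplot(111)
for value, label in [(1, "Participatory RTW program"), (0, "Usual care")]:
    grp = df[df["pwp"] == value]
    KaplanMeierFitter(label=label).fit(grp["days"], grp["rtw"]).plot_survival_function(ax=ax)

pwp, uc = df[df["pwp"] == 1], df[df["pwp"] == 0]
result = logrank_test(pwp["days"], uc["days"],
                      event_observed_A=pwp["rtw"], event_observed_B=uc["rtw"])
print(result.p_value)   # the paper reports a log-rank P of 0.12 on the real data
```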
The crude Cox regression analysis showed a violation of the proportional hazards assumption, with crossing of the survival curves at approximately 90 days of follow-up. Therefore, a time-dependent covariate (T > 90 days) was added to the Cox proportional hazards model (P = 0.011). To adjust for significant confounding, the baseline variables 'work schedule in last work' and 'intention to RTW despite symptoms' were included in the model (Table 2). The resulting adjusted HR (T ≤ 90 days) was 0.76 (95% CI 0.42–1.37; P = 0.36), and the adjusted HR (T > 90 days) was 2.24 (95% CI 1.28–3.94; P = 0.005). The per-protocol analysis showed an adjusted HR (T ≤ 90 days) of 0.93 (95% CI 0.49–1.87; P = 0.83) and an adjusted HR (T > 90 days) of 2.25 (95% CI 1.28–3.98; P = 0.005). In addition, the per-protocol analysis showed a median time until sustainable RTW of 157 days (IQR 89–365 days) in the participatory RTW program group and 330 days (IQR 87–365 days) in the usual care group (log rank test; P = 0.029). Significant clustering at the level of the insurance physicians or at the level of the pairs of labour experts and RTW coordinators was not found in the analyses (Table 3).

Fig. 2 Kaplan–Meier curves for sustainable first return-to-work during the 12-month follow-up for the participatory return-to-work program group and the usual care group

Table 3 Differences in return-to-work (RTW) between the participatory RTW program group and the usual care group

Adjusted model(a) | Regression coefficient | SE | P value | HR | 95% CI lower | 95% CI upper
Intervention, T ≤ 90 days | −0.29 | 0.30 | 0.34 | 0.75 | 0.42 | 1.34
Intervention, T > 90 days | 0.78 | 0.28 | 0.01 | 2.19 | 1.26 | 3.80
Adjusted for work schedule, T ≤ 90 days | −0.23 | 0.30 | 0.44 | 0.79 | 0.44 | 1.43
Adjusted for work schedule, T > 90 days | 0.84 | 0.29 | <0.005 | 2.32 | 1.32 | 4.10
Adjusted for intention to RTW despite symptoms, T ≤ 90 days | −0.33 | 0.30 | 0.27 | 0.72 | 0.40 | 1.29
Adjusted for intention to RTW despite symptoms, T > 90 days | 0.74 | 0.28 | 0.01 | 2.10 | 1.20 | 3.66
Adjusted for work schedule + intention to RTW despite symptoms, T ≤ 90 days | −0.27 | 0.30 | 0.36 | 0.76 | 0.42 | 1.37
Adjusted for work schedule + intention to RTW despite symptoms, T > 90 days | 0.81 | 0.29 | 0.01 | 2.24 | 1.28 | 3.94
Clustering on level of insurance physician, T ≤ 90 days | −0.30 | 0.28 | 0.42 | 0.74 | 0.35 | 1.55
Clustering on level of insurance physician, T > 90 days | 0.74 | 0.47 | <0.005 | 2.10 | 1.33 | 3.22
Clustering on level of labour expert + RTW coordinator, T ≤ 90 days | −0.25 | 0.35 | 0.47 | 0.78 | 0.40 | 1.54
Clustering on level of labour expert + RTW coordinator, T > 90 days | 0.73 | 0.26 | 0.01 | 2.10 | 1.24 | 3.48

Cox proportional hazards models from the adjusted Cox regression analyses. Regression coefficients, standard errors (SE), P values, hazard ratios (HR) and 95% confidence intervals (CI) are presented.
(a) Results of the crude Cox regression model are not presented because of violation of the proportional hazards assumption, i.e. crossing of the survival curves at approximately 90 days of follow-up.
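Because the proportional hazards assumption was violated, the hazard ratio was estimated separately for T ≤ 90 days and T > 90 days. One way to obtain that kind of piecewise estimate is to split each worker's follow-up at day 90 and fit a Cox model with time-varying covariates. The sketch below uses lifelines with hypothetical column names and toy data, not the authors' dataset:

```python
# Piecewise Cox model: the intervention effect is allowed to differ before and after day 90.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

df = pd.DataFrame({            # one row per worker (toy data, same idea as the KM example)
    "id":           [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
    "time":         [60, 120, 161, 200, 365, 365, 90, 299, 310, 330, 365, 365],
    "rtw":          [1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0],
    "intervention": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
})

def split_at_90(row):
    """Return the (0, 90] and (90, end] episodes for one worker."""
    if row["time"] <= 90:
        return [dict(id=row["id"], start=0, stop=row["time"], event=row["rtw"],
                     tx_early=row["intervention"], tx_late=0)]
    return [dict(id=row["id"], start=0, stop=90, event=0,
                 tx_early=row["intervention"], tx_late=0),
            dict(id=row["id"], start=90, stop=row["time"], event=row["rtw"],
                 tx_early=0, tx_late=row["intervention"])]

long_df = pd.DataFrame([ep for _, r in df.iterrows() for ep in split_at_90(r)])
ctv = CoxTimeVaryingFitter(penalizer=0.1)   # small penalty stabilises the toy fit
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()   # exp(coef) of tx_early / tx_late ~ HR before / after day 90
```

On the real data, the exponentiated coefficients of tx_early and tx_late would correspond to the T ≤ 90 and T > 90 hazard ratios reported in Table 3.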
[SUBTITLE] Duration of Sickness Benefit [SUBSECTION] The median claim duration until first sustainable ending of sickness benefit was 160 days (IQR 39–365 days) in the participatory RTW program group and 91 days (IQR 33–344 days) in the usual care group (Mann–Whitney U test; P = 0.14). The per-protocol analysis results differed slightly, with median durations of 168 days (IQR 45–365 days) and 109 days (IQR 35–365 days), respectively (Mann–Whitney U test; P = 0.18).
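For completeness, a Mann–Whitney U comparison of claim durations of this kind can be run with scipy; the two duration vectors below are invented for illustration and are not the study data:

```python
# Minimal illustration of the Mann-Whitney U test used for the claim-duration comparison.
from scipy.stats import mannwhitneyu

pwp_days = [39, 88, 160, 210, 300, 365, 365]
uc_days  = [33, 60, 91, 150, 250, 344, 365]
stat, p = mannwhitneyu(pwp_days, uc_days, alternative="two-sided")
print(f"U = {stat:.1f}, two-sided P = {p:.2f}")
```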
[SUBTITLE] Main Findings [SUBSECTION] This paper presents the effects of a newly developed participatory RTW program for temporary agency workers and unemployed workers, sick-listed due to MSD, compared to usual care. The main findings of this study are a non-significant trend towards delayed RTW in the intervention group in the first 90 days, followed by a significant advantage in RTW rate after 90 days (hazard ratio of 2.24). In addition, the median duration until sustainable first RTW was 161 days in the participatory RTW program group, compared to 299 days in the usual care group. The initial delay in RTW found in the intervention group may be due to the more intensive involvement after enrolment in the new participatory RTW program. A similar finding has been described by others [31, 32]. The considerable gain in RTW rate after 90 days is mostly due to significantly more and earlier work resumption in the intervention group from 90 days onward until the end of the 12-month follow-up. Finally, no significant differences were found with regard to the measured secondary outcomes.

[SUBTITLE] Strengths of This Study [SUBSECTION] A strength of this study is the focus on a vulnerable group within the working population, namely sick-listed workers without an employment contract or with a flexible labour arrangement. These workers are burdened with a 'labour market handicap', the absence of a workplace/employer to return to when sick-listed being a major RTW obstacle [15, 16]. Therefore, creating an actual RTW perspective by offering the possibility of a temporary (therapeutic) workplace is also an important strength of this study.
Furthermore, our primary outcome measure, i.e. sustainable first RTW, should be considered a strength of this study. First RTW is commonly used as an outcome measure for RTW interventions, but it does not capture possible recurrences of sickness absence shortly after work resumption. By defining sustainable RTW as RTW for at least 28 days without relapse, the results of this study can be considered more robust [33].

[SUBTITLE] Limitations of This Study [SUBSECTION] A limitation of this pragmatic RCT is the absence of blinding of both the sick-listed workers and the occupational health care professionals of the SSA to the allocation outcome. Unfortunately, due to the nature of the participatory intervention program, blinding was not possible.
A second limitation is the duration of the follow-up period. The study population is characterised by a greater distance to the labour market and an increased risk of long-term work disability. To assess whether the beneficial effect of the participatory RTW program persists beyond the 12-month follow-up, an additional measurement after 2 years, with RTW data collected from the SSA database, could provide more insight and possibly increase the validity of the results found in this study.
A third limitation is the generalisability of the results of this study to other contexts, e.g. other countries. The participatory RTW program was specifically tailored to our study population and to the Dutch context in which it was implemented [15]. Application of this intervention in a different setting should be preceded by tailoring of the program, taking into account the specific characteristics of the population as well as the social, political and cultural context in which the program will be implemented and used.

[SUBTITLE] Comparison with Other Studies [SUBSECTION] Findings in the international literature show that workplace-based interventions are effective in reducing sickness absence among workers with musculoskeletal disorders [34]. More specifically, participatory RTW interventions including a workplace component have been shown to be effective on work-related outcomes for sick-listed employees with sub-acute low back pain, i.e. in the early stage of sickness absence [17, 35], as well as for chronic back pain patients in an advanced phase of work disability [18]. However, while the above-mentioned studies focused on regular employees, i.e. those with relatively permanent employment relationships, this study shows that a participatory RTW intervention with the possibility of a suitable (therapeutic) workplace is also effective on RTW for a more vulnerable group within the working population, i.e. sick-listed workers who no longer have an employer or workplace to return to. In addition, our study findings show that the participatory RTW program can also be applied to workers with all types of MSD, not merely to workers with low back pain.
The absence of beneficial or adverse effects on secondary health-related outcomes in this study is in line with recent findings of Lambeek and colleagues [18], and supports the work disability paradigm, i.e. recovery of health is not a necessary precondition for work resumption. The discrepancy between work-related outcomes and health outcomes has also been reported by others [34]. A possible explanation for this is the focus of the intervention on reducing barriers to RTW rather than on symptomatic recovery from MSDs.
In occupational health care research there is an increasing awareness of the importance of behavioural determinants in the field of RTW research and intervention development [36–38]. Work attitude, social support, self-efficacy, and intention to RTW have all been associated with time to RTW. In our study no statistically significant differences were found between the groups for changes in Attitude, Social support, and self-Efficacy (ASE) determinants. However, the ASE determinants were only measured at baseline and after 3 months of follow-up. In view of the significant gain in more rapid RTW after 90 days, it is possible that favourable effects on behavioural determinants were present at a later stage during follow-up but were not measured. Nevertheless, in line with the findings of van Oostrom and colleagues [38], the variable 'intention to RTW despite symptoms' proved to be a significant confounder for sustainable first RTW in the Cox regression analysis.

[SUBTITLE] Implications for Practice [SUBSECTION] With markedly earlier work resumption (intention-to-treat: median of 138 days earlier; per-protocol: median of 173 days earlier) during the one year of follow-up, the newly developed participatory RTW program seems to be a promising intervention to enhance work resumption and reduce work disability among temporary agency workers and unemployed workers sick-listed due to MSD. However, although not statistically significant, the new RTW program had a negative impact on sickness benefit duration (intention-to-treat: median of 69 days longer; per-protocol: median of 59 days longer). This was mainly because in most cases the therapeutic workplaces were offered with ongoing sickness benefit, i.e. the total number of days working in these temporary workplaces represented 95% of the difference in total benefit duration between the groups. However, in our opinion, the gains in higher RTW rate and earlier RTW may counterbalance this added cost burden by enhancing social participation of vulnerable workers [39] and by generating an economic benefit in terms of productivity gain. Cost-effectiveness and cost-benefit analyses will be conducted to evaluate whether the effects indeed counterbalance the costs. Moreover, these results will be essential to convince policy makers that implementation of the new RTW program is a worthwhile and necessary investment to achieve a sustainable contribution of vulnerable workers to the labour force. This approach is supported by a recent study showing that the application of work interventions, together with less strict compensation policies for eligibility for long-term benefits, contributed to sustainable RTW [40]. Nevertheless, due to the relatively short follow-up in this study, our findings should be confirmed in future studies with a longer follow-up. Another possibility could be offering subsidised (temporary) workplaces. This kind of arrangement already exists in the Netherlands for young disabled workers [41]. One could argue that such temporary arrangements could be extended to other groups of vulnerable workers within the framework of an active labour market policy.
Furthermore, in our study the RTW coordinator played a key role in guaranteeing (perceived) safety, equality among all stakeholders, and active involvement during the making of the consensus-based RTW plan. A systematic review also showed that the active involvement of an independent RTW coordinator is an important key element of RTW interventions [42]. For successful implementation we therefore recommend the use of an RTW coordinator competency profile, in line with the recommendation of Pransky and colleagues [43], who stated that identification of a core set of essential RTW coordinator competencies is essential.
[ "introduction", "materials|methods", null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, null, null, null, null, null, "discussion", null, null, null, null, null ]
[ "Work disability", "Return-to-work interventions", "Musculoskeletal disorders", "Vulnerable worker populations", "Worker without employment contract" ]
Effects of a weight loss intervention on body mass, fitness, and inflammatory biomarkers in overweight or obese breast cancer survivors.
21336679
Obesity is characterized by chronic mild inflammation and may influence the risk and progression of cancer.
BACKGROUND
Study participants averaged 56 years of age (N=68). Intervention participants (n=44 vs. 24 controls) participated in a cognitive behavioral therapy-based weight management program as part of an exploratory randomized trial. The intervention incorporated strategies to promote increased physical activity and diet modification. Baseline and 16-week data included height, weight, body composition, physical activity level, and biomarkers IL-6, IL-8, TNF-α, and VEGF.
METHODS
Weight loss was significantly greater in the intervention group than controls (-5.7 [3.5] vs. 0.2 [4.1] kg, P<0.001). Paired t tests noted favorable changes in physical activity level (P<0.001 intervention, P=0.70 control), marginally lower IL-6 levels (P=0.06 intervention, P=0.25 control) at 16 weeks for participants in the intervention group, and lower TNF-α levels for participants in the intervention (P<0.05) and control groups (P<0.001). Increased physical activity was associated with favorable changes in IL-6 for participants in the intervention group (R² = 0.18; P<0.03).
RESULTS
Favorable changes in cytokine levels were observed in association with weight loss in this exploratory study with overweight breast cancer survivors.
CONCLUSION
[ "Adult", "Aged", "Biomarkers", "Body Mass Index", "Breast Neoplasms", "California", "Cognitive Behavioral Therapy", "Female", "Humans", "Middle Aged", "Obesity", "Overweight", "Physical Fitness", "Regression Analysis", "Survivors", "Weight Loss", "Weight Reduction Programs" ]
3212681
Introduction
Breast cancer is the most common invasive cancer among women in developed countries. It accounts for 26% of incident cancers and 15% of cancer deaths among women in the US, with an estimated 180,000 women diagnosed with breast cancer in 2008 [1]. Most breast cancers are now diagnosed at a localized stage, which is associated with a 5-year survival rate of 96% [1]. In addition, improvements in initial treatments have resulted in an ever-increasing number of breast cancer survivors [1, 2]. Recurrence, risks for second primary cancers, and comorbidities, such as diabetes, cardiovascular disease, and osteoporosis, are issues that need to be considered in long-term management of these women [3, 4]. Overweight or obesity is a negative prognostic factor in both pre- and postmenopausal breast cancer [5, 6], and it is increasingly being recognized as a medical condition that is characterized by chronic mild inflammation [7]. Several mechanisms have been proposed to explain the adverse effect of overweight on prognosis after the diagnosis of breast cancer, including the unfavorable effects of obesity on circulating levels of inflammatory cytokines [8]. Inflammatory cytokines, such as interleukin-6 (IL-6), interleukin-8 (IL-8), tumor necrosis factor-α (TNF-α), and vascular endothelial growth factor (VEGF), have been consistently associated with breast pathology, and specifically, the development of breast cancer [9]. This is possibly a result of their regulatory impact on proliferation of breast cancer cells through estrogen production [10]. Even though the exact processes with which these cytokines may influence breast carcinoma is still under debate [11], higher levels of IL-6 and IL-8 are both associated with advanced disease and/or metastases in breast cancer patients [12]. In addition to influencing the risk and progression of cancer [13, 14], research efforts have identified chronic mild inflammation as an independent predictor of several other chronic diseases and mortality [15]. One probable explanation for the relationship between obesity and inflammation is the finding that adipose tissue functions as a major secretory organ for inflammatory markers, including TNF-α, IL-6, IL-8, and VEGF [14, 16, 17]. Furthermore, increased production and release of TNF-α, IL-6, and IL-8 by adipose tissue are associated with degree of obesity [8, 16]. Conversely, weight loss has been associated with a reduction in these inflammatory factors [18]. Most studies evaluating the influence of weight loss on cytokine levels relied primarily on reduced energy intake as a behavioral strategy [8, 19]. In a randomized clinical trial of weight loss and chronic inflammation in obese adults, Nicklas et al. [15] found that diet-induced weight loss of 5.7% on average resulted in significant reductions in concentrations of IL-6 and TNF-α. In a study with 120 premenopausal obese women (body mass index; BMI ≥ 30 kg/m2), a reduction in BMI in the intervention group was associated with lower serum levels of IL-6 and C-reactive protein (CRP) [20]. In a recent review, changes in cytokine levels were noted in all 19 studies designed to evaluate the effects of weight loss and exercise on markers of inflammation [19]. The duration of the interventions ranged from 4-6 weeks to 2 years, with reported weight loss ranging from 3.2% to 30% of body weight. Physical activity has also been shown to affect local and systemic cytokine production. 
In several studies, exercise interventions of moderate intensity led to significant reductions in circulating levels of IL-6, TNF-α, and IL-8 in healthy individuals and in patients with cardiovascular disease [21–24]. In other studies, the biological response to exercise was found to be dependent on the intensity and duration of the activity [25]. Although several studies have evaluated the relationship between weight loss, exercise, and circulating cytokine levels in healthy obese individuals [15, 26] or in individuals with various health conditions, these relationships have not been previously examined in overweight breast cancer survivors. The purpose of this study was to specifically examine the relationships between weight loss and physical activity and selected inflammatory markers in breast cancer survivors. Samples were obtained from women who participated in a small randomized trial, the Healthy Weight Management (HWM) Study for Breast Cancer Survivors (2002-2004), which successfully promoted weight loss in overweight or obese subjects assigned to the intervention arm. The current study is an exploratory analysis of the effect of weight loss and increased physical activity on inflammatory cytokines TNF-α, IL-6, IL-8 and VEGF at the end of the 16-week intensive intervention period.
null
null
Results
Participants ranged from 33 to 71 years of age. Ninety-four percent of the participants were non-Hispanic white. The majority of the participants were married (77%), and many had completed college or higher levels of education. No significant differences were found between intervention and control groups for demographic characteristics such as age, level of education, and race/ethnicity. Similarly, no differences at baseline were observed for outcome measures such as BMI, weight, and physical fitness or activity levels (Table 1).

Table 1. Characteristics of the study groups at baseline
Variable | Intervention (n = 44), Mean (SD) | Control (n = 24), Mean (SD)
Age, years | 56 (9) | 56 (8)
Years of education | 16 (2) | 16 (2)
Body mass index, kg/m2 | 30.8 (3.8) | 31.3 (5.2)
Weight, kg | 83.9 (11.8) | 87.2 (14.7)
Waist circumference, cm | 101.5 (12.0) | 106.7 (13.4)
Total body fat, kg | 36.9 (7.5) | 40.4 (10.2)
Step test, HR/30 s | 60 (8) | 57 (7)
Moderate or vigorous physical activity, h/week | 3.2 (2.1) | 3.7 (3.3)
None of the means are significantly different between groups at baseline.

According to independent t tests, the magnitude of reduction in BMI (P < 0.0001), weight (−6.8% in intervention and −0.3% in control, P < 0.0001), waist circumference (P < 0.05), and percent body fat (P < 0.0001) between baseline and 16 weeks was significantly greater for participants in the intervention group (Table 2). Additionally, performance on the stepping test indicated better fitness (P < 0.05), and hours of moderate or vigorous physical activity between baseline and 16 weeks improved significantly more for participants in the intervention group than for controls (P < 0.05; Table 2).

Table 2. Mean differences in magnitude of change for key variables between baseline and 16 weeks, Mean (SD)
Variable | Intervention (n = 44) | Control (n = 24)
Change in body mass index, kg/m2 | −2.1 (1.3)** | −0.1 (1.5)
Change in weight, kg | −5.7 (3.5)** | −0.2 (4.1)
Change in waist circumference, cm | −7.1 (6.4)* | −2.5 (7.7)
Change in percent body fat (a) | −4.5 (3.8)** | −0.9 (2.3)
Change in step test, HR/30 s | −6.0 (8)* | −1.0 (6)
Change in physical activity levels, h/week | 2.2 (3.3)* | 0.3 (3.7)
(a) One control and two intervention subjects had missing body composition data (one intervention at baseline; one intervention and one control at 16 weeks). *P < 0.05; **P < 0.0001

According to paired t tests evaluating within-group differences in inflammatory factors for the intervention group between baseline and 16 weeks, levels of TNF-α were significantly reduced (P < 0.05). A reduction was also noted for the IL-6 level (P = 0.06; Table 3). TNF-α was also found to be decreased at 16 weeks for the control group (P < 0.05; Table 3). No differences were noted for IL-8 and VEGF.
Table 3. Within-group differences for change in cytokine levels between baseline and 16 weeks
Variable | Intervention N | Baseline, Mean (SD) | 16 Weeks, Mean (SD) | P value | Control N | Baseline, Mean (SD) | 16 Weeks, Mean (SD) | P value
TNF-α (pg/mL) | 42 | 5.9 (2.0) | 5.4 (1.9) | 0.03 | 24 | 5.4 (1.3) | 4.6 (1.0) | 0.0001
IL-6 (pg/mL) | 43 | 1.7 (0.9) | 1.4 (0.9) | 0.06 | 24 | 1.7 (1.3) | 1.4 (0.8) | 0.33
IL-8 (pg/mL) | 43 | 4.8 (1.7) | 5.1 (2.0) | 0.29 | 23 | 4.5 (1.5) | 4.4 (1.9) | 0.75
VEGF (pg/mL) | 42 | 29.3 (21.8) | 33.5 (27.0) | 0.20 | 23 | 34.9 (22.5) | 31.3 (16.6) | 0.38
IL-6 interleukin-6, IL-8 interleukin-8, TNF-α tumor necrosis factor-α, and VEGF vascular endothelial growth factor. Three-sigma outliers (one each for IL-6, IL-8, and VEGF and one for both IL-6 and VEGF in the intervention group; one each for IL-8 and VEGF in the control group) were excluded.

Correlation analysis showed that several inflammatory factors were associated with key outcome measures for the participants in the intervention group. Both IL-6 and VEGF were positively correlated with BMI at 16 weeks (r = 0.37, P < 0.05 for IL-6, and r = 0.44, P < 0.01 for VEGF). IL-6 levels at 16 weeks were also positively correlated with performance on the step test (r = 0.42, P < 0.01). Increased total hours of moderate or vigorous exercise at 16 weeks was correlated with favorable reductions in IL-6 (r = −0.35, P < 0.05) and VEGF (r = −0.46, P < 0.01) between baseline and 16 weeks. In a regression analysis using participants in the intervention group, controlling for change in weight and change in heart rate/min after the stepping test, an increased level of physical activity was associated with favorable changes in IL-6 levels (R² = 0.18, P < 0.05; Table 4). Other cytokines did not show significant associations with change in physical activity.

Table 4. Regression model of factors associated with IL-6 levels at 16 weeks in the intervention group (n = 38)
Variable | β-coefficient | Significance (P value)
Increase in moderate or vigorous physical activity, hours/week | −0.125 | 0.02
Change in weight, kg | 0.01 | 0.2
Change in heart rate/min after step test | −0.01 | 0.7
Model R² = 0.18. Excluding one 3-sigma outlier.
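The regression reported in Table 4 (IL-6 regressed on increased physical activity, adjusting for change in weight and change in post-step-test heart rate) can be illustrated with a short script. This is a minimal sketch on synthetic data; the column names (pa_change, weight_change, hr_change, il6_change) and the simulated values are assumptions for illustration only, not the study data or the authors' exact code.

```python
# Sketch of a Table 4-style regression: change in IL-6 regressed on change in
# physical activity, adjusting for change in weight and change in step-test heart rate.
# Synthetic data only; column names and values are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 38  # number of intervention participants retained in the published model
df = pd.DataFrame({
    "pa_change": rng.normal(2.2, 3.3, n),       # hours/week of moderate or vigorous activity
    "weight_change": rng.normal(-5.7, 3.5, n),  # kg
    "hr_change": rng.normal(-6.0, 8.0, n),      # heart-rate change after the step test
})
# Simulate an IL-6 change that declines as activity increases (illustrative effect size)
df["il6_change"] = -0.1 * df["pa_change"] + rng.normal(0, 0.8, n)

model = smf.ols("il6_change ~ pa_change + weight_change + hr_change", data=df).fit()
print(model.params)    # beta coefficients, analogous to Table 4
print(model.pvalues)
print(f"R^2 = {model.rsquared:.2f}")
```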
null
null
[ "Participants", "Weight Loss Intervention", "Measures", "Blood Sampling and Assays", "Components of the intervention", "Data Analysis" ]
[ "The participants in the HWM Study were 85 breast cancer survivors living in San Diego, CA, USA. Primary recruitment procedures included community outreach and networking with clinical contacts to receive referrals. Other strategies included advertising in a major local newspaper and setting up booths at community events. Finally, a list of potentially eligible participants from the University of California, San Diego Cancer Registry was requested. A letter was sent to those on the list inviting them to contact the study coordinator if they were interested in participating in the study.\nThe inclusion criteria for the study were: 18 years and older; diagnosed with stage I-IIIA breast cancer within the previous 14 years; completed initial treatments (i.e., surgery, adjuvant chemotherapy, radiation therapy); initial BMI ≥ 25.0 kg/m2 (overweight or obese) and a minimum of 15 kg over ideal weight as defined by the Metropolitan Life Insurance Company tables [27]; willingness and ability to attend group meetings for 16 weeks and to maintain contact with the investigators for 1 year; and ability to provide dietary and exercise data by telephone at prescribed intervals. An exclusion criterion was the inability to participate in physical activity because of severe disability (e.g., severe arthritic conditions).\nAt screening and recruitment, the ability to participate in mild and moderate physical activity was assessed with the Physical Activity Readiness Questionnaire and Health History Questionnaire, a standard procedure for screening participants for community-based physical activity programs of this nature [28]. Following recruitment and written consent, participants were stratified by BMI [(25.0-29.9 (n = 38) versus >30.0 (n = 47) kg/m2)] and age [<=50 (n = 26), 51-65 (n = 47), >65 (n = 12)], and randomly assigned to either the group-based intervention program (n = 56) or a control group (n = 29), with a 2:1 intervention-to-control ratio to provide sufficient statistical power for the main study hypothesis (differential weight loss between groups), while minimizing subject numbers in this feasibility study. A test of two-sample comparison of the 16-week weight change scores was selected with the alpha (type one error) level set at 0.025 assuming a Bonferroni correction for multiple hypothesis tests. The power (or one minus the type two error) was 80%. The standard deviation for weight change, assumed equal in both groups at 16 weeks, was set at 5.2 kg based on data from Andersen et al. [29]. This sample size analysis indicated that a final number of 63 participants (42 intervention and 21 control), after accounting for dropouts, would provide an adequately powered comparison to detect a clinically significant effect size. Figure 1 includes the CONSORT flow chart for the HWM study.\nFig. 1CONSORT Chart for Healthy Weight Management Study including recruitment information\n\nCONSORT Chart for Healthy Weight Management Study including recruitment information", "The intervention sessions were led by trained investigators and research staff. The program curriculum consisted of group sessions provided according to the following schedule: weekly for 4 months, and follow-up monthly sessions through 12 months. The primary goal of the intervention was to promote regular physical activity and reduced energy intake in order to facilitate weight loss (Fig. 2). 
The group meetings consisted of discussion and educational/didactic sessions that covered the content areas, with the major proportion of time devoted to increasing physical activity. All intervention subjects also received intensive individualized telephone-based counseling from the study coordinator, starting with weekly calls and decreasing in frequency after the first month (every other week for the next 2 months, and once a month thereafter). The time points for data collection from all subjects were baseline, 16 weeks, and 12 months. The group sessions offered to the treatment study arm was closed-group contingents (with an average of 12-15 women). To equalize possible seasonal effects on targeted behaviors and weight change in the two study arms, wait list subjects were followed concurrent with intervention group subjects and received general contact such as mailed communications during the study period. At study end, they were provided all written intervention materials and a concise version of the didactic material along with facilitated discussion in the format of a 2-day seminar.\nFig. 2Weight loss intervention curriculum topics\n\nWeight loss intervention curriculum topics", "Anthropometric measurements (height, weight, and waist and hip circumferences) were collected at baseline and 16 weeks using standard procedures, and body composition was measured with dual energy X-ray absorptiometry (DXA) using a Lunar DPX-NT densitometer (Lunar/GE Corp). Whole body, regional body fat, and percent fat were obtained from total body DXA scans. All scans were conducted by the same certified technician who was blinded on the assignment of the intervention for each participant.\nPhysical activity data were collected at baseline and 16 weeks using the 7-day physical activity recall instrument developed by Blair et al. [30]. This approach has been shown to be highly reliable (test-retest reliability = 0.99) [31], valid, and sensitive to the effects of physical activity promotion programs [30]. This instrument focuses on the participant’s daily activities over a 7-day period. A telephone interview is scheduled and the interviewer asks the participant to recall when and what kind of physical activity they had in the past week, and the intensity of their activity. Examples of moderate, hard, and very hard activities are provided to help them accurately identify the intensity.\nPhysical fitness data were collected with the 3-min stepping test, which was used to detect possible changes in aerobic fitness by measuring heart rate during the first 15 s of recovery from stepping. The stepping test has high reliability (0.92), is sensitive to change [32], and widely used to assess cardiorespiratory fitness [33].\n[SUBTITLE] Blood Sampling and Assays [SUBSECTION] Blood samples were collected at baseline and 16 weeks between the hours of 8 AM and 1 PM for a majority of the participants (83% at baseline, 68% at 16 weeks). Following centrifugation and separation, plasma or serum was stored at −80o C until assays were conducted. Levels of IL-6, TNF-α, IL-8, and VEGF were determined in duplicate by commercial ELISA with internal controls (R&D Systems, Mpls, MN). Intra-assay coefficient of variation (CVs) were <8%, and inter-assay CVs were <7%. Both samples from a given participant were assayed together [34].\nBlood samples were collected at baseline and 16 weeks between the hours of 8 AM and 1 PM for a majority of the participants (83% at baseline, 68% at 16 weeks). 
Following centrifugation and separation, plasma or serum was stored at −80o C until assays were conducted. Levels of IL-6, TNF-α, IL-8, and VEGF were determined in duplicate by commercial ELISA with internal controls (R&D Systems, Mpls, MN). Intra-assay coefficient of variation (CVs) were <8%, and inter-assay CVs were <7%. Both samples from a given participant were assayed together [34].\n[SUBTITLE] Components of the intervention [SUBSECTION] The overall content of the intervention included behavioral and cognitive strategies for implementing dietary modification and increasing physical activity [35]. The goal was to achieve a modest weight loss that is sustained, with an emphasis on features that increase this likelihood, such as acceptance of modest weight loss and focusing on skills for weight maintenance. The physical activity component involved encouraging and promoting regular planned aerobic exercise. The long-term goal was to achieve an average of at least 1 h/day of planned exercise at a moderate level of intensity, which is consistent with the current Institute of Medicine recommendations [36]. The main goal of the dietary guidance component was to promote a reduction in energy intake relative to expenditure, with a goal being an energy deficit of 500-1,000 kcal/day by individualized diet modification that emphasized reduced energy density of the overall diet [37], while avoiding excessive dietary restraint. The wait list group participants were provided only general contact (monthly check-up calls, holiday and seasonal cards, and mailed communications) without specific reference to weight management topics through a 12-month period of data collection. Following that period, they were provided all written intervention materials and a concise version of the didactic material, and facilitated discussion was offered in the format of a 2-day seminar.\nThe overall content of the intervention included behavioral and cognitive strategies for implementing dietary modification and increasing physical activity [35]. The goal was to achieve a modest weight loss that is sustained, with an emphasis on features that increase this likelihood, such as acceptance of modest weight loss and focusing on skills for weight maintenance. The physical activity component involved encouraging and promoting regular planned aerobic exercise. The long-term goal was to achieve an average of at least 1 h/day of planned exercise at a moderate level of intensity, which is consistent with the current Institute of Medicine recommendations [36]. The main goal of the dietary guidance component was to promote a reduction in energy intake relative to expenditure, with a goal being an energy deficit of 500-1,000 kcal/day by individualized diet modification that emphasized reduced energy density of the overall diet [37], while avoiding excessive dietary restraint. The wait list group participants were provided only general contact (monthly check-up calls, holiday and seasonal cards, and mailed communications) without specific reference to weight management topics through a 12-month period of data collection. Following that period, they were provided all written intervention materials and a concise version of the didactic material, and facilitated discussion was offered in the format of a 2-day seminar.", "Blood samples were collected at baseline and 16 weeks between the hours of 8 AM and 1 PM for a majority of the participants (83% at baseline, 68% at 16 weeks). 
Following centrifugation and separation, plasma or serum was stored at −80o C until assays were conducted. Levels of IL-6, TNF-α, IL-8, and VEGF were determined in duplicate by commercial ELISA with internal controls (R&D Systems, Mpls, MN). Intra-assay coefficient of variation (CVs) were <8%, and inter-assay CVs were <7%. Both samples from a given participant were assayed together [34].", "The overall content of the intervention included behavioral and cognitive strategies for implementing dietary modification and increasing physical activity [35]. The goal was to achieve a modest weight loss that is sustained, with an emphasis on features that increase this likelihood, such as acceptance of modest weight loss and focusing on skills for weight maintenance. The physical activity component involved encouraging and promoting regular planned aerobic exercise. The long-term goal was to achieve an average of at least 1 h/day of planned exercise at a moderate level of intensity, which is consistent with the current Institute of Medicine recommendations [36]. The main goal of the dietary guidance component was to promote a reduction in energy intake relative to expenditure, with a goal being an energy deficit of 500-1,000 kcal/day by individualized diet modification that emphasized reduced energy density of the overall diet [37], while avoiding excessive dietary restraint. The wait list group participants were provided only general contact (monthly check-up calls, holiday and seasonal cards, and mailed communications) without specific reference to weight management topics through a 12-month period of data collection. Following that period, they were provided all written intervention materials and a concise version of the didactic material, and facilitated discussion was offered in the format of a 2-day seminar.", "Data were analyzed on all participants (n = 68) who had data for weight, waist, percent body fat, physical fitness, physical activity, and inflammatory cytokines at baseline and 16 weeks, following the intensive intervention period, to explore the association between weight loss (independent variable) and change in each inflammatory factor (dependent variable). The relationship between physical activity (independent variable) and change in inflammatory biomarkers was also examined. Although 12 month data were collected as part of the parent study, the present findings describe data from the 16-week data collection period when blood samples were analyzed for cytokine assays.\nChange variables were computed to evaluate group differences in key study outcomes, such as BMI, weight, body composition, and level of physical activity. Group differences in outcome variables at 16 weeks between the intervention and control groups were assessed with independent t tests. After excluding values of cytokine data that exceeded three standard deviations from the overall mean, within group differences between baseline and 16 weeks were evaluated with paired t tests. Spearman correlations (excluding outliers) examined relationships between cytokines, BMI, percent body fat, and physical activity at baseline and at 16 weeks. Regression analyses explored the association between the increase in physical activity levels (independent variable) and change in each inflammatory factor (dependent variable), controlling for weight loss and change in stepping test heart rate. An alpha value ≤0.05 was considered statistically significant. 
Data were analyzed using SPSS for Windows, Version 11.5 (2002) and SAS statistical software, version 9.2 (2008)." ]
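The Data Analysis description above (change scores, exclusion of values beyond three standard deviations, within-group paired t tests, and Spearman correlations) can be sketched in a few lines. This is a hedged illustration on synthetic data; the column names and the simple baseline/16-week layout are assumptions, not the study dataset, and the original analyses were run in SPSS and SAS rather than Python.

```python
# Illustrative sketch of the described analysis steps on synthetic data:
# 3-SD outlier exclusion, within-group paired t test, and Spearman correlation.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 44
df = pd.DataFrame({
    "tnfa_baseline": rng.normal(5.9, 2.0, n),
    "tnfa_16wk": rng.normal(5.4, 1.9, n),
    "bmi_16wk": rng.normal(28.7, 3.5, n),
    "il6_16wk": rng.normal(1.4, 0.9, n),
})

def drop_3sd_outliers(s: pd.Series) -> pd.Series:
    """Mask values more than 3 SD from the column mean (returned as NaN)."""
    z = (s - s.mean()) / s.std()
    return s.where(z.abs() <= 3)

clean = df.apply(drop_3sd_outliers)

# Within-group paired t test, baseline vs 16 weeks (pairwise-complete cases)
paired = clean[["tnfa_baseline", "tnfa_16wk"]].dropna()
t, p = stats.ttest_rel(paired["tnfa_baseline"], paired["tnfa_16wk"])
print(f"TNF-alpha paired t test: t = {t:.2f}, p = {p:.3f}")

# Spearman correlation between a cytokine and BMI at 16 weeks
pair = clean[["il6_16wk", "bmi_16wk"]].dropna()
rho, p_rho = stats.spearmanr(pair["il6_16wk"], pair["bmi_16wk"])
print(f"Spearman rho(IL-6, BMI) = {rho:.2f}, p = {p_rho:.3f}")
```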
[ null, null, null, null, null, null ]
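The Participants text above reports a two-sample comparison of 16-week weight change with alpha set at 0.025 after Bonferroni correction, 80% power, an assumed SD of 5.2 kg, and 2:1 allocation. A minimal sketch of that kind of calculation is shown below; the 4 kg detectable difference used here is an assumption for illustration, since the targeted effect size is not stated explicitly, so the printed sample sizes are not expected to reproduce the reported n = 63 exactly.

```python
# Sketch of a two-sample power calculation consistent with the reported design
# (alpha = 0.025, power = 0.80, SD = 5.2 kg, 2:1 intervention-to-control allocation).
# The 4 kg detectable difference is an illustrative assumption.
from statsmodels.stats.power import TTestIndPower

sd = 5.2            # assumed SD of 16-week weight change, kg
difference = 4.0    # hypothetical detectable between-group difference, kg
effect_size = difference / sd

analysis = TTestIndPower()
# nobs1 = control group size; ratio = n_intervention / n_control = 2
n_control = analysis.solve_power(effect_size=effect_size, alpha=0.025,
                                 power=0.80, ratio=2.0, alternative="two-sided")
print(f"control n ≈ {n_control:.0f}, intervention n ≈ {2 * n_control:.0f}")
```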
[ "Introduction", "Methods", "Participants", "Weight Loss Intervention", "Measures", "Blood Sampling and Assays", "Components of the intervention", "Data Analysis", "Results", "Discussion" ]
[ "Breast cancer is the most common invasive cancer among women in developed countries. It accounts for 26% of incident cancers and 15% of cancer deaths among women in the US, with an estimated 180,000 women diagnosed with breast cancer in 2008 [1]. Most breast cancers are now diagnosed at a localized stage, which is associated with a 5-year survival rate of 96% [1]. In addition, improvements in initial treatments have resulted in an ever-increasing number of breast cancer survivors [1, 2]. Recurrence, risks for second primary cancers, and comorbidities, such as diabetes, cardiovascular disease, and osteoporosis, are issues that need to be considered in long-term management of these women [3, 4].\nOverweight or obesity is a negative prognostic factor in both pre- and postmenopausal breast cancer [5, 6], and it is increasingly being recognized as a medical condition that is characterized by chronic mild inflammation [7]. Several mechanisms have been proposed to explain the adverse effect of overweight on prognosis after the diagnosis of breast cancer, including the unfavorable effects of obesity on circulating levels of inflammatory cytokines [8]. Inflammatory cytokines, such as interleukin-6 (IL-6), interleukin-8 (IL-8), tumor necrosis factor-α (TNF-α), and vascular endothelial growth factor (VEGF), have been consistently associated with breast pathology, and specifically, the development of breast cancer [9]. This is possibly a result of their regulatory impact on proliferation of breast cancer cells through estrogen production [10]. Even though the exact processes with which these cytokines may influence breast carcinoma is still under debate [11], higher levels of IL-6 and IL-8 are both associated with advanced disease and/or metastases in breast cancer patients [12]. In addition to influencing the risk and progression of cancer [13, 14], research efforts have identified chronic mild inflammation as an independent predictor of several other chronic diseases and mortality [15].\nOne probable explanation for the relationship between obesity and inflammation is the finding that adipose tissue functions as a major secretory organ for inflammatory markers, including TNF-α, IL-6, IL-8, and VEGF [14, 16, 17]. Furthermore, increased production and release of TNF-α, IL-6, and IL-8 by adipose tissue are associated with degree of obesity [8, 16]. Conversely, weight loss has been associated with a reduction in these inflammatory factors [18]. Most studies evaluating the influence of weight loss on cytokine levels relied primarily on reduced energy intake as a behavioral strategy [8, 19]. In a randomized clinical trial of weight loss and chronic inflammation in obese adults, Nicklas et al. [15] found that diet-induced weight loss of 5.7% on average resulted in significant reductions in concentrations of IL-6 and TNF-α. In a study with 120 premenopausal obese women (body mass index; BMI ≥ 30 kg/m2), a reduction in BMI in the intervention group was associated with lower serum levels of IL-6 and C-reactive protein (CRP) [20]. In a recent review, changes in cytokine levels were noted in all 19 studies designed to evaluate the effects of weight loss and exercise on markers of inflammation [19]. The duration of the interventions ranged from 4-6 weeks to 2 years, with reported weight loss ranging from 3.2% to 30% of body weight.\nPhysical activity has also been shown to affect local and systemic cytokine production. 
In several studies, exercise interventions of moderate intensity led to significant reductions in circulating levels of IL-6, TNF-α, and IL-8 in healthy individuals and in patients with cardiovascular disease [21–24]. In other studies, the biological response to exercise was found to be dependent on the intensity and duration of the activity [25].\nAlthough several studies have evaluated the relationship between weight loss, exercise, and circulating cytokine levels in healthy obese individuals [15, 26] or in individuals with various health conditions, these relationships have not been previously examined in overweight breast cancer survivors. The purpose of this study was to specifically examine the relationships between weight loss and physical activity and selected inflammatory markers in breast cancer survivors. Samples were obtained from women who participated in a small randomized trial, the Healthy Weight Management (HWM) Study for Breast Cancer Survivors (2002-2004), which successfully promoted weight loss in overweight or obese subjects assigned to the intervention arm. The current study is an exploratory analysis of the effect of weight loss and increased physical activity on inflammatory cytokines TNF-α, IL-6, IL-8 and VEGF at the end of the 16-week intensive intervention period.", "As a feasibility study, the HWM Study was designed as a randomized clinical trial to develop and test a multifaceted approach to promoting healthy weight management in the target population of overweight or obese breast cancer survivors. The intervention incorporated new elements of cognitive behavioral therapy for obesity, such as stronger emphasis on weight maintenance skills. Increased physical activity to promote maintenance of (or increase in) lean body mass, diet modification to facilitate an energy imbalance, and strategies to improve body image and self-acceptance were also emphasized as part of the program.", "The participants in the HWM Study were 85 breast cancer survivors living in San Diego, CA, USA. Primary recruitment procedures included community outreach and networking with clinical contacts to receive referrals. Other strategies included advertising in a major local newspaper and setting up booths at community events. Finally, a list of potentially eligible participants from the University of California, San Diego Cancer Registry was requested. A letter was sent to those on the list inviting them to contact the study coordinator if they were interested in participating in the study.\nThe inclusion criteria for the study were: 18 years and older; diagnosed with stage I-IIIA breast cancer within the previous 14 years; completed initial treatments (i.e., surgery, adjuvant chemotherapy, radiation therapy); initial BMI ≥ 25.0 kg/m2 (overweight or obese) and a minimum of 15 kg over ideal weight as defined by the Metropolitan Life Insurance Company tables [27]; willingness and ability to attend group meetings for 16 weeks and to maintain contact with the investigators for 1 year; and ability to provide dietary and exercise data by telephone at prescribed intervals. 
An exclusion criterion was the inability to participate in physical activity because of severe disability (e.g., severe arthritic conditions).\nAt screening and recruitment, the ability to participate in mild and moderate physical activity was assessed with the Physical Activity Readiness Questionnaire and Health History Questionnaire, a standard procedure for screening participants for community-based physical activity programs of this nature [28]. Following recruitment and written consent, participants were stratified by BMI [(25.0-29.9 (n = 38) versus >30.0 (n = 47) kg/m2)] and age [<=50 (n = 26), 51-65 (n = 47), >65 (n = 12)], and randomly assigned to either the group-based intervention program (n = 56) or a control group (n = 29), with a 2:1 intervention-to-control ratio to provide sufficient statistical power for the main study hypothesis (differential weight loss between groups), while minimizing subject numbers in this feasibility study. A test of two-sample comparison of the 16-week weight change scores was selected with the alpha (type one error) level set at 0.025 assuming a Bonferroni correction for multiple hypothesis tests. The power (or one minus the type two error) was 80%. The standard deviation for weight change, assumed equal in both groups at 16 weeks, was set at 5.2 kg based on data from Andersen et al. [29]. This sample size analysis indicated that a final number of 63 participants (42 intervention and 21 control), after accounting for dropouts, would provide an adequately powered comparison to detect a clinically significant effect size. Figure 1 includes the CONSORT flow chart for the HWM study.\nFig. 1CONSORT Chart for Healthy Weight Management Study including recruitment information\n\nCONSORT Chart for Healthy Weight Management Study including recruitment information", "The intervention sessions were led by trained investigators and research staff. The program curriculum consisted of group sessions provided according to the following schedule: weekly for 4 months, and follow-up monthly sessions through 12 months. The primary goal of the intervention was to promote regular physical activity and reduced energy intake in order to facilitate weight loss (Fig. 2). The group meetings consisted of discussion and educational/didactic sessions that covered the content areas, with the major proportion of time devoted to increasing physical activity. All intervention subjects also received intensive individualized telephone-based counseling from the study coordinator, starting with weekly calls and decreasing in frequency after the first month (every other week for the next 2 months, and once a month thereafter). The time points for data collection from all subjects were baseline, 16 weeks, and 12 months. The group sessions offered to the treatment study arm was closed-group contingents (with an average of 12-15 women). To equalize possible seasonal effects on targeted behaviors and weight change in the two study arms, wait list subjects were followed concurrent with intervention group subjects and received general contact such as mailed communications during the study period. At study end, they were provided all written intervention materials and a concise version of the didactic material along with facilitated discussion in the format of a 2-day seminar.\nFig. 
2Weight loss intervention curriculum topics\n\nWeight loss intervention curriculum topics", "Anthropometric measurements (height, weight, and waist and hip circumferences) were collected at baseline and 16 weeks using standard procedures, and body composition was measured with dual energy X-ray absorptiometry (DXA) using a Lunar DPX-NT densitometer (Lunar/GE Corp). Whole body, regional body fat, and percent fat were obtained from total body DXA scans. All scans were conducted by the same certified technician who was blinded on the assignment of the intervention for each participant.\nPhysical activity data were collected at baseline and 16 weeks using the 7-day physical activity recall instrument developed by Blair et al. [30]. This approach has been shown to be highly reliable (test-retest reliability = 0.99) [31], valid, and sensitive to the effects of physical activity promotion programs [30]. This instrument focuses on the participant’s daily activities over a 7-day period. A telephone interview is scheduled and the interviewer asks the participant to recall when and what kind of physical activity they had in the past week, and the intensity of their activity. Examples of moderate, hard, and very hard activities are provided to help them accurately identify the intensity.\nPhysical fitness data were collected with the 3-min stepping test, which was used to detect possible changes in aerobic fitness by measuring heart rate during the first 15 s of recovery from stepping. The stepping test has high reliability (0.92), is sensitive to change [32], and widely used to assess cardiorespiratory fitness [33].\n[SUBTITLE] Blood Sampling and Assays [SUBSECTION] Blood samples were collected at baseline and 16 weeks between the hours of 8 AM and 1 PM for a majority of the participants (83% at baseline, 68% at 16 weeks). Following centrifugation and separation, plasma or serum was stored at −80o C until assays were conducted. Levels of IL-6, TNF-α, IL-8, and VEGF were determined in duplicate by commercial ELISA with internal controls (R&D Systems, Mpls, MN). Intra-assay coefficient of variation (CVs) were <8%, and inter-assay CVs were <7%. Both samples from a given participant were assayed together [34].\nBlood samples were collected at baseline and 16 weeks between the hours of 8 AM and 1 PM for a majority of the participants (83% at baseline, 68% at 16 weeks). Following centrifugation and separation, plasma or serum was stored at −80o C until assays were conducted. Levels of IL-6, TNF-α, IL-8, and VEGF were determined in duplicate by commercial ELISA with internal controls (R&D Systems, Mpls, MN). Intra-assay coefficient of variation (CVs) were <8%, and inter-assay CVs were <7%. Both samples from a given participant were assayed together [34].\n[SUBTITLE] Components of the intervention [SUBSECTION] The overall content of the intervention included behavioral and cognitive strategies for implementing dietary modification and increasing physical activity [35]. The goal was to achieve a modest weight loss that is sustained, with an emphasis on features that increase this likelihood, such as acceptance of modest weight loss and focusing on skills for weight maintenance. The physical activity component involved encouraging and promoting regular planned aerobic exercise. The long-term goal was to achieve an average of at least 1 h/day of planned exercise at a moderate level of intensity, which is consistent with the current Institute of Medicine recommendations [36]. 
The main goal of the dietary guidance component was to promote a reduction in energy intake relative to expenditure, with a goal being an energy deficit of 500-1,000 kcal/day by individualized diet modification that emphasized reduced energy density of the overall diet [37], while avoiding excessive dietary restraint. The wait list group participants were provided only general contact (monthly check-up calls, holiday and seasonal cards, and mailed communications) without specific reference to weight management topics through a 12-month period of data collection. Following that period, they were provided all written intervention materials and a concise version of the didactic material, and facilitated discussion was offered in the format of a 2-day seminar.\nThe overall content of the intervention included behavioral and cognitive strategies for implementing dietary modification and increasing physical activity [35]. The goal was to achieve a modest weight loss that is sustained, with an emphasis on features that increase this likelihood, such as acceptance of modest weight loss and focusing on skills for weight maintenance. The physical activity component involved encouraging and promoting regular planned aerobic exercise. The long-term goal was to achieve an average of at least 1 h/day of planned exercise at a moderate level of intensity, which is consistent with the current Institute of Medicine recommendations [36]. The main goal of the dietary guidance component was to promote a reduction in energy intake relative to expenditure, with a goal being an energy deficit of 500-1,000 kcal/day by individualized diet modification that emphasized reduced energy density of the overall diet [37], while avoiding excessive dietary restraint. The wait list group participants were provided only general contact (monthly check-up calls, holiday and seasonal cards, and mailed communications) without specific reference to weight management topics through a 12-month period of data collection. Following that period, they were provided all written intervention materials and a concise version of the didactic material, and facilitated discussion was offered in the format of a 2-day seminar.", "Blood samples were collected at baseline and 16 weeks between the hours of 8 AM and 1 PM for a majority of the participants (83% at baseline, 68% at 16 weeks). Following centrifugation and separation, plasma or serum was stored at −80o C until assays were conducted. Levels of IL-6, TNF-α, IL-8, and VEGF were determined in duplicate by commercial ELISA with internal controls (R&D Systems, Mpls, MN). Intra-assay coefficient of variation (CVs) were <8%, and inter-assay CVs were <7%. Both samples from a given participant were assayed together [34].", "The overall content of the intervention included behavioral and cognitive strategies for implementing dietary modification and increasing physical activity [35]. The goal was to achieve a modest weight loss that is sustained, with an emphasis on features that increase this likelihood, such as acceptance of modest weight loss and focusing on skills for weight maintenance. The physical activity component involved encouraging and promoting regular planned aerobic exercise. The long-term goal was to achieve an average of at least 1 h/day of planned exercise at a moderate level of intensity, which is consistent with the current Institute of Medicine recommendations [36]. 
The main goal of the dietary guidance component was to promote a reduction in energy intake relative to expenditure, with a goal being an energy deficit of 500-1,000 kcal/day by individualized diet modification that emphasized reduced energy density of the overall diet [37], while avoiding excessive dietary restraint. The wait list group participants were provided only general contact (monthly check-up calls, holiday and seasonal cards, and mailed communications) without specific reference to weight management topics through a 12-month period of data collection. Following that period, they were provided all written intervention materials and a concise version of the didactic material, and facilitated discussion was offered in the format of a 2-day seminar.", "Data were analyzed on all participants (n = 68) who had data for weight, waist, percent body fat, physical fitness, physical activity, and inflammatory cytokines at baseline and 16 weeks, following the intensive intervention period, to explore the association between weight loss (independent variable) and change in each inflammatory factor (dependent variable). The relationship between physical activity (independent variable) and change in inflammatory biomarkers was also examined. Although 12 month data were collected as part of the parent study, the present findings describe data from the 16-week data collection period when blood samples were analyzed for cytokine assays.\nChange variables were computed to evaluate group differences in key study outcomes, such as BMI, weight, body composition, and level of physical activity. Group differences in outcome variables at 16 weeks between the intervention and control groups were assessed with independent t tests. After excluding values of cytokine data that exceeded three standard deviations from the overall mean, within group differences between baseline and 16 weeks were evaluated with paired t tests. Spearman correlations (excluding outliers) examined relationships between cytokines, BMI, percent body fat, and physical activity at baseline and at 16 weeks. Regression analyses explored the association between the increase in physical activity levels (independent variable) and change in each inflammatory factor (dependent variable), controlling for weight loss and change in stepping test heart rate. An alpha value ≤0.05 was considered statistically significant. Data were analyzed using SPSS for Windows, Version 11.5 (2002) and SAS statistical software, version 9.2 (2008).", "Participant ranged from 33 to 71 years of age. Ninety-four percent of the participants were non-Hispanic white. The majority of the participants were married (77%), and many had completed college or higher levels of education. No significant differences were found between intervention and control groups for demographic characteristics such as age, level of education, and race/ethnicity. 
Similarly, no differences at baseline were observed for outcome measures such as BMI, weight, and physical fitness or activity levels (Table 1).\nTable 1Characteristics of the study groups at baselineInterventionControl(n = 44)(n=24)VariablesMean (SD)Mean (SD)Age, years56 (9)56 (8)Years of education, years16 (2)16 (2)Body mass index, kg/m2\n30.8 (3.8)31.3 (5.2)Weight, kg83.9 (11.8)87.2 (14.7)Waist circumference, cm101.5 (12.0)106.7 (13.4)Total body fat, kg36.9 (7.5)40.4 (10.2)Step test, HR/30 s60 (8)57 (7)Moderate or vigorous physical activity, h/week3.2 (2.1)3.7 (3.3)None of the means are significantly different between groups at baseline\n\nCharacteristics of the study groups at baseline\nNone of the means are significantly different between groups at baseline\nAccording to independent t tests, the magnitude of reduction in BMI (P < 0.0001), weight (−6.8% in intervention and −0.3% in control, P < 0.0001), waist circumference (P < 0.05), and percent body fat (P < 0.0001) between baseline and 16 weeks was significantly greater for participants in the intervention group (Table 2). Additionally, performance on the stepping test indicated better fitness (P < 0.05), and hours of moderate or vigorous physical activity, between baseline and 16 weeks improved significantly more for participants in the intervention group than for controls (P < 0.05; Table 2).\nTable 2Mean differences in magnitude of change for key variables between baseline and 16 weeksMean (SD)Intervention (n = 44)Control (n = 24)Change in body mass index, kg/m2−2.1 (1.3)**−0.1 (1.5)Change in weight, kg−5.7 (3.5)**−0.2 (4.1)Change in waist circumference, cm−7.1 (6.4)*−2.5 (7.7)Change in percent body fata\n−4.5 (3.8)**−0.9 (2.3)Change in step test, HR/30 s−6.0 (8)*−1.0 (6)Change in physical activity levels, h/week2.2 (3.3)*0.3 (3.7)\naOne control and two intervention subjects had missing body composition data (one intervention at baseline; one intervention and one control at 16 weeks)*P < 0.05; **P < 0.0001\n\nMean differences in magnitude of change for key variables between baseline and 16 weeks\n\naOne control and two intervention subjects had missing body composition data (one intervention at baseline; one intervention and one control at 16 weeks)\n*P < 0.05; **P < 0.0001\nAccording to paired t tests evaluating within-group differences in inflammatory factors for the intervention group between baseline and 16 weeks, levels of TNF-α significantly reduced (P < 0.05). A reduction was also noted for IL-6 level (P = 0.06; Table 3). TNF-α was also found to be decreased at 16 weeks for the control group (P < 0.05; Table 3). 
No differences were noted for IL-8 and VEGF.\nTable 3Within group differences for change in cytokine levels between baseline and 16 weeksInterventionControlMean (SD)\nN\nBaseline16 Weeks\nN\nBaseline16 WeeksVariablesMean (SD)Mean (SD)\nP valueMean (SD)Mean (SD)\nP valueTNFα (pg/mL)425.9 (2.0)5.4 (1.9)0.03245.4 (1.3)4.6 (1.0)0.0001IL-6 (pg/mL)431.7 (0.9)1.4 (0.9)0.06241.7 (1.3)1.4 (0.8)0.33IL-8 (pg/mL)434.8 (1.7)5.1 (2.0)0.29234.5 (1.5)4.4 (1.9)0.75VEGF (pg/mL)4229.3 (21.8)33.5 (27.0)0.202334.9 (22.5)31.3 (16.6)0.38\nIL-6 interleukin-6, IL-8 interleukin-8, TNF-α tumor necrosis factor-α, and VEGF vascular endothelial growth factorThree sigma outliers (one each for IL-6, IL-8, and VEGF and one for both Il-6 and VEGF in the intervention group; one each for IL-8 and VEGF in the control group) were excluded\n\nWithin group differences for change in cytokine levels between baseline and 16 weeks\n\nIL-6 interleukin-6, IL-8 interleukin-8, TNF-α tumor necrosis factor-α, and VEGF vascular endothelial growth factor\nThree sigma outliers (one each for IL-6, IL-8, and VEGF and one for both Il-6 and VEGF in the intervention group; one each for IL-8 and VEGF in the control group) were excluded\nCorrelation analysis showed that several inflammatory factors were associated with key outcome measures for the participants in the intervention group. Both IL-6 and VEGF were positively correlated with BMI at 16 weeks (r = 0.37, P < 0.05 for IL-6, and r = 0.44, P < 0.01 for VEGF). IL-6 levels at 16 weeks were also positively correlated with performance on step test (r = 0.42, P < 0.01). Increased total hours of moderate or vigorous exercise at 16 weeks was correlated with favorable reductions in IL-6 (r = −0.35, P < 0.05) and VEGF (r = −0.46, P < 0.01) between baseline and 16 weeks.\nIn a regression analysis using participants in the intervention group, controlling for change in weight and change in heart rate/min after the stepping test, increased level of physical activity was associated with favorable changes in IL-6 levels (R\n2=0.18, P < 0.05; Table 4). Other cytokines did not show significant associations with change in physical activity.\nTable 4Regression model of factors associated with IL-6 levels at 16 weeks in the intervention group (n = 38)Variableβ-coefficientSignificance (P value)\nR\n2\nIncrease in moderate or vigorous physical activity, hours/week−0.1250.02Change in weight, kg0.010.2Change in heart rate/min after step test−0.010.70.18Excluding one 3-sigma outlier\n\nRegression model of factors associated with IL-6 levels at 16 weeks in the intervention group (n = 38)\nExcluding one 3-sigma outlier", "Several possible mechanisms by which weight loss and physical activity may play a role in reducing breast cancer risk have been proposed [38]. This small randomized clinical trial provides an opportunity to evaluate the short-term effects of weight loss and increased physical activity on circulating cytokines IL-6, IL-8, TNF-α and VEGF in overweight or obese breast cancer survivors.\nParticipants in this study lost nearly 7% of body weight at the end of the intensive intervention period at 16 weeks. They also reported increased physical activity and demonstrated improved cardiorespiratory fitness at this time point. These findings have promising public health implications because the vast majority of women who have been diagnosed with breast cancer are overweight or obese and exercise at very low levels of intensity and duration [39–41]. 
Also, concern with overweight and weight gain is a common complaint among breast cancer survivors [42]. In a comprehensive review of observational studies on breast cancer recurrence or survival, Rock and Demark-Wahnefried [6] reported that increased BMI and/or excessive adiposity is a significant risk factor for recurrent disease and/or decreased survival in a majority of the studies. The findings from this exploratory study suggest that increased levels of physical activity and weight loss achieved by participants in this weight loss intervention may positively influence the rates of survival in these women by reducing overall inflammation [19].\nThe current study also explored changes in levels of circulating cytokines in these overweight and obese breast cancer survivors because inflammatory cytokines are thought to increase with the degree of adiposity [16], and weight loss has been associated with a reduction in the levels of inflammatory factors in the general population. An association with breast pathology and inflammatory cytokines has been noted in previous research studies [9]. In addition to losing a notable amount of weight, participants in the intervention group reported an increase in level of moderate or vigorous physical activity and improved fitness. During that time period, levels of two inflammatory factors declined; IL-6 for the intervention group and TNF-α for both groups. The observation of a decrease in TNF-α for the control group suggests that the relationship between obesity and TNF-α production by adipose tissue may not be clearly established. Recently, Bastard et al. [18] concluded that the precise role of TNF-α in human obesity needs further investigation because adipose tissue does not seem to be directly implicated in the increased circulating TNF-α levels observed in obese humans. Evidence from other studies suggest lower levels of TNF-α in breast cancer patients and a possible anti-tumor effect on breast cancer cells [12], in addition to its effects on promoting cellular transformation and metastasis [38]. The precise role of TNF-α in relation to obesity and physical activity needs to be investigated further in order to better understand the decline observed in this study.\nWe also observed positive associations for BMI, percent body fat, and IL-6 after the intensive intervention period of 16 weeks. Similar significant positive associations with CRP, BMI, and waist circumference were identified in a recent study with breast cancer survivors [43, 44]. Further, the reduction in IL-6 level was correlated with increased total hours of moderate or vigorous physical activity in both univariate and multivariate analysis. These findings are noteworthy, because even though previous studies have shown that increased exercise may reduce the levels of circulating inflammatory factors [21–23], similar findings have not been previously reported in breast cancer survivors. In a review of the biological mechanisms that may explain the affect of physical activity on breast cancer risk, Neilson et al. [38] concluded that even though weight loss can decrease levels of IL-6, physical activity may alter IL-6 levels through an independent mechanism that is not yet well-understood.\nThese findings provide some insight into the relationship between weight loss, increased physical activity, and inflammatory cytokines, supporting the suggestion that further research should be pursued in this arena. 
Even though higher cytokine levels have been associated with increased disease risk across studies, identifying the magnitude of change that could be considered beneficial for health outcomes remains a challenge, possibly as a result of multiple factors effecting this relationship [45, 46]. Future research aiming to determine effective levels of change in cytokines in response to weight loss or increased physical activity would be valuable. Due to the small sample size, the findings from the current study should be considered exploratory. Moreover, because the participants in this study were mostly non-Hispanic whites, the results might not be generalizable to breast cancer survivors representing other racial/ethnic groups.\nUnderstanding the complex associations between obesity, physical activity, and cytokine levels as they relate to breast cancer risk has clinical implications because of the potential roles they may play as part of immunotherapic interventions [12, 47]. Findings from this study contribute to exploring the mechanisms by which excessive adiposity increases risk for recurrence and reduces likelihood of survival following the diagnosis and treatment of early stage breast cancer. The findings also contribute to the knowledge base of the complex interactions between inflammatory factors and morbidity and mortality relating to cancer." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", "discussion" ]
[ "Weight loss", "Physical activity", "Exercise", "Inflammatory factors", "Obesity", "Breast cancer survivors" ]
Predictive factors for functional improvement after intravitreal bevacizumab therapy for macular edema due to branch retinal vein occlusion.
21337042
To identify predictive factors for improvement of visual acuity and central retinal thickness by intravitreal bevacizumab for the treatment of macular edema (ME) due to branch retinal vein occlusion (BRVO).
BACKGROUND
Two hundred and five eyes from 204 patients with ME secondary to BRVO were retrospectively included at six sites. All eyes received intravitreal bevacizumab therapy (1.25 mg/0.05 ml). The mean follow-up was 36.8 ± 12.7 weeks (range, 18 to 54 weeks). Measurement of ETDRS best-corrected visual acuity (BCVA, in all eyes) and optical coherence tomography (OCT, in 87% of eyes) were performed at baseline and at follow-up examinations every 12 weeks. Using fluorescein angiography, the perfusion status of the macula at baseline could be assessed in 84% of the eyes. The main outcome measures were changes in BCVA and central retinal thickness (CRT). For analysis of predictive factors, the results at 24 weeks were used.
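The methods describe best-corrected visual acuity measured on ETDRS charts and reported in logMAR, with changes later summarized in "lines". The arithmetic is simple: each 5-letter ETDRS line corresponds to 0.1 logMAR. The helpers below illustrate that relationship; the 85-letter anchor at 0.0 logMAR follows the usual ETDRS convention, and the function names are my own rather than anything used in the study.

```python
# Small helpers relating ETDRS letter scores, logMAR, and "lines" of acuity.
# Assumes the standard convention that 85 letters corresponds to 0.0 logMAR and
# each 5-letter line equals 0.1 logMAR; names are illustrative.

def letters_to_logmar(letters: float) -> float:
    """Approximate logMAR from an ETDRS letter score (85 letters = 0.0 logMAR)."""
    return (85.0 - letters) / 50.0

def logmar_change_in_lines(baseline_logmar: float, followup_logmar: float) -> float:
    """Improvement in lines (positive = better); one line equals 0.1 logMAR."""
    return (baseline_logmar - followup_logmar) / 0.1

print(letters_to_logmar(70))             # 0.3 logMAR
print(logmar_change_in_lines(0.6, 0.4))  # 2-line improvement, as reported at 24 weeks
```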
METHODS
The median BCVA was 0.6 LogMAR at baseline and improved to 0.4 LogMAR at 24 and 48 weeks. This visual improvement was associated by a significant reduction in CRT, decreasing from a baseline of 454 μm to 267 μm and 248 μm after 24 and 48 weeks respectively. Eyes with ME and intact (perfused) or interrupted (ischemic) foveal capillary ring showed a 2-line increase of median BCVA [45 eyes (22%) and 128 eyes (62%) respectively]. However, the final median BCVA was significantly worse in eyes with ischemic ME (0.6 versus 0.3 logMAR in perfused ME). Other factors for visual improvement were absence of previous treatments of the ME, age younger than 60 years and low baseline BCVA (≥0.6 logMAR) (2, 3, and 2 median BCVA lines increase respectively). Furthermore, eyes with duration of the ME of less than 12 months responded with a 3-line increase of the median BCVA. Final CRT only showed minor differences between the subgroups. During the entire follow-up, retreatments were performed in 85% of the eyes, with a median number of injections of three (mean 3.2; range, 1 to 10) and a median time-interval between injections of 11.6 weeks (mean 14.6 weeks).
RESULTS
Intravitreal injection of bevacizumab resulted in a significant improvement of BCVA and reduction of ME in BRVO. Baseline BCVA, patient's age, and duration of BRVO were found to be of prognostic relevance for visual improvement. A less favorable outcome of the bevacizumab therapy in eyes with longstanding BRVO would advocate initiation of treatment within 12 months after onset.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Angiogenesis Inhibitors", "Antibodies, Monoclonal", "Antibodies, Monoclonal, Humanized", "Bevacizumab", "Female", "Fluorescein Angiography", "Follow-Up Studies", "Humans", "Intravitreal Injections", "Macular Edema", "Male", "Middle Aged", "Retina", "Retinal Vein Occlusion", "Retrospective Studies", "Vascular Endothelial Growth Factor A", "Visual Acuity" ]
3042100
Introduction
Secondary macular edema (ME) is one of the main reasons for loss of visual acuity in branch retinal vein occlusion (BRVO). The randomized, controlled Branch Vein Occlusion Study showed limited treatment benefit in eyes with perfused ME: Grid photocoagulation of the edematous macula resulted in a better visual improvement than in the natural course of the disease [1]. Actually, grid photocoagulation was confirmed as the benchmark in a randomized trial, with 29% of the eyes gaining 3 or more best corrected visual acuity (BCVA) lines (≥15 letters) after 1 year. Intravitreal injection of the corticosteroid triamcinolone acetonide has not been shown to be more effective in BRVO than grid photocoagulation [2] and efficacy of intravitreal pegaptanib therapy is unclear [3]. Also, surgical approaches including vitrectomy with or without peeling of the inner limiting membrane [4], arteriovenous dissection (sheathotomy) [5], laser-induced chorioretinal anastomosis [6], and surgical cannulation of branch retinal veins [7] failed to demonstrate a relevant benefit. Therefore, a more efficacious treatment strategy has been sought. Bevacizumab (Avastin®, Genentech, San Francisco, CA, USA) is a humanized monoclonal antibody directed against the vascular endothelial growth factor (VEGF). The rationale for its intravitreal application in BRVO was that vascular occlusion induces upregulation of VEGF, resulting in increased vascular permeability and subsequent ME 8–10. Recently, various clinical studies demonstrated beneficial effects of anti-VEGF therapy on both ME and BCVA in patients with BRVO [11–18]. Moreover, this minimally invasive therapy might be even more effective than grid photocoagulation, which is the current standard of care. A prospective study on previously untreated eyes with perfused ME secondary to BRVO demonstrated a gain of 3 or more BCVA lines in 57% at 1 year [14]. However, the significance of previous studies was limited, due to the relatively small sample sizes. In addition, the optimal time point for initiation of therapy remains unclear. More importantly, there is still minimal knowledge concerning predictive factors for visual outcome. Because of the large number of patients included, this is the first study to permit a detailed subgroup analysis. This made it possible to investigate various potential predictive factors, including macular perfusion status, duration of the ME, patients’ age, baseline BCVA, number of injections applied, and previous treatments before intravitreal bevacizumab therapy in clinical practice. [SUBTITLE] Subjects and methods [SUBSECTION] The study was designed as a multicenter retrospective analysis of patients that received intravitreal bevacizumab therapy for the treatment of BRVO associated with a ME involving the foveal center. Patients received the first bevacizumab injection between October 2005 and May 2009. Only patients that finished the follow-up examination at 24 weeks were included. Eyes that had undergone vitrectomy prior to bevacizumab treatment were excluded due to the different pharmacokinetics. Eyes with other diseases affecting BCVA or presence of neovascularisation were excluded from the analysis. If patients had received peripheral, focal or grid laser photocoagulation, cyclodestructive interventions, cataract surgery, or other surgical procedures during the follow-up they were also excluded. 
Baseline examination comprised a complete eye examination, including ETDRS BCVA, slit-lamp biomicroscopy, dilated fundus examination, and optical coherence tomography (OCT) in most eyes. To assess the perfusion status of the macula, preservation (perfused ME) or capillary drop-out of the foveal capillary ring (ischemic ME) [1], fluorescein angiographies at baseline were analyzed. In case of hemorrhages in the macular area that obscured the foveal capillary ring, the perfusion status was not evaluated until resorption of these hemorrhages was seen. Only retina specialists experienced in the analysis of fluorescence angiograms were in charge of the analysis in each center. The study followed the tenets of the Declaration of Helsinki, and was approved by the local ethics committees at each site.
null
null
Results
This multicenter, retrospective interventional case series enrolled 205 eyes (204 patients) from six centres that were treated with intravitreal bevacizumab due to ME secondary to BRVO. Patient characteristics at baseline are shown in Table 1. The median follow-up was 36.7 weeks (range, 18 to 54 weeks; mean 36.8 ± 12.7 weeks). The median age of the patients was 69 years (range, 38 to 87 years). The bevacizumab treatment resulted in a significant improvement of the median BCVA (ANOVA p < 0.001), increasing from 0.6 logMAR at baseline to 0.5 logMAR at 12 weeks and 0.4 logMAR at 24 weeks (both p < 0.001). Ninety-one eyes (44.4%) finished the final follow-up examination at 48 weeks. They showed a maintenance of the median BCVA of 0.4 logMAR, corresponding to a total improvement of 2 median BCVA lines compared to baseline (p < 0.001, Fig. 1a). OCT data were available in 91%, 82%, and 87% of all eyes at baseline, 12, and 24 weeks, respectively. Accordingly, reduction of the CRT was highly significant, with the median CRT (ANOVA p < 0.001) decreasing from a baseline of 454 μm to 304 μm and 267 μm after 12 and 24 weeks, respectively (both p < 0.001). This significant reduction was preserved over the entire follow-up, with a final median CRT of 248 μm at 1 year (63 eyes, p < 0.001, Fig. 1b). During the follow-up, a median of three injections (mean 3.2; range, 1 to 10) was administered, with a median injection frequency of 11.6 weeks (mean 14.6 weeks).
Table 1  Predictive factors for visual improvement (at 24 weeks)
Factor | Number of eyes | Increase of median BCVA lines | P value (increase) | Final median BCVA (logMAR) | ANOVA (p value)
Gender | | | | | 0.83
  Male | 101 (49%) | 2 | <0.001 * | 0.4 |
  Female | 104 (51%) | 2 | <0.001 * | 0.4 |
Hypertension | | | | | 0.89
  Yes | 131 (64%) | 2 | <0.001 * | 0.4 |
  No | 69 (34%) | 2 | <0.001 * | 0.4 |
Perfusion status of the macula | | | | | 0.95
  Ischemic | 45 (22%) | 2 | <0.001 * | 0.6 |
  Perfused | 128 (62%) | 2 | <0.001 * | 0.3 |
Pretreatment | | | | | 0.86
  Yes | 26 (13%) | 0.5 | <0.005 | 0.5 |
  No | 176 (86%) | 2 | <0.001 * | 0.4 |
Patients' age (years) | | | | | <0.01 *
  <60 | 41 (20%) | 3 | <0.001 * | 0.3 |
  ≥60 | 163 (80%) | 1 | <0.001 * | 0.5 |
Baseline BCVA (logMAR) | | | | | <0.001 *
  ≤0.5 | 91 (44%) | 1 | <0.001 * | 0.3 |
  ≥0.6 | 114 (56%) | 2 | <0.001 * | 0.6 |
Duration of BRVO (months) | | | | | 0.03 *+
  <3 | 63 (31%) | 2.5 | <0.001 * | 0.3 |
  3–12 | 71 (35%) | 2 | <0.001 * | 0.4 |
  >12 | 60 (29%) | 0.5 | <0.01 | 0.5 |
Number of injections | | | | | 0.48
  1 | 46 (22%) | 2.5 | <0.001 * | 0.35 |
  2 | 79 (39%) | 2 | <0.001 * | 0.5 |
  ≥3 | 74 (36%) | 2 | <0.001 * | 0.4 |
* = Significant difference (Bonferroni-corrected)
+ = Post-hoc test (Tukey–HSD): significant difference between <3 months and >12 months
Fig. 1  Box plot graphs showing the course of best-corrected visual acuity (BCVA) (a) and central retinal thickness (CRT) (b) over the 48-week follow-up. a Increase of BCVA, and b decrease of the CRT following bevacizumab treatment. Note the stabilisation after 24 weeks. n = number of eyes included
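The results above express visual-acuity change interchangeably as logMAR values, BCVA lines, and (in the cited trials) ETDRS letters; by the usual convention, 1 BCVA line corresponds to 0.1 logMAR and approximately 5 ETDRS letters. A minimal Python sketch of that arithmetic follows; it is illustrative only, and the function name and example values are assumptions, not part of the study's analysis:

```python
# Illustrative helper (not from the original study): converts a change in
# logMAR visual acuity into BCVA lines and approximate ETDRS letters.
# Assumes the usual conventions: 1 line = 0.1 logMAR = ~5 ETDRS letters,
# and that lower logMAR means better acuity.

def acuity_gain(baseline_logmar: float, followup_logmar: float) -> dict:
    """Return the gain in BCVA lines and approximate letters between two logMAR values."""
    delta = baseline_logmar - followup_logmar   # positive = improvement
    lines = delta / 0.1
    letters = lines * 5
    return {"delta_logmar": round(delta, 2),
            "lines_gained": round(lines, 1),
            "letters_gained": round(letters, 1)}

# Example mirroring the reported medians: 0.6 logMAR at baseline,
# 0.4 logMAR at 24 weeks -> a 2-line (~10-letter) gain.
print(acuity_gain(0.6, 0.4))
```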
null
null
[ "Subjects and methods", "Intravitreal bevacizumab injection", "Follow-up", "Statistics", "Analysis of predictive factors" ]
[ "The study was designed as a multicenter retrospective analysis of patients that received intravitreal bevacizumab therapy for the treatment of BRVO associated with a ME involving the foveal center. Patients received the first bevacizumab injection between October 2005 and May 2009. Only patients that finished the follow-up examination at 24 weeks were included. Eyes that had undergone vitrectomy prior to bevacizumab treatment were excluded due to the different pharmacokinetics. Eyes with other diseases affecting BCVA or presence of neovascularisation were excluded from the analysis. If patients had received peripheral, focal or grid laser photocoagulation, cyclodestructive interventions, cataract surgery, or other surgical procedures during the follow-up they were also excluded.\nBaseline examination comprised a complete eye examination, including ETDRS BCVA, slit-lamp biomicroscopy, dilated fundus examination, and optical coherence tomography (OCT) in most eyes. To assess the perfusion status of the macula, preservation (perfused ME) or capillary drop-out of the foveal capillary ring (ischemic ME) [1], fluorescein angiographies at baseline were analyzed. In case of hemorrhages in the macular area that obscured the foveal capillary ring, the perfusion status was not evaluated until resorption of these hemorrhages was seen. Only retina specialists experienced in the analysis of fluorescence angiograms were in charge of the analysis in each center.\nThe study followed the tenets of the Declaration of Helsinki, and was approved by the local ethics committees at each site.", "The intravitreal injection was performed according to the recommendation of the German Retina Society [19]. The injections were performed after topical anaesthesia and preoperative antisepsis with povidon iodine using sterile gloves, drape, and a lid speculum. Bevacizumab (Avastin®) 1.25 mg in a 0.05 ml total volume was injected intravitreally via the pars plana. Following the injection, retinal perfusion was controlled.\nAll patients were informed about the nature of off-label use and the experimental nature of the therapy before signing an informed consent prior to each injection. Also, they confirmed that they were aware of the potential side-effects of bevacizumab treatment. Patients with contraindications against an intravitreal bevacizumab injection (acute ocular infection, recent history of stroke or myocardial infarction, unstable angina pectoris, uncontrolled hypertension, uncompensated renal insufficiency, allergy to bevacizumab, or pregnancy) were excluded from treatment [20].", "Patients were followed on a routine clinical program. Follow-up examinations at 12, 24, 36 and 48 weeks (±6 weeks) were analyzed. The main outcome measures of this study were BCVA and central retinal thickness (CRT) as measured by OCT (Stratus OCTTM, Carl Zeiss Meditec, Jena, Germany, axial tissue resolution 10-15 μm, or Spectralis® OCT, Heidelberg Engineering, Dossenheim, Germany, axial tissue resolution 4 μm). To minimize variability between different devices, all patients were followed up with the same OCT. CRT was calculated as the distance between the inner limiting membrane and the retinal pigment epithelium–choriocapillaris interface of radial lines through the foveal area [21]. The foveal area was determined using the patient’s fixation and retinal landmarks. The calipers were set by hand because automated measurement protocols are more prone to errors [22]. 
For comparison, the normal CRT in healthy eyes was reported to be 170 ± 18 μm [23].\nRe-injection was considered at each follow-up visit, and performed after informed consent of the patient, depending on the individual course of the BCVA and persistence or recurrence of ME on OCT.", "VA data were converted to logMAR units before analysis. All continuous variables (BCVA and CRT) were described as box plots showing 5% and 95% quantiles (whiskers), 25% and 75% quartiles (box), and the median (marked by an asterisk). While conventional statistical inference in this study is based on sample means, we use median values as a robust index of the sample's central tendency for descriptive reporting of the data.\nThe following prognostic variables were studied: perfusion status of the foveal capillary ring (ischemic or perfused), existence of pre-treatment, patients’ age (younger than 60 years or 60 years and older), duration of BRVO (<3 months, 3 to 12 months, or >12 months), baseline BCVA (≤0.5 logMAR or ≥0.6 logMAR), number of injections applied (one, two or three and more), gender (male or female), and presence of arterial hypertension.\nOverall differences in VA and CRT at 0, 12, and 24 weeks were assessed using repeated measures analysis of variance (ANOVA). Subgroup-specific changes in VA and CRT were analyzed using paired t-tests (two-sided). Significant prognostic factors for an increase in VA were identified by between-subjects multifactorial ANOVA and Tukey’s HSD post-hoc tests where appropriate. Further differences between subgroups were analyzed using unpaired t-tests, and between-subjects ANOVAs in the case of more than two subgroups. To control for multiple comparisons, Bonferroni correction was applied with respect to the overall number of tests for VA and CRT differences respectively (VA: eight tests, CRT: eight tests), resulting in an adjusted significance threshold of p = 0.00625. All statistical analysis was performed using SPSS 16 (SPSS Inc, Chicago, IL, USA).", "Because BCVA and CRT did not significantly change between 24 and 48 weeks (Fig. 1a,b), analysis of predictive factors was performed on the basis of the 24-week results of all 205 eyes included.\nEvaluation of the perfusion status of the macular area revealed an ischemic ME with a broken foveal capillary ring in 22% (45 eyes) and a perfused ME in 62% (128 eyes). Sufficient information on the perfusion status was not available in 16% (32 eyes) (Table 1). Interestingly, both subgroups with perfused and ischemic ME improved by 2 median BCVA lines at 24 weeks (both p < 0.001). However, eyes with an ischemic ME started from a worse median baseline BCVA of 0.8 logMAR compared to 0.5 logMAR in the subgroup with a perfused ME (p < 0.001). Hence, their final median BCVA of 0.6 logMAR at 24 weeks was significantly worse than the 0.3 logMAR reached in eyes with a perfused ME (p < 0.001) (Fig. 2a). Comparing the baseline and final median CRT in the two subgroups, there was no significant difference between the subgroup with ischemic and perfused ME (baseline: 500 μm and 450 μm (p = 0.04, n.s.); 24 weeks: 266 μm and 250 μm (p = 0.44, n.s.) respectively) (Fig. 2b).\nFig. 2 Box plot graphs showing the bevacizumab effect depending on the duration of the perfusion status of the macula (a, b), on the existence of pretreatment (c, d), on the patients’ age (e, f), and on the baseline visual acuity (VA) (g, h) during the 24 weeks follow-up. 
Left column demonstrates the course of the BCVA, the right column shows the central retinal thickness (CRT)\n\nBox plot graphs showing the bevacizumab effect depending on the duration of the perfusion status of the macula (a, b), on the existence of pretreatment (c, d), on the patients’ age (e, f), and on the baseline visual acuity (VA) (g, h) during the 24 weeks follow-up. Left column demonstrates the course of the BCVA, the right column shows the central retinal thickness (CRT)\nPretreatment had been undertaken in 13% (26 eyes); 23 eyes had undergone grid laser photocoagulation, and seven eyes had received intravitreal triamcinolone injection prior to bevacizumab treatment. Eighty-six percent (176 eyes) received bevacizumab as a primary therapy for BRVO (Table 1). Interestingly, the pretreated subgroup only showed a visual improvement of 0.5 median BCVA lines from a median of 0.55 logMAR to 0.5 logMAR at 24 weeks (p < 0.005, Fig. 2c), and no reliable decrease of the median CRT from 350 μm at baseline to 274 μm at 24 weeks (p = 0.113, Fig. 2d). In contrast, the previously untreated eyes responded with a 2-line increase of the median BCVA (0.6 logMAR to 0.4 logMAR, p < 0.001, Fig. 2c), together with a reduction of the CRT (463 μm to 266 μm, p < 0.001, Fig. 2d). The duration of the BRVO-associated symptoms was significantly longer in the pretreated subgroup, with 21.4 months versus 4.3 months in previously untreated eyes (p < 0.005).\nAnalysis of the patients’ age at the time of BRVO confirmed it as a significant prognostic factor for gain in BCVA (multi-factorial ANOVA, p < 0.001), with a better response in younger individuals (Table 1). The subgroup of patients who were younger than 60 years of age (41 eyes, 20%) revealed a considerable 3-line increase of the median BCVA (from 0.6 logMAR at baseline to 0.3 logMAR at 24 weeks, p < 0.001). In contrast, patients of 60 years and older (163 eyes, 80%) only showed a gain of 1 median BCVA line (from 0.6 logMAR at baseline to 0.5 logMAR at 24 weeks), which was statistically significant, p < 0.001, Fig. 2e). This is despite the fact that the median CRT of younger and older patients was not significantly different, neither at baseline (463 μm compared to 350 μm, p = 0.86), nor at 24 weeks (260 μm compared to 275 μm, p = 0.70) (Fig. 2f). However, the median duration of BRVO prior to bevacizumab therapy was significantly shorter in younger individuals (3.0 months; range, 0.0 to 75 months) than in older individuals (6.5 months; range, 0.0 to 163 months) (p < 0.05). Moreover, in the younger subgroup, only 5% (two eyes) had received pretreatment (intravitreal triamcinolone injection), compared to 15% (24 eyes) in the older subgroup.\nTo investigate the prognostic value of the baseline BCVA, the eyes were divided into two subgroups with a BCVA ≥ 0.6 logMAR (range: 0.6 to 2.0 logMAR, 114 eyes) and ≤ 0.5 logMAR (range, 0.1 to 0.5 logMAR, 91 eyes) before treatment. Our data showed that eyes with a low baseline BCVA gained median 2 BCVA lines (from 0.8 logMAR to 0.6 logMAR, p < 0.001), whereas eyes with a high initial BCVA only gained median 1 VA line (from 0.4 logMAR to 0.3 logMAR, p < 0.001) (Fig. 2g). Multi-factorial ANOVA confirmed baseline BCVA as a significant prognostic factor for BCVA gain (p < 0.001, Table 1). Comparing the percentage of ischemic ME and hemorrhage in the foveal area in the two subgroups, both were considerably higher in the cohort with a low baseline BCVA (32% and 37% versus 10% and 17%). 
Also, the median baseline CRT was higher in eyes with a low initial BCVA compared to eyes with a high initial BCVA (501 μm versus 348 μm, p < 0.001). However, both subgroups improved their median CRT at 24 weeks to comparable levels (270 μm and 266 μm respectively, p = 0.43) (Fig. 2h).\nThe median time between the onset of BRVO-associated symptoms and the baseline examination was 5.7 months (range, 0.0 to 163 months). The onset of BRVO-associated symptoms remained unclear in 5% (11 eyes) (Table 1). To evaluate the impact of early treatment on BCVA and ME, the eyes were assigned to one of three subgroups according to the duration of BRVO prior bevacizumab therapy: group A <3 months (63 eyes), group B 3 to 12 months (71 eyes), and group C >12 months (60 eyes). Multi-factorial ANOVA confirmed duration of BRVO-associated symptoms as a significant prognostic factor for BCVA gain (p = 0.03), with a significantly higher gain for group A than for group C (p < 0.05, Tukey HSD). Interestingly, subgroup A showed a significant gain of 2.5 median BCVA lines from a median baseline of 0.65 logMAR to 0.3 logMAR at 24 weeks (p < 0.001), whereas the visual improvement in group B was 2 median BCVA lines (from 0.6 to 0.4 logMAR, p < 0.001). In contrast, subgroup C only showed a 0.5 line increase of the median BCVA, from 0.55 to 0.5 logMAR at 24 weeks (p = 0.005) (Fig. 3a). Hemorrhage in the foveal area at baseline was present in 45% (28 eyes) of group A, in 34% (24 eyes) of group B, and in 7% (four eyes) of group C. With regard to the median CRT at 24 weeks, there was no significant difference between the three groups (A 273 μm, B 260 μm, and C 290 μm, p = 0.86) even though at baseline significant differences were evident (ANOVA p < 0.05); Baseline CRT was significantly higher in group A, 483 μm compared to 400 μm in group C (p < 0.05 Tukey HSD) and 472 μm in group B, Fig. 3b).\nFig. 3Box plot graphs showing the bevacizumab effect depending on the duration of the BRVO-associated symptoms (a, b), and on the number of injections applied (c, d) during the 24 weeks follow-up. Left column demonstrates the course of the visual acuity (VA), the right column shows the central retinal thickness (CRT)\n\nBox plot graphs showing the bevacizumab effect depending on the duration of the BRVO-associated symptoms (a, b), and on the number of injections applied (c, d) during the 24 weeks follow-up. Left column demonstrates the course of the visual acuity (VA), the right column shows the central retinal thickness (CRT)\nTo maintain the bevacizumab effect until week 24, re-injections were performed in 75% (153 eyes). During the 6-month follow-up, a median of two injections (mean 2.3; range, 1 to 6) was administered, with a median time-interval between injections of 11.5 weeks (mean 14.8 weeks). The relationship between the bevacizumab effect and the number of injections was analyzed, assigning the eyes to a subgroup with one, two or three and more injections. Interestingly, the BCVA showed comparable results in all three subgroups, with an increase of the median BCVA of 2.5 lines (one injection) or 2 lines (two and ≥3 injections) (ANOVA p = 0.27, Fig. 3c). When comparing the median CRT at baseline and at 24 weeks between the three subgroups, they were lowest in the subset with only one injection (345 and 224 μm), and increased with more injections (two injections: 472 and 271 μm; three and more injections: 454 and 296 μm, both ANOVA p < 0.05) (Fig. 3d). 
For the group of three or more injections (74 eyes), we analysed whether there was a correlation between the number of injections and BCVA or CRT; the analysis revealed no significant correlation for either outcome (p > 0.05).\nAs the number of eyes included in the analysis varied across parameters (from 173 to 205, Table 1), we performed an additional analysis on a subset of eyes for which complete data on all essential parameters (perfusion status of the macula, pretreatment, patients' age, baseline BCVA, duration of BRVO, and number of injections) were available. This subset included 158 eyes. In this subset as well, multi-factorial ANOVA confirmed patients’ age (p < 0.001), baseline BCVA (p < 0.001), and duration of macular edema (p = 0.05) as significant prognostic factors for BCVA gain (Table 2).\nTable 2 Predictive factors for visual improvement at 24 weeks (subgroup of 157 eyes)\nFactor | Number of eyes | Increase of median BCVA lines | P value (increase) | Final median BCVA (logMAR) | ANOVA (p value)\nPerfusion status of the macula | | | | | 0.97\n Ischemic | 40 (25%) | 3 | <0.001 * | 0.6 |\n Perfused | 117 (75%) | 1 | <0.001 * | 0.4 |\nPretreatment | | | | | 0.86\n Yes | 21 (13%) | 0 | =0.018 | 0.5 |\n No | 136 (87%) | 2 | <0.001 * | 0.4 |\nPatients' age (years) | | | | | <0.001 *\n <60 | 33 (21%) | 3 | <0.001 * | 0.3 |\n ≥60 | 124 (79%) | 1 | <0.001 * | 0.5 |\nBaseline BCVA (logMAR) | | | | | <0.001 *\n ≤0.5 | 68 (43%) | 1 | =0.029 | 0.3 |\n ≥0.6 | 89 (57%) | 2 | <0.001 * | 0.6 |\nDuration of BRVO (months) | | | | | 0.05 *+\n <3 | 50 (32%) | 2.5 | <0.001 * | 0.35 |\n 3–12 | 54 (34%) | 2 | <0.001 * | 0.4 |\n >12 | 53 (34%) | 1 | =0.011 | 0.5 |\nNumber of injections | | | | | 0.34\n 1 | 41 (26%) | 1 | <0.001 * | 0.5 |\n 2 | 62 (40%) | 2 | <0.001 * | 0.5 |\n ≥3 | 54 (34%) | 2 | =0.001 * | 0.4 |\n* = Significant difference (Bonferroni-corrected)\n+ = Post-hoc test (Tukey–HSD): no significant pair-wise differences\nAs eyes with central glaucomatous visual field defects and diabetic maculopathy were excluded from our study, the percentage of eyes with glaucoma and diabetes (7% and 10%, respectively) was likely underestimated; these factors were therefore not part of the analysis of predictive factors.\nNo cases of endophthalmitis, retinal detachment or any other severe procedure-related complications were observed in a total of 652 injections. No obvious bevacizumab-related ocular or systemic adverse events were reported." ]
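The testing scheme described in the Statistics subsection (two-sided paired t-tests for within-subgroup change, with a Bonferroni correction over eight tests per outcome) can be sketched in a few lines of Python. This is an illustrative re-implementation under stated assumptions, not the study's SPSS syntax, and the paired data shown are invented:

```python
# Hypothetical sketch of the described testing scheme (the original analysis
# was run in SPSS 16); the paired logMAR values below are invented toy data.
import numpy as np
from scipy import stats

alpha = 0.05
n_tests = 8                      # eight VA comparisons, as stated above
threshold = alpha / n_tests      # Bonferroni-adjusted level: 0.00625

# Toy paired data: baseline vs. 24-week logMAR BCVA for one subgroup.
baseline = np.array([0.8, 0.6, 0.7, 0.5, 0.9, 0.6])
week24   = np.array([0.5, 0.4, 0.6, 0.3, 0.7, 0.4])

t_stat, p_value = stats.ttest_rel(baseline, week24)  # two-sided paired t-test
print(f"Bonferroni threshold: {threshold:.5f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"significant: {p_value < threshold}")
```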
[ null, null, null, null, null ]
[ "Introduction", "Subjects and methods", "Intravitreal bevacizumab injection", "Follow-up", "Statistics", "Results", "Analysis of predictive factors", "Discussion" ]
[ "Secondary macular edema (ME) is one of the main reasons for loss of visual acuity in branch retinal vein occlusion (BRVO). The randomized, controlled Branch Vein Occlusion Study showed limited treatment benefit in eyes with perfused ME: Grid photocoagulation of the edematous macula resulted in a better visual improvement than in the natural course of the disease [1]. Actually, grid photocoagulation was confirmed as the benchmark in a randomized trial, with 29% of the eyes gaining 3 or more best corrected visual acuity (BCVA) lines (≥15 letters) after 1 year. Intravitreal injection of the corticosteroid triamcinolone acetonide has not been shown to be more effective in BRVO than grid photocoagulation [2] and efficacy of intravitreal pegaptanib therapy is unclear [3]. Also, surgical approaches including vitrectomy with or without peeling of the inner limiting membrane [4], arteriovenous dissection (sheathotomy) [5], laser-induced chorioretinal anastomosis [6], and surgical cannulation of branch retinal veins [7] failed to demonstrate a relevant benefit.\nTherefore, a more efficacious treatment strategy has been sought. Bevacizumab (Avastin®, Genentech, San Francisco, CA, USA) is a humanized monoclonal antibody directed against the vascular endothelial growth factor (VEGF). The rationale for its intravitreal application in BRVO was that vascular occlusion induces upregulation of VEGF, resulting in increased vascular permeability and subsequent ME 8–10.\nRecently, various clinical studies demonstrated beneficial effects of anti-VEGF therapy on both ME and BCVA in patients with BRVO [11–18]. Moreover, this minimally invasive therapy might be even more effective than grid photocoagulation, which is the current standard of care. A prospective study on previously untreated eyes with perfused ME secondary to BRVO demonstrated a gain of 3 or more BCVA lines in 57% at 1 year [14]. However, the significance of previous studies was limited, due to the relatively small sample sizes. In addition, the optimal time point for initiation of therapy remains unclear. More importantly, there is still minimal knowledge concerning predictive factors for visual outcome.\nBecause of the large number of patients included, this is the first study to permit a detailed subgroup analysis. This made it possible to investigate various potential predictive factors, including macular perfusion status, duration of the ME, patients’ age, baseline BCVA, number of injections applied, and previous treatments before intravitreal bevacizumab therapy in clinical practice.\n[SUBTITLE] Subjects and methods [SUBSECTION] The study was designed as a multicenter retrospective analysis of patients that received intravitreal bevacizumab therapy for the treatment of BRVO associated with a ME involving the foveal center. Patients received the first bevacizumab injection between October 2005 and May 2009. Only patients that finished the follow-up examination at 24 weeks were included. Eyes that had undergone vitrectomy prior to bevacizumab treatment were excluded due to the different pharmacokinetics. Eyes with other diseases affecting BCVA or presence of neovascularisation were excluded from the analysis. 
If patients had received peripheral, focal or grid laser photocoagulation, cyclodestructive interventions, cataract surgery, or other surgical procedures during the follow-up they were also excluded.\nBaseline examination comprised a complete eye examination, including ETDRS BCVA, slit-lamp biomicroscopy, dilated fundus examination, and optical coherence tomography (OCT) in most eyes. To assess the perfusion status of the macula, preservation (perfused ME) or capillary drop-out of the foveal capillary ring (ischemic ME) [1], fluorescein angiographies at baseline were analyzed. In case of hemorrhages in the macular area that obscured the foveal capillary ring, the perfusion status was not evaluated until resorption of these hemorrhages was seen. Only retina specialists experienced in the analysis of fluorescence angiograms were in charge of the analysis in each center.\nThe study followed the tenets of the Declaration of Helsinki, and was approved by the local ethics committees at each site.\nThe study was designed as a multicenter retrospective analysis of patients that received intravitreal bevacizumab therapy for the treatment of BRVO associated with a ME involving the foveal center. Patients received the first bevacizumab injection between October 2005 and May 2009. Only patients that finished the follow-up examination at 24 weeks were included. Eyes that had undergone vitrectomy prior to bevacizumab treatment were excluded due to the different pharmacokinetics. Eyes with other diseases affecting BCVA or presence of neovascularisation were excluded from the analysis. If patients had received peripheral, focal or grid laser photocoagulation, cyclodestructive interventions, cataract surgery, or other surgical procedures during the follow-up they were also excluded.\nBaseline examination comprised a complete eye examination, including ETDRS BCVA, slit-lamp biomicroscopy, dilated fundus examination, and optical coherence tomography (OCT) in most eyes. To assess the perfusion status of the macula, preservation (perfused ME) or capillary drop-out of the foveal capillary ring (ischemic ME) [1], fluorescein angiographies at baseline were analyzed. In case of hemorrhages in the macular area that obscured the foveal capillary ring, the perfusion status was not evaluated until resorption of these hemorrhages was seen. Only retina specialists experienced in the analysis of fluorescence angiograms were in charge of the analysis in each center.\nThe study followed the tenets of the Declaration of Helsinki, and was approved by the local ethics committees at each site.", "The study was designed as a multicenter retrospective analysis of patients that received intravitreal bevacizumab therapy for the treatment of BRVO associated with a ME involving the foveal center. Patients received the first bevacizumab injection between October 2005 and May 2009. Only patients that finished the follow-up examination at 24 weeks were included. Eyes that had undergone vitrectomy prior to bevacizumab treatment were excluded due to the different pharmacokinetics. Eyes with other diseases affecting BCVA or presence of neovascularisation were excluded from the analysis. 
If patients had received peripheral, focal or grid laser photocoagulation, cyclodestructive interventions, cataract surgery, or other surgical procedures during the follow-up they were also excluded.\nBaseline examination comprised a complete eye examination, including ETDRS BCVA, slit-lamp biomicroscopy, dilated fundus examination, and optical coherence tomography (OCT) in most eyes. To assess the perfusion status of the macula, preservation (perfused ME) or capillary drop-out of the foveal capillary ring (ischemic ME) [1], fluorescein angiographies at baseline were analyzed. In case of hemorrhages in the macular area that obscured the foveal capillary ring, the perfusion status was not evaluated until resorption of these hemorrhages was seen. Only retina specialists experienced in the analysis of fluorescence angiograms were in charge of the analysis in each center.\nThe study followed the tenets of the Declaration of Helsinki, and was approved by the local ethics committees at each site.", "The intravitreal injection was performed according to the recommendation of the German Retina Society [19]. The injections were performed after topical anaesthesia and preoperative antisepsis with povidon iodine using sterile gloves, drape, and a lid speculum. Bevacizumab (Avastin®) 1.25 mg in a 0.05 ml total volume was injected intravitreally via the pars plana. Following the injection, retinal perfusion was controlled.\nAll patients were informed about the nature of off-label use and the experimental nature of the therapy before signing an informed consent prior to each injection. Also, they confirmed that they were aware of the potential side-effects of bevacizumab treatment. Patients with contraindications against an intravitreal bevacizumab injection (acute ocular infection, recent history of stroke or myocardial infarction, unstable angina pectoris, uncontrolled hypertension, uncompensated renal insufficiency, allergy to bevacizumab, or pregnancy) were excluded from treatment [20].", "Patients were followed on a routine clinical program. Follow-up examinations at 12, 24, 36 and 48 weeks (±6 weeks) were analyzed. The main outcome measures of this study were BCVA and central retinal thickness (CRT) as measured by OCT (Stratus OCTTM, Carl Zeiss Meditec, Jena, Germany, axial tissue resolution 10-15 μm, or Spectralis® OCT, Heidelberg Engineering, Dossenheim, Germany, axial tissue resolution 4 μm). To minimize variability between different devices, all patients were followed up with the same OCT. CRT was calculated as the distance between the inner limiting membrane and the retinal pigment epithelium–choriocapillaris interface of radial lines through the foveal area [21]. The foveal area was determined using the patient’s fixation and retinal landmarks. The calipers were set by hand because automated measurement protocols are more prone to errors [22]. For comparison, the normal CRT in healthy eyes was reported to be 170 ± 18 μm [23].\nRe-injection was considered at each follow-up visit, and performed after informed consent of the patient, depending on the individual course of the BCVA and persistence or reoccurence of ME on OCT.", "VA data were converted to logMAR units before analysis. All continuous variables (BCVA and CRT) were described as box plots showing 5% and 95% quantiles (whiskers), 25% and 75% quartiles (box), and the median (marked by an asterisk). 
While conventional statistical inference in this study is based on sample means, we use median values as a robust index of the sample's central tendency for descriptive report of the data.\nThe following prognostic variables were studied: perfusion status of the foveal capillary ring (ischemic or perfused), existence of pre-treatment, patients’ age (younger than 60 years or 60 years and older), duration of BRVO (<3 months, 3 to 12 months, or >12 months), baseline BCVA (0.6 logMAR and lower or 0.5 logMAR and higher), number of injections applied (one, two or three and more), gender (male or female), and presence of arterial hypertension.\nOverall differences in VA and CRT at 0, 12, and 24 weeks were assessed using repeated measures analysis of variance (ANOVA). Subgroup-specific changes in VA and CRT were analyzed using paired t-tests (two-sided). Significant prognostic factors for an increase in VA were identified by between-subjects multifactorial ANOVA and Tukey’s HSD post-hoc tests where appropriate. Further differences between subgroups were analyzed using unpaired t-tests, and between-subjects ANOVAs in the case of more than two subgroups. To control for multiple comparisons, Bonferroni correction was applied with respect to the overall number of tests for VA and CRT differences respectively (VA: eight tests, CRT: eight tests), resulting in adjusted significance thresholds of p = 0.00625 All statistical analysis was performed using SPSS 16 (SPSS Inc, Chicago, IL, USA).", "This multicenter, retrospective interventional case series enrolled 205 eyes (204 patients) of six centres that were treated with intravitreal bevacizumab due to ME secondary to BRVO. Patient characteristics at baseline are shown in Table 1. The median follow-up was 36.7 weeks (range, 18 to 54 weeks; mean 36.8 ± 12.7 weeks). The median age of the patients was 69 years (range, 38 to 87 years). The bevacizumab treatment resulted in a significant improvement of the median BCVA (ANOVA p < 0.001), increasing from 0.6 logMAR at baseline to 0.5 logMAR at 12 weeks and 0.4 logMAR at 24 weeks (both p’ < 0.001). Ninety-one eyes (44.4%) finished the final follow-up examination at 48 weeks. They showed a maintenance of the median BCVA of 0.4 logMAR corresponding to a total improvement of 2 median BCVA lines compared to baseline (p < 0.001, Fig. 1a). OCT data were available in 91%, 82%, and 87% of all eyes at baseline, 12, and 24 weeks, respectively. Accordingly, reduction of the CRT was highly significant with the median CRT (ANOVA p < 0.001) decreasing from a baseline of 454 μm to 304 μm and 267 μm after 12 and 24 weeks, respectively (both p < 0.001). This significant reduction was preserved over the entire follow-up with a final median CRT of 248 μm at 1 year (63 eyes, p < 0.001, Fig. 1b). 
During the follow-up, a median of three injections (mean 3.2; range, 1 to 10) was administered, with a median injection frequency of 11.6 weeks (mean 14.6 weeks).\nTable 1Predictive factors for visual improvement (at 24 weeks)FactorNumber of eyesIncrease of median BCVA lines\nP value (increase)Final median BCVA (logMAR)ANOVA (p value)Gender0.83 Male101 (49%)2<0.001 *0.4 Female104 (51%)2<0.001 *0.4Hypertension0.89 Yes131 (64%)2<0.001 *0.4 No69 (34%)2<0.001 *0.4Perfusion status of the macula0.95 Ischemic45 (22%)2<0.001 *0.6 Perfused128 (62%)2<0.001 *0.3Pretreatment0.86 Yes26 (13%)0.5<0.0050.5 No176 (86%)2<0.001 *0.4Patients' age (years)<0.01 * <6041 (20%)3<0.001 *0.3 ≥60163 (80%)1<0.001 *0.5Baseline BCVA (logMAR)<0.001 * ≤0.591 (44%)1<0.001 *0.3 ≥0.6114 (56%)2<0.001 *0.6Duration of BRVO (months)0.03 *+ <363 (31%)2.5<0.001 *0.3 3-1271 (35%)2<0.001 *0.4 >1260 (29%)0.5<0.010.5Number of injections0.48 146 (22%)2.5<0.001 *0.35 279 (39%)2<0.001 *0.5 ≥374 (36%)2<0.001 *0.4* = Significant difference (Bonferroni-corrected)+ = Post-hoc test (Tukey–HSD): <3 months and >12 months sign. difference)\nFig. 1Box plot graphs showing the course of best-corrected visual acuity (BCVA) (a) and central retinal thickness (CRT) (b) over the 48 weeks follow-up. a Increase of BCVA, and b, decrease of the CRT following bevacizumab treatment. Note the stabilisation after 24 weeks. n = number of eyes included\n\nPredictive factors for visual improvement (at 24 weeks)\n* = Significant difference (Bonferroni-corrected)\n+ = Post-hoc test (Tukey–HSD): <3 months and >12 months sign. difference)\nBox plot graphs showing the course of best-corrected visual acuity (BCVA) (a) and central retinal thickness (CRT) (b) over the 48 weeks follow-up. a Increase of BCVA, and b, decrease of the CRT following bevacizumab treatment. Note the stabilisation after 24 weeks. n = number of eyes included", "Because BCVA and CRT did not significantly change between 24 and 48 weeks (Fig. 1a,b), analysis of predictive factors was performed on the basis of the 24 weeks results of all 205 eyes included.\nEvaluation of the perfusion status of the macular area revealed an ischemic ME with a broken foveal capillary ring in 22% (45 eyes) and a perfused ME in 62% (128 eyes). Sufficient information on the perfusion status was not available in 16% (32 eyes) (Table 1). Interestingly, both subgroups with perfused and ischemic ME improved 2 median BCVA lines at 24 weeks (both p < 0.001). However, eyes with an ischemic ME started from a lower median baseline BCVA of 0.8 logMAR compared to 0.5 logMAR of the subgroup with a perfused ME (p < 0.001). Hence, their final median BCVA of 0.6 logMAR at 24 weeks was significantly lower than 0.3 logMAR in eyes with a perfused ME (p < 0.001) (Fig. 2a). Comparing the baseline and final median CRT in the two subgroups, there was no significant difference between the subgroup with ischemic and perfused ME (baseline: 500 μm and 450 μm (p = 0.04, n.s.); 24 weeks: 266 μm and 250 μm (p = 0.44, n.s.) respectively) (Fig. 2b).\nFig. 2Box plot graphs showing the bevacizumab effect depending on the duration of the perfusion status of the macula (a, b), on the existence of pretreatment (c, d), on the patients’ age (e, f), and on the baseline visual acuity (VA) (g, h) during the 24 weeks follow-up. 
Left column demonstrates the course of the BCVA, the right column shows the central retinal thickness (CRT)\n\nBox plot graphs showing the bevacizumab effect depending on the duration of the perfusion status of the macula (a, b), on the existence of pretreatment (c, d), on the patients’ age (e, f), and on the baseline visual acuity (VA) (g, h) during the 24 weeks follow-up. Left column demonstrates the course of the BCVA, the right column shows the central retinal thickness (CRT)\nPretreatment had been undertaken in 13% (26 eyes); 23 eyes had undergone grid laser photocoagulation, and seven eyes had received intravitreal triamcinolone injection prior to bevacizumab treatment. Eighty-six percent (176 eyes) received bevacizumab as a primary therapy for BRVO (Table 1). Interestingly, the pretreated subgroup only showed a visual improvement of 0.5 median BCVA lines from a median of 0.55 logMAR to 0.5 logMAR at 24 weeks (p < 0.005, Fig. 2c), and no reliable decrease of the median CRT from 350 μm at baseline to 274 μm at 24 weeks (p = 0.113, Fig. 2d). In contrast, the previously untreated eyes responded with a 2-line increase of the median BCVA (0.6 logMAR to 0.4 logMAR, p < 0.001, Fig. 2c), together with a reduction of the CRT (463 μm to 266 μm, p < 0.001, Fig. 2d). The duration of the BRVO-associated symptoms was significantly longer in the pretreated subgroup, with 21.4 months versus 4.3 months in previously untreated eyes (p < 0.005).\nAnalysis of the patients’ age at the time of BRVO confirmed it as a significant prognostic factor for gain in BCVA (multi-factorial ANOVA, p < 0.001), with a better response in younger individuals (Table 1). The subgroup of patients who were younger than 60 years of age (41 eyes, 20%) revealed a considerable 3-line increase of the median BCVA (from 0.6 logMAR at baseline to 0.3 logMAR at 24 weeks, p < 0.001). In contrast, patients of 60 years and older (163 eyes, 80%) only showed a gain of 1 median BCVA line (from 0.6 logMAR at baseline to 0.5 logMAR at 24 weeks), which was statistically significant, p < 0.001, Fig. 2e). This is despite the fact that the median CRT of younger and older patients was not significantly different, neither at baseline (463 μm compared to 350 μm, p = 0.86), nor at 24 weeks (260 μm compared to 275 μm, p = 0.70) (Fig. 2f). However, the median duration of BRVO prior to bevacizumab therapy was significantly shorter in younger individuals (3.0 months; range, 0.0 to 75 months) than in older individuals (6.5 months; range, 0.0 to 163 months) (p < 0.05). Moreover, in the younger subgroup, only 5% (two eyes) had received pretreatment (intravitreal triamcinolone injection), compared to 15% (24 eyes) in the older subgroup.\nTo investigate the prognostic value of the baseline BCVA, the eyes were divided into two subgroups with a BCVA ≥ 0.6 logMAR (range: 0.6 to 2.0 logMAR, 114 eyes) and ≤ 0.5 logMAR (range, 0.1 to 0.5 logMAR, 91 eyes) before treatment. Our data showed that eyes with a low baseline BCVA gained median 2 BCVA lines (from 0.8 logMAR to 0.6 logMAR, p < 0.001), whereas eyes with a high initial BCVA only gained median 1 VA line (from 0.4 logMAR to 0.3 logMAR, p < 0.001) (Fig. 2g). Multi-factorial ANOVA confirmed baseline BCVA as a significant prognostic factor for BCVA gain (p < 0.001, Table 1). Comparing the percentage of ischemic ME and hemorrhage in the foveal area in the two subgroups, both were considerably higher in the cohort with a low baseline BCVA (32% and 37% versus 10% and 17%). 
Also, the median baseline CRT was higher in eyes with a low initial BCVA compared to eyes with a high initial BCVA (501 μm versus 348 μm, p < 0.001). However, both subgroups improved their median CRT at 24 weeks to comparable levels (270 μm and 266 μm respectively, p = 0.43) (Fig. 2h).\nThe median time between the onset of BRVO-associated symptoms and the baseline examination was 5.7 months (range, 0.0 to 163 months). The onset of BRVO-associated symptoms remained unclear in 5% (11 eyes) (Table 1). To evaluate the impact of early treatment on BCVA and ME, the eyes were assigned to one of three subgroups according to the duration of BRVO prior bevacizumab therapy: group A <3 months (63 eyes), group B 3 to 12 months (71 eyes), and group C >12 months (60 eyes). Multi-factorial ANOVA confirmed duration of BRVO-associated symptoms as a significant prognostic factor for BCVA gain (p = 0.03), with a significantly higher gain for group A than for group C (p < 0.05, Tukey HSD). Interestingly, subgroup A showed a significant gain of 2.5 median BCVA lines from a median baseline of 0.65 logMAR to 0.3 logMAR at 24 weeks (p < 0.001), whereas the visual improvement in group B was 2 median BCVA lines (from 0.6 to 0.4 logMAR, p < 0.001). In contrast, subgroup C only showed a 0.5 line increase of the median BCVA, from 0.55 to 0.5 logMAR at 24 weeks (p = 0.005) (Fig. 3a). Hemorrhage in the foveal area at baseline was present in 45% (28 eyes) of group A, in 34% (24 eyes) of group B, and in 7% (four eyes) of group C. With regard to the median CRT at 24 weeks, there was no significant difference between the three groups (A 273 μm, B 260 μm, and C 290 μm, p = 0.86) even though at baseline significant differences were evident (ANOVA p < 0.05); Baseline CRT was significantly higher in group A, 483 μm compared to 400 μm in group C (p < 0.05 Tukey HSD) and 472 μm in group B, Fig. 3b).\nFig. 3Box plot graphs showing the bevacizumab effect depending on the duration of the BRVO-associated symptoms (a, b), and on the number of injections applied (c, d) during the 24 weeks follow-up. Left column demonstrates the course of the visual acuity (VA), the right column shows the central retinal thickness (CRT)\n\nBox plot graphs showing the bevacizumab effect depending on the duration of the BRVO-associated symptoms (a, b), and on the number of injections applied (c, d) during the 24 weeks follow-up. Left column demonstrates the course of the visual acuity (VA), the right column shows the central retinal thickness (CRT)\nTo maintain the bevacizumab effect until week 24, re-injections were performed in 75% (153 eyes). During the 6-month follow-up, a median of two injections (mean 2.3; range, 1 to 6) was administered, with a median time-interval between injections of 11.5 weeks (mean 14.8 weeks). The relationship between the bevacizumab effect and the number of injections was analyzed, assigning the eyes to a subgroup with one, two or three and more injections. Interestingly, the BCVA showed comparable results in all three subgroups, with an increase of the median BCVA of 2.5 lines (one injection) or 2 lines (two and ≥3 injections) (ANOVA p = 0.27, Fig. 3c). When comparing the median CRT at baseline and at 24 weeks between the three subgroups, they were lowest in the subset with only one injection (345 and 224 μm), and increased with more injections (two injections: 472 and 271 μm; three and more injections: 454 and 296 μm, both ANOVA p < 0.05) (Fig. 3d). 
For the group of three or more injections (74 eyes) we analysed if there is a correlation between the number of injections and BCVA or CRT. However, the analysis revealed no significant correlation (p > 0.05), neither for BCVA nor for CRT.\nAs the number of eyes included in the analysis varied across parameters (from 173 to 205, Table 1) we performed an additional analysis on a subset of eyes where the complete data of all essential parameters (perfusion status of the macula, pretreatment, patients' age, baseline BCVA, duration of BRVO, and number of injections) was available. This subset of eyes included 158 eyes. Comparably, multi-factorial ANOVA confirmed patients’ age (p < 0.001), baseline BCVA (p < 0.001), and duration of macular edema (p = 0.05) as significant prognostic factors for BCVA gain (Table 2).\nTable 2Predictive factors for visual improvement at 24 weeks (subgroup of 157 eyes)FactorNumber of eyesIncrease of median BCVA lines\nP value (increase)Final Median BCVA (logMAR)ANOVA (p value)Perfusion status of the macula0.97 Ischemic40 (25%)3<0.001 *0.6 Perfused117 (75%)1<0.001 *0.4Pretreatment0.86 Yes21 (13%)0=0.0180.5 No136 (87%)2<0.001 *0.4Patients' age (years)<0.001 * <6033 (21%)3<0.001 *0.3 ≥60124 (79%)1<0.001 *0.5Baseline BCVA (logMAR)<0.001 * ≤0.568 (43%)1=0.0290.3 ≥0.689 (57%)2<0.001 *0.6Duration of BRVO (months)0.05 *+ <350 (32%)2.5<0.001 *0.35 3-1254 (34%)2<0.001 *0.4 >1253 (34%)1=0.0110.5Number of Injections0.34 141 (26%)1<0.001 *0.5 262 (40%)2<0.001 *0.5 ≥354 (34%)2=0.001 *0.4* = Significant difference (Bonferroni-corrected)+ = Post-hoc test (Tukey–HSD): no significant pair-wise differences\n\nPredictive factors for visual improvement at 24 weeks (subgroup of 157 eyes)\n* = Significant difference (Bonferroni-corrected)\n+ = Post-hoc test (Tukey–HSD): no significant pair-wise differences\nAs eyes with central glaucomatous visual field defects and diabetic maculopathy were excluded from our study, the percentage of eyes with glaucoma and diabetes (7% and 10%, respectively) was likely to be underestimated, and therefore was not part of the analysis of predictive factors.\nNo cases of endophthalmitis, retinal detachment or any other severe procedure-related complications were observed in a total of 652 injections. No obvious bevacizumab-related ocular or systemic adverse events were reported.", "This multicentre study examined the anatomic and functional long-term effectiveness of bevacizumab therapy on ME secondary to BRVO in a routine clinical setting. Previously, several prospective studies have shown a significant 3-line increase of BCVA together with a reduction of ME [14–16]. Our study shows that comparable results can also be obtained in the clinical routine. In this large patient cohort, a significant increase of the median BCVA of 2 lines was achieved at the 6-month follow-up, and could be maintained through the 1-year follow-up.\nTogether with results from other reported studies, intravitreal bevacizumab may appear as an established treatment for ME secondary to BRVO [14, 18]. However, clinical practice has also revealed that some patients respond better than others, while factors for the variability in outcomes have been controversial. Due to the large sample size, this is the first study that allows analyzing potential predictive factors in detail to pre-estimate the effectiveness of the therapy.\nA point of intense debate is the influence of the perfusion status of the macula on treatment outcomes. 
While the beneficial effect of bevacizumab in eyes with ME and a perfused foveal capillary ring seems to be generally acknowledged, concerns have been raised about bevacizumab treatment in ischemic ME [8, 24]. Also, grid photocoagulation has not been recommended for BRVO with foveal capillary nonperfusion [1].\nOur study was able to demonstrate for the first time that the existence of an ischemic ME can still be associated with an excellent response to bevacizumab therapy. These eyes showed both a pronounced reduction of the CRT and a considerable visual improvement of 2 lines, comparable to the visual improvement in eyes with perfused ME. This interesting finding seems to be in contrast to previous studies. Chung et al. reported the presence of macular ischemia as a significant negative factor for BCVA improvement. The investigation was done on 50 eyes subdivided into two groups (≥1 BCVA line gain vs <1 BCVA line gain) [25]. However, analysis in our cohort revealed BCVA gain in about 70% of the eyes in both ischemic (69%) and perfused ME (73%), indicating a potential for BCVA improvement in ischemic ME.\nHow can we explain the apparent discrepancy between the significant BCVA gain in eyes with ischemic macular edema and the clinical observation that these patients rarely achieve a satisfactory post-treatment BCVA? One should take into account that eyes with ischemic ME present with a low baseline BCVA, which prevents most of them from achieving reading visual acuity even when the treatment response is good. Irreversible structural changes induced by persisting hypoxia in the centre of the macula have been identified as the reason behind this observation [8]. Patient satisfaction is not solely determined by the gain in BCVA lines, but rather by achieving useful vision. Hence, many patients with ischemic ME remain disappointed despite the remarkable 2-line gain. On the other hand, patients with perfused ME seem to be more satisfied with a 2-line BCVA gain because of the high rate of reading ability achieved.\nAnother important question addresses the optimal time point for initiation of bevacizumab therapy. The Branch Vein Occlusion Study showed a worse BCVA outcome in eyes where the treatment with grid laser photocoagulation had been performed more than 1 year after onset of the BRVO [1]. Comparable findings have been described for intravitreal bevacizumab therapy [12, 16]. Our study confirmed these results, showing no significant visual improvement in longstanding ME of more than 1 year. Photoreceptor damage as a result of chronic ME has been proposed to explain the irreversible BCVA impairment [8].\nIn contrast, eyes with a duration of BRVO shorter than 1 year showed a 2-line BCVA increase. Moreover, earlier initiation of therapy resulted in a further improvement of the outcome. Here, very early treatment (<3 months) revealed an increase of 2.5 BCVA lines and a remarkable final BCVA of 0.3 logMAR. This might indicate some irreversible damage to the macula following delayed treatment, which has also been proposed by Kondo and colleagues [18]. However, resorption of hemorrhages in the foveal area, which are frequently apparent in the subgroup with very early treatment, might be a cofactor for the good recovery. Also, spontaneous recovery in the early course of the disease, independent of the bevacizumab effect, might play a significant role.\nIn addition to the duration of BRVO, the patient's age has been identified as a prognostic factor. 
We were able to demonstrate that patients under 60 years of age respond with a 3-line increase of BCVA. Additionally, these eyes develop a high final BCVA of 0.3 logMAR. However, in the younger subgroup the median duration of BRVO was shorter, and fewer eyes had undergone pretreatment, accounting for a better bevacizumab response. However, a correlation of age and BCVA has also been found recently by a prospective study [17]. On the other hand, patient age of 60 years and older was associated with minor visual improvement, despite a comparable reduction of the CRT. One might conclude from these data that the anatomical response does not seem to be an appropriate clinical tool for evaluating the treatment benefit. Eyes with poor prognostic factors develop a poor visual outcome, despite the fact that these eyes often show a marked reduction of CRT.\nBaseline BCVA has also been identified as a predictive factor for visual improvement. In accordance with previous studies [18], we found a poor baseline BCVA (>0.5 logMAR) to be correlated with poor visual prognosis below reading BCVA. Frequent associations with a poor baseline BCVA were presence of an ischemic ME and hemorrhage in the foveal area. Vice versa, a good initial BCVA (≤0.5 logMAR) was significantly associated with a good visual outcome. In contrast to previous findings [24], our study demonstrates a remarkable visual improvement of 2 BCVA lines in eyes with a poor baseline, even exceeding the visual improvement in eyes with a high initial BCVA. A negative correlation of preoperative VA and improvement of VA has also been shown in a current study [18]. The pronounced reduction of the CRT in patients with low baseline BCVA compared to eyes with high baseline BCVA is in accordance with the findings of Kriechbaum and colleagues [16].\nExistence of pretreatment has been identified as a negative factor for visual improvement. Eyes with a persistent ME after pretreatment responded with no significant increase of the BCVA to the bevacizumab treatment, despite an excellent reduction of the CRT. However, as pretreatment was correlated with very longstanding ME, this might also account for a worse treatment effect. Due to the small sample number, our results are limited, though a reduced prognosis for BCVA improvement and final BCVA in pretreated eyes was also described in previous studies [11, 15].\nAnother question that is frequently raised addresses the number of re-injections needed to maintain the treatment effect. In our study, we found a mean injection rate of 2.3 injections within 6 months and a mean of 3.2 during the entire follow-up. A prospective study recently reported similar injection rates, with a mean of 2.6 and 3.4 injections within the first 6 or 12 months respectively, resulting in an increase of 3 BCVA lines [14]. Taking into account the fact that our current study has been performed on a clinical routine basis that did not adhere to strict retreatment criteria, and even included eyes with pretreatment and long-standing ME, the current results are remarkably comparable.\nIs a more frequent re-injection rate beneficial? Another prospective study performed an OCT-guided treatment regime with a mean of eight injections during a 12-month follow-up, which resulted in a 3.5-line increase in BCVA [13]. This indicates that a higher injection rate above a critical level does not result in a further increase of the treatment effect. 
Accordingly, we could not find a better visual improvement with increased numbers of bevacizumab injections. Our study rather demonstrates a slightly higher final median CRT despite frequent re-injections, suggesting a subset of non-responders.\nOne of the most notable results of our study concerns the effect of bevacizumab on the ME. Independent of the median baseline CRT of all subgroups (345 μm to 501 μm), and independent of the bevacizumab effect on the BCVA, the treatment resulted in a decrease of the CRT to a median value ranging from 224 to 296 μm (250 to 275 μm except for the subgroups with duration of BRVO of more than 12 months, and number of injections of one and three or more) at 6 months. This indicates that the effect of bevacizumab on the ME is not a reduction by a certain percentage, but rather a decrease to a defined median CRT of around 260 μm. Therefore, it appears necessary to adjust the re-injection rate to the individual course of the BCVA and of the CRT on OCT.\nThe main limitations of this study were its retrospective character, the inconsistent re-injection criteria, and the absence of a control group. The drawback of the variable number of eyes across parameters included in the subgroup analysis was counterbalanced by the second subgroup analysis with a homogeneous data set, which revealed essentially comparable results. A prospective, randomized study is still needed to further assess the effectiveness of bevacizumab in the various subgroups. Nevertheless, this is the first study with a sufficient number of patients to permit the identification of predictive factors in a routine clinical treatment setting.\nBased on more than 650 intravitreal injections in this study, no concerns about the safety of the drug emerged. This is especially important in view of the off-label use and the necessity for repeated injections in most eyes.\nAs a duration of BRVO of more than 1 year seems to result in a negligible effect of bevacizumab, earlier treatment initiation might be reasonable. These study data could be important for understanding the effect of bevacizumab on BCVA and ME, and for managing individual treatment regimens for eyes with BRVO." ]
[ "introduction", null, null, null, null, "results", null, "discussion" ]
[ "Macular edema", "Bevacizumab", "Branch retinal vein occlusion", "Intravitreal therapy", "Predictive factors", "Prognostic facotrs for visual improvement" ]
Isolation and transcriptome analysis of adult zebrafish cells enriched for skeletal muscle progenitors.
21337346
Over the past 10 years, the use of zebrafish for scientific research in the area of muscle development has increased dramatically. Although several protocols exist for the isolation of adult myoblast progenitors from larger fish, no standardized protocol exists for the isolation of myogenic progenitors from adult zebrafish muscle.
INTRODUCTION
Using a variant of a mammalian myoblast isolation protocol, zebrafish muscle progenitors have been isolated from the total dorsal myotome. These zebrafish myoblast progenitors can be cultured for several passages and then differentiated into multinucleated, mature myotubes.
METHODS
Transcriptome analysis of these cells during myogenic differentiation revealed a strong downregulation of pluripotency genes, while, conversely, showing an upregulation of myogenic signaling and structural genes.
RESULTS
Together these studies provide a simple, yet detailed method for the isolation and culture of myogenic progenitors from adult zebrafish, while further promoting their therapeutic potential for the study of muscle disease and drug screening.
CONCLUSIONS
[ "Aging", "Animals", "Animals, Genetically Modified", "Cell Differentiation", "Cells, Cultured", "Gene Expression Profiling", "Muscle Development", "Muscle, Skeletal", "Myoblasts", "Stem Cells", "Zebrafish" ]
3075361
null
null
METHODS
[SUBTITLE] Fish Lines [SUBSECTION] The α-actin–RFP transgenic fish line was a generous gift from H.J. Tsai (Taiwan National University) and has been described previously.17 Additional experiments were done utilizing the wild-type AB strain, which was obtained from the Children's Hospital aquatics program and maintained in their aquatics facility. All animal protocols were approved by the animal resources committee of Children's Hospital. [SUBTITLE] Isolation of Zebrafish Myogenic Muscle Cells from Whole Dorsal Myotome [SUBSECTION] For each cell preparation, 15–20 adult zebrafish were euthanized in tricaine (Sigma-Aldrich) and the whole zebrafish was placed in 100% ethanol for 30 seconds as the first step for sterilization. The fish's head, tail, and fins were removed with a scalpel, and the skin and internal organs were removed with forceps. The fish's body was sterilized in 10% bleach for 30 seconds and then washed twice in sterile phosphate-buffered saline (PBS) for another 30 seconds. Fish dorsal muscle and bone were minced with a scalpel and then transferred to a pre-weighed culture plate. For every gram of fish tissue, 3.5 ml of collagenase IV (10 mg/ml stock solution) and 3.5 ml of dispase (2.4 units/ml stock solution; Worthington Chemicals) were added and mixed by pipetting (Worthington). The solution was incubated at room temperature for 45 minutes (mixed every 10 minutes by pipette) before 10 ml of growth medium (L15; Sigma-Aldrich), 3% fetal calf serum, 100 μg/ml penicillin/streptomycin, 2 mM glutamine, and 0.8 mM CaCl2 (all Sigma-Aldrich) were added to the cells to quench the activity of the collagenase and dispase proteases. Debris was removed by filtering the cells through a 70-μm filter and then through two 40-μm filters (BD Biosciences). On each occasion, the filters were washed with 5 ml of L15 medium. The cells were isolated by centrifugation at 1000 × g for 10 minutes at 9°C, and the supernatant was aspirated. The cells were then resuspended in 3 ml of red blood cell lysis buffer (Qiagen) and incubated for 3 minutes at room temperature before neutralization with 22 ml of L15 growth medium. The cells were then pelleted at 1000 × g for 10 minutes at 9°C, the supernatant aspirated, and the cell pellet resuspended in 3 ml of cold 1× PBS and layered on top of 4 ml of Ficoll-Paque gradient (GE Healthcare) in a 15-ml tube. Samples were then centrifuged at 1400 × g for 40 minutes at 9°C. A mononuclear cell layer was then extracted by pipette and washed with 10 ml of ice-cold 1× PBS. Afterwards, the cells were resuspended in 10 ml of ice-cold L15 buffer. The cell density was determined using an automated hemocytometer (Countess; Invitrogen), and the cell suspension was diluted in L15 growth medium. The cells were then pre-plated on uncoated plates for 1 hour in a 28°C tissue culture incubator at 5% CO2. After pre-plating, the cellular supernatant (non-adherent cells) was removed and placed on laminin-coated plates (BD Biocoat). Alternatively, 0.1% gelatin-coated (porcine) plates can be used. The medium was changed every 3 days.
The zebrafish myogenic progenitor cells were able to be grown for up to seven doublings before evidence of cellular senescence, with an average of four to five doublings per myoblast isolation. On average, a yield of 5–10 million live (trypan blue–negative) cells was isolated from each preparation of between 15 and 20 adult zebrafish. Lower yields of 100,000–500,000 live cells were isolated when using 1–5 adult zebrafish. An alternative to the L15 growth medium was later used in zebrafish myogenic progenitor cell cultures and achieved the same results. Human skeletal myoblast growth medium (Promocell) containing 20% fetal bovine serum (Atlanta Biologicals), 1× antibiotic–antimycotic (Invitrogen), and 1× Glutamax (Invitrogen), supplemented with 3 ng/ml recombinant human fibroblast growth factor (rhFGF; Promega), can be used in lieu of the L15 growth medium.
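The dissociation and expansion figures above lend themselves to a small bench calculator. The sketch below is not part of the original protocol; it simply re-expresses the quoted numbers (3.5 ml of each enzyme stock per gram of tissue, roughly 5–10 million live cells per 15–20 fish preparation, four to five doublings before senescence), and the function names and example inputs are illustrative assumptions.

```python
# Bench-math helper re-expressing figures quoted in the protocol above.
# Not from the paper; function names and example inputs are illustrative.

def dissociation_volumes(tissue_grams: float) -> dict:
    """Enzyme stock volumes for a given mass of minced dorsal muscle."""
    return {
        "collagenase_IV_ml": 3.5 * tissue_grams,  # 10 mg/ml stock
        "dispase_ml": 3.5 * tissue_grams,         # 2.4 units/ml stock
    }

def cells_after_expansion(live_cells_at_isolation: float, doublings: int) -> float:
    """Rough estimate of cells available after the reported population doublings."""
    return live_cells_at_isolation * 2 ** doublings

if __name__ == "__main__":
    print(dissociation_volumes(2.0))                     # e.g. 2 g of minced tissue
    print(f"{cells_after_expansion(5e6, 4):.2e} cells")  # 5 million cells, 4 doublings
```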
[SUBTITLE] Myogenic Differentiation of Adult Zebrafish Myogenic Progenitor Cells [SUBSECTION] Approximately 300,000 cells/well were plated into six-well 0.1% gelatin-coated plates in 2 ml of growth medium and grown to 95% confluence. The medium was then changed to differentiation medium consisting of 2% horse serum (Gibco) in Dulbecco modified Eagle medium (DMEM; Mediatech, Inc.) supplemented with 1× antibiotic–antimycotic (Invitrogen) and 1× Glutamax (Invitrogen). The differentiation medium was changed every other day, and cells were monitored for myotube fusion by phase and fluorescent microscopy. Multinucleated myotubes were observed during days 4–7. [SUBTITLE] Immunohistochemistry [SUBSECTION] The following primary antibodies were used for immunohistochemistry of zebrafish myogenic progenitor cells: Pax3 mouse monoclonal (1:25; Developmental Studies Hybridoma Bank); Pax7 mouse monoclonal (1:25; Developmental Studies Hybridoma Bank); anti-MyoD1 rabbit polyclonal (1:50; Santa Cruz Biotechnology); and anti-myogenin rabbit polyclonal (1:50; M-225; Santa Cruz Biotechnology). The myogenin antibody has been characterized previously in early zebrafish myogenic progenitor cells.18 The zebrafish myod1 epitope has been shown to be recognized by the myf5 antibody (Santa Cruz Biotechnology).19 Approximately 100,000 cells were pre-plated on uncoated coverslides (Nunc, Lab-Tek) and, after a 1-hour pre-plating, the supernatant was plated onto 0.1% gelatin-coated coverslips. The following day, the attached zebrafish myogenic cells were fixed in 4% paraformaldehyde (Electron Microscopy Sciences) at 4°C for 10 minutes. To block nonspecific binding of the antibodies, slides were incubated for 30 minutes at room temperature in PBS + 10% goat serum. After blocking, the slides were incubated overnight at 4°C using the primary antibodies. Slides were washed three times in 1× PBS, and sections were incubated with Alexa 488 (anti-mouse IgG)- or 568 (anti-rabbit IgG)-conjugated goat secondary antibodies (Invitrogen) at a 1:500 dilution for 45 minutes at room temperature.
The slides were then washed three times in 1× PBS before mounting in Vectashield with 4′,6-diamidino-2-phenylindole (DAPI) (Vector Laboratories). Slides were analyzed by microscope (E1000 Nikon Eclipse; Nikon) and OpenLab software. [SUBTITLE] RNA Isolation and Microarray Analysis [SUBSECTION] RNA was extracted directly from zebrafish myogenic progenitor cells in culture at various stages of differentiation using Tripure (Roche Applied Science), following the manufacturer's protocol. Zebrafish cDNA was hybridized to the Affymetrix GeneChip Zebrafish Genome Array (GenBank Release 36.0, June 2003) and processed following the manufacturer's protocol at the Molecular Genetics Core Facility at Children's Hospital Boston. The resulting .CEL files, which contain probe signal intensities of the samples, were preprocessed and normalized together using robust multiarray averaging (RMA), which returns the expression level of each probe set or gene as a positive real number in logarithmic base 2 scale.20 The complete microarray data are available from the NCBI Gene Expression Omnibus (GEO) as GSE19754. Principal component analysis (PCA) was used to survey gene variation across sample (time) and space, and sample variation across transcriptome space, separately.21 Because most of the time-points had replicate sample measurements, we computed the linear correlation between the unlogged replicate time profiles (A, B) for each probe set to assess the reproducibility of their time profile. We selected the probe set with the maximum replicate time profile correlation as the unique representative for genes with more than one probe set representative. The fold change of a probe set for days 10–14 vs. days 0–1 was computed as the average RMA signal of days 10–14 minus the average RMA signal of days 0–1. This fold change is in log base 2 scale, because the RMA signal is in log base 2 scale.
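For readers who want to reproduce the two computations just described, the following sketch shows one way they could be implemented. It assumes probe-set-by-time-point matrices of RMA signal (log2 scale) for the duplicate series; the array names and the simulated data are placeholders rather than the study's actual pipeline.

```python
import numpy as np

# Sketch of the replicate-reproducibility filter and the late-vs-early log2 fold
# change described above. `rep_a` and `rep_b` stand for the duplicate RMA series
# (probe sets x time-points, columns ordered day 0, 1, 4, 7, 10, 14); toy data here.

days = np.array([0, 1, 4, 7, 10, 14])
rng = np.random.default_rng(0)
rep_a = rng.normal(8, 1, size=(100, 6))
rep_b = rep_a + rng.normal(0, 0.2, size=(100, 6))

# 1) Reproducibility filter: correlate the *unlogged* replicate time profiles
#    and keep probe sets with r > 0.8.
lin_a, lin_b = 2 ** rep_a, 2 ** rep_b
r = np.array([np.corrcoef(a, b)[0, 1] for a, b in zip(lin_a, lin_b)])
keep = r > 0.8

# 2) log2 fold change, days 10-14 vs. days 0-1: mean RMA signal late minus mean
#    RMA signal early (a difference of log2 values is already a log2 ratio).
mean_signal = (rep_a + rep_b) / 2
late = mean_signal[:, np.isin(days, [10, 14])].mean(axis=1)
early = mean_signal[:, np.isin(days, [0, 1])].mean(axis=1)
log2_fc = late - early

print(keep.sum(), "reproducible probe sets; example log2 FC:", round(log2_fc[0], 2))
```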
Gene ontology (GO) enrichment analysis was performed using the Database for Annotation, Visualization, and Integrated Discovery (DAVID 6.7; http://david.abcc.ncifcrf.gov) on the mouse homologs of zebrafish genes, because the ontological characterization of genes is currently richer for the mouse than for the zebrafish.22 We used the mouse C2C12 myogenic differentiation microarray dataset (GEO, GSE19968) for comparative genomic analysis.23 [SUBTITLE] Quantitative Real-Time Polymerase Chain Reaction [SUBSECTION] Total RNA (1 μg) was extracted from the zebrafish muscle myogenic progenitor cells in culture at various time-points during differentiation and subjected to reverse transcription using the First Strand Synthesis Kit (Invitrogen). cDNA was then diluted in sterile water into tenfold serial dilutions, and real-time polymerase chain reaction (PCR) was performed (SYBR Green Master Mix; Applied Biosystems). Gene-specific primers that overlapped introns were used (refer to Supplementary Material, Table S5). All samples were amplified on a light cycler (Model 7900HT; ABI). Cycle time (CT) values were normalized to a zebrafish ef1α loading control. Significance was determined using two-tailed Student t-tests.
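As a worked illustration of the CT normalization described above (and of the ΔΔCT convention quoted later in the Figure 4 legend, where the log2 fold change for day N versus day 0 is the mean ΔCT at day 0 minus the mean ΔCT at day N), here is a minimal sketch; the triplicate CT values are invented for the example.

```python
import numpy as np

# Minimal ddCT sketch: CT values are normalized to the ef1a loading control, and
# the fold change of day N vs. day 0 is mean dCT(day 0) minus mean dCT(day N),
# which is already a log2-scale value. The CT numbers below are made up.

def log2_fold_change(ct_gene_day0, ct_ef1a_day0, ct_gene_dayN, ct_ef1a_dayN):
    d_ct_day0 = np.mean(np.asarray(ct_gene_day0) - np.asarray(ct_ef1a_day0))
    d_ct_dayN = np.mean(np.asarray(ct_gene_dayN) - np.asarray(ct_ef1a_dayN))
    return d_ct_day0 - d_ct_dayN   # positive = upregulated relative to day 0

# Hypothetical triplicate CT values for a structural gene such as acta1a:
print(log2_fold_change([27.1, 27.3, 27.0], [18.2, 18.1, 18.3],
                       [23.5, 23.4, 23.6], [18.0, 18.2, 18.1]))
```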
null
null
null
null
[ "Fish Lines", "Isolation of Zebrafish Myogenic Muscle Cells from Whole Dorsal Myotome", "Myogenic Differentiation of Adult Zebrafish Myogenic Progenitor Cells", "Immunohistochemistry", "RNA Isolation and Microarray Analysis", "Quantitative Real-Time Polymerase Chain Reaction", "RESULTS", "Isolation and Differentiation of Adult Zebrafish Myogenic Progenitor Cells", "Transcriptome Profiles of Cell Fusion and Differentiation of Zebrafish Myogenic Progenitor Cells", "Zebrafish Myogenic Progenitor Cells Express Myogenic Genes at Critical Time-Points during Differentiation", "Comparison of Zebrafish Myogenic Progenitor Cell Transcriptome with other Mammalian Myogenic Transcriptomes: Strengths and Limitations", "DISCUSSION" ]
[ "The α-actin–RFP transgenic fish line was a generous gift from H.J. Tsai (Taiwan National University) and has been described previously.17 Additional experiments were done utilizing the wild-type AB strain, which was obtained from the Children's Hospital aquatics program and maintained in their aquatics facility. All animal protocols were approved by the animal resources committee of Children's Hospital.", "For each cell preparation, 15–20 adult zebrafish were euthanized in tricaine (Sigma-Aldrich) and the whole zebrafish was placed in 100% ethanol for 30 seconds as the first step for sterilization. The fish's head, tail, and fins were removed with a scalpel, and the skin and internal organs were removed with forceps. The fish's body was sterilized in 10% bleach for 30 seconds and then washed twice in sterile phosphate-buffered saline (PBS) for another 30 seconds. Fish dorsal muscle and bone were minced with a scalpel and then transferred to a pre-weighed culture plate. For every gram of fish tissue, 3.5 ml of collagenase IV (10 mg/ml stock solution) and 3.5 ml of dispase (2.4 units/ml stock solution; Worthington Chemicals) were added and mixed by pipetting (Worthington). The solution was incubated at room temperature for 45 minutes (mixed every 10 minutes by pipette) before 10 ml of growth medium (L15; Sigma-Aldrich), 3% fetal calf serum, 100 μg/ml penicillin/streptomycin, 2 mM glutamine, and 0.8 mM CaCl2 (all Sigma-Aldrich) were added to the cells to quench the activity of the collagenase and dispase proteases. Debris was removed by filtering the cells through a 70-μm filter and then through two 40-μm filters (BD Biosciences). On each occasion, the filters were washed with 5 ml of L15 medium.\nThe cells were isolated by centrifugation at 1000 × g for 10 minutes at 9°C, and the supernatant was aspirated. The cells were then resuspended in 3 ml of red blood cell lysis buffer (Qiagen) and incubated for 3 minutes at room temperature before neutralization with 22 ml of L15 growth medium. The cells were then pelleted at 1000 × g for 10 minutes at 9°C, the supernatant aspirated, and the cell pellet resuspended in 3 ml of cold 1× PBS and layered on top of 4 ml of Ficoll-Paque gradient (GE Healthcare) in a 15-ml tube. Samples were then centrifuged at 1400 × g for 40 minutes at 9°C. A mononuclear cell layer was then extracted by pipette and washed with 10 ml of ice-cold 1× PBS. Afterwards, the cells were resuspended in 10 ml of ice-cold L15 buffer. The cell density was determined using an automated hemocytometer (Countess; Invitrogen), and the cell suspension was diluted in L15 growth medium.\nThe cells were then pre-plated on uncoated plates for 1 hour in a 28°C tissue culture incubator at 5% CO2. After pre-plating, the cellular supernatant (non-adherent cells) was removed and placed on laminin-coated plates (BD Biocoat). Alternatively, 0.1% gelatin-coated (porcine) plates can be used. The medium was changed every 3 days. The zebrafish myogenic progenitor cells were able to be grown for up to seven doublings before evidence of cellular senescence, with an average of four and five doublings per myoblast isolation. On average, a yield of 5–10 million live (trypan blue–negative) cells were isolated from each preparation of between 15 and 20 adult zebrafish. Lower yields of 100,000–500,000 live cells were isolated when using 1–5 adult zebrafish.\nAn alternative to the L15 growth medium was later used in zebrafish myogenic progenitor cell cultures and achieved the same results. 
Human skeletal myoblast growth medium (Promocell) that contained 20% fetal bovine serum (Atlanta Biologicals), 1× antibiotic–antimycotic (Invitrogen), and 1× Glutamax (Invitrogen), and supplemented with 3 ng/ml recombinant human fibroblast-like growth factor (rhFGF; Promega), can be used in lieu of the L15 growth medium.", "Approximately 300,000 cells/well were plated into six-well 0.1% gelatin-coated plates in 2 ml of growth medium and grown to 95% confluence. The medium was then changed to differentiation medium consisting of: 2% horse serum (Gibco) in Dulbecco modified Eagle medium (DMEM; Mediatech, Inc.) supplemented with 1× antibiotic–antimycotic (Invitrogen) and 1× Glutamax (Invitrogen). The differentiation medium was changed every other day, and cells were monitored for myotube fusion by phase and fluorescent microscopy. Multinucleated myotubes were observed during days 4–7.", "The following primary antibodies were used for immunohistochemistry of zebrafish myogenic progenitor cells: Pax3 mouse monoclonal (1:25; Developmental Studies Hybridoma Bank); Pax7 mouse monoclonal (1:25; Developmental Studies Hybridoma Bank); anti-MyoD1 rabbit polyclonal (1:50; Santa Cruz Biotechnology); and anti-myogenin rabbit polyclonal (1:50; M-225; Santa Cruz Biotechnology). The myogenin antibody has been characterized previously in early zebrafish myogenic progenitor cells.18 The zebrafish myod1 epitope has been shown to be recognized by the myf5 antibody (Santa Cruz Biotechnology).19\nApproximately, 100,000 cells were pre-plated on uncoated coverslides (Nunc, Lab-Tek) and, after a 1-hour pre-plating, the supernatant was plated onto 0.1% gelatin-collated coverslips. The following day, the zebrafish myogenic cells attached were fixed in 4% paraformaldehyde (Electron Microscopy Sciences) at 4°C for 10 minutes. To block nonspecific binding of the antibodies, slides were incubated for 30 minutes at room temperature in PBS + 10% goat serum. After blocking, the slides were incubated overnight at 4°C using the primary antibodies. Slides were washed three times in 1× PBS, and sections were incubated with Alexa 488 (anti-mouse IgG)- or 568 (anti-rabbit IgG)-conjugated goat secondary antibodies (Invitrogen) at a 1:500 dilution for 45 minutes at room temperature. The slides were then washed three times in 1× PBS before mounting in Vectashield with 4′,6-diamidino-2-phenylindole (DAPI) (Vector Laboratories). Slides were analyzed by microscope (E1000 Nikon Eclipse; Nikon) and OpenLab software.", "RNA was extracted directly from zebrafish myogenic progenitor cells in culture at various stages of differentiation using Tripure (Roche Applied Science), following the manufacturer's protocol. Zebrafish cDNA was hybridized to the Affymetrix GeneChip Zebrafish Genome Array (GenBank Release 36.0, June 2003) and processed following the manufacturer's protocol at the Molecular Genetics Core Facility at Children's Hospital Boston. The resulting. 
CEL files, which contain probe signal intensities of the samples, were preprocessed and normalized together using robust multiarray averaging (RMA), which returns the expression level of each probe set or gene as a positive real number in logarithmic base 2 scale.20 The complete microarray data are available from the NCBI Gene Expression Omnibus (GEO) as GSE19754.\nPrincipal component analysis (PCA) was used to survey gene variation across sample (time) and space, and sample variation across transcriptome space, separately.21 Because most of the time-points had replicate sample measurements, we computed the linear correlation between the unlogged replicate time profiles (A, B) for each probe set to assess the reproducibility of their time profile. We selected the probe set with the maximum replicate time profile correlation as the unique representative for genes with more than one probe set representative. The fold change of a probe set for days 10–14 vs. days 0–1 was computed as the average RMA signal of days 10–14 minus the average RMA signal of days 0–1. This fold change is in log base 2 scale, because the RMA signal is in log base 2 scale. Gene ontology (GO) enrichment analysis was performed using the Database for Annotation, Visualization, and Integrated Discovery (DAVID 6.7; http://david.abcc.ncifcrf.gov) on the mouse homologs of zebrafish genes, because the ontological characterization of genes is currently richer for the mouse than for the zebrafish.22 We used the mouse C2C12 myogenic differentiation microarray dataset (GEO, GSE19968) for comparative genomic analysis.23", "Total RNA (1 μg) was extracted from the zebrafish muscle myogenic progenitor cells in culture at various time-points during differentiation and subjected to reverse transcriptase using the First Strand Synthesis Kit (Invitrogen). cDNA was then diluted in sterile water into tenfold serial dilutions, and real-time polymerase chain reaction (PCR) was performed (SYBR Green Master Mix; Applied Biosystems). Gene-specific primers that overlapped introns were used (refer to Supplementary Material, Table S5). All samples were amplified on a light cycler (Model 7900HT; ABI). Cycle time (CT) values were normalized to a zebrafish ef1α loading control. All significant values were determined using Student t-tests (two-tailed).", "[SUBTITLE] Isolation and Differentiation of Adult Zebrafish Myogenic Progenitor Cells [SUBSECTION] In mammals, it is possible to identify muscle progenitor cells by their potential to differentiate into multinucleate myotubes in culture. To access this capability in adult zebrafish, myogenic progenitor cells were prepared from α-actin–RFP transgenic zebrafish, as outlined in Figure 1 and detailed in the previous section. Following cellular expansion, after reaching 95%+ confluency (after being plated at 300,000 cells 24 hours earlier), the myogenic progenitor cells were exposed to differentiation medium. Over the course of 14 days, cultured zebrafish muscle cells began to fuse and elongate (Fig. 2). The use of the α-actin–RFP transgenic line allowed for the easy identification of mature myotubes in contrast to any few remaining fibroblasts due to the skeletal muscle-specific enhancer that drives expression of the RFP reporter, as characterized elsewhere.17\nBasic protocol for the isolation of zebrafish skeletal muscle myogenic progenitor cells from whole dorsal myotome. Schematic showing the procedure for the isolation of skeletal myogenic progenitors from adult zebrafish dorsal muscle. 
Following euthanization of the zebrafish with tricaine, the fish are skinned, decapitated, de-finned, and de-gutted. A disassociation step in a mixture of collagenase IV and neutral protease breaks down cellular adhesion, whereas the use of a Ficoll gradient results in the isolation of a mononuclear cell layer. Pre-plating on uncoated plates was followed by an overnight (16-hour) transfer of the myoblast-enriched supernatant to gelatin-coated plates. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\nIn vitro differentiation of primary myoblasts isolated from α-actin–RFP adult dorsal muscle. (A–D) Phase contrast of zebrafish myogenic progenitor cells differentiating from day 0 to day 14. (E–H) RFP expression of the α-actin promoter indicates myotube formation and myogenic differentiation. (I–L) Immunofluorescent staining of day 0 α-actin–RFP myoblasts. Note that very few cells express high levels of the α-actin RFP transgene, as it undergoes higher levels of transcriptional expression during myogenic differentiation. Green fluorescent staining and open arrowheads demarcate myogenic markers (pax3, pax7, myod1, and myogenin). (M) Quantification of 500 DAPI-stained (blue) nuclei of the results from day 0 myoblast immunofluorescent staining in (I)–(L). Immunostaining was performed in triplicate. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\nInitial plating of primary myoblasts from the α-actin–RFP fish resulted in very few RFP-positive cells either attached to the plates or free floating in the medium (Fig. 2A–H). At day 4, several clusters of RFP-positive cells emerged as the myoblasts began to undergo cellular fusion. The detection of the RFP (α-actin) reporter was a strong indicator that the zebrafish myogenic progenitors had begun to activate transcripts essential for myoblast fusion and myotube structure, as the α-actin gene (promoter for RFP) expression is most robust in mature myofibers.17, 24 By day 7, long multinucleated RFP+ myotubes were identified that further expanded into twitching myotube clusters by day 14 (Fig. 2H).\nTo further characterize what stage of myogenesis these adult zebrafish myogenic progenitors resided in at the initial time of isolation (day 0), the cells were probed using immunofluorescence with the myogenic determination markers pax3 and pax7. Mammalian pax3 and pax7 function as determinants of the transition from embryonic myoblasts into muscle satellite cells, whereas, in zebrafish, these proteins function in the determination of fast muscle fibers used for swimming.11 Day 0 zebrafish myogenic progenitor cells had low levels of pax3 (1.53%) and pax7 (2.86%) protein expression, as quantified by immunofluorescence with monoclonal specific antibodies (Fig. 2I, J, and M). Conversely, these day 0 myogenic progenitors had significant levels of myod1 (74.86%), indicating that these cells were further committed than mammalian satellite cells to form myotubes (Fig. 2K and M). In addition, these cells had low expression of myogenin (3.27%) (Fig. 2L and M), a marker of myofiber determination. These experiments demonstrate that isolated myogenic progenitor cells can successfully fuse in cell culture as visualized by the α-actin–RFP fluorescent reporter, similar to the myoblast culture of larger fish species, such as the Atlantic salmon.13
[SUBTITLE] Transcriptome Profiles of Cell Fusion and Differentiation of Zebrafish Myogenic Progenitor Cells [SUBSECTION]
To identify the myogenic transcriptome of zebrafish myogenic progenitor cells from cell proliferation through cell fusion and differentiation into mature myotubes, total mRNA was interrogated by microarray at different time-points (days 0, 1, 4, 7, 10, and 14) from zebrafish myogenic progenitor cells of the α-actin–RFP transgenic line as the cells underwent myogenic differentiation in culture.\nDuplicate biological measurements (A, B) were made for most time-points. For each microarray gene probe set, we computed the correlation between duplicate profiles to assess the reproducibility of the myogenic developmental profile of the gene. There were 5960 microarray gene probe sets with a correlation >0.8 between duplicate profiles. Unless otherwise noted, this is the primary microarray gene set used in subsequent analyses. PCA of the standardized temporal expression profiles of these genes shows them to have two large-scale temporal patterns (Fig. 3). Fifty-six percent (3340 genes, 2985 unique) have a profile that largely decreases with time (green dots, left hemisphere of PCA plot in Fig. 3A) and are enriched for development and cell signaling receptor ontologic terms (Supplementary Material, Table S1). Forty-four percent (2620 genes, 2414 unique) have a profile that is largely increasing with time (magenta dots, right hemisphere of PCA plot in Fig. 3A) and are enriched for oxidoreductive and metabolic enzyme ontologic terms (Supplementary Material, Table S2). The majority of genes change their expression level at day 4: high to low, and vice versa (Fig. 3B). Phenotypically, zebrafish muscle cells at day 4 of myogenic differentiation are in the initial stages of myotube fusion. To identify the active genes at day 4, we performed a differential analysis of day 4 vs. the other days (0, 1, 7, 10, and 14). Forty-seven unique genes were significantly upregulated at day 4 relative to the other days and were enriched for M-phase and mitosis ontologic terms (Supplementary Material, Table S3). Sixty unique genes were significantly downregulated at day 4 relative to the other days and were enriched for collagen and extracellular matrix ontological terms (Supplementary Material, Table S4).
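One way to reproduce the gene-wise PCA split described above is sketched below. The expression matrix is simulated, the assignment of the PC1 sign to the "decreasing" versus "increasing" pattern is data-dependent, and nothing here should be read as the authors' exact code.

```python
import numpy as np

# Sketch: standardize each reproducible gene's temporal profile, run PCA across the
# time-points, and split genes into two large-scale temporal patterns by the sign of
# their first principal-component score. `expr` (genes x time-points) is simulated.

rng = np.random.default_rng(1)
expr = rng.normal(8, 1, size=(5960, 6))                 # days 0, 1, 4, 7, 10, 14

profiles = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)

# PCA via SVD of the column-centred, standardized profile matrix.
centred = profiles - profiles.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
pc_scores = centred @ vt.T                              # gene scores on each PC

# Which sign corresponds to the "decreasing with time" group depends on the data,
# since the sign of a principal component is arbitrary.
group_one = pc_scores[:, 0] < 0
group_two = ~group_one
print(group_one.sum(), "genes in one pattern,", group_two.sum(), "in the other")
```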
In addition, we examined the microarray expression profile of nine reproducible transcripts that have been reported previously to be differentially expressed during myogenesis.25\nMicroarray analysis of zebrafish myogenic progenitor cell differentiation transcriptome. (A) Principal components analysis (PCA) showing the principal components 1 vs. 2 plot of the zebrafish muscle cell differentiation microarray data of 5960 reproducible genes (shown as colored dots) in time, indicating two large-scale temporal patterns of expression. Genes on the left hemisphere (green) are highly expressed at days 0–1, and decrease over time. Genes on the right hemisphere (magenta) show low expression at days 0–1, and increase over time. The principal components axes are a linear combination of the time-points. (B) The average expression profile of the genes from the two large-scale temporal patterns of expression. (C) Standardized expression for upregulation (red) vs. downregulation (green) of nine differentially regulated myogenic genes. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\n[SUBTITLE] Zebrafish Myogenic Progenitor Cells Express Myogenic Genes at Critical Time-Points during Differentiation [SUBSECTION] After myogenic progenitor cell microarray analysis of the zebrafish, samples were validated by quantitative real-time PCR for several important myogenic genes using exon-overlapping primers.
Several myogenic structural (acta1a, desma), cell-signaling (cav3, cxcr4a), and transcription (myog, pax3a) factors were chosen for validation. In each case, the gene followed the expected microarray trend across myogenic differentiation (Fig. 4). The myogenic structural genes (acta1a and desma) were all upregulated as the zebrafish myogenic progenitor cells underwent myogenic fusion and myotube formation. As expected, the myogenic stem cell marker (cxcr4a) mRNA was downregulated as the zebrafish muscle cells underwent fusion, whereas, conversely, the myogenic transcription factor myogenin (myog) was upregulated. In addition, another marker of early myoblasts, pax3a, had significantly reduced expression as the cells underwent myogenic differentiation.\nValidation of myogenic differentiation in the zebrafish myogenic progenitor cells by microarray and real-time PCR. (A) Real-time quantitative PCR expression (magenta dashed line) levels of six myogenic differentiation factors (acta1a, cav3, cxcr4a, desma, myog, and pax3a) across time (x-axis; days 0–14) as compared with microarray data (green solid line). The y-axis is logarithm base 2 scale fold change of each time-point relative to day 0, which is the average ΔCT (day 0) minus average ΔCT (day N) value for quantitative PCR data (ddCT), and average RMA signal (day N) minus average RMA signal (day 0) for the microarray data. The quantitative PCR CT values were normalized to the zebrafish housekeeping gene ef1α per condition. Note that acta1 and cxcr4 primers were specific to both a and b isoforms present in the zebrafish genome. (B) The table compares the log2 expression fold change of days 0–1 vs. 10–14 of the six myogenic differentiation factors between quantitative PCR and microarray data. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\n[SUBTITLE] Comparison of Zebrafish Myogenic Progenitor Cell Transcriptome with other Mammalian Myogenic Transcriptomes: Strengths and Limitations [SUBSECTION]
To gain insights into similarities between zebrafish and mammalian myogenic cells with respect to changes in gene expression during in vitro differentiation, we compared the zebrafish myogenic differentiation transcriptome data to a recent mouse C2C12 myogenic differentiation microarray dataset from the GEO, GSE19968.23 PCA of samples in transcriptome space of both datasets, done separately, showed a distinct dichotomy between the earlier vs. later time-points of myogenic differentiation along the first principal component (PC1), the direction of maximum sample variation (Fig. 5A). There is a clear transcriptome scale distinction when comparing days 0–1 vs. days 7–10 in the zebrafish, and between myoblasts and differentiated myotubes at day 4 in C2C12. There are 3784 homologous genes in common between the datasets, and 1400 have a correlation of >0.8 between replicate time profiles in both datasets. Of these 1400 reproducible genes, we investigated the concordance of differential expression of earlier vs. later time-points during myogenic differentiation. We computed the fold change of days 10–14 relative to days 0–1 in the zebrafish, and of myotubes at day 4 relative to myoblasts in C2C12. There was significant concordance among genes that were twofold magnitude changed at earlier vs. later time-points in both datasets: Fisher exact test P-value <7.0 × 10−7 (Fig. 5B).\nComparison of zebrafish and mouse C2C12 myogenic development. (A) Principal components analysis of samples in transcriptome space showing principal components 1 vs. 2, and 1 vs. 3 plots for the zebrafish and C2C12 (from Gene Expression Omnibus, GSE19968) data show transcriptome scale distinctions between earlier vs. later time-points of muscle development: days 0–1 vs. days 7–10 in zebrafish, and myoblasts vs. differentiated myotubes at day 4 in C2C12. Zebrafish samples are labeled by the time-point following myogenic differentiation (days 0–14). C2C12 samples are labeled as myoblasts (B), and time-points following myogenic differentiation (days 0, 1, and 4). (B) Contingency table of genes ≥2-fold magnitude changed in earlier vs. later time-points of 1400 reproducible genes common to both datasets: fold change of days 10–14 relative to days 0–1 in zebrafish, and fold change of myotubes at day 4 relative to myoblasts in C2C12.
[Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]", "In mammals, it is possible to identify muscle progenitor cells by their potential to differentiate into multinucleate myotubes in culture. To access this capability in adult zebrafish, myogenic progenitor cells were prepared from α-actin–RFP transgenic zebrafish, as outlined in Figure 1 and detailed in the previous section. Following cellular expansion, after reaching 95%+ confluency (after being plated at 300,000 cells 24 hours earlier), the myogenic progenitor cells were exposed to differentiation medium. Over the course of 14 days, cultured zebrafish muscle cells began to fuse and elongate (Fig. 2). The use of the α-actin–RFP transgenic line allowed for the easy identification of mature myotubes in contrast to any few remaining fibroblasts due to the skeletal muscle-specific enhancer that drives expression of the RFP reporter, as characterized elsewhere.17\nBasic protocol for the isolation of zebrafish skeletal muscle myogenic progenitor cells from whole dorsal myotome. Schematic showing the procedure for the isolation of skeletal myogenic progenitors from adult zebrafish dorsal muscle. Following euthanization of the zebrafish with tricaine, the fish are skinned, decapitated, de-finned, and de-gutted. A disassociation step in a mixture of collagenase IV and neutral protease breaks down cellular adhesion, whereas the use of a Ficoll gradient results in the isolation of a mononuclear cell layer. Pre-plating on uncoated plates was followed by an overnight (16-hour) transfer of the myoblast-enriched supernatant to gelatin-coated plates. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\nIn vitro differentiation of primary myoblasts isolated from α-actin–RFP adult dorsal muscle. (A–D) Phase contrast of zebrafish myogenic progenitor cells differentiating from day 0 to day 14. (E–H) RFP expression of the α-actin promoter indicates myotube formation and myogenic differentiation. (I–L) Immunofluorescent staining of day 0 α-actin-–RFP myoblasts. Note that very few cells express high levels of the α-actin RFP transgene, as it undergoes higher levels of transcriptional expression during myogenic differentiation. Green fluorescent staining and open arrowheads demarcate myogenic markers (pax3, pax7, myod1, and myogenin). (M) Quantification of 500 DAPI-stained (blue) nuclei of the results from day 0 myoblast immunofluorescent staining in (I)–(L). Immunostaining was performed in triplicate. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\nInitial plating of primary myoblasts from the α-actin–RFP fish resulted in very few RFP-positive cells either attached to the plates or free floating in the medium (Fig. 2A–H). At day 4, several clusters of RFP-positive cells emerged as the myoblasts began to undergo cellular fusion. The detection of RFP (α-actin) reporter was a strong indicator that the zebrafish myogenic progenitors had begun to activate transcripts essential for myoblast fusion and myotube structure, as the α-actin gene (promoter for RFP) expression is most robust in mature myofibers.17, 24 By day 7, long multinucleated RFP+ myotubes were identified that further expanded into twitching myotube clusters by day 14 (Fig. 
2H).\nTo further characterize what stage of myogenesis these adult zebrafish myogenic progenitors resided in at the initial time of isolation (day 0), the cells were probed using immunofluoresence with the myogenic determination markers pax3 and pax7. Mammalian pax3 and pax7 function as determinants of the transition from embryonic myoblasts into muscle satellite cells, whereas, in zebrafish, these proteins function in the determination of fast muscle fibers used for swimming.11 Day 0 zebrafish myogenic progenitor cells had low levels of pax3 (1.53%) and pax7 (2.86%) protein expression, as quantified by immunofluorescence with monoclonal specific antibodies (Fig. 2I, J, and M). Conversely, these day 0 myogenic progenitors had significant levels of myod1 (74.86%), indicating that these cells were further committed than mammalian satellite cells to form myotubes (Fig. 2K and M). In addition, these cells had low expression of myogenin (3.27%) (Fig. 2L and M), a marker of myofiber determination. These experiments demonstrate that isolated myogenic progenitor cells can successfully fuse in cell culture as visualized by the α-actin–RFP fluorescent reporter, similar to the myoblast culture of larger fish species, such as the Atlantic salmon.13", "To identify the myogenic transcriptome of zebrafish myogenic progenitor cells from cell proliferation through cell fusion and differentiation into mature myotubes, total mRNA was interrogated by microarray at different time-points (days 0, 1, 4, 7, 10, and 14) from zebrafish myogenic progenitor cells of the α-actin–RFP transgenic line as the cells underwent myogenic differentiation in culture.\nDuplicate biological measurements (A, B) were made for most time-points. For each microarray gene probe set, we computed the correlation between duplicate profiles to assess the reproducibility of the myogenic developmental profile of the gene. There were 5960 microarray gene probe sets with a correlation >0.8 between duplicate profiles. Unless otherwise noted, this is the primary microarray gene set used in subsequent analyses. PCA of the standardized temporal expression profiles of these genes show them to have two large-scale temporal patterns (Fig. 3). Fifty-six percent (3340 genes, 2985 unique) have a profile that largely decreases with time (green dots, left hemisphere of PCA plot in Fig. 3A) and are enriched for development and cell signaling receptor ontologic terms (Supplemental Material, Table S1). Forty-four percent (2620 genes, 2414 unique) have a profile that is largely increasing with time (magenta dots, right hemisphere of PCA plot in Fig. 3A) and are enriched for oxidoreductive and metabolic enzyme ontologic terms (Supplementary Material, Table S2). The majority of genes change their expression level at day 4: high to low, and vice versa (Fig. 3B). Phenotypically, zebrafish muscle cells at day 4 of myogenic differentiation are in the initial stages of myotube fusion. To identify the active genes at day 4, we performed a differential analysis of day 4 vs. the other days (0, 1, 7, 10, and 14). Forty-seven unique genes were significantly upregulated at day 4 relative to the other days and were enriched for M-phase and mitosis ontologic terms (Supplementary Material, Table S3). Sixty unique genes were significantly downregulated at day 4 relative to the other days and were enriched for collagen and extracellular matrix ontological terms (Supplementary Material, Table S4). 
In addition, we examined the microarray expression profile of nine reproducible transcripts that have been reported previously to be differentially expressed during myogenesis.25\nMicroarray analysis of zebrafish myogenic progenitor cell differentiation transcriptome. (A) Principal components analysis (PCA) showing the principal components 1 vs. 2 plot of the zebrafish muscle cell differentiation microarray data of 5960 reproducible genes (shown as colored dots) in time and indicates two large-scale temporal patterns of expression. Genes on the left hemisphere (green) are highly expressed at days 0–1, and decrease over time. Genes on the right hemisphere (magenta) show low expression at days 0–1, and increase over time. The principal components axes are a linear combination of the time-points. (B) The average expression profile of the genes from the two large-scale temporal patterns of expression. (C) Standardized expression for upregulation (red) vs. downregulation (green) of nine differentially regulated myogenic genes. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]", "After myogenic progenitor cell microarray analysis of the zebrafish, samples were validated by quantitative real-time PCR for several important myogenic genes using exon-overlapping primers. Several myogenic structural (acta1a, desma), cell-signaling (cav3, cxcr4a), and transcription (myog, pax3a) factors were chosen for validation. In each case, each gene followed the expected microarray trend across myogenic differentiation (Fig. 4). The myogenic structural genes (acta1a and desma) were all upregulated as the zebrafish myogenic progenitor cells underwent myogenic fusion and myotube formation. As expected, the myogenic stem cell marker (cxcr4a) mRNA was downregulated as the zebrafish muscle cells underwent fusion, whereas, conversely, the myogenic transcription factor myogenin (myog) was upregulated. In addition, another marker of early myoblasts, pax3a, had significantly reduced expression as the cells underwent myogenic differentiation.\nValidation of myogenic differentiation in the zebrafish myogenic progenitor cells by microarray and real-time PCR. (A) Real-time quantitative PCR expression (magenta dashed line) levels of six myogenic differentiation factors (acta1a, cav3, cxcr4a, desma, myog, and pax3a) across time (x-axis; days 0–14) as compared with microarray data (green solid line). The y-axis is logarithm base 2 scale fold change of each time-point relative to day 0, which is the average ΔCT (day 0) minus average ΔCT (day N) value for quantitative PCR data (ddCT), and average RMA signal (day N) minus average RMA signal (day 0) for the microarray data. The quantitative PCR CT values were normalized to the zebrafish housekeeping gene ef1α housekeeping per condition. Note that acta1 and cxcr4 primers were specific to both a and b isoforms present in the zebrafish genome. (B) The table compares the log2 expression fold change of days 0–1 vs. 10–14 of the six myogenic differentiation factors between quantitative PCR and microarray data. 
[Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]", "To gain insights into similarities between zebrafish and mammalian myogenic cells with respect to changes in gene expression during in vitro differentiation, we compared the zebrafish myogenic differentiation transcriptome data to a recent mouse C2C12 myogenic differentiation microarray dataset from the GEO, GSE19968.23 PCA of samples in transcriptome space of both datasets, done separately, showed a distinct dichotomy between the earlier vs. later time-points of myogenic differentiation along the first principal component (PC1), the direction of maximum sample variation (Fig. 5A). There is a clear transcriptome scale distinction when comparing days 0–1 vs. days 7–10 in the zebrafish, and between myoblasts and differentiated myotubes at day 4 in C2C12. There are 3784 homologous genes in common between the datasets, and 1400 have a correlation of >0.8 between replicate time profiles in both datasets, respectively. Of these 1400 reproducible genes, we investigated the concordance of differential expression of earlier vs. later time-points during myogenic differentiation. We computed the fold change of days 10–14 relative to days 0–1 in the zebrafish, and of myotubes at day 4 relative to myoblasts in C2C12. There was significant concordance among genes that were twofold magnitude changed at earlier vs. later time-points in both datasets: Fisher exact test P-value <7.0 × 10−7 (Fig. 5B).\nComparison of zebrafish and mouse C2C12 myogenic development. (A) Principal components analysis of samples in transcriptome space showing principal components 1 vs. 2, and 1 vs. 3 plots for the zebrafish and C2C12 (from Gene Expression Omnibus, GSE19968) data show transcriptome scale distinctions between earlier vs. later time-points of muscle development: days 0–1 vs. days 7–10 in zebrafish, and myoblasts vs. differentiated myotubes at day 4 in C2C12. Zebrafish samples are labeled by the time-point following myogenic differentiation (days 0–14). C2C12 samples are labeled as myoblasts (B), and time-points following myogenic differentiation (days 0, 1, and 4). (B) Contingency table of genes ≥2-fold magnitude changed in earlier vs. later time-points of 1400 reproducible genes common to both datasets: fold change of days 10–14 relative to days 0–1 in zebrafish, and fold change of myotubes at day 4 relative to myoblasts in C2C12. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]", "Gene expression profiles of early differentiating zebrafish myogenic progenitor cells show expression profiles similar to those expected for mammalian muscle, namely that the expression of many sarcomeric proteins is strongly upregulated with differentiation. In comparison with microarray data from mouse C2C12 myoblast differentiation,25 many of the same myogenic differentiation factors, such as Pax3, Myf5, and MyoD1, decrease in transcript. 
Although Pax3 is a determinant of embryonic mouse myoblasts, recent studies involving the use of Pax3–green fluorescent protein (GFP) knock-in mice have revealed that a very small population of Pax3-positive myogenic progenitors does persist in adult muscle and are capable of restoring skeletal muscle after injury.26 In zebrafish dorsal muscle, a pax3- and pax7-positive myogenic progenitor population is essential for the expansion of fast- and slow-twitch myofibers through an upstream regulation of myf5 and myod1.27 It is likely that a similar population of pax3- and/or pax7-positive myogenic progenitors exists in adult zebrafish skeletal muscle, and will contribute to myofiber formation following injury. In addition, the presence and subsequent downregulation of a cxcr4a (a homolog of mammalian Cxcr4) cell population during myogenic differentiation is consistent with its role as a myogenic progenitor marker that can be used in myoblast transplantation.28 A transparent zebrafish strain that completely lacks pigmentation, the casper line, allows for the transplantation and long-term monitoring of fluorescently labeled cell populations into adult fish. One can envision that, after the isolation of adult zebrafish myogenic progenitor cells from skeletal muscle transgenic fish lines, engraftment of different populations could be observed in vivo, allowing for the capture in real time of the behavior of transplanted cells. This information is essential for the optimization of cell transplantation approaches (now available with the development of this zebrafish myogenic progenitor isolation protocol) which cannot be visualized in mice at the level of resolution that can be achieved in zebrafish.\nIn mice, many procedures have been used to purify muscle progenitor cells, although, in all cases, the purified population is still heterogeneous, requiring additional pre-plating purification to enrich for cells with myogenic potential.16 We have modified the mammalian pre-plating technique, added a Ficoll-gradient procedure to decrease bacterial contamination, and demonstrated that myogenic cells can be isolated and differentiated in cell culture. These results show that zebrafish have adult muscle progenitor cells that can be isolated and differentiated in cell culture.\nIn conclusion, we have isolated a myogenic progenitor cell in zebrafish dorsal muscle. We have shown that gene expression profiles in these zebrafish myogenic progenitor cells are similar to those of mammals. This successful culture and differentiation of a myogenic progenitor population expands the utility of the zebrafish in the study of adult skeletal muscle mutants. High-throughput screening of chemical libraries has allowed researchers to correct mutations in zebrafish mutants and holds promise for the treatment of muscular dystrophy and myopathies.29 Given the gaps in the zebrafish genome annotation that have frustrated researchers,30 it is likely that the release of a well-annotated copy of the zebrafish genome will lead to improved microarray platforms and increased use of the zebrafish in large-scale transcriptome studies. Until then, rigorous validation of zebrafish transcriptome data by quantitative reverse transcription PCR is essential for drawing valid conclusions from zebrafish microarray transcriptome experiments. Further studies using zebrafish myogenic progenitor cells to identify novel drug compounds will show them to be an attractive, cost-effective alternative to large-scale mammalian studies." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "METHODS", "Fish Lines", "Isolation of Zebrafish Myogenic Muscle Cells from Whole Dorsal Myotome", "Myogenic Differentiation of Adult Zebrafish Myogenic Progenitor Cells", "Immunohistochemistry", "RNA Isolation and Microarray Analysis", "Quantitative Real-Time Polymerase Chain Reaction", "RESULTS", "Isolation and Differentiation of Adult Zebrafish Myogenic Progenitor Cells", "Transcriptome Profiles of Cell Fusion and Differentiation of Zebrafish Myogenic Progenitor Cells", "Zebrafish Myogenic Progenitor Cells Express Myogenic Genes at Critical Time-Points during Differentiation", "Comparison of Zebrafish Myogenic Progenitor Cell Transcriptome with other Mammalian Myogenic Transcriptomes: Strengths and Limitations", "DISCUSSION" ]
[ "[SUBTITLE] Fish Lines [SUBSECTION] The α-actin–RFP transgenic fish line was a generous gift from H.J. Tsai (Taiwan National University) and has been described previously.17 Additional experiments were done utilizing the wild-type AB strain, which was obtained from the Children's Hospital aquatics program and maintained in their aquatics facility. All animal protocols were approved by the animal resources committee of Children's Hospital.\nThe α-actin–RFP transgenic fish line was a generous gift from H.J. Tsai (Taiwan National University) and has been described previously.17 Additional experiments were done utilizing the wild-type AB strain, which was obtained from the Children's Hospital aquatics program and maintained in their aquatics facility. All animal protocols were approved by the animal resources committee of Children's Hospital.\n[SUBTITLE] Isolation of Zebrafish Myogenic Muscle Cells from Whole Dorsal Myotome [SUBSECTION] For each cell preparation, 15–20 adult zebrafish were euthanized in tricaine (Sigma-Aldrich) and the whole zebrafish was placed in 100% ethanol for 30 seconds as the first step for sterilization. The fish's head, tail, and fins were removed with a scalpel, and the skin and internal organs were removed with forceps. The fish's body was sterilized in 10% bleach for 30 seconds and then washed twice in sterile phosphate-buffered saline (PBS) for another 30 seconds. Fish dorsal muscle and bone were minced with a scalpel and then transferred to a pre-weighed culture plate. For every gram of fish tissue, 3.5 ml of collagenase IV (10 mg/ml stock solution) and 3.5 ml of dispase (2.4 units/ml stock solution; Worthington Chemicals) were added and mixed by pipetting (Worthington). The solution was incubated at room temperature for 45 minutes (mixed every 10 minutes by pipette) before 10 ml of growth medium (L15; Sigma-Aldrich), 3% fetal calf serum, 100 μg/ml penicillin/streptomycin, 2 mM glutamine, and 0.8 mM CaCl2 (all Sigma-Aldrich) were added to the cells to quench the activity of the collagenase and dispase proteases. Debris was removed by filtering the cells through a 70-μm filter and then through two 40-μm filters (BD Biosciences). On each occasion, the filters were washed with 5 ml of L15 medium.\nThe cells were isolated by centrifugation at 1000 × g for 10 minutes at 9°C, and the supernatant was aspirated. The cells were then resuspended in 3 ml of red blood cell lysis buffer (Qiagen) and incubated for 3 minutes at room temperature before neutralization with 22 ml of L15 growth medium. The cells were then pelleted at 1000 × g for 10 minutes at 9°C, the supernatant aspirated, and the cell pellet resuspended in 3 ml of cold 1× PBS and layered on top of 4 ml of Ficoll-Paque gradient (GE Healthcare) in a 15-ml tube. Samples were then centrifuged at 1400 × g for 40 minutes at 9°C. A mononuclear cell layer was then extracted by pipette and washed with 10 ml of ice-cold 1× PBS. Afterwards, the cells were resuspended in 10 ml of ice-cold L15 buffer. The cell density was determined using an automated hemocytometer (Countess; Invitrogen), and the cell suspension was diluted in L15 growth medium.\nThe cells were then pre-plated on uncoated plates for 1 hour in a 28°C tissue culture incubator at 5% CO2. After pre-plating, the cellular supernatant (non-adherent cells) was removed and placed on laminin-coated plates (BD Biocoat). Alternatively, 0.1% gelatin-coated (porcine) plates can be used. The medium was changed every 3 days. 
The zebrafish myogenic progenitor cells could be grown for up to seven doublings before evidence of cellular senescence, with an average of four to five doublings per myoblast isolation. On average, a yield of 5–10 million live (trypan blue–negative) cells was obtained from each preparation of between 15 and 20 adult zebrafish. Lower yields of 100,000–500,000 live cells were obtained when using 1–5 adult zebrafish.\nAn alternative to the L15 growth medium was later used in zebrafish myogenic progenitor cell cultures and achieved the same results: human skeletal myoblast growth medium (Promocell) containing 20% fetal bovine serum (Atlanta Biologicals), 1× antibiotic–antimycotic (Invitrogen), and 1× Glutamax (Invitrogen), supplemented with 3 ng/ml recombinant human fibroblast growth factor (rhFGF; Promega), can be used in lieu of the L15 growth medium.\n[SUBTITLE] Myogenic Differentiation of Adult Zebrafish Myogenic Progenitor Cells [SUBSECTION] Approximately 300,000 cells/well were plated into six-well 0.1% gelatin-coated plates in 2 ml of growth medium and grown to 95% confluence. The medium was then changed to differentiation medium consisting of 2% horse serum (Gibco) in Dulbecco modified Eagle medium (DMEM; Mediatech, Inc.) supplemented with 1× antibiotic–antimycotic (Invitrogen) and 1× Glutamax (Invitrogen). The differentiation medium was changed every other day, and cells were monitored for myotube fusion by phase and fluorescent microscopy. Multinucleated myotubes were observed during days 4–7.\n[SUBTITLE] Immunohistochemistry [SUBSECTION] The following primary antibodies were used for immunohistochemistry of zebrafish myogenic progenitor cells: Pax3 mouse monoclonal (1:25; Developmental Studies Hybridoma Bank); Pax7 mouse monoclonal (1:25; Developmental Studies Hybridoma Bank); anti-MyoD1 rabbit polyclonal (1:50; Santa Cruz Biotechnology); and anti-myogenin rabbit polyclonal (1:50; M-225; Santa Cruz Biotechnology). The myogenin antibody has been characterized previously in early zebrafish myogenic progenitor cells.18 The zebrafish myod1 epitope has been shown to be recognized by the myf5 antibody (Santa Cruz Biotechnology).19\nApproximately 100,000 cells were pre-plated on uncoated coverslides (Nunc, Lab-Tek) and, after a 1-hour pre-plating, the supernatant was plated onto 0.1% gelatin-coated coverslips. The following day, the attached zebrafish myogenic cells were fixed in 4% paraformaldehyde (Electron Microscopy Sciences) at 4°C for 10 minutes. To block nonspecific binding of the antibodies, slides were incubated for 30 minutes at room temperature in PBS + 10% goat serum. After blocking, the slides were incubated overnight at 4°C using the primary antibodies. Slides were washed three times in 1× PBS, and sections were incubated with Alexa 488 (anti-mouse IgG)- or 568 (anti-rabbit IgG)-conjugated goat secondary antibodies (Invitrogen) at a 1:500 dilution for 45 minutes at room temperature.
The slides were then washed three times in 1× PBS before mounting in Vectashield with 4′,6-diamidino-2-phenylindole (DAPI) (Vector Laboratories). Slides were analyzed by microscope (E1000 Nikon Eclipse; Nikon) and OpenLab software.\n[SUBTITLE] RNA Isolation and Microarray Analysis [SUBSECTION] RNA was extracted directly from zebrafish myogenic progenitor cells in culture at various stages of differentiation using Tripure (Roche Applied Science), following the manufacturer's protocol. Zebrafish cDNA was hybridized to the Affymetrix GeneChip Zebrafish Genome Array (GenBank Release 36.0, June 2003) and processed following the manufacturer's protocol at the Molecular Genetics Core Facility at Children's Hospital Boston. The resulting .CEL files, which contain probe signal intensities of the samples, were preprocessed and normalized together using robust multiarray averaging (RMA), which returns the expression level of each probe set or gene as a positive real number in logarithmic base 2 scale.20 The complete microarray data are available from the NCBI Gene Expression Omnibus (GEO) as GSE19754.\nPrincipal component analysis (PCA) was used to survey gene variation across sample (time) space and sample variation across transcriptome space, separately.21 Because most of the time-points had replicate sample measurements, we computed the linear correlation between the unlogged replicate time profiles (A, B) for each probe set to assess the reproducibility of their time profile. We selected the probe set with the maximum replicate time profile correlation as the unique representative for genes with more than one probe set representative. The fold change of a probe set for days 10–14 vs. days 0–1 was computed as the average RMA signal of days 10–14 minus the average RMA signal of days 0–1. This fold change is in log base 2 scale, because the RMA signal is in log base 2 scale.
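The replicate-reproducibility filter and the fold-change definition described in the preceding paragraph can be expressed compactly. The sketch below uses simulated RMA values and hypothetical column names; pandas and numpy stand in for whatever software was actually used, and nothing here is the authors' code.

```python
# Illustrative sketch: replicate-correlation filtering, representative probe-set
# selection per gene, and log2 fold change of days 10-14 vs. days 0-1 from RMA signals.
import numpy as np
import pandas as pd

days = [0, 1, 4, 7, 10, 14]
rng = np.random.default_rng(1)
n_probes = 1000

# Hypothetical RMA (log2) expression: a smooth per-probe-set time trend plus
# small replicate noise, one column per day/replicate combination.
base = rng.normal(8, 1, (n_probes, 1)) + rng.normal(0, 1, (n_probes, 1)) * np.linspace(-1, 1, len(days))
data = {}
for j, d in enumerate(days):
    for rep in ("A", "B"):
        data[f"d{d}_{rep}"] = base[:, j] + rng.normal(0, 0.2, n_probes)
rma = pd.DataFrame(data)
rma["gene"] = [f"gene{i % 400}" for i in range(n_probes)]   # several probe sets per gene

# Correlation between replicate time profiles, computed on unlogged signals.
prof_a = (2 ** rma[[f"d{d}_A" for d in days]]).to_numpy()
prof_b = (2 ** rma[[f"d{d}_B" for d in days]]).to_numpy()
rma["rep_corr"] = [np.corrcoef(a, b)[0, 1] for a, b in zip(prof_a, prof_b)]

# Keep reproducible probe sets (correlation > 0.8) and, for genes with more than
# one probe set, keep the probe set with the highest replicate correlation.
reproducible = rma[rma["rep_corr"] > 0.8]
representative = reproducible.sort_values("rep_corr").groupby("gene").tail(1)

# Log2 fold change: average RMA signal of days 10-14 minus average of days 0-1.
late = representative[["d10_A", "d10_B", "d14_A", "d14_B"]].mean(axis=1)
early = representative[["d0_A", "d0_B", "d1_A", "d1_B"]].mean(axis=1)
representative = representative.assign(log2_fc=late - early)
print(representative[["gene", "rep_corr", "log2_fc"]].head())
```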
Gene ontology (GO) enrichment analysis was performed using the Database for Annotation, Visualization, and Integrated Discovery (DAVID 6.7; http://david.abcc.ncifcrf.gov) on the mouse homologs of zebrafish genes, because the ontological characterization of genes is currently richer for the mouse than for the zebrafish.22 We used the mouse C2C12 myogenic differentiation microarray dataset (GEO, GSE19968) for comparative genomic analysis.23\n[SUBTITLE] Quantitative Real-Time Polymerase Chain Reaction [SUBSECTION] Total RNA (1 μg) was extracted from the zebrafish muscle myogenic progenitor cells in culture at various time-points during differentiation and subjected to reverse transcription using the First Strand Synthesis Kit (Invitrogen). cDNA was then diluted in sterile water into tenfold serial dilutions, and real-time polymerase chain reaction (PCR) was performed (SYBR Green Master Mix; Applied Biosystems). Gene-specific primers that overlapped introns were used (refer to Supplementary Material, Table S5). All samples were amplified on a light cycler (Model 7900HT; ABI). Cycle threshold (CT) values were normalized to a zebrafish ef1α loading control. Statistical significance was determined using two-tailed Student t-tests.",
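The ddCT arithmetic implied by this normalization scheme (and spelled out in the Figure 4 legend) is easy to misapply, so a minimal sketch with hypothetical CT values is shown below. The gene names, numbers, and helper function are illustrative, not the authors' data or code.

```python
# Illustrative sketch: relative expression by the ddCT method, normalizing CT values
# to the ef1a reference gene and comparing each time-point to day 0.
# All CT values below are invented for illustration.
ct_target = {"day0": 28.1, "day4": 26.0, "day14": 23.9}   # hypothetical target gene (e.g., myog)
ct_ef1a = {"day0": 17.2, "day4": 17.0, "day14": 17.3}     # ef1a loading control

def ddct_log2_fold_change(day: str, reference_day: str = "day0") -> float:
    """log2 fold change of the target at `day` relative to `reference_day` (ddCT method)."""
    dct_day = ct_target[day] - ct_ef1a[day]                   # delta CT at this time-point
    dct_ref = ct_target[reference_day] - ct_ef1a[reference_day]
    return dct_ref - dct_day                                  # ddCT, already in log2 units

for day in ("day4", "day14"):
    lfc = ddct_log2_fold_change(day)
    print(f"{day}: log2 fold change = {lfc:.2f} ({2 ** lfc:.1f}-fold)")
```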
"The α-actin–RFP transgenic fish line was a generous gift from H.J. Tsai (Taiwan National University) and has been described previously.17 Additional experiments were done utilizing the wild-type AB strain, which was obtained from the Children's Hospital aquatics program and maintained in their aquatics facility. All animal protocols were approved by the animal resources committee of Children's Hospital.", "For each cell preparation, 15–20 adult zebrafish were euthanized in tricaine (Sigma-Aldrich) and the whole zebrafish was placed in 100% ethanol for 30 seconds as the first step for sterilization. The fish's head, tail, and fins were removed with a scalpel, and the skin and internal organs were removed with forceps. The fish's body was sterilized in 10% bleach for 30 seconds and then washed twice in sterile phosphate-buffered saline (PBS) for another 30 seconds. Fish dorsal muscle and bone were minced with a scalpel and then transferred to a pre-weighed culture plate. For every gram of fish tissue, 3.5 ml of collagenase IV (10 mg/ml stock solution) and 3.5 ml of dispase (2.4 units/ml stock solution; Worthington Chemicals) were added and mixed by pipetting (Worthington). The solution was incubated at room temperature for 45 minutes (mixed every 10 minutes by pipette) before 10 ml of growth medium (L15; Sigma-Aldrich), 3% fetal calf serum, 100 μg/ml penicillin/streptomycin, 2 mM glutamine, and 0.8 mM CaCl2 (all Sigma-Aldrich) were added to the cells to quench the activity of the collagenase and dispase proteases. Debris was removed by filtering the cells through a 70-μm filter and then through two 40-μm filters (BD Biosciences). On each occasion, the filters were washed with 5 ml of L15 medium.\nThe cells were isolated by centrifugation at 1000 × g for 10 minutes at 9°C, and the supernatant was aspirated. The cells were then resuspended in 3 ml of red blood cell lysis buffer (Qiagen) and incubated for 3 minutes at room temperature before neutralization with 22 ml of L15 growth medium. The cells were then pelleted at 1000 × g for 10 minutes at 9°C, the supernatant aspirated, and the cell pellet resuspended in 3 ml of cold 1× PBS and layered on top of 4 ml of Ficoll-Paque gradient (GE Healthcare) in a 15-ml tube. Samples were then centrifuged at 1400 × g for 40 minutes at 9°C. A mononuclear cell layer was then extracted by pipette and washed with 10 ml of ice-cold 1× PBS. Afterwards, the cells were resuspended in 10 ml of ice-cold L15 buffer. The cell density was determined using an automated hemocytometer (Countess; Invitrogen), and the cell suspension was diluted in L15 growth medium.\nThe cells were then pre-plated on uncoated plates for 1 hour in a 28°C tissue culture incubator at 5% CO2. After pre-plating, the cellular supernatant (non-adherent cells) was removed and placed on laminin-coated plates (BD Biocoat). Alternatively, 0.1% gelatin-coated (porcine) plates can be used. The medium was changed every 3 days.
The zebrafish myogenic progenitor cells were able to be grown for up to seven doublings before evidence of cellular senescence, with an average of four and five doublings per myoblast isolation. On average, a yield of 5–10 million live (trypan blue–negative) cells were isolated from each preparation of between 15 and 20 adult zebrafish. Lower yields of 100,000–500,000 live cells were isolated when using 1–5 adult zebrafish.\nAn alternative to the L15 growth medium was later used in zebrafish myogenic progenitor cell cultures and achieved the same results. Human skeletal myoblast growth medium (Promocell) that contained 20% fetal bovine serum (Atlanta Biologicals), 1× antibiotic–antimycotic (Invitrogen), and 1× Glutamax (Invitrogen), and supplemented with 3 ng/ml recombinant human fibroblast-like growth factor (rhFGF; Promega), can be used in lieu of the L15 growth medium.", "Approximately 300,000 cells/well were plated into six-well 0.1% gelatin-coated plates in 2 ml of growth medium and grown to 95% confluence. The medium was then changed to differentiation medium consisting of: 2% horse serum (Gibco) in Dulbecco modified Eagle medium (DMEM; Mediatech, Inc.) supplemented with 1× antibiotic–antimycotic (Invitrogen) and 1× Glutamax (Invitrogen). The differentiation medium was changed every other day, and cells were monitored for myotube fusion by phase and fluorescent microscopy. Multinucleated myotubes were observed during days 4–7.", "The following primary antibodies were used for immunohistochemistry of zebrafish myogenic progenitor cells: Pax3 mouse monoclonal (1:25; Developmental Studies Hybridoma Bank); Pax7 mouse monoclonal (1:25; Developmental Studies Hybridoma Bank); anti-MyoD1 rabbit polyclonal (1:50; Santa Cruz Biotechnology); and anti-myogenin rabbit polyclonal (1:50; M-225; Santa Cruz Biotechnology). The myogenin antibody has been characterized previously in early zebrafish myogenic progenitor cells.18 The zebrafish myod1 epitope has been shown to be recognized by the myf5 antibody (Santa Cruz Biotechnology).19\nApproximately, 100,000 cells were pre-plated on uncoated coverslides (Nunc, Lab-Tek) and, after a 1-hour pre-plating, the supernatant was plated onto 0.1% gelatin-collated coverslips. The following day, the zebrafish myogenic cells attached were fixed in 4% paraformaldehyde (Electron Microscopy Sciences) at 4°C for 10 minutes. To block nonspecific binding of the antibodies, slides were incubated for 30 minutes at room temperature in PBS + 10% goat serum. After blocking, the slides were incubated overnight at 4°C using the primary antibodies. Slides were washed three times in 1× PBS, and sections were incubated with Alexa 488 (anti-mouse IgG)- or 568 (anti-rabbit IgG)-conjugated goat secondary antibodies (Invitrogen) at a 1:500 dilution for 45 minutes at room temperature. The slides were then washed three times in 1× PBS before mounting in Vectashield with 4′,6-diamidino-2-phenylindole (DAPI) (Vector Laboratories). Slides were analyzed by microscope (E1000 Nikon Eclipse; Nikon) and OpenLab software.", "RNA was extracted directly from zebrafish myogenic progenitor cells in culture at various stages of differentiation using Tripure (Roche Applied Science), following the manufacturer's protocol. Zebrafish cDNA was hybridized to the Affymetrix GeneChip Zebrafish Genome Array (GenBank Release 36.0, June 2003) and processed following the manufacturer's protocol at the Molecular Genetics Core Facility at Children's Hospital Boston. The resulting. 
CEL files, which contain probe signal intensities of the samples, were preprocessed and normalized together using robust multiarray averaging (RMA), which returns the expression level of each probe set or gene as a positive real number in logarithmic base 2 scale.20 The complete microarray data are available from the NCBI Gene Expression Omnibus (GEO) as GSE19754.\nPrincipal component analysis (PCA) was used to survey gene variation across sample (time) and space, and sample variation across transcriptome space, separately.21 Because most of the time-points had replicate sample measurements, we computed the linear correlation between the unlogged replicate time profiles (A, B) for each probe set to assess the reproducibility of their time profile. We selected the probe set with the maximum replicate time profile correlation as the unique representative for genes with more than one probe set representative. The fold change of a probe set for days 10–14 vs. days 0–1 was computed as the average RMA signal of days 10–14 minus the average RMA signal of days 0–1. This fold change is in log base 2 scale, because the RMA signal is in log base 2 scale. Gene ontology (GO) enrichment analysis was performed using the Database for Annotation, Visualization, and Integrated Discovery (DAVID 6.7; http://david.abcc.ncifcrf.gov) on the mouse homologs of zebrafish genes, because the ontological characterization of genes is currently richer for the mouse than for the zebrafish.22 We used the mouse C2C12 myogenic differentiation microarray dataset (GEO, GSE19968) for comparative genomic analysis.23", "Total RNA (1 μg) was extracted from the zebrafish muscle myogenic progenitor cells in culture at various time-points during differentiation and subjected to reverse transcriptase using the First Strand Synthesis Kit (Invitrogen). cDNA was then diluted in sterile water into tenfold serial dilutions, and real-time polymerase chain reaction (PCR) was performed (SYBR Green Master Mix; Applied Biosystems). Gene-specific primers that overlapped introns were used (refer to Supplementary Material, Table S5). All samples were amplified on a light cycler (Model 7900HT; ABI). Cycle time (CT) values were normalized to a zebrafish ef1α loading control. All significant values were determined using Student t-tests (two-tailed).", "[SUBTITLE] Isolation and Differentiation of Adult Zebrafish Myogenic Progenitor Cells [SUBSECTION] In mammals, it is possible to identify muscle progenitor cells by their potential to differentiate into multinucleate myotubes in culture. To access this capability in adult zebrafish, myogenic progenitor cells were prepared from α-actin–RFP transgenic zebrafish, as outlined in Figure 1 and detailed in the previous section. Following cellular expansion, after reaching 95%+ confluency (after being plated at 300,000 cells 24 hours earlier), the myogenic progenitor cells were exposed to differentiation medium. Over the course of 14 days, cultured zebrafish muscle cells began to fuse and elongate (Fig. 2). The use of the α-actin–RFP transgenic line allowed for the easy identification of mature myotubes in contrast to any few remaining fibroblasts due to the skeletal muscle-specific enhancer that drives expression of the RFP reporter, as characterized elsewhere.17\nBasic protocol for the isolation of zebrafish skeletal muscle myogenic progenitor cells from whole dorsal myotome. Schematic showing the procedure for the isolation of skeletal myogenic progenitors from adult zebrafish dorsal muscle. 
Following euthanization of the zebrafish with tricaine, the fish are skinned, decapitated, de-finned, and de-gutted. A dissociation step in a mixture of collagenase IV and neutral protease breaks down cellular adhesion, whereas the use of a Ficoll gradient results in the isolation of a mononuclear cell layer. Pre-plating on uncoated plates was followed by an overnight (16-hour) transfer of the myoblast-enriched supernatant to gelatin-coated plates. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\nIn vitro differentiation of primary myoblasts isolated from α-actin–RFP adult dorsal muscle. (A–D) Phase contrast of zebrafish myogenic progenitor cells differentiating from day 0 to day 14. (E–H) RFP expression of the α-actin promoter indicates myotube formation and myogenic differentiation. (I–L) Immunofluorescent staining of day 0 α-actin–RFP myoblasts. Note that very few cells express high levels of the α-actin RFP transgene, as it undergoes higher levels of transcriptional expression during myogenic differentiation. Green fluorescent staining and open arrowheads demarcate myogenic markers (pax3, pax7, myod1, and myogenin). (M) Quantification of the day 0 myoblast immunofluorescent staining in (I)–(L), based on counts of 500 DAPI-stained (blue) nuclei. Immunostaining was performed in triplicate. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\nInitial plating of primary myoblasts from the α-actin–RFP fish resulted in very few RFP-positive cells either attached to the plates or free floating in the medium (Fig. 2A–H). At day 4, several clusters of RFP-positive cells emerged as the myoblasts began to undergo cellular fusion. Detection of the RFP (α-actin) reporter was a strong indicator that the zebrafish myogenic progenitors had begun to activate transcripts essential for myoblast fusion and myotube structure, as expression of the α-actin gene (the promoter driving RFP) is most robust in mature myofibers.17, 24 By day 7, long multinucleated RFP+ myotubes were identified that further expanded into twitching myotube clusters by day 14 (Fig. 2H).\nTo further characterize what stage of myogenesis these adult zebrafish myogenic progenitors had reached at the initial time of isolation (day 0), the cells were probed using immunofluorescence with the myogenic determination markers pax3 and pax7. Mammalian pax3 and pax7 function as determinants of the transition from embryonic myoblasts into muscle satellite cells, whereas, in zebrafish, these proteins function in the determination of fast muscle fibers used for swimming.11 Day 0 zebrafish myogenic progenitor cells had low levels of pax3 (1.53%) and pax7 (2.86%) protein expression, as quantified by immunofluorescence with monoclonal specific antibodies (Fig. 2I, J, and M). Conversely, these day 0 myogenic progenitors had significant levels of myod1 (74.86%), indicating that these cells were more committed than mammalian satellite cells toward forming myotubes (Fig. 2K and M). In addition, these cells had low expression of myogenin (3.27%) (Fig. 2L and M), a marker of myofiber determination. These experiments demonstrate that isolated myogenic progenitor cells can successfully fuse in cell culture as visualized by the α-actin–RFP fluorescent reporter, similar to the myoblast culture of larger fish species, such as the Atlantic salmon.13\n[SUBTITLE] Transcriptome Profiles of Cell Fusion and Differentiation of Zebrafish Myogenic Progenitor Cells [SUBSECTION] To identify the myogenic transcriptome of zebrafish myogenic progenitor cells from cell proliferation through cell fusion and differentiation into mature myotubes, total mRNA from myogenic progenitor cells of the α-actin–RFP transgenic line was interrogated by microarray at different time-points (days 0, 1, 4, 7, 10, and 14) as the cells underwent myogenic differentiation in culture.\nDuplicate biological measurements (A, B) were made for most time-points. For each microarray gene probe set, we computed the correlation between duplicate profiles to assess the reproducibility of the myogenic developmental profile of the gene. There were 5960 microarray gene probe sets with a correlation >0.8 between duplicate profiles. Unless otherwise noted, this is the primary microarray gene set used in subsequent analyses. PCA of the standardized temporal expression profiles of these genes shows two large-scale temporal patterns (Fig. 3). Fifty-six percent (3340 genes, 2985 unique) have a profile that largely decreases with time (green dots, left hemisphere of PCA plot in Fig. 3A) and are enriched for development and cell signaling receptor ontologic terms (Supplementary Material, Table S1). Forty-four percent (2620 genes, 2414 unique) have a profile that largely increases with time (magenta dots, right hemisphere of PCA plot in Fig. 3A) and are enriched for oxidoreductive and metabolic enzyme ontologic terms (Supplementary Material, Table S2). The majority of genes change their expression level at day 4: high to low, and vice versa (Fig. 3B). Phenotypically, zebrafish muscle cells at day 4 of myogenic differentiation are in the initial stages of myotube fusion. To identify the active genes at day 4, we performed a differential analysis of day 4 vs. the other days (0, 1, 7, 10, and 14). Forty-seven unique genes were significantly upregulated at day 4 relative to the other days and were enriched for M-phase and mitosis ontologic terms (Supplementary Material, Table S3). Sixty unique genes were significantly downregulated at day 4 relative to the other days and were enriched for collagen and extracellular matrix ontologic terms (Supplementary Material, Table S4).
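The two-pattern decomposition described above can be sketched as follows, using simulated profiles standardized per gene and scikit-learn's PCA; splitting genes by the sign of their first principal component score separates the largely decreasing from the largely increasing profiles. Which sign corresponds to which pattern is arbitrary, and nothing here reproduces the actual dataset.

```python
# Illustrative sketch: PCA of standardized temporal expression profiles and a split
# into two large-scale temporal patterns by the sign of the PC1 score. Data are simulated.
import numpy as np
from sklearn.decomposition import PCA

days = np.array([0, 1, 4, 7, 10, 14])
rng = np.random.default_rng(2)

# Simulate genes that mostly decrease or mostly increase over the time course.
n_genes = 1000
trend = rng.choice([-1.0, 1.0], size=n_genes)                   # down vs. up
profiles = trend[:, None] * days[None, :] / 14.0 + rng.normal(0, 0.3, (n_genes, len(days)))

# Standardize each gene's profile (zero mean, unit variance across time-points).
z = (profiles - profiles.mean(axis=1, keepdims=True)) / profiles.std(axis=1, keepdims=True)

pca = PCA(n_components=2)
scores = pca.fit_transform(z)          # genes in principal-component space
pc1 = scores[:, 0]

# The two hemispheres of the PC1 axis correspond to the two temporal patterns.
pattern_a = z[pc1 < 0]
pattern_b = z[pc1 >= 0]
print("explained variance ratio:", pca.explained_variance_ratio_)
print("pattern sizes:", pattern_a.shape[0], pattern_b.shape[0])
print("mean profile (PC1 < 0): ", pattern_a.mean(axis=0).round(2))
print("mean profile (PC1 >= 0):", pattern_b.mean(axis=0).round(2))
```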
In addition, we examined the microarray expression profile of nine reproducible transcripts that have been reported previously to be differentially expressed during myogenesis.25\nMicroarray analysis of zebrafish myogenic progenitor cell differentiation transcriptome. (A) Principal components analysis (PCA) plot of principal components 1 vs. 2 for the zebrafish muscle cell differentiation microarray data of 5960 reproducible genes (shown as colored dots) in time, indicating two large-scale temporal patterns of expression. Genes on the left hemisphere (green) are highly expressed at days 0–1, and decrease over time. Genes on the right hemisphere (magenta) show low expression at days 0–1, and increase over time. The principal components axes are a linear combination of the time-points. (B) The average expression profile of the genes from the two large-scale temporal patterns of expression. (C) Standardized expression for upregulation (red) vs. downregulation (green) of nine differentially regulated myogenic genes. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\n[SUBTITLE] Zebrafish Myogenic Progenitor Cells Express Myogenic Genes at Critical Time-Points during Differentiation [SUBSECTION] After myogenic progenitor cell microarray analysis of the zebrafish, samples were validated by quantitative real-time PCR for several important myogenic genes using exon-overlapping primers. Several myogenic structural (acta1a, desma), cell-signaling (cav3, cxcr4a), and transcription (myog, pax3a) factors were chosen for validation. Each gene followed the expected microarray trend across myogenic differentiation (Fig. 4). The myogenic structural genes (acta1a and desma) were upregulated as the zebrafish myogenic progenitor cells underwent myogenic fusion and myotube formation. As expected, the myogenic stem cell marker (cxcr4a) mRNA was downregulated as the zebrafish muscle cells underwent fusion, whereas, conversely, the myogenic transcription factor myogenin (myog) was upregulated. In addition, another marker of early myoblasts, pax3a, had significantly reduced expression as the cells underwent myogenic differentiation.\nValidation of myogenic differentiation in the zebrafish myogenic progenitor cells by microarray and real-time PCR. (A) Real-time quantitative PCR expression (magenta dashed line) levels of six myogenic differentiation factors (acta1a, cav3, cxcr4a, desma, myog, and pax3a) across time (x-axis; days 0–14) as compared with microarray data (green solid line). The y-axis is the log2-scale fold change of each time-point relative to day 0, which is the average ΔCT (day 0) minus average ΔCT (day N) value for quantitative PCR data (ddCT), and the average RMA signal (day N) minus average RMA signal (day 0) for the microarray data. The quantitative PCR CT values were normalized to the zebrafish housekeeping gene ef1α per condition. Note that acta1 and cxcr4 primers were specific to both a and b isoforms present in the zebrafish genome. (B) The table compares the log2 expression fold change of days 0–1 vs. 10–14 of the six myogenic differentiation factors between quantitative PCR and microarray data. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\n[SUBTITLE] Comparison of Zebrafish Myogenic Progenitor Cell Transcriptome with other Mammalian Myogenic Transcriptomes: Strengths and Limitations [SUBSECTION] To gain insights into similarities between zebrafish and mammalian myogenic cells with respect to changes in gene expression during in vitro differentiation, we compared the zebrafish myogenic differentiation transcriptome data to a recent mouse C2C12 myogenic differentiation microarray dataset from the GEO, GSE19968.23 PCA of samples in transcriptome space of both datasets, done separately, showed a distinct dichotomy between the earlier vs. later time-points of myogenic differentiation along the first principal component (PC1), the direction of maximum sample variation (Fig. 5A). There is a clear transcriptome scale distinction when comparing days 0–1 vs. days 7–10 in the zebrafish, and between myoblasts and differentiated myotubes at day 4 in C2C12. There are 3784 homologous genes in common between the datasets, of which 1400 have a correlation of >0.8 between replicate time profiles in both datasets. Of these 1400 reproducible genes, we investigated the concordance of differential expression of earlier vs. later time-points during myogenic differentiation. We computed the fold change of days 10–14 relative to days 0–1 in the zebrafish, and of myotubes at day 4 relative to myoblasts in C2C12. There was significant concordance among genes that showed at least a twofold magnitude change at earlier vs. later time-points in both datasets: Fisher exact test P-value <7.0 × 10−7 (Fig. 5B).\nComparison of zebrafish and mouse C2C12 myogenic development. (A) Principal components analysis of samples in transcriptome space showing principal components 1 vs. 2, and 1 vs. 3 plots for the zebrafish and C2C12 (from Gene Expression Omnibus, GSE19968) data show transcriptome scale distinctions between earlier vs. later time-points of muscle development: days 0–1 vs. days 7–10 in zebrafish, and myoblasts vs. differentiated myotubes at day 4 in C2C12. Zebrafish samples are labeled by the time-point following myogenic differentiation (days 0–14). C2C12 samples are labeled as myoblasts (B), and time-points following myogenic differentiation (days 0, 1, and 4). (B) Contingency table of genes ≥2-fold magnitude changed in earlier vs. later time-points of 1400 reproducible genes common to both datasets: fold change of days 10–14 relative to days 0–1 in zebrafish, and fold change of myotubes at day 4 relative to myoblasts in C2C12.
[Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]", "In mammals, it is possible to identify muscle progenitor cells by their potential to differentiate into multinucleate myotubes in culture. To access this capability in adult zebrafish, myogenic progenitor cells were prepared from α-actin–RFP transgenic zebrafish, as outlined in Figure 1 and detailed in the previous section. Following cellular expansion, after reaching 95%+ confluency (after being plated at 300,000 cells 24 hours earlier), the myogenic progenitor cells were exposed to differentiation medium. Over the course of 14 days, cultured zebrafish muscle cells began to fuse and elongate (Fig. 2). The use of the α-actin–RFP transgenic line allowed for the easy identification of mature myotubes in contrast to any few remaining fibroblasts due to the skeletal muscle-specific enhancer that drives expression of the RFP reporter, as characterized elsewhere.17\nBasic protocol for the isolation of zebrafish skeletal muscle myogenic progenitor cells from whole dorsal myotome. Schematic showing the procedure for the isolation of skeletal myogenic progenitors from adult zebrafish dorsal muscle. Following euthanization of the zebrafish with tricaine, the fish are skinned, decapitated, de-finned, and de-gutted. A disassociation step in a mixture of collagenase IV and neutral protease breaks down cellular adhesion, whereas the use of a Ficoll gradient results in the isolation of a mononuclear cell layer. Pre-plating on uncoated plates was followed by an overnight (16-hour) transfer of the myoblast-enriched supernatant to gelatin-coated plates. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\nIn vitro differentiation of primary myoblasts isolated from α-actin–RFP adult dorsal muscle. (A–D) Phase contrast of zebrafish myogenic progenitor cells differentiating from day 0 to day 14. (E–H) RFP expression of the α-actin promoter indicates myotube formation and myogenic differentiation. (I–L) Immunofluorescent staining of day 0 α-actin-–RFP myoblasts. Note that very few cells express high levels of the α-actin RFP transgene, as it undergoes higher levels of transcriptional expression during myogenic differentiation. Green fluorescent staining and open arrowheads demarcate myogenic markers (pax3, pax7, myod1, and myogenin). (M) Quantification of 500 DAPI-stained (blue) nuclei of the results from day 0 myoblast immunofluorescent staining in (I)–(L). Immunostaining was performed in triplicate. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]\nInitial plating of primary myoblasts from the α-actin–RFP fish resulted in very few RFP-positive cells either attached to the plates or free floating in the medium (Fig. 2A–H). At day 4, several clusters of RFP-positive cells emerged as the myoblasts began to undergo cellular fusion. The detection of RFP (α-actin) reporter was a strong indicator that the zebrafish myogenic progenitors had begun to activate transcripts essential for myoblast fusion and myotube structure, as the α-actin gene (promoter for RFP) expression is most robust in mature myofibers.17, 24 By day 7, long multinucleated RFP+ myotubes were identified that further expanded into twitching myotube clusters by day 14 (Fig. 
2H).\nTo further characterize what stage of myogenesis these adult zebrafish myogenic progenitors resided in at the initial time of isolation (day 0), the cells were probed using immunofluoresence with the myogenic determination markers pax3 and pax7. Mammalian pax3 and pax7 function as determinants of the transition from embryonic myoblasts into muscle satellite cells, whereas, in zebrafish, these proteins function in the determination of fast muscle fibers used for swimming.11 Day 0 zebrafish myogenic progenitor cells had low levels of pax3 (1.53%) and pax7 (2.86%) protein expression, as quantified by immunofluorescence with monoclonal specific antibodies (Fig. 2I, J, and M). Conversely, these day 0 myogenic progenitors had significant levels of myod1 (74.86%), indicating that these cells were further committed than mammalian satellite cells to form myotubes (Fig. 2K and M). In addition, these cells had low expression of myogenin (3.27%) (Fig. 2L and M), a marker of myofiber determination. These experiments demonstrate that isolated myogenic progenitor cells can successfully fuse in cell culture as visualized by the α-actin–RFP fluorescent reporter, similar to the myoblast culture of larger fish species, such as the Atlantic salmon.13", "To identify the myogenic transcriptome of zebrafish myogenic progenitor cells from cell proliferation through cell fusion and differentiation into mature myotubes, total mRNA was interrogated by microarray at different time-points (days 0, 1, 4, 7, 10, and 14) from zebrafish myogenic progenitor cells of the α-actin–RFP transgenic line as the cells underwent myogenic differentiation in culture.\nDuplicate biological measurements (A, B) were made for most time-points. For each microarray gene probe set, we computed the correlation between duplicate profiles to assess the reproducibility of the myogenic developmental profile of the gene. There were 5960 microarray gene probe sets with a correlation >0.8 between duplicate profiles. Unless otherwise noted, this is the primary microarray gene set used in subsequent analyses. PCA of the standardized temporal expression profiles of these genes show them to have two large-scale temporal patterns (Fig. 3). Fifty-six percent (3340 genes, 2985 unique) have a profile that largely decreases with time (green dots, left hemisphere of PCA plot in Fig. 3A) and are enriched for development and cell signaling receptor ontologic terms (Supplemental Material, Table S1). Forty-four percent (2620 genes, 2414 unique) have a profile that is largely increasing with time (magenta dots, right hemisphere of PCA plot in Fig. 3A) and are enriched for oxidoreductive and metabolic enzyme ontologic terms (Supplementary Material, Table S2). The majority of genes change their expression level at day 4: high to low, and vice versa (Fig. 3B). Phenotypically, zebrafish muscle cells at day 4 of myogenic differentiation are in the initial stages of myotube fusion. To identify the active genes at day 4, we performed a differential analysis of day 4 vs. the other days (0, 1, 7, 10, and 14). Forty-seven unique genes were significantly upregulated at day 4 relative to the other days and were enriched for M-phase and mitosis ontologic terms (Supplementary Material, Table S3). Sixty unique genes were significantly downregulated at day 4 relative to the other days and were enriched for collagen and extracellular matrix ontological terms (Supplementary Material, Table S4). 
In addition, we examined the microarray expression profile of nine reproducible transcripts that have been reported previously to be differentially expressed during myogenesis.25\nMicroarray analysis of zebrafish myogenic progenitor cell differentiation transcriptome. (A) Principal components analysis (PCA) showing the principal components 1 vs. 2 plot of the zebrafish muscle cell differentiation microarray data of 5960 reproducible genes (shown as colored dots) in time and indicates two large-scale temporal patterns of expression. Genes on the left hemisphere (green) are highly expressed at days 0–1, and decrease over time. Genes on the right hemisphere (magenta) show low expression at days 0–1, and increase over time. The principal components axes are a linear combination of the time-points. (B) The average expression profile of the genes from the two large-scale temporal patterns of expression. (C) Standardized expression for upregulation (red) vs. downregulation (green) of nine differentially regulated myogenic genes. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]", "After myogenic progenitor cell microarray analysis of the zebrafish, samples were validated by quantitative real-time PCR for several important myogenic genes using exon-overlapping primers. Several myogenic structural (acta1a, desma), cell-signaling (cav3, cxcr4a), and transcription (myog, pax3a) factors were chosen for validation. In each case, each gene followed the expected microarray trend across myogenic differentiation (Fig. 4). The myogenic structural genes (acta1a and desma) were all upregulated as the zebrafish myogenic progenitor cells underwent myogenic fusion and myotube formation. As expected, the myogenic stem cell marker (cxcr4a) mRNA was downregulated as the zebrafish muscle cells underwent fusion, whereas, conversely, the myogenic transcription factor myogenin (myog) was upregulated. In addition, another marker of early myoblasts, pax3a, had significantly reduced expression as the cells underwent myogenic differentiation.\nValidation of myogenic differentiation in the zebrafish myogenic progenitor cells by microarray and real-time PCR. (A) Real-time quantitative PCR expression (magenta dashed line) levels of six myogenic differentiation factors (acta1a, cav3, cxcr4a, desma, myog, and pax3a) across time (x-axis; days 0–14) as compared with microarray data (green solid line). The y-axis is logarithm base 2 scale fold change of each time-point relative to day 0, which is the average ΔCT (day 0) minus average ΔCT (day N) value for quantitative PCR data (ddCT), and average RMA signal (day N) minus average RMA signal (day 0) for the microarray data. The quantitative PCR CT values were normalized to the zebrafish housekeeping gene ef1α housekeeping per condition. Note that acta1 and cxcr4 primers were specific to both a and b isoforms present in the zebrafish genome. (B) The table compares the log2 expression fold change of days 0–1 vs. 10–14 of the six myogenic differentiation factors between quantitative PCR and microarray data. 
[Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]", "To gain insights into similarities between zebrafish and mammalian myogenic cells with respect to changes in gene expression during in vitro differentiation, we compared the zebrafish myogenic differentiation transcriptome data to a recent mouse C2C12 myogenic differentiation microarray dataset from the GEO, GSE19968.23 PCA of samples in transcriptome space of both datasets, done separately, showed a distinct dichotomy between the earlier vs. later time-points of myogenic differentiation along the first principal component (PC1), the direction of maximum sample variation (Fig. 5A). There is a clear transcriptome scale distinction when comparing days 0–1 vs. days 7–10 in the zebrafish, and between myoblasts and differentiated myotubes at day 4 in C2C12. There are 3784 homologous genes in common between the datasets, and 1400 have a correlation of >0.8 between replicate time profiles in both datasets, respectively. Of these 1400 reproducible genes, we investigated the concordance of differential expression of earlier vs. later time-points during myogenic differentiation. We computed the fold change of days 10–14 relative to days 0–1 in the zebrafish, and of myotubes at day 4 relative to myoblasts in C2C12. There was significant concordance among genes that were twofold magnitude changed at earlier vs. later time-points in both datasets: Fisher exact test P-value <7.0 × 10−7 (Fig. 5B).\nComparison of zebrafish and mouse C2C12 myogenic development. (A) Principal components analysis of samples in transcriptome space showing principal components 1 vs. 2, and 1 vs. 3 plots for the zebrafish and C2C12 (from Gene Expression Omnibus, GSE19968) data show transcriptome scale distinctions between earlier vs. later time-points of muscle development: days 0–1 vs. days 7–10 in zebrafish, and myoblasts vs. differentiated myotubes at day 4 in C2C12. Zebrafish samples are labeled by the time-point following myogenic differentiation (days 0–14). C2C12 samples are labeled as myoblasts (B), and time-points following myogenic differentiation (days 0, 1, and 4). (B) Contingency table of genes ≥2-fold magnitude changed in earlier vs. later time-points of 1400 reproducible genes common to both datasets: fold change of days 10–14 relative to days 0–1 in zebrafish, and fold change of myotubes at day 4 relative to myoblasts in C2C12. [Color figure can be viewed in the online issue, which is available at wileyonlinelibrary.com.]", "Gene expression profiles of early differentiating zebrafish myogenic progenitor cells show expression profiles similar to those expected for mammalian muscle, namely that the expression of many sarcomeric proteins is strongly upregulated with differentiation. In comparison with microarray data from mouse C2C12 myoblast differentiation,25 many of the same myogenic differentiation factors, such as Pax3, Myf5, and MyoD1, decrease in transcript. 
Although Pax3 is a determinant of embryonic mouse myoblasts, recent studies involving the use of Pax3–green fluorescent protein (GFP) knock-in mice have revealed that a very small population of Pax3-positive myogenic progenitors does persist in adult muscle and are capable of restoring skeletal muscle after injury.26 In zebrafish dorsal muscle, a pax3- and pax7-positive myogenic progenitor population is essential for the expansion of fast- and slow-twitch myofibers through an upstream regulation of myf5 and myod1.27 It is likely that a similar population of pax3- and/or pax7-positive myogenic progenitors exists in adult zebrafish skeletal muscle, and will contribute to myofiber formation following injury. In addition, the presence and subsequent downregulation of a cxcr4a (a homolog of mammalian Cxcr4) cell population during myogenic differentiation is consistent with its role as a myogenic progenitor marker that can be used in myoblast transplantation.28 A transparent zebrafish strain that completely lacks pigmentation, the casper line, allows for the transplantation and long-term monitoring of fluorescently labeled cell populations into adult fish. One can envision that, after the isolation of adult zebrafish myogenic progenitor cells from skeletal muscle transgenic fish lines, engraftment of different populations could be observed in vivo, allowing for the capture in real time of the behavior of transplanted cells. This information is essential for the optimization of cell transplantation approaches (now available with the development of this zebrafish myogenic progenitor isolation protocol) which cannot be visualized in mice at the level of resolution that can be achieved in zebrafish.\nIn mice, many procedures have been used to purify muscle progenitor cells, although, in all cases, the purified population is still heterogeneous, requiring additional pre-plating purification to enrich for cells with myogenic potential.16 We have modified the mammalian pre-plating technique, added a Ficoll-gradient procedure to decrease bacterial contamination, and demonstrated that myogenic cells can be isolated and differentiated in cell culture. These results show that zebrafish have adult muscle progenitor cells that can be isolated and differentiated in cell culture.\nIn conclusion, we have isolated a myogenic progenitor cell in zebrafish dorsal muscle. We have shown that gene expression profiles in these zebrafish myogenic progenitor cells are similar to those of mammals. This successful culture and differentiation of a myogenic progenitor population expands the utility of the zebrafish in the study of adult skeletal muscle mutants. High-throughput screening of chemical libraries has allowed researchers to correct mutations in zebrafish mutants and holds promise for the treatment of muscular dystrophy and myopathies.29 Given the gaps in the zebrafish genome annotation that have frustrated researchers,30 it is likely that the release of a well-annotated copy of the zebrafish genome will lead to improved microarray platforms and increased use of the zebrafish in large-scale transcriptome studies. Until then, rigorous validation of zebrafish transcriptome data by quantitative reverse transcription PCR is essential for drawing valid conclusions from zebrafish microarray transcriptome experiments. Further studies using zebrafish myogenic progenitor cells to identify novel drug compounds will show them to be an attractive, cost-effective alternative to large-scale mammalian studies." ]
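The reproducibility filter and PCA described in the microarray subsection above (a per-gene correlation of >0.8 between duplicate time profiles, followed by PCA of standardized temporal expression profiles) can be sketched as follows. This is a minimal illustration rather than the authors' actual pipeline; the DataFrame names, the genes-by-time-points layout, and the use of pandas and scikit-learn are assumptions.

```python
# Minimal sketch (not the authors' pipeline) of the replicate-reproducibility
# filter and PCA described above. Assumes two pandas DataFrames, rep_a and rep_b
# (hypothetical names), with rows = probe sets and columns = time-points
# (days 0, 1, 4, 7, 10, 14) for biological replicates A and B.
import pandas as pd
from sklearn.decomposition import PCA


def reproducible_probes(rep_a: pd.DataFrame, rep_b: pd.DataFrame, r_min: float = 0.8) -> pd.Index:
    """Keep probe sets whose duplicate time profiles have Pearson r > r_min."""
    r = rep_a.apply(lambda row: row.corr(rep_b.loc[row.name]), axis=1)
    return r[r > r_min].index


def temporal_pca(expr: pd.DataFrame, n_components: int = 2):
    """PCA of standardized (z-scored) temporal profiles, with genes as observations."""
    z = expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0).dropna()
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(z.values)
    cols = [f"PC{i + 1}" for i in range(n_components)]
    return pd.DataFrame(scores, index=z.index, columns=cols), pca


# Usage with hypothetical RMA-normalized replicate tables:
# keep = reproducible_probes(rep_a, rep_b)
# scores, model = temporal_pca((rep_a.loc[keep] + rep_b.loc[keep]) / 2)
# decreasing = scores["PC1"] < 0  # the sign of PC1 is arbitrary; orient it against the mean profiles
```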
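The Fig. 4 caption above defines the two quantities being compared: for quantitative PCR, the log2 fold change of day N relative to day 0 is the average ΔCT at day 0 minus the average ΔCT at day N (with CT values first normalized to ef1α), while for the arrays it is a difference of average RMA signals, which are already on a log2 scale. A worked sketch of that arithmetic, with hypothetical function names and CT values:

```python
# Worked sketch of the fold-change arithmetic defined in the Fig. 4 caption.
# All names and CT values are hypothetical; triplicate CTs per time-point are assumed.
import numpy as np


def qpcr_log2_fc(ct_gene: dict, ct_ef1a: dict, reference_day: int = 0) -> dict:
    """log2 FC(day N vs day 0) = mean dCT(day 0) - mean dCT(day N), with dCT = CT(gene) - CT(ef1a)."""
    dct = {day: np.mean(ct_gene[day]) - np.mean(ct_ef1a[day]) for day in ct_gene}
    return {day: dct[reference_day] - dct[day] for day in dct}


def array_log2_fc(rma_signal: dict, reference_day: int = 0) -> dict:
    """RMA signals are already log2, so FC is mean signal(day N) minus mean signal(day 0)."""
    mean_sig = {day: np.mean(rma_signal[day]) for day in rma_signal}
    return {day: mean_sig[day] - mean_sig[reference_day] for day in mean_sig}


# Hypothetical triplicate CT values for one upregulated gene (e.g. myog):
ct_gene = {0: [27.1, 27.3, 27.0], 14: [23.8, 24.0, 23.9]}
ct_ef1a = {0: [18.2, 18.1, 18.3], 14: [18.4, 18.2, 18.3]}
print(qpcr_log2_fc(ct_gene, ct_ef1a))  # {0: 0.0, 14: ~3.3}, i.e. roughly 10-fold up by day 14
```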
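The zebrafish versus C2C12 comparison above comes down to calling genes at least twofold changed (|log2 fold change| ≥ 1) between early and late time-points in each dataset, cross-tabulating the calls, and applying Fisher's exact test. The sketch below shows one plausible reading of that tabulation (direction concordance among genes changed in both datasets); the Series names and the exact table definition are assumptions, not the published analysis.

```python
# Sketch of the cross-dataset concordance test described above: genes at least
# 2-fold changed (|log2 FC| >= 1) late vs. early in both datasets, cross-tabulated
# by direction and tested with Fisher's exact test. Series names are hypothetical,
# indexed by shared homolog identifiers.
import pandas as pd
from scipy.stats import fisher_exact


def concordance_test(log2fc_zebrafish: pd.Series, log2fc_c2c12: pd.Series, threshold: float = 1.0):
    """Fisher's exact test on the 2x2 up/down table of genes >= 2-fold changed in both datasets."""
    shared = log2fc_zebrafish.index.intersection(log2fc_c2c12.index)
    zf, mm = log2fc_zebrafish[shared], log2fc_c2c12[shared]
    changed = (zf.abs() >= threshold) & (mm.abs() >= threshold)
    table = pd.crosstab(zf[changed] > 0, mm[changed] > 0)  # rows: zebrafish up?, cols: C2C12 up?
    odds_ratio, p_value = fisher_exact(table.values)       # requires both up and down calls present
    return table, odds_ratio, p_value


# Usage with hypothetical inputs:
# log2fc_zebrafish = mean log2 signal (days 10-14) minus mean log2 signal (days 0-1) per homolog
# log2fc_c2c12 = day-4 myotubes minus myoblasts (log2) per homolog
# table, odds_ratio, p_value = concordance_test(log2fc_zebrafish, log2fc_c2c12)
```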
[ "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[ "differentiation", "muscle mutants", "muscle stem cells", "myogenic progenitors", "myogenesis", "transcriptome", "zebrafish" ]
In-hospital outcome of patients with culture-confirmed tuberculous pleurisy: clinical impact of pulmonary involvement.
21338482
Outcomes for hospitalized patients with tuberculous pleurisy (TP) have rarely been reported, and whether or not pulmonary involvement affects outcomes is uncertain. This study aimed to analyze the in-hospital mortality rate of culture-confirmed TP with an emphasis on the clinical impact of pulmonary involvement.
BACKGROUND
Patients who were hospitalized for pleural effusion (PE) of unconfirmed diagnosis and finally diagnosed as TP were identified. We classified them according to the disease extent: isolated pleurisy (isolated pleurisy group) and pleurisy with pulmonary involvement (pleuro-pulmonary group).
METHODS
Among the 205 patients hospitalized before the diagnosis was established, 51 (24.9%) belonged to the isolated pleurisy group. Compared to the pleuro-pulmonary group, patients in the isolated pleurisy group were younger, had fewer underlying co-morbidities, and presented more frequently with fever and chest pain. Fewer patients in the isolated pleurisy group had hypoalbuminemia (< 3.5 g/dL) and anemia. The two groups were similar with regards to PE analysis, resistance pattern, and timing of anti-tuberculous treatment. Patients who had a typical pathology of TP on pleural biopsy received anti-tuberculous treatment earlier than those who did not, and were all alive at discharge. The isolated pleurisy group had a lower in-hospital mortality rate, a shorter length of hospital stay and better short-term survival. In addition, the presence of underlying comorbidities and not receiving anti-tuberculous treatment were associated with a higher in-hospital mortality rate.
RESULTS
In culture-confirmed tuberculous pleurisy, those with pulmonary involvement were associated with a higher in-hospital mortality rate. A typical pathology for TP on pleura biopsy was associated with a better outcome.
CONCLUSION
[ "Adult", "Aged", "Aged, 80 and over", "Biopsy", "Female", "Hospitals", "Humans", "Lung", "Male", "Middle Aged", "Mycobacterium tuberculosis", "Pleura", "Treatment Outcome", "Tuberculosis, Pleural", "Tuberculosis, Pulmonary" ]
3051910
null
null
Methods
[SUBTITLE] Subjects of study [SUBSECTION] This retrospective study was conducted in a tertiary-care referral center in northern Taiwan by reviewing the medical charts as in our previous study [12]. The study was approved by the Institutional Review Board of the Research Ethics Committee of National Taiwan University Hospital (No.: 200809076R). The informed consent was deemed unnecessary for this retrospective study. We reviewed the mycobacterial laboratory registry database of the hospital and identified all patients with PE specimens sent for mycobacterial culture from January 2001 to December 2008. Among them, those who were hospitalized for PE before the diagnosis of TP was established by mycobacterial culture for PE were included for further investigation. Patients were classified into two groups according to the disease extent of TB: the isolated pleurisy group and pleuro-pulmonary group. The former was considered if all respiratory samples from a patient were culture-negative for M tuberculosis and there were no pulmonary parenchymal lesions compatible with active TB on chest radiographs, defined as new patch(es) of consolidation, collapse, lymphadenopathy, mass or nodule, cavitary lesion or infiltrate without other proven etiology [13]. The others were classified into the pleuro-pulmonary group. [SUBTITLE] Data collection [SUBSECTION] Patient data were collected by reviewing medical records and recorded in a standardized case report form by one chest physician, then verified by another physician from July 2009 to December 2009. Data included age, gender, underlying co-morbidities, initial symptoms, laboratory data and radiographic findings when the index PE sample was collected, as well as the course and outcome of anti-tuberculous treatment. Mycobacterial culture and susceptibility testing were performed according to standard procedures [3,14]. In our hospital, acid-fast smear and mycobacterial culture for pleural effusion samples were routinely performed in cases of lymphocytic pleural exudate by Light's criteria [15]. For patients with adequate cough power, sputum samples were collected by spontaneous expectoration after explanation without supervision. For the others, sputum samples were collected by a nurse using a suction tube inserted through mouth or nasal cavity. We routinely ordered at least three sets of mycobacterial cultures for sputum samples collected from each patient. Bilateral lesions were considered if the contra-lateral lung or pleural cavity were involved. Three histological findings of pleura tissue were considered typical for TP: (1) granulomatous inflammation, (2) caseous necrosis, and (3) the presence of acid fast bacilli [16]. Patients received standard short-course anti-TB treatment with isoniazid (INH), rifampicin (RIF), ethambutol (EMB) and pyrazinamide (PZA) for the initial 2 months, and INH plus RIF for the following 4 months. The standard regimen was modified if drug resistance or adverse effects were encountered [17,18]. Patients were followed for at least 6 months after the index PE samples were collected, or until death or loss of follow-up. Residual pleural thickening (RPT) on radiographs after 6 months of treatment was defined as minor if the pleural thickness was less than 10 mm, or major if equal to or greater than 10 mm. One pulmonologist and one radiologist, both blinded to the clinical data, interpreted the chest radiographs. If their opinions differed, the films were further reviewed by another senior pulmonologist blinded to the results. [SUBTITLE] Statistics [SUBSECTION] The inter-group differences were compared by using the independent t test for numerical variables and the chi-square test or Fisher's exact test for categorical variables as appropriate. Survival curves were generated using the Kaplan-Meier method and were compared using the log-rank test. Variables having a significant difference (p < 0.05) for in-hospital mortality in univariate analysis were further tested by logistic regression with the forward conditional method.
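The statistical plan just described maps onto standard routines. The sketch below is one way to reproduce it in Python; the DataFrame layout, column names, and the choice of scipy, lifelines, and statsmodels are assumptions, and the stepwise forward selection used by the authors is only indicated in a comment.

```python
# Minimal sketch of the analyses named in the Statistics subsection above:
# independent t test, chi-square / Fisher's exact test, Kaplan-Meier curves with
# a log-rank test, and logistic regression for in-hospital mortality.
# The DataFrame 'df' and its column names are hypothetical.
import pandas as pd
from scipy import stats
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
import statsmodels.api as sm


def compare_groups(df: pd.DataFrame):
    # Numerical variable: independent t test (e.g. age by disease extent).
    a = df.loc[df.group == "isolated", "age"]
    b = df.loc[df.group == "pleuro_pulmonary", "age"]
    _, p_age = stats.ttest_ind(a, b, equal_var=True)

    # Categorical variable: chi-square, or Fisher's exact test when cells are sparse.
    table = pd.crosstab(df.group, df.fever)
    _, p_fever, _, expected = stats.chi2_contingency(table)
    if (expected < 5).any():
        _, p_fever = stats.fisher_exact(table.values)
    return p_age, p_fever


def survival_and_mortality(df: pd.DataFrame):
    # Kaplan-Meier curves per group, compared with the log-rank test.
    iso = df[df.group == "isolated"]
    pp = df[df.group == "pleuro_pulmonary"]
    km = KaplanMeierFitter().fit(iso.followup_days, iso.died)
    lr = logrank_test(iso.followup_days, pp.followup_days, iso.died, pp.died)

    # Logistic regression for in-hospital mortality on variables significant in
    # univariate analysis (the forward conditional selection itself is omitted here).
    X = sm.add_constant(df[["pulmonary_involvement", "comorbidity", "no_anti_tb_treatment"]])
    model = sm.Logit(df.in_hospital_death, X).fit()
    return km, lr.p_value, model.params
```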
null
null
null
null
[ "Background", "Subjects of study", "Data collection", "Statistics", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Tuberculosis (TB) remains a global health problem even though it has nearly been eradicated in some developed countries [1,2]. The incidence in 2005 was 76 per 100,000 persons in Taiwan, 80 per 100,000 in the Republic of Korea, and 600 per 100000 in South Africa [3,4]. TB remains a leading cause of mortality in many countries. The mortality rate has been reported to be 6% in those with pulmonary TB, and as high as 31% in those with disseminated TB [3,4].\nBecause of variable manifestations and the difficulty in collecting clinical samples, extra-pulmonary TB is usually difficult to diagnose early [5]. Tuberculous pleurisy (TP) is the second most common extra-pulmonary infection [5], and accounts for approximately 5% of all forms of TB [6]. The gold standard for the diagnosis of TP is still mycobacterial culture of pleural effusion (PE), pleura tissue and respiratory specimens, which requires weeks to yield. The treatment could thus be delayed, resulting in an increased mortality rate [7]. For those requiring hospitalization, the mortality rate is further increasing due to increasingly severe infections and weaker host states [8,9].\nThe prognostic factors for hospitalized patients with TP are unclear. Only limited information is available on whether or not pulmonary involvement has a negative prognostic impact [6,10]. However, the mortality rate is high in tuberculosis patients if they are not promptly diagnosed and treated [11]. Therefore, we conducted this retrospective study to investigate the in-hospital mortality rate of culture-confirmed TP with an emphasis on the clinical impact of pulmonary involvement.", "This retrospective study was conducted in a tertiary-care referral center in northern Taiwan by reviewing the medical charts as in our previous study [12]. The study was approved by the Institutional Review Board of the Research Ethics Committee of National Taiwan University Hospital (No.: 200809076R). The informed consent was deemed unnecessary for this retrospective study. We reviewed the mycobacterial laboratory registry database of the hospital and identified all patients with PE specimens sent for mycobacterial culture from January 2001 to December 2008. Among them, those who were hospitalized for PE before the diagnosis of TP was established by mycobacterial culture for PE were included for further investigation. Patients were classified into two groups according to the disease extent of TB: the isolated pleurisy group and pleuro-pulmonary group. The former was considered if all respiratory samples from a patient were culture-negative for M tuberculosis and there were no pulmonary parenchymal lesions compatible with active TB on chest radiographs, defined as new patch(es) of consolidation, collapse, lymphadenopathy, mass or nodule, cavitary lesion or infiltrate without other proven etiology [13]. The others were classified into the pleuro-pulmonary group.", "Patient data were collected by reviewing medical records and recorded in a standardized case report form by one chest physician, then verified by another physician from July 2009 to December 2009. Data included age, gender, underlying co-morbidities, initial symptoms, laboratory data and radiographic findings when the index PE sample was collected, as well as the course and outcome of anti-tuberculous treatment. Mycobacterial culture and susceptibility testing were performed according to standard procedures [3,14]. 
In our hospital, acid-fast smear and mycobacterial culture for pleural effusion samples were routinely performed in cases of lymphocytic pleural exudate by Light's criteria [15]. For patients with adequate cough power, sputum samples were collected by spontaneous expectoration after explanation without supervision. For the others, sputum samples were collected by a nurse using a suction tube inserted through mouth or nasal cavity. We routinely ordered at least three sets of mycobacterial cultures for sputum samples collected from each patient. Bilateral lesions were considered if the contra-lateral lung or pleural cavity were involved. Three histological findings of pleura tissue were considered typical for TP: (1) granulomatous inflammation, (2) caseous necrosis, and (3) the presence of acid fast bacilli [16].\nPatients received standard short-course anti-TB treatment with isoniazid (INH), rifampicin (RIF), ethambutol (EMB) and pyrazinamide (PZA) for the initial 2 months, and INH plus RIF for the following 4 months. The standard regimen was modified if drug resistance or adverse effects were encountered [17,18]. Patients were followed for at least 6 months after the index PE samples were collected, or until death or loss of follow-up. Residual pleural thickening (RPT) on radiographs after 6 months of treatment was defined as minor if the pleural thickness was less than 10 mm, or major if equal to or greater than 10 mm. One pulmonologist and one radiologist, both blinded to the clinical data, interpreted the chest radiographs. If their opinions differed, the films were further reviewed by another senior pulmonologist blinded to the results.", "The inter-group differences were compared by using the independent t test for numerical variables and the chi-square test or Fisher's exact test for categorical variables as appropriate. Survival curves were generated using the Kaplan-Meier method and were compared using the log-rank test. Variables having a significant difference (p < 0.05) for in-hospital mortality in univariate analysis were further tested by logistic regression with the forward conditional method.", "During the 8-year study period, a total of 496 samples from 412 patients out of 24,759 PE samples yielded M. tuberculosis. Among them, 205 patients were hospitalized when TP was culture-confirmed. The indications for hospitalization were intolerant fever or dyspnea in 99, massive and/or loculated PE in 51, prolonged symptoms (> 14 days) in 51, and presence of lung mass in 14. Among the 205 patients, 51 were further classified into the isolated pleurisy group. The other 154, including 97 (63%) whose sputum samples were culture-positive for M. tuberculosis, were classified into the pleuro-pulmonary group. A total of 3,112 patients had culture-confirmed pulmonary TB.\nThe clinical characteristics of the patients with TP are listed in Table 1. Patients in the isolated pleurisy group were younger and less frequently had underlying co-morbid illnesses than those in the pleuro-pulmonary group. Among patients aged less than 65 years, underlying co-morbid illnesses were still less common in the isolated pleurisy group (11% vs. 45%, p = 0.003), but similar between the two groups in those aged 65 years or older (48% vs. 47%, p = 0.968). Malignancy and diabetes mellitus were the most common co-morbidities in the two groups. The serostatus of Human Immunodeficiency Virus (HIV) was tested in 63 (31%) patients and was positive in 6, with no inter-group difference. 
Of the 142 patients with unknown HIV serostatus, all were free of other acquired immunodeficiency syndrome (AIDS)-defined illness during follow-up. Male predominance was noted in both groups. The duration of symptoms was about 17 days, and 51% of the patients in the isolated TP group presented with fever. Fever was also more common in those aged less than 65 years (54% vs. 23%, p < 0.001), without underlying co-morbidities (40% vs. 24%, p = 0.017), or without hypoalbuminemia (defined as a serum level of albumin less than 3.5 g/dL) (42% vs. 26%, p = 0.049). More patients in the isolated pleurisy group suffered from chest pain, but dyspnea was most common in the pleuro-pulmonary group.\nClinical characteristics of the patients with tuberculous pleurisy\nData are no. (%) or mean [SD]\n* Three and twelve in the isolated pleurisy group and pleuro-pulmonary group, respectively, had two underlying co-morbid conditions.\n† Other symptoms included gastrointestinal symptoms, consciousness change and other non-specific symptoms.\n# 63 patients received human immunodeficiency tests.\nThe results of laboratory tests revealed that more patients in the pleuro-pulmonary group had anemia and hypoalbuminemia (Table 2). The two findings were also significantly associated with an age of 65 years or older (p = 0.008 and p < 0.001, respectively) and underlying comorbid condition (p < 0.001 for both). Pleural biopsy was performed in 69% (n = 35) of the isolated pleurisy group and in 33% (n = 51) of the pleuro-pulmonary group, with 75.6% (n = 65) showing granulomatous inflammation with/without caseating changes. Patients with a typical pleural pathology were treated earlier after index PE culture than those who did not have a typical pleural pathology (8.0 vs. 14.6 days, p < 0.001). The resistance patterns were similar between the isolated pleurisy group and pleuro-pulmonary group. Nineteen patients had resistance against at least one first-line drug, and four patients had multidrug-resistant TB. Radiographically, the isolated pleurisy group had fewer patients with bilateral lesions and more with loculated PE.\nLaboratory and radiographic findings of the patients with tuberculous pleurisy\nAFB = acid-fast bacilli, PE = pleural effusion\nData are no. (%) or mean [SD]\n* Hemoglobin < 12 g/dL in men or < 11 g/dL in women was considered anemia.\nA total of 29 patients did not receive anti-tuberculous treatment (Table 3). Of them, 19 patients in the pleuro-pulmonary group died before the diagnosis of TP was culture-confirmed. Another five in the pleuro-pulmonary group and five in the isolated pleurisy group were discharged and lost to follow-up before the results of mycobacterial culture became available. Among those who received anti-tuberculous treatment, the median interval from the sampling date of index PE specimen to anti-tuberculous treatment was 6 days in the isolated pleurisy group and 9 days in the pleuro-pulmonary group (p = 0.367) (Table 3). About two-thirds of each group received anti-tuberculous treatment within 2 weeks after the index PE samples were collected. Nine patients underwent video-assisted thoracoscopy for decortication and 19 received tube thoracostomy. There was no significant between-group difference.\nTreatment and outcomes\nData are no. 
(%) or mean [SD]\n* After six months of anti-tuberculous treatment, only 36 patients in the isolated pleurisy group and 72 in the pleuro-pulmonary group were still being followed in our hospital.\nOutcome analysis showed that the pleuro-pulmonary group had a higher in-hospital mortality rate and longer length of hospital stay than the isolated pleurisy group (Table 3). Among the 39 patients who died before discharge, 2 patients belonged to the isolated pleurisy group and both had underlying malignancy. The remaining 37 patients had pleuro-pulmonary TB. Among them, 24 (65%) of them had underlying diseases, including malignancy in 12, diabetes mellitus in 6, end-stage renal disease in 6, liver cirrhosis in 4, and autoimmune disease requiring immunosuppressant in 1 (5 of them had two underlying diseases). None of the 39 patients had HIV infection. The cause of death was multi-organ failure in 28, refractory respiratory failure in 10, and massive gastrointestinal bleeding in 1. Among those who died of multi-organ failure, only three were documented to have concomitant bacteremia or fungemia. The role of pleuro-pulmonary involvement continued in 2-month survival analysis (Figure 1, p = 0.003). Within the first 6 months of treatment, 67 patients died and 30 were lost to follow-up. Of the remaining 108 patients, 35 of the 36 patients in the isolated pleurisy group and 69 of the 72 in the pleuro-pulmonary group had received chest radiography after six months. The proportion of patients with RPT ≥ 10 mm was similar in the two groups (p = 0.542).\nSurvival curves were plotted using the Kaplan-Meier method for patients with tuberculous pleurisy according to the disease extent (the isolated pleurisy group and pleuro-pulmonary group). Black dots represent patients who were still alive at the end of the study.\nThe 65 patients with typical pleura pathology for TP were all alive at the time of discharge, whereas only 101 patients (72%) of the remaining 140 patients were alive at discharge (p < 0.001 by the chi-square test). Thus, we concluded that \"typical pleura pathology\" was a significant predictor of in-hospital mortality, and then excluded the 65 patients from multivariate logistic regression analysis. The results showed that pulmonary involvement, underlying comorbidity and not receiving anti-TB treatment were independent risk factors of in-hospital mortality (Table 4).\nFactors possibly associated with in-hospital mortality\nThe 65 patients with typical pleural pathology for TP were all alive at discharge, whereas 39 of the remaining 140 patients died in hospital (p < 0.001 by the chi-square test). Therefore, logistic regression was performed on the 140 patients who had not received a pleural biopsy or had no typical pleural pathology for TP.", "The pleural cavity is a common site of involvement in extra-pulmonary TB [5,16]; however, the outcomes and prognostic factors are unclear in hospitalized populations. In this retrospective study, those with pleuro-pulmonary TP accounted for three-fourths of all TP patients requiring hospitalization and had a higher in-hospital mortality rate. The in-hospital mortality rate was also higher among patients who had underlying comorbidities, did not receive anti-TB treatment and had no typical pleural pathology for TP.\nAlthough the residual RPT was similar, our analysis showed that the in-hospital mortality rate was six-fold higher in patients with pulmonary involvement than those with isolated pleurisy (24% vs. 4%). 
Compatible with a previous report showing high mortality in hospitalized TB patients [8], our previous study revealed that patients with neutrophil-predominant TP had an in-hospital mortality rate of 36% [7]. There are several possible explanations for the high in-hospital mortality rate of patients with TP, especially for those with pulmonary involvement. Because patients with isolated pleurisy are more likely to have local and systemic inflammatory symptoms such as chest pain and fever rather than hypoalbuminemia, pulmonary involvement probably represents an extensive and serious infection in a compromised and malnourished host. Another possible explanation is that TB is usually at the top of the list of the differential diagnoses for lymphocyte-rich pleurisy [15], whereas it accounts for only 1~2% of the etiologies for pneumonia [19], thus treatment is frequently delayed. Although a delay in treatment for more than 14 days was not an independent poor prognostic factor, the 19 cases of rapid mortality in our study suggest that TP could be an immediately fatal disease, and timely and effective anti-tuberculous treatment is vital, especially for those with pleuro-pulmonary involvement.\nHowever, two previous studies failed to demonstrate a difference in clinical outcomes between isolated TP and pleuro-pulmonary TB [6,10]. Again there are several possible explanations. First, the previous studies analyzed survival after completing anti-TB treatment and relapse, rather than in-hospital mortality. These long-term outcomes were more likely to be confounded by other factors, such as age, underlying co-morbidity, and socioeconomic status. Second, those needing admission were probably more severe cases, especially in a referral medical center. Finally, the patients in the previous reports were younger, around the fifth to early sixth decade, and less than 10% of them had underlying comorbid conditions [16,20].\nOur results revealed that histologic examination of the pleural biopsy is the key step for the early diagnosis of TP, because it can effectively demonstrate a typical pathology of TP in more than three-fourths of patients within 3 days, which is higher than the yield rate of mycobacterial cultures for PE samples (11%) [21]. Moreover, even when using the fluorometric BACTEC technique, the results of mycobacterial culture still take one to two weeks [22]. Hence, a typical pleura pathology could result in the early diagnose of TP and improved outcomes. Therefore, for in-patients with lymphocyte-rich PE, the possibility of tuberculosis should be kept in mind and pleural histology should be performed at an early stage if clinically feasible. For the early diagnosis of TP, biomarkers in pleural effusion such as adenosine deaminase and interferon-gamma have been shown to be helpful, but further investigations are needed for the application of nucleic acid amplification tests and interferon-gamma release assays [23,24].\nOur study has several limitations. First, in this retrospective study, the number of patients with culture-confirmed TP could have been underestimated because mycobacterial cultures were not routinely performed for every PE sample, and most studies show the sensitivity to be less than 30% [16]. Therefore, the patients with culture-negative TP might have been missed. However, the selected patients were all true cases of TP and represented a homogenous population for detailed analysis. Second, the 6-month follow-up rate was less than 90%. 
Third, our study population was selected from a large medical referral center. Whether our findings can be extrapolated to all TP patients should be further confirmed.", "Our study revealed that for hospitalized patients with TP, pulmonary involvement, underlying comorbidities, no typical pleura pathology and not receiving anti-TB treatment were associated with a worse in-hospital outcome. Aggressive examination, such as pleural biopsy, for pleural effusion with unknown cause is suggested for the early diagnosis and treatment if clinically appropriate.", "All of the authors declare no competing interest of any nature or kind in related products, services, and/or companies.", "JYW, JTW, and CCS designed the study, collected all relevant data and wrote the manuscript. CJY, and LL contributed to analyzing data. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/46/prepub\n" ]
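Light's criteria, cited in the Data collection subsection above for identifying exudative effusions, are a simple decision rule: the fluid is an exudate if the pleural-to-serum protein ratio exceeds 0.5, the pleural-to-serum LDH ratio exceeds 0.6, or the pleural LDH exceeds two-thirds of the laboratory's upper normal limit for serum LDH. A small illustrative function (the thresholds are the standard published ones; the function and parameter names are hypothetical):

```python
# Illustrative implementation of Light's criteria for classifying a pleural effusion
# as an exudate (any one criterion met) rather than a transudate.
# Function and parameter names are hypothetical; units must be consistent per analyte.
def is_exudate_by_lights_criteria(pleural_protein: float, serum_protein: float,
                                  pleural_ldh: float, serum_ldh: float,
                                  serum_ldh_upper_normal: float) -> bool:
    return (
        pleural_protein / serum_protein > 0.5
        or pleural_ldh / serum_ldh > 0.6
        or pleural_ldh > (2.0 / 3.0) * serum_ldh_upper_normal
    )


# Example: protein 4.2/6.8 g/dL gives a ratio of about 0.62 > 0.5, so the
# effusion is classified as an exudate regardless of the LDH values.
# print(is_exudate_by_lights_criteria(4.2, 6.8, 420, 230, 222))  # True
```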
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Subjects of study", "Data collection", "Statistics", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Tuberculosis (TB) remains a global health problem even though it has nearly been eradicated in some developed countries [1,2]. The incidence in 2005 was 76 per 100,000 persons in Taiwan, 80 per 100,000 in the Republic of Korea, and 600 per 100000 in South Africa [3,4]. TB remains a leading cause of mortality in many countries. The mortality rate has been reported to be 6% in those with pulmonary TB, and as high as 31% in those with disseminated TB [3,4].\nBecause of variable manifestations and the difficulty in collecting clinical samples, extra-pulmonary TB is usually difficult to diagnose early [5]. Tuberculous pleurisy (TP) is the second most common extra-pulmonary infection [5], and accounts for approximately 5% of all forms of TB [6]. The gold standard for the diagnosis of TP is still mycobacterial culture of pleural effusion (PE), pleura tissue and respiratory specimens, which requires weeks to yield. The treatment could thus be delayed, resulting in an increased mortality rate [7]. For those requiring hospitalization, the mortality rate is further increasing due to increasingly severe infections and weaker host states [8,9].\nThe prognostic factors for hospitalized patients with TP are unclear. Only limited information is available on whether or not pulmonary involvement has a negative prognostic impact [6,10]. However, the mortality rate is high in tuberculosis patients if they are not promptly diagnosed and treated [11]. Therefore, we conducted this retrospective study to investigate the in-hospital mortality rate of culture-confirmed TP with an emphasis on the clinical impact of pulmonary involvement.", "[SUBTITLE] Subjects of study [SUBSECTION] This retrospective study was conducted in a tertiary-care referral center in northern Taiwan by reviewing the medical charts as in our previous study [12]. The study was approved by the Institutional Review Board of the Research Ethics Committee of National Taiwan University Hospital (No.: 200809076R). The informed consent was deemed unnecessary for this retrospective study. We reviewed the mycobacterial laboratory registry database of the hospital and identified all patients with PE specimens sent for mycobacterial culture from January 2001 to December 2008. Among them, those who were hospitalized for PE before the diagnosis of TP was established by mycobacterial culture for PE were included for further investigation. Patients were classified into two groups according to the disease extent of TB: the isolated pleurisy group and pleuro-pulmonary group. The former was considered if all respiratory samples from a patient were culture-negative for M tuberculosis and there were no pulmonary parenchymal lesions compatible with active TB on chest radiographs, defined as new patch(es) of consolidation, collapse, lymphadenopathy, mass or nodule, cavitary lesion or infiltrate without other proven etiology [13]. The others were classified into the pleuro-pulmonary group.\nThis retrospective study was conducted in a tertiary-care referral center in northern Taiwan by reviewing the medical charts as in our previous study [12]. The study was approved by the Institutional Review Board of the Research Ethics Committee of National Taiwan University Hospital (No.: 200809076R). The informed consent was deemed unnecessary for this retrospective study. We reviewed the mycobacterial laboratory registry database of the hospital and identified all patients with PE specimens sent for mycobacterial culture from January 2001 to December 2008. 
Among them, those who were hospitalized for PE before the diagnosis of TP was established by mycobacterial culture for PE were included for further investigation. Patients were classified into two groups according to the disease extent of TB: the isolated pleurisy group and pleuro-pulmonary group. The former was considered if all respiratory samples from a patient were culture-negative for M tuberculosis and there were no pulmonary parenchymal lesions compatible with active TB on chest radiographs, defined as new patch(es) of consolidation, collapse, lymphadenopathy, mass or nodule, cavitary lesion or infiltrate without other proven etiology [13]. The others were classified into the pleuro-pulmonary group.\n[SUBTITLE] Data collection [SUBSECTION] Patient data were collected by reviewing medical records and recorded in a standardized case report form by one chest physician, then verified by another physician from July 2009 to December 2009. Data included age, gender, underlying co-morbidities, initial symptoms, laboratory data and radiographic findings when the index PE sample was collected, as well as the course and outcome of anti-tuberculous treatment. Mycobacterial culture and susceptibility testing were performed according to standard procedures [3,14]. In our hospital, acid-fast smear and mycobacterial culture for pleural effusion samples were routinely performed in cases of lymphocytic pleural exudate by Light's criteria [15]. For patients with adequate cough power, sputum samples were collected by spontaneous expectoration after explanation without supervision. For the others, sputum samples were collected by a nurse using a suction tube inserted through mouth or nasal cavity. We routinely ordered at least three sets of mycobacterial cultures for sputum samples collected from each patient. Bilateral lesions were considered if the contra-lateral lung or pleural cavity were involved. Three histological findings of pleura tissue were considered typical for TP: (1) granulomatous inflammation, (2) caseous necrosis, and (3) the presence of acid fast bacilli [16].\nPatients received standard short-course anti-TB treatment with isoniazid (INH), rifampicin (RIF), ethambutol (EMB) and pyrazinamide (PZA) for the initial 2 months, and INH plus RIF for the following 4 months. The standard regimen was modified if drug resistance or adverse effects were encountered [17,18]. Patients were followed for at least 6 months after the index PE samples were collected, or until death or loss of follow-up. Residual pleural thickening (RPT) on radiographs after 6 months of treatment was defined as minor if the pleural thickness was less than 10 mm, or major if equal to or greater than 10 mm. One pulmonologist and one radiologist, both blinded to the clinical data, interpreted the chest radiographs. If their opinions differed, the films were further reviewed by another senior pulmonologist blinded to the results.\nPatient data were collected by reviewing medical records and recorded in a standardized case report form by one chest physician, then verified by another physician from July 2009 to December 2009. Data included age, gender, underlying co-morbidities, initial symptoms, laboratory data and radiographic findings when the index PE sample was collected, as well as the course and outcome of anti-tuberculous treatment. Mycobacterial culture and susceptibility testing were performed according to standard procedures [3,14]. 
In our hospital, acid-fast smear and mycobacterial culture for pleural effusion samples were routinely performed in cases of lymphocytic pleural exudate by Light's criteria [15]. For patients with adequate cough power, sputum samples were collected by spontaneous expectoration after explanation without supervision. For the others, sputum samples were collected by a nurse using a suction tube inserted through the mouth or nasal cavity. We routinely ordered at least three sets of mycobacterial cultures for sputum samples collected from each patient. Bilateral lesions were considered if the contralateral lung or pleural cavity was involved. Three histological findings of pleural tissue were considered typical for TP: (1) granulomatous inflammation, (2) caseous necrosis, and (3) the presence of acid fast bacilli [16].\nPatients received standard short-course anti-TB treatment with isoniazid (INH), rifampicin (RIF), ethambutol (EMB) and pyrazinamide (PZA) for the initial 2 months, and INH plus RIF for the following 4 months. The standard regimen was modified if drug resistance or adverse effects were encountered [17,18]. Patients were followed for at least 6 months after the index PE samples were collected, or until death or loss of follow-up. Residual pleural thickening (RPT) on radiographs after 6 months of treatment was defined as minor if the pleural thickness was less than 10 mm, or major if equal to or greater than 10 mm. One pulmonologist and one radiologist, both blinded to the clinical data, interpreted the chest radiographs. If their opinions differed, the films were further reviewed by another senior pulmonologist blinded to the results.\n[SUBTITLE] Statistics [SUBSECTION] The inter-group differences were compared using the independent t test for numerical variables and the chi-square test or Fisher's exact test for categorical variables as appropriate. Survival curves were generated using the Kaplan-Meier method and were compared using the log-rank test. Variables having a significant difference (p < 0.05) for in-hospital mortality in univariate analysis were further tested by logistic regression with the forward conditional method.", "This retrospective study was conducted in a tertiary-care referral center in northern Taiwan by reviewing the medical charts as in our previous study [12]. The study was approved by the Institutional Review Board of the Research Ethics Committee of National Taiwan University Hospital (No.: 200809076R). Informed consent was deemed unnecessary for this retrospective study. We reviewed the mycobacterial laboratory registry database of the hospital and identified all patients with PE specimens sent for mycobacterial culture from January 2001 to December 2008. Among them, those who were hospitalized for PE before the diagnosis of TP was established by mycobacterial culture of PE were included for further investigation. Patients were classified into two groups according to the disease extent of TB: the isolated pleurisy group and pleuro-pulmonary group.
The former was considered if all respiratory samples from a patient were culture-negative for M. tuberculosis and there were no pulmonary parenchymal lesions compatible with active TB on chest radiographs, defined as new patch(es) of consolidation, collapse, lymphadenopathy, mass or nodule, cavitary lesion or infiltrate without other proven etiology [13]. The others were classified into the pleuro-pulmonary group.", "Patient data were collected by reviewing medical records and recorded in a standardized case report form by one chest physician, then verified by another physician from July 2009 to December 2009. Data included age, gender, underlying co-morbidities, initial symptoms, laboratory data and radiographic findings when the index PE sample was collected, as well as the course and outcome of anti-tuberculous treatment. Mycobacterial culture and susceptibility testing were performed according to standard procedures [3,14]. In our hospital, acid-fast smear and mycobacterial culture for pleural effusion samples were routinely performed in cases of lymphocytic pleural exudate by Light's criteria [15]. For patients with adequate cough power, sputum samples were collected by spontaneous expectoration after explanation without supervision. For the others, sputum samples were collected by a nurse using a suction tube inserted through the mouth or nasal cavity. We routinely ordered at least three sets of mycobacterial cultures for sputum samples collected from each patient. Bilateral lesions were considered if the contralateral lung or pleural cavity was involved. Three histological findings of pleural tissue were considered typical for TP: (1) granulomatous inflammation, (2) caseous necrosis, and (3) the presence of acid fast bacilli [16].\nPatients received standard short-course anti-TB treatment with isoniazid (INH), rifampicin (RIF), ethambutol (EMB) and pyrazinamide (PZA) for the initial 2 months, and INH plus RIF for the following 4 months. The standard regimen was modified if drug resistance or adverse effects were encountered [17,18]. Patients were followed for at least 6 months after the index PE samples were collected, or until death or loss of follow-up. Residual pleural thickening (RPT) on radiographs after 6 months of treatment was defined as minor if the pleural thickness was less than 10 mm, or major if equal to or greater than 10 mm. One pulmonologist and one radiologist, both blinded to the clinical data, interpreted the chest radiographs. If their opinions differed, the films were further reviewed by another senior pulmonologist blinded to the results.", "The inter-group differences were compared using the independent t test for numerical variables and the chi-square test or Fisher's exact test for categorical variables as appropriate. Survival curves were generated using the Kaplan-Meier method and were compared using the log-rank test. Variables having a significant difference (p < 0.05) for in-hospital mortality in univariate analysis were further tested by logistic regression with the forward conditional method.", "During the 8-year study period, a total of 496 samples from 412 patients out of 24,759 PE samples yielded M. tuberculosis. Among them, 205 patients were hospitalized when TP was culture-confirmed. The indications for hospitalization were intolerable fever or dyspnea in 99, massive and/or loculated PE in 51, prolonged symptoms (> 14 days) in 51, and presence of lung mass in 14. Among the 205 patients, 51 were further classified into the isolated pleurisy group.
The other 154, including 97 (63%) whose sputum samples were culture-positive for M. tuberculosis, were classified into the pleuro-pulmonary group. A total of 3,112 patients had culture-confirmed pulmonary TB.\nThe clinical characteristics of the patients with TP are listed in Table 1. Patients in the isolated pleurisy group were younger and less frequently had underlying co-morbid illnesses than those in the pleuro-pulmonary group. Among patients aged less than 65 years, underlying co-morbid illnesses were still less common in the isolated pleurisy group (11% vs. 45%, p = 0.003), but similar between the two groups in those aged 65 years or older (48% vs. 47%, p = 0.968). Malignancy and diabetes mellitus were the most common co-morbidities in the two groups. Human immunodeficiency virus (HIV) serostatus was tested in 63 (31%) patients and was positive in 6, with no inter-group difference. Of the 142 patients with unknown HIV serostatus, all were free of other acquired immunodeficiency syndrome (AIDS)-defining illness during follow-up. Male predominance was noted in both groups. The duration of symptoms was about 17 days, and 51% of the patients in the isolated pleurisy group presented with fever. Fever was also more common in those aged less than 65 years (54% vs. 23%, p < 0.001), without underlying co-morbidities (40% vs. 24%, p = 0.017), or without hypoalbuminemia (defined as a serum level of albumin less than 3.5 g/dL) (42% vs. 26%, p = 0.049). More patients in the isolated pleurisy group suffered from chest pain, whereas dyspnea was more common in the pleuro-pulmonary group.\nClinical characteristics of the patients with tuberculous pleurisy\nData are no. (%) or mean [SD]\n* Three and twelve in the isolated pleurisy group and pleuro-pulmonary group, respectively, had two underlying co-morbid conditions.\n† Other symptoms included gastrointestinal symptoms, consciousness change and other non-specific symptoms.\n# 63 patients received human immunodeficiency virus tests.\nThe results of laboratory tests revealed that more patients in the pleuro-pulmonary group had anemia and hypoalbuminemia (Table 2). The two findings were also significantly associated with an age of 65 years or older (p = 0.008 and p < 0.001, respectively) and underlying comorbid condition (p < 0.001 for both). Pleural biopsy was performed in 69% (n = 35) of the isolated pleurisy group and in 33% (n = 51) of the pleuro-pulmonary group, with 75.6% (n = 65) showing granulomatous inflammation with/without caseating changes. Patients with a typical pleural pathology were treated earlier after index PE culture than those who did not have a typical pleural pathology (8.0 vs. 14.6 days, p < 0.001). The resistance patterns were similar between the isolated pleurisy group and pleuro-pulmonary group. Nineteen patients had resistance to at least one first-line drug, and four patients had multidrug-resistant TB. Radiographically, the isolated pleurisy group had fewer patients with bilateral lesions and more with loculated PE.\nLaboratory and radiographic findings of the patients with tuberculous pleurisy\nAFB = acid-fast bacilli, PE = pleural effusion\nData are no. (%) or mean [SD]\n* Hemoglobin < 12 g/dL in men or < 11 g/dL in women was considered anemia.\nA total of 29 patients did not receive anti-tuberculous treatment (Table 3). Of them, 19 patients in the pleuro-pulmonary group died before the diagnosis of TP was culture-confirmed.
Another five in the pleuro-pulmonary group and five in the isolated pleurisy group were discharged and lost to follow-up before the results of mycobacterial culture became available. Among those who received anti-tuberculous treatment, the median interval from the sampling date of the index PE specimen to anti-tuberculous treatment was 6 days in the isolated pleurisy group and 9 days in the pleuro-pulmonary group (p = 0.367) (Table 3). About two-thirds of each group received anti-tuberculous treatment within 2 weeks after the index PE samples were collected. Nine patients underwent video-assisted thoracoscopy for decortication and 19 received tube thoracostomy. There was no significant between-group difference.\nTreatment and outcomes\nData are no. (%) or mean [SD]\n* After six months of anti-tuberculous treatment, only 36 patients in the isolated pleurisy group and 72 in the pleuro-pulmonary group were still being followed in our hospital.\nOutcome analysis showed that the pleuro-pulmonary group had a higher in-hospital mortality rate and longer length of hospital stay than the isolated pleurisy group (Table 3). Among the 39 patients who died before discharge, 2 patients belonged to the isolated pleurisy group and both had underlying malignancy. The remaining 37 patients had pleuro-pulmonary TB. Among them, 24 (65%) had underlying diseases, including malignancy in 12, diabetes mellitus in 6, end-stage renal disease in 6, liver cirrhosis in 4, and autoimmune disease requiring immunosuppressant in 1 (5 of them had two underlying diseases). None of the 39 patients had HIV infection. The cause of death was multi-organ failure in 28, refractory respiratory failure in 10, and massive gastrointestinal bleeding in 1. Among those who died of multi-organ failure, only three were documented to have concomitant bacteremia or fungemia. The prognostic impact of pleuro-pulmonary involvement persisted in the 2-month survival analysis (Figure 1, p = 0.003). Within the first 6 months of treatment, 67 patients died and 30 were lost to follow-up. Of the remaining 108 patients, 35 of the 36 patients in the isolated pleurisy group and 69 of the 72 in the pleuro-pulmonary group had received chest radiography after six months. The proportion of patients with RPT ≥ 10 mm was similar in the two groups (p = 0.542).\nSurvival curves were plotted using the Kaplan-Meier method for patients with tuberculous pleurisy according to the disease extent (the isolated pleurisy group and pleuro-pulmonary group). Black dots represent patients who were still alive at the end of the study.\nThe 65 patients with typical pleural pathology for TP were all alive at the time of discharge, whereas only 101 (72%) of the remaining 140 patients were alive at discharge (p < 0.001 by the chi-square test). Thus, we concluded that \"typical pleural pathology\" was a significant predictor of in-hospital mortality, and then excluded the 65 patients from multivariate logistic regression analysis. The results showed that pulmonary involvement, underlying comorbidity and not receiving anti-TB treatment were independent risk factors of in-hospital mortality (Table 4).\nFactors possibly associated with in-hospital mortality\nThe 65 patients with typical pleural pathology for TP were all alive at discharge, whereas 39 of the remaining 140 patients died in hospital (p < 0.001 by the chi-square test).
Therefore, logistic regression was performed on the 140 patients who had not received a pleural biopsy or had no typical pleural pathology for TP.", "The pleural cavity is a common site of involvement in extra-pulmonary TB [5,16]; however, the outcomes and prognostic factors are unclear in hospitalized populations. In this retrospective study, those with pleuro-pulmonary TP accounted for three-fourths of all TP patients requiring hospitalization and had a higher in-hospital mortality rate. The in-hospital mortality rate was also higher among patients who had underlying comorbidities, did not receive anti-TB treatment and had no typical pleural pathology for TP.\nAlthough RPT was similar, our analysis showed that the in-hospital mortality rate was six-fold higher in patients with pulmonary involvement than in those with isolated pleurisy (24% vs. 4%). Compatible with a previous report showing high mortality in hospitalized TB patients [8], our previous study revealed that patients with neutrophil-predominant TP had an in-hospital mortality rate of 36% [7]. There are several possible explanations for the high in-hospital mortality rate of patients with TP, especially for those with pulmonary involvement. Because patients with isolated pleurisy are more likely to present with local and systemic inflammatory symptoms such as chest pain and fever than with hypoalbuminemia, pulmonary involvement probably represents an extensive and serious infection in a compromised and malnourished host. Another possible explanation is that TB is usually at the top of the list of the differential diagnoses for lymphocyte-rich pleurisy [15], whereas it accounts for only 1-2% of the etiologies for pneumonia [19], and thus treatment is frequently delayed. Although a delay in treatment for more than 14 days was not an independent poor prognostic factor, the 19 cases of rapid mortality in our study suggest that TP can be a rapidly fatal disease, and timely and effective anti-tuberculous treatment is vital, especially for those with pleuro-pulmonary involvement.\nHowever, two previous studies failed to demonstrate a difference in clinical outcomes between isolated TP and pleuro-pulmonary TB [6,10]. Again, there are several possible explanations. First, the previous studies analyzed survival after completing anti-TB treatment and relapse, rather than in-hospital mortality. These long-term outcomes were more likely to be confounded by other factors, such as age, underlying co-morbidity, and socioeconomic status. Second, those needing admission were probably more severe cases, especially in a referral medical center. Finally, the patients in the previous reports were younger, around the fifth to early sixth decade, and less than 10% of them had underlying comorbid conditions [16,20].\nOur results revealed that histologic examination of the pleural biopsy is the key step for the early diagnosis of TP, because it can effectively demonstrate a typical pathology of TP in more than three-fourths of patients within 3 days, which is higher than the yield rate of mycobacterial cultures for PE samples (11%) [21]. Moreover, even when using the fluorometric BACTEC technique, the results of mycobacterial culture still take one to two weeks [22]. Hence, a typical pleural pathology could result in the early diagnosis of TP and improved outcomes.
Therefore, for in-patients with lymphocyte-rich PE, the possibility of tuberculosis should be kept in mind and pleural histology should be performed at an early stage if clinically feasible. For the early diagnosis of TP, biomarkers in pleural effusion such as adenosine deaminase and interferon-gamma have been shown to be helpful, but further investigations are needed for the application of nucleic acid amplification tests and interferon-gamma release assays [23,24].\nOur study has several limitations. First, in this retrospective study, the number of patients with culture-confirmed TP could have been underestimated because mycobacterial cultures were not routinely performed for every PE sample, and most studies show the sensitivity to be less than 30% [16]. Therefore, patients with culture-negative TP might have been missed. However, the selected patients were all true cases of TP and represented a homogeneous population for detailed analysis. Second, the 6-month follow-up rate was less than 90%. Third, our study population was selected from a large medical referral center. Whether our findings can be extrapolated to all TP patients should be further confirmed.", "Our study revealed that for hospitalized patients with TP, pulmonary involvement, underlying comorbidities, no typical pleural pathology and not receiving anti-TB treatment were associated with a worse in-hospital outcome. Aggressive investigation, such as pleural biopsy, is suggested for pleural effusion of unknown cause to allow early diagnosis and treatment if clinically appropriate.", "All of the authors declare no competing interest of any nature or kind in related products, services, and/or companies.", "JYW, JTW, and CCS designed the study, collected all relevant data and wrote the manuscript. CJY and LL contributed to analyzing data. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/46/prepub\n" ]
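The analysis plan in the Statistics subsection above (Kaplan-Meier survival curves compared with the log-rank test, followed by logistic regression restricted to variables that were significant in univariate analysis) can be illustrated with a short, hypothetical sketch. This is not the authors' code, and the paper does not state which statistical package was used; the file name tp_cohort.csv and every column name below (group, days_to_death_or_censor, died, pulmonary_involvement, comorbidity, no_anti_tb_treatment) are invented for illustration, and the stepwise forward conditional selection step is omitted.

```python
# Hypothetical sketch of the survival and mortality analysis described above.
# The CSV file and all column names are invented; they are not from the paper.
import pandas as pd
import statsmodels.api as sm
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("tp_cohort.csv")  # one row per hospitalized TP patient

# Kaplan-Meier curves by disease extent, compared with the log-rank test
iso = df[df["group"] == "isolated_pleurisy"]
pp = df[df["group"] == "pleuro_pulmonary"]

km = KaplanMeierFitter()
for label, sub in [("Isolated pleurisy", iso), ("Pleuro-pulmonary", pp)]:
    km.fit(sub["days_to_death_or_censor"], event_observed=sub["died"], label=label)
    km.plot_survival_function()

result = logrank_test(
    iso["days_to_death_or_censor"], pp["days_to_death_or_censor"],
    event_observed_A=iso["died"], event_observed_B=pp["died"],
)
print("log-rank p =", result.p_value)

# Multivariable logistic regression for in-hospital mortality, restricted to
# predictors that were significant in univariate analysis (the paper's forward
# conditional selection would be layered on top of this basic fit).
predictors = ["pulmonary_involvement", "comorbidity", "no_anti_tb_treatment"]
X = sm.add_constant(df[predictors].astype(float))
fit = sm.Logit(df["died"], X).fit()
print(fit.summary())
```

The univariate baseline comparisons mentioned in the same subsection could be run beforehand with scipy.stats (chi2_contingency, fisher_exact, ttest_ind) to decide which predictors enter the logistic model.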
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[]
Sun protection and sunbathing practices among at-risk family members of patients with melanoma.
21338483
Despite the increased level of familial risk, research indicates that family members of patients with melanoma engage in relatively low levels of sun protection and high levels of sun exposure. The goal of this study was to evaluate a broad range of demographic, medical, psychological, knowledge, and social influence correlates of sun protection and sunbathing practices among first-degree relatives (FDRs) of melanoma patients and to determine if correlates of sun protection and sunbathing were unique.
BACKGROUND
We evaluated correlates of sun protection and sunbathing among FDRs of melanoma patients who were at increased disease risk due to low compliance with sun protection and skin surveillance behaviors. Participants (N = 545) completed a phone survey.
METHODS
FDRs who reported higher sun protection had a higher education level, lower benefits of sunbathing, greater sunscreen self-efficacy, greater concerns about photo-aging and greater sun protection norms. FDRs who reported higher sunbathing were younger, more likely to be female, endorsed fewer sunscreen barriers, perceived more benefits of sunbathing, had lower image norms for tanness, and endorsed higher sunbathing norms.
RESULTS
Interventions for family members at risk for melanoma might benefit from improving sun protection self-efficacy, reducing perceived sunbathing benefits, and targeting normative influences to sunbathe.
CONCLUSION
[ "Adult", "Aged", "Family", "Female", "Genetic Predisposition to Disease", "Humans", "Interviews as Topic", "Male", "Melanoma", "Middle Aged", "Risk", "Skin Neoplasms", "Sunbathing", "Sunburn", "Sunscreening Agents" ]
3050750
null
null
Methods
[SUBTITLE] Participants and Approach [SUBSECTION] Data for this study were drawn from the pre-intervention data of a randomized clinical trial evaluating the efficacy of two behavioral interventions to improve skin cancer surveillance and prevention among family members of patients with melanoma [27]. Participants were FDRs of patients recruited from the cutaneous oncology practices at three participating medical centers (Fox Chase Cancer Center, Moffitt Cancer Center, and the University of Pennsylvania Health Systems). Prospective participants were identified from tumor registries or medical records. IRB approval was received for each site. Physicians of record gave permission for their patients to be contacted. Sample recruitment began in February 2006 and ended in June 2009. Eligibility criteria for patients whose FDRs are the focus of this study included: a) newly diagnosed with cutaneous malignant melanoma (CMM) since 2001 but more than 3 months prior to being approached; b) seen at one of the three participating sites; c) greater than 18 years of age; d) English speaking; e) able to give meaningful informed consent; f) does not have a FDR with CMM (to exclude patients with familial melanoma syndrome). Patients who met these criteria were mailed a letter describing the study and subsequently contacted by telephone to determine eligibility. At this time, patients gave permission to contact all of their FDRs and for medical information to be obtained from their medical charts. The Institutional Review Boards at the three participating sites approved this study. Next, identified FDRs were mailed a letter describing the study. They were contacted by telephone and eligibility was determined. Eligibility criteria for FDRs were: a) at least 20 years of age; b) had not had a total cutaneous examination in the past three years, had done a skin self-examination three or fewer times in the past year, and had a sun protection habits mean score less than four out of five; c) one or more of the following additional risk factors: blonde or red hair, marked freckling on the upper back, history of three or more blistering sunburns prior to age 20, three or more years of an outdoor summer job as a teenager, or actinic keratosis (a precancerous skin condition); d) able to give meaningful informed consent; e) English speaking; f) has residential phone service; g) no personal history of CMM or non-melanoma skin cancer; h) no personal history of dysplastic nevi (abnormal moles); i) only one FDR with CMM. After written informed consent and HIPAA acknowledgement, a baseline telephone survey (Additional file 1) was completed. Of the 3603 patients approached, 10.3% were ineligible (n = 370), 25.6% could not be located (n = 923), 35.6% refused (n = 1282), and 28.5% of patients provided permission to contact their relatives (n = 1028). These 1028 patients provided 3013 FDR names (2.95 per patient). Of these 3013, 43.9% were ineligible (n = 1324). Eight hundred fifty-five FDRs were ineligible because they did not meet sun or skin protection criteria, 419 were ineligible because of skin cancer medical history, and 50 were ineligible due to additional risk factors or being under the age of 20 years. Twenty percent could not be located (n = 603). Of the 1086 eligible and locatable FDRs identified, 541 refused (49.8%) and 545 (50.2%) enrolled. 
A comparison between the 541 FDRs who refused the study with the 545 FDR participants on available demographic information indicated that participants were significantly older than refusers (t (759) = 11.5, p < .001; M participants = 46.3, SD = 13.3, M refusers = 31.2, SD = 28.7) and that participants were more likely to be female (Percentage female participants = 62.4%; Percentage female refusers = 46.5%; χ2 (1, 1085) = 27.7, p < .001). Participants were also significantly more likely to be offspring of patients (56.1%) than refusers (31%). [SUBTITLE] Materials [SUBSECTION] For each of the multi-item scales assessing psychological factors and social influence factors, a scale score was created by averaging responses across the respective items. Additional information regarding the multi-item scales and internal consistency for these scales are shown in Table 1, and all survey items are contained in an online appendix (Additional file 1). Internal Reliability, Sample Items, and Response Options for Multi-Item Scales [SUBTITLE] Demographics [SUBSECTION] Participants reported their age, sex, race/ethnicity, level of education, marital status, and their relation to the patient with melanoma (i.e., sibling, parent, or offspring). [SUBTITLE] Medical factors [SUBSECTION] Participants indicated whether they had any form of health insurance, if they had visited a dentist in the past year, and the number of times they had visited a doctor in the past year. Questions also asked about five risk factors for melanoma (e.g., having blonde or red hair as a teenager, having three or more blistering sunburns before the age of 20); we created a total risk factor score by summing across the five items. For each melanoma patient, the disease stage at diagnosis and length of time since diagnosis was abstracted from medical records. [SUBTITLE] Psychological factors [SUBSECTION] The measures of sun protection benefits and sunscreen barriers were taken from Jackson and Aiken [21]. Sun protection behavior benefits were assessed using a measure developed by Glanz and colleagues [28]. Measures of benefits of sunbathing, sunscreen self-efficacy (for which the items asked about confidence in using sunscreen in various situations), and photo-aging concerns were taken from Jackson and Aiken [29]. Four items assessed perceived risk of developing melanoma [13].
One of the four items asked participants to indicate their overall perceived risk of developing melanoma during their lifetime (rated from 0 = not at all likely to 100 = extremely likely). The remaining three items asked about different aspects of comparative perceived risk. Perceived severity of melanoma was assessed using a measure adapted from Aiken and colleagues [30]. Distress about melanoma was assessed with a single item ("How distressed are you currently about the diagnosis and treatment of your family member's melanoma"?) with response options from 1 = not at all distressed to 5 = extremely distressed. [SUBTITLE] Knowledge [SUBSECTION] Two multiple-choice items asked about knowledge of sun protection guidelines (i.e., the recommended minimum level of sunscreen sun protection factor (SPF) to use when in the sun, and the recommended hours during the day when people are advised to limit sun exposure). We summed the number of correct responses to the two items. Knowledge about sunscreen and sun exposure was assessed with 11 true-false items drawn from previous research (e.g., "To work best, sunscreen needs to be applied a half-hour before you go outside") [13]. We summed the number of correct responses to the 11 items. [SUBTITLE] Social influence factors [SUBSECTION] Three items drawn from Manne et al. [13] assessed physician recommendations for sun protection. The items asked whether a doctor had ever told the participant to reduce the amount of time spent in the sun, wear a hat or long sleeves when in the sun, or to use sunscreen regularly. Responses were summed across the three items.
Measures of image norms for tanness (i.e., attitudes about tanness and paleness among celebrities), sun protection norms (i.e., sun protection practices and attitudes among friends and family), and sunbathing norms (i.e., sunbathing practices and attitudes among friends and family) were drawn from prior research [13,29]. [SUBTITLE] Outcome variables: Sun protection behaviors and sunbathing [SUBSECTION] Sun protection behaviors were measured using a 5-item measure that asked about the frequency (from 1 = never to 5 = always) of engaging in the following behaviors when out in the sun for more than 30 minutes: using a sunscreen with an SPF of 15 or more, wearing a hat, wearing a shirt with long sleeves, staying in the shade, and wearing sunglasses [28]. Responses were averaged across the five items. Sunbathing was assessed with a single item that asked about the frequency (from 1 = never to 5 = always) of spending time in the sun to get a tan last summer. [SUBTITLE] Statistical Analyses [SUBSECTION] All statistical analyses were conducted using SAS (version 9.2), and a cutoff of p < .05 was used to determine statistical significance. The primary analyses consisted of a series of multiple regressions to examine correlates of the two outcomes, sun protection behaviors and sunbathing.
In order to account for the fact that some participants were members of the same family, all of the regression analyses were conducted using a generalized estimating equations (GEE) approach (PROC GENMOD in SAS), with the assumption of an exchangeable correlation matrix. Regression models for the sun protection behaviors measure were fit under the assumption of a normal distribution. Data from the sunbathing measure were positively skewed, and thus all regression models for that outcome were fit under the assumption of a gamma distribution. The p values reported for the regression analyses are from type 3 tests of model effects. We used the following analytic approach with sun protection behaviors and sunbathing as separate outcome variables in a series of GEE regression analyses. First, separately for each category of potential correlates (i.e., demographics, medical factors, psychological factors, knowledge, and social influence factors), we included all of the variables in that category as independent variables in a single regression model. Next, across all of the categories, the independent variables that were significantly associated with the outcome in the initial analyses were included together in a final regression model. There was no evidence of multicollinearity for any of the regression models.
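As a rough open-source analogue of the GEE approach described above (exchangeable working correlation with family as the clustering unit, a normal family for the sun protection score and a gamma family for the positively skewed sunbathing score), a statsmodels sketch is given below. The original models were fit with PROC GENMOD in SAS; the file fdr_baseline.csv, the variable names, and the particular predictors listed are placeholders rather than the study's actual variable set, and library defaults (for example, the Gamma link function) may differ from the SAS specification.

```python
# Hypothetical statsmodels analogue of the GEE analysis described above.
# File and variable names are placeholders; the original models were fit in SAS.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("fdr_baseline.csv")  # one row per FDR, with a family_id column

# Sun protection score: treated as approximately normal, with an exchangeable
# correlation within family to account for related participants
sun_protection_fit = smf.gee(
    "sun_protection ~ education + sunbathing_benefits + sunscreen_self_efficacy"
    " + photoaging_concerns + sun_protection_norms",
    groups="family_id",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(sun_protection_fit.summary())

# Sunbathing frequency: positively skewed, so modelled with a gamma family
sunbathing_fit = smf.gee(
    "sunbathing ~ age + female + sunscreen_barriers + sunbathing_benefits"
    " + image_norms_tanness + sunbathing_norms",
    groups="family_id",
    data=df,
    family=sm.families.Gamma(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(sunbathing_fit.summary())
```

PROC GENMOD's type 3 tests of model effects have no single-line equivalent here; Wald tests on the fitted coefficients are the closest analogue.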
null
null
null
null
[ "Background", "Participants and Approach", "Materials", "Demographics", "Medical factors", "Psychological factors", "Knowledge", "Social influence factors", "Outcome variables: Sun protection behaviors and sunbathing", "Statistical Analyses", "Results", "Descriptive Statistics", "Correlates of Sun Protection Behaviors", "Correlates of Sunbathing", "Discussion", "Study Strengths and Limitations", "Implications", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Melanoma is the most deadly form of skin cancer, accounting for more than 70% of skin cancer deaths in the United States [1]. The incidence of melanoma is increasing rapidly [2] and faster than any other type of cancer [1]. Family history of melanoma is a known independent risk factor for melanoma [3]. While intense sun exposures and sunburns before the age of 18 are known risk factors, sun exposure during adulthood also impacts melanoma development [4,5]. The American Academy of Dermatology [6], the American Cancer Society [7], the Centers for Disease Control and Prevention [8], and the Task Force on Community Preventive Services on Reducing Exposure to Ultraviolet Light [9] recommend sun avoidance during peak ultraviolet light (UV) hours and use of sun protective clothing for the general population. First degree relatives (FDRs) of individuals who receive a diagnosis of melanoma are at increased disease risk and should pay special attention to precautions to limit sun exposure (e.g., [10]).\nDespite the increased level of familial risk, results of several studies indicate that family members of patients with melanoma engage in relatively low levels of UV protection and high levels of exposure. Bergenmaar and Brandberg [11] assessed young adults with a family history of melanoma and found that engagement in sun protection was low and that sun exposure was high. Almost a third of the sample reported sunbathing very often or often and 28% reported using a tanning bed at least once per month in the past year. Geller and colleagues [12] found that about half of the adult siblings of individuals diagnosed with melanoma did not report using sunscreen regularly. Manne and colleagues [13] reported that FDRs of individuals diagnosed with melanoma engaged in relatively low levels of sun protection. Sunbathing was not assessed in this study. Azzarello and colleagues [14] assessed sun protection practices among FDRs of individuals diagnosed with melanoma, and reported that more than one-third of relatives never or rarely used sunscreen, and more than 60% rarely or never wore protective clothing. Again, sunbathing was not assessed. Geller and colleagues [15] studied children of individuals diagnosed with all skin cancer types and found that use of sunscreen was relatively low (42%). Rates of frequent sunburn in the past year were also relatively high (39%), with particularly high rates of sunburn in the past year among female offspring of mothers who had received a diagnosis of skin cancer. Finally, Bishop and colleagues [16] studied sun protection and sun exposure individuals with a first degree relative with melanoma and found that about 33% of relatives had a sunburn in the previous summer and 64% reported getting a tan the previous summer. However, sunscreen use was high in this sample (90%) as was the use of other methods of sun protection.\nSeveral studies have examined correlates of sun protection practices among relatives of individuals diagnosed with melanoma. These studies have focused on demographic, phenotypic, health care access, and attitudinal factors. In terms of demographic factors, some studies suggest that female gender [12] and a college education [14] are associated with greater sun protection, whereas other studies do not suggest these associations [11,13]. In terms of phenotypic factors, a greater tendency to burn [12] and greater number of melanoma risk factors [14] have been associated with sun protection in some studies, but not in others [13]. 
Health care access and knowledge factors such as having a dermatologist [12], a physician recommendation to engage in sun protection [13], and a greater knowledge level regarding what suspicious moles look like [12] have been associated with higher engagement in sun protection. Attitudinal factors such as a greater perceived risk [14] have been associated with greater sun protection habits in some studies [14] but not others [12,13]. Greater self-efficacy has been consistently associated with engagement in sun protection [13,14]. Fewer perceived barriers to using sunscreen [13] and lower normative influences for sunbathing [11,13] have also been associated with sun protection. Appearance benefits and normative influences have been described as common reasons for sunbathing among relatives [11].\nAlthough there have been several studies focusing on sun habits of family members of melanoma patients, there are two gaps in the literature. First, no study has evaluated the role of a comprehensive set of attitudinal and knowledge factors in both sun protection and sunbathing practices among family members and compared whether the correlates of each behavior differ. The majority of studies have studied sun protection with little attention paid to correlates of sunbathing. Second, little is known about the population of relatives who are the least compliant with skin protection behaviors. This is a little-studied population that is most reluctant to adopt sun protection. It is important to better understand sun protection and sunbathing habits among these individuals because they are at higher risk for skin cancer due to their skin cancer surveillance habits, and are therefore an appropriate target for intervention to improve sun protection.\nTo select correlates for the current study, we integrated constructs from two conceptual models, the Preventive Health Model (PHM) [17,18] and the Theory of Planned Behavior (TPB) [19]. We also based our selection on findings from prior research on correlates of sun protection and sunbathing behaviors from studies of individuals at average risk for skin cancer [20-25]. From the TPB, we included the role of normative influences and considered them as part of broader social influence factors to be examined. Drawing from the PHM, we examined the degree to which background demographic and medical factors (including medical factors of both the FDR and the family member with melanoma), psychological factors, and social influence factors were associated with sun protection and sunbathing. Specific psychological factors we examined included sun protection benefits, sunscreen barriers, benefits of sunbathing, sunscreen self-efficacy, photo-aging concerns, perceived risk and severity of melanoma, and distress about melanoma. The social influence factors we examined included physician recommendation for sun protection, image norms for tanness (i.e., image norms for what is portrayed as attractive in the media), sun protection norms, and sunbathing norms. In addition, we examined whether knowledge variables (i.e., knowledge of sun-protection guidelines and knowledge about sunscreen and sun exposure) were associated with sun protection or sunbathing practices. Few previous studies have examined the association between knowledge and skin cancer prevention behaviors and results have been equivocal [13,26].\nThe current study had two aims.
The first aim was to evaluate demographic, medical, psychological, knowledge, and social influence correlates of sun protection and sunbathing practices among FDRs of melanoma patients. The second, exploratory aim was to examine whether there were unique correlates of sun protection and sunbathing practices. Specifically, we hypothesized that greater perceived sun protection benefits, sunscreen self-efficacy, photo-aging concerns, physician recommendation for sun protection, and sun protection norms would be associated with higher sun protection. In contrast, we hypothesized that greater perceived benefits of sunbathing, lower photo-aging concerns, greater image norms for tanness, and greater sunbathing norms would be associated with higher levels of sunbathing.", "Data for this study were drawn from the pre-intervention data of a randomized clinical trial evaluating the efficacy of two behavioral interventions to improve skin cancer surveillance and prevention among family members of patients with melanoma [27]. Participants were FDRs of patients recruited from the cutaneous oncology practices at three participating medical centers (Fox Chase Cancer Center, Moffitt Cancer Center, and the University of Pennsylvania Health Systems). Prospective participants were identified from tumor registries or medical records. IRB approval was received for each site. Physicians of record gave permission for their patients to be contacted. Sample recruitment began in February 2006 and ended in June 2009. Eligibility criteria for patients whose FDRs are the focus of this study included: a) newly diagnosed with cutaneous malignant melanoma (CMM) since 2001 but more than 3 months prior to being approached; b) seen at one of the three participating sites; c) greater than 18 years of age; d) English speaking; e) able to give meaningful informed consent; f) does not have a FDR with CMM (to exclude patients with familial melanoma syndrome). Patients who met these criteria were mailed a letter describing the study and subsequently contacted by telephone to determine eligibility. At this time, patients gave permission to contact all of their FDRs and for medical information to be obtained from their medical charts. The Institutional Review Boards at the three participating sites approved this study.\nNext, identified FDRs were mailed a letter describing the study. They were contacted by telephone and eligibility was determined. Eligibility criteria for FDRs were: a) at least 20 years of age; b) had not had a total cutaneous examination in the past three years, had done a skin self-examination three or fewer times in the past year, and had a sun protection habits mean score less than four out of five; c) one or more of the following additional risk factors: blonde or red hair, marked freckling on the upper back, history of three or more blistering sunburns prior to age 20, three or more years of an outdoor summer job as a teenager, or actinic keratosis (a precancerous skin condition); d) able to give meaningful informed consent; e) English speaking; f) has residential phone service; g) no personal history of CMM or non-melanoma skin cancer; h) no personal history of dysplastic nevi (abnormal moles); i) only one FDR with CMM. 
After written informed consent and HIPAA acknowledgement were obtained, a baseline telephone survey (Additional file 1) was completed.

Of the 3603 patients approached, 10.3% were ineligible (n = 370), 25.6% could not be located (n = 923), 35.6% refused (n = 1282), and 28.5% provided permission to contact their relatives (n = 1028). These 1028 patients provided 3013 FDR names (2.95 per patient). Of these 3013 FDRs, 43.9% were ineligible (n = 1324): 855 did not meet the sun or skin protection criteria, 419 had a skin cancer medical history, and 50 were excluded because of additional risk factors or being under the age of 20 years. Twenty percent could not be located (n = 603). Of the 1086 eligible and locatable FDRs identified, 541 refused (49.8%) and 545 (50.2%) enrolled.

A comparison of the 541 FDRs who refused with the 545 FDRs who participated on available demographic information indicated that participants were significantly older than refusers (t(759) = 11.5, p < .001; M participants = 46.3, SD = 13.3; M refusers = 31.2, SD = 28.7) and were more likely to be female (62.4% of participants vs. 46.5% of refusers; χ2(1, 1085) = 27.7, p < .001). Participants were also significantly more likely than refusers to be offspring of patients (56.1% vs. 31%).

Materials

For each of the multi-item scales assessing psychological and social influence factors, a scale score was created by averaging responses across the respective items. Additional information regarding the multi-item scales and their internal consistency is shown in Table 1, and all survey items are contained in an online appendix (Additional file 1).

Table 1. Internal Reliability, Sample Items, and Response Options for Multi-Item Scales

Demographics

Participants reported their age, sex, race/ethnicity, level of education, marital status, and their relation to the patient with melanoma (i.e., sibling, parent, or offspring).

Medical factors

Participants indicated whether they had any form of health insurance, whether they had visited a dentist in the past year, and the number of times they had visited a doctor in the past year. Questions also asked about five risk factors for melanoma (e.g., having blonde or red hair as a teenager, having three or more blistering sunburns before the age of 20); we created a total risk factor score by summing across the five items. For each melanoma patient, the disease stage at diagnosis and the length of time since diagnosis were abstracted from medical records.

Psychological factors
The measures of sun protection benefits and sunscreen barriers were taken from Jackson and Aiken [21]. Sun protection behavior benefits were assessed using a measure developed by Glanz and colleagues [28]. Measures of benefits of sunbathing, sunscreen self-efficacy (for which the items asked about confidence in using sunscreen in various situations), and photo-aging concerns were taken from Jackson and Aiken [29]. Four items assessed perceived risk of developing melanoma [13]. One of the four items asked participants to indicate their overall perceived risk of developing melanoma during their lifetime (rated from 0 = not at all likely to 100 = extremely likely); the remaining three items asked about different aspects of comparative perceived risk. Perceived severity of melanoma was assessed using a measure adapted from Aiken and colleagues [30]. Distress about melanoma was assessed with a single item ("How distressed are you currently about the diagnosis and treatment of your family member's melanoma?") with response options from 1 = not at all distressed to 5 = extremely distressed.

Knowledge

Two multiple-choice items asked about knowledge of sun protection guidelines (i.e., the recommended minimum sunscreen sun protection factor (SPF) to use when in the sun, and the recommended hours during the day when people are advised to limit sun exposure). We summed the number of correct responses to the two items. Knowledge about sunscreen and sun exposure was assessed with 11 true-false items drawn from previous research (e.g., "To work best, sunscreen needs to be applied a half-hour before you go outside") [13]. We summed the number of correct responses to the 11 items.

Social influence factors
Three items drawn from Manne et al. [13] assessed physician recommendations for sun protection. The items asked whether a doctor had ever told the participant to reduce the amount of time spent in the sun, wear a hat or long sleeves when in the sun, or use sunscreen regularly. Responses were summed across the three items. Measures of image norms for tanness (i.e., attitudes about tanness and paleness among celebrities), sun protection norms (i.e., sun protection practices and attitudes among friends and family), and sunbathing norms (i.e., sunbathing practices and attitudes among friends and family) were drawn from prior research [13,29].

Outcome variables: Sun protection behaviors and sunbathing

Sun protection behaviors were measured using a 5-item measure that asked about the frequency (from 1 = never to 5 = always) of engaging in the following behaviors when out in the sun for more than 30 minutes: using a sunscreen with an SPF of 15 or more, wearing a hat, wearing a shirt with long sleeves, staying in the shade, and wearing sunglasses [28]. Responses were averaged across the five items. Sunbathing was assessed with a single item that asked about the frequency (from 1 = never to 5 = always) of spending time in the sun to get a tan last summer.
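As an illustration of how these scores could be assembled, the SAS data step below sketches the scoring rules described above (averaging multi-item scales, summing correct knowledge responses, and counting endorsed items). All dataset and variable names are hypothetical and are not taken from the study materials; items are assumed to be pre-coded (e.g., knowledge items scored 0 = incorrect, 1 = correct).

  /* Minimal scoring sketch; dataset and variable names are hypothetical. */
  data fdr_scored;
    set fdr_raw;
    /* 5-item sun protection scale: average of the items (1 = never to 5 = always) */
    sun_protection  = mean(of sp_sunscreen sp_hat sp_sleeves sp_shade sp_sunglasses);
    /* Knowledge scores: number of correct responses (items pre-scored 0/1) */
    know_guidelines = sum(of kg1-kg2);          /* 2 multiple-choice items */
    know_sunscreen  = sum(of ks1-ks11);         /* 11 true-false items */
    /* Counts of endorsed items */
    md_recommend    = sum(of mdrec1-mdrec3);    /* physician recommendations */
    risk_factors    = sum(of risk1-risk5);      /* melanoma risk factors */
  run;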
Statistical Analyses
All statistical analyses were conducted using SAS (version 9.2), and a cutoff of p < .05 was used to determine statistical significance. The primary analyses consisted of a series of multiple regressions examining correlates of the two outcomes, sun protection behaviors and sunbathing. To account for the fact that some participants were members of the same family, all regression analyses were conducted using a generalized estimating equations (GEE) approach (PROC GENMOD in SAS) with the assumption of an exchangeable correlation matrix. Regression models for the sun protection behaviors measure were fit under the assumption of a normal distribution; because data from the sunbathing measure were positively skewed, all regression models for that outcome were fit under the assumption of a gamma distribution. The p values reported for the regression analyses are from type 3 tests of model effects. We used the following analytic approach, with sun protection behaviors and sunbathing as separate outcome variables in a series of GEE regression analyses. First, separately for each category of potential correlates (i.e., demographics, medical factors, psychological factors, knowledge, and social influence factors), we included all of the variables in that category as independent variables in a single regression model. Next, across all of the categories, the independent variables that were significantly associated with the outcome in the initial analyses were included together in a final regression model. There was no evidence of multicollinearity in any of the regression models.
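To make this modeling strategy concrete, the PROC GENMOD specifications below sketch the two GEE models. The dataset and variable names are hypothetical, the covariates shown are examples drawn from the final models reported below, and the log link for the gamma model is an assumption on our part, as the link functions are not stated here.

  /* Minimal GEE sketch; dataset and variable names are hypothetical. */
  proc genmod data=fdr_scored;
    class family_id;
    /* Sun protection outcome: normal distribution, identity link */
    model sun_protection = education benefits_sunbathing sunscreen_selfeff
                           photoaging_concerns sun_protection_norms
          / dist=normal link=identity type3;
    repeated subject=family_id / type=exch;   /* exchangeable correlation within families */
  run;

  proc genmod data=fdr_scored;
    class family_id sex;
    /* Sunbathing outcome: positively skewed, gamma distribution (log link assumed) */
    model sunbathing = age sex sunscreen_barriers benefits_sunbathing
                       image_norms sunbathing_norms
          / dist=gamma link=log type3;
    repeated subject=family_id / type=exch;
  run;

Because several participants could come from the same family, the REPEATED statement declares the family as the clustering unit, which is the dependence the GEE approach described above is intended to handle.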
Results

Descriptive Statistics

There were few missing data, with no more than eight individuals missing data for any one variable. Table 2 shows descriptive statistics for all of the study variables. With regard to the outcome variables, there was a small-to-moderate inverse correlation (rs = -.20, p < .001) between sun protection behaviors and sunbathing. Average levels of sun protection behaviors were close to the middle of the 5-point scale (M = 2.8, where 3 = sometimes), whereas average levels of sunbathing were relatively low (M = 1.9, where 2 = rarely).

Table 2. Descriptive Statistics for the Study Variables (N = 545)

Correlates of Sun Protection Behaviors

Among the demographic factors, education was positively associated with sun protection behaviors (parameter estimate [b] = 0.05, SE = 0.03, p = .049). The relation of the participant to the patient with melanoma was also significantly associated with sun protection behaviors (p = .041), such that siblings (b = -0.16, SE = 0.07) and parents (b = -0.25, SE = 0.11) engaged in fewer sun protection behaviors than did offspring of patients. None of the medical factors were significantly associated with sun protection behaviors (ps ≥ .086). For the psychological factors, higher sun protection behaviors were found among individuals reporting fewer benefits of sunbathing (b = -0.08, SE = 0.02, p = .001), greater sunscreen self-efficacy (b = 0.15, SE = 0.03, p < .001), and greater photo-aging concerns (b = 0.07, SE = 0.03, p = .039). With regard to the knowledge variables, individuals with greater knowledge about sunscreen and sun exposure (b = 0.04, SE = 0.02, p = .024) had higher sun protection behaviors; knowledge about sun protection guidelines (p = .153) was not associated with sun protection behaviors. Among the social influence factors, individuals reporting greater sun protection norms (b = 0.14, SE = 0.04, p < .001) or lower sunbathing norms (b = -0.07, SE = 0.03, p = .012) had higher sun protection behaviors.

A final regression model was tested in which all of the significant correlates from the preceding analyses were included as independent variables, with sun protection behaviors as the dependent variable. As shown in Table 3, the relation of the participant to the melanoma patient, knowledge about sunscreen and sun exposure, and sunbathing norms were not significantly associated with sun protection behaviors. Higher sun protection behaviors were found among individuals with more education, those reporting fewer benefits of sunbathing, greater sunscreen self-efficacy, greater photo-aging concerns, and greater sun protection norms.

Table 3. Results of Generalized Estimating Equations (GEE) Multiple Regression Analysis Examining Correlates of Sun Protection Behaviors. Note: a) Parameter estimates are unstandardized regression coefficients; b) p values are from type 3 tests of model effects.
Correlates of Sunbathing

Of the demographic factors examined, age (b = -0.01, SE = 0.002, p < .001) and sex (b = 0.16, SE = 0.05, p = .002) were significantly associated with sunbathing, with younger individuals and women reporting more sunbathing. The only medical factor significantly associated with sunbathing was visiting a dentist in the past year (b = 0.15, SE = 0.07, p = .039). Among the psychological factors, more frequent sunbathing was reported by those reporting fewer sunscreen barriers (b = -0.10, SE = 0.02, p < .001) and those reporting greater benefits of sunbathing (b = 0.22, SE = 0.02, p < .001). Neither of the knowledge variables was significantly associated with sunbathing (ps ≥ .180). Of the social influence factors examined, more frequent sunbathing was found among individuals with lower endorsement of image norms for tanness (b = -0.10, SE = 0.03, p < .001) and those with higher sunbathing norms (b = 0.28, SE = 0.02, p < .001). Neither physician recommendations for sun protection nor sun protection norms were significantly associated with sunbathing (ps ≥ .157).

All of the significant correlates from the preceding analyses were included as independent variables in a final model with sunbathing as the outcome variable. As shown in Table 4, with the exception of visiting a dentist in the past year, each correlate in the model was significantly associated with sunbathing. More frequent sunbathing was found among younger individuals, women, those reporting fewer sunscreen barriers, those reporting greater benefits of sunbathing, and those with lower endorsement of image norms for tanness or higher sunbathing norms.

Table 4. Results of Generalized Estimating Equations (GEE) Multiple Regression Analysis Examining Correlates of Sunbathing. Note: a) Parameter estimates are unstandardized regression coefficients; b) p values are from type 3 tests of model effects.
Discussion

Results indicated that demographic, psychological, and social influence factors contributed to sun protection and sunbathing among close family members who were not compliant with sun protection or other skin surveillance practices. Relatives who reported higher sun protection practices were more educated, endorsed fewer benefits of sunbathing, reported greater sunscreen self-efficacy, had greater concerns about the effects of UV exposure on photo-aging, and perceived stronger sun protection norms. FDRs who reported more sunbathing were younger, more likely to be female, endorsed fewer barriers to using sunscreen, perceived more benefits of sunbathing, endorsed lower image norms for tanness, and reported higher sunbathing norms. Several medical, psychological, knowledge, and social factors were not associated with either sun protection or sunbathing. Overall, findings were consistent with previous literature as well as with the conceptual framework guiding this work, and they were relatively consistent with our exploratory hypotheses regarding the unique factors associated with sun protection or sunbathing. It is interesting to note that, although we selected our participants based upon low levels of sun protection and skin surveillance behaviors, the levels of sunbathing in our sample were relatively low and were lower than the rates of sunbathing [11,16] and sunburn [15,16] reported in previous studies.
In the discussion that follows, we consider how the results of the current study extend what is known about correlates of sun protection and sunbathing among family members, and we address the clinical and research implications of the findings.

Given that the study focused on close relatives of individuals with melanoma, it is noteworthy that characteristics of the patient's disease, such as stage and time since diagnosis, as well as attitudinal variables typically associated with the severity of cancer, such as distress about the proband's melanoma, perceived disease severity, and perceived risk, were not associated with sun protection or sunbathing. The fact that disease characteristics were not associated with sun protection is consistent with our previous study of family members of melanoma patients [13] as well as with prior work with family members of colorectal cancer patients [31]. With regard to disease severity, perceived risk, and distress about the proband's cancer, our results are also consistent with our previous research [13]. These findings suggest that family members may not be influenced to alter sun protection or exposure by the severity of the patient's cancer or by their own melanoma risk. However, it is possible that the lack of association between these factors and relatives' behavior arose because relatives were not aware of important facts about melanoma, the proband and relative never having had an in-depth discussion of the topic; in such a discussion, the proband would likely describe the cancer in more detail, including the level of risk conferred upon the family member. Family communication has been linked with engagement in cancer screening practices among family members at increased cancer risk (e.g., [32-34]). For similar reasons, it is also possible that the closeness of the relationship with the proband would have had a stronger association with sun protection and sunbathing practices than severity, risk, and distress about the proband's cancer, as this variable has been associated with other types of cancer risk reduction behavior [30,35]; unfortunately, this measure was not included in this study. Without a qualitative examination of each family's communication about melanoma risk, it is difficult to conclude why these variables were not associated with sun protection and sunbathing practices.

Consistent with previous research, older age was associated with less sunbathing [23,26]. It is interesting that physician recommendation for sun protection was not associated with sun protection or sunbathing, which is not consistent with previous work evaluating correlates of sun protection among family members of patients with melanoma [13]. It is possible that this population of family members had not had contact with a dermatologist, and thus there was less opportunity for a dermatologist to influence the adoption of sun protection practices. The other social influence factors we examined were varying types of norms: sun protection norms were associated with sun protection, and sunbathing norms were associated with sunbathing. These findings suggest that peers' attitudes and behaviors may be more important than expert recommendations. Consistent with our expectations, sunscreen self-efficacy was associated with sun protection but not sunbathing.
In line with previous research, greater perceived benefits of sunbathing and higher perceptions that family and friends engage in tanning behaviors were associated with greater sunbathing [29]. However, greater endorsement of positive image norms for tanness was associated with a lower frequency of sunbathing, which is opposite to the effect found in prior research [29]. One factor that might account for these discrepant findings is that our sample included both men and women and was older and at higher risk for skin cancer than the mostly female, college-aged samples studied previously. It is possible that perceptions of societal standards of attractiveness are more influential in personal choices among younger women than among older samples of both genders and among individuals at increased risk for melanoma. In addition, future studies should attempt to distinguish the role of perceptions of societal values from the role of agreement with those values; the present measure did not separate perceptions of values from endorsement of them. Participants who reported having fewer barriers to using sunscreen engaged in more sunbathing. It is possible that individuals who sunbathe are generally more likely to use sunscreen because they are going to tan, and thus they report fewer barriers to using it [36-39]. This may also be more likely to be the case among middle-aged and older individuals than among college women.

Study Strengths and Limitations

The strengths of this study include the large sample size, the focus on family members, the focus on high-risk individuals who did not engage in regular sun protection and skin surveillance, the inclusion of sunbathing as an outcome, and the inclusion of previously unstudied correlates of behavior such as the medical status of the affected family member and the level of psychological distress about the affected family member's cancer. This study is also one of few to focus on an older sample of men and women.

There are several study limitations. The cross-sectional methodology precludes the ability to infer causal relationships. The sample was comprised of relatively well-educated and married individuals, and almost half the sample was comprised of patients' offspring. Female and older relatives were more likely to participate. It is not known whether the levels and correlates of sun protection and sunbathing would have differed with a more heterogeneous sample. It is also not known whether the patients who provided family member names differed from those patients whom we were not able to contact or who declined to provide family member names.

Implications

This study extends what is known about sun protection and sunbathing from previous work conducted on average-risk populations to a population of high-risk individuals. Although caution should be used when drawing on cross-sectional results to guide interventions, these results provide information regarding the factors that might be targeted in future interventions to address sun protection and sunbathing in this population. In terms of implications for interventions to improve sun protection for at-risk family members, self-efficacy for using sunscreen could be highlighted by discussing recent developments in sunscreen manufacturing and marketing, including the fact that SPF 15 or higher has been incorporated into many daily-use skin products such as moisturizers and that sunscreens can be sprayed on and can be purchased in unscented, non-greasy versions. Because our data suggest that men are more likely to consider sunscreen a hassle and a nuisance and not to endorse the preventive influence sunscreen has on the cosmetic effects of aging (unpublished data), future studies may need to employ qualitative methods to identify strategies for increasing positive perceptions of sunscreen. Emphasizing the detrimental cosmetic and photo-aging effects of sun exposure through appearance-based materials, such as age-progressed pictures of the family member, may also prove beneficial. Overall, interventions to reduce sunbathing among FDRs of patients with melanoma should attempt to counteract both the perceived benefits of sunbathing and the normative influences of family and friends to sunbathe. Emphasis should also be placed on reasons why sunbathing should be avoided (e.g., sunscreen is not 100% effective), and interventions should target younger family members by emphasizing the aging effects of sunbathing on the skin. In view of the evidence indicating that the correlates of sun protection and sunbathing are not the same, interventions may be more effective if they include separate components to address sun protection and sunbathing behaviors. Finally, because health care professionals did not appear to influence sun protection and sunbathing, general practitioners should ask about a family history of skin cancer and refer these individuals to a dermatologist. In view of the rising incidence of melanoma, the development and testing of such interventions is an important public health issue.

In terms of recommendations for future research, we found it more difficult to recruit younger and male relatives into the study. Recruitment materials and more intensive recruitment efforts targeted towards younger relatives and men, as well as educating melanoma probands about ways to facilitate the participation of their younger and male relatives, may yield higher uptake among these groups. Previous research has suggested that individuals with a family history of melanoma are more likely to speak to their female relatives about melanoma [32]; it is therefore possible that a greater proportion of male and younger relatives will participate in future research if family communication to male relatives is fostered.
Conclusions

Demographic, psychological, and social influence factors contributed to sun protection and sunbathing practices among melanoma patients' close family members who were not compliant with sun protection or other skin surveillance practices. Less educated and female relatives were less compliant with recommended practices and may benefit from targeted interventions to improve their sun protection and sun exposure practices. Attitudinal factors such as concerns about photo-aging and the perceived benefits of sunbathing were key, and the sun protection and tanning practices of family, friends, and celebrities also played a role. Additionally, attitudes toward sunscreen use, including self-efficacy and perceived barriers, contributed to skin protection and sunbathing practices, respectively. These findings suggest that behavioral interventions to improve these practices may be more effective if they target less educated and female relatives as well as the attitudes and social influences that contribute to low levels of sun protection and sun avoidance in this population of at-risk family members.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

SLM conceived the study, coordinated data collection, assisted in data analyses, and was the primary author of the manuscript. EJC designed the study, conducted data analyses, and assisted in writing the manuscript. PBJ participated in the design of the study, coordinated data collection, and assisted with writing the manuscript. MM coordinated data collection and assisted with writing the manuscript. CJH assisted with the data interpretation and assisted in writing the manuscript. SL participated in the initial design of the study and coordinated data collection. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/11/122/prepub
Eligibility criteria for FDRs were: a) at least 20 years of age; b) had not had a total cutaneous examination in the past three years, had done a skin self-examination three or fewer times in the past year, and had a sun protection habits mean score less than four out of five; c) one or more of the following additional risk factors: blonde or red hair, marked freckling on the upper back, history of three or more blistering sunburns prior to age 20, three or more years of an outdoor summer job as a teenager, or actinic keratosis (a precancerous skin condition); d) able to give meaningful informed consent; e) English speaking; f) has residential phone service; g) no personal history of CMM or non-melanoma skin cancer; h) no personal history of dysplastic nevi (abnormal moles); i) only one FDR with CMM. After written informed consent and HIPAA acknowledgement, a baseline telephone survey (Additional file 1) was completed.\nOf the 3603 patients approached, 10.3% were ineligible (n = 370), 25.6% could not be located (n = 923), 35.6% refused (n = 1282), and 28.5% of patients provided permission to contact their relatives (n = 1028). These 1028 patients provided 3013 FDR names (2.95 per patient). Of these 3013, 43.9% were ineligible (n = 1324). Eight hundred fifty-five FDRs were ineligible because they did not meet sun or skin protection criteria, 419 were ineligible because of skin cancer medical history, and 50 were ineligible due to additional risk factors or being under the age of 20 years. Twenty percent could not be located (n = 603). Of the 1086 eligible and locatable FDRs identified, 541 refused (49.8%) and 545 (50.2%) enrolled.\nA comparison between the 541 FDRs who refused the study with the 545 FDR participants on available demographic information indicated that participants were significantly older than refusers (t (759) = 11.5, p < .001; M participants = 46.3, SD = 13.3, M refusers = 31.2, SD = 28.7) and that participants were more likely to be female (Percentage female participants = 62.4%; Percentage female refusers = 46.5%; χ2 (1, 1085) = 27.7, p < .001). Participants were also significantly more likely to be offspring of patients (56.1%) than refusers (31%).\n[SUBTITLE] Materials [SUBSECTION] For each of the multi-item scales assessing psychological factors and social influence factors, a scale score was created by averaging responses across the respective items. Additional information regarding the multi-item scales and internal consistency for these scales are shown in Table 1, and all survey items are contained in an online appendix (Additional file 1).\nInternal Reliability, Sample Items, and Response Options for Multi-Item Scales\n[SUBTITLE] Demographics [SUBSECTION] Participants reported their age, sex, race/ethnicity, level of education, marital status, and their relation to the patient with melanoma (i.e., sibling, parent, or offspring).\nParticipants reported their age, sex, race/ethnicity, level of education, marital status, and their relation to the patient with melanoma (i.e., sibling, parent, or offspring).\n[SUBTITLE] Medical factors [SUBSECTION] Participants indicated whether they had any form of health insurance, if they had visited a dentist in the past year, and the number of times they had visited a doctor in the past year. Questions also asked about five risk factors for melanoma (e.g., having blonde or red hair as a teenager, having three or more blistering sunburns before the age of 20); we created a total risk factor score by summing across the five items. 
For each melanoma patient, the disease stage at diagnosis and length of time since diagnosis was abstracted from medical records.\nParticipants indicated whether they had any form of health insurance, if they had visited a dentist in the past year, and the number of times they had visited a doctor in the past year. Questions also asked about five risk factors for melanoma (e.g., having blonde or red hair as a teenager, having three or more blistering sunburns before the age of 20); we created a total risk factor score by summing across the five items. For each melanoma patient, the disease stage at diagnosis and length of time since diagnosis was abstracted from medical records.\n[SUBTITLE] Psychological factors [SUBSECTION] The measures of sun protection benefits and sunscreen barriers were taken from Jackson and Aiken [21]. Sun protection behavior benefits were assessed using a measure developed by Glanz and colleagues [28]. Measures of benefits of sunbathing, sunscreen self-efficacy (for which the items asked about confidence in using sunscreen in various situations), and photo-aging concerns were taken from Jackson and Aiken [29]. Four items assessed perceived risk of developing melanoma [13]. One of the four items asked participants to indicate their overall perceived risk of developing melanoma during their lifetime (rated from 0 = not at all likely to 100 = extremely likely). The remaining three items asked about different aspects of comparative perceived risk. Perceived severity of melanoma was assessed using a measure adapted from Aiken and colleagues [30]. Distress about melanoma was assessed with a single item (\"How distressed are you currently about the diagnosis and treatment of your family member's melanoma\"?) with response options from 1 = not at all distressed to 5 = extremely distressed.\nThe measures of sun protection benefits and sunscreen barriers were taken from Jackson and Aiken [21]. Sun protection behavior benefits were assessed using a measure developed by Glanz and colleagues [28]. Measures of benefits of sunbathing, sunscreen self-efficacy (for which the items asked about confidence in using sunscreen in various situations), and photo-aging concerns were taken from Jackson and Aiken [29]. Four items assessed perceived risk of developing melanoma [13]. One of the four items asked participants to indicate their overall perceived risk of developing melanoma during their lifetime (rated from 0 = not at all likely to 100 = extremely likely). The remaining three items asked about different aspects of comparative perceived risk. Perceived severity of melanoma was assessed using a measure adapted from Aiken and colleagues [30]. Distress about melanoma was assessed with a single item (\"How distressed are you currently about the diagnosis and treatment of your family member's melanoma\"?) with response options from 1 = not at all distressed to 5 = extremely distressed.\n[SUBTITLE] Knowledge [SUBSECTION] Two multiple-choice items asked about knowledge of sun protection guidelines (i.e., the recommended minimum level of sunscreen sun protection factor (SPF) to use when in the sun, and the recommended hours during the day when people are advised to limit sun exposure). We summed the number of correct responses to the two items. Knowledge about sunscreen and sun exposure was assessed with 11 true-false items drawn from previous research (e.g., \"To work best, sunscreen needs to be applied a half-hour before you go outside\") [13]. 
We summed the number of correct responses to the 11 items.\nTwo multiple-choice items asked about knowledge of sun protection guidelines (i.e., the recommended minimum level of sunscreen sun protection factor (SPF) to use when in the sun, and the recommended hours during the day when people are advised to limit sun exposure). We summed the number of correct responses to the two items. Knowledge about sunscreen and sun exposure was assessed with 11 true-false items drawn from previous research (e.g., \"To work best, sunscreen needs to be applied a half-hour before you go outside\") [13]. We summed the number of correct responses to the 11 items.\n[SUBTITLE] Social influence factors [SUBSECTION] Three items drawn from Manne et al. [13] assessed physician recommendations for sun protection. The items asked whether a doctor had ever told the participant to reduce the amount of time spent in the sun, wear a hat or long sleeves when in the sun, or to use sunscreen regularly. Responses were summed across the three items. Measures of image norms for tanness (i.e., attitudes about tanness and paleness among celebrities), sun protection norms (i.e., sun protection practices and attitudes among friends and family), and sunbathing norms (i.e., sunbathing practices and attitudes among friends and family) were drawn from prior research [13,29].\nThree items drawn from Manne et al. [13] assessed physician recommendations for sun protection. The items asked whether a doctor had ever told the participant to reduce the amount of time spent in the sun, wear a hat or long sleeves when in the sun, or to use sunscreen regularly. Responses were summed across the three items. Measures of image norms for tanness (i.e., attitudes about tanness and paleness among celebrities), sun protection norms (i.e., sun protection practices and attitudes among friends and family), and sunbathing norms (i.e., sunbathing practices and attitudes among friends and family) were drawn from prior research [13,29].\n[SUBTITLE] Outcome variables: Sun protection behaviors and sunbathing [SUBSECTION] Sun protection behaviors were measured using a 5-item measure that asked about the frequency (from 1 = never to 5 = always) of engaging in the following behaviors when out in the sun for more than 30 minutes: using a sunscreen with an SPF of 15 or more, wearing a hat, wearing a shirt with long sleeves, staying in the shade, and wearing sunglasses [28]. Responses were averaged across the five items. Sunbathing was assessed with a single item that asked about the frequency (from 1 = never to 5 = always) of spending time in the sun to get a tan last summer.\nSun protection behaviors were measured using a 5-item measure that asked about the frequency (from 1 = never to 5 = always) of engaging in the following behaviors when out in the sun for more than 30 minutes: using a sunscreen with an SPF of 15 or more, wearing a hat, wearing a shirt with long sleeves, staying in the shade, and wearing sunglasses [28]. Responses were averaged across the five items. Sunbathing was assessed with a single item that asked about the frequency (from 1 = never to 5 = always) of spending time in the sun to get a tan last summer.\nFor each of the multi-item scales assessing psychological factors and social influence factors, a scale score was created by averaging responses across the respective items. 
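Because the measures described above are scored by averaging multi-item scales (e.g., the five sun protection items), summing correct knowledge responses, and counting endorsed risk factors, a brief sketch of the scoring arithmetic may be useful. The snippet below is illustrative only, not the authors' code; all column names and example values are hypothetical, and only a subset of items is mocked up.

```python
import pandas as pd

# Hypothetical survey responses; column names are illustrative, not from the study.
df = pd.DataFrame({
    "sunprot_1": [5, 2], "sunprot_2": [4, 1], "sunprot_3": [3, 2],
    "sunprot_4": [5, 1], "sunprot_5": [4, 3],            # 1 = never ... 5 = always
    "know_1": [1, 0], "know_2": [1, 1],                   # 1 = correct, 0 = incorrect
    "risk_redhair": [1, 0], "risk_freckles": [0, 1], "risk_sunburns": [1, 1],
    "risk_outdoorjob": [0, 0], "risk_keratosis": [0, 1],  # endorsed risk factors
    "sunbathe": [1, 4],                                    # single item, 1 = never ... 5 = always
})

# Multi-item scale score: mean of the five sun protection items.
sun_items = [f"sunprot_{i}" for i in range(1, 6)]
df["sun_protection"] = df[sun_items].mean(axis=1)

# Knowledge score: count of correct responses (only two mock items shown here).
df["knowledge"] = df[["know_1", "know_2"]].sum(axis=1)

# Melanoma risk factor score: count of endorsed risk factors.
risk_items = ["risk_redhair", "risk_freckles", "risk_sunburns",
              "risk_outdoorjob", "risk_keratosis"]
df["risk_factors"] = df[risk_items].sum(axis=1)

print(df[["sun_protection", "knowledge", "risk_factors", "sunbathe"]])
```

The same pattern, a row-wise mean for averaged scales and a row-wise sum for counts, applies to the other multi-item scales listed in Table 1.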
[SUBTITLE] Statistical Analyses [SUBSECTION] All statistical analyses were conducted using SAS (version 9.2), and a cutoff of p < .05 was used to determine statistical significance. The primary analyses consisted of a series of multiple regressions to examine correlates of the two outcomes, sun protection behaviors and sunbathing. In order to account for the fact that some participants were members of the same family, all of the regression analyses were conducted using a generalized estimating equations (GEE) approach (PROC GENMOD in SAS), with the assumption of an exchangeable correlation matrix. Regression models for the sun protection behaviors measure were fit under the assumption of a normal distribution. Data from the sunbathing measure were positively skewed, and thus all regression models for that outcome were fit under the assumption of a gamma distribution. The p values reported for the regression analyses are from type 3 tests of model effects. We used the following analytic approach with sun protection behaviors and sunbathing as separate outcome variables in a series of GEE regression analyses. First, separately for each category of potential correlates (i.e., demographics, medical factors, psychological factors, knowledge, and social influence factors), we included all of the variables in that category as independent variables in a single regression model. Next, across all of the categories, the independent variables that were significantly associated with the outcome in the initial analyses were included together in a final regression model. There was no evidence of multicollinearity for any of the regression models.
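The modelling strategy just described (GEE with an exchangeable working correlation to handle relatives clustered within families, a normal model for sun protection, a gamma model for the skewed sunbathing outcome, and category-wise screening followed by a combined final model) was implemented in SAS PROC GENMOD. The sketch below shows a rough analogue in Python's statsmodels rather than SAS, so it is an approximation under stated assumptions, not the authors' code: the data file, variable names, and family identifier are hypothetical, and the Gamma link is left at the library default because the paper does not specify it.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per FDR, with a family_id shared by
# relatives of the same melanoma patient, plus scored predictors and outcomes.
df = pd.read_csv("fdr_baseline.csv")

def fit_gee(formula, family):
    """GEE analogue of PROC GENMOD with an exchangeable working correlation,
    clustering relatives within families."""
    model = smf.gee(
        formula,
        groups="family_id",
        data=df,
        cov_struct=sm.cov_struct.Exchangeable(),
        family=family,
    )
    return model.fit()

# Stage 1: screen each category of correlates in its own model, e.g. the
# psychological factors block for the sun protection outcome (normal model).
psych_block = fit_gee(
    "sun_protection ~ sunbathing_benefits + sunscreen_efficacy + photoaging_concerns",
    sm.families.Gaussian(),
)
print(psych_block.summary())

# Stage 2: carry the significant predictors from every category forward
# into one final model per outcome.
final_protection = fit_gee(
    "sun_protection ~ education + sunbathing_benefits + sunscreen_efficacy"
    " + photoaging_concerns + protection_norms",
    sm.families.Gaussian(),
)

# The positively skewed sunbathing outcome is modelled with a Gamma family
# (link left at the library default, since the paper does not specify one).
final_sunbathing = fit_gee(
    "sunbathing ~ age + female + sunscreen_barriers + sunbathing_benefits"
    " + image_norms + sunbathing_norms",
    sm.families.Gamma(),
)
print(final_protection.summary())
print(final_sunbathing.summary())
```

In statsmodels, the Wald tests on fitted coefficients play the role of the type 3 tests reported from PROC GENMOD, so p values from this sketch would not be expected to match the SAS output exactly.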
[SUBTITLE] Descriptive Statistics [SUBSECTION] There were few missing data, with no more than eight individuals missing data for any one variable. Table 2 shows descriptive statistics for all of the study variables. With regard to the outcome variables, there was a small-to-moderate inverse correlation (rs = -.20, p < .001) between sun protection behaviors and sunbathing.
Average levels of sun protection behaviors were close to the middle of the 5-point scale (M = 2.8, where 3 = sometimes), whereas average levels of sunbathing were relatively low (M = 1.9, where 2 = rarely).\nDescriptive Statistics for the Study Variables\nNote: N = 545\n[SUBTITLE] Correlates of Sun Protection Behaviors [SUBSECTION] Among the demographic factors, education was positively associated with sun protection behaviors (parameter estimate [b] = 0.05, SE = 0.03, p = .049). The relation of the participant to the patient with melanoma was also significantly associated with sun protection behaviors (p = .041), such that siblings (b = -0.16, SE = 0.07) and parents (b = -0.25, SE = 0.11) engaged in fewer sun protection behaviors than did offspring of patients. None of the medical factors were significantly associated with sun protection behaviors (ps ≥ .086). For the psychological factors, higher sun protection behaviors were found among individuals reporting fewer benefits of sunbathing (b = -0.08, SE = 0.02, p = .001), greater sunscreen self-efficacy (b = 0.15, SE = 0.03, p < .001), and greater photo-aging concerns (b = 0.07, SE = 0.03, p = .039). With regard to the knowledge variables, individuals with greater knowledge about sunscreen and sun exposure (b = 0.04, SE = 0.02, p = .024) had higher sun protection behaviors. Knowledge about sun protection guidelines (p = .153) was not associated with sun protection behaviors. Among the social influence factors, individuals reporting greater sun protection norms (b = 0.14, SE = 0.04, p < .001) or lower sunbathing norms (b = -0.07, SE = 0.03, p = .012) had higher sun protection behaviors.\nA final regression model was tested in which all of the significant correlates from the preceding analyses were included as independent variables, with sun protection behaviors as the dependent variable. As shown in Table 3, the relation of the participant to the melanoma patient, knowledge about sunscreen and sun exposure, and sunbathing norms were not significantly associated with sun protection behaviors. Higher sun protection behaviors were found among individuals with more education, individuals reporting fewer benefits of sunbathing, greater sunscreen self-efficacy, greater photo-aging concerns, and greater sun protection norms.\nResults of Generalized Estimating Equations (GEE) Multiple Regression Analysis Examining Correlates of Sun Protection Behaviors\nNote: a Parameter estimates are unstandardized regression coefficients. b p values are from type 3 tests of model effects.\nAmong the demographic factors, education was positively associated with sun protection behaviors (parameter estimate [b] = 0.05, SE = 0.03, p = .049). The relation of the participant to the patient with melanoma was also significantly associated with sun protection behaviors (p = .041), such that siblings (b = -0.16, SE = 0.07) and parents (b = -0.25, SE = 0.11) engaged in fewer sun protection behaviors than did offspring of patients. None of the medical factors were significantly associated with sun protection behaviors (ps ≥ .086). For the psychological factors, higher sun protection behaviors were found among individuals reporting fewer benefits of sunbathing (b = -0.08, SE = 0.02, p = .001), greater sunscreen self-efficacy (b = 0.15, SE = 0.03, p < .001), and greater photo-aging concerns (b = 0.07, SE = 0.03, p = .039). 
With regard to the knowledge variables, individuals with greater knowledge about sunscreen and sun exposure (b = 0.04, SE = 0.02, p = .024) had higher sun protection behaviors. Knowledge about sun protection guidelines (p = .153) was not associated with sun protection behaviors. Among the social influence factors, individuals reporting greater sun protection norms (b = 0.14, SE = 0.04, p < .001) or lower sunbathing norms (b = -0.07, SE = 0.03, p = .012) had higher sun protection behaviors.\nA final regression model was tested in which all of the significant correlates from the preceding analyses were included as independent variables, with sun protection behaviors as the dependent variable. As shown in Table 3, the relation of the participant to the melanoma patient, knowledge about sunscreen and sun exposure, and sunbathing norms were not significantly associated with sun protection behaviors. Higher sun protection behaviors were found among individuals with more education, individuals reporting fewer benefits of sunbathing, greater sunscreen self-efficacy, greater photo-aging concerns, and greater sun protection norms.\nResults of Generalized Estimating Equations (GEE) Multiple Regression Analysis Examining Correlates of Sun Protection Behaviors\nNote: a Parameter estimates are unstandardized regression coefficients. b p values are from type 3 tests of model effects.\n[SUBTITLE] Correlates of Sunbathing [SUBSECTION] Of the demographic factors examined, age (b = -0.01, SE = 0.002, p < .001) and sex (b = 0.16, SE = 0.05, p = .002) were significantly associated with sunbathing, with younger individuals and women reporting more sunbathing. The only medical factor that was significantly associated with sunbathing was visiting a dentist in the past year (b = 0.15, SE = 0.07, p = .039). In the analysis examining the association between the psychological factors and sunbathing, more frequent sunbathing was reported by those reporting fewer sunscreen barriers (b = -0.10, SE = 0.02, p < .001) and those reporting greater benefits of sunbathing (b = 0.22, SE = 0.02, p < .001). Neither of the knowledge variables was significantly associated with sunbathing (ps ≥ .180). Of the social influence factors examined, more frequent sunbathing was found among individuals with lower endorsement of image norms for tanness (b = -0.10, SE = 0.03, p < .001) and those with higher sunbathing norms (b = 0.28, SE = 0.02, p < .001). Neither physician recommendations for sun protection nor sun protection norms were significantly associated with sunbathing (ps ≥ .157).\nAll of the significant correlates from the preceding analyses were included as independent variables in a final model with sunbathing as the outcome variable. As shown in Table 4, with the exception of visiting a dentist in the past year, each correlate in the model was significantly associated with sunbathing. More frequent sunbathing was found among younger individuals, women, those reporting fewer sunscreen barriers, individuals reporting greater benefits of sunbathing, and those with lower endorsement of image norms for tanness or higher sunbathing norms.\nResults of Generalized Estimating Equations (GEE) Multiple Regression Analysis Examining Correlates of Sunbathing\nNote: a Parameter estimates are unstandardized regression coefficients. 
b p values are from type 3 tests of model effects.", "Results indicated that demographic, psychological, and social influence factors contributed to sun protection and sunbathing among close family members who are not compliant with sun protection or other skin surveillance practices. Relatives who reported higher sun protection practices were more educated, endorsed fewer benefits of sunbathing, greater sunscreen self-efficacy, had greater concerns about the effects of UV on photo-aging, and greater perceptions of sun protection norms. FDRs who reported more sunbathing were younger, more likely to be female, endorsed fewer barriers to using sunscreen, perceived more benefits of sunbathing, lower image norms for tanness, and endorsed higher sunbathing norms. Several medical, psychological, knowledge, and social factors were not associated with either sun protection or sunbathing. Overall, findings were consistent with previous literature as well as with the conceptual framework guiding this work. The results were relatively consistent with our exploratory hypotheses regarding the unique factors associated with sun protection or sunbathing. It is interesting to note that, although we selected our participants based upon low levels of sun protection and skin surveillance behaviors, the levels of sunbathing in our sample were relatively low and comparatively lower than rates of sunbathing [11,16] and sunburn [15,16] reported in previous studies. In the discussion that follows, we consider how the results of the current study extend what is known about correlates of sun protection and sunbathing among family members, and we also address clinical and research implications of the findings.\nGiven that the study focused on close relatives of individuals with melanoma, it is noteworthy that characteristics of the patient's disease, such as stage and time since diagnosis as well as attitudinal variables typically associated with the severity of cancer such as distress about the proband's melanoma, disease severity, and perceived risk, were not associated with sun protection or sunbathing. The fact that disease characteristics were not associated with sun protection is consistent with our previous study of family members of melanoma patients [13] as well as prior work with family members of colorectal cancer patients [31]. With regard to disease severity, perceived risk, and distress about the proband's cancer, our results are also consistent with our previous research [13]. These findings suggest that family members may not be influenced to alter sun protection or exposure by the severity of the patient's cancer or their own melanoma risk. However, it is possible that the lack of association between all of these factors and relatives' behavior is due to the fact that they were not aware of important facts about melanoma because the proband and relative did not have an in-depth discussion about this topic. During this discussion, it is likely the proband would discuss the cancer in more detail in terms of the level of risk conferred upon the family member. Family communication has been linked with engagement in cancer screening practices among family members at increased cancer risk (e.g., [32-34]). 
For similar reasons, it is also possible that the closeness of the relationship with the proband would have had a stronger association with sun protection and sunbathing practices than severity, risk, and distress about the proband's cancer, as this variable has been associated with other types of cancer risk reduction behavior [30,35]. Unfortunately, this measure was not included in this study. Without a qualitative examination of each family's communication about melanoma risk, it is difficult to conclude why these variables were not associated with sun protection and sunbathing practices.\nConsistent with previous research older age was associated with less sunbathing [23,26]. It is interesting that physician recommendation for sun protection was not associated with sun protection or sunbathing, which is not consistent with previous work evaluating correlates of sun protection among family members of patients with melanoma [13]. It is possible that this population of family members had not had contact with a dermatologist and thus there was less opportunity for a dermatologist to influence the adoption of sun protection practices. The other social influence factors we examined were varying types of norms. Sun protection norms were associated with sun protection and sunbathing norms were associated with sunbathing behavior. These findings suggest that peers' attitudes and behaviors may be more important than expert recommendations. Consistent with our expectations, sunscreen self-efficacy was associated with sun protection but not sunbathing.\nIn line with previous research, greater perceived benefits of sunbathing and higher perceptions that family and friends engage in tanning behaviors were associated with greater sunbathing [29]. However, a greater endorsement of positive image norms for tanness was associated with a lower frequency of sunbathing, which is opposite to the effect found in prior research [29]. One factor that might account for these discrepant findings is that our sample included both men and women and was older and at higher risk for skin cancer than the mostly female, college-aged samples studied previously. It is possible that perceptions of societal standards of attractiveness are more influential in personal choices among younger women as compared with older samples comprised of both genders, as well as among individuals at increased risk for melanoma. In addition, future studies should attempt to distinguish the role of perceptions of societal values versus the role of agreement with those values. The present measure did not separate perceptions of values from endorsement of them. Participants who reported having fewer barriers to using sunscreen engaged in more sunbathing. It is possible that individuals who sunbathe are generally more likely to use sunscreen because they are going to tan and thus they report fewer barriers to using it [36-39]. This may also be more likely to be the case among middle-aged and older individuals than among college women.\n[SUBTITLE] Study Strengths and Limitations [SUBSECTION] The strengths of this study include the large sample size, the focus on family members, the focus on high risk individuals who did not engage in regular sun protection and skin surveillance, the inclusion of sunbathing as an outcome, and the inclusion of previously unstudied correlates of behavior such as the medical status of the affected family member and the level of psychological distress about the affected family member's cancer. 
This study is also one of few to focus on an older sample of men and women.\nThere are several study limitations. The cross-sectional methodology precludes the ability to infer causal relationships. The sample was comprised of relatively well-educated and married individuals, and almost half the sample was comprised of patients' offspring. Female and older relatives were more likely to participate. It is not known whether levels and correlates of sun protection and sunbathing would have differed with a more heterogeneous sample. It is also not known whether the patients who provided family member names differed from those patients who we were not able to contact or who declined to provide family member names.\nThe strengths of this study include the large sample size, the focus on family members, the focus on high risk individuals who did not engage in regular sun protection and skin surveillance, the inclusion of sunbathing as an outcome, and the inclusion of previously unstudied correlates of behavior such as the medical status of the affected family member and the level of psychological distress about the affected family member's cancer. This study is also one of few to focus on an older sample of men and women.\nThere are several study limitations. The cross-sectional methodology precludes the ability to infer causal relationships. The sample was comprised of relatively well-educated and married individuals, and almost half the sample was comprised of patients' offspring. Female and older relatives were more likely to participate. It is not known whether levels and correlates of sun protection and sunbathing would have differed with a more heterogeneous sample. It is also not known whether the patients who provided family member names differed from those patients who we were not able to contact or who declined to provide family member names.\n[SUBTITLE] Implications [SUBSECTION] This study extends what is known about sun protection and sunbathing from previous work conducted on average risk populations to a population of high risk individuals. Although caution should be used in using cross-sectional results to guide interventions, these results provide information regarding the factors that might be focused on in future interventions to address sun protection and sunbathing in this population. In terms of implications for interventions to improve sun protection for at-risk family members, self-efficacy for using sunscreen could be highlighted by discussing recent developments in sunscreen manufacturing and marketing. These include the fact that SPF 15 or higher has been incorporated into many daily-use skin products such as moisturizers and that sunscreens can be sprayed on and can be purchased in unscented, non-greasy versions. Because our data suggest that men are more likely to consider sunscreen a hassle and a nuisance and not endorse the preventive influence sunscreen has on cosmetic effects of aging (unpublished data), future studies may need to employ qualitative methods to identify strategies for increasing positive perceptions of sunscreen. Emphasizing detrimental cosmetic and photo-aging effects of sun exposure through appearance-based materials, such as age-progressed pictures of the family member, may also prove beneficial. Overall, interventions to reduce sunbathing among FDRs of patients with melanoma should attempt to counteract both perceived benefits of sunbathing and normative influences of family and friends to sunbathe. 
Emphasis should also be placed on reasons why sunbathing should be avoided (e.g., sunscreen is not 100% effective) and should target younger family members by emphasizing the aging effects of sunbathing on the skin. In view of the evidence indicating that the correlates of sun protection and sunbathing are not the same, interventions may be more effective if they include separate components to address sun protection and sunbathing behaviors. Finally, because health care professionals did not influence sun protection and sunbathing, general practitioners should ask about a family history of skin cancer and refer these individuals to a dermatologist. In view of the rising incidence of melanoma, the development and testing of such interventions is an important public health issue.\nIn terms of recommendations for future research, we found it more difficult to recruit younger and male relatives into the study. Recruitment materials and more intensive recruitment efforts targeted towards younger relatives and men as well as educating melanoma probands about ways to facilitate participation of their younger and male relatives into the study may facilitate a higher uptake in this population of probands. Previous research has suggested that individuals with a family history of melanoma are more likely to speak to their female relatives about melanoma [32] and therefore it is possible that a greater proportion of male and younger relatives will participate in future research if family communication to male relatives is fostered.\nThis study extends what is known about sun protection and sunbathing from previous work conducted on average risk populations to a population of high risk individuals. Although caution should be used in using cross-sectional results to guide interventions, these results provide information regarding the factors that might be focused on in future interventions to address sun protection and sunbathing in this population. In terms of implications for interventions to improve sun protection for at-risk family members, self-efficacy for using sunscreen could be highlighted by discussing recent developments in sunscreen manufacturing and marketing. These include the fact that SPF 15 or higher has been incorporated into many daily-use skin products such as moisturizers and that sunscreens can be sprayed on and can be purchased in unscented, non-greasy versions. Because our data suggest that men are more likely to consider sunscreen a hassle and a nuisance and not endorse the preventive influence sunscreen has on cosmetic effects of aging (unpublished data), future studies may need to employ qualitative methods to identify strategies for increasing positive perceptions of sunscreen. Emphasizing detrimental cosmetic and photo-aging effects of sun exposure through appearance-based materials, such as age-progressed pictures of the family member, may also prove beneficial. Overall, interventions to reduce sunbathing among FDRs of patients with melanoma should attempt to counteract both perceived benefits of sunbathing and normative influences of family and friends to sunbathe. Emphasis should also be placed on reasons why sunbathing should be avoided (e.g., sunscreen is not 100% effective) and should target younger family members by emphasizing the aging effects of sunbathing on the skin. 
In view of the evidence indicating that the correlates of sun protection and sunbathing are not the same, interventions may be more effective if they include separate components to address sun protection and sunbathing behaviors. Finally, because health care professionals did not influence sun protection and sunbathing, general practitioners should ask about a family history of skin cancer and refer these individuals to a dermatologist. In view of the rising incidence of melanoma, the development and testing of such interventions is an important public health issue.\nIn terms of recommendations for future research, we found it more difficult to recruit younger and male relatives into the study. Recruitment materials and more intensive recruitment efforts targeted towards younger relatives and men as well as educating melanoma probands about ways to facilitate participation of their younger and male relatives into the study may facilitate a higher uptake in this population of probands. Previous research has suggested that individuals with a family history of melanoma are more likely to speak to their female relatives about melanoma [32] and therefore it is possible that a greater proportion of male and younger relatives will participate in future research if family communication to male relatives is fostered.", "The strengths of this study include the large sample size, the focus on family members, the focus on high risk individuals who did not engage in regular sun protection and skin surveillance, the inclusion of sunbathing as an outcome, and the inclusion of previously unstudied correlates of behavior such as the medical status of the affected family member and the level of psychological distress about the affected family member's cancer. This study is also one of few to focus on an older sample of men and women.\nThere are several study limitations. The cross-sectional methodology precludes the ability to infer causal relationships. The sample was comprised of relatively well-educated and married individuals, and almost half the sample was comprised of patients' offspring. Female and older relatives were more likely to participate. It is not known whether levels and correlates of sun protection and sunbathing would have differed with a more heterogeneous sample. It is also not known whether the patients who provided family member names differed from those patients who we were not able to contact or who declined to provide family member names.", "This study extends what is known about sun protection and sunbathing from previous work conducted on average risk populations to a population of high risk individuals. Although caution should be used in using cross-sectional results to guide interventions, these results provide information regarding the factors that might be focused on in future interventions to address sun protection and sunbathing in this population. In terms of implications for interventions to improve sun protection for at-risk family members, self-efficacy for using sunscreen could be highlighted by discussing recent developments in sunscreen manufacturing and marketing. These include the fact that SPF 15 or higher has been incorporated into many daily-use skin products such as moisturizers and that sunscreens can be sprayed on and can be purchased in unscented, non-greasy versions. 
Because our data suggest that men are more likely to consider sunscreen a hassle and a nuisance and not endorse the preventive influence sunscreen has on cosmetic effects of aging (unpublished data), future studies may need to employ qualitative methods to identify strategies for increasing positive perceptions of sunscreen. Emphasizing detrimental cosmetic and photo-aging effects of sun exposure through appearance-based materials, such as age-progressed pictures of the family member, may also prove beneficial. Overall, interventions to reduce sunbathing among FDRs of patients with melanoma should attempt to counteract both perceived benefits of sunbathing and normative influences of family and friends to sunbathe. Emphasis should also be placed on reasons why sunbathing should be avoided (e.g., sunscreen is not 100% effective) and should target younger family members by emphasizing the aging effects of sunbathing on the skin. In view of the evidence indicating that the correlates of sun protection and sunbathing are not the same, interventions may be more effective if they include separate components to address sun protection and sunbathing behaviors. Finally, because health care professionals did not influence sun protection and sunbathing, general practitioners should ask about a family history of skin cancer and refer these individuals to a dermatologist. In view of the rising incidence of melanoma, the development and testing of such interventions is an important public health issue.\nIn terms of recommendations for future research, we found it more difficult to recruit younger and male relatives into the study. Recruitment materials and more intensive recruitment efforts targeted towards younger relatives and men as well as educating melanoma probands about ways to facilitate participation of their younger and male relatives into the study may facilitate a higher uptake in this population of probands. Previous research has suggested that individuals with a family history of melanoma are more likely to speak to their female relatives about melanoma [32] and therefore it is possible that a greater proportion of male and younger relatives will participate in future research if family communication to male relatives is fostered.", "Demographic, psychological, and social influence factors contributed to sun protection and sunbathing practices among melanoma patients' close family members who were not compliant with sun protection or other skin surveillance practices. Less educated and female relatives are less compliant with recommended practices and may benefit from targeted interventions to improve their sun protection and sun exposure practices. Attitudinal factors such as concerns about photo-aging and the perceived benefits of sunbathing were key, and the sun protection and tanning practices of family, friends, and celebrities also played a role. Additionally, attitudes toward sunscreen use including self-efficacy and perceived barriers contributed to skin protection and sunbathing practices, respectively. 
These findings suggest that the effectiveness of behavioral interventions to improve these practices may be improved if we target less educated and female relatives as well as the attitudes and social influences that contribute to low levels of sun protection and sun avoidance in this population of at-risk family members.", "The authors declare that they have no competing interests.", "SLM conceived the study, coordinated data collection, assisted in data analyses, and was the primary author of the manuscript. EJC designed the study, conducted data analyses, and assisted in writing the manuscript. PBJ participated in the design of the study, coordinated data collection, and assisted with writing the manuscript. MM coordinated data collection and assisted with writing the manuscript. CJH assisted with the data interpretation and assisted in writing the manuscript. SL participated in the initial design of the study and coordinated data collection. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/122/prepub\n", "Study Survey Items. The file contains all of the survey items used in the study.\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Leptin promotes melanoma tumor growth in mice related to increasing circulating endothelial progenitor cells numbers and plasma NO production.
21338489
Epidemiological studies suggest that obesity increases the risk of several cancers, including melanoma. Obesity increases the expression of leptin, a multifunctional peptide produced predominantly by adipocytes, which may promote tumor growth. Several recent experiments have suggested that tumor growth depends on endothelial progenitor cell (EPC)-dependent generation of new blood vessels. Our objectives in the present study were to examine the effects of leptin on melanoma growth, circulating EPC numbers, and plasma levels of nitric oxide metabolites (NOx).
BACKGROUND
2 × 10^6 B16-F10 melanoma cells were injected subcutaneously into thirty-two C57BL/6 mice. On day 8, the mice were randomly divided into 4 groups (n = 8). Two groups received twice-daily intraperitoneal (i.p.) injections of either PBS or recombinant murine leptin (1 μg/g initial body weight). Two groups received i.p. injections of either 9F8, an anti-leptin-receptor antibody, or control mouse IgG at 50 μg/mouse every 3 consecutive days. By the end of the second week the animals were euthanized, and blood samples and tumors were analyzed.
METHODS
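A back-of-envelope sketch of the leptin dosing implied by the regimen above; the body weight used is an assumed typical value for a young adult male C57BL/6 mouse, not a figure reported in the study.

```python
# Illustration only: per-injection and per-day leptin dose for the regimen above
# (1 ug leptin per g initial body weight, injected i.p. twice daily).
# The body weight below is an assumption, not a value reported in the study.
def leptin_dose_ug(body_weight_g: float, dose_ug_per_g: float = 1.0) -> float:
    """Per-injection leptin dose in micrograms."""
    return dose_ug_per_g * body_weight_g

weight_g = 22.0                          # assumed initial weight of a young adult C57BL/6 male
per_injection = leptin_dose_ug(weight_g)  # 22 ug per i.p. injection
daily = 2 * per_injection                 # twice-daily schedule -> 44 ug/day
print(f"{per_injection:.0f} ug per injection, {daily:.0f} ug per day")
```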
The tumor weight, EPC number, and NOx level in the leptin, PBS, 9F8, and IgG groups were (3.2 ± 0.6, 1.7 ± 0.3, 1.61 ± 0.2, 1.7 ± 0.3 g), (222.66 ± 36.5, 133.33 ± 171, 23.33 ± 18, 132.66 ± 27.26 per ml of blood), and (22.47 ± 5.5, 12.30 ± 1.5, 6.26 ± 0.84, 15.75 ± 6.3 μmol/L), respectively. Tumor weight and size, circulating EPC numbers, and plasma levels of NOx were significantly greater in the leptin group than in the 9F8 group and both control groups (p < 0.05). The plasma concentration of NOx was significantly lower in 9F8-treated mice than in the control group (p < 0.05).
RESULTS
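For illustration only, the group summaries reported above (mean ± SD, n = 8 per group) can be compared pairwise from the summary statistics; the study's actual analysis was ANOVA with Bonferroni post-hoc tests on the raw data, so this sketch is an approximation, not a re-analysis.

```python
# Rough illustration only: pairwise comparison of the reported tumor weights
# (mean ± SD, n = 8 per group) using Welch's t-test from summary statistics,
# with a simple Bonferroni adjustment. The paper's actual analysis was ANOVA
# followed by Bonferroni post-hoc tests on the raw data.
from itertools import combinations
from scipy import stats

groups = {            # tumor weight (g), as reported in the abstract
    "leptin": (3.2, 0.6, 8),
    "PBS":    (1.7, 0.3, 8),
    "9F8":    (1.61, 0.2, 8),
    "IgG":    (1.7, 0.3, 8),
}

pairs = list(combinations(groups, 2))
for a, b in pairs:
    m1, s1, n1 = groups[a]
    m2, s2, n2 = groups[b]
    t, p = stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=False)
    p_adj = min(1.0, p * len(pairs))   # Bonferroni correction for 6 comparisons
    print(f"{a} vs {b}: t = {t:.2f}, adjusted p = {p_adj:.3f}")
```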
In conclusion, our observations indicate that leptin promotes melanoma growth, likely through increased NO production and increased numbers of circulating EPCs and, consequently, vasculogenesis.
CONCLUSIONS
[ "Adipocytes", "Animals", "Endothelial Cells", "Leptin", "Male", "Melanoma, Experimental", "Mice", "Mice, Inbred C57BL", "Nitric Oxide", "Receptors, Leptin", "Skin Neoplasms", "Stem Cells" ]
3049751
null
null
Methods
[SUBTITLE] Cell culture [SUBSECTION] B16-F10 melanoma cells, which can grow in the C57BL/6 mouse strain, were purchased from the National Cell Bank of Iran (NCBI, Pasteur Institute of Iran). Cells were cultured in DMEM supplemented with 4 mM L-glutamine, 4.5 g/l glucose, 10% FBS, and antibiotics (100 μg/ml streptomycin, 100 μg/ml penicillin) under humidified air with 5% CO2 at 37°C. After the melanoma cell monolayer reached 80% confluency, the cells were washed, detached with PBS containing 0.25% trypsin and 0.03% EDTA, and then pelleted by brief centrifugation at 100 g. The supernatant was removed, cell pellets were resuspended in PBS, and the cell number was counted.
[SUBTITLE] Animal experiments [SUBSECTION] Six- to 8-week-old male C57BL/6 mice were purchased from the Pasteur Institute of Iran and served as recipient mice for tumor inoculation. Mice were permitted 1 week to acclimate to the environment before the experiment. All mice were treated according to the guidelines of the Institutional Ethics Committee. C57BL/6 mice were inoculated subcutaneously in the right flank with 2 × 10^6 B16-F10 melanoma cells using a disposable tuberculin syringe. The day of inoculation was defined as day 0. Primary palpable tumors developed on days 6-7. On day 8, the tumor-bearing mice were randomly assigned to 4 groups of 8 mice each. Two groups received twice-daily intraperitoneal (i.p.) injections of either PBS or recombinant murine leptin (1 μg/g initial body weight). Two groups received i.p. injections of either the 9F8 monoclonal antibody or control mouse IgG at 50 μg/injection every 3 consecutive days, on days 8 and 11 after tumor induction. 9F8 is a monoclonal antibody against the human leptin receptor (ObR) that was developed by Fazeli and Zarkesh-Esfahani and tested for antagonist activity using a leptin signaling bioassay [21]. The 9F8 antibody was a kind gift from Professor Richard Ross, Sheffield University, UK. The mouse IgG was kindly provided by Dr Ali Mostafaei (Medical Biology Research Center, Kermanshah University of Medical Sciences). On day 14, all animals were euthanized via pentobarbital overdose. Tumors were then carefully dissected and weighed. Moreover, tumor volumes were calculated as a prolate spheroid, V = (4/3)·π·a²·b, where "a" is half of the minor axis and "b" is half of the major axis of the prolate spheroid. The weight of the mice was measured immediately after tumor resection.
[SUBTITLE] Flow cytometry quantification of EPC [SUBSECTION] Mice were bled through heart puncture for EPC enumeration by flow cytometry. EPCs were quantified using the murine endothelial markers VEGF receptor 2 (PE; R&D Systems), CD34 (FITC; eBioscience Inc., San Diego, California), and CD45 (PerCP; Santa Cruz Biotechnology, Inc., Santa Cruz, California), as described previously with minor changes [22]. Briefly, blood collected in EDTA-containing tubes was incubated for 10 minutes with FcR blocking reagent (Miltenyi Biotec, Germany). 500 μl of whole blood was incubated with 4 μl of CD45, 8 μl of KDR, and 5 μl of CD34. Respective isotype controls were used as negative controls (eBioscience Inc., San Diego, California) at 5 μg/ml each. The samples were lysed before flow cytometry analysis. After RBC lysis, cell suspensions were evaluated on a FACSCalibur (BD Biosciences). The number of CD45dim CD34+ KDR+ EPCs was determined by two-dimensional side-scatter/fluorescence dot-plot analysis of the sample after gating on the lymphocyte population (Figure 1). The number of EPCs was expressed per 1 ml of blood [22]. Characterization of endothelial progenitor cells (EPCs) by flow cytometry evaluation. First, cells were plotted in forward vs side scatter to gate the lymphocyte population selectively, where EPCs are usually found (a). For analysis of CD45dim CD34+ KDR+ endothelial progenitor cells, CD45 was then plotted against the side scatter (b), followed by further analysis of the CD45dim population for coexpression of CD34/KDR (c).
[SUBTITLE] Nitrite and leptin measurement [SUBSECTION] Mice were fasted for 14 h prior to sacrifice in order to obtain fasted blood samples. Plasma was isolated from the collected whole blood, and total nitrite (NOx) was measured (R&D Systems) as an indicator of endothelial NO release, as previously described [23]. Moreover, plasma leptin concentration was measured with an ELISA kit (R&D Systems) according to the manufacturer's instructions.
[SUBTITLE] Statistical analysis [SUBSECTION] Data are expressed as mean ± SD and were tested for normal distribution with the Kolmogorov-Smirnov test. Comparisons between groups were analysed by ANOVA followed by the Bonferroni method as a post-hoc test. Differences in the weight of the mice were analyzed using the paired-sample t test. Statistical significance was assumed if the null hypothesis could be rejected at p ≤ 0.05. All statistical analyses were performed with SPSS 16 (SPSS Inc.).
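A small worked example of the prolate-spheroid volume formula given above; the caliper measurements are hypothetical.

```python
# Prolate-spheroid tumor volume, as defined in the Methods:
# V = (4/3) * pi * a^2 * b, where a = half the minor axis, b = half the major axis.
# The caliper measurements below are hypothetical, for illustration only.
import math

def tumor_volume_mm3(minor_axis_mm: float, major_axis_mm: float) -> float:
    a = minor_axis_mm / 2.0   # half of the minor axis
    b = major_axis_mm / 2.0   # half of the major axis
    return (4.0 / 3.0) * math.pi * a**2 * b

print(f"{tumor_volume_mm3(8.0, 12.0):.1f} mm^3")  # e.g. an 8 x 12 mm tumor ≈ 402 mm^3
```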
null
null
null
null
[ "Introduction", "Cell culture", "Animal experiments", "Flow cytometry quantification of EPC", "Nitrite and leptin measurement", "Statistical analysis", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "Tumor growth and metastasis is dependent on the formation and assembly of new blood vessels [1]. Several recent experiments have suggested that the growth of some types of tumors is not only dependent on angiogenesis (i.e., mature endothelial-cell dependent generation of new blood vessels) but also is associated with vasculogenesis, which means endothelial progenitor cell (EPC) dependent generation of new blood vessels [2].\nMobilization of EPCs from the bone marrow constitutes a critical step in the formation of de novo blood vessels, and levels of peripheral blood EPCs have been shown to be increased in certain malignant states.\nFurthermore, inhibition of EPCrecruitment in neoplastic conditions has been efficiently attenuated tumors growth and progression [3-6]. In this regard, EPCs holds potential pathophysiological role in melanoma and may offer a potentialpredictive indicator of tumor growth and progression.\nLeptin, a product of the obese (ob) gene, is a multifunctional peptide produced predominantly by adipocytes[7]. Besides itsseveral pleiotropic effects including regulation of food intake and energy expenditure, reproductionand immunefunctions, leptin has been found to exerts angiogenic effects in vitro and in vivo, which are mediated by enhancement of the endothelium derived nitric oxide (NO) production[8,9], the expression of vascular endothelial growth factor (VEGF) and VEGF-receptor 2 and activation of endogenous fibroblasticgrowth factor -2 [10,11].\nThe leptin receptor (ObR) is expressed on various cell types, including endothelial cells,[12,13] CD34-positive hematopoietic cells,[14] and peripheral blood-derived early and lateoutgrowth endothelial progenitor cells [15,16]. Furthermore leptin increased the adhesion, transmigration, and incorporation of early outgrowth progenitor cells into experimental arterial lesions [15].\nNitric oxide (NO) is recognized as an important final target of leptin effecton the endothelium. Leptin can induce NO formation by directly activating endothelial NO synthase through the Akt pathway[17,18].\nLeptin receptors are expressed in mouse melanoma cells, but there is very little previous information on the relationship between leptin and melanoma. One epidemiological study reported that high serum leptin was positively correlated with melanoma risk [19]. Moreover, it has been shown that leptin directly accelerated melanoma tumor growth in mice [20].\nIn the present study, we hypothesized that the leptin may increase the EPC numbers and NO production in peripheral blood of melanoma tumor bearing mice.", "B16-F10 melanoma cells which can grow in the C57BL/6 strain mouse were purchased from the National Cell bank of Iran (NCBI, Pasteur institute of Iran). Cells were cultured in DMEM supplemented with 4 mM L-glutamine, 4.5 g/l glucose, 10% FBS, and antibiotics (100 μg/ml streptomycin, 100 μg/ml penicillin) under humidified air with 5% CO2 at 37°C.\nAfter 80% confluency of the melanoma cell monolayer in culture, the cells were washed and detached with PBS containing 0.25% trypsin and 0.03% EDTA and then pelleted by brief centrifugation at 100 g. The supernatant was removed, cell pellets were resuspended in PBS, and the cell number was counted.", "Six to 8 week-old male C57BL/6 mice were purchased from Pasteur institute of Iran and served as recipient mice for tumor inoculation. Mice were permitted 1 week to acclimate to the environment before experiment. 
All mice were treated according to the guidelines of the Institutional Ethics Committee.\nC57BL/6 mice were inoculated with 2 × 106 B16-F10 melanoma cells subcutaneously in the right flank using a disposable tuberculin syringe. The day of inoculation was defined as day 0. Primary palpable tumors developed on day 6-7. On day 8, the tumor bearing mice were randomly assigned into 4 groups and each group contained 8 mice. Two groups received twice daily intraperitoneal (i.p) injections of either PBS or recombinant murine leptin (1 μg/g initial body weight). Two groups received i.p. injections of either 9F8 monoclonal antibody or the control mouse IgG at 50 μg/injection every 3 consecutive days on days 8, 11 after tumor induction. 9F8 is a monoclonal antibody to the human leptin receptor (ObR) which has been developed by Fazeli and Zarkesh-Esfahani and tested for antagonist activity using a leptin signaling bioassay [21]. 9F8 antibody was a kind gift from Professor Richard Ross, Sheffield University, UK. The mouse IgG was kindly gifted by Dr Ali Mostafaei (Medical Biology Research Center, Kermanshah University of Medical Sciences) At the day 14, all animals were euthanized via pentobarbital overdose. Tumors were then carefully dissected, and weighed. Moreover, tumor volumes were calculated as prolate spheroid: V = (4/3*π*(a)2*(b), were \"a\" is half of the minor axis and \"b\" is half of the major axis of the prolate spheroid. The weight of the mice was measured immediately after tumor resection.", "Mice were bled through heart puncture for EPC enumeration by flowcytometry. EPC were quantified using the endothelial murine markers VEGF receptor2(PE; R&D Systems,), and CD34(FITC;eBioscience Inc., SanDiego, California)and the CD45 (PerCP;Santa Cruz Biotechnology, Inc., Santa Cruz, California)as described previously with minor changes [22]. Briefly, blood collected in EDTA containing tubes were incubated for 10 minutes with FcR-blocking (miltenyibiotec, Germany). 500 μl of whole blood was incubated with 4 μl of CD45, 8 μl of KDR, and 5 μl of CD34. Respective isotype controls were used as anegativecontrol(eBioscience Inc., SanDiego, California) at 5 μg/ml concentration each. The samples were lysed before flow cytometry analysis.\nAfter RBC lysis, cellsuspensions were evaluated by a FACSCalibur (BD Biosciences). The numberof CD45dimCD34+KDR+ EPCswas determined by a two-dimensional side-scatter fluorescencedot-plot analysis of the sample after gating onthe lymphocyte population (Figure 1). The number of EPCs was expressed per 1 mlblood [22].\nCharacterization of endothelial progenitor cells (EPCs) by flowcytometry evaluation. First, cells were plotted in forward vs side scatter to gate the lymphocyte population selectively, where EPCs are usually found (a). For analysis of CD45dimCD34+KDR+ endothelial progenitor cells, CD45 was then plotted against the side scatter (b), followed by further analysis of the CD45dim population on coexpression of CD34/KDR (c).", "Mice were fasted for 14 h prior to sacrificing in order to obtain fasted blood samples.\nPlasma was isolated from whole blood collected and total nitrite (NOx) was measured (R&D Systems) as an indicator of endothelial release of NO as previously described [23].\nMoreover, plasma leptin concentration was measured by ELISA kit (R&D Systems) in mice according to manufacturer's instructions.", "Data are expressed as mean ± SD and were tested for normal distribution with the Kolmogorov-Smirnov test. 
Comparisons between groups were analysed by ANOVA followed by the Bonferroni method as post hoc-test. Differences in the weight of the mice were analyzed using the paired-sample t test. Statistical significance was assumed, if a null hypothesis could be rejected at p ≤ 0.05. All statistical analysis was performed with SPSS 16 (SPSS Inc.).", "The plasma levels of leptin were significantly higher in leptin group compared to all other groups of mice while there was no significant difference between other groups (Figure 2).\nThe plasma levels of leptin were significantly higher in leptin group compared to all other groups of mice while there was no significant difference between other groups. * (p < 0.05).\nBody weights for each group of mice are shown in Table 1. There was a significant weight loss in mice of leptin group while the weight of the animals of 9F8 group increased significantly during the study. By the end of the experiment there was a significant difference between leptin and 9f8 group in body weight and also between each group and its relevant control group.\nThe weight of mice in each group of the study.\n*Significant difference with respective control group\nγ Significant difference with 9F8 group\nThe melanoma tumor weight of leptin treated mice were significantly more than tumors from other groups of mice while there was no significant difference between other groups (Figure 3).\nMean tumors size and weight. The weights and volume of melanoma tumors excised from leptin treated mice were significantly larger than tumors from other groups of mice. There was no significant difference between three other study groups. * (p < 0.05).\nLeptin treatment also resulted in significant more circulating EPCs in tumor bearing mice whereas there was no significant difference between other groups (Figure 4).\nThe circulating EPC numbers. Leptin treated melanoma tumor bearing mice have more EPCs in peripheral blood than all other study groups. There was no significant difference between three other study groups. * (p < 0.05).\nThe plasma concentration of NOx significantly increased in leptin group and significantly decreased in 9f8 treated mice compare to respective control groups (Figure 5).\nThe plasma concentration of NOx. The plasma concentration of NOx significantly increased in leptin group and significantly decreased in 9f8 treated mice compare to respective control groups. Furthermore leptin treated mice had significantly more NOx levels than 9F8 group. * (p < 0.05).", "Adipose tissue secretes several adipokines that are supposed to stimulate inflammation, cell proliferation and angiogenesis. One of the most important member of such adipokines family is leptin, which increases cell proliferation in several tumor cell lines, enhances endothelial cell migration in vitro, and has been suggested to be an angiogenic/vasculogenic factor [12-17,20].\nIt has been suggested that leptin may contribute to tumor growth. However, a direct cause and effect role of leptin in accelerating tumor growth is uncertain. Besides, most of the data supporting leptin's role in stimulating cell proliferation and angiogenesis have been derived from invitro studies.\nIn our study, the tumors weight of leptin treated mice were significantly more than tumors from all other groups of mice. Leptin has been identified in several types of human cancers and may also be linked to poor prognosis. 
In two studies, leptin and leptin receptor expression were significantly increased in primary and metastatic breast cancer relative to noncancerous tissues in women [24]. In a clinical study of colorectal cancer, leptin expression was associated with tumor G2 grade [25]. In renal cell carcinomas leptin and leptin receptor expression was well correlated with progression-free survival, venous invasion and lymph node metastasis [26]. Leptin has also been suggested to have a role in uterine and endometrial cancers [27]. There is very little previous information on the relationship between leptin and melanoma. Just one epidemiological study demonstrated that high serum leptin was positively correlated with melanoma risk [19].\nThe limited published animal studies trying to find whether leptin promote tumor growth have reported different results. Some studies support the hypothesis that the absence of leptin signaling diminishes mammary tumor growth in mice [10,20,28,29].\nBrandon et al, in their well-designed study have shown that leptin deficiency attenuates but does not abolish melanoma tumor growth [20].\nFurthermore, In mouse model of mammary tumor, using a leptin receptor antagonist [28]revealed that leptin signaling promotes the growth of some types of mammary tumors and increases the expression of proliferating cell nuclear antigen, cyclin D1, vascular endothelial growth factor (VEGF) and its receptor type two (VEGF-R2) [30,31]. Furthermore Fusco et al have recently shown that inactivation of LepR inhibits proliferation and viability of human breast cancer cell lines [32]. Inconsistent with the results of these studies, obese Zucker rats, which have defective leptin receptor, developed more mammary tumors than lean Zucker rats after exposure to the carcinogen, 7,12-dimethylbenzanthracene [33].\nLeptin administration led to increase plasma NO concentrations as have been reported previously in several other studies [34-37]. It has been shown that the leptin-induced NO production is mediated through protein kinase A and mitogen-activated protein kinase (MAPK) activation. Interestingly antagonism of leptin by 9f8 antibody resulted in significantly lower plasma NO concentrations compare to both leptin and control group. The significant effect of this antibody on NO production despite of non-significant effects on tumor growth and EPC numbers may be because of use of large, pharmacological concentrations of leptin to demonstrate the 2 latter effects in this study.\nLeptin receptors are expressed in mouse melanoma cells as well as EPCs [38].\nThe results of the present study indicated that leptin enhance the numbers of EPCs in peripheral blood. Recent studies indicated that the EPC derived from bone marrow also contributes to tumor vasculogenesis [3-5,39]. However the extent of EPCs incorporation into the tumor vasculature has been a subject of controversy [40-42]. To the best of our knowledge, this is the first time that has been shown that leptin increased EPCs in melanoma tumor model. It has been recently reported that leptin increased the adhesion and the homing potential of EPCs and may thus enhance their capacity to promote vascular regeneration in vivo [38]. Leptin induces NO, an important mediator of EPC mobilization. NO may trigger EPC recruitment from bone marrow probably by activating a phosphatidylinositol (PI) 3-kinase-independentAkt-eNOS phosphorylation pathway [42,43]. 
So, the mechanism of increased EPCs in the circulation may be due to mobilization of these cells from bone marrow. Furthermore it has been shown that leptin can increase other mediators of vasculogenesis such as VEGF, and intracellular signaling pathways of cell proliferation, including p38 MAPK and ERK1/2 MAPK phosphorylation [44].", "In conclusion, our observations indicate that leptin causes melanoma growth. The mechanisms by which leptin promotes melanoma growth likely involve increased NO production and circulating EPC numbers and consequently vasculogenesis.", "The authors declare that they have no competing interests.", "SHJ had substantial contributions to conception and design, analysis and interpretation of data, and writing the manuscript. FA carried out the cell culture, animal experiment and all other laboratory experiments. HZ and MK had contributions to conception and design. HZ has also been involved in analysis and interpretation of flowcytometry data and drafting the manuscript. MN carried out the flowcytometry measurements. All authors read and approved the final manuscript." ]
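Referring back to the flow-cytometry quantification described in the Methods, a simplified sketch of the gating logic (lymphocyte gate, then CD45dim, then CD34/KDR coexpression) is given below; the event table, column names, thresholds, and acquired blood volume are all placeholder assumptions, since the actual gating was performed interactively on the FACSCalibur with instrument software.

```python
# Simplified sketch of the EPC gating logic described in the Methods
# (CD45dim CD34+ KDR+ within the lymphocyte gate), applied to a hypothetical
# table of per-event fluorescence intensities. Thresholds, column names, and
# the acquired volume are placeholders, not values from the study.
import pandas as pd

events = pd.read_csv("whole_blood_events.csv")   # hypothetical: FSC, SSC, CD45, CD34, KDR per event

# (a) lymphocyte gate on forward/side scatter (placeholder bounds)
lymph = events[(events.FSC.between(200, 600)) & (events.SSC < 300)]

# (b) CD45dim subset: positive but low CD45 signal (placeholder thresholds)
cd45_dim = lymph[(lymph.CD45 > 10) & (lymph.CD45 < 100)]

# (c) CD34/KDR coexpression within the CD45dim population
epcs = cd45_dim[(cd45_dim.CD34 > 50) & (cd45_dim.KDR > 50)]

# express EPC count per ml of blood, given the volume acquired (assumed 0.5 ml)
acquired_volume_ml = 0.5
print(f"EPCs per ml of blood: {len(epcs) / acquired_volume_ml:.0f}")
```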
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Cell culture", "Animal experiments", "Flow cytometry quantification of EPC", "Nitrite and leptin measurement", "Statistical analysis", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "Tumor growth and metastasis is dependent on the formation and assembly of new blood vessels [1]. Several recent experiments have suggested that the growth of some types of tumors is not only dependent on angiogenesis (i.e., mature endothelial-cell dependent generation of new blood vessels) but also is associated with vasculogenesis, which means endothelial progenitor cell (EPC) dependent generation of new blood vessels [2].\nMobilization of EPCs from the bone marrow constitutes a critical step in the formation of de novo blood vessels, and levels of peripheral blood EPCs have been shown to be increased in certain malignant states.\nFurthermore, inhibition of EPCrecruitment in neoplastic conditions has been efficiently attenuated tumors growth and progression [3-6]. In this regard, EPCs holds potential pathophysiological role in melanoma and may offer a potentialpredictive indicator of tumor growth and progression.\nLeptin, a product of the obese (ob) gene, is a multifunctional peptide produced predominantly by adipocytes[7]. Besides itsseveral pleiotropic effects including regulation of food intake and energy expenditure, reproductionand immunefunctions, leptin has been found to exerts angiogenic effects in vitro and in vivo, which are mediated by enhancement of the endothelium derived nitric oxide (NO) production[8,9], the expression of vascular endothelial growth factor (VEGF) and VEGF-receptor 2 and activation of endogenous fibroblasticgrowth factor -2 [10,11].\nThe leptin receptor (ObR) is expressed on various cell types, including endothelial cells,[12,13] CD34-positive hematopoietic cells,[14] and peripheral blood-derived early and lateoutgrowth endothelial progenitor cells [15,16]. Furthermore leptin increased the adhesion, transmigration, and incorporation of early outgrowth progenitor cells into experimental arterial lesions [15].\nNitric oxide (NO) is recognized as an important final target of leptin effecton the endothelium. Leptin can induce NO formation by directly activating endothelial NO synthase through the Akt pathway[17,18].\nLeptin receptors are expressed in mouse melanoma cells, but there is very little previous information on the relationship between leptin and melanoma. One epidemiological study reported that high serum leptin was positively correlated with melanoma risk [19]. Moreover, it has been shown that leptin directly accelerated melanoma tumor growth in mice [20].\nIn the present study, we hypothesized that the leptin may increase the EPC numbers and NO production in peripheral blood of melanoma tumor bearing mice.", "[SUBTITLE] Cell culture [SUBSECTION] B16-F10 melanoma cells which can grow in the C57BL/6 strain mouse were purchased from the National Cell bank of Iran (NCBI, Pasteur institute of Iran). Cells were cultured in DMEM supplemented with 4 mM L-glutamine, 4.5 g/l glucose, 10% FBS, and antibiotics (100 μg/ml streptomycin, 100 μg/ml penicillin) under humidified air with 5% CO2 at 37°C.\nAfter 80% confluency of the melanoma cell monolayer in culture, the cells were washed and detached with PBS containing 0.25% trypsin and 0.03% EDTA and then pelleted by brief centrifugation at 100 g. The supernatant was removed, cell pellets were resuspended in PBS, and the cell number was counted.\nB16-F10 melanoma cells which can grow in the C57BL/6 strain mouse were purchased from the National Cell bank of Iran (NCBI, Pasteur institute of Iran). 
Cells were cultured in DMEM supplemented with 4 mM L-glutamine, 4.5 g/l glucose, 10% FBS, and antibiotics (100 μg/ml streptomycin, 100 μg/ml penicillin) under humidified air with 5% CO2 at 37°C.\nWhen the melanoma cell monolayer reached 80% confluency, the cells were washed and detached with PBS containing 0.25% trypsin and 0.03% EDTA and then pelleted by brief centrifugation at 100 g. The supernatant was removed, cell pellets were resuspended in PBS, and the cell number was counted.\n[SUBTITLE] Animal experiments [SUBSECTION] Six to 8 week-old male C57BL/6 mice were purchased from the Pasteur Institute of Iran and served as recipient mice for tumor inoculation. Mice were permitted 1 week to acclimate to the environment before the experiment. All mice were treated according to the guidelines of the Institutional Ethics Committee.\nC57BL/6 mice were inoculated subcutaneously in the right flank with 2 × 10⁶ B16-F10 melanoma cells using a disposable tuberculin syringe. The day of inoculation was defined as day 0. Primary palpable tumors developed on days 6-7. On day 8, the tumor-bearing mice were randomly assigned into 4 groups of 8 mice each. Two groups received twice-daily intraperitoneal (i.p.) injections of either PBS or recombinant murine leptin (1 μg/g initial body weight). Two groups received i.p. injections of either the 9F8 monoclonal antibody or the control mouse IgG at 50 μg/injection every 3 days, on days 8 and 11 after tumor induction. 9F8 is a monoclonal antibody to the human leptin receptor (ObR) which was developed by Fazeli and Zarkesh-Esfahani and tested for antagonist activity using a leptin signaling bioassay [21]. The 9F8 antibody was a kind gift from Professor Richard Ross, Sheffield University, UK. The mouse IgG was kindly gifted by Dr Ali Mostafaei (Medical Biology Research Center, Kermanshah University of Medical Sciences). On day 14, all animals were euthanized via pentobarbital overdose. Tumors were then carefully dissected and weighed. Moreover, tumor volumes were calculated as a prolate spheroid, V = (4/3) × π × a² × b, where \"a\" is half of the minor axis and \"b\" is half of the major axis of the prolate spheroid (an illustrative sketch of this calculation follows this record's section texts). The weight of the mice was measured immediately after tumor resection.\n[SUBTITLE] Flow cytometry quantification of EPC [SUBSECTION] Mice were bled through heart puncture for EPC enumeration by flow cytometry. EPCs were quantified using the murine endothelial markers VEGF receptor 2 (PE; R&D Systems) and CD34 (FITC; eBioscience Inc., San Diego, California) together with CD45 (PerCP; Santa Cruz Biotechnology, Inc., Santa Cruz, California), as described previously with minor changes [22]. Briefly, blood collected in EDTA-containing tubes was incubated for 10 minutes with FcR-blocking reagent (Miltenyi Biotec, Germany). 500 μl of whole blood was incubated with 4 μl of CD45, 8 μl of KDR, and 5 μl of CD34. Respective isotype controls (eBioscience Inc., San Diego, California) were used as negative controls at a concentration of 5 μg/ml each. The samples were lysed before flow cytometry analysis.\nAfter RBC lysis, cell suspensions were evaluated on a FACSCalibur (BD Biosciences). The number of CD45dimCD34+KDR+ EPCs was determined by a two-dimensional side-scatter/fluorescence dot-plot analysis of the sample after gating on the lymphocyte population (Figure 1). The number of EPCs was expressed per 1 ml blood [22].\nCharacterization of endothelial progenitor cells (EPCs) by flow cytometry evaluation. First, cells were plotted in forward vs side scatter to gate the lymphocyte population selectively, where EPCs are usually found (a). For analysis of CD45dimCD34+KDR+ endothelial progenitor cells, CD45 was then plotted against the side scatter (b), followed by further analysis of the CD45dim population on coexpression of CD34/KDR (c).\n[SUBTITLE] Nitrite and leptin measurement [SUBSECTION] Mice were fasted for 14 h prior to sacrifice in order to obtain fasted blood samples.\nPlasma was isolated from the collected whole blood, and total nitrite (NOx) was measured (R&D Systems) as an indicator of endothelial release of NO, as previously described [23].\nMoreover, plasma leptin concentration was measured in mice by ELISA kit (R&D Systems) according to the manufacturer's instructions.\n[SUBTITLE] Statistical analysis [SUBSECTION] Data are expressed as mean ± SD and were tested for normal distribution with the Kolmogorov-Smirnov test. Comparisons between groups were analysed by ANOVA followed by the Bonferroni method as post hoc test. Differences in the weight of the mice were analyzed using the paired-sample t test. Statistical significance was assumed if the null hypothesis could be rejected at p ≤ 0.05. All statistical analysis was performed with SPSS 16 (SPSS Inc.).", "B16-F10 melanoma cells, which can grow in the C57BL/6 mouse strain, were purchased from the National Cell Bank of Iran (NCBI, Pasteur Institute of Iran). Cells were cultured in DMEM supplemented with 4 mM L-glutamine, 4.5 g/l glucose, 10% FBS, and antibiotics (100 μg/ml streptomycin, 100 μg/ml penicillin) under humidified air with 5% CO2 at 37°C.\nWhen the melanoma cell monolayer reached 80% confluency, the cells were washed and detached with PBS containing 0.25% trypsin and 0.03% EDTA and then pelleted by brief centrifugation at 100 g. The supernatant was removed, cell pellets were resuspended in PBS, and the cell number was counted.", "Six to 8 week-old male C57BL/6 mice were purchased from the Pasteur Institute of Iran and served as recipient mice for tumor inoculation. Mice were permitted 1 week to acclimate to the environment before the experiment. All mice were treated according to the guidelines of the Institutional Ethics Committee.\nC57BL/6 mice were inoculated subcutaneously in the right flank with 2 × 10⁶ B16-F10 melanoma cells using a disposable tuberculin syringe. The day of inoculation was defined as day 0. Primary palpable tumors developed on days 6-7. On day 8, the tumor-bearing mice were randomly assigned into 4 groups of 8 mice each. Two groups received twice-daily intraperitoneal (i.p.) injections of either PBS or recombinant murine leptin (1 μg/g initial body weight). Two groups received i.p. injections of either the 9F8 monoclonal antibody or the control mouse IgG at 50 μg/injection every 3 days, on days 8 and 11 after tumor induction. 9F8 is a monoclonal antibody to the human leptin receptor (ObR) which was developed by Fazeli and Zarkesh-Esfahani and tested for antagonist activity using a leptin signaling bioassay [21]. The 9F8 antibody was a kind gift from Professor Richard Ross, Sheffield University, UK. The mouse IgG was kindly gifted by Dr Ali Mostafaei (Medical Biology Research Center, Kermanshah University of Medical Sciences). On day 14, all animals were euthanized via pentobarbital overdose. Tumors were then carefully dissected and weighed. Moreover, tumor volumes were calculated as a prolate spheroid, V = (4/3) × π × a² × b, where \"a\" is half of the minor axis and \"b\" is half of the major axis of the prolate spheroid. The weight of the mice was measured immediately after tumor resection.", "Mice were bled through heart puncture for EPC enumeration by flow cytometry. EPCs were quantified using the murine endothelial markers VEGF receptor 2 (PE; R&D Systems) and CD34 (FITC; eBioscience Inc., San Diego, California) together with CD45 (PerCP; Santa Cruz Biotechnology, Inc., Santa Cruz, California), as described previously with minor changes [22]. Briefly, blood collected in EDTA-containing tubes was incubated for 10 minutes with FcR-blocking reagent (Miltenyi Biotec, Germany). 500 μl of whole blood was incubated with 4 μl of CD45, 8 μl of KDR, and 5 μl of CD34. Respective isotype controls (eBioscience Inc., San Diego, California) were used as negative controls at a concentration of 5 μg/ml each. The samples were lysed before flow cytometry analysis.\nAfter RBC lysis, cell suspensions were evaluated on a FACSCalibur (BD Biosciences). The number of CD45dimCD34+KDR+ EPCs was determined by a two-dimensional side-scatter/fluorescence dot-plot analysis of the sample after gating on the lymphocyte population (Figure 1). The number of EPCs was expressed per 1 ml blood [22].\nCharacterization of endothelial progenitor cells (EPCs) by flow cytometry evaluation. First, cells were plotted in forward vs side scatter to gate the lymphocyte population selectively, where EPCs are usually found (a). For analysis of CD45dimCD34+KDR+ endothelial progenitor cells, CD45 was then plotted against the side scatter (b), followed by further analysis of the CD45dim population on coexpression of CD34/KDR (c).", "Mice were fasted for 14 h prior to sacrifice in order to obtain fasted blood samples.\nPlasma was isolated from the collected whole blood, and total nitrite (NOx) was measured (R&D Systems) as an indicator of endothelial release of NO, as previously described [23].\nMoreover, plasma leptin concentration was measured in mice by ELISA kit (R&D Systems) according to the manufacturer's instructions.", "Data are expressed as mean ± SD and were tested for normal distribution with the Kolmogorov-Smirnov test. Comparisons between groups were analysed by ANOVA followed by the Bonferroni method as post hoc test. Differences in the weight of the mice were analyzed using the paired-sample t test. Statistical significance was assumed if the null hypothesis could be rejected at p ≤ 0.05.
All statistical analysis was performed with SPSS 16 (SPSS Inc.).", "The plasma levels of leptin were significantly higher in the leptin group compared to all other groups of mice, while there was no significant difference between the other groups (Figure 2).\nThe plasma levels of leptin were significantly higher in the leptin group compared to all other groups of mice, while there was no significant difference between the other groups. * (p < 0.05).\nBody weights for each group of mice are shown in Table 1. There was a significant weight loss in mice of the leptin group, while the weight of the animals of the 9F8 group increased significantly during the study. By the end of the experiment there was a significant difference in body weight between the leptin and 9F8 groups, and also between each group and its relevant control group.\nThe weight of mice in each group of the study.\n*Significant difference with respective control group\nγ Significant difference with 9F8 group\nThe melanoma tumor weight of leptin-treated mice was significantly greater than that of tumors from the other groups of mice, while there was no significant difference between the other groups (Figure 3).\nMean tumor size and weight. The weights and volumes of melanoma tumors excised from leptin-treated mice were significantly larger than tumors from the other groups of mice. There was no significant difference between the three other study groups. * (p < 0.05).\nLeptin treatment also resulted in significantly more circulating EPCs in tumor-bearing mice, whereas there was no significant difference between the other groups (Figure 4).\nThe circulating EPC numbers. Leptin-treated melanoma tumor-bearing mice had more EPCs in peripheral blood than all other study groups. There was no significant difference between the three other study groups. * (p < 0.05).\nThe plasma concentration of NOx significantly increased in the leptin group and significantly decreased in 9F8-treated mice compared to the respective control groups (Figure 5).\nThe plasma concentration of NOx. The plasma concentration of NOx significantly increased in the leptin group and significantly decreased in 9F8-treated mice compared to the respective control groups. Furthermore, leptin-treated mice had significantly higher NOx levels than the 9F8 group. * (p < 0.05).", "Adipose tissue secretes several adipokines that are thought to stimulate inflammation, cell proliferation and angiogenesis. One of the most important members of this adipokine family is leptin, which increases cell proliferation in several tumor cell lines, enhances endothelial cell migration in vitro, and has been suggested to be an angiogenic/vasculogenic factor [12-17,20].\nIt has been suggested that leptin may contribute to tumor growth. However, a direct cause-and-effect role of leptin in accelerating tumor growth is uncertain. Besides, most of the data supporting leptin's role in stimulating cell proliferation and angiogenesis have been derived from in vitro studies.\nIn our study, the tumor weight of leptin-treated mice was significantly greater than that of tumors from all other groups of mice. Leptin has been identified in several types of human cancers and may also be linked to poor prognosis. In two studies, leptin and leptin receptor expression were significantly increased in primary and metastatic breast cancer relative to noncancerous tissues in women [24]. In a clinical study of colorectal cancer, leptin expression was associated with tumor G2 grade [25].
In renal cell carcinomas, leptin and leptin receptor expression correlated well with progression-free survival, venous invasion and lymph node metastasis [26]. Leptin has also been suggested to have a role in uterine and endometrial cancers [27]. There is very little previous information on the relationship between leptin and melanoma. Only one epidemiological study has demonstrated that high serum leptin was positively correlated with melanoma risk [19].\nThe few published animal studies examining whether leptin promotes tumor growth have reported different results. Some studies support the hypothesis that the absence of leptin signaling diminishes mammary tumor growth in mice [10,20,28,29].\nBrandon et al, in their well-designed study, have shown that leptin deficiency attenuates but does not abolish melanoma tumor growth [20].\nFurthermore, in a mouse model of mammary tumors, the use of a leptin receptor antagonist [28] revealed that leptin signaling promotes the growth of some types of mammary tumors and increases the expression of proliferating cell nuclear antigen, cyclin D1, vascular endothelial growth factor (VEGF) and its receptor type two (VEGF-R2) [30,31]. Furthermore, Fusco et al have recently shown that inactivation of LepR inhibits proliferation and viability of human breast cancer cell lines [32]. In contrast to the results of these studies, obese Zucker rats, which have a defective leptin receptor, developed more mammary tumors than lean Zucker rats after exposure to the carcinogen 7,12-dimethylbenzanthracene [33].\nLeptin administration led to increased plasma NO concentrations, as has been reported previously in several other studies [34-37]. It has been shown that leptin-induced NO production is mediated through protein kinase A and mitogen-activated protein kinase (MAPK) activation. Interestingly, antagonism of leptin by the 9F8 antibody resulted in significantly lower plasma NO concentrations compared to both the leptin and control groups. The significant effect of this antibody on NO production, despite non-significant effects on tumor growth and EPC numbers, may be due to the use of large, pharmacological concentrations of leptin to demonstrate the latter two effects in this study.\nLeptin receptors are expressed in mouse melanoma cells as well as EPCs [38].\nThe results of the present study indicated that leptin enhances the number of EPCs in peripheral blood. Recent studies indicated that EPCs derived from bone marrow also contribute to tumor vasculogenesis [3-5,39]. However, the extent of EPC incorporation into the tumor vasculature has been a subject of controversy [40-42]. To the best of our knowledge, this is the first time that leptin has been shown to increase EPCs in a melanoma tumor model. It has been recently reported that leptin increases the adhesion and the homing potential of EPCs and may thus enhance their capacity to promote vascular regeneration in vivo [38]. Leptin induces NO, an important mediator of EPC mobilization. NO may trigger EPC recruitment from bone marrow, probably by activating a phosphatidylinositol (PI) 3-kinase-independent Akt-eNOS phosphorylation pathway [42,43]. Thus, the mechanism of increased EPCs in the circulation may be mobilization of these cells from bone marrow.
Furthermore it has been shown that leptin can increase other mediators of vasculogenesis such as VEGF, and intracellular signaling pathways of cell proliferation, including p38 MAPK and ERK1/2 MAPK phosphorylation [44].", "In conclusion, our observations indicate that leptin causes melanoma growth. The mechanisms by which leptin promotes melanoma growth likely involve increased NO production and circulating EPC numbers and consequently vasculogenesis.", "The authors declare that they have no competing interests.", "SHJ had substantial contributions to conception and design, analysis and interpretation of data, and writing the manuscript. FA carried out the cell culture, animal experiment and all other laboratory experiments. HZ and MK had contributions to conception and design. HZ has also been involved in analysis and interpretation of flowcytometry data and drafting the manuscript. MN carried out the flowcytometry measurements. All authors read and approved the final manuscript." ]
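The tumor-volume formula and the statistical workflow described in this record's Methods (prolate-spheroid volume, Kolmogorov-Smirnov normality check, ANOVA with Bonferroni post hoc comparisons, paired t-test) can be made concrete with a short script. This is a minimal illustrative sketch, not the authors' analysis code: the SPSS procedures are re-expressed with scipy, and every numeric value below (axis lengths, group means, simulated samples) is an invented placeholder used only to show the calculations.

```python
# Hedged sketch of the analysis steps described above; all values are
# made-up placeholders, not data from the study.
import numpy as np
from itertools import combinations
from scipy import stats

def prolate_spheroid_volume(minor_axis, major_axis):
    """V = (4/3) * pi * a^2 * b, with a = half the minor axis, b = half the major axis."""
    a, b = minor_axis / 2.0, major_axis / 2.0
    return (4.0 / 3.0) * np.pi * a ** 2 * b

# Example: a tumor measuring 8 mm (minor axis) by 12 mm (major axis)
print(f"Tumor volume: {prolate_spheroid_volume(8, 12):.1f} mm^3")

# Hypothetical tumor weights (g) for the four groups, n = 8 per group
rng = np.random.default_rng(0)
groups = {
    "PBS": rng.normal(1.0, 0.2, 8),
    "leptin": rng.normal(1.6, 0.2, 8),
    "IgG": rng.normal(1.0, 0.2, 8),
    "9F8": rng.normal(0.9, 0.2, 8),
}

# Normality check per group (Kolmogorov-Smirnov against a fitted normal)
for name, x in groups.items():
    d, p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"K-S {name}: D = {d:.3f}, p = {p:.3f}")

# One-way ANOVA across the four groups
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Bonferroni-corrected pairwise comparisons as a post hoc step
pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    _, p = stats.ttest_ind(groups[g1], groups[g2])
    print(f"{g1} vs {g2}: Bonferroni-corrected p = {min(p * len(pairs), 1.0):.4f}")

# Paired t-test, e.g. body weight of the same animals before and after treatment
weight_before = rng.normal(22.0, 1.0, 8)
weight_after = weight_before - rng.normal(1.5, 0.5, 8)  # hypothetical weight change
t_stat, p = stats.ttest_rel(weight_before, weight_after)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p:.4f}")
```

As a worked number: with a minor axis of 8 mm and a major axis of 12 mm, a = 4 mm and b = 6 mm, so V = (4/3)π × 16 × 6 ≈ 402 mm³; the same arithmetic applies to the measured axes of each excised tumor.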
[ null, "methods", null, null, null, null, null, null, null, null, null, null ]
[]
Work hours and self rated health of hospital doctors in Norway and Germany. A comparative study on national samples.
21338494
The relationship between extended work hours and health is well documented among hospital doctors, but the effect of national differences in work hours on health is unexplored. The study examines the relationship between work hours and self rated health in two national samples of hospital doctors.
BACKGROUND
The study population consisted of representative samples of 1,260 German and 562 Norwegian hospital doctors aged 25-65 years (N = 1,822) who received postal questionnaires in 2006 (Germany) and 2008 (Norway). The questionnaires contained items on demography, work hours (number of hours per workday and on-call per month) and self rated subjective health on a five point scale--dichotomized into "good" (above average) and "average or below".
METHODS
Compared to Norway, a significantly higher proportion of German doctors exceeded a 9 hour work day (58.8% vs. 26.7%) and 60 hours on-call per month (63.4% vs. 18.3%). Every third (32.2%) hospital doctor in Germany worked more than this, while this pattern was rare in Norway (2.9%). In a logistic regression model, working in Norway (OR 4.17; 95% CI 3.02-5.73), age 25-44 years (OR 1.66; 95% CI 1.29-2.14) and not exceeding 9 hour work day and 60 hours on-call per month (OR 1.35; 95% CI 1.03-1.77) were all independent significant predictors of good self reported health.
RESULTS
A lower percentage of German hospital doctors reported self rated health as "good", which is partly explained by the differences in work time pattern. Initiatives to increase doctors' control over their work time are recommended.
CONCLUSION
[ "Adult", "Aged", "Female", "Germany", "Health Status", "Hospitals", "Humans", "Male", "Middle Aged", "Norway", "Physicians", "Surveys and Questionnaires", "Workload" ]
3073890
null
null
Methods
[SUBTITLE] Data collection and sample [SUBSECTION] In Germany a 12 page postal questionnaire was sent in September/October 2006 from the German Hospital Institute to 3,295 hospital doctors, with no reminders. In Norway a 14 page postal questionnaire was sent in October and November 2008 to 1,650 doctors of all kinds, with one reminder. Both The German Hospital Institute and The Research Institute of The Norwegian Medical Association are independent research institutes with experience in surveys on doctors' health and work conditions. The response rates were 58.2% (1,917/3,295) in Germany and 65.0% (1,072/1,650) in Norway, of which 592 were hospital doctors. Age between 25 and 65 years and working in a hospital setting with a traditional work pattern - day time work usually combined with on-call duties - were inclusion criteria. The final sample comprised 1,822 respondents, 1,260 in Germany and 562 in Norway. In Germany a 12 page postal questionnaire was sent in September/October 2006 from the German Hospital Institute to 3,295 hospital doctors, with no reminders. In Norway a 14 page postal questionnaire was sent in October and November 2008 to 1,650 doctors of all kinds, with one reminder. Both The German Hospital Institute and The Research Institute of The Norwegian Medical Association are independent research institutes with experience in surveys on doctors' health and work conditions. The response rates were 58.2% (1,917/3,295) in Germany and 65.0% (1,072/1,650) in Norway, of which 592 were hospital doctors. Age between 25 and 65 years and working in a hospital setting with a traditional work pattern - day time work usually combined with on-call duties - were inclusion criteria. The final sample comprised 1,822 respondents, 1,260 in Germany and 562 in Norway. [SUBTITLE] Questionnaire and measurement [SUBSECTION] Both the German and the Norwegian questionnaire included a question on the average number of work hours per day: "On an average work day, how many hours do you work (including overtime, excluding on-call duties)". In addition, the average number of hours on-call per month was recorded, in Norway with the question: "In an average month, about how many hours do you have on-call duties?", in Germany with a similar question: "In an average month, about how many on-call duties do you have on weekday and weekend? About how many hours are you on-call duty on a weekday and weekend?" The standard full time workweek is between 38-40 hours in Norway and 40-42 hours in Germany [17,18], and almost all our respondents worked at least this much. Only doctors working full time were included in the study. Work hours of most hospital doctors in Norway, Germany and in other countries consist of hours at work days and on call duties. Work hours among hospital doctors can be measured by using a composite index of hours at work day and on-call duties [14,19]. Our hypothesis is that German hospital doctors report longer work hours and poorer health than their Norwegian colleagues. For the purpose of this study we made a distinction between the doctors who worked both more than 9 hours per day and more than 60 hours on-call per month, and those who did not. Since very few Norwegian doctors meet these criteria we denote not having this pattern of long hours for the "Norwegian work time pattern". Health was measured by a single question: "In general, would you say your health is (G: Wie würden Sie Ihren gegenwärtigen Gesundheitszustand beschrieben? 
N: Stort sett, vil du si at din helse er:) with response alternatives in Germany very good (1: sehr gut), good (2: gut), average (3: zufriedenstellend, synonym for durchschnittlich [20]), less good (4: weniger gut) , and poor (5: schlecht) and in Norway good (1: god), fairly good (2: nokså god), average (3: middels), rather poor (4: nokså dårlig), and poor (5: dårlig). The wording of the two highest response levels differed in the two countries; "very good" and "good" in Germany and "good" and "fairly good" in Norway. However, the middle (average) level is the same, as are the levels below. Hence we dichotomized the original five response levels into "good" (above average; categories 1, 2) and "average or below" (categories 3-5). The question on self rated health is thoroughly validated and widely used in Norwegian [21], German and other surveys [22]. It is also considered to be a good indicator of mortality risk, morbidity, and general health status [21-23]. Both the German and the Norwegian questionnaire included a question on the average number of work hours per day: "On an average work day, how many hours do you work (including overtime, excluding on-call duties)". In addition, the average number of hours on-call per month was recorded, in Norway with the question: "In an average month, about how many hours do you have on-call duties?", in Germany with a similar question: "In an average month, about how many on-call duties do you have on weekday and weekend? About how many hours are you on-call duty on a weekday and weekend?" The standard full time workweek is between 38-40 hours in Norway and 40-42 hours in Germany [17,18], and almost all our respondents worked at least this much. Only doctors working full time were included in the study. Work hours of most hospital doctors in Norway, Germany and in other countries consist of hours at work days and on call duties. Work hours among hospital doctors can be measured by using a composite index of hours at work day and on-call duties [14,19]. Our hypothesis is that German hospital doctors report longer work hours and poorer health than their Norwegian colleagues. For the purpose of this study we made a distinction between the doctors who worked both more than 9 hours per day and more than 60 hours on-call per month, and those who did not. Since very few Norwegian doctors meet these criteria we denote not having this pattern of long hours for the "Norwegian work time pattern". Health was measured by a single question: "In general, would you say your health is (G: Wie würden Sie Ihren gegenwärtigen Gesundheitszustand beschrieben? N: Stort sett, vil du si at din helse er:) with response alternatives in Germany very good (1: sehr gut), good (2: gut), average (3: zufriedenstellend, synonym for durchschnittlich [20]), less good (4: weniger gut) , and poor (5: schlecht) and in Norway good (1: god), fairly good (2: nokså god), average (3: middels), rather poor (4: nokså dårlig), and poor (5: dårlig). The wording of the two highest response levels differed in the two countries; "very good" and "good" in Germany and "good" and "fairly good" in Norway. However, the middle (average) level is the same, as are the levels below. Hence we dichotomized the original five response levels into "good" (above average; categories 1, 2) and "average or below" (categories 3-5). The question on self rated health is thoroughly validated and widely used in Norwegian [21], German and other surveys [22]. 
It is also considered to be a good indicator of mortality risk, morbidity, and general health status [21-23]. [SUBTITLE] Analyses [SUBSECTION] We compared proportions by Pearson's Chi-square test and interval variables (age) by calculating 95% confidence intervals. Logistic regression analyses were used to assess the simultaneous effects of workplace country, age, gender and work hours on self rated health. Units with missing items were excluded. SPSS, version 17.0 was used for the analyses. We compared proportions by Pearson's Chi-square test and interval variables (age) by calculating 95% confidence intervals. Logistic regression analyses were used to assess the simultaneous effects of workplace country, age, gender and work hours on self rated health. Units with missing items were excluded. SPSS, version 17.0 was used for the analyses.
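The variable construction and regression described in this methods text (dichotomizing the five-point self rated health item into "good" versus "average or below", defining the composite work time indicator of more than 9 hours per day combined with more than 60 hours on-call per month, and modelling good health with logistic regression) can be sketched in a few lines. This is a hedged illustration rather than the authors' SPSS syntax: the data frame is filled with simulated placeholder values, and all column names are assumptions invented for the example; only the processing steps mirror the text.

```python
# Sketch of the dichotomization, composite work-time indicator and logistic
# model described above; rows and column names are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1503  # full-time hospital doctors entering the final model
df = pd.DataFrame({
    "country": rng.choice(["Germany", "Norway"], size=n, p=[0.7, 0.3]),
    "age": rng.integers(25, 66, size=n),
    "male": rng.integers(0, 2, size=n),
    "hours_per_day": rng.normal(9.5, 1.5, size=n).round(1),
    "oncall_month": rng.normal(70, 30, size=n).clip(0).round(),
    "health_5pt": rng.integers(1, 6, size=n),  # 1 = best ... 5 = poor
})

# Self rated health dichotomized: "good" = above average (levels 1-2)
df["good_health"] = (df["health_5pt"] <= 2).astype(int)

# "Norwegian work time pattern" = NOT (> 9 h/day AND > 60 h on-call/month)
long_hours = (df["hours_per_day"] > 9) & (df["oncall_month"] > 60)
df["norwegian_pattern"] = (~long_hours).astype(int)
df["young"] = (df["age"] <= 44).astype(int)   # age 25-44 vs 45-65
df["norway"] = (df["country"] == "Norway").astype(int)

# Logistic regression: country, age group, gender and work time pattern
model = smf.logit("good_health ~ norway + young + male + norwegian_pattern",
                  data=df).fit(disp=False)

# Odds ratios with 95% confidence intervals
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "2.5%": np.exp(model.conf_int()[0]),
    "97.5%": np.exp(model.conf_int()[1]),
})
print(or_table.round(2))
```

Because the placeholder data are random, the printed odds ratios will hover around 1; the point of the sketch is the sequence of steps, not the estimates.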
null
null
null
null
[ "Background", "Data collection and sample", "Questionnaire and measurement", "Analyses", "Results", "Sample characteristics", "Work hours and health", "Discussion", "Strengths and limitations", "Conclusion", "Competing interest", "Authors' contributions", "Pre-publication history" ]
[ "The detrimental effect of long work hours on health in different occupational groups, including the medical profession, is well documented, [1]. It is also well known that working long hours, as a result of extended days and on-call duties, is common among many hospital doctors in Europe. Although the Working Time Directives of the European Union [2,3] and rulings of the European Court of Justice [4,5] limit the work hours of doctors in the member countries, there are large national variations in the actual work time burden of hospital doctors in Europe [6]. The possible association between international differences in actual work hours and the general health status of hospital doctors is of interest.\nA comparison of previous studies is limited by methodological differences regarding data collection, sample characteristics and measurements. However, there is evidence for a considerable difference in work time burden for hospital doctors in two European countries - Norway and Germany. More leisure time and shorter and more regulated work hours in Norwegian hospitals have been a main motive for the migration of German hospital doctors to Norway [7]. Hospital doctors in Germany report significantly lower job satisfaction, compared with their colleagues in Norway, and the largest difference was observed on satisfaction with work hours as one of ten components of the job satisfaction scale [8]. In this paper we look into the differences in doctors' actual work hours in Norwegian and German hospitals, and whether this difference is associated with self rated health.\nIt is feasible to perform reliable and comparable analyses in these two countries. The general health status of the populations in Norway and Germany, expressed by life expectancy at birth and estimated percentage of life lived in good health or free of disability are similar [9].\nThe effort-recovery model [10] explains the relationship between long work hours and poor health. It implies that long hours can lead to insufficient recovery, which in turn may cause various health problems [1]. Other theoretical models have assumed that the number of hours worked is directly related to stress [11], which may challenge the doctors' mental and physical health. Excessive work hours and insufficient rest periods are commonly known to be exhausting. Consequently, previous investigations have tended to concentrate on the effect of work hours on mental health such as fatigue, mood changes, sleep disturbance and burnout [12]. Most studies have focused on specific positions or specialties, while little attention has been given to the whole group of hospital doctors [13,14].\nThe aim of this study is to examine and compare the associations between actual work hours and self rated health in national samples of Norwegian and German hospital doctors. We expect to find that hospital doctors in Germany report longer work hours and poorer health than their colleagues in Norway.\nTo our knowledge, no comparative study like this has been done; hence this study may be of importance in the present discussion on doctors' work hours [6,15] and health [14,16].", "In Germany a 12 page postal questionnaire was sent in September/October 2006 from the German Hospital Institute to 3,295 hospital doctors, with no reminders. In Norway a 14 page postal questionnaire was sent in October and November 2008 to 1,650 doctors of all kinds, with one reminder. 
Both The German Hospital Institute and The Research Institute of The Norwegian Medical Association are independent research institutes with experience in surveys on doctors' health and work conditions. The response rates were 58.2% (1,917/3,295) in Germany and 65.0% (1,072/1,650) in Norway, of which 592 were hospital doctors. Age between 25 and 65 years and working in a hospital setting with a traditional work pattern - day time work usually combined with on-call duties - were inclusion criteria. The final sample comprised 1,822 respondents, 1,260 in Germany and 562 in Norway.", "Both the German and the Norwegian questionnaire included a question on the average number of work hours per day: \"On an average work day, how many hours do you work (including overtime, excluding on-call duties)\". In addition, the average number of hours on-call per month was recorded, in Norway with the question: \"In an average month, about how many hours do you have on-call duties?\", in Germany with a similar question: \"In an average month, about how many on-call duties do you have on weekday and weekend? About how many hours are you on-call duty on a weekday and weekend?\" The standard full time workweek is between 38-40 hours in Norway and 40-42 hours in Germany [17,18], and almost all our respondents worked at least this much. Only doctors working full time were included in the study.\nWork hours of most hospital doctors in Norway, Germany and in other countries consist of hours at work days and on call duties. Work hours among hospital doctors can be measured by using a composite index of hours at work day and on-call duties [14,19]. Our hypothesis is that German hospital doctors report longer work hours and poorer health than their Norwegian colleagues. For the purpose of this study we made a distinction between the doctors who worked both more than 9 hours per day and more than 60 hours on-call per month, and those who did not. Since very few Norwegian doctors meet these criteria we denote not having this pattern of long hours for the \"Norwegian work time pattern\".\nHealth was measured by a single question: \"In general, would you say your health is (G: Wie würden Sie Ihren gegenwärtigen Gesundheitszustand beschrieben? N: Stort sett, vil du si at din helse er:) with response alternatives in Germany very good (1: sehr gut), good (2: gut), average (3: zufriedenstellend, synonym for durchschnittlich [20]), less good (4: weniger gut) , and poor (5: schlecht) and in Norway good (1: god), fairly good (2: nokså god), average (3: middels), rather poor (4: nokså dårlig), and poor (5: dårlig). The wording of the two highest response levels differed in the two countries; \"very good\" and \"good\" in Germany and \"good\" and \"fairly good\" in Norway. However, the middle (average) level is the same, as are the levels below. Hence we dichotomized the original five response levels into \"good\" (above average; categories 1, 2) and \"average or below\" (categories 3-5).\nThe question on self rated health is thoroughly validated and widely used in Norwegian [21], German and other surveys [22]. It is also considered to be a good indicator of mortality risk, morbidity, and general health status [21-23].", "We compared proportions by Pearson's Chi-square test and interval variables (age) by calculating 95% confidence intervals. Logistic regression analyses were used to assess the simultaneous effects of workplace country, age, gender and work hours on self rated health. Units with missing items were excluded. 
SPSS, version 17.0 was used for the analyses.", "[SUBTITLE] Sample characteristics [SUBSECTION] The gender distribution was similar in Germany and Norway, with 62.1% (783/1260) and 58.2% (327/560) males respectively. The German doctors were significantly younger, with a mean age of 42.7 (95% CI 42.1 to 43.3) vs. 48.6 (47.5 to 49.7) years for males, and 38.6 (37.9 to 39.4) vs. 42.7 (41.5 to 43.8) years for females.\nThe gender distribution was similar in Germany and Norway, with 62.1% (783/1260) and 58.2% (327/560) males respectively. The German doctors were significantly younger, with a mean age of 42.7 (95% CI 42.1 to 43.3) vs. 48.6 (47.5 to 49.7) years for males, and 38.6 (37.9 to 39.4) vs. 42.7 (41.5 to 43.8) years for females.\n[SUBTITLE] Work hours and health [SUBSECTION] Work hours and self rated general health are shown in Table 1. The work hours per day and on-call duties per month were significantly lower among both female and male hospital doctors in Norway than in Germany. A considerable lower proportion of Norwegian doctors exceeded a 9 hours' work day plus 60 hours on-call per month.\nWork time and self rated health of hospital doctors in Norway and Germany, aged 25-65 years and employed in full time. Data are % (n) of respondents.\n* p < 0.0001, differences between countries using Pearson' Chi-square test\nIn both countries, male doctors worked significantly longer days and female doctors more hours on-call. However, we found no sex differences in the prevalence of Norwegian work time pattern (data not shown).\nThe majority of the doctors in both countries reported good health, but this proportion was significantly lower in Germany (Table 1). There were no gender differences in self rated health in either country (data not shown).\nIn a logistic regression model (Table 2, Model I) the simultaneous effect of sex, age and work country on self rated health was explored. The model fit the data fairly well (p = .358, Hosmer-Lemeshow test). When \"Norwegian work time pattern\" was included as predictor (Model II), there was a moderate decrease in -2 Log likelihood from 1681 to 1677, suggesting an improvement in model fit (to p = .498, Hosmer Lemeshow).\nLogistic regressions with good self rated health as response variable, without (Model I) and with (Model II) the Norwegian work time pattern. 1 503 full time hospital doctors in Germany and Norway.\n(‡) Not having the combination of working more than 9 hours a day and more than 60 hours a month on-call\nWork hours and self rated general health are shown in Table 1. The work hours per day and on-call duties per month were significantly lower among both female and male hospital doctors in Norway than in Germany. A considerable lower proportion of Norwegian doctors exceeded a 9 hours' work day plus 60 hours on-call per month.\nWork time and self rated health of hospital doctors in Norway and Germany, aged 25-65 years and employed in full time. Data are % (n) of respondents.\n* p < 0.0001, differences between countries using Pearson' Chi-square test\nIn both countries, male doctors worked significantly longer days and female doctors more hours on-call. However, we found no sex differences in the prevalence of Norwegian work time pattern (data not shown).\nThe majority of the doctors in both countries reported good health, but this proportion was significantly lower in Germany (Table 1). 
There were no gender differences in self rated health in either country (data not shown).\nIn a logistic regression model (Table 2, Model I) the simultaneous effect of sex, age and work country on self rated health was explored. The model fit the data fairly well (p = .358, Hosmer-Lemeshow test). When \"Norwegian work time pattern\" was included as predictor (Model II), there was a moderate decrease in -2 Log likelihood from 1681 to 1677, suggesting an improvement in model fit (to p = .498, Hosmer Lemeshow).\nLogistic regressions with good self rated health as response variable, without (Model I) and with (Model II) the Norwegian work time pattern. 1 503 full time hospital doctors in Germany and Norway.\n(‡) Not having the combination of working more than 9 hours a day and more than 60 hours a month on-call", "The gender distribution was similar in Germany and Norway, with 62.1% (783/1260) and 58.2% (327/560) males respectively. The German doctors were significantly younger, with a mean age of 42.7 (95% CI 42.1 to 43.3) vs. 48.6 (47.5 to 49.7) years for males, and 38.6 (37.9 to 39.4) vs. 42.7 (41.5 to 43.8) years for females.", "Work hours and self rated general health are shown in Table 1. The work hours per day and on-call duties per month were significantly lower among both female and male hospital doctors in Norway than in Germany. A considerable lower proportion of Norwegian doctors exceeded a 9 hours' work day plus 60 hours on-call per month.\nWork time and self rated health of hospital doctors in Norway and Germany, aged 25-65 years and employed in full time. Data are % (n) of respondents.\n* p < 0.0001, differences between countries using Pearson' Chi-square test\nIn both countries, male doctors worked significantly longer days and female doctors more hours on-call. However, we found no sex differences in the prevalence of Norwegian work time pattern (data not shown).\nThe majority of the doctors in both countries reported good health, but this proportion was significantly lower in Germany (Table 1). There were no gender differences in self rated health in either country (data not shown).\nIn a logistic regression model (Table 2, Model I) the simultaneous effect of sex, age and work country on self rated health was explored. The model fit the data fairly well (p = .358, Hosmer-Lemeshow test). When \"Norwegian work time pattern\" was included as predictor (Model II), there was a moderate decrease in -2 Log likelihood from 1681 to 1677, suggesting an improvement in model fit (to p = .498, Hosmer Lemeshow).\nLogistic regressions with good self rated health as response variable, without (Model I) and with (Model II) the Norwegian work time pattern. 1 503 full time hospital doctors in Germany and Norway.\n(‡) Not having the combination of working more than 9 hours a day and more than 60 hours a month on-call", "The present study shows how self rated health is associated with hospital doctors' work hours in Norway and Germany. German doctors work considerably longer hours and report significantly lower rates of good self rated health than their Norwegian colleagues.\nThe Norwegian work time pattern (Table 2) was a significant predictor of good self rated health. This can partly be explained by more recovery time [1,10] and less strain related to long work hours [11]. In a logistic regression, the effect of working in Norway was a stronger independent predictor of good self rated health than following the Norwegian work time pattern. 
This suggests that cultural factors other than the actual work time pattern account for part of the observed difference in self reported health.\nHospital doctors' work conditions are strongly associated with the work organisation [24] and the national directives [15]. Thus, the national regulations of work conditions - all aspects of work life including salary, control over clinical work and professional autonomy, collegial support and work time - may impact on the doctors' health.\nThere are considerable differences in work conditions for doctors in the two countries. A recent study on job satisfaction of Norwegian and German hospital doctors shows that Norwegian doctors enjoy a higher level of job satisfaction, suggesting a better work atmosphere in Norwegian hospitals, with lower physical burden, better collegial environment, more professional autonomy, more control over clinical work and shorter work hours [8]. That job satisfaction and other work conditions are determinants of health are well documented [25,26]. In Germany, several regulations and restrictions on doctors' remuneration and workload have been implemented during the last few years. The workload, expressed by increasing patient throughput and a corresponding reduction in the average duration of hospital stay, has increased. The situation is aggravated by understaffing and increasing migration of German doctors to other countries, often motivated by unacceptable work conditions [27]. In 2006 work hours increased from 38.5 to 40 or 42 hours per week for most of German hospital doctors without a corresponding increase in salary [18]. Working overtime - usually uncompensated - is considered the norm in German hospitals [28]. In Norway, regular weekly hours for hospital doctors have remained stable at 38 to 40 for the last decade [17] with a steady growth in salary [29]. Norwegian hospital doctors also have a lower workload in terms of number of hospital dismissals and more practising doctors per capita [30].\nAnother cultural difference might lie in the adherence to mandatory regulations of hospital doctors' work time. According to a member survey of the German doctors union [28] and a report of the Norwegian Medical Association [31], the majority of doctors in German hospitals (59%) complained about the renege on stipulated maximum weekly work hours, while only 30% of the Norwegian hospital doctors reported a pressure from the hospital administrations to deviate from the work time agreements. Respect for work time regulations and a good balance between professional and private life seem to be important cultural values in Norway. In the most recent European Working Conditions Survey [32], Norway was found to have the second-lowest average weekly work time, and the lowest percentage among European countries of employees with a weekly work time over 48 hours.\nAccording to the job demand-control model of Karasek and Theorell [33], high job demands (workload) in combination with low job control (autonomy, decision latitude) may have negative health effects, and work overload has been shown to be a significant stressor among doctors [11]. A Norwegian study documents that stress among doctors increases with increasing voluntary or involuntary overtime [34]. 
A recent survey of Dutch full-time employees concludes that involuntary overtime without reward represents a threat to the workers' health [35].\nThus, the significantly lower percentage of doctors in Germany with good self reported health could be ascribed not only to the higher amount of work hours on weekdays and on call duties, but also to negative aspects of the work organization such as higher work load, less autonomy in job-related decisions combined with less control over work hours and higher demand for uncompensated overtime. Unfortunately we did not have comparable data on these worklife aspects for the present study.\nIn terms of health care policy, better work time control could be the first step to improve doctors' health. Work time reduction and control have traditionally been seen as a feature of health care in the European work time regulations [2-5]. Good professional climate, high professional autonomy and monetary recognition of clinical work are also essential [11,33-35]. The health of the doctors is an important public health issue with direct bearing on the quality and stability of health care systems, as well as on the doctors' well-being [1,13,14,16].\n[SUBTITLE] Strengths and limitations [SUBSECTION] The strength of this study lies first and foremost in the comparative and representative datasets, making the results generalisable to the entire population of hospital doctors in Germany and Norway. The high validity of the self rated health question [22,23], similarities in measurement methods, and comparable elements of work hours are also strengths of the study.\nOne limitation is clearly a possible cultural difference in health perception. According to the \"World Value Survey\" [36] 79.5% of Norwegians rate their subjective health as \"good or very good\", compared to 71.6% in Germany, indicating either a difference in health perception or an actual difference in population health. The latter is more or less ruled out by data from the \"Atlas of the Health in Europe\" [9] where the general health status expressed by estimated percentage of life lived in good health or free of disability is similar in Norway (male: 92.2%, female: 90.1%) and Germany (male: 92.1%, female: 90.7%). Furthermore, in a recent Finnish study on self rated health [23] it is argued that doctors \"probably share a fairly similar general understanding of what constitutes health and what information is essential to describing it\", indicating that the intercultural reliability of our health measure should be sufficient, particularly since our respondents are all doctors.\nThe fact that the two highest response levels of the health question had different wordings is of concern. However, since the \"average\" and the lower levels were identical, the dichotomization of this measure should make the two samples directly comparable.\nOne might speculate whether the two year time difference between the surveys (2006 in Germany and 2008 in Norway) may affect the results, but this does not seem to be the case. Between 2006 and 2008, the regulations of contracted weekly hours (N: 38-40 hours; G: 38.5-42 hours) and the maximum weekly hours including on-call of hospital doctors (N: 60 hours; G: 66 hours) remained unchanged [17,18,37,38]. According to a recent analysis, the satisfaction with work time among Norwegian hospital doctors was found to be stable from 2000 to 2006 [39]. 
In Germany, reports about poor working conditions, including low income, high workload and long work hours among hospital doctors continued from 2006 to 2008 [18].\nA further limitation is the relatively low response rates. This may reflect a limited willingness to participate in surveys compared with other Europeans [40]. Another reason for non-response could be that the doctors did not find time to complete the questionnaire. Nevertheless, it should be noted that despite the fact that no reminder was sent in Germany, response rates of 58.2% and 65% are better than in many other doctor surveys [19].\nThe intensity of on-call duties from home was not measured in Germany. Many doctors perform on-call duties from home up to every other day, as \"backup cover\". It should also be taken into account that scientific and administrative tasks are often carried out at home after regular work hours. These factors would increase the actual work time still further.\nDifferences in specialty patterns might explain some of our findings. In Norway, 14.5% of the respondents worked in the surgical domain, 34.0% in internal medicine and 51.5% in other specialties. In Germany the respective proportions were 29.7%, 29.1% and 41.2%. In Germany daily work hours were identical (median 10 hours) and monthly hours on-call higher among surgeons (median 128 hours) than in internal medicine (median 112 hours). In Norway surgeons and internal medicine doctors worked similar hours, median 9 hours per day and 19-20 hours on call per month. However, the inclusion of specialty as categorical variable in our logistic model (table 2) did not make any significant difference.\nIn our final model (table 2) we have included work hours, age, sex and workplace country as possible predictors of good self reported health. It is likely that also other variables such as coping, other workplace hazards or local regulations affect the relationship between work hours and health [1,35], but such data have not been available for this study.\nFurthermore, the study only includes doctors who are currently working in hospitals, and not those who have already left their jobs due to excessive demands or ill health. At present, there is an increasing influx of German doctors to other professions or to other countries, including Norway - usually driven by demanding work schedules and excessive work hours [7,27]. Therefore, it would be interesting to collect data from the hospital doctors in Germany who have moved to Norway, and compare with the doctors still working in German hospitals.\nThe strength of this study lies first and foremost in the comparative and representative datasets, making the results generalisable to the entire population of hospital doctors in Germany and Norway. The high validity of the self rated health question [22,23], similarities in measurement methods, and comparable elements of work hours are also strengths of the study.\nOne limitation is clearly a possible cultural difference in health perception. According to the \"World Value Survey\" [36] 79.5% of Norwegians rate their subjective health as \"good or very good\", compared to 71.6% in Germany, indicating either a difference in health perception or an actual difference in population health. 
The latter is more or less ruled out by data from the \"Atlas of the Health in Europe\" [9] where the general health status expressed by estimated percentage of life lived in good health or free of disability is similar in Norway (male: 92.2%, female: 90.1%) and Germany (male: 92.1%, female: 90.7%). Furthermore, in a recent Finnish study on self rated health [23] it is argued that doctors \"probably share a fairly similar general understanding of what constitutes health and what information is essential to describing it\", indicating that the intercultural reliability of our health measure should be sufficient, particularly since our respondents are all doctors.\nThe fact that the two highest response levels of the health question had different wordings is of concern. However, since the \"average\" and the lower levels were identical, the dichotomization of this measure should make the two samples directly comparable.\nOne might speculate whether the two year time difference between the surveys (2006 in Germany and 2008 in Norway) may affect the results, but this does not seem to be the case. Between 2006 and 2008, the regulations of contracted weekly hours (N: 38-40 hours; G: 38.5-42 hours) and the maximum weekly hours including on-call of hospital doctors (N: 60 hours; G: 66 hours) remained unchanged [17,18,37,38]. According to a recent analysis, the satisfaction with work time among Norwegian hospital doctors was found to be stable from 2000 to 2006 [39]. In Germany, reports about poor working conditions, including low income, high workload and long work hours among hospital doctors continued from 2006 to 2008 [18].\nA further limitation is the relatively low response rates. This may reflect a limited willingness to participate in surveys compared with other Europeans [40]. Another reason for non-response could be that the doctors did not find time to complete the questionnaire. Nevertheless, it should be noted that despite the fact that no reminder was sent in Germany, response rates of 58.2% and 65% are better than in many other doctor surveys [19].\nThe intensity of on-call duties from home was not measured in Germany. Many doctors perform on-call duties from home up to every other day, as \"backup cover\". It should also be taken into account that scientific and administrative tasks are often carried out at home after regular work hours. These factors would increase the actual work time still further.\nDifferences in specialty patterns might explain some of our findings. In Norway, 14.5% of the respondents worked in the surgical domain, 34.0% in internal medicine and 51.5% in other specialties. In Germany the respective proportions were 29.7%, 29.1% and 41.2%. In Germany daily work hours were identical (median 10 hours) and monthly hours on-call higher among surgeons (median 128 hours) than in internal medicine (median 112 hours). In Norway surgeons and internal medicine doctors worked similar hours, median 9 hours per day and 19-20 hours on call per month. However, the inclusion of specialty as categorical variable in our logistic model (table 2) did not make any significant difference.\nIn our final model (table 2) we have included work hours, age, sex and workplace country as possible predictors of good self reported health. 
It is likely that also other variables such as coping, other workplace hazards or local regulations affect the relationship between work hours and health [1,35], but such data have not been available for this study.\nFurthermore, the study only includes doctors who are currently working in hospitals, and not those who have already left their jobs due to excessive demands or ill health. At present, there is an increasing influx of German doctors to other professions or to other countries, including Norway - usually driven by demanding work schedules and excessive work hours [7,27]. Therefore, it would be interesting to collect data from the hospital doctors in Germany who have moved to Norway, and compare with the doctors still working in German hospitals.", "The strength of this study lies first and foremost in the comparative and representative datasets, making the results generalisable to the entire population of hospital doctors in Germany and Norway. The high validity of the self rated health question [22,23], similarities in measurement methods, and comparable elements of work hours are also strengths of the study.\nOne limitation is clearly a possible cultural difference in health perception. According to the \"World Value Survey\" [36] 79.5% of Norwegians rate their subjective health as \"good or very good\", compared to 71.6% in Germany, indicating either a difference in health perception or an actual difference in population health. The latter is more or less ruled out by data from the \"Atlas of the Health in Europe\" [9] where the general health status expressed by estimated percentage of life lived in good health or free of disability is similar in Norway (male: 92.2%, female: 90.1%) and Germany (male: 92.1%, female: 90.7%). Furthermore, in a recent Finnish study on self rated health [23] it is argued that doctors \"probably share a fairly similar general understanding of what constitutes health and what information is essential to describing it\", indicating that the intercultural reliability of our health measure should be sufficient, particularly since our respondents are all doctors.\nThe fact that the two highest response levels of the health question had different wordings is of concern. However, since the \"average\" and the lower levels were identical, the dichotomization of this measure should make the two samples directly comparable.\nOne might speculate whether the two year time difference between the surveys (2006 in Germany and 2008 in Norway) may affect the results, but this does not seem to be the case. Between 2006 and 2008, the regulations of contracted weekly hours (N: 38-40 hours; G: 38.5-42 hours) and the maximum weekly hours including on-call of hospital doctors (N: 60 hours; G: 66 hours) remained unchanged [17,18,37,38]. According to a recent analysis, the satisfaction with work time among Norwegian hospital doctors was found to be stable from 2000 to 2006 [39]. In Germany, reports about poor working conditions, including low income, high workload and long work hours among hospital doctors continued from 2006 to 2008 [18].\nA further limitation is the relatively low response rates. This may reflect a limited willingness to participate in surveys compared with other Europeans [40]. Another reason for non-response could be that the doctors did not find time to complete the questionnaire. 
Nevertheless, it should be noted that despite the fact that no reminder was sent in Germany, response rates of 58.2% and 65% are better than in many other doctor surveys [19].\nThe intensity of on-call duties from home was not measured in Germany. Many doctors perform on-call duties from home up to every other day, as \"backup cover\". It should also be taken into account that scientific and administrative tasks are often carried out at home after regular work hours. These factors would increase the actual work time still further.\nDifferences in specialty patterns might explain some of our findings. In Norway, 14.5% of the respondents worked in the surgical domain, 34.0% in internal medicine and 51.5% in other specialties. In Germany the respective proportions were 29.7%, 29.1% and 41.2%. In Germany daily work hours were identical (median 10 hours) and monthly hours on-call higher among surgeons (median 128 hours) than in internal medicine (median 112 hours). In Norway surgeons and internal medicine doctors worked similar hours, median 9 hours per day and 19-20 hours on call per month. However, the inclusion of specialty as categorical variable in our logistic model (table 2) did not make any significant difference.\nIn our final model (table 2) we have included work hours, age, sex and workplace country as possible predictors of good self reported health. It is likely that also other variables such as coping, other workplace hazards or local regulations affect the relationship between work hours and health [1,35], but such data have not been available for this study.\nFurthermore, the study only includes doctors who are currently working in hospitals, and not those who have already left their jobs due to excessive demands or ill health. At present, there is an increasing influx of German doctors to other professions or to other countries, including Norway - usually driven by demanding work schedules and excessive work hours [7,27]. Therefore, it would be interesting to collect data from the hospital doctors in Germany who have moved to Norway, and compare with the doctors still working in German hospitals.", "The current study contributes to the international literature on work time of hospital doctors in particular by documenting an association between work hour patterns and self rated health. A lower percentage of German hospital doctors reported self rated health as \"good\", and controlled for other possible cultural differences, the work time pattern was a significant predictor of self rated health. Improved work organisation, in the form of reduced work hours, as well as better control over own work time, preferably combined with lower work load and reward for overwork are recommended strategies to improve doctors' health.", "We declare that we have no conflicts of interest.", "Both authors have many years of experience in statistics and survey methods, and contributed equally to analysing the data and writing the article.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/11/40/prepub\n" ]
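The Results element above compares two nested logistic models (Model I without and Model II with the Norwegian work time pattern) using the change in -2 log likelihood and the Hosmer-Lemeshow goodness-of-fit test. The following self-contained sketch shows how such a comparison can be reproduced; the data are simulated placeholders, and the small Hosmer-Lemeshow helper is an assumption of this example rather than the authors' implementation.

```python
# Hedged sketch of nested-model comparison for a binary outcome; simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 1503
df = pd.DataFrame({
    "norway": rng.integers(0, 2, n),
    "young": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "norwegian_pattern": rng.integers(0, 2, n),
})
# Outcome simulated so that country and work pattern carry some signal
logit_p = -0.5 + 1.2 * df["norway"] + 0.3 * df["norwegian_pattern"]
df["good_health"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

m1 = smf.logit("good_health ~ norway + young + male", df).fit(disp=False)
m2 = smf.logit("good_health ~ norway + young + male + norwegian_pattern",
               df).fit(disp=False)

# -2 log likelihood for each model; a drop indicates improved fit
print("-2LL Model I :", round(-2 * m1.llf, 1))
print("-2LL Model II:", round(-2 * m2.llf, 1))
# Likelihood ratio test for the added predictor (1 degree of freedom)
lr = 2 * (m2.llf - m1.llf)
print("LR test p =", round(chi2.sf(lr, 1), 4))

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow chi-square over (up to) deciles of predicted risk."""
    d = pd.DataFrame({"y": y, "p": p})
    d["bin"] = pd.qcut(d["p"], groups, labels=False, duplicates="drop")
    stat = 0.0
    for _, g in d.groupby("bin"):
        obs, exp, m = g["y"].sum(), g["p"].sum(), len(g)
        pbar = exp / m
        stat += (obs - exp) ** 2 / (m * pbar * (1 - pbar))
    dof = max(d["bin"].nunique() - 2, 1)
    return stat, chi2.sf(stat, dof)

hl_stat, hl_p = hosmer_lemeshow(df["good_health"], m2.predict())
print(f"Hosmer-Lemeshow: chi2 = {hl_stat:.2f}, p = {hl_p:.3f}")
```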
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Data collection and sample", "Questionnaire and measurement", "Analyses", "Results", "Sample characteristics", "Work hours and health", "Discussion", "Strengths and limitations", "Conclusion", "Competing interest", "Authors' contributions", "Pre-publication history" ]
[ "The detrimental effect of long work hours on health in different occupational groups, including the medical profession, is well documented, [1]. It is also well known that working long hours, as a result of extended days and on-call duties, is common among many hospital doctors in Europe. Although the Working Time Directives of the European Union [2,3] and rulings of the European Court of Justice [4,5] limit the work hours of doctors in the member countries, there are large national variations in the actual work time burden of hospital doctors in Europe [6]. The possible association between international differences in actual work hours and the general health status of hospital doctors is of interest.\nA comparison of previous studies is limited by methodological differences regarding data collection, sample characteristics and measurements. However, there is evidence for a considerable difference in work time burden for hospital doctors in two European countries - Norway and Germany. More leisure time and shorter and more regulated work hours in Norwegian hospitals have been a main motive for the migration of German hospital doctors to Norway [7]. Hospital doctors in Germany report significantly lower job satisfaction, compared with their colleagues in Norway, and the largest difference was observed on satisfaction with work hours as one of ten components of the job satisfaction scale [8]. In this paper we look into the differences in doctors' actual work hours in Norwegian and German hospitals, and whether this difference is associated with self rated health.\nIt is feasible to perform reliable and comparable analyses in these two countries. The general health status of the populations in Norway and Germany, expressed by life expectancy at birth and estimated percentage of life lived in good health or free of disability are similar [9].\nThe effort-recovery model [10] explains the relationship between long work hours and poor health. It implies that long hours can lead to insufficient recovery, which in turn may cause various health problems [1]. Other theoretical models have assumed that the number of hours worked is directly related to stress [11], which may challenge the doctors' mental and physical health. Excessive work hours and insufficient rest periods are commonly known to be exhausting. Consequently, previous investigations have tended to concentrate on the effect of work hours on mental health such as fatigue, mood changes, sleep disturbance and burnout [12]. Most studies have focused on specific positions or specialties, while little attention has been given to the whole group of hospital doctors [13,14].\nThe aim of this study is to examine and compare the associations between actual work hours and self rated health in national samples of Norwegian and German hospital doctors. We expect to find that hospital doctors in Germany report longer work hours and poorer health than their colleagues in Norway.\nTo our knowledge, no comparative study like this has been done; hence this study may be of importance in the present discussion on doctors' work hours [6,15] and health [14,16].", "[SUBTITLE] Data collection and sample [SUBSECTION] In Germany a 12 page postal questionnaire was sent in September/October 2006 from the German Hospital Institute to 3,295 hospital doctors, with no reminders. In Norway a 14 page postal questionnaire was sent in October and November 2008 to 1,650 doctors of all kinds, with one reminder. 
Both The German Hospital Institute and The Research Institute of The Norwegian Medical Association are independent research institutes with experience in surveys on doctors' health and work conditions. The response rates were 58.2% (1,917/3,295) in Germany and 65.0% (1,072/1,650) in Norway, of which 592 were hospital doctors. Age between 25 and 65 years and working in a hospital setting with a traditional work pattern - day time work usually combined with on-call duties - were inclusion criteria. The final sample comprised 1,822 respondents, 1,260 in Germany and 562 in Norway.\nIn Germany a 12 page postal questionnaire was sent in September/October 2006 from the German Hospital Institute to 3,295 hospital doctors, with no reminders. In Norway a 14 page postal questionnaire was sent in October and November 2008 to 1,650 doctors of all kinds, with one reminder. Both The German Hospital Institute and The Research Institute of The Norwegian Medical Association are independent research institutes with experience in surveys on doctors' health and work conditions. The response rates were 58.2% (1,917/3,295) in Germany and 65.0% (1,072/1,650) in Norway, of which 592 were hospital doctors. Age between 25 and 65 years and working in a hospital setting with a traditional work pattern - day time work usually combined with on-call duties - were inclusion criteria. The final sample comprised 1,822 respondents, 1,260 in Germany and 562 in Norway.\n[SUBTITLE] Questionnaire and measurement [SUBSECTION] Both the German and the Norwegian questionnaire included a question on the average number of work hours per day: \"On an average work day, how many hours do you work (including overtime, excluding on-call duties)\". In addition, the average number of hours on-call per month was recorded, in Norway with the question: \"In an average month, about how many hours do you have on-call duties?\", in Germany with a similar question: \"In an average month, about how many on-call duties do you have on weekday and weekend? About how many hours are you on-call duty on a weekday and weekend?\" The standard full time workweek is between 38-40 hours in Norway and 40-42 hours in Germany [17,18], and almost all our respondents worked at least this much. Only doctors working full time were included in the study.\nWork hours of most hospital doctors in Norway, Germany and in other countries consist of hours at work days and on call duties. Work hours among hospital doctors can be measured by using a composite index of hours at work day and on-call duties [14,19]. Our hypothesis is that German hospital doctors report longer work hours and poorer health than their Norwegian colleagues. For the purpose of this study we made a distinction between the doctors who worked both more than 9 hours per day and more than 60 hours on-call per month, and those who did not. Since very few Norwegian doctors meet these criteria we denote not having this pattern of long hours for the \"Norwegian work time pattern\".\nHealth was measured by a single question: \"In general, would you say your health is (G: Wie würden Sie Ihren gegenwärtigen Gesundheitszustand beschrieben? N: Stort sett, vil du si at din helse er:) with response alternatives in Germany very good (1: sehr gut), good (2: gut), average (3: zufriedenstellend, synonym for durchschnittlich [20]), less good (4: weniger gut) , and poor (5: schlecht) and in Norway good (1: god), fairly good (2: nokså god), average (3: middels), rather poor (4: nokså dårlig), and poor (5: dårlig). 
The wording of the two highest response levels differed in the two countries; \"very good\" and \"good\" in Germany and \"good\" and \"fairly good\" in Norway. However, the middle (average) level is the same, as are the levels below. Hence we dichotomized the original five response levels into \"good\" (above average; categories 1, 2) and \"average or below\" (categories 3-5).\nThe question on self rated health is thoroughly validated and widely used in Norwegian [21], German and other surveys [22]. It is also considered to be a good indicator of mortality risk, morbidity, and general health status [21-23].\nBoth the German and the Norwegian questionnaire included a question on the average number of work hours per day: \"On an average work day, how many hours do you work (including overtime, excluding on-call duties)\". In addition, the average number of hours on-call per month was recorded, in Norway with the question: \"In an average month, about how many hours do you have on-call duties?\", in Germany with a similar question: \"In an average month, about how many on-call duties do you have on weekday and weekend? About how many hours are you on-call duty on a weekday and weekend?\" The standard full time workweek is between 38-40 hours in Norway and 40-42 hours in Germany [17,18], and almost all our respondents worked at least this much. Only doctors working full time were included in the study.\nWork hours of most hospital doctors in Norway, Germany and in other countries consist of hours at work days and on call duties. Work hours among hospital doctors can be measured by using a composite index of hours at work day and on-call duties [14,19]. Our hypothesis is that German hospital doctors report longer work hours and poorer health than their Norwegian colleagues. For the purpose of this study we made a distinction between the doctors who worked both more than 9 hours per day and more than 60 hours on-call per month, and those who did not. Since very few Norwegian doctors meet these criteria we denote not having this pattern of long hours for the \"Norwegian work time pattern\".\nHealth was measured by a single question: \"In general, would you say your health is (G: Wie würden Sie Ihren gegenwärtigen Gesundheitszustand beschrieben? N: Stort sett, vil du si at din helse er:) with response alternatives in Germany very good (1: sehr gut), good (2: gut), average (3: zufriedenstellend, synonym for durchschnittlich [20]), less good (4: weniger gut) , and poor (5: schlecht) and in Norway good (1: god), fairly good (2: nokså god), average (3: middels), rather poor (4: nokså dårlig), and poor (5: dårlig). The wording of the two highest response levels differed in the two countries; \"very good\" and \"good\" in Germany and \"good\" and \"fairly good\" in Norway. However, the middle (average) level is the same, as are the levels below. Hence we dichotomized the original five response levels into \"good\" (above average; categories 1, 2) and \"average or below\" (categories 3-5).\nThe question on self rated health is thoroughly validated and widely used in Norwegian [21], German and other surveys [22]. It is also considered to be a good indicator of mortality risk, morbidity, and general health status [21-23].\n[SUBTITLE] Analyses [SUBSECTION] We compared proportions by Pearson's Chi-square test and interval variables (age) by calculating 95% confidence intervals. 
Logistic regression analyses were used to assess the simultaneous effects of workplace country, age, gender and work hours on self rated health. Units with missing items were excluded. SPSS, version 17.0 was used for the analyses.\nWe compared proportions by Pearson's Chi-square test and interval variables (age) by calculating 95% confidence intervals. Logistic regression analyses were used to assess the simultaneous effects of workplace country, age, gender and work hours on self rated health. Units with missing items were excluded. SPSS, version 17.0 was used for the analyses.", "In Germany a 12 page postal questionnaire was sent in September/October 2006 from the German Hospital Institute to 3,295 hospital doctors, with no reminders. In Norway a 14 page postal questionnaire was sent in October and November 2008 to 1,650 doctors of all kinds, with one reminder. Both The German Hospital Institute and The Research Institute of The Norwegian Medical Association are independent research institutes with experience in surveys on doctors' health and work conditions. The response rates were 58.2% (1,917/3,295) in Germany and 65.0% (1,072/1,650) in Norway, of which 592 were hospital doctors. Age between 25 and 65 years and working in a hospital setting with a traditional work pattern - day time work usually combined with on-call duties - were inclusion criteria. The final sample comprised 1,822 respondents, 1,260 in Germany and 562 in Norway.", "Both the German and the Norwegian questionnaire included a question on the average number of work hours per day: \"On an average work day, how many hours do you work (including overtime, excluding on-call duties)\". In addition, the average number of hours on-call per month was recorded, in Norway with the question: \"In an average month, about how many hours do you have on-call duties?\", in Germany with a similar question: \"In an average month, about how many on-call duties do you have on weekday and weekend? About how many hours are you on-call duty on a weekday and weekend?\" The standard full time workweek is between 38-40 hours in Norway and 40-42 hours in Germany [17,18], and almost all our respondents worked at least this much. Only doctors working full time were included in the study.\nWork hours of most hospital doctors in Norway, Germany and in other countries consist of hours at work days and on call duties. Work hours among hospital doctors can be measured by using a composite index of hours at work day and on-call duties [14,19]. Our hypothesis is that German hospital doctors report longer work hours and poorer health than their Norwegian colleagues. For the purpose of this study we made a distinction between the doctors who worked both more than 9 hours per day and more than 60 hours on-call per month, and those who did not. Since very few Norwegian doctors meet these criteria we denote not having this pattern of long hours for the \"Norwegian work time pattern\".\nHealth was measured by a single question: \"In general, would you say your health is (G: Wie würden Sie Ihren gegenwärtigen Gesundheitszustand beschrieben? N: Stort sett, vil du si at din helse er:) with response alternatives in Germany very good (1: sehr gut), good (2: gut), average (3: zufriedenstellend, synonym for durchschnittlich [20]), less good (4: weniger gut) , and poor (5: schlecht) and in Norway good (1: god), fairly good (2: nokså god), average (3: middels), rather poor (4: nokså dårlig), and poor (5: dårlig). 
The wording of the two highest response levels differed in the two countries; \"very good\" and \"good\" in Germany and \"good\" and \"fairly good\" in Norway. However, the middle (average) level is the same, as are the levels below. Hence we dichotomized the original five response levels into \"good\" (above average; categories 1, 2) and \"average or below\" (categories 3-5).\nThe question on self rated health is thoroughly validated and widely used in Norwegian [21], German and other surveys [22]. It is also considered to be a good indicator of mortality risk, morbidity, and general health status [21-23].", "We compared proportions by Pearson's Chi-square test and interval variables (age) by calculating 95% confidence intervals. Logistic regression analyses were used to assess the simultaneous effects of workplace country, age, gender and work hours on self rated health. Units with missing items were excluded. SPSS, version 17.0 was used for the analyses.", "[SUBTITLE] Sample characteristics [SUBSECTION] The gender distribution was similar in Germany and Norway, with 62.1% (783/1260) and 58.2% (327/560) males respectively. The German doctors were significantly younger, with a mean age of 42.7 (95% CI 42.1 to 43.3) vs. 48.6 (47.5 to 49.7) years for males, and 38.6 (37.9 to 39.4) vs. 42.7 (41.5 to 43.8) years for females.\nThe gender distribution was similar in Germany and Norway, with 62.1% (783/1260) and 58.2% (327/560) males respectively. The German doctors were significantly younger, with a mean age of 42.7 (95% CI 42.1 to 43.3) vs. 48.6 (47.5 to 49.7) years for males, and 38.6 (37.9 to 39.4) vs. 42.7 (41.5 to 43.8) years for females.\n[SUBTITLE] Work hours and health [SUBSECTION] Work hours and self rated general health are shown in Table 1. The work hours per day and on-call duties per month were significantly lower among both female and male hospital doctors in Norway than in Germany. A considerable lower proportion of Norwegian doctors exceeded a 9 hours' work day plus 60 hours on-call per month.\nWork time and self rated health of hospital doctors in Norway and Germany, aged 25-65 years and employed in full time. Data are % (n) of respondents.\n* p < 0.0001, differences between countries using Pearson' Chi-square test\nIn both countries, male doctors worked significantly longer days and female doctors more hours on-call. However, we found no sex differences in the prevalence of Norwegian work time pattern (data not shown).\nThe majority of the doctors in both countries reported good health, but this proportion was significantly lower in Germany (Table 1). There were no gender differences in self rated health in either country (data not shown).\nIn a logistic regression model (Table 2, Model I) the simultaneous effect of sex, age and work country on self rated health was explored. The model fit the data fairly well (p = .358, Hosmer-Lemeshow test). When \"Norwegian work time pattern\" was included as predictor (Model II), there was a moderate decrease in -2 Log likelihood from 1681 to 1677, suggesting an improvement in model fit (to p = .498, Hosmer Lemeshow).\nLogistic regressions with good self rated health as response variable, without (Model I) and with (Model II) the Norwegian work time pattern. 1 503 full time hospital doctors in Germany and Norway.\n(‡) Not having the combination of working more than 9 hours a day and more than 60 hours a month on-call\nWork hours and self rated general health are shown in Table 1. 
The work hours per day and on-call duties per month were significantly lower among both female and male hospital doctors in Norway than in Germany. A considerable lower proportion of Norwegian doctors exceeded a 9 hours' work day plus 60 hours on-call per month.\nWork time and self rated health of hospital doctors in Norway and Germany, aged 25-65 years and employed in full time. Data are % (n) of respondents.\n* p < 0.0001, differences between countries using Pearson' Chi-square test\nIn both countries, male doctors worked significantly longer days and female doctors more hours on-call. However, we found no sex differences in the prevalence of Norwegian work time pattern (data not shown).\nThe majority of the doctors in both countries reported good health, but this proportion was significantly lower in Germany (Table 1). There were no gender differences in self rated health in either country (data not shown).\nIn a logistic regression model (Table 2, Model I) the simultaneous effect of sex, age and work country on self rated health was explored. The model fit the data fairly well (p = .358, Hosmer-Lemeshow test). When \"Norwegian work time pattern\" was included as predictor (Model II), there was a moderate decrease in -2 Log likelihood from 1681 to 1677, suggesting an improvement in model fit (to p = .498, Hosmer Lemeshow).\nLogistic regressions with good self rated health as response variable, without (Model I) and with (Model II) the Norwegian work time pattern. 1 503 full time hospital doctors in Germany and Norway.\n(‡) Not having the combination of working more than 9 hours a day and more than 60 hours a month on-call", "The gender distribution was similar in Germany and Norway, with 62.1% (783/1260) and 58.2% (327/560) males respectively. The German doctors were significantly younger, with a mean age of 42.7 (95% CI 42.1 to 43.3) vs. 48.6 (47.5 to 49.7) years for males, and 38.6 (37.9 to 39.4) vs. 42.7 (41.5 to 43.8) years for females.", "Work hours and self rated general health are shown in Table 1. The work hours per day and on-call duties per month were significantly lower among both female and male hospital doctors in Norway than in Germany. A considerable lower proportion of Norwegian doctors exceeded a 9 hours' work day plus 60 hours on-call per month.\nWork time and self rated health of hospital doctors in Norway and Germany, aged 25-65 years and employed in full time. Data are % (n) of respondents.\n* p < 0.0001, differences between countries using Pearson' Chi-square test\nIn both countries, male doctors worked significantly longer days and female doctors more hours on-call. However, we found no sex differences in the prevalence of Norwegian work time pattern (data not shown).\nThe majority of the doctors in both countries reported good health, but this proportion was significantly lower in Germany (Table 1). There were no gender differences in self rated health in either country (data not shown).\nIn a logistic regression model (Table 2, Model I) the simultaneous effect of sex, age and work country on self rated health was explored. The model fit the data fairly well (p = .358, Hosmer-Lemeshow test). When \"Norwegian work time pattern\" was included as predictor (Model II), there was a moderate decrease in -2 Log likelihood from 1681 to 1677, suggesting an improvement in model fit (to p = .498, Hosmer Lemeshow).\nLogistic regressions with good self rated health as response variable, without (Model I) and with (Model II) the Norwegian work time pattern. 
1 503 full time hospital doctors in Germany and Norway.\n(‡) Not having the combination of working more than 9 hours a day and more than 60 hours a month on-call", "The present study shows how self rated health is associated with hospital doctors' work hours in Norway and Germany. German doctors work considerably longer hours and report significantly lower rates of good self rated health than their Norwegian colleagues.\nThe Norwegian work time pattern (Table 2) was a significant predictor of good self rated health. This can partly be explained by more recovery time [1,10] and less strain related to long work hours [11]. In a logistic regression, the effect of working in Norway was a stronger independent predictor of good self rated health than following the Norwegian work time pattern. This suggests that cultural factors other than the actual work time pattern account for part of the observed difference in self reported health.\nHospital doctors' work conditions are strongly associated with the work organisation [24] and the national directives [15]. Thus, the national regulations of work conditions - all aspects of work life including salary, control over clinical work and professional autonomy, collegial support and work time - may impact on the doctors' health.\nThere are considerable differences in work conditions for doctors in the two countries. A recent study on job satisfaction of Norwegian and German hospital doctors shows that Norwegian doctors enjoy a higher level of job satisfaction, suggesting a better work atmosphere in Norwegian hospitals, with lower physical burden, better collegial environment, more professional autonomy, more control over clinical work and shorter work hours [8]. That job satisfaction and other work conditions are determinants of health are well documented [25,26]. In Germany, several regulations and restrictions on doctors' remuneration and workload have been implemented during the last few years. The workload, expressed by increasing patient throughput and a corresponding reduction in the average duration of hospital stay, has increased. The situation is aggravated by understaffing and increasing migration of German doctors to other countries, often motivated by unacceptable work conditions [27]. In 2006 work hours increased from 38.5 to 40 or 42 hours per week for most of German hospital doctors without a corresponding increase in salary [18]. Working overtime - usually uncompensated - is considered the norm in German hospitals [28]. In Norway, regular weekly hours for hospital doctors have remained stable at 38 to 40 for the last decade [17] with a steady growth in salary [29]. Norwegian hospital doctors also have a lower workload in terms of number of hospital dismissals and more practising doctors per capita [30].\nAnother cultural difference might lie in the adherence to mandatory regulations of hospital doctors' work time. According to a member survey of the German doctors union [28] and a report of the Norwegian Medical Association [31], the majority of doctors in German hospitals (59%) complained about the renege on stipulated maximum weekly work hours, while only 30% of the Norwegian hospital doctors reported a pressure from the hospital administrations to deviate from the work time agreements. Respect for work time regulations and a good balance between professional and private life seem to be important cultural values in Norway. 
In the most recent European Working Conditions Survey [32], Norway was found to have the second-lowest average weekly work time, and the lowest percentage among European countries of employees with a weekly work time over 48 hours.\nAccording to the job demand-control model of Karasek and Theorell [33], high job demands (workload) in combination with low job control (autonomy, decision latitude) may have negative health effects, and work overload has been shown to be a significant stressor among doctors [11]. A Norwegian study documents that stress among doctors increases with increasing voluntary or involuntary overtime [34]. A recent survey of Dutch full-time employees concludes that involuntary overtime without reward represents a threat to the workers' health [35].\nThus, the significantly lower percentage of doctors in Germany with good self reported health could be ascribed not only to the higher amount of work hours on weekdays and on call duties, but also to negative aspects of the work organization such as higher work load, less autonomy in job-related decisions combined with less control over work hours and higher demand for uncompensated overtime. Unfortunately we did not have comparable data on these worklife aspects for the present study.\nIn terms of health care policy, better work time control could be the first step to improve doctors' health. Work time reduction and control have traditionally been seen as a feature of health care in the European work time regulations [2-5]. Good professional climate, high professional autonomy and monetary recognition of clinical work are also essential [11,33-35]. The health of the doctors is an important public health issue with direct bearing on the quality and stability of health care systems, as well as on the doctors' well-being [1,13,14,16].\n[SUBTITLE] Strengths and limitations [SUBSECTION] The strength of this study lies first and foremost in the comparative and representative datasets, making the results generalisable to the entire population of hospital doctors in Germany and Norway. The high validity of the self rated health question [22,23], similarities in measurement methods, and comparable elements of work hours are also strengths of the study.\nOne limitation is clearly a possible cultural difference in health perception. According to the \"World Value Survey\" [36] 79.5% of Norwegians rate their subjective health as \"good or very good\", compared to 71.6% in Germany, indicating either a difference in health perception or an actual difference in population health. The latter is more or less ruled out by data from the \"Atlas of the Health in Europe\" [9] where the general health status expressed by estimated percentage of life lived in good health or free of disability is similar in Norway (male: 92.2%, female: 90.1%) and Germany (male: 92.1%, female: 90.7%). Furthermore, in a recent Finnish study on self rated health [23] it is argued that doctors \"probably share a fairly similar general understanding of what constitutes health and what information is essential to describing it\", indicating that the intercultural reliability of our health measure should be sufficient, particularly since our respondents are all doctors.\nThe fact that the two highest response levels of the health question had different wordings is of concern. 
However, since the \"average\" and the lower levels were identical, the dichotomization of this measure should make the two samples directly comparable.\nOne might speculate whether the two year time difference between the surveys (2006 in Germany and 2008 in Norway) may affect the results, but this does not seem to be the case. Between 2006 and 2008, the regulations of contracted weekly hours (N: 38-40 hours; G: 38.5-42 hours) and the maximum weekly hours including on-call of hospital doctors (N: 60 hours; G: 66 hours) remained unchanged [17,18,37,38]. According to a recent analysis, the satisfaction with work time among Norwegian hospital doctors was found to be stable from 2000 to 2006 [39]. In Germany, reports about poor working conditions, including low income, high workload and long work hours among hospital doctors continued from 2006 to 2008 [18].\nA further limitation is the relatively low response rates. This may reflect a limited willingness to participate in surveys compared with other Europeans [40]. Another reason for non-response could be that the doctors did not find time to complete the questionnaire. Nevertheless, it should be noted that despite the fact that no reminder was sent in Germany, response rates of 58.2% and 65% are better than in many other doctor surveys [19].\nThe intensity of on-call duties from home was not measured in Germany. Many doctors perform on-call duties from home up to every other day, as \"backup cover\". It should also be taken into account that scientific and administrative tasks are often carried out at home after regular work hours. These factors would increase the actual work time still further.\nDifferences in specialty patterns might explain some of our findings. In Norway, 14.5% of the respondents worked in the surgical domain, 34.0% in internal medicine and 51.5% in other specialties. In Germany the respective proportions were 29.7%, 29.1% and 41.2%. In Germany daily work hours were identical (median 10 hours) and monthly hours on-call higher among surgeons (median 128 hours) than in internal medicine (median 112 hours). In Norway surgeons and internal medicine doctors worked similar hours, median 9 hours per day and 19-20 hours on call per month. However, the inclusion of specialty as categorical variable in our logistic model (table 2) did not make any significant difference.\nIn our final model (table 2) we have included work hours, age, sex and workplace country as possible predictors of good self reported health. It is likely that also other variables such as coping, other workplace hazards or local regulations affect the relationship between work hours and health [1,35], but such data have not been available for this study.\nFurthermore, the study only includes doctors who are currently working in hospitals, and not those who have already left their jobs due to excessive demands or ill health. At present, there is an increasing influx of German doctors to other professions or to other countries, including Norway - usually driven by demanding work schedules and excessive work hours [7,27]. Therefore, it would be interesting to collect data from the hospital doctors in Germany who have moved to Norway, and compare with the doctors still working in German hospitals.\nThe strength of this study lies first and foremost in the comparative and representative datasets, making the results generalisable to the entire population of hospital doctors in Germany and Norway. 
The high validity of the self rated health question [22,23], similarities in measurement methods, and comparable elements of work hours are also strengths of the study.\nOne limitation is clearly a possible cultural difference in health perception. According to the \"World Value Survey\" [36] 79.5% of Norwegians rate their subjective health as \"good or very good\", compared to 71.6% in Germany, indicating either a difference in health perception or an actual difference in population health. The latter is more or less ruled out by data from the \"Atlas of the Health in Europe\" [9] where the general health status expressed by estimated percentage of life lived in good health or free of disability is similar in Norway (male: 92.2%, female: 90.1%) and Germany (male: 92.1%, female: 90.7%). Furthermore, in a recent Finnish study on self rated health [23] it is argued that doctors \"probably share a fairly similar general understanding of what constitutes health and what information is essential to describing it\", indicating that the intercultural reliability of our health measure should be sufficient, particularly since our respondents are all doctors.\nThe fact that the two highest response levels of the health question had different wordings is of concern. However, since the \"average\" and the lower levels were identical, the dichotomization of this measure should make the two samples directly comparable.\nOne might speculate whether the two year time difference between the surveys (2006 in Germany and 2008 in Norway) may affect the results, but this does not seem to be the case. Between 2006 and 2008, the regulations of contracted weekly hours (N: 38-40 hours; G: 38.5-42 hours) and the maximum weekly hours including on-call of hospital doctors (N: 60 hours; G: 66 hours) remained unchanged [17,18,37,38]. According to a recent analysis, the satisfaction with work time among Norwegian hospital doctors was found to be stable from 2000 to 2006 [39]. In Germany, reports about poor working conditions, including low income, high workload and long work hours among hospital doctors continued from 2006 to 2008 [18].\nA further limitation is the relatively low response rates. This may reflect a limited willingness to participate in surveys compared with other Europeans [40]. Another reason for non-response could be that the doctors did not find time to complete the questionnaire. Nevertheless, it should be noted that despite the fact that no reminder was sent in Germany, response rates of 58.2% and 65% are better than in many other doctor surveys [19].\nThe intensity of on-call duties from home was not measured in Germany. Many doctors perform on-call duties from home up to every other day, as \"backup cover\". It should also be taken into account that scientific and administrative tasks are often carried out at home after regular work hours. These factors would increase the actual work time still further.\nDifferences in specialty patterns might explain some of our findings. In Norway, 14.5% of the respondents worked in the surgical domain, 34.0% in internal medicine and 51.5% in other specialties. In Germany the respective proportions were 29.7%, 29.1% and 41.2%. In Germany daily work hours were identical (median 10 hours) and monthly hours on-call higher among surgeons (median 128 hours) than in internal medicine (median 112 hours). In Norway surgeons and internal medicine doctors worked similar hours, median 9 hours per day and 19-20 hours on call per month. 
However, the inclusion of specialty as categorical variable in our logistic model (table 2) did not make any significant difference.\nIn our final model (table 2) we have included work hours, age, sex and workplace country as possible predictors of good self reported health. It is likely that also other variables such as coping, other workplace hazards or local regulations affect the relationship between work hours and health [1,35], but such data have not been available for this study.\nFurthermore, the study only includes doctors who are currently working in hospitals, and not those who have already left their jobs due to excessive demands or ill health. At present, there is an increasing influx of German doctors to other professions or to other countries, including Norway - usually driven by demanding work schedules and excessive work hours [7,27]. Therefore, it would be interesting to collect data from the hospital doctors in Germany who have moved to Norway, and compare with the doctors still working in German hospitals.", "The strength of this study lies first and foremost in the comparative and representative datasets, making the results generalisable to the entire population of hospital doctors in Germany and Norway. The high validity of the self rated health question [22,23], similarities in measurement methods, and comparable elements of work hours are also strengths of the study.\nOne limitation is clearly a possible cultural difference in health perception. According to the \"World Value Survey\" [36] 79.5% of Norwegians rate their subjective health as \"good or very good\", compared to 71.6% in Germany, indicating either a difference in health perception or an actual difference in population health. The latter is more or less ruled out by data from the \"Atlas of the Health in Europe\" [9] where the general health status expressed by estimated percentage of life lived in good health or free of disability is similar in Norway (male: 92.2%, female: 90.1%) and Germany (male: 92.1%, female: 90.7%). Furthermore, in a recent Finnish study on self rated health [23] it is argued that doctors \"probably share a fairly similar general understanding of what constitutes health and what information is essential to describing it\", indicating that the intercultural reliability of our health measure should be sufficient, particularly since our respondents are all doctors.\nThe fact that the two highest response levels of the health question had different wordings is of concern. However, since the \"average\" and the lower levels were identical, the dichotomization of this measure should make the two samples directly comparable.\nOne might speculate whether the two year time difference between the surveys (2006 in Germany and 2008 in Norway) may affect the results, but this does not seem to be the case. Between 2006 and 2008, the regulations of contracted weekly hours (N: 38-40 hours; G: 38.5-42 hours) and the maximum weekly hours including on-call of hospital doctors (N: 60 hours; G: 66 hours) remained unchanged [17,18,37,38]. According to a recent analysis, the satisfaction with work time among Norwegian hospital doctors was found to be stable from 2000 to 2006 [39]. In Germany, reports about poor working conditions, including low income, high workload and long work hours among hospital doctors continued from 2006 to 2008 [18].\nA further limitation is the relatively low response rates. This may reflect a limited willingness to participate in surveys compared with other Europeans [40]. 
Another reason for non-response could be that the doctors did not find time to complete the questionnaire. Nevertheless, it should be noted that despite the fact that no reminder was sent in Germany, response rates of 58.2% and 65% are better than in many other doctor surveys [19].\nThe intensity of on-call duties from home was not measured in Germany. Many doctors perform on-call duties from home up to every other day, as \"backup cover\". It should also be taken into account that scientific and administrative tasks are often carried out at home after regular work hours. These factors would increase the actual work time still further.\nDifferences in specialty patterns might explain some of our findings. In Norway, 14.5% of the respondents worked in the surgical domain, 34.0% in internal medicine and 51.5% in other specialties. In Germany the respective proportions were 29.7%, 29.1% and 41.2%. In Germany, daily work hours were identical in the two groups (median 10 hours), while monthly hours on-call were higher among surgeons (median 128 hours) than in internal medicine (median 112 hours). In Norway, surgeons and internal medicine doctors worked similar hours, median 9 hours per day and 19-20 hours on call per month. However, the inclusion of specialty as a categorical variable in our logistic model (Table 2) did not make any significant difference.\nIn our final model (Table 2) we included work hours, age, sex and workplace country as possible predictors of good self reported health. It is likely that other variables, such as coping, other workplace hazards or local regulations, also affect the relationship between work hours and health [1,35], but such data were not available for this study.\nFurthermore, the study only includes doctors who are currently working in hospitals, and not those who have already left their jobs due to excessive demands or ill health. At present, there is an increasing outflow of German doctors to other professions or to other countries, including Norway - usually driven by demanding work schedules and excessive work hours [7,27]. Therefore, it would be interesting to collect data from the hospital doctors in Germany who have moved to Norway, and compare them with the doctors still working in German hospitals.", "The current study contributes to the international literature on the work time of hospital doctors, in particular by documenting an association between work hour patterns and self rated health. A lower percentage of German hospital doctors reported their self rated health as \"good\", and when other possible cultural differences were controlled for, the work time pattern remained a significant predictor of self rated health. Improved work organisation, in the form of reduced work hours and better control over one's own work time, preferably combined with a lower work load and reward for overwork, are recommended strategies to improve doctors' health.", "We declare that we have no conflicts of interest.", "Both authors have many years of experience in statistics and survey methods, and contributed equally to analysing the data and writing the article.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/11/40/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[]
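The analyses described in this record (dichotomising self rated health and the work time measures, then fitting logistic regression Models I and II and comparing their -2 log-likelihoods) were run in SPSS 17.0. The sketch below is a rough illustration of the same steps in Python only; the input file and column names are hypothetical, and it is not the authors' analysis code.

```python
# Illustrative sketch (not the authors' SPSS syntax) of the dichotomisation
# and the Model I / Model II logistic regressions described in the Methods.
# Column names (health_1to5, hours_per_day, oncall_hours_month, ...) and the
# CSV file are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

def prepare(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Self rated health: "good" = the two response levels above "average".
    out["good_health"] = (out["health_1to5"] <= 2).astype(int)
    # "Norwegian work time pattern" = NOT (>9 h/day AND >60 h on-call/month).
    long_hours = (out["hours_per_day"] > 9) & (out["oncall_hours_month"] > 60)
    out["norwegian_pattern"] = (~long_hours).astype(int)
    return out

df = prepare(pd.read_csv("doctors_survey.csv"))  # hypothetical data file
m1 = smf.logit("good_health ~ age + C(sex) + C(country)", data=df).fit(disp=0)
m2 = smf.logit("good_health ~ age + C(sex) + C(country) + norwegian_pattern",
               data=df).fit(disp=0)

print(np.exp(m2.params))  # odds ratios for Model II
# Change in -2 log-likelihood between the nested models (1 degree of freedom):
lr = 2 * (m2.llf - m1.llf)
print(f"-2LL drop = {lr:.1f}, p = {chi2.sf(lr, df=1):.3f}")
```

The drop in -2 log-likelihood between the nested models corresponds to a likelihood-ratio test on one degree of freedom, which is the Model I versus Model II comparison reported in the Results.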
A p53-independent role for the MDM2 antagonist Nutlin-3 in DNA damage response initiation.
21338495
The mammalian DNA-damage response (DDR) has evolved to protect genome stability and maximize cell survival following DNA-damage. One of the key regulators of the DDR is p53, itself tightly regulated by MDM2. Following double-strand DNA breaks (DSBs), mediators including ATM are recruited to the site of DNA-damage. Subsequent phosphorylation of p53 by ATM and ATM-induced CHK2 results in p53 stabilization, ultimately intensifying transcription of p53-responsive genes involved in DNA repair, cell-cycle checkpoint control and apoptosis.
BACKGROUND
In the current study, we investigated the stabilization and activation of p53 and associated DDR proteins in response to treatment of human colorectal cancer cells (HCT116p53+/+) with the MDM2 antagonist, Nutlin-3.
METHODS
Using immunoblotting, Nutlin-3 was observed to stabilize p53, and activate p53 target proteins. Unexpectedly, Nutlin-3 also mediated phosphorylation of p53 at key DNA-damage-specific serine residues (Ser15, 20 and 37). Furthermore, Nutlin-3 induced activation of CHK2 and ATM - proteins required for DNA-damage-dependent phosphorylation and activation of p53, and the phosphorylation of BRCA1 and H2AX - proteins known to be activated specifically in response to DNA damage. Indeed, using immunofluorescent labeling, Nutlin-3 was seen to induce formation of γH2AX foci, an early hallmark of the DDR. Moreover, Nutlin-3 induced phosphorylation of key DDR proteins, initiated cell cycle arrest and led to formation of γH2AX foci in cells lacking p53, whilst γH2AX foci were also noted in MDM2-deficient cells.
RESULTS
To our knowledge, this is the first solid evidence showing a secondary role for Nutlin-3 as a DDR triggering agent, independent of p53 status, and unrelated to its role as an MDM2 antagonist.
CONCLUSION
[ "Animals", "Cells, Cultured", "DNA Damage", "Gene Expression Regulation", "Gene Knockdown Techniques", "HCT116 Cells", "Humans", "Imidazoles", "Mice", "Models, Biological", "Phosphorylation", "Piperazines", "Protein Serine-Threonine Kinases", "Protein Stability", "Proto-Oncogene Proteins c-mdm2", "Tumor Suppressor Protein p53" ]
3050855
null
null
Methods
Unless otherwise stated, all antibodies were purchased from New England Biolabs, Hertfordshire, UK, and all reagents, including Nutlin-3, were purchased from Sigma-Aldrich, Dorset, UK. [SUBTITLE] Cell Lines [SUBSECTION] Human colorectal cancer cell lines (HCT116p53-/- and HCT116p53+/+) were obtained from Professor Galina Selivanova (Karolinska Institute, Stockholm, Sweden), and mouse embryonic fibroblast (MEF) cells deficient in MDM2 (MEFMDM2-/-) were obtained from Professor Guillermina Lozano (MD Anderson Cancer Centre, University of Texas, USA). All cells were genotyped before arrival using, where necessary, primers specific to the deleted alleles. Cell lines were authenticated upon receipt using immunoblotting. Cells were sustained in Dulbecco's Modified Eagle's Medium, supplemented with 10% fetal bovine serum, 1% Penicillin/Streptomycin/L-Glutamine and 1% Amphotericin B (Invitrogen, Renfrewshire, UK). Cells were incubated at 37°C in a humidified atmosphere containing 5% CO2. Cells were passaged twice weekly, and were seeded at 1 × 10⁵ cells/mL during all experiments. [SUBTITLE] Western Blotting [SUBSECTION] Following treatments of varying length with 10 μM Nutlin-3 or 100 μM Etoposide, cells were collected and lysed in 2X Laemmli lysis buffer (4% w/v SDS, 20% v/v glycerol, 120 mM Tris pH 6.8). A 5 μL volume of each sample was diluted in 95 μL dH2O, added to 1 mL Lowry solution (50 parts 2% w/v sodium carbonate in 0.1 M sodium hydroxide solution to 1 part 0.5% w/v copper(II) sulphate in 1% w/v sodium citrate solution), incubated at room temperature for 10 minutes, added to 100 μL of 1 M Folin-Ciocalteu solution, and incubated for 30 minutes at room temperature before being transferred to cuvettes for determination of protein concentration using a CamSpec-M330 spectrometer. Samples of equal protein concentration were then loaded onto 6-15% acrylamide gels and underwent electrophoresis, followed by transfer onto PVDF (polyvinylidene fluoride) membranes. Membranes were blocked with 5% milk/TBS-T solution (5% w/v Marvel milk powder in 1X TBS-T solution comprising 50 mM Tris, 150 mM sodium chloride, 0.364% v/v hydrochloric acid, 0.5% v/v Tween-20) and probed overnight at 4°C for specific proteins of interest. Standard primary antibody dilutions were 1:1000 in 5% milk/TBS-T solution, except for CHK2 (1:100 in 5% BSA/TBS-T solution), and tubulin and actin (Merck Chemicals, Nottinghamshire, UK), which were used at 1:17000 in 5% milk/TBS-T solution. Standard secondary antibody dilutions were 1:2000 prepared in 5% milk/TBS-T solution. Chemiluminescence was detected using Lumiglo reagent (New England Biolabs, Hertfordshire, UK) according to the manufacturer's instructions, and hyperfilms (GE Healthcare, Buckinghamshire, UK) were developed using an Amersham SRX100A Hyperprocessor. [SUBTITLE] Flow Cytometry [SUBSECTION] After treatment with 100 μM Etoposide or 10 μM Nutlin-3 for various time periods, cells were trypsinised using 0.05% EDTA-free trypsin (Invitrogen, Renfrewshire, UK), collected and centrifuged, and the pellets resuspended in 70% ethanol before being stored for 24 hours at -20°C. Cells were later centrifuged, washed with 1X PBS and resuspended in 50 μg/mL Propidium Iodide/RNase A solution before cell cycle distribution was assessed on a Beckman Coulter Cytomics FC500 flow cytometer. [SUBTITLE] Immunofluorescence [SUBSECTION] Cells in 6-well plates were treated with 100 μM Etoposide or 10 μM Nutlin-3 for varying time periods before being fixed using 4% v/v paraformaldehyde solution, permeabilised with 0.5% v/v Triton-X100 solution, washed in 1X PBS and incubated overnight at 4°C with various antibodies prepared in 5% milk/TBS-T solution. Cells were then washed with 1X PBS, incubated for 2 hours with a 1:250 dilution of goat anti-rabbit Dylight488 antibody (New England Biolabs, Hertfordshire, UK) prepared in 1X PBS, before being washed once again with 1X PBS. Wells were then treated with one drop of Vectashield mounting medium containing DAPI (Vector Laboratories, Cambridgeshire, UK), covered with glass coverslips and sealed with clear nail polish. Cells were then observed at 40× magnification using a Zeiss LSM500 confocal microscope and analysed using LSM Image Browser software (Carl Zeiss, Oberkochen, Germany).
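The Western blotting protocol above reads Lowry assay absorbances on a CamSpec-M330 spectrometer, but does not spell out how those readings are converted into equal-protein loading volumes. The following is a minimal, hypothetical sketch of that conversion via a BSA standard curve; the standard concentrations, sample absorbances and 20 µg load are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of converting Lowry assay absorbance readings into protein
# concentrations using a linear BSA standard curve, then into loading volumes.
# All numeric values below are hypothetical; the protocol only states that
# absorbance was read on a CamSpec-M330 spectrometer.
import numpy as np

bsa_standards_ug_ml = np.array([0, 50, 100, 200, 400, 800])    # hypothetical
standard_abs = np.array([0.02, 0.10, 0.19, 0.36, 0.70, 1.32])  # hypothetical

# Least-squares line through the standards: A = slope * c + intercept
slope, intercept = np.polyfit(bsa_standards_ug_ml, standard_abs, deg=1)

def protein_conc(absorbance: float, dilution_factor: float = 20.0) -> float:
    """Back-calculate protein concentration (µg/mL) in the original lysate,
    correcting for the 5 µL in 100 µL dilution used in the assay."""
    return (absorbance - intercept) / slope * dilution_factor

sample_abs = [0.45, 0.61, 0.38]                                # hypothetical
concs = [protein_conc(a) for a in sample_abs]
# Equal loading: volume of each lysate containing, e.g., 20 µg of protein.
load_volumes_ul = [20.0 / (c / 1000.0) for c in concs]         # µg / (µg/µL)
print(list(zip(concs, load_volumes_ul)))
```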
null
null
null
null
[ "Background", "Cell Lines", "Western Blotting", "Flow Cytometry", "Immunofluorescence", "Results", "Nutlin-3 induces stabilisation of p53 and activation of p53 target proteins", "Nutlin-3 induces phosphorylation of p53 at key serine residues and activates several important DDR mediators", "Nutlin-3 induces G1/S cell cycle arrest", "Nutlin-3 induces H2AX phosphorylation and foci formation", "Nutlin-3 induced responses are independent of p53 and Nutlin-3- mediated inhibition of MDM2", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The p53 tumour suppressor protein, often referred to as the 'guardian of the genom', plays a critical role in mediating cellular stress responses such as that brought about by DNA-damage, and is therefore key in regulating a vast array of proteins involved in cell cycle progression and check-points, DNA repair and apoptosis [1].\nIn the absence of cellular stress, p53 is maintained at low levels by its ubiquitination and subsequent proteasomal degradation. This process can be mediated by one of several E3 ubiquitin ligases [2], but principally by MDM2 (mouse double minute 2), as illustrated in Figure 1A.\nSchematic representation of the interactions between p53 and MDM2. (A) In the absence of stress signals, p53 is bound to its negative regulator MDM2. MDM2 ubiquitinates p53, targeting it for degradation by the 26 S proteasome. (B) Cellular stress signals, such as that bought about by DNA-damage lead to activation of ATM/ATR. ATM/ATR mediate the phosphorylation of MDM2 and p53. Phosphorylated MDM2 undergoes auto-ubiquitination and degradation by the 26 S proteasome. Phosphorylated p53 undergoes nuclear localisation, tetramerisation, and binds to p53-responsive promoters to induce transcription of genes involved in the DDR. (C) Chemical structure of Nutlin-3.\nConversely, in the presence of cellular stress stimuli, two protein kinases - ATM (ataxia-telangiectasia mutated) and ATR (ATM and Rad3-related) orchestrate the DDR in order to preserve genome integrity. Whilst ATM is mainly activated in response to double-strand DNA breaks (DSBs), ATR is primarily activated following replicative errors that result in single-stranded DNA, however recent findings indicate DSB-mediated activation of ATM can also trigger activation of ATR [3,4].\nActivation of ATM leads to phosphorylation and activation of CHK2, along with various other substrates, resulting in the subsequent phosphorylation of both p53 and its negative regulator MDM2 (Figure 1B). Phosphorylation of MDM2 in close proximity to its RING domain inhibits its ability to ubiquitinate p53, instead promoting self-ubiquitination and degradation by the proteasome.\nConversely, the phosphorylation of p53 results in its stabilisation and activation [5-7], bringing about its translocation to the nucleus, where it has been shown to bind preferentially to promoters which favour transcription of genes that encode proteins required in stress-induced cell cycle check-point control, DNA repair and apoptosis. Adding to the complexity of p53-mediated DDR signalling are several reports indicating that co-operation of p53 with other transcription factors such as hnRNP K and Miz-1 is necessary for the efficient transcription of some p53 target genes, particularly those encoding apoptogenic proteins [8-10].\nThe functional roles of p53 phosphorylation vary and are yet to be fully elucidated. Evidence suggests that phosphorylation of p53 at Ser20 leads to inhibition of the p53/MDM2 interaction, preventing ubiquitin-mediated p53 degradation and thereby enhancing p53 stabilisation [11-13]. On the other hand, phosphorylation of p53 at Ser46 has been shown to mediate the selectivity of p53 in favour of promoters which enhance apoptotic signalling, such as the p53-regulated apoptosis-inducing protein 1 (p53AIP) [14]. 
Furthermore, certain phosphorylations provide a means of negatively regulating p53, as evidenced by observations that phosphorylation of p53 at Thr55 inhibits its nuclear localisation [15] and mediates its degradation [16], whilst dephosphorylation of nuclear p53 at Ser276 has been observed to occur as an early response to ionising radiation [17].\nThere also exists much debate as to whether specific phosphorylations are prerequisite for the stabilisation and functional activity of p53. Findings in U2OS osteoblast cells show that isopropyl-ß-D-thiogalactoside-induced (IPTG) sequestration of MDM2 by p14/ARF led to phosphorylation of only a single p53 residue; Ser392, whilst adriamycin caused phosphorylation of all 6 key serine residues (Ser6, 10, 15, 20, 37 and 392), but no differences were observed between the activity of p53 in adriamycin versus IPTG-treated cells, seemingly indicating that phosphorylation is not necessary for p53 activity [18]. However, Chehab et al observed complete ablation of p53 stabilisation in response to UV treatment or irradiation in cells where Ser20 was substituted for alanine or aspartate [11].\nGiven the vast array of proteins under the regulation of p53, and the fact that mutations to p53 are present in over 50% of all human malignancies [19,20], there is much interest in developing pharmacological agents directed at p53-mediated responses. Recently, a novel small molecule MDM2 antagonist has been developed; Nutlin-3 (Figure 1C) interacts with the p53 binding domain of MDM2, preventing negative regulation of p53 by MDM2, hence allowing continuation of p53-mediated signalling [21]. Studies by the same group also showed that Nutlin-3 treatment of p53-positive HCT116 and RKO cells enhanced transcription of p53-responsive genes including p21, MIC1 and MDM2, leading to the initiation of apoptosis, despite the fact that no phosphorylation of p53 was observed at a number of key serine residues (Ser6, 15, 20, 37, 46 and 392) [22]. The authors attribute their findings to the proposed non-genotoxic action of Nutlin-3, however Nutlin-3-induced phosphorylation of p53 at Ser15 has since been reported in both B-cell chronic lymphocytic leukaemia (B-CLL) and mantle cell lymphoma (MCL) models [23].\nIn the current study we assessed the stabilisation and activation of p53 in HCT116p53+/+ cells in response to Nutlin-3, finding significant phosphorylation of Ser15, along with Ser20 and Ser37. Furthermore, on investigation of other components of the DDR pathway, we show Nutlin-3-mediated activation of ATM, CHK2, BRCA1 and H2AX, as well as upregulation of MDM2 and p21. Nutlin-3 led to G1/S arrest in HCT116p53+/+ cells, in keeping with the established role of p53 in instigating and maintaining G1 arrest, however in HCT116p53-/- cells, G2/M arrest was noted in response to Nutlin-3 treatment, demonstrating the ability of Nutlin-3 to induce cell cycle checkpoint controls in a p53-independent fashion. Additionally, in response to Nutlin-3, we show nuclear H2AX foci formation, an early event in the DDR caused by clustering of phosphorylated H2AX moieties (γH2AX) at the site of DSBs. Moreover, this phenomenon was also observed in HCT116 cells lacking p53 (HCT116p53-/-) and also in MDM2 deficient cells (MEFMDM2-/-), suggesting firstly that p53 status is dispensable in the Nutlin-3-induced DDR, and secondly, that the ability of Nutlin-3 to induce DNA-damage or initiate the DDR is not connected to its role as an MDM2 antagonist. 
These results suggest a secondary role for Nutlin-3 as a DNA-damaging agent, contrary to its proposed mechanism of action as a non-genotoxic antagonist of MDM2. These data have implications for the use of Nutlin-3, and for the future development of pharmacological MDM2 antagonists for the treatment of cancer.", "Human colorectal cancer cell lines (HCT116p53-/- and HCT116p53+/+) were obtained from Professor Galina Selivanova (Karolinska Institute, Stockholm, Sweden), and mouse embryonic fibroblast (MEF) cells deficient in MDM2 (MEFMDM2-/-) were obtained from Professor Guillermina Lozano (MD Anderson Cancer Centre, University of Texas, USA). All cells were genotyped before arrival using where necessary primers specific to the deleted alleles. Cell lines were authenticated upon receipt using immunoblotting. Cells were sustained in Dulbecco's Modified Eagle's Medium, supplemented with 10% fetal bovine serum, 1% Penicillin/Streptomycin/L-Glutamine and 1% Amphotericin B (Invitrogen, Renfrewshire, UK). Cells were incubated at 37°C in a humidified atmosphere containing 5% CO2. Cells were passaged twice weekly, and were seeded at 1 × 105cells/mL during all experiments.", "Following varying length treatments with 10 μM Nutlin-3 or 100 μM Etoposide, cells were collected and lysed in 2X laemmli lysis buffer (4% w/v SDS, 20% v/v glycerol, 120 mM tris pH6.8). A 5 μL volume of each sample was diluted in 95 μL dH20, and added to 1 mL Lowry solution (50 parts 2% w/v sodium carbonate, 0.1 M sodium hydroxide solution, to 1 part 0.5% w/v copper(II)sulphate, 1% w/v sodium citrate solution), incubated at room temperature for 10 minutes, added to 100 μL 1 M folinciocalteau solution, and incubated for 30 minutes at room temperature before being transferred to cuvettes for determination of protein concentration using a CamSpec-M330 spectrometer. Samples of equal protein concentration were then loaded onto 6-15% acrylamide gels and underwent electrophoresis, followed by transfer onto PVDF (polyvinylidene fluoride) membranes. Membranes were blocked with 5% milk/TBS-T solution (5%w/v Marvel milk powder in 1X TBS-T solution comprising 50 mM Tris, 150 Mm sodium chloride, 0.364% v/v hydrochloric acid, 0.5% v/v Tween-20) and probed overnight at 4°C for specific proteins of interest. Standard primary antibody dilutions were 1:1000 in 5% milk/TBS-T solution, except for CHK2 (1:100 in 5% BSA/TBS-T solution), tubulin and actin (Merck Chemicals, Nottinghamshire, UK), used at 1:17000 in 5% milk/TBS-T solution. Standard secondary antibody dilutions were 1:2000 prepared in 5% milk/TBS-T solution. Chemiluminescence was detected using Lumiglo reagent (New England Biolabs, Hertfordshire, UK) according to manufacturer's instructions, and hyperfilms (GE Healthcare, Buckinghamshire, UK) were developed using an Amersham SRX100A Hyperprocessor.", "After treatment with 100 μM Etoposide or 10 μM Nutlin-3 for various time periods, cells were trypsinised using 0.05% EDTA-free trypsin (Invitrogen, Renfrewshire, UK), collected and centrifuged, and the pellets resuspended in 70% ethanol before being stored for 24 hours at -20°C. 
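As an aside to the cell culture conditions described above, the seeding step ("seeded at 1 × 10^5 cells/mL during all experiments") is just a C1V1 = C2V2 dilution. The short Python sketch below works the volumes out from a haemocytometer count; the stock count and flask volume in the example are hypothetical inputs, not values taken from the study.

```python
# Minimal sketch (not part of the published protocol): given a haemocytometer count
# of a stock suspension, compute how to dilute it to the seeding density stated above
# (1 x 10^5 cells/mL). The stock count and final volume below are made-up inputs.
def seeding_volumes(stock_cells_per_ml, final_volume_ml, target_cells_per_ml=1e5):
    """Return (mL of stock suspension, mL of fresh medium) via C1*V1 = C2*V2."""
    if stock_cells_per_ml < target_cells_per_ml:
        raise ValueError("stock suspension is already below the target density")
    stock_ml = final_volume_ml * target_cells_per_ml / stock_cells_per_ml
    return stock_ml, final_volume_ml - stock_ml

stock_ml, medium_ml = seeding_volumes(stock_cells_per_ml=8e5, final_volume_ml=10.0)
print(f"add {stock_ml:.2f} mL stock + {medium_ml:.2f} mL medium")  # 1.25 + 8.75 mL
```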
Cells were later centrifuged, washed with 1X PBS and resuspended in 50 μg/mL Propidium Iodide/Rnase A solution before cell cycle distribution was assessed on a Beckman Coulter Cytomics FC500 flow cytometer.", "Cells in 6-well plates were treated with 100 μM Etoposide or 10 μM Nutlin-3 for varying time periods before being fixed using 4% v/v paraformaldehyde solution, permeabilised with 0.5% v/v Triton-X100 solution, washed in 1X PBS and incubated overnight at 4°C with various antibodies prepared in 5% milk/TBS-T solution. Cells were then washed with 1X PBS, incubated for 2 hours with a 1:250 dilution of goat anti-rabbit Dylight488 antibody (New England Biolabs, Hertfordshire, UK) prepared in 1X PBS, before being washed once again with 1X PBS. Wells were then treated with one drop of Vectashield mounting media containing DAPI (Vector Laboratories, Cambridgeshire, UK), covered with glass coverslips and sealed with clear nail polish. Cells were then observed at 40× magnification using a Zeiss LSM500 confocal microscope and analysed using LSM Image Browser software (Carl Zeiss, Oberkochen, Germany).", "[SUBTITLE] Nutlin-3 induces stabilisation of p53 and activation of p53 target proteins [SUBSECTION] In order to compare the efficiency of Nutlin-3-dependent p53 stabilisation with that of known DNA-damaging agents, we treated human colorectal cancer cells (HCT116p53+/+) with Etoposide (100 μM) or Nutlin-3 (10 μM). Treatment of HCT116p53+/+ cells with these different agents led to stabilisation of p53 from 2 hours. Stabilisation of p53 was still apparent after 16 hours in cells treated with either Etoposide or Nutlin-3 (Figure 2A). As expected, no p53 was observed in HCT116p53 -/- cells treated with any of the two reagents throughout the time course examined (Figure 2C).\nNutlin-3 induces stabilisation and phosphorylation of p53, and activates key p53 target proteins. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto), or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse MDM2 and p21 activation. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading.\nGiven that we observed stabilisation of p53 in response to Nutlin-3, we sought to identify whether Nutlin-3 caused activation of p53 target proteins; MDM2 and p21. Indeed, following 4, 8 or 16 hour treatments with Nutlin-3, activation of p21 was observed to be similar to that induced by Etoposide. Additionally, Nutlin-3-induced activation of MDM2 greatly exceeded that resulting from Etoposide treatment throughout the time course studied (Figure 2B).\nIn order to compare the efficiency of Nutlin-3-dependent p53 stabilisation with that of known DNA-damaging agents, we treated human colorectal cancer cells (HCT116p53+/+) with Etoposide (100 μM) or Nutlin-3 (10 μM). Treatment of HCT116p53+/+ cells with these different agents led to stabilisation of p53 from 2 hours. Stabilisation of p53 was still apparent after 16 hours in cells treated with either Etoposide or Nutlin-3 (Figure 2A). 
As expected, no p53 was observed in HCT116p53 -/- cells treated with any of the two reagents throughout the time course examined (Figure 2C).\nNutlin-3 induces stabilisation and phosphorylation of p53, and activates key p53 target proteins. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto), or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse MDM2 and p21 activation. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading.\nGiven that we observed stabilisation of p53 in response to Nutlin-3, we sought to identify whether Nutlin-3 caused activation of p53 target proteins; MDM2 and p21. Indeed, following 4, 8 or 16 hour treatments with Nutlin-3, activation of p21 was observed to be similar to that induced by Etoposide. Additionally, Nutlin-3-induced activation of MDM2 greatly exceeded that resulting from Etoposide treatment throughout the time course studied (Figure 2B).\n[SUBTITLE] Nutlin-3 induces phosphorylation of p53 at key serine residues and activates several important DDR mediators [SUBSECTION] Following the observed stabilisation of p53 in HCT116p53+/+ cells induced by treatment with Etoposide or Nutlin-3, we next sought to investigate whether or not the observed Nutlin-3-dependent stabilisation of p53 was a result of Nutlin-3-induced p53 phosphorylation. Therefore, the phosphorylation status of various key serine residues known to be phosphorylated following DNA-damage was examined in response to the same two reagents over a 24 hours time-course. Indeed, phosphorylation of Ser15, 20 and 37 was observed at both 2 and 6 hour time points in response to Etoposide and Nutlin-3 treatment (Figure 3A). However a marked decrease in p53 phosphorylation was observed following Nutlin-3 treatment at 24 hour point (Additional file 1).\nNutlin-3 leads to phosphorylation of several important DDR mediators. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse phosphorylation of p53 at Ser15, Ser20 and Ser37. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981), BRCA1 (Ser1542) and CHK2 (Thr68) were analysed using immunoblotting. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981) and BRCA1 (Ser1542) were analysed using immunoblotting. Actin levels were used to assess equal loading.\nSince it is well established that Etoposide-dependent phosphorylation of p53 is a response to DNA-damage generated by this agent, we went on to investigate whether the unexpected Nutlin-3-induced p53 phosphorylation was due to a Nutlin-3-mediated DDR. 
Therefore, we assessed the effect of Nutlin-3 on the activation of CHK2 and ATM, which are required for DNA-damage-dependent phosphorylation and activation of p53. Indeed, phosphorylation of ATM and CHK2 was observed in HCT116p53+/+ cells following 1 hour treatments with either Etoposide or Nutlin-3, as was phosphorylation of BRCA1, an ATM target protein required for the ATM-dependent DDR (Figure 3B). Furthermore, in HCT116p53-/- cells, phosphorylation of both ATM and its target protein BRCA1 was also noted following a 1 hour treatment with both Nutlin-3 and Etoposide (Figure 3C).\nFollowing the observed stabilisation of p53 in HCT116p53+/+ cells induced by treatment with Etoposide or Nutlin-3, we next sought to investigate whether or not the observed Nutlin-3-dependent stabilisation of p53 was a result of Nutlin-3-induced p53 phosphorylation. Therefore, the phosphorylation status of various key serine residues known to be phosphorylated following DNA-damage was examined in response to the same two reagents over a 24-hour time-course. Indeed, phosphorylation of Ser15, 20 and 37 was observed at both 2 and 6 hour time points in response to Etoposide and Nutlin-3 treatment (Figure 3A). However, a marked decrease in p53 phosphorylation was observed following Nutlin-3 treatment at the 24 hour time point (Additional file 1).\nNutlin-3 leads to phosphorylation of several important DDR mediators. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse phosphorylation of p53 at Ser15, Ser20 and Ser37. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981), BRCA1 (Ser1542) and CHK2 (Thr68) were analysed using immunoblotting. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981) and BRCA1 (Ser1542) were analysed using immunoblotting. Actin levels were used to assess equal loading.\nSince it is well established that Etoposide-dependent phosphorylation of p53 is a response to DNA-damage generated by this agent, we went on to investigate whether the unexpected Nutlin-3-induced p53 phosphorylation was due to a Nutlin-3-mediated DDR. Therefore, we assessed the effect of Nutlin-3 on the activation of CHK2 and ATM, which are required for DNA-damage-dependent phosphorylation and activation of p53. Indeed, phosphorylation of ATM and CHK2 was observed in HCT116p53+/+ cells following 1 hour treatments with either Etoposide or Nutlin-3, as was phosphorylation of BRCA1, an ATM target protein required for the ATM-dependent DDR (Figure 3B). Furthermore, in HCT116p53-/- cells, phosphorylation of both ATM and its target protein BRCA1 was also noted following a 1 hour treatment with both Nutlin-3 and Etoposide (Figure 3C).\n[SUBTITLE] Nutlin-3 induces G1/S cell cycle arrest [SUBSECTION] Given our findings that Nutlin-3 treatment induced p53 stabilisation and phosphorylation, as well as the activation of key DDR proteins and p53 target proteins known to be involved in cell cycle control, we went on to assess whether Nutlin-3 was capable of inducing cell cycle checkpoints. 
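To connect the flow cytometry protocol in the Methods with the phase percentages reported in the following passage (Figure 4 and Tables 1 and 2), here is a minimal Python sketch of one way a propidium iodide DNA-content histogram can be reduced to sub-G1/G1/S/G2-M fractions. The G1 peak position, the gate width and the toy event list are illustrative assumptions; they are not the gating actually applied in the study.

```python
# Minimal sketch, not the study's gating: reduce a propidium iodide DNA-content
# histogram (one fluorescence value per event) to sub-G1/G1/S/G2-M percentages of
# the kind reported in Tables 1 and 2. All numeric values here are illustrative.
from collections import Counter

def phase_fractions(dna_content, g1_peak=200.0, width=0.15):
    """Crude threshold-based classification of PI events into cell cycle phases (%)."""
    events = list(dna_content)
    g1_lo, g1_hi = g1_peak * (1 - width), g1_peak * (1 + width)
    g2_lo, g2_hi = 2 * g1_peak * (1 - width), 2 * g1_peak * (1 + width)
    counts = Counter()
    for x in events:
        if x < g1_lo:
            counts["sub-G1"] += 1      # fragmented (apoptotic) DNA content
        elif x <= g1_hi:
            counts["G1"] += 1          # 2N DNA content
        elif x < g2_lo:
            counts["S"] += 1           # between 2N and 4N
        elif x <= g2_hi:
            counts["G2/M"] += 1        # 4N DNA content
        else:
            counts[">4N"] += 1
    total = len(events) or 1
    return {phase: round(100.0 * n / total, 1) for phase, n in counts.items()}

# Toy example: three G1 events, one G2/M event and one sub-G1 (apoptotic) event.
print(phase_fractions([195, 205, 210, 400, 60]))
```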
Following treatment with either Nutlin-3 or Etoposide, HCT116p53+/+ and p53-/- cells were analysed by flow cytometry. While treatment of HCT116p53+/+ cells with Nutlin-3 led to G1/S arrest, treatment with Etoposide led to G2/M arrest (Figure 4A, Additional file 2 and Table 1). In contrast, HCT116p53-/- cells were observed to arrest in G2/M in response to both Nutlin-3 and Etoposide (Figure 4A, Additional file 2 and Table 2). Furthermore, and in contrast to HCT116p53+/+ cells, an increase in the subG1 cell population was observed in HCT116p53-/- cells following Nutlin-3 treatment (Figure 4A, Additional file 2 and Tables 1 and 2).\nNutlin-3 induces p53-independent cell cycle checkpoint controls. HCT116p53+/+ and HCT116p53-/- cells were treated with either 0, 10, 15 or 20 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto). After 18 hours, cell cycle distribution was assessed using flow cytometry. (A) Chart showing percentage of HCT116p53+/+ cells in either G1 or G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above. (B) Chart showing percentage of HCT116p53-/- cells in both G1 and G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above.\nRepresentative percentages of HCT116p53+/+ cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.\nRepresentative percentages of HCT116p53-/- cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.\nGiven our findings that Nutlin-3 treatment induced p53 stabilisation and phosphorylation, as well as the activation of key DDR proteins and p53 target proteins known to be involved in cell cycle control, we went on to assess whether Nutlin-3 was capable of inducing cell cycle checkpoints. Following treatment with either Nutlin-3 or Etoposide, HCT116p53+/+ and p53-/- cells were analysed by flow cytometry. While treatment of HCT116p53+/+ cells with Nutlin-3 led to G1/S arrest, treatment with Etoposide led to G2/M arrest (Figure 4A, Additional file 2 and Table 1). In contrast, HCT116p53-/- cells were observed to arrest in G2/M in response to both Nutlin-3 and Etoposide (Figure 4A, Additional file 2 and Table 2). Furthermore, and in contrast to HCT116p53+/+ cells, an increase in the subG1 cell population was observed in HCT116p53-/- cells following Nutlin-3 treatment (Figure 4A, Additional file 2 and Tables 1 and 2).\nNutlin-3 induces p53-independent cell cycle checkpoint controls. HCT116p53+/+ and HCT116p53-/- cells were treated with either 0, 10, 15 or 20 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto). After 18 hours, cell cycle distribution was assessed using flow cytometry. (A) Chart showing percentage of HCT116p53+/+ cells in either G1 or G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above. (B) Chart showing percentage of HCT116p53-/- cells in both G1 and G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above.\nRepresentative percentages of HCT116p53+/+ cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.\nRepresentative percentages of HCT116p53-/- cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.\n[SUBTITLE] Nutlin-3 induces H2AX phosphorylation and foci formation [SUBSECTION] One of the first proteins phosphorylated and activated in response to DNA-damage is the histone variant, H2AX [24]. 
Hence, we sought to investigate whether the observed Nutlin-3-dependent activation of ATM and CHK2 was due to a Nutlin-3-mediated DDR. Therefore, HCT116p53+/+ cells were treated with either Etoposide or Nutlin-3, and H2AX phosphorylation was checked both 1 and 4 hours following treatment. Indeed, H2AX phosphorylation was induced in response to both Etoposide and Nutlin-3 treatment (Figure 5A).\nNutlin-3 induces H2AX phosphorylation and γH2AX foci in HCT116p53+/+ cells. (A) HCT116p53+/+ cells were left untreated (unt) or treated with 100 μM Etoposide (Eto), 10 μM Nutlin-3 or (Nut) for 1 or 4 hours before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53+/+ cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nWe next sought to establish whether the observed Nutlin-3-induced activation of H2AX phosphorylation was indicative of γH2AX foci formation, an event recognised to occur early on in the DDR [24]. Indeed, treatment of HCT116p53+/+ cells with Etoposide or Nutlin-3 was observed to induce γH2AX foci formation from as early as 30 minutes following treatment. Foci formation was most notable in response to Etoposide treatment, but was nevertheless clearly visible in response to treatment with Nutlin-3 (Figure 5B).\nOne of the first proteins phosphorylated and activated in response to DNA-damage is the histone variant, H2AX [24]. Hence, we sought to investigate whether the observed Nutlin-3-dependent activation of ATM and CHK2 was due to a Nutlin-3-mediated DDR. Therefore, HCT116p53+/+ cells were treated with either Etoposide or Nutlin-3, and H2AX phosphorylation was checked both 1 and 4 hours following treatment. Indeed, H2AX phosphorylation was induced in response to both Etoposide and Nutlin-3 treatment (Figure 5A).\nNutlin-3 induces H2AX phosphorylation and γH2AX foci in HCT116p53+/+ cells. (A) HCT116p53+/+ cells were left untreated (unt) or treated with 100 μM Etoposide (Eto), 10 μM Nutlin-3 or (Nut) for 1 or 4 hours before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53+/+ cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nWe next sought to establish whether the observed Nutlin-3-induced activation of H2AX phosphorylation was indicative of γH2AX foci formation, an event recognised to occur early on in the DDR [24]. Indeed, treatment of HCT116p53+/+ cells with Etoposide or Nutlin-3 was observed to induce γH2AX foci formation from as early as 30 minutes following treatment. Foci formation was most notable in response to Etoposide treatment, but was nevertheless clearly visible in response to treatment with Nutlin-3 (Figure 5B).\n[SUBTITLE] Nutlin-3 induced responses are independent of p53 and Nutlin-3- mediated inhibition of MDM2 [SUBSECTION] We next sought to clarify the effect of p53 status on the ability of Nutlin-3 to induce the DDR. We therefore treated HCT116p53-/- cells with Etoposide or Nutlin-3 and assessed the phosphorylation of γH2AX. Here, increases in γH2AX phosphorylation were observed in HCT116p53-/- cells treated with either Etoposide or Nutlin-3 (Figure 6A). 
Furthermore, formation of γH2AX foci was clearly visible in HCT116p53-/- cells following 30 minutes of treatment with Etoposide, an effect which was comparable in cells treated with Nutlin-3 for the same time period (Figure 6B).\nNutlin-3 induces H2AX phosphorylation and γH2AX foci formation independent of both p53 and MDM2 status. (A) HCT116p53-/- cells were left untreated (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for 1 hour before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes. (C) Representative confocal microscopy images of γH2AX foci formation in MEFMDM2-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nHaving established that Nutlin-3 was capable of inducing DDR independent of p53 status, we went on to assess whether the ability of Nutlin-3 to induce DDR was dependent on its ability to inhibit MDM2. Here we assessed the effect of Nutlin-3 on formation of γH2AX foci in mouse embryonic fibroblasts deficient in MDM2 (MEFMDM2-/-). We observed clear formation of γH2AX foci after 30 minutes of Nutlin-3 treatment, similar to that induced in cells treated with Etoposide for the same length of time (Figure 6C). Furthermore, in MEFMDM2-/- cells, phosphorylation of ATM, CHK2, BRCA1 and γH2AX was noted following a 1 hour treatment with Nutlin-3, and markedly decreased by 24 hours (Additional file 3).\nWe next sought to clarify the effect of p53 status on the ability of Nutlin-3 to induce the DDR. We therefore treated HCT116p53-/- cells with Etoposide or Nutlin-3 and assessed the phosphorylation of γH2AX. Here, increases in γH2AX phosphorylation were observed in HCT116p53-/- cells treated with either Etoposide or Nutlin-3 (Figure 6A). Furthermore, formation of γH2AX foci was clearly visible in HCT116p53-/- cells following 30 minutes of treatment with Etoposide, an effect which was comparable in cells treated with Nutlin-3 for the same time period (Figure 6B).\nNutlin-3 induces H2AX phosphorylation and γH2AX foci formation independent of both p53 and MDM2 status. (A) HCT116p53-/- cells were left untreated (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for 1 hour before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes. (C) Representative confocal microscopy images of γH2AX foci formation in MEFMDM2-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nHaving established that Nutlin-3 was capable of inducing DDR independent of p53 status, we went on to assess whether the ability of Nutlin-3 to induce DDR was dependent on its ability to inhibit MDM2. Here we assessed the effect of Nutlin-3 on formation of γH2AX foci in mouse embryonic fibroblasts deficient in MDM2 (MEFMDM2-/-). We observed clear formation of γH2AX foci after 30 minutes of Nutlin-3 treatment, similar to that induced in cells treated with Etoposide for the same length of time (Figure 6C). 
Furthermore, in MEFMDM2-/- cells, phosphorylation of ATM, ChK2, BRCA1 and γH2AX was noted following a 1 hour treatment with Nutlin-3, and markedly decreased by 24 hours (Additional file 3).", "In order to compare the efficiency of Nutlin-3-dependent p53 stabilisation with that of known DNA-damaging agents, we treated human colorectal cancer cells (HCT116p53+/+) with Etoposide (100 μM) or Nutlin-3 (10 μM). Treatment of HCT116p53+/+ cells with these different agents led to stabilisation of p53 from 2 hours. Stabilisation of p53 was still apparent after 16 hours in cells treated with either Etoposide or Nutlin-3 (Figure 2A). As expected, no p53 was observed in HCT116p53 -/- cells treated with any of the two reagents throughout the time course examined (Figure 2C).\nNutlin-3 induces stabilisation and phosphorylation of p53, and activates key p53 target proteins. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto), or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse MDM2 and p21 activation. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading.\nGiven that we observed stabilisation of p53 in response to Nutlin-3, we sought to identify whether Nutlin-3 caused activation of p53 target proteins; MDM2 and p21. Indeed, following 4, 8 or 16 hour treatments with Nutlin-3, activation of p21 was observed to be similar to that induced by Etoposide. Additionally, Nutlin-3-induced activation of MDM2 greatly exceeded that resulting from Etoposide treatment throughout the time course studied (Figure 2B).", "Following the observed stabilisation of p53 in HCT116p53+/+ cells induced by treatment with Etoposide or Nutlin-3, we next sought to investigate whether or not the observed Nutlin-3-dependent stabilisation of p53 was a result of Nutlin-3-induced p53 phosphorylation. Therefore, the phosphorylation status of various key serine residues known to be phosphorylated following DNA-damage was examined in response to the same two reagents over a 24 hours time-course. Indeed, phosphorylation of Ser15, 20 and 37 was observed at both 2 and 6 hour time points in response to Etoposide and Nutlin-3 treatment (Figure 3A). However a marked decrease in p53 phosphorylation was observed following Nutlin-3 treatment at 24 hour point (Additional file 1).\nNutlin-3 leads to phosphorylation of several important DDR mediators. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse phosphorylation of p53 at Ser15, Ser20 and Ser37. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981), BRCA1 (Ser1542) and CHK2 (Thr68) were analysed using immunoblotting. Actin levels were used to assess equal loading. 
(C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981) and BRCA1 (Ser1542) were analysed using immunoblotting. Actin levels were used to assess equal loading.\nSince it is well established that Etoposide-dependent phosphorylation of p53 is a response to DNA-damage generated by this agent, we went on to investigate whether the unexpected Nutlin-3-induced p53 phosphorylation was due to a Nutlin-3-mediated DDR. Therefore, we assessed the effect of Nutlin-3 on the activation of CHK2 and ATM, which are required for DNA-damage-dependent phosphorylation and activation of p53. Indeed, phosphorylation of ATM and CHK2 was observed in HCT116p53+/+ cells following 1 hour treatments with either Etoposide or Nutlin-3, as was phosphorylation of BRCA1, an ATM target protein required for the ATM-dependent DDR (Figure 3B). Furthermore, in HCT116p53-/- cells, phosphorylation of both ATM and its target protein BRCA1 was also noted following a 1 hour treatment with both Nutlin-3 and Etoposide (Figure 3C).", "Given our findings that Nutlin-3 treatment induced p53 stabilisation and phosphorylation, as well as the activation of key DDR proteins and p53 target proteins known to be involved in cell cycle control, we went on to assess whether Nutlin-3 was capable of inducing cell cycle checkpoints. Following treatment with either Nutlin-3 or Etoposide, HCT116p53+/+ and p53-/- cells were analysed by flow cytometry. While treatment of HCT116p53+/+ cells with Nutlin-3 led to G1/S arrest, treatment with Etoposide led to G2/M arrest (Figure 4A, Additional file 2 and Table 1). In contrast, HCT116p53-/- cells were observed to arrest in G2/M in response to both Nutlin-3 and Etoposide (Figure 4A, Additional file 2 and Table 2). Furthermore, and in contrast to HCT116p53+/+ cells, an increase in the subG1 cell population was observed in HCT116p53-/- cells following Nutlin-3 treatment (Figure 4A, Additional file 2 and Tables 1 and 2).\nNutlin-3 induces p53-independent cell cycle checkpoint controls. HCT116p53+/+ and HCT116p53-/- cells were treated with either 0, 10, 15 or 20 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto). After 18 hours, cell cycle distribution was assessed using flow cytometry. (A) Chart showing percentage of HCT116p53+/+ cells in either G1 or G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above. (B) Chart showing percentage of HCT116p53-/- cells in both G1 and G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above.\nRepresentative percentages of HCT116p53+/+ cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.\nRepresentative percentages of HCT116p53-/- cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.", "One of the first proteins phosphorylated and activated in response to DNA-damage is the histone variant, H2AX [24]. Hence, we sought to investigate whether the observed Nutlin-3-dependent activation of ATM and CHK2 was due to a Nutlin-3-mediated DDR. Therefore, HCT116p53+/+ cells were treated with either Etoposide or Nutlin-3, and H2AX phosphorylation was checked both 1 and 4 hours following treatment. Indeed, H2AX phosphorylation was induced in response to both Etoposide and Nutlin-3 treatment (Figure 5A).\nNutlin-3 induces H2AX phosphorylation and γH2AX foci in HCT116p53+/+ cells. 
(A) HCT116p53+/+ cells were left untreated (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for 1 or 4 hours before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53+/+ cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nWe next sought to establish whether the observed Nutlin-3-induced activation of H2AX phosphorylation was indicative of γH2AX foci formation, an event recognised to occur early on in the DDR [24]. Indeed, treatment of HCT116p53+/+ cells with Etoposide or Nutlin-3 was observed to induce γH2AX foci formation from as early as 30 minutes following treatment. Foci formation was most notable in response to Etoposide treatment, but was nevertheless clearly visible in response to treatment with Nutlin-3 (Figure 5B).", "We next sought to clarify the effect of p53 status on the ability of Nutlin-3 to induce the DDR. We therefore treated HCT116p53-/- cells with Etoposide or Nutlin-3 and assessed the phosphorylation of γH2AX. Here, increases in γH2AX phosphorylation were observed in HCT116p53-/- cells treated with either Etoposide or Nutlin-3 (Figure 6A). Furthermore, formation of γH2AX foci was clearly visible in HCT116p53-/- cells following 30 minutes of treatment with Etoposide, an effect which was comparable in cells treated with Nutlin-3 for the same time period (Figure 6B).\nNutlin-3 induces H2AX phosphorylation and γH2AX foci formation independent of both p53 and MDM2 status. (A) HCT116p53-/- cells were left untreated (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for 1 hour before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes. (C) Representative confocal microscopy images of γH2AX foci formation in MEFMDM2-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nHaving established that Nutlin-3 was capable of inducing DDR independent of p53 status, we went on to assess whether the ability of Nutlin-3 to induce DDR was dependent on its ability to inhibit MDM2. Here we assessed the effect of Nutlin-3 on formation of γH2AX foci in mouse embryonic fibroblasts deficient in MDM2 (MEFMDM2-/-). We observed clear formation of γH2AX foci after 30 minutes of Nutlin-3 treatment, similar to that induced in cells treated with Etoposide for the same length of time (Figure 6C). Furthermore, in MEFMDM2-/- cells, phosphorylation of ATM, CHK2, BRCA1 and γH2AX was noted following a 1 hour treatment with Nutlin-3, and markedly decreased by 24 hours (Additional file 3).", "Numerous serine and threonine residues (mainly those located in the N-terminal part of the p53 protein) are targets for phosphorylation in response to a diverse range of stress factors. Following DNA-damage, for instance, various protein kinases including ATM and CHK2 are activated and lead to p53 phosphorylation, subsequently resulting in stabilisation and activation of p53 [5-7].\nThe requirement of these phosphorylation events for the stabilisation and activation of p53 remains a somewhat controversial topic, as do the consequences of controlling the p53 pathway using the relatively newly developed MDM2 antagonists such as Nutlin-3. 
For example, there is debate as to whether MDM2 antagonism may affect p53 protein modifications or functions. A study carried out by Thompson et al using Nutlin-3, showed that phosphorylation of p53 on key serine residues was not necessary to bring about its stabilisation and activation. Indeed, whilst Thompson et al still observed stabilisation and activation of p53, no phosphorylation was detected following Nutlin-3 treatment [22].\nIn stark contrast, Drakos et al have since shown Nutlin-3-dependent induction of p53 phosphorylation at Ser15 in SP-53, Z-138, M-1 and Granta-519 MCL cell lines [23]. Nutlin-3-dependent p53 phosphorylation at Ser15 has also been observed in normal CD19+ B-cells, peripheral blood mononuclear cells (PBMCs), bone marrow mononuclear cells (BMMCs) and B-CLL cells to a level similar to that noted in response to fludarabine treatment, and in excess of that resulting from treatment with the protease inhibitor clasto-latacystin [25]. Indeed in the current study, we observed Nutlin-3-induced stabilisation and activation of p53 at levels comparable with that induced by the genotoxic DNA topoisomerase II inhibitor; Etoposide (Figure 2A and 2B). We also detected Nutlin-3-induced phosphorylation of p53 at Ser15, as well as at two other key serine residues; Ser20 and Ser37 (Figure 3A), indicating that Nutlin-3 does not only disrupt the interaction between MDM2 and p53, but could also play a role in activating DDR pathways resulting in p53 phosphorylation, and subsequent activation of downstream target proteins involved in for example, cell cycle checkpoint control. Our results are in sharp contrast to the previous observations of Thompson et al [22]. In the current study, we checked p53 phosphorylation at earlier time points following Nutlin-3 treatment (as early as 2 hours, see Figure 2), however data in the Thompson et al study were obtained after 24 hour treatments with Nutlin-3, which could explain why such a difference is seen between the two studies. Indeed we also observed a marked decrease in these phosphorylations at 24 hours in response to Nutlin-3 (Additional file 1).\nSince the activation of ATM and its downstream substrate CHK2 are well established as being responsible for DNA-damage-dependent p53 phosphorylation [5-7], we went on to investigate whether the observed Nutlin-3-dependent p53 phosphorylation was as a result of activation of these two kinases. Indeed, to our knowledge, we show for the first time that Nutlin-3 treatment triggers phosphorylation of ATM (Ser1981) and CHK2 (Thr68) in HCT116p53+/+ cells (Figure 3B), demonstrating that Nutlin-3-mediated p53 phosphorylation is due to Nutlin-3 behaving as an activator of ATM and CHK2. Indeed our observation that Nutlin-3 also led to phosphorylation of a well established ATM target; BRCA1 (Ser1524) further supports a role for Nutlin-3 as an activator of the ATM kinase. Moreover, the phosphorylation of ATM and its target protein BRCA1 in HCT116p53-/- cells (Figure 3C) suggests that the Nutlin-3-mediated activation of ATM and the subsequent phosphorylation of BRCA1 are triggered independently of p53.\nFollowing DNA-damage, it is known that cells activate checkpoints to temporarily halt the cell cycle [26], allowing for DNA repair or destruction of the damaged cell by apoptosis. The G1-S and intra-S-phase checkpoints regulate transition into, and progression through S phase in response to DNA-damage, while the G2-M checkpoint regulates entry into mitosis [26]. 
Since ATM and CHK2 are amongst the main activators of these checkpoints in response to DNA-damage, we sought to determine whether cell cycle checkpoints could be triggered by Nutlin-3 treatment. Whilst Etoposide led to clear G2/M arrest, Nutlin-3 treatment led to marked G1/S arrest in HCT116p53+/+ cells (Figure 4A), in keeping with the established role of p53 in triggering and maintaining G1/S arrest [27].\nConversely, in HCT116p53-/- cells, Nutlin-3 led to G2/M arrest (Figure 4B), demonstrating Nutlin-3-mediated p53-independent induction of the G2/M cell cycle checkpoint, similar to that observed following Etoposide treatment. In addition, an increase in the sub-G1 cell population was also observed. Since sub-G1 is indicative of apoptotic cells, this suggests that Nutlin-3 may trigger p53-independent apoptosis. Given the absence of functional p53 in this instance, this prompted us to question whether Nutlin-3 was inducing the DDR without directly generating DNA-damage, or if the DDR was being activated due to Nutlin-3-induced DNA-damage.\nOne widely established indicator of DNA damage is the rapid phosphorylation of the histone variant H2AX at its C-terminal serine residue (Ser139) to form γH2AX, activation of which leads to its recruitment and subsequent accumulation (along with various repair proteins) into foci at the site of DNA damage [24]. Here, Nutlin-3 clearly induced the phosphorylation of H2AX (Figure 5A), and in addition was observed using immunofluorescent staining to cause clear γH2AX foci formation, similar to that observed in Etoposide-treated cells (Figure 5B). These findings demonstrate that Nutlin-3-dependent phosphorylation of p53 is due to the ability of Nutlin-3 to induce DNA-damage, or to otherwise activate pathways that are stimulated in response to DNA damage.\nRecently, Verma et al have observed phosphorylation of H2AX in HCT116p53+/+ following Nutlin-3 treatment. Nevertheless, an absence of γH2AX staining was noted by Verma et al unless Nutlin-3 was combined with treatment with the DNA damage inducer Hydroxyurea, and no phosphorylation of Ser15 was seen [28]. It is noteworthy that Verma et al observed these effects following a 24 hour treatment with Nutlin-3, whilst in the current study earlier time points were used after considering previous findings indicating that H2AX foci formation occurs as early as 1 minute after DNA-damage and peak at around 30-60 minutes [29-31], and previous observations that DNA-damage-induced stabilization and phosphorylation of p53 peak at 4-6 hours, declining thereafter [32,33].\nVerma and colleagues attribute the induction of γH2AX staining to Nutlin-3-induced p53-mediated slowing of non-homologous end joining events following formation of DSBs during normal replicative processes, possibly as a way to ensure the accuracy of the repair process. However, in the current study we show Nutlin-3-induced phosphorylation of H2AX and formation of γH2AX foci in HCT116p53-/- cells (Figure 6A and 6B). Coupled with the G2/M arrest we observed in p53 negative HCT116 cells, our data indicate that p53 is dispensable in the Nutlin-3-induced DDR. 
Furthermore, our observation that Nutlin-3 induces formation of γH2AX foci as well as ATM, CHK2 and BRCA1 phosphorylation in cells devoid of MDM2 (Figure 6C and Additional file 3) suggests that the secondary ability of Nutlin-3 to induce DNA-damage is not related to its primary function as an MDM2 antagonist.", "Direct inhibition of MDM2 using Nutlin-3 clearly provides a means of activating p53 and restoring p53 signalling; however, in light of recent findings, including those presented in the current study, we suggest Nutlin-3 is itself capable of instigating DNA-damage signalling. To our knowledge, we show for the first time that Nutlin-3 induces DDR activation in a p53- and MDM2-independent fashion. Further investigation is required to fully elucidate the effect of Nutlin-3 on p53-dependent and -independent DDR mechanisms, as well as its effects on the post-translational modification and functionality of p53, understanding of which will undoubtedly facilitate the development of Nutlin-3 and other MDM2 antagonists as potential cancer therapies.", "The authors declare that they have no competing interests.", "AM conceived of the study, whilst AM and JV were responsible for its design. JV carried out all assays relating to the study, including western blots, FACS and fluorescence microscopy. SK carried out some western blots. AM and JV analysed the data and drafted the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/79/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Cell Lines", "Western Blotting", "Flow Cytometry", "Immunofluorescence", "Results", "Nutlin-3 induces stabilisation of p53 and activation of p53 target proteins", "Nutlin-3 induces phosphorylation of p53 at key serine residues and activates several important DDR mediators", "Nutlin-3 induces G1/S cell cycle arrest", "Nutlin-3 induces H2AX phosphorylation and foci formation", "Nutlin-3-induced responses are independent of p53 and Nutlin-3-mediated inhibition of MDM2", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history", "Supplementary Material" ]
[ "The p53 tumour suppressor protein, often referred to as the 'guardian of the genom', plays a critical role in mediating cellular stress responses such as that brought about by DNA-damage, and is therefore key in regulating a vast array of proteins involved in cell cycle progression and check-points, DNA repair and apoptosis [1].\nIn the absence of cellular stress, p53 is maintained at low levels by its ubiquitination and subsequent proteasomal degradation. This process can be mediated by one of several E3 ubiquitin ligases [2], but principally by MDM2 (mouse double minute 2), as illustrated in Figure 1A.\nSchematic representation of the interactions between p53 and MDM2. (A) In the absence of stress signals, p53 is bound to its negative regulator MDM2. MDM2 ubiquitinates p53, targeting it for degradation by the 26 S proteasome. (B) Cellular stress signals, such as that bought about by DNA-damage lead to activation of ATM/ATR. ATM/ATR mediate the phosphorylation of MDM2 and p53. Phosphorylated MDM2 undergoes auto-ubiquitination and degradation by the 26 S proteasome. Phosphorylated p53 undergoes nuclear localisation, tetramerisation, and binds to p53-responsive promoters to induce transcription of genes involved in the DDR. (C) Chemical structure of Nutlin-3.\nConversely, in the presence of cellular stress stimuli, two protein kinases - ATM (ataxia-telangiectasia mutated) and ATR (ATM and Rad3-related) orchestrate the DDR in order to preserve genome integrity. Whilst ATM is mainly activated in response to double-strand DNA breaks (DSBs), ATR is primarily activated following replicative errors that result in single-stranded DNA, however recent findings indicate DSB-mediated activation of ATM can also trigger activation of ATR [3,4].\nActivation of ATM leads to phosphorylation and activation of CHK2, along with various other substrates, resulting in the subsequent phosphorylation of both p53 and its negative regulator MDM2 (Figure 1B). Phosphorylation of MDM2 in close proximity to its RING domain inhibits its ability to ubiquitinate p53, instead promoting self-ubiquitination and degradation by the proteasome.\nConversely, the phosphorylation of p53 results in its stabilisation and activation [5-7], bringing about its translocation to the nucleus, where it has been shown to bind preferentially to promoters which favour transcription of genes that encode proteins required in stress-induced cell cycle check-point control, DNA repair and apoptosis. Adding to the complexity of p53-mediated DDR signalling are several reports indicating that co-operation of p53 with other transcription factors such as hnRNP K and Miz-1 is necessary for the efficient transcription of some p53 target genes, particularly those encoding apoptogenic proteins [8-10].\nThe functional roles of p53 phosphorylation vary and are yet to be fully elucidated. Evidence suggests that phosphorylation of p53 at Ser20 leads to inhibition of the p53/MDM2 interaction, preventing ubiquitin-mediated p53 degradation and thereby enhancing p53 stabilisation [11-13]. On the other hand, phosphorylation of p53 at Ser46 has been shown to mediate the selectivity of p53 in favour of promoters which enhance apoptotic signalling, such as the p53-regulated apoptosis-inducing protein 1 (p53AIP) [14]. 
Furthermore, certain phosphorylations provide a means of negatively regulating p53, as evidenced by observations that phosphorylation of p53 at Thr55 inhibits its nuclear localisation [15] and mediates its degradation [16], whilst dephosphorylation of nuclear p53 at Ser276 has been observed to occur as an early response to ionising radiation [17].\nThere also exists much debate as to whether specific phosphorylations are prerequisite for the stabilisation and functional activity of p53. Findings in U2OS osteoblast cells show that isopropyl-ß-D-thiogalactoside-induced (IPTG) sequestration of MDM2 by p14/ARF led to phosphorylation of only a single p53 residue; Ser392, whilst adriamycin caused phosphorylation of all 6 key serine residues (Ser6, 10, 15, 20, 37 and 392), but no differences were observed between the activity of p53 in adriamycin versus IPTG-treated cells, seemingly indicating that phosphorylation is not necessary for p53 activity [18]. However, Chehab et al observed complete ablation of p53 stabilisation in response to UV treatment or irradiation in cells where Ser20 was substituted for alanine or aspartate [11].\nGiven the vast array of proteins under the regulation of p53, and the fact that mutations to p53 are present in over 50% of all human malignancies [19,20], there is much interest in developing pharmacological agents directed at p53-mediated responses. Recently, a novel small molecule MDM2 antagonist has been developed; Nutlin-3 (Figure 1C) interacts with the p53 binding domain of MDM2, preventing negative regulation of p53 by MDM2, hence allowing continuation of p53-mediated signalling [21]. Studies by the same group also showed that Nutlin-3 treatment of p53-positive HCT116 and RKO cells enhanced transcription of p53-responsive genes including p21, MIC1 and MDM2, leading to the initiation of apoptosis, despite the fact that no phosphorylation of p53 was observed at a number of key serine residues (Ser6, 15, 20, 37, 46 and 392) [22]. The authors attribute their findings to the proposed non-genotoxic action of Nutlin-3, however Nutlin-3-induced phosphorylation of p53 at Ser15 has since been reported in both B-cell chronic lymphocytic leukaemia (B-CLL) and mantle cell lymphoma (MCL) models [23].\nIn the current study we assessed the stabilisation and activation of p53 in HCT116p53+/+ cells in response to Nutlin-3, finding significant phosphorylation of Ser15, along with Ser20 and Ser37. Furthermore, on investigation of other components of the DDR pathway, we show Nutlin-3-mediated activation of ATM, CHK2, BRCA1 and H2AX, as well as upregulation of MDM2 and p21. Nutlin-3 led to G1/S arrest in HCT116p53+/+ cells, in keeping with the established role of p53 in instigating and maintaining G1 arrest, however in HCT116p53-/- cells, G2/M arrest was noted in response to Nutlin-3 treatment, demonstrating the ability of Nutlin-3 to induce cell cycle checkpoint controls in a p53-independent fashion. Additionally, in response to Nutlin-3, we show nuclear H2AX foci formation, an early event in the DDR caused by clustering of phosphorylated H2AX moieties (γH2AX) at the site of DSBs. Moreover, this phenomenon was also observed in HCT116 cells lacking p53 (HCT116p53-/-) and also in MDM2 deficient cells (MEFMDM2-/-), suggesting firstly that p53 status is dispensable in the Nutlin-3-induced DDR, and secondly, that the ability of Nutlin-3 to induce DNA-damage or initiate the DDR is not connected to its role as an MDM2 antagonist. 
These results suggest a secondary role for Nutlin-3 as a DNA-damaging agent, contrary to its proposed mechanism of action as a non-genotoxic antagonist of MDM2. These data have implications for the use of Nutlin-3, and for the future development of pharmacological MDM2 antagonists for the treatment of cancer.", "Unless otherwise stated all antibodies were purchased from New England Biolabs, Hertfordshire, UK, and all reagents, including Nutlin-3, were purchased from Sigma-Aldrich, Dorset, UK.\n[SUBTITLE] Cell Lines [SUBSECTION] Human colorectal cancer cell lines (HCT116p53-/- and HCT116p53+/+) were obtained from Professor Galina Selivanova (Karolinska Institute, Stockholm, Sweden), and mouse embryonic fibroblast (MEF) cells deficient in MDM2 (MEFMDM2-/-) were obtained from Professor Guillermina Lozano (MD Anderson Cancer Centre, University of Texas, USA). All cells were genotyped before arrival using where necessary primers specific to the deleted alleles. Cell lines were authenticated upon receipt using immunoblotting. Cells were sustained in Dulbecco's Modified Eagle's Medium, supplemented with 10% fetal bovine serum, 1% Penicillin/Streptomycin/L-Glutamine and 1% Amphotericin B (Invitrogen, Renfrewshire, UK). Cells were incubated at 37°C in a humidified atmosphere containing 5% CO2. Cells were passaged twice weekly, and were seeded at 1 × 105cells/mL during all experiments.\nHuman colorectal cancer cell lines (HCT116p53-/- and HCT116p53+/+) were obtained from Professor Galina Selivanova (Karolinska Institute, Stockholm, Sweden), and mouse embryonic fibroblast (MEF) cells deficient in MDM2 (MEFMDM2-/-) were obtained from Professor Guillermina Lozano (MD Anderson Cancer Centre, University of Texas, USA). All cells were genotyped before arrival using where necessary primers specific to the deleted alleles. Cell lines were authenticated upon receipt using immunoblotting. Cells were sustained in Dulbecco's Modified Eagle's Medium, supplemented with 10% fetal bovine serum, 1% Penicillin/Streptomycin/L-Glutamine and 1% Amphotericin B (Invitrogen, Renfrewshire, UK). Cells were incubated at 37°C in a humidified atmosphere containing 5% CO2. Cells were passaged twice weekly, and were seeded at 1 × 105cells/mL during all experiments.\n[SUBTITLE] Western Blotting [SUBSECTION] Following varying length treatments with 10 μM Nutlin-3 or 100 μM Etoposide, cells were collected and lysed in 2X laemmli lysis buffer (4% w/v SDS, 20% v/v glycerol, 120 mM tris pH6.8). A 5 μL volume of each sample was diluted in 95 μL dH20, and added to 1 mL Lowry solution (50 parts 2% w/v sodium carbonate, 0.1 M sodium hydroxide solution, to 1 part 0.5% w/v copper(II)sulphate, 1% w/v sodium citrate solution), incubated at room temperature for 10 minutes, added to 100 μL 1 M folinciocalteau solution, and incubated for 30 minutes at room temperature before being transferred to cuvettes for determination of protein concentration using a CamSpec-M330 spectrometer. Samples of equal protein concentration were then loaded onto 6-15% acrylamide gels and underwent electrophoresis, followed by transfer onto PVDF (polyvinylidene fluoride) membranes. Membranes were blocked with 5% milk/TBS-T solution (5%w/v Marvel milk powder in 1X TBS-T solution comprising 50 mM Tris, 150 Mm sodium chloride, 0.364% v/v hydrochloric acid, 0.5% v/v Tween-20) and probed overnight at 4°C for specific proteins of interest. 
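The Lowry read-out described above is ultimately a standard-curve calculation: absorbance is converted to concentration, the 5 μL-into-95 μL pre-dilution (20-fold) is corrected for, and loading volumes are chosen so that each lane receives the same amount of protein. The Python sketch below illustrates that arithmetic; the BSA standard values, sample absorbances and the 20 μg target load are made-up numbers, not data from the study, and only the 20-fold pre-dilution is taken from the protocol.

```python
# Minimal sketch, assuming illustrative BSA standards and a hypothetical 20 ug load:
# convert Lowry absorbance readings to lysate concentrations via a standard curve
# and equalise the amount of protein loaded per gel lane. Only the 5 uL-into-95 uL
# (20x) pre-dilution comes from the protocol above; every other number is made up.
def fit_line(xs, ys):
    """Ordinary least-squares fit y = m*x + c (used here to invert the standard curve)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

std_abs  = [0.02, 0.11, 0.21, 0.40, 0.79]   # absorbance of BSA standards (illustrative)
std_conc = [0, 50, 100, 200, 400]           # their concentrations in ug/mL (illustrative)
m, c = fit_line(std_abs, std_conc)          # concentration as a function of absorbance

def lysate_conc_ug_per_ul(absorbance, dilution_factor=20):
    """Protein concentration of the undiluted lysate (ug/uL)."""
    return (m * absorbance + c) * dilution_factor / 1000.0

for name, a in {"untreated": 0.35, "Nutlin-3 treated": 0.30}.items():
    conc = lysate_conc_ug_per_ul(a)
    print(f"{name}: {conc:.2f} ug/uL -> load {20 / conc:.1f} uL for a 20 ug lane")
```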
Standard primary antibody dilutions were 1:1000 in 5% milk/TBS-T solution, except for CHK2 (1:100 in 5% BSA/TBS-T solution), tubulin and actin (Merck Chemicals, Nottinghamshire, UK), used at 1:17000 in 5% milk/TBS-T solution. Standard secondary antibody dilutions were 1:2000 prepared in 5% milk/TBS-T solution. Chemiluminescence was detected using Lumiglo reagent (New England Biolabs, Hertfordshire, UK) according to manufacturer's instructions, and hyperfilms (GE Healthcare, Buckinghamshire, UK) were developed using an Amersham SRX100A Hyperprocessor.\nFollowing varying length treatments with 10 μM Nutlin-3 or 100 μM Etoposide, cells were collected and lysed in 2X laemmli lysis buffer (4% w/v SDS, 20% v/v glycerol, 120 mM tris pH6.8). A 5 μL volume of each sample was diluted in 95 μL dH20, and added to 1 mL Lowry solution (50 parts 2% w/v sodium carbonate, 0.1 M sodium hydroxide solution, to 1 part 0.5% w/v copper(II)sulphate, 1% w/v sodium citrate solution), incubated at room temperature for 10 minutes, added to 100 μL 1 M folinciocalteau solution, and incubated for 30 minutes at room temperature before being transferred to cuvettes for determination of protein concentration using a CamSpec-M330 spectrometer. Samples of equal protein concentration were then loaded onto 6-15% acrylamide gels and underwent electrophoresis, followed by transfer onto PVDF (polyvinylidene fluoride) membranes. Membranes were blocked with 5% milk/TBS-T solution (5%w/v Marvel milk powder in 1X TBS-T solution comprising 50 mM Tris, 150 Mm sodium chloride, 0.364% v/v hydrochloric acid, 0.5% v/v Tween-20) and probed overnight at 4°C for specific proteins of interest. Standard primary antibody dilutions were 1:1000 in 5% milk/TBS-T solution, except for CHK2 (1:100 in 5% BSA/TBS-T solution), tubulin and actin (Merck Chemicals, Nottinghamshire, UK), used at 1:17000 in 5% milk/TBS-T solution. Standard secondary antibody dilutions were 1:2000 prepared in 5% milk/TBS-T solution. Chemiluminescence was detected using Lumiglo reagent (New England Biolabs, Hertfordshire, UK) according to manufacturer's instructions, and hyperfilms (GE Healthcare, Buckinghamshire, UK) were developed using an Amersham SRX100A Hyperprocessor.\n[SUBTITLE] Flow Cytometry [SUBSECTION] After treatment with 100 μM Etoposide or 10 μM Nutlin-3 for various time periods, cells were trypsinised using 0.05% EDTA-free trypsin (Invitrogen, Renfrewshire, UK), collected and centrifuged, and the pellets resuspended in 70% ethanol before being stored for 24 hours at -20°C. Cells were later centrifuged, washed with 1X PBS and resuspended in 50 μg/mL Propidium Iodide/Rnase A solution before cell cycle distribution was assessed on a Beckman Coulter Cytomics FC500 flow cytometer.\nAfter treatment with 100 μM Etoposide or 10 μM Nutlin-3 for various time periods, cells were trypsinised using 0.05% EDTA-free trypsin (Invitrogen, Renfrewshire, UK), collected and centrifuged, and the pellets resuspended in 70% ethanol before being stored for 24 hours at -20°C. 
Cells were later centrifuged, washed with 1X PBS and resuspended in 50 μg/mL Propidium Iodide/Rnase A solution before cell cycle distribution was assessed on a Beckman Coulter Cytomics FC500 flow cytometer.\n[SUBTITLE] Immunofluorescence [SUBSECTION] Cells in 6-well plates were treated with 100 μM Etoposide or 10 μM Nutlin-3 for varying time periods before being fixed using 4% v/v paraformaldehyde solution, permeabilised with 0.5% v/v Triton-X100 solution, washed in 1X PBS and incubated overnight at 4°C with various antibodies prepared in 5% milk/TBS-T solution. Cells were then washed with 1X PBS, incubated for 2 hours with a 1:250 dilution of goat anti-rabbit Dylight488 antibody (New England Biolabs, Hertfordshire, UK) prepared in 1X PBS, before being washed once again with 1X PBS. Wells were then treated with one drop of Vectashield mounting media containing DAPI (Vector Laboratories, Cambridgeshire, UK), covered with glass coverslips and sealed with clear nail polish. Cells were then observed at 40× magnification using a Zeiss LSM500 confocal microscope and analysed using LSM Image Browser software (Carl Zeiss, Oberkochen, Germany).\nCells in 6-well plates were treated with 100 μM Etoposide or 10 μM Nutlin-3 for varying time periods before being fixed using 4% v/v paraformaldehyde solution, permeabilised with 0.5% v/v Triton-X100 solution, washed in 1X PBS and incubated overnight at 4°C with various antibodies prepared in 5% milk/TBS-T solution. Cells were then washed with 1X PBS, incubated for 2 hours with a 1:250 dilution of goat anti-rabbit Dylight488 antibody (New England Biolabs, Hertfordshire, UK) prepared in 1X PBS, before being washed once again with 1X PBS. Wells were then treated with one drop of Vectashield mounting media containing DAPI (Vector Laboratories, Cambridgeshire, UK), covered with glass coverslips and sealed with clear nail polish. Cells were then observed at 40× magnification using a Zeiss LSM500 confocal microscope and analysed using LSM Image Browser software (Carl Zeiss, Oberkochen, Germany).", "Human colorectal cancer cell lines (HCT116p53-/- and HCT116p53+/+) were obtained from Professor Galina Selivanova (Karolinska Institute, Stockholm, Sweden), and mouse embryonic fibroblast (MEF) cells deficient in MDM2 (MEFMDM2-/-) were obtained from Professor Guillermina Lozano (MD Anderson Cancer Centre, University of Texas, USA). All cells were genotyped before arrival using where necessary primers specific to the deleted alleles. Cell lines were authenticated upon receipt using immunoblotting. Cells were sustained in Dulbecco's Modified Eagle's Medium, supplemented with 10% fetal bovine serum, 1% Penicillin/Streptomycin/L-Glutamine and 1% Amphotericin B (Invitrogen, Renfrewshire, UK). Cells were incubated at 37°C in a humidified atmosphere containing 5% CO2. Cells were passaged twice weekly, and were seeded at 1 × 105cells/mL during all experiments.", "Following varying length treatments with 10 μM Nutlin-3 or 100 μM Etoposide, cells were collected and lysed in 2X laemmli lysis buffer (4% w/v SDS, 20% v/v glycerol, 120 mM tris pH6.8). 
A 5 μL volume of each sample was diluted in 95 μL dH2O, and added to 1 mL Lowry solution (50 parts 2% w/v sodium carbonate, 0.1 M sodium hydroxide solution, to 1 part 0.5% w/v copper(II) sulphate, 1% w/v sodium citrate solution), incubated at room temperature for 10 minutes, added to 100 μL 1 M Folin-Ciocalteu solution, and incubated for 30 minutes at room temperature before being transferred to cuvettes for determination of protein concentration using a CamSpec-M330 spectrometer. Samples of equal protein concentration were then loaded onto 6-15% acrylamide gels and underwent electrophoresis, followed by transfer onto PVDF (polyvinylidene fluoride) membranes. Membranes were blocked with 5% milk/TBS-T solution (5% w/v Marvel milk powder in 1X TBS-T solution comprising 50 mM Tris, 150 mM sodium chloride, 0.364% v/v hydrochloric acid, 0.5% v/v Tween-20) and probed overnight at 4°C for specific proteins of interest. Standard primary antibody dilutions were 1:1000 in 5% milk/TBS-T solution, except for CHK2 (1:100 in 5% BSA/TBS-T solution), tubulin and actin (Merck Chemicals, Nottinghamshire, UK), used at 1:17000 in 5% milk/TBS-T solution. Standard secondary antibody dilutions were 1:2000 prepared in 5% milk/TBS-T solution. Chemiluminescence was detected using Lumiglo reagent (New England Biolabs, Hertfordshire, UK) according to manufacturer's instructions, and hyperfilms (GE Healthcare, Buckinghamshire, UK) were developed using an Amersham SRX100A Hyperprocessor.", "After treatment with 100 μM Etoposide or 10 μM Nutlin-3 for various time periods, cells were trypsinised using 0.05% EDTA-free trypsin (Invitrogen, Renfrewshire, UK), collected and centrifuged, and the pellets resuspended in 70% ethanol before being stored for 24 hours at -20°C. Cells were later centrifuged, washed with 1X PBS and resuspended in 50 μg/mL Propidium Iodide/RNase A solution before cell cycle distribution was assessed on a Beckman Coulter Cytomics FC500 flow cytometer.", "Cells in 6-well plates were treated with 100 μM Etoposide or 10 μM Nutlin-3 for varying time periods before being fixed using 4% v/v paraformaldehyde solution, permeabilised with 0.5% v/v Triton-X100 solution, washed in 1X PBS and incubated overnight at 4°C with various antibodies prepared in 5% milk/TBS-T solution. Cells were then washed with 1X PBS, incubated for 2 hours with a 1:250 dilution of goat anti-rabbit Dylight488 antibody (New England Biolabs, Hertfordshire, UK) prepared in 1X PBS, before being washed once again with 1X PBS. Wells were then treated with one drop of Vectashield mounting media containing DAPI (Vector Laboratories, Cambridgeshire, UK), covered with glass coverslips and sealed with clear nail polish. Cells were then observed at 40× magnification using a Zeiss LSM500 confocal microscope and analysed using LSM Image Browser software (Carl Zeiss, Oberkochen, Germany).", "[SUBTITLE] Nutlin-3 induces stabilisation of p53 and activation of p53 target proteins [SUBSECTION] In order to compare the efficiency of Nutlin-3-dependent p53 stabilisation with that of known DNA-damaging agents, we treated human colorectal cancer cells (HCT116p53+/+) with Etoposide (100 μM) or Nutlin-3 (10 μM). Treatment of HCT116p53+/+ cells with these different agents led to stabilisation of p53 from 2 hours. Stabilisation of p53 was still apparent after 16 hours in cells treated with either Etoposide or Nutlin-3 (Figure 2A). 
As expected, no p53 was observed in HCT116p53 -/- cells treated with any of the two reagents throughout the time course examined (Figure 2C).\nNutlin-3 induces stabilisation and phosphorylation of p53, and activates key p53 target proteins. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto), or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse MDM2 and p21 activation. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading.\nGiven that we observed stabilisation of p53 in response to Nutlin-3, we sought to identify whether Nutlin-3 caused activation of p53 target proteins; MDM2 and p21. Indeed, following 4, 8 or 16 hour treatments with Nutlin-3, activation of p21 was observed to be similar to that induced by Etoposide. Additionally, Nutlin-3-induced activation of MDM2 greatly exceeded that resulting from Etoposide treatment throughout the time course studied (Figure 2B).\nIn order to compare the efficiency of Nutlin-3-dependent p53 stabilisation with that of known DNA-damaging agents, we treated human colorectal cancer cells (HCT116p53+/+) with Etoposide (100 μM) or Nutlin-3 (10 μM). Treatment of HCT116p53+/+ cells with these different agents led to stabilisation of p53 from 2 hours. Stabilisation of p53 was still apparent after 16 hours in cells treated with either Etoposide or Nutlin-3 (Figure 2A). As expected, no p53 was observed in HCT116p53 -/- cells treated with any of the two reagents throughout the time course examined (Figure 2C).\nNutlin-3 induces stabilisation and phosphorylation of p53, and activates key p53 target proteins. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto), or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse MDM2 and p21 activation. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading.\nGiven that we observed stabilisation of p53 in response to Nutlin-3, we sought to identify whether Nutlin-3 caused activation of p53 target proteins; MDM2 and p21. Indeed, following 4, 8 or 16 hour treatments with Nutlin-3, activation of p21 was observed to be similar to that induced by Etoposide. 
Additionally, Nutlin-3-induced activation of MDM2 greatly exceeded that resulting from Etoposide treatment throughout the time course studied (Figure 2B).\n[SUBTITLE] Nutlin-3 induces phosphorylation of p53 at key serine residues and activates several important DDR mediators [SUBSECTION] Following the observed stabilisation of p53 in HCT116p53+/+ cells induced by treatment with Etoposide or Nutlin-3, we next sought to investigate whether or not the observed Nutlin-3-dependent stabilisation of p53 was a result of Nutlin-3-induced p53 phosphorylation. Therefore, the phosphorylation status of various key serine residues known to be phosphorylated following DNA-damage was examined in response to the same two reagents over a 24 hours time-course. Indeed, phosphorylation of Ser15, 20 and 37 was observed at both 2 and 6 hour time points in response to Etoposide and Nutlin-3 treatment (Figure 3A). However a marked decrease in p53 phosphorylation was observed following Nutlin-3 treatment at 24 hour point (Additional file 1).\nNutlin-3 leads to phosphorylation of several important DDR mediators. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse phosphorylation of p53 at Ser15, Ser20 and Ser37. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981), BRCA1 (Ser1542) and CHK2 (Thr68) were analysed using immunoblotting. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981) and BRCA1 (Ser1542) were analysed using immunoblotting. Actin levels were used to assess equal loading.\nSince it is well established that Etoposide-dependent phosphorylation of p53 is a response to DNA-damage generated by this agent, we went on to investigate whether the unexpected Nutlin-3-induced p53 phosphorylation was due to a Nutlin-3-mediated DDR. Therefore, we assessed the affect of Nutlin-3 on the activation of CHK2 and ATM which are required for DNA-damage-dependent phosphorylation and activation of p53. Indeed, phosphorylation of ATM and CHK2 were observed in HCT116p53+/+ cells following 1 hour treatments with either Etoposide or Nutlin-3, as was phosphorylation of BRCA1, an ATM target protein required for the ATM-dependent DDR (Figure 3B). Furthermore, in HCT116p53-/- cells, phosphorylation of both ATM and its target protein BRCA1 was also noted following a 1 hour treatment with both Nutlin-3 and Etoposide (Figure 3C).\nFollowing the observed stabilisation of p53 in HCT116p53+/+ cells induced by treatment with Etoposide or Nutlin-3, we next sought to investigate whether or not the observed Nutlin-3-dependent stabilisation of p53 was a result of Nutlin-3-induced p53 phosphorylation. Therefore, the phosphorylation status of various key serine residues known to be phosphorylated following DNA-damage was examined in response to the same two reagents over a 24 hours time-course. Indeed, phosphorylation of Ser15, 20 and 37 was observed at both 2 and 6 hour time points in response to Etoposide and Nutlin-3 treatment (Figure 3A). 
However, a marked decrease in p53 phosphorylation was observed following Nutlin-3 treatment at the 24 hour time point (Additional file 1).\nNutlin-3 leads to phosphorylation of several important DDR mediators. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse phosphorylation of p53 at Ser15, Ser20 and Ser37. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981), BRCA1 (Ser1542) and CHK2 (Thr68) were analysed using immunoblotting. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981) and BRCA1 (Ser1542) were analysed using immunoblotting. Actin levels were used to assess equal loading.\nSince it is well established that Etoposide-dependent phosphorylation of p53 is a response to DNA-damage generated by this agent, we went on to investigate whether the unexpected Nutlin-3-induced p53 phosphorylation was due to a Nutlin-3-mediated DDR. Therefore, we assessed the effect of Nutlin-3 on the activation of CHK2 and ATM, which are required for DNA-damage-dependent phosphorylation and activation of p53. Indeed, phosphorylation of ATM and CHK2 was observed in HCT116p53+/+ cells following 1 hour treatments with either Etoposide or Nutlin-3, as was phosphorylation of BRCA1, an ATM target protein required for the ATM-dependent DDR (Figure 3B). Furthermore, in HCT116p53-/- cells, phosphorylation of both ATM and its target protein BRCA1 was also noted following a 1 hour treatment with both Nutlin-3 and Etoposide (Figure 3C).\n[SUBTITLE] Nutlin-3 induces G1/S cell cycle arrest [SUBSECTION] Given our findings that Nutlin-3 treatment induced p53 stabilisation and phosphorylation, as well as the activation of key DDR proteins and p53 target proteins known to be involved in cell cycle control, we went on to assess whether Nutlin-3 was capable of inducing cell cycle checkpoints. Following treatment with either Nutlin-3 or Etoposide, HCT116p53+/+ and p53-/- cells were analysed by flow cytometry. While treatment of HCT116p53+/+ cells with Nutlin-3 led to G1/S arrest, treatment with Etoposide led to G2/M arrest (Figure 4A, Additional file 2 and Table 1). In contrast, HCT116p53-/- cells were observed to arrest in G2/M in response to both Nutlin-3 and Etoposide (Figure 4A, Additional file 2 and Table 2). Furthermore, and in contrast to HCT116p53+/+, an increase in the subG1 cell population was observed in HCT116p53-/- following Nutlin-3 treatment (Figure 4A, Additional file 2 and Tables 1 and 2).\nNutlin-3 induces p53-independent cell cycle checkpoint controls. HCT116p53+/+ and HCT116p53-/- cells were treated with either 0, 10, 15 or 20 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto). After 18 hours, cell cycle distribution was assessed using flow cytometry. (A) Chart showing percentage of HCT116p53+/+ cells in either G1 or G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above. (B) Chart showing percentage of HCT116p53-/- cells in both G1 and G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above.\nRepresentative percentages of HCT116p53+/+ cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.\nRepresentative percentages of HCT116p53-/- cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.\nGiven our findings that Nutlin-3 treatment induced p53 stabilisation and phosphorylation, as well as the activation of key DDR proteins and p53 target proteins known to be involved in cell cycle control, we went on to assess whether Nutlin-3 was capable of inducing cell cycle checkpoints. Following treatment with either Nutlin-3 or Etoposide, HCT116p53+/+ and p53-/- cells were analysed by flow cytometry. While treatment of HCT116p53+/+ cells with Nutlin-3 led to G1/S arrest, treatment with Etoposide led to G2/M arrest (Figure 4A, Additional file 2 and Table 1). In contrast, HCT116p53-/- cells were observed to arrest in G2/M in response to both Nutlin-3 and Etoposide (Figure 4A, Additional file 2 and Table 2). Furthermore, and in contrast to HCT116p53+/+, an increase in the subG1 cell population was observed in HCT116p53-/- following Nutlin-3 treatment (Figure 4A, Additional file 2 and Tables 1 and 2).\nNutlin-3 induces p53-independent cell cycle checkpoint controls. HCT116p53+/+ and HCT116p53-/- cells were treated with either 0, 10, 15 or 20 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto). After 18 hours, cell cycle distribution was assessed using flow cytometry. (A) Chart showing percentage of HCT116p53+/+ cells in either G1 or G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above. (B) Chart showing percentage of HCT116p53-/- cells in both G1 and G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above.\nRepresentative percentages of HCT116p53+/+ cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.\nRepresentative percentages of HCT116p53-/- cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.\n[SUBTITLE] Nutlin-3 induces H2AX phosphorylation and foci formation [SUBSECTION] One of the first proteins phosphorylated and activated in response to DNA-damage is the histone variant, H2AX [24]. Hence, we sought to investigate whether the observed Nutlin-3-dependent activation of ATM and CHK2 was due to a Nutlin-3-mediated DDR. Therefore, HCT116p53+/+ cells were treated with either Etoposide or Nutlin-3, and H2AX phosphorylation was checked both 1 and 4 hours following treatment. Indeed, H2AX phosphorylation was induced in response to both Etoposide and Nutlin-3 treatment (Figure 5A).\nNutlin-3 induces H2AX phosphorylation and γH2AX foci in HCT116p53+/+ cells. (A) HCT116p53+/+ cells were left untreated (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for 1 or 4 hours before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. 
(B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53+/+ cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nWe next sought to establish whether the observed Nutlin-3-induced activation of H2AX phosphorylation was indicative of γH2AX foci formation, an event recognised to occur early on in the DDR [24]. Indeed, treatment of HCT116p53+/+ cells with Etoposide or Nutlin-3 was observed to induce γH2AX foci formation from as early as 30 minutes following treatment. Foci formation was most notable in response to Etoposide treatment, but was nevertheless clearly visible in response to treatment with Nutlin-3 (Figure 5B).\nOne of the first proteins phosphorylated and activated in response to DNA-damage is the histone variant, H2AX [24]. Hence, we sought to investigate whether the observed Nutlin-3-dependent activation of ATM and CHK2 was due to a Nutlin-3-mediated DDR. Therefore, HCT116p53+/+ cells were treated with either Etoposide or Nutlin-3, and H2AX phosphorylation was checked both 1 and 4 hours following treatment. Indeed, H2AX phosphorylation was induced in response to both Etoposide and Nutlin-3 treatment (Figure 5A).\nNutlin-3 induces H2AX phosphorylation and γH2AX foci in HCT116p53+/+ cells. (A) HCT116p53+/+ cells were left untreated (unt) or treated with 100 μM Etoposide (Eto), 10 μM Nutlin-3 or (Nut) for 1 or 4 hours before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53+/+ cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nWe next sought to establish whether the observed Nutlin-3-induced activation of H2AX phosphorylation was indicative of γH2AX foci formation, an event recognised to occur early on in the DDR [24]. Indeed, treatment of HCT116p53+/+ cells with Etoposide or Nutlin-3 was observed to induce γH2AX foci formation from as early as 30 minutes following treatment. Foci formation was most notable in response to Etoposide treatment, but was nevertheless clearly visible in response to treatment with Nutlin-3 (Figure 5B).\n[SUBTITLE] Nutlin-3 induced responses are independent of p53 and Nutlin-3- mediated inhibition of MDM2 [SUBSECTION] We next sought to clarify the effect of p53 status on the ability of Nutlin-3 to induce the DDR. We therefore treated HCT116p53-/- cells with Etoposide or Nutlin-3 and assessed the phosphorylation of γH2AX. Here, increases in γH2AX phosphorylation were observed in HCT116p53-/- cells treated with either Etoposide or Nutlin-3 (Figure 6A). Furthermore, formation of γH2AX foci were clearly visible in HCT116p53-/- cells following 30 minutes treatment with Etoposide, an effect which was comparable in cells treated with Nutlin-3 for the same time period (Figure 6B).\nNutlin-3 induces H2AX phosphorylation and _H2AX foci formation independent of both p53 and MDM2 status. (A) HCT116p53-/- cells were left untreated (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for 1 hour before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes. 
(C) Representative confocal microscopy images of γH2AX foci formation in MEFMDM2-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nHaving established that Nutlin-3 was capable of inducing DDR independent of p53 status, we went on to assess whether the ability of Nutlin-3 to induce DDR was dependent on its ability to inhibit MDM2. Here we assessed the effect of Nutlin-3 on formation of γH2AX foci in mouse embryonic fibroblasts deficient in MDM2 (MEFMDM2-/-). We observed clear formation of γH2AX foci after 30 minutes Nutlin-3 treatment, similar to that induced in cells treated with Etoposide for the same length of time (Figure 6C). Furthermore, in MEFMDM2-/- cells, phosphorylation of ATM, ChK2, BRCA1 and γH2AX was noted following a 1 hour treatment with Nutlin-3, and markedly decreased by 24 hours (Additional file 3).\nWe next sought to clarify the effect of p53 status on the ability of Nutlin-3 to induce the DDR. We therefore treated HCT116p53-/- cells with Etoposide or Nutlin-3 and assessed the phosphorylation of γH2AX. Here, increases in γH2AX phosphorylation were observed in HCT116p53-/- cells treated with either Etoposide or Nutlin-3 (Figure 6A). Furthermore, formation of γH2AX foci were clearly visible in HCT116p53-/- cells following 30 minutes treatment with Etoposide, an effect which was comparable in cells treated with Nutlin-3 for the same time period (Figure 6B).\nNutlin-3 induces H2AX phosphorylation and _H2AX foci formation independent of both p53 and MDM2 status. (A) HCT116p53-/- cells were left untreated (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for 1 hour before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes. (C) Representative confocal microscopy images of γH2AX foci formation in MEFMDM2-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nHaving established that Nutlin-3 was capable of inducing DDR independent of p53 status, we went on to assess whether the ability of Nutlin-3 to induce DDR was dependent on its ability to inhibit MDM2. Here we assessed the effect of Nutlin-3 on formation of γH2AX foci in mouse embryonic fibroblasts deficient in MDM2 (MEFMDM2-/-). We observed clear formation of γH2AX foci after 30 minutes Nutlin-3 treatment, similar to that induced in cells treated with Etoposide for the same length of time (Figure 6C). Furthermore, in MEFMDM2-/- cells, phosphorylation of ATM, ChK2, BRCA1 and γH2AX was noted following a 1 hour treatment with Nutlin-3, and markedly decreased by 24 hours (Additional file 3).", "In order to compare the efficiency of Nutlin-3-dependent p53 stabilisation with that of known DNA-damaging agents, we treated human colorectal cancer cells (HCT116p53+/+) with Etoposide (100 μM) or Nutlin-3 (10 μM). Treatment of HCT116p53+/+ cells with these different agents led to stabilisation of p53 from 2 hours. Stabilisation of p53 was still apparent after 16 hours in cells treated with either Etoposide or Nutlin-3 (Figure 2A). As expected, no p53 was observed in HCT116p53 -/- cells treated with any of the two reagents throughout the time course examined (Figure 2C).\nNutlin-3 induces stabilisation and phosphorylation of p53, and activates key p53 target proteins. 
(A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto), or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse MDM2 and p21 activation. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyze p53 stabilisation. Actin levels were used to assess equal loading.\nGiven that we observed stabilisation of p53 in response to Nutlin-3, we sought to identify whether Nutlin-3 caused activation of p53 target proteins; MDM2 and p21. Indeed, following 4, 8 or 16 hour treatments with Nutlin-3, activation of p21 was observed to be similar to that induced by Etoposide. Additionally, Nutlin-3-induced activation of MDM2 greatly exceeded that resulting from Etoposide treatment throughout the time course studied (Figure 2B).", "Following the observed stabilisation of p53 in HCT116p53+/+ cells induced by treatment with Etoposide or Nutlin-3, we next sought to investigate whether or not the observed Nutlin-3-dependent stabilisation of p53 was a result of Nutlin-3-induced p53 phosphorylation. Therefore, the phosphorylation status of various key serine residues known to be phosphorylated following DNA-damage was examined in response to the same two reagents over a 24 hours time-course. Indeed, phosphorylation of Ser15, 20 and 37 was observed at both 2 and 6 hour time points in response to Etoposide and Nutlin-3 treatment (Figure 3A). However a marked decrease in p53 phosphorylation was observed following Nutlin-3 treatment at 24 hour point (Additional file 1).\nNutlin-3 leads to phosphorylation of several important DDR mediators. (A) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse phosphorylation of p53 at Ser15, Ser20 and Ser37. Actin levels were used to assess equal loading. (B) HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981), BRCA1 (Ser1542) and CHK2 (Thr68) were analysed using immunoblotting. Actin levels were used to assess equal loading. (C) HCT116p53-/- cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 or 100 μM of Etoposide (Eto) for 1 hour before the phosphorylation of ATM (Ser1981) and BRCA1 (Ser1542) were analysed using immunoblotting. Actin levels were used to assess equal loading.\nSince it is well established that Etoposide-dependent phosphorylation of p53 is a response to DNA-damage generated by this agent, we went on to investigate whether the unexpected Nutlin-3-induced p53 phosphorylation was due to a Nutlin-3-mediated DDR. Therefore, we assessed the affect of Nutlin-3 on the activation of CHK2 and ATM which are required for DNA-damage-dependent phosphorylation and activation of p53. Indeed, phosphorylation of ATM and CHK2 were observed in HCT116p53+/+ cells following 1 hour treatments with either Etoposide or Nutlin-3, as was phosphorylation of BRCA1, an ATM target protein required for the ATM-dependent DDR (Figure 3B). 
Furthermore, in HCT116p53-/- cells, phosphorylation of both ATM and its target protein BRCA1 was also noted following a 1 hour treatment with both Nutlin-3 and Etoposide (Figure 3C).", "Given our findings that Nutlin-3 treatment induced p53 stabilisation and phosphorylation, as well as the activation of key DDR proteins and p53 target proteins known to be involved in cell cycle control, we went on to assess whether Nutlin-3 was capable of inducing cell cycle checkpoints. Following treatment with either Nutlin-3 or Etoptoside, HCT116p53+/+ andp53-/- cells were analysed by flow cytometry. While HCT116p53+/+ treatment with Nutlin-3 led to G1/S arrest, treatment with Etoposide led to G2/M arrest (Figure 4A, Additional file 2 and Table 1). In contrast HCT116 p53-/- cells were observed to arrest in G2/M in response to both Nutlin-3 and Etoposide (Figure 4A, Additional file 2 and Table 2). Furthermore and in contrast to HCT116p53+/+, an increase in subG1 cell population was observed in HCT116p53-/-following Nutlin-3 treatment (Figure 4A, Additional file 2 and Tables 1 and 2).\nNutlin-3 induces p53-independent cell cycle checkpoint controls. HCT116p53+/+ and HCT116p53-/- cells were treated with either 0, 10, 15 or 20 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto). After 18 hours, cell cycle distribution was assessed using flow cytometry. (A) Chart showing percentage of HCT116p53+/+ cells in either G1 or G2 cell cycle population following either Nutlin-3 or etoposide treatment as described above. (B) Chart showing percentage of HCT116p53-/- cells in both G1 and G2 cell cycle population following either Nutlin-3 or Etoposide treatment as described above.\nRepresentative percentages of HCT11 p53+/+ cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.\nRepresentative percentages of HCT11 p53-/- cells in the different cell cycle phases following Nutlin-3 or Etoposide treatment as described in Additional file 2.", "One of the first proteins phosphorylated and activated in response to DNA-damage is the histone variant, H2AX [24]. Hence, we sought to investigate whether the observed Nutlin-3-dependent activation of ATM and CHK2 was due to a Nutlin-3-mediated DDR. Therefore, HCT116p53+/+ cells were treated with either Etoposide or Nutlin-3, and H2AX phosphorylation was checked both 1 and 4 hours following treatment. Indeed, H2AX phosphorylation was induced in response to both Etoposide and Nutlin-3 treatment (Figure 5A).\nNutlin-3 induces H2AX phosphorylation and γH2AX foci in HCT116p53+/+ cells. (A) HCT116p53+/+ cells were left untreated (unt) or treated with 100 μM Etoposide (Eto), 10 μM Nutlin-3 or (Nut) for 1 or 4 hours before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53+/+ cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nWe next sought to establish whether the observed Nutlin-3-induced activation of H2AX phosphorylation was indicative of γH2AX foci formation, an event recognised to occur early on in the DDR [24]. Indeed, treatment of HCT116p53+/+ cells with Etoposide or Nutlin-3 was observed to induce γH2AX foci formation from as early as 30 minutes following treatment. 
Foci formation was most notable in response to Etoposide treatment, but was nevertheless clearly visible in response to treatment with Nutlin-3 (Figure 5B).", "We next sought to clarify the effect of p53 status on the ability of Nutlin-3 to induce the DDR. We therefore treated HCT116p53-/- cells with Etoposide or Nutlin-3 and assessed the phosphorylation of γH2AX. Here, increases in γH2AX phosphorylation were observed in HCT116p53-/- cells treated with either Etoposide or Nutlin-3 (Figure 6A). Furthermore, formation of γH2AX foci was clearly visible in HCT116p53-/- cells following 30 minutes treatment with Etoposide, an effect which was comparable in cells treated with Nutlin-3 for the same time period (Figure 6B).\nNutlin-3 induces H2AX phosphorylation and γH2AX foci formation independent of both p53 and MDM2 status. (A) HCT116p53-/- cells were left untreated (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for 1 hour before the phosphorylation of H2AX (Ser139) was assessed using immunoblotting. Actin levels were used to assess equal loading. (B) Representative confocal microscopy images of γH2AX foci formation in HCT116p53-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes. (C) Representative confocal microscopy images of γH2AX foci formation in MEFMDM2-/- cells treated with either 100 μM Etoposide or 10 μM Nutlin-3 for 30 minutes.\nHaving established that Nutlin-3 was capable of inducing DDR independent of p53 status, we went on to assess whether the ability of Nutlin-3 to induce DDR was dependent on its ability to inhibit MDM2. Here we assessed the effect of Nutlin-3 on formation of γH2AX foci in mouse embryonic fibroblasts deficient in MDM2 (MEFMDM2-/-). We observed clear formation of γH2AX foci after 30 minutes Nutlin-3 treatment, similar to that induced in cells treated with Etoposide for the same length of time (Figure 6C). Furthermore, in MEFMDM2-/- cells, phosphorylation of ATM, CHK2, BRCA1 and γH2AX was noted following a 1 hour treatment with Nutlin-3, and markedly decreased by 24 hours (Additional file 3).", "Numerous serine and threonine residues (mainly those located in the N-terminal part of the p53 protein) are targets for phosphorylation in response to a diverse range of stress factors. Following DNA-damage for instance, various protein kinases including ATM and CHK2 are activated and lead to p53 phosphorylation, subsequently resulting in stabilisation and activation of p53 [5-7].\nThe requirement of these phosphorylation events for the stabilisation and activation of p53 remains a somewhat controversial topic, as do the consequences of controlling the p53 pathway using the relatively newly developed MDM2 antagonists such as Nutlin-3. For example, there is debate as to whether MDM2 antagonism may affect p53 protein modifications or functions. A study carried out by Thompson et al using Nutlin-3 showed that phosphorylation of p53 on key serine residues was not necessary to bring about its stabilisation and activation. Indeed, whilst Thompson et al still observed stabilisation and activation of p53, no phosphorylation was detected following Nutlin-3 treatment [22].\nIn stark contrast, Drakos et al have since shown Nutlin-3-dependent induction of p53 phosphorylation at Ser15 in SP-53, Z-138, M-1 and Granta-519 MCL cell lines [23]. 
Nutlin-3-dependent p53 phosphorylation at Ser15 has also been observed in normal CD19+ B-cells, peripheral blood mononuclear cells (PBMCs), bone marrow mononuclear cells (BMMCs) and B-CLL cells to a level similar to that noted in response to fludarabine treatment, and in excess of that resulting from treatment with the protease inhibitor clasto-latacystin [25]. Indeed in the current study, we observed Nutlin-3-induced stabilisation and activation of p53 at levels comparable with that induced by the genotoxic DNA topoisomerase II inhibitor; Etoposide (Figure 2A and 2B). We also detected Nutlin-3-induced phosphorylation of p53 at Ser15, as well as at two other key serine residues; Ser20 and Ser37 (Figure 3A), indicating that Nutlin-3 does not only disrupt the interaction between MDM2 and p53, but could also play a role in activating DDR pathways resulting in p53 phosphorylation, and subsequent activation of downstream target proteins involved in for example, cell cycle checkpoint control. Our results are in sharp contrast to the previous observations of Thompson et al [22]. In the current study, we checked p53 phosphorylation at earlier time points following Nutlin-3 treatment (as early as 2 hours, see Figure 2), however data in the Thompson et al study were obtained after 24 hour treatments with Nutlin-3, which could explain why such a difference is seen between the two studies. Indeed we also observed a marked decrease in these phosphorylations at 24 hours in response to Nutlin-3 (Additional file 1).\nSince the activation of ATM and its downstream substrate CHK2 are well established as being responsible for DNA-damage-dependent p53 phosphorylation [5-7], we went on to investigate whether the observed Nutlin-3-dependent p53 phosphorylation was as a result of activation of these two kinases. Indeed, to our knowledge, we show for the first time that Nutlin-3 treatment triggers phosphorylation of ATM (Ser1981) and CHK2 (Thr68) in HCT116p53+/+ cells (Figure 3B), demonstrating that Nutlin-3-mediated p53 phosphorylation is due to Nutlin-3 behaving as an activator of ATM and CHK2. Indeed our observation that Nutlin-3 also led to phosphorylation of a well established ATM target; BRCA1 (Ser1524) further supports a role for Nutlin-3 as an activator of the ATM kinase. Moreover, the phosphorylation of ATM and its target protein BRCA1 in HCT116p53-/- cells (Figure 3C) suggests that the Nutlin-3-mediated activation of ATM and the subsequent phosphorylation of BRCA1 are triggered independently of p53.\nFollowing DNA-damage, it is known that cells activate checkpoints to temporarily halt the cell cycle [26], allowing for DNA repair or destruction of the damaged cell by apoptosis. The G1-S and intra-S-phase checkpoints regulate transition into, and progression through S phase in response to DNA-damage, while the G2-M checkpoint regulates entry into mitosis [26]. Since ATM and CHK2 are amongst the main activators of these checkpoints in response to DNA-damage, we sought to determine whether cell cycle checkpoints could be triggered by Nutlin-3 treatment. Whilst Etoposide led to clear G2/M arrest, Nutlin-3 treatment led to marked G1/S arrest in HCT116p53+/+ cells (Figure 4A), in keeping with the established role of p53 in triggering and maintaining G1/S arrest [27].\nConversely, in HCT116p53-/- cells, Nutlin-3 led to G2/M arrest (Figure 4B), demonstrating Nutlin-3-mediated p53-independent induction of the G2/M cell cycle checkpoint, similar to that observed following Etoposide treatment. 
In addition, an increase in the sub-G1 cell population was also observed. Since sub-G1 is indicative of apoptotic cells, this suggests that Nutlin-3 may trigger p53-independent apoptosis. Given the absence of functional p53 in this instance, this prompted us to question whether Nutlin-3 was inducing the DDR without directly generating DNA-damage, or if the DDR was being activated due to Nutlin-3-induced DNA-damage.\nOne widely established indicator of DNA damage is the rapid phosphorylation of the histone variant H2AX at its C-terminal serine residue (Ser139) to form γH2AX, activation of which leads to its recruitment and subsequent accumulation (along with various repair proteins) into foci at the site of DNA damage [24]. Here, Nutlin-3 clearly induced the phosphorylation of H2AX (Figure 5A), and in addition was observed using immunofluorescent staining to cause clear γH2AX foci formation, similar to that observed in Etoposide-treated cells (Figure 5B). These findings demonstrate that Nutlin-3-dependent phosphorylation of p53 is due to the ability of Nutlin-3 to induce DNA-damage, or to otherwise activate pathways that are stimulated in response to DNA damage.\nRecently, Verma et al have observed phosphorylation of H2AX in HCT116p53+/+ following Nutlin-3 treatment. Nevertheless, an absence of γH2AX staining was noted by Verma et al unless Nutlin-3 was combined with treatment with the DNA damage inducer Hydroxyurea, and no phosphorylation of Ser15 was seen [28]. It is noteworthy that Verma et al observed these effects following a 24 hour treatment with Nutlin-3, whilst in the current study earlier time points were used after considering previous findings indicating that H2AX foci formation occurs as early as 1 minute after DNA-damage and peak at around 30-60 minutes [29-31], and previous observations that DNA-damage-induced stabilization and phosphorylation of p53 peak at 4-6 hours, declining thereafter [32,33].\nVerma and colleagues attribute the induction of γH2AX staining to Nutlin-3-induced p53-mediated slowing of non-homologous end joining events following formation of DSBs during normal replicative processes, possibly as a way to ensure the accuracy of the repair process. However, in the current study we show Nutlin-3-induced phosphorylation of H2AX and formation of γH2AX foci in HCT116p53-/- cells (Figure 6A and 6B). Coupled with the G2/M arrest we observed in p53 negative HCT116 cells, our data indicate that p53 is dispensable in the Nutlin-3-induced DDR. Furthermore, our observation that Nutlin-3 induces formation of γH2AX foci as well as ATM, ChK2 and BRCA1 phosphorylation in cells devoid of MDM2 (Figure 6C and Additional file 3), suggests that the secondary ability of Nutlin-3 to induce DNA-damage is not related to its primary function as an MDM2 antagonist.", "Direct inhibition of MDM2 using Nutlin-3 clearly provides a means of activating p53, and restoring p53 signaling, however in light of recent findings including those presented in the current study, we suggest Nutlin-3 is itself capable of instigating DNA-damage signaling. To our knowledge, we show for the first time that Nutlin-3 induces DDR activation in a p53-and MDM2-independent fashion. 
Further investigation is required to fully elucidate the effect of Nutlin-3 on p53-dependent and-independent DDR mechanisms, as well as its effects on the post-translational modification and functionality of p53, understanding of which will undoubtedly facilitate the development of Nutlin-3 and other MDM2 antagonists as potential cancer therapies.", "The authors declare that they have no competing interests.", "AM conceived of the study, whilst AM and JV were responsible for its design. JV carried out all assays relating to the study, including western blots, FACs and fluorescence microscopy. SK carried out some western blots. AM and JV analysed the data and drafted the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/79/prepub\n", "Nutlin-3 leads to phosphorylation of key p53 Serine residues associated with DNA-damage. HCT116p53+/+ cells were untreated (treated with DMSO only) (unt) or treated with 100 μM Etoposide (Eto) or 10 μM Nutlin-3 (Nut) for the times indicated before immunoblotting was used to analyse phosphorylation of p53 at Ser15, Ser20 and Ser37. Actin levels were used to assess equal loading.\nClick here for file\nNutlin-3 induces p53-independent cell cycle checkpoint controls. Representative histograms of HCT116p53+/+ and HCT116p53-/- cells following Nutlin-3 or Etoposide treatment. HCT116p53+/+ and HCT116p53-/- cells were treated with either 0, 10, 15 or 20 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto). After 18 hours, cell cycle distribution was assessed using flow cytometry.\nClick here for file\nNutlin-3 leads to phosphorylation of several important DDR mediators, and results in phosphorylation of H2AX in MDM2 minus cells. MEFMDM2-/- cells were untreated (treated with DMSO only) (unt) or treated with 10 μM Nutlin-3 (Nut) or 100 μM of Etoposide (Eto) for 1 or 24 hours before the phosphorylation of ATM (Ser1981), BRCA1 (Ser1542), CHK2 (Thr68) and H2AX (Ser139) were analysed using immunoblotting. Actin levels were used to assess equal loading.\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Evaluation of physical activity programmes for elderly people - a descriptive study using the EFQM' criteria.
21338497
In the past years, there has been a growing concern in designing physical activity (PA) programmes for elderly people, because evidence suggests that such health promotion interventions may reduce the deleterious effects of the ageing process. Quality is an important issue when designing a PA programme for older people. Some studies support the Excellence Model of the European Foundation for Quality Management (EFQM) as an operational framework for evaluating the quality of an organization. Within this context, the aim of this study was to characterize the quality management models of the PA programmes developed by Portuguese Local Administration to enhance quality of life for elderly people, according to the criteria of the EFQM Excellence Model.
BACKGROUND
A methodological triangulation was conducted in 26 PA programmes using questionnaire surveys, semi-structured interviews and document analysis. We used standard approaches to the statistical analysis of data including frequencies and percentages for the categorical data.
METHODS
Results showed that Processes (65,38%), Leadership (61,03%), Customer results (58,46%) and People (51,28%) had high percentage occurrences of quality practices. In contrast, Partnerships and resources (45,77%), People results (41,03%), Policy and strategy (37,91%), Key performance results (19,23%) and Society results (19,23%) had lower percentage occurrences.
RESULTS
Our findings suggest that although there are some good practices in PA programmes, there are still relevant areas that require improvement.
CONCLUSIONS
[ "Aged", "Exercise", "Female", "Health Promotion", "Humans", "Interviews as Topic", "Male", "Middle Aged", "Program Evaluation" ]
3050751
null
null
Methods
[SUBTITLE] Procedures [SUBSECTION] In order to gather empirical evidence, methodological triangulation -- i.e. questionnaire surveys, semi-structured interviews and additional document analysis -- was employed. A preliminary on-line questionnaire was sent out to all mainland Portuguese municipalities (n = 278) in May of 2008. This brief questionnaire provided the following information: geographic localization, name and objectives of PA programmes, age of the PA programme, characteristics of age groups and participants' age, number of activities included in the PA programme, frequency of the programme (days/week), quality initiatives, organization name and the identification details of the PA programme's coordinator (Additional file 1). Of the 278 municipalities, a total of 97 valid questionnaires were answered. Since some municipalities provided more than a single programme, 125 PA programmes were identified. Inclusion criteria for the purposive sample implied that at least one of the following conditions should be verified: i) programmes should belong to a District Capital in order to apply a geographic criterion; ii) programmes should include the following cumulative criteria: a) must have been in practice for 10 years or more [19], b) must have had two or more different types of activities [20,21], and c) must have had a frequency of two or more times a week [6]; iii) programmes that apply a quality initiative [14,16,22-25]. Therefore, 27 potentially eligible PA programmes for elderly people were identified, of which 18 were from a District Capital; eight were aged ten years or more, had two or more types of activities and a frequency of two or more times a week; and one had a quality initiative (Quality Certification). We screened each PA programme's coordinator by telephone to check eligibility, confirm willingness to participate and, accordingly, provide a written informed consent by email. At this stage, one programme was excluded because it did not meet any of the three conditions above. The characteristics of the 26 PA programmes included in our sample are described in Table 1. Characteristics of the 26 PA programmes To characterise the quality management models of the PA programmes, semi-structured face-to-face interviews with the PA programmes' coordinators (n = 26) were carried out between February and April of 2009. The questions were based on the EFQM Excellence Model's nine criteria and 32 sub-criteria. Before the 26 interviews, a pilot study was conducted among four PA programmes' coordinators, conveniently chosen from among the programmes that were not selected for the sample, to understand the process and evaluate the content understanding of the questions. As a result, some questions were adapted in accordance with respondents' comments. Afterwards, a standard interview guide was created and used for all interviews, which lasted 45 to 60 minutes and were tape-recorded and transcribed verbatim at a later date. Participants were asked about each sub-criterion of Leadership, Policy and Strategy, People, Partnerships and Resources, Processes, Customer Results, People Results, Society Results and Key Performance Results. A content analysis of the transcribed interviews was conducted. Two coding strategies were applied: (a) a priori categorisation of data based on the 32 sub-criteria and (b) a posteriori coding scheme, obtained directly from the data, using an inductive method to identify the themes and subthemes that emerged. 
To ensure rigour and reliability of analysis, the first three transcripts were coded in their entirety by two coders who achieved agreement through discussion and consensus. Two independent researchers double-coded two transcripts to assess the inter-rater reliability of coding. Intra-rater reliability was also conducted on one question from each criterion, within a 5-day interval. The inter-rater and intra-rater reliability were assured by the intercoder and intracoder agreement, calculated from Bellack's formula [26]. Both results obtained ranged from 95% to 100%, confirmed by Cohen's Kappa to eliminate the agreement by chance. Interscore reliability was in the range of 0.93 and above. To facilitate the coding process, we used the QSR NVivo software, which helps manage and organize qualitative data. An on-line questionnaire was also administered to the 26 PA programmes' coordinators, between June and July 2009. This new questionnaire, based on the EFQM Excellence Model's nine criteria and 32 sub-criteria, was generated according to the literature review and the interviews' content analysis. For each sub-criterion, items were devised concerning the areas addressing the EFQM Excellence Model and the specificity of the PA programmes for elderly people. Closed questions with multiple choice answers and Likert scales were used. The first draft of the questionnaire was submitted to a panel of experts (n = 5) in the field of PA programmes for elderly people and/or EFQM Excellence Model, to ensure the content validity. The experts pointed out their level of accordance with the relevance of the items, ease of understanding and adequacy as an instrument to characterise the management models of the PA programmes. Based on their suggestions, fourteen items were reframed and two were eliminated, due to their irrelevance. Afterwards, the on-line questionnaire was tested among 15 PA programmes' coordinators, chosen from among the programmes that were not selected for the sample, for comments on readability. Some adjustments were made to make the questions clearer and more relevant to the PA programme case. The study design also included a test-retest reliability of the answers, performed with an interval of seven days. Agreement was estimated using kappa statistics (κ for categorical variables) and weighted kappa statistics (κw for ordinal variables). High levels of agreement (0.86 to 0.97) were found. The final version of the on-line questionnaire comprised 165 items and took a respondent about one hour to complete. In addition, document analysis was carried out. Written documents, including procedures, budgets, flyers, e-mails, reports, minutes of meetings, specifications, print screens, publications, price lists, etc. were made available by some of the coordinators. Other information was gathered from the web page of the organization. We used standard approaches to statistical analysis of data including frequencies and percentages for the categorical data, performed with the Statistical Package SPSS, version 17.0. In order to gather empirical evidence, methodological triangulation -- i.e. questionnaire surveys, semi-structured interviews and additional document analysis -- was employed. A preliminary on-line questionnaire was sent out to all mainland Portuguese municipalities (n = 278) in May of 2008. 
This brief questionnaire provided the following information: geographic localization, name and objectives of PA programmes, age of the PA programme, characteristics of age groups and participants' age, number of activities included in the PA programme, frequency of the programme (days/week), quality initiatives, organization name and the identification details of the PA programme's coordinator (Additional file 1). Of the 278 municipalities, a total of 97 valid questionnaires were answered. Since some municipalities provided more than a single programme, 125 PA programmes were identified. Inclusion criteria for the purposive sample implied that at least one of the following conditions should be verified: i) programmes should belong to a District Capital in order to apply a geographic criterion; ii) programmes should include the following cumulative criteria: a) must have been in practice for 10 years or more [19], b) must have had two or more different types of activities [20,21], and c) must have had a frequency of two or more times a week [6]; iii) programmes that apply a quality initiative [14,16,22-25]. Therefore, 27 potentially eligible PA programmes for elderly people were identified, of which 18 were from a District Capital; eight were aged ten years or more, had two or more types of activities and a frequency of two or more times a week; and one had a quality initiative (Quality Certification). We screened each PA programme's coordinator by telephone to check eligibility, confirm willingness to participate and, accordingly, provide a written informed consent by email. At this stage, one programme was excluded because it did not meet any of the three conditions above. The characteristics of the 26 PA programmes included in our sample are described in Table 1. Characteristics of the 26 PA programmes To characterise the quality management models of the PA programmes, semi-structured face-to-face interviews with the PA programmes' coordinators (n = 26) were carried out between February and April of 2009. The questions were based on the EFQM Excellence Model's nine criteria and 32 sub-criteria. Before the 26 interviews, a pilot study was conducted among four PA programmes' coordinators, conveniently chosen from among the programmes that were not selected for the sample, to understand the process and evaluate the content understanding of the questions. As a result, some questions were adapted in accordance with respondents' comments. Afterwards, a standard interview guide was created and used for all interviews, which lasted 45 to 60 minutes and were tape-recorded and transcribed verbatim at a later date. Participants were asked about each sub-criterion of Leadership, Policy and Strategy, People, Partnerships and Resources, Processes, Customer Results, People Results, Society Results and Key Performance Results. A content analysis of the transcribed interviews was conducted. Two coding strategies were applied: (a) a priori categorisation of data based on the 32 sub-criteria and (b) a posteriori coding scheme, obtained directly from the data, using an inductive method to identify the themes and subthemes that emerged. To ensure rigour and reliability of analysis, the first three transcripts were coded in their entirety by two coders who achieved agreement through discussion and consensus. Two independent researchers double-coded two transcripts to assess the inter-rater reliability of coding. 
Intra-rater reliability was also conducted on a question of each criterion, within a 5-day interval. The inter-rater and intra-rater reliability were assured by the intercoder and intracoders' agreement, from Bellack's formula [26]. Both results obtained ranged from 95% to 100%, confirmed by Cohen's Kappa to eliminate the agreement by chance. Interscore reliability was in the range of 0.93 and above. To facilitate the coding process, we used the QSR NVivo software, which helps manage and organize qualitative data. An on-line questionnaire was also administrated to the 26 PA programmes' coordinators, between June and July 2009. This new questionnaire, based on the EFQM Excellence Model's nine criteria and 32 sub-criteria, was generated according to the literature review and the interviews' content analysis. For each sub-criterion, items were devised concerning the areas addressing the EFQM Excellence Model and the specificity of the PA programmes for elderly people. Closed questions with multiple choice answers and Likert scales were used. The first draft of the questionnaire was submitted to a panel of experts (n = 5) in the field of PA programmes for elderly people and/or EFQM Excellence Model, to ensure the content validity. The experts pointed out their level of accordance with the relevance of the items, ease of understanding and adequacy as an instrument to characterise the management models of the PA programmes. Based on their suggestion, fourteen items were reframed and two were eliminated, due to its irrelevance. After, the on-line questionnaire was tested among 15 PA programmes' coordinators, chosen from among the programmes that were not selected for the sample, for comments on readability. Some adjustments were made to make the questions clearer and more relevant to the PA programme case. The study design also included a test-retest reliability of the answers, performed with an interval of seven days. Agreement was estimated using kappa statistics (κ for categorical variables) and weighted kappa statistics (κw for ordinal variables). High levels of agreement (0.86 to 0.97) were found. The final version of the on-line questionnaire comprised 165 items and took a respondent about one hour to complete. In addition, document analysis was carried out. Written documents, including procedures, budgets, flyers, e-mails, reports, minutes of meetings, specifications, print screens, publications, price lists, etc. were made available by some of the coordinators. Other information was gathered from the web page of the organization. We used standard approaches to statistical analysis of data including frequencies and percentages for the categorical data, performed with the Statistical Package SPSS, version 17.0. [SUBTITLE] Data presentation [SUBSECTION] A set of the most relevant items concerning quality practices associated with the EFQM Excellence Model criteria was adapted from an original scale created to measure the nine criteria [27] and assigned to each EFQM sub-criterion based on its content domain. Several adjustments were made to reflect the specificity of the PA programmes for elderly people, according to collected data. The presence or absence of a particular quality practice was encoded as: addressed/measured = 1; not addressed/not measured = 0. 
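For reference, the inter-coder agreement checks described in the Procedures above rely on two standard statistics; the forms below are the textbook definitions, given only for orientation, and are not additional detail reported by the authors. Bellack's agreement formula is
\[ \text{Agreement (\%)} = \frac{\text{number of agreements}}{\text{number of agreements} + \text{number of disagreements}} \times 100, \]
and Cohen's kappa, used to discount agreement expected by chance, is
\[ \kappa = \frac{p_o - p_e}{1 - p_e}, \]
where \(p_o\) is the observed proportion of agreement between coders and \(p_e\) is the proportion of agreement expected by chance.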
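The binary encoding described in the Data presentation subsection leads directly to the frequencies and percentages reported per criterion, and to the criterion averages summarised later. The sketch below is a minimal illustration under assumed data: the dictionary `practices_by_criterion` and the 0/1 entries are hypothetical, not the study's encoding matrix.

```python
# Hypothetical presence/absence matrix: 1 = practice addressed/measured, 0 = not.
# Each row is a PA programme; each column is a quality-practice item of that EFQM criterion.
practices_by_criterion = {
    "Leadership":          [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 1, 0]],
    "Policy and Strategy": [[1, 0, 0], [0, 0, 0], [1, 1, 0], [1, 0, 0]],
    "Customer Results":    [[1, 1], [1, 0], [1, 1], [0, 1]],
}

for criterion, matrix in practices_by_criterion.items():
    n_programmes = len(matrix)
    n_items = len(matrix[0])
    # Percentage of programmes addressing each item, then the average across items
    item_pcts = [100.0 * sum(row[j] for row in matrix) / n_programmes
                 for j in range(n_items)]
    criterion_avg = sum(item_pcts) / n_items
    pct_str = ", ".join(f"{p:.1f}%" for p in item_pcts)
    print(f"{criterion}: items = [{pct_str}], average = {criterion_avg:.1f}%")
```

Averaging the item percentages within each criterion is one straightforward way to obtain a single figure per criterion of the kind plotted for the nine EFQM criteria in the Results.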
[ "Background", "Procedures", "Data presentation", "Results", "Discussion", "Conclusions", "Strengths and Limitations", "Authors' contributions", "Ethics approval", "Competing interests", "Pre-publication history" ]
[ "The last few decades have witnessed a significant demographic ageing process, causing deep social and political transformations, and challenging society and humanity's options for the 21st century. The population aged 60 or over is increasing rapidly and is expected to increase by more than 50 per cent over the next four decades, expanding from 264 million in 2009 to 416 million in 2050 in more developed regions [1]. Subsequently, there will be more older people than children in the world population for the first time in history.\nThe most important issue related to demographic ageing deals with its implications for the well-being of the elderly, such as access to appropriate health-care services. In developed countries, some degree of progress has been made to achieve this objective, all the more so as ageing is the most important contributor to the increase in health care costs [2].\nThe concept of 'active ageing' has been employed by the World Health Organization (WHO) since the late 1990s, and is defined as 'the process of optimizing opportunities for health, participation and security in order to enhance quality of life as people age' (WHO 2002 [3] p.12). Therefore, there has been a growing concern in designing physical activity (PA) programmes for elderly people, since evidence indicates that such health promotion interventions may reduce the deleterious effects of the ageing process [4,5] and improve quality of life [4-7]. Nevertheless, a substantial proportion of European elderly people have lower PA levels than those recommended for good health [8,9]. Therefore, increasing adherence to PA among elderly people is an important public health challenge.\nThe Centers for Disease Control and Prevention (CDC) developed guidelines with other American organizations for increasing PA across a large number of settings and populations, including elderly people [10]. They described a set of recommendations and strategies to improve programmes, developing new approaches and highlighting the need for effective programme evaluation [11,12]. This 'imperative' has a wide application (CDC 2002b [13] p.5) that reveals commitment to provide high quality programmes. Furthermore, programme evaluation is a useful tool for continuous quality improvement [14] and the WHO guidelines for the evaluation of health promotion emphasize the need to evaluate and propose the allocation of adequate resources for this action [15].\nHealthy Ageing - A Challenge for Europe Report [16] suggests a systematic application of quality management/assurance methods to increase project's quality; these indicate that Quality is an important issue for PA programmes for older people.\nWith the purpose of helping organizations to improve their quality, the European Foundation for Quality Management (EFQM) introduced the EFQM Excellence Model in 1991 with the support of EOQ, the European Organization for Quality, and the European Commission. The EFQM Excellence Model is a non-prescriptive framework based on nine criteria divided into thirty-two sub-criteria [17]. Of these nine criteria, five are 'Enablers' - what an organization does to achieve excellence - and four are 'Results' - what an organization achieves, i.e., the results achieved on the path to Excellence. As illustrated in Figure 1, the arrows presented in the Model show its dynamic nature; the issues related to 'Innovation and Learning', while horizontal vectors essential to the Model's architecture, also emerge as cross-sectional elements in all the criteria. 
They show innovation and learning can improve 'Enablers', which in turn lead to improved 'Results'. The Model recognizes that there are many approaches to achieving sustainable Excellence in all aspects of performance, based on the premise that: \"Excellent results with respect to Performance, Customers, People and Society are achieved through Leadership driving Policy and Strategy that is delivered through People, Partnerships and Resources, and Processes\" (EFQM 2003a [17] p.5).\nEFQM Excellence Model (adapted from EFQM, 2003).\nThe application of the EFQM Excellence Model promotes the use of a management methodology based on objective criteria that is applicable to all areas of business and constitutes a self-assessment exercise of the organization's quality. Self-assessment will shed light on the areas requiring improvement, as well as on the process and actions necessary to conduct improvement. The Model is currently used by thousands of organizations throughout Europe, such as firms, health institutions, schools, public safety services and governmental institutions, among others. It provides organizations with common management terminology and tools, thus facilitating the sharing of best practices between organizations of different sectors [18].\nDespite the numerous PA programmes for the elderly that have been created in recent years - especially by the Public Local Administration - their evaluation is scarce. Moreover, the EFQM Excellence Model had never been used in PA programmes for elderly people.\nIn this context, the purpose of this study was to characterise the quality management models of the PA programmes developed by the Portuguese Local Administration to enhance quality of life for elderly people, according to the criteria of the EFQM Excellence Model 2003.", "In order to gather empirical evidence, methodological triangulation -- i.e. questionnaire surveys, semi-structured interviews and additional document analysis -- was employed.\nA preliminary on-line questionnaire was sent out to all mainland Portuguese municipalities (n = 278) in May of 2008. This brief questionnaire provided the following information: geographic localization, name and objectives of PA programmes, age of the PA programme, characteristics of age groups and participants' age, number of activities included in the PA programme, frequency of the programme (days/week), quality initiatives, organization name and the identification details of the PA programme's coordinator (Additional file 1).\nOf the 278 municipalities, a total of 97 valid questionnaires were answered. Since some municipalities provided more than a single programme, 125 PA programmes were identified. Inclusion criteria for the purposive sample implied that at least one of the following conditions should be verified: i) programmes should belong to a District Capital in order to apply a geographic criterion; ii) programmes should include the following cumulative criteria: a) must have been in practice for 10 years or more [19], b) must have had two or more different types of activities [20,21], and c) must have had a frequency of two or more times a week [6]; iii) programmes that apply a quality initiative [14,16,22-25]. Therefore, 27 potentially eligible PA programmes for elderly people were identified, of which 18 were from a District Capital; eight were aged ten years or more, had two or more types of activities and a frequency of two or more times a week; and one had a quality initiative (Quality Certification). 
We screened each PA programme's coordinator by telephone to check eligibility, confirm willingness to participate and, accordingly, provide a written informed consent by email. At this stage, one programme was excluded because it did not meet any of the three conditions above. The characteristics of the 26 PA programmes included in our sample are described in Table 1.\nCharacteristics of the 26 PA programmes\nTo characterise the quality management models of the PA programmes, semi-structured face-to-face interviews with the PA programmes' coordinators (n = 26) were carried out between February and April of 2009. The questions were based on the EFQM Excellence Model's nine criteria and 32 sub-criteria. Before the 26 interviews, a pilot study was conducted among four PA programmes' coordinators, conveniently chosen from among the programmes that were not selected for the sample, to understand the process and evaluate the content understanding of the questions. As a result, some questions were adapted in accordance with respondents' comments. Afterwards, a standard interview guide was created and used for all interviews, which lasted 45 to 60 minutes and were tape-recorded and transcribed verbatim at a later date. Participants were asked about each sub-criterion of Leadership, Policy and Strategy, People, Partnerships and Resources, Processes, Customer Results, People Results, Society Results and Key Performance Results. A content analysis of the transcribed interviews was conducted. Two coding strategies were applied: (a) a priori categorisation of data based on the 32 sub-criteria and (b) a posteriori coding scheme, obtained directly from the data, using an inductive method to identify the themes and subthemes that emerged. To ensure rigour and reliability of analysis, the first three transcripts were coded in their entirety by two coders who achieved agreement through discussion and consensus. Two independent researchers double-coded two transcripts to assess the inter-rater reliability of coding. Intra-rater reliability was also conducted on a question of each criterion, within a 5-day interval. The inter-rater and intra-rater reliability were assured by the intercoder and intracoders' agreement, from Bellack's formula [26]. Both results obtained ranged from 95% to 100%, confirmed by Cohen's Kappa to eliminate the agreement by chance. Interscore reliability was in the range of 0.93 and above. To facilitate the coding process, we used the QSR NVivo software, which helps manage and organize qualitative data.\nAn on-line questionnaire was also administrated to the 26 PA programmes' coordinators, between June and July 2009. This new questionnaire, based on the EFQM Excellence Model's nine criteria and 32 sub-criteria, was generated according to the literature review and the interviews' content analysis. For each sub-criterion, items were devised concerning the areas addressing the EFQM Excellence Model and the specificity of the PA programmes for elderly people. Closed questions with multiple choice answers and Likert scales were used. The first draft of the questionnaire was submitted to a panel of experts (n = 5) in the field of PA programmes for elderly people and/or EFQM Excellence Model, to ensure the content validity. The experts pointed out their level of accordance with the relevance of the items, ease of understanding and adequacy as an instrument to characterise the management models of the PA programmes. 
Based on their suggestion, fourteen items were reframed and two were eliminated, due to its irrelevance. After, the on-line questionnaire was tested among 15 PA programmes' coordinators, chosen from among the programmes that were not selected for the sample, for comments on readability. Some adjustments were made to make the questions clearer and more relevant to the PA programme case. The study design also included a test-retest reliability of the answers, performed with an interval of seven days. Agreement was estimated using kappa statistics (κ for categorical variables) and weighted kappa statistics (κw for ordinal variables). High levels of agreement (0.86 to 0.97) were found. The final version of the on-line questionnaire comprised 165 items and took a respondent about one hour to complete.\nIn addition, document analysis was carried out. Written documents, including procedures, budgets, flyers, e-mails, reports, minutes of meetings, specifications, print screens, publications, price lists, etc. were made available by some of the coordinators. Other information was gathered from the web page of the organization.\nWe used standard approaches to statistical analysis of data including frequencies and percentages for the categorical data, performed with the Statistical Package SPSS, version 17.0.", "A set of the most relevant items concerning quality practices associated with the EFQM Excellence Model criteria was adapted from an original scale created to measure the nine criteria [27] and assigned to each EFQM sub-criterion based on its content domain. Several adjustments were made to reflect the specificity of the PA programmes for elderly people, according to collected data. The presence or absence of a particular quality practice was encoded as: addressed/measured = 1; not addressed/not measured = 0.", "Regarding Leadership, most of the coordinators who participated in this study revealed that they were personally involved in the development of a culture of Excellence, reinforcing a strong communicative culture throughout all areas of the organization (84,62%), encouraging people's empowerment and autonomy and ensuring that every member of the organization knows the role that the PA programme should play in society (both with 80,77%). Almost two-fifths (38,46%) of the coordinators ensured that people were capable of taking initiatives and fulfilling their responsibilities in the most appropriate way, and a single leader collaborated in quality training since only his programme was involved in a quality scheme (3,85%) (Table 2).\nFrequencies and percentages of quality practices in the criterion Leadership\nConcerning Policy and Strategy, the issues related to quality initiatives, such as the measurement of quality and non-quality costs, quality strategies and quality objectives were referenced by one coordinator (3,85%), the one who's programme was involved in a quality initiative. In contrast, 84,62% of the coordinators reported the identification of organizational processes and their interrelationships and 80,77% stated that all people are familiar with the mission and objectives of the PA programme (Table 3).\nFrequencies and percentages of quality practices in the criterion Policy and Strategy\nIn relation to the criterion People (the same as employees/workers), 84,62% of the coordinators reported that People maintain fluid communication with one another; in contrast, 15,38% indicated that People voluntarily pass on useful information to other members of the organization. 
Two items related to quality initiatives appear with a diminutive percentage (3,85%), namely People's access to information about quality results and the quality training they are offered. The majority of the coordinators (80,77%) stated that formal processes were used to find out people's opinions (Table 4).\nFrequencies and percentages of quality practices in the criterion People\nWith reference to Partnerships and Resources, less than 20% of the PA programmes had formal communication procedures with partners and 11,54% of coordinators revealed that relationships with academic partners allow the organization to have access to scientific information. Nearly three quarters (73%) of respondents reported that the organization has the capacity for external cooperation. The most reported item was the one related to the recording of information and knowledge (88,46%) (Table 5).\nFrequencies and percentages of quality practices in the criterion Partnerships and Resources\nAnalysis of the Processes criterion showed the items recommendations concerning exercise sessions phases and standardized systems to deal with customer complaints were accomplished by all PA programmes. We can also verify that most of the organizations advertised the PA programme and good accessibility was guaranteed (96,15%). Nonetheless, just 30,77% of organizations were oriented towards the fulfilment of customers' expectations and needs and only 19,23% kept documentation of work methods and organizational processes (Table 6).\nFrequencies and percentages of quality practices in the criterion Processes\nConcerning Customer results, 76,92% of the programmes evaluated customers' satisfaction and 34,62% had measures and/or indicators of customers' loyalty (Table 7).\nFrequencies and percentages of quality practices in the criterion Customer Results\nRelating to People results, 69,23% of the programmes evaluated people's absenteeism and 15,38% had measures and/or indicators of people's organizational commitment (Table 8).\nFrequencies and percentages of quality practices in the criterion People Results\nConcerning Society results, 15,38% PA programmes had measures and/or indicators of their involvement in their target community. 23,07% of the coordinators confirmed that the organization had measures and/or indicators of the programme's impact in society (Table 9).\nFrequencies and percentages of quality practices in the criterion Society Results\nIn Key performance results, one coordinator mentioned assessments of the quality of the service delivered and 42,31% of the coordinators reported that the organization has measures and/or indicators of the financial results of the PA programme (Table 10).\nFrequencies and percentages of quality practices in the criterion Key Performance Results\nFigure 2 shows the average of the percentages related to quality practices associated to the EFQM Excellence Model criteria. Four criteria (three Enablers and one Result) had values over 50%: Processes (65,38%), Leadership (61,03%), Customer results (58,46) and People (51,28%). 
In contrast, the other two Enablers and three Results had percentages under 50%: Partnerships and resources (45,77%), People results (41,03%), Policy and strategy (37,91%), Key performance results (19,23%) and Society results (19,23%).\nAverage of the percentages related to quality practices of the EFQM Excellence Model's criteria.", "To our knowledge, this was the first study applying the EFQM Excellence Model criteria to PA programmes for elderly people.\nResults showed that Processes, Leadership, Customer results and People had high percentage occurrences of quality practices. In contrast, Partnerships and resources, People results, Policy and strategy, Key performance results and Society results had lower percentage occurrences.\nPA programmes for elderly people play a significant role in senior citizens' health, quality of life, autonomy and capability to face daily tasks. It is widely accepted that the benefits of such programmes depend upon adherence to exercise [28]. Higher attendance in PA programmes and activity levels are strongly influenced by degrees of enjoyment [29,30]. Therefore, continuous quality improvement of the PA programmes for elderly people can be useful, and even critical, for elderly satisfaction and adherence.\nLeadership is the key for driving forward quality improvement activities [31-33] and involves a process of social influence on a group of people. Our data suggests that the coordinators are particularly involved in developing the vision and mission, and enhance a strong culture of communication. These aspects are considered fundamental to quality management [34-36]. Indeed, other studies in different sectors have focused on leadership and have shown that the commitment of the leaders operates as the thrust of the quality improvement process [37-39]. Moreover, their physical presence, visibility and concern for quality improvement were associated with transformational leadership [40], i.e., leadership that creates valuable and positive change in its followers. Our study also revealed that most of the leaders interact with customers, partners and representatives of society. Trustworthy leadership increases partnership building and sustainability, essential to guarantee the success of PA promotion as a public health strategy, as demonstrated in some programmes [41]. Several studies have focused on customers [42-44] since listening them appears to be a priority for organizations that want to succeed. With regard to PA programmes, the CDC mention the importance of interacting with all stakeholders [13]. Specifically related to the PA programmes for elderly people, the British Heart Foundation (BHF) stated that participants or other stakeholders must be actively involved in all aspects of programme development, including planning, promotion and evaluation [45]. The ACSM also recognizes that PA leaders should work closely with individuals to design a PA regimen that reflects the person's preferences and capabilities [46]. In addition, our results indicate that coordinators neglect to run the PA programme as a set of interrelated processes. 
Although there are no studies on this issue for PA programmes for elderly people, some organizations have made recommendations for their specific programme, namely the American Association of Cardiovascular and Pulmonary Rehabilitation (AACVPR), which states that the programme leaders are responsible for directing, integrating and coordinating programme services, and recommending a central location for all policies, procedures and guidelines references [31]. Another interesting result of our data concerns the fact that most of the leaders are not involved in quality training in terms of teaching people at lower hierarchical levels, which might be related to the fact that only a single programme concerned itself with quality initiatives.\nPolicy and strategy is defined as how the organisation implements its mission and vision via a clear stakeholder-focused strategy, supported by relevant policies, plans, objectives, targets and processes [17]. Our results point out a modest concern about the opinions of different stakeholders in setting targets for the PA programme, which has been described as one of the crucial steps in the planning and evaluation of PA programmes, or as a good practice [13,45]. In addition, contrary to the guidelines [45], our study showed that a minority of programmes establish the objectives according to the participants' stated aims. Furthermore, this fact is in the opposite direction from the results of an European cross-national report on PA Programmes and promotion strategies for older people, in which most of the PA Programme's directors reported that their programmes were adjusted according to the participants' aims [19]. Another result that stands out in our data is the fact that just about two thirds of the programmes systematically assess their effectiveness in order to improve their continuous quality improvement process, which opposes the Benchmark 3 from Physical Activity and Health Branch (PAHB), at the CDC [14]. As indicated by the CDC, 'the evaluation is the systematic examination and assessment of features of an initiative and its effects, in order to produce information that can be used by those who have an interest in its improvement or effectiveness' (CDC 2002b [13] p.5), consequently an 'imperative', as stated before. Jackson argues that every effort must be made to engage the organisational members in continuous improvement activities [47]. However, no programme can be planned or evaluated oblivious of the context that surrounds it, especially when what drives most decisions on policy and practice in the public sector are considerations of the available evidence [45]. Institutional, community and public policies may have either supporting or antagonistic effects on programmes [48]. In addition, there are several factors that influence health behaviour [49]. Therefore, it is necessary to include pertinent information regarding the programme context [13,14] that must be absorbed in different ways [50]. In the present study, only 38,46% of PA programmes capture this information, which may reflect a limited knowledge on the part of most of the programmes about the context in which they operate. On the other hand, about two thirds of the analysed programmes have an annual plan that is regularly reviewed and used in an annual report. The data from this report helps to improve the new annual planning cycle of the PA programme. 
These procedures are in agreement with those found in other studies [51,52] or in accordance to different documents, such as content of the planning and evaluation of PA programmes [13,53] and health promotion programmes [54]. Still regarding this criterion, most of the leaders of our study reported that everybody had full access to the information about the mission and objectives of the PA programme. In the field of Higher Education, Calvo-Mora and collaborators [37] alleged that the leader's communication and involvement of all staff in policy and strategy were crucial to the processes management. Moreover, in accordance with the same author [37], our study found that processes were clearly identified, as well as their interrelationships. With regard to quality strategies, in our study only one PA programme had regularly used internal quality assessment and external audits. However, several studies have focused on the reasons for the use of quality schemes and pointed out the advantages of their implementation in improving services [24,55,56]. On the other hand, Ritchie and Dale suggest the existence of some obstacles to implementing these initiatives within the organizations [57]. Similarly, Davies and collaborators reviewed the aspects of culture/context, which were specific to the university academic context, and could impact negatively on the implementation of a quality framework [58].\nRegarding People criterion, that is an important feature for quality management [59], most of the participants in our study reported the existence of procedures to find out employees' opinions, which was also found in a study related to quality management in sports facilities [60]. This initiative is considered a quality practice to Connolly and Connolly [61]. In fact, organizations have recognized the need to understand employee opinions to identify their concerns, assess the impact of a variety of agendas and provide employees with different communication channels [62]. Regarding this issue, our data also show that employees from the majority of PA programmes have an open dialogue with all stakeholders, especially with one another (76,92%). Furthermore, although the results are less obvious with regard to autonomy and decision-making, our study demonstrates that most of the PA programmes involved and empowered people in various ways (e.g. opinions and suggestions put forward by people, and teamwork). These findings are not totally in line with the arguments of Wilkinson and collaborators, who emphasized the employee involvement as a key theme for quality management, namely autonomy, creativity, active cooperation and self-control for employees [63]. Also, Osseo-Asare and collaborators concluded that a conceptual framework for achieving and sustaining quality in UK higher education institutions could be developed based on a set of principles which includes staff empowerment through participation and commitment [38]. In their study, these authors found a discrepancy between what respondents think about the importance of staff empowerment and the real practice in the organizations. Even with regard to the management of people, most of the participants in our study gave emphasis to the recruitment of people with high skills; however, only 34,62% require a specialization in the area of PA and ageing for instructors. These results are similar to those found on the Cross-National Expert Survey Report on Physical Activity Programmes and Physical Activity Promotion Strategies for Older People [19]. 
In this report, the authors make recommendations on the importance of recruiting teachers who have high levels of qualification and reinforce the importance of continuous professional development. Regarding this issue, the International Curriculum Guidelines for Preparing Physical Activity Instructors of Older Adults outlines each of the major content areas that should be included in any entry-level training programme [64]. The PAHB, established that a PA programme should be run by highly skilled PA practitioners [14]. Regarding the continuous training of people, our study revealed that over three quarters of the PA programmes take this aspect into account. In contrast, Hughes and collaborators found that only 56% of the PA programmes for older people trained their instructors [65]. The Guidelines for Cardiac Rehabilitation and Secondary Prevention Programs also emphasises these points, and goes further, establishing that the 'polices and procedures should include provisions for a competency-based job description; required education, continuing education, experiences, licences and certifications; and an orientation checklist, a competency assessment and a regularly performed - at least annually - performance appraisal' (AACVPR 2004 [31] p.193). Once more, our data showed that the items related to quality initiatives have only a passing reference, which appears to be related to the fact that just a single programme is involved in quality schemes, as previously explained.\nDifferent studies reported that the opportunities that are provided by Partnerships and resources should be maximized [38,60,66,67]. In addition, the development and sustainment of the community partnerships is the first public health benchmarks for PA Programmes established by the PAHB at the CDC [14]. In our study, 73,08% PA programmes have established partnerships, which is in line with the emphasis that some authors [41,68,69] have put on the importance of forging effective partnerships, creating value and promoting cooperation agreements based on mutually beneficial joint synergies. Especially in the PA programmes for elderly, some organizations reinforce the importance and strength of these partnerships, since they provide additional resources in the form of funding, facilities and equipment and being able to access wide-ranging abilities and knowledge [3,45]. The most surprising result of our data concerns the few partnerships with Higher Education Institutions (11,54%). Indeed, these academic institutions contribute to the creation of knowledge and its dissemination, so we consider it a disadvantage for programmes to not have direct access to their counsel. Moreover, such partnerships would have reciprocal benefits, since the programme also could provide means for researchers to get their answers in a more practical way. Additionally, disseminating this knowledge may promote the development of new programmes or improve the programme itself [13]. When we analyzed the partnerships with health institutions, the results are better, but still far from what is supported by some authors or organizations, who advocate the active participation of healthcare professionals in counselling patients on PA [45,70-72] or encouraging them to accumulate moderate-intensity PA [73]. Similar results arise from the European Network for Action on Ageing and Physical Activity (EUNAAPA) study, where sixty percent of the PA programme directors reported that they build partnerships with local healthcare professionals or organisations [19]. 
With regard to finances, our results appear to indicate that there is not a strict control of these resources, since there is still a considerable percentage of programmes that do not manage them (65,38%). These results are quite different from those reported by Scott and colleagues [19], where sixty five percent of the PA programme directors were able to estimate the total cost of their programme. In fact, most of the monetary funds of these programmes come from the public finance, and thus it appears to us that leaders should control these funds even more strictly. Although the PA programmes are not-for-profit, the management of its financial resources should be identified as key-process, in order to consolidate the programme's financial structure and to ensure it can fulfil its mission in the present and in the future. Despite the maintenance plans of equipment and buildings should be periodically provided [66], just about one third of the interviewed coordinators reported that their programme had maintenance plans. Another study [19] found a higher percentage of programmes with maintenance plans (46%), but the results were still not consistent with the recommendations [31,74]. Otherwise, the recognition that information technology has been a catalyst for progress and prosperity [75] seems to be accepted by the coordinators of our study, since most of them implemented new technologies in their programmes. Concerning information management, although there are no recommendations in the field of PA programmes for elderly, the AACVPR advises that information management involves supervision of the storage, communication, utilization and tracking of information related to the programme and facility [31]. In this respect, the majority of the coordinators indicated that information, concerning to all aspects of the programme, was systematically recorded. On the contrary, the results related to the systematic pursuit of the latest scientific knowledge are quite modest, since less than one third of the coordinators refer to this quality practice. The reason for this unexpected result becomes somewhat clearer when we realise that very few programmes have established partnerships with higher education experts who are up to date on the latest scientific knowledge. In an American study [76] most states provided evidence of competency with regard to using data and scientific information to develop and prioritise their PA programming.\nAn excellent organization adopts a management philosophy based on Processes [77,78]. Although the majority of the coordinators of our study stated that the methods and processes were defined, only a minority operationalised it in terms of documentation. For the AACVPR, policies and procedures related to information management should include a wide range of records and should specify uniform standards for evaluation, intervention and outcome measurement [31]. Furthermore, processes should be systematically reviewed [17,79]. Specifically with regard to emergency protocols, about one third of the coordinators stated that they are carried out periodically. Related results arise from the EUNAAPA study, where half of PA programme directors reported having emergency protocols in place and that staff members were trained annually, at the very least, in these protocols [19]. Both results indicate that AHA/ACSM's recommendations have not been followed. In fact, it is emphasized that emergency policies and procedures must be reviewed and practiced regularly [74]. 
With regard to the design of services and tailoring the programme to the needs and interest of participants, the results differ. On the one hand, more than two-thirds of coordinators recognized that the services are designed according to customer needs; on the other hand, less than a third is geared towards the fulfilment of their expectations and needs. In the Scott and collaborators study, almost two thirds of PA directors reported that participants were formally surveyed for the aims of their involvement in the programme and most of these directors also reported that their programmes were adjusted according to participants' stated aims [19]. Physical activity leaders should work closely with individuals to design a PA regimen that reflects the person's preferences and capabilities [46]. In the same line, the BHF recommends the involvement of participants in this process (BHF 2007). Moreover, tailoring the exercise programme to the needs and interest of participants is associated with higher programme attendance [80,81]. With regard to the preparticipation screening, less than half of our PA programmes' coordinators reported that a health check was required to guarantee a safe participation of the customers. Results from EUNAAPA study [19] are slightly different since only half of the PA programme directors reported that a health check was required before a potential participant would be eligible to enter their programme. Screening of older adults prior to starting an exercise programme continues to be a controversial issue [82]. In fact, the ACSM endorses the perspective that medical clearance should not be required prior to encouraging older individuals to begin a light-intensity activity programme, since it may be a disincentive to increasing PA among these individuals [46]. For higher intensity levels, AHA/ACSM recommend a pre-participation screening, primarily to identify those at increased risk of an adverse cardiac event [74]. In our study, about two-thirds of the PA coordinators indicated that the exercise prescription includes aerobic, muscle strength, flexibility and balance exercises. Additionally, they also reported incorporating progression as part of their programme. These are consistent with the ACSM position's stand [6] and ACSM's Guidelines [83]. In our study we found an unanimous result concerning the components of the exercise training session, which is in line with the ACSM recommendations [83]. Our results about exercise prescription, progression and components of the session are more consistent with the ACSM recommendations than those disclosed in the EUNAAPA study [19]. Concerning to environmental conditions, more than half of the coordinators reported that they are guaranteed, i.e. temperature of sports facilities, safe and pleasant conditions of sports equipment and facilities, places with good acoustics and access to a water source are incorporated in the programme. This represents an adequate degree of concordance with the recommendations [31,83]. With regard to advertising, more than three quarters of the coordinators revealed that the programme was promoted. Some authors and organizations believe that social marketing and communication campaigns are a part of a set of actions required to increase PA [12,84,85]. In addition, the BHF makes recommendations on marketing and promotion strategies among older people [45]; however, no scientific evidence was found about the most effective method of promoting a PA programme for this target population. 
Across all programmes, 76,92% offer different forms of access to facilitate the enrolment of seniors. The Task Force on Community Preventive Services recommends the creation of or enhanced access to places for PA, combined with informational outreach activities to increase PA [12], even giving examples of how to reduce some environmental barriers. Good accessibility is also provided in almost all analysed programmes (96,15%), which is an essential aspect of programme planning [12,45,72]. The BHF emphasises the proximity of programmes to residences in a friendly and accessible way, ensuring well-lit paths and providing good public transports [45]. In this regard, a qualitative study in older and rural African American and white women found that PA programmes' enabling factors included transportation and free facilities [86]. A study by Booth and collaborators showed that for adults over 60, neighbourhood safety and access to local facilities were important predictors of being active [87]. In our study, all the programmes had an effective complaints handling system and more than half had suggestions through standardized processes. In addition to what was mentioned above about the importance of customer suggestions or opinions, customer complaint information can be also used as a basis for customer-focused process improvement [88]. In this particular case, our results suggest that organizations have a preference for reactive methods and delayed methods, such as complaint analysis, over proactive methods, contrary to what was found in another study [44]. An excellent service can only be achieved with a profound knowledge of evolving customer needs; therefore, a functional customer complaint management system should be implemented in every organization [89].\nWith respect to Customer results, organizations must measure and achieve them [17]. Similarly, PA interventions should be evaluated in terms of their processes as well as their outcomes [11]. There are many studies addressing the measurement of PA in order to identify current levels of activity and assess the effectiveness of intervention programmes. However, few PA intervention studies specifically target Customer retention or Customer satisfaction. Actually, the EFQM argues that excellent organisations achieve the best results for their customers and achieve high levels of customer satisfaction [17]. Furthermore, customers do not only provide input (suggestions or complaints), but they also take part in the service process, influencing both the process's performance and the perception of quality of the service produced [90]. One of the most commonly used techniques for listening to customers is satisfaction surveys [44]. More than three quarters of our PA programmes' coordinators assured that the satisfaction of participants in their programme was formally measured. Another key predictor of customer results is loyalty [36], but less than 35% of the programmes studied evaluate this item. A recent study about PA programmes for older adults in the United States found that 74% tracked attendance [91]. Also, complaints handling and management are essential for achieving customer retention and loyalty [92]. Besides this, though all programmes have a complaints system in place, only approximately 70% evaluated their resolution process. Contrary to complaints, all the programmes that have a standardized system of suggestions also carried out its assessment. 
Although the measurement process represents one of the most important components of customer results from an exercise programme [83], just 57,69% of our coordinators reported that objective outcome measures were recorded for participants at regular intervals.\nTo achieve excellence, organisations must also focus on the People results [17], since people involvement is one of the most important drivers of continuous improvement [77]. Nevertheless, most coordinators of our study revealed that the organization does not have information on its employees' motivation and commitment. This result is not surprising, especially because organizations rarely use instruments to obtain information about how their employees assess the motivational aspects of their workplace [93], compared with job satisfaction measurement. However, some meta-analysis studies [94,95] concluded that people's satisfaction is not enough to improve their performance - people must also be highly motivated [93]. Furthermore, without satisfied and motivated employees it is impossible to achieve satisfied and loyal customers [44]. An empirical study observed that employees' loyalty is significantly related to service quality, which in turn impacts customer satisfaction and customer loyalty [96]. Martin-Castilla and Rodriguez-Ruiz give examples of the different aspects that must be evaluated, both in terms of people's motivation and satisfaction, such as the development of professional careers, learning opportunities, definition of objectives, employment conditions, salary, relation between peers, organisational role in the community, and work environment, among others [78]. Additionally, one of the key indicators of people satisfaction includes absenteeism [36]. While the majority of our PA programmes' coordinators confirmed that there were indicators of people's absenteeism (69,23%), only a minority stated that the employees' loyalty was measured (26,92%) as well as people's satisfaction (38,46%). We believe that people who are satisfied with regard to the management, employment conditions, relationships between peers and the organisational role in the community will be more prone to improve the quality of the PA programme; therefore, the evaluation of theses issues should not be neglected. Also, people's achievement is an important indicator, not only with regard to the development of people, but also in their ability to solve problems and take initiatives. Nearly two thirds of our PA coordinators had indicators of people's performance, which is defended by the AACVPR [31], as discussed previously. This result stems from the fact that the majority of people with employment contracts in the public sector is evaluated by the Integrated System on the Evaluation of the Public Administration Performance (SIADAP).\nThe Society results criterion is based on what an organisation is achieving in satisfying the needs and expectations of the community [17]. The programme's visibility, engagement and reputation are recognized as a result of its activities and the active participation of the organisation as a responsible member of the community. However, few participants (19,23%) reported indicators of the involvement of their programmes in the community and less than one quarter of the programme's impact on society (23,07%). Furthermore, the CDC claims the importance of assessing the programme effects on organizations or communities [13], but this is not our case. 
In fact, it is not just the impact of the programme from the standpoint of public health, but also the perceptions that society has about the programme as a barometer of its action in society. Also, social responsibility is a vital part of the work and role of the programme, as it tries to respond to a problem of the society as a whole [77], but again, only nearly 20% of the PA programmes' coordinators had measures or indicators to track this issue. As recognized by some authors [45,97], community involvement in these programmes is critical to its success, so it is concerning that the most of the coordinators do not pay attention to these indicators.\nThe Key performance results represent the global organizational performance and the fulfilment of expectations. The mission of the PA programmes is linked to a significant impact on the promotion of PA in the elderly population. However, less than 12% of our coordinators declared they had indicators of process efficiency, i.e. obtaining the best outcomes from a set of actions. Also, regarding the quality of the service delivered, only one PA coordinator assumed that this assessment was performed. This result may be associated with the fact that only one programme performed a quality assessment/audit. In this respect, several studies [23-25,55] found that quality initiatives may improve process and outcomes. Finally, less than fifty percent of the PA coordinators indicated that the organisation's financial resources were properly managed. Recognising that most of the PA programmes have limited municipal funds, we believe that there is still a modest understanding of the need to achieve a certain level of profitability to contribute to the sustainability of the programme, and that all activities must be cost-accountable.\nThe 'evaluation is integral to success' (Schmid 2006 [11] p.115) so, regardless of sector, size, structure or maturity, organisations need to establish an appropriate management framework to be successful [98]. We believe that this premise is also valid for PA programmes. Thus, it will help to improve services and, at the same time, to increase access and the level of PA of elderly citizens.", "Our findings suggest that although there are some good practices in the PA programmes under analysis, specifically in criteria Processes, Leadership, Customer results and People, there are still relevant areas that require improvement, namely those related to Partnerships and resources, People results, Policy and strategy, Key performance results and Society results.\n[SUBTITLE] Strengths and Limitations [SUBSECTION] To our knowledge, this was the first study applying the EFQM Excellence Model criteria in PA programmes for elderly people.\nHowever, the study has certain limitations, which must be considered when interpreting its results.\nFirst, the study was based on the PA programmes coordinators' perceptions. Consequently, such perceptions may not provide a complete and accurate picture of the reality. Actually, the results are mainly based on self-reporting which might also have contributed to a more favourable outcome. Conducting a study with the participation of different stakeholders of the PA programmes will be an asset in the future. Secondly, the research design employed was cross-sectional rather than longitudinal. In this regard, an evaluation of the quality practices is a process that develops over time and whose effects are only really appreciated in the long term. 
Therefore, it would be appropriate to follow a longitudinal approach in future studies. Finally, the external validity of the findings presented is low. Nevertheless, we are convicted that the study provides details about the management models of the PA programmes for elderly people developed by the Portuguese Local Administration, their strengths and weaknesses, in order to improve their quality.\nTo our knowledge, this was the first study applying the EFQM Excellence Model criteria in PA programmes for elderly people.\nHowever, the study has certain limitations, which must be considered when interpreting its results.\nFirst, the study was based on the PA programmes coordinators' perceptions. Consequently, such perceptions may not provide a complete and accurate picture of the reality. Actually, the results are mainly based on self-reporting which might also have contributed to a more favourable outcome. Conducting a study with the participation of different stakeholders of the PA programmes will be an asset in the future. Secondly, the research design employed was cross-sectional rather than longitudinal. In this regard, an evaluation of the quality practices is a process that develops over time and whose effects are only really appreciated in the long term. Therefore, it would be appropriate to follow a longitudinal approach in future studies. Finally, the external validity of the findings presented is low. Nevertheless, we are convicted that the study provides details about the management models of the PA programmes for elderly people developed by the Portuguese Local Administration, their strengths and weaknesses, in order to improve their quality.", "To our knowledge, this was the first study applying the EFQM Excellence Model criteria in PA programmes for elderly people.\nHowever, the study has certain limitations, which must be considered when interpreting its results.\nFirst, the study was based on the PA programmes coordinators' perceptions. Consequently, such perceptions may not provide a complete and accurate picture of the reality. Actually, the results are mainly based on self-reporting which might also have contributed to a more favourable outcome. Conducting a study with the participation of different stakeholders of the PA programmes will be an asset in the future. Secondly, the research design employed was cross-sectional rather than longitudinal. In this regard, an evaluation of the quality practices is a process that develops over time and whose effects are only really appreciated in the long term. Therefore, it would be appropriate to follow a longitudinal approach in future studies. Finally, the external validity of the findings presented is low. Nevertheless, we are convicted that the study provides details about the management models of the PA programmes for elderly people developed by the Portuguese Local Administration, their strengths and weaknesses, in order to improve their quality.", "AIM participated in the acquisition and analysis of data and participated in drafting and editing the manuscript. MJR managed the data collection and analysis and supervised the drafting and editing of manuscript. PS designed the study protocol and helped design the questionnaires/interviews. RS managed the data collection and analysis. JM participated in the coordination of the study and supervised the drafting and editing of manuscript. 
JC participated in the design of the questionnaires/interviews and coordination and management of the study.\nAll authors read and approved the final manuscript.", "The study was approved by the Scientific Council and Ethics Committee of the Faculty of Sport - University of Porto.", "The authors declare that they have no competing interests.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/123/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Procedures", "Data presentation", "Results", "Discussion", "Conclusions", "Strengths and Limitations", "Authors' contributions", "Ethics approval", "Competing interests", "Pre-publication history", "Supplementary Material" ]
[ "The last few decades have witnessed a significant demographic ageing process, causing deep social and political transformations, and challenging society and humanity's options for the 21st century. The population aged 60 or over is increasing rapidly and is expected to increase by more than 50 per cent over the next four decades, expanding from 264 million in 2009 to 416 million in 2050 in more developed regions [1]. Subsequently, there will be more older people than children in the world population for the first time in history.\nThe most important issue related to demographic ageing deals with its implications for the well-being of the elderly, such as access to appropriate health-care services. In developed countries, some degree of progress has been made to achieve this objective, all the more so as ageing is the most important contributor to the increase in health care costs [2].\nThe concept of 'active ageing' has been employed by the World Health Organization (WHO) since the late 1990s, and is defined as 'the process of optimizing opportunities for health, participation and security in order to enhance quality of life as people age' (WHO 2002 [3] p.12). Therefore, there has been a growing concern in designing physical activity (PA) programmes for elderly people, since evidence indicates that such health promotion interventions may reduce the deleterious effects of the ageing process [4,5] and improve quality of life [4-7]. Nevertheless, a substantial proportion of European elderly people have lower PA levels than those recommended for good health [8,9]. Therefore, increasing adherence to PA among elderly people is an important public health challenge.\nThe Centers for Disease Control and Prevention (CDC) developed guidelines with other American organizations for increasing PA across a large number of settings and populations, including elderly people [10]. They described a set of recommendations and strategies to improve programmes, developing new approaches and highlighting the need for effective programme evaluation [11,12]. This 'imperative' has a wide application (CDC 2002b [13] p.5) that reveals commitment to provide high quality programmes. Furthermore, programme evaluation is a useful tool for continuous quality improvement [14] and the WHO guidelines for the evaluation of health promotion emphasize the need to evaluate and propose the allocation of adequate resources for this action [15].\nHealthy Ageing - A Challenge for Europe Report [16] suggests a systematic application of quality management/assurance methods to increase project's quality; these indicate that Quality is an important issue for PA programmes for older people.\nWith the purpose of helping organizations to improve their quality, the European Foundation for Quality Management (EFQM) introduced the EFQM Excellence Model in 1991 with the support of EOQ, the European Organization for Quality, and the European Commission. The EFQM Excellence Model is a non-prescriptive framework based on nine criteria divided into thirty-two sub-criteria [17]. Of these nine criteria, five are 'Enablers' - what an organization does to achieve excellence - and four are 'Results' - what an organization achieves, i.e., the results achieved on the path to Excellence. As illustrated in Figure 1, the arrows presented in the Model show its dynamic nature; the issues related to 'Innovation and Learning', while horizontal vectors essential to the Model's architecture, also emerge as cross-sectional elements in all the criteria. 
They show innovation and learning can improve 'Enablers', which in turn lead to improved 'Results'. The Model recognizes that there are many approaches to achieving sustainable Excellence in all aspects of performance, based on the premise that: \"Excellent results with respect to Performance, Customers, People and Society are achieved through Leadership driving Policy and Strategy that is delivered through People, Partnerships and Resources, and Processes\" (EFQM 2003a [17] p.5).\nEFQM Excellence Model (adapted from EFQM, 2003).\nThe application of the EFQM Excellence Model promotes the use of a management methodology based on objective criteria that is applicable to all areas of business and constitutes a self-assessment exercise of the organization's quality. Self-assessment will shed light on the areas requiring improvement, as well as on the process and actions necessary to conduct improvement. The Model is currently used by thousands of organizations throughout Europe, such as firms, health institutions, schools, public safety services and governmental institutions, among others. It provides organizations with common management terminology and tools, thus facilitating the sharing of best practices between organizations of different sectors [18].\nDespite the numerous PA programmes for the elderly that have been created in recent years - especially by the Public Local Administration - their evaluation is scarce. Moreover, the EFQM Excellence Model had never been used in PA programmes for elderly people.\nIn this context, the purpose of this study was to characterise the quality management models of the PA programmes developed by the Portuguese Local Administration to enhance quality of life for elderly people, according to the criteria of the EFQM Excellence Model 2003.", "[SUBTITLE] Procedures [SUBSECTION] In order to gather empirical evidence, methodological triangulation -- i.e. questionnaire surveys, semi-structured interviews and additional document analysis -- was employed.\nA preliminary on-line questionnaire was sent out to all mainland Portuguese municipalities (n = 278) in May of 2008. This brief questionnaire provided the following information: geographic localization, name and objectives of PA programmes, age of the PA programme, characteristics of age groups and participants' age, number of activities included in the PA programme, frequency of the programme (days/week), quality initiatives, organization name and the identification details of the PA programme's coordinator (Additional file 1).\nOf the 278 municipalities, a total of 97 valid questionnaires were answered. Since some municipalities provided more than a single programme, 125 PA programmes were identified. Inclusion criteria for the purposive sample implied that at least one of the following conditions should be verified: i) programmes should belong to a District Capital in order to apply a geographic criterion; ii) programmes should include the following cumulative criteria: a) must have been in practice for 10 years or more [19], b) must have had two or more different types of activities [20,21], and c) must have had a frequency of two or more times a week [6]; iii) programmes that apply a quality initiative [14,16,22-25]. 
Therefore, 27 potentially eligible PA programmes for elderly people were identified, of which 18 were from a District Capital; eight were aged ten years or more, had two or more types of activities and a frequency of two or more times a week; and one had a quality initiative (Quality Certification). We screened each PA programme's coordinator by telephone to check eligibility, confirm willingness to participate and, accordingly, provide a written informed consent by email. At this stage, one programme was excluded because it did not meet any of the three conditions above. The characteristics of the 26 PA programmes included in our sample are described in Table 1.\nCharacteristics of the 26 PA programmes\nTo characterise the quality management models of the PA programmes, semi-structured face-to-face interviews with the PA programmes' coordinators (n = 26) were carried out between February and April of 2009. The questions were based on the EFQM Excellence Model's nine criteria and 32 sub-criteria. Before the 26 interviews, a pilot study was conducted among four PA programmes' coordinators, conveniently chosen from among the programmes that were not selected for the sample, to understand the process and evaluate the content understanding of the questions. As a result, some questions were adapted in accordance with respondents' comments. Afterwards, a standard interview guide was created and used for all interviews, which lasted 45 to 60 minutes and were tape-recorded and transcribed verbatim at a later date. Participants were asked about each sub-criterion of Leadership, Policy and Strategy, People, Partnerships and Resources, Processes, Customer Results, People Results, Society Results and Key Performance Results. A content analysis of the transcribed interviews was conducted. Two coding strategies were applied: (a) a priori categorisation of data based on the 32 sub-criteria and (b) a posteriori coding scheme, obtained directly from the data, using an inductive method to identify the themes and subthemes that emerged. To ensure rigour and reliability of analysis, the first three transcripts were coded in their entirety by two coders who achieved agreement through discussion and consensus. Two independent researchers double-coded two transcripts to assess the inter-rater reliability of coding. Intra-rater reliability was also conducted on a question of each criterion, within a 5-day interval. The inter-rater and intra-rater reliability were assured by the intercoder and intracoders' agreement, from Bellack's formula [26]. Both results obtained ranged from 95% to 100%, confirmed by Cohen's Kappa to eliminate the agreement by chance. Interscore reliability was in the range of 0.93 and above. To facilitate the coding process, we used the QSR NVivo software, which helps manage and organize qualitative data.\nAn on-line questionnaire was also administrated to the 26 PA programmes' coordinators, between June and July 2009. This new questionnaire, based on the EFQM Excellence Model's nine criteria and 32 sub-criteria, was generated according to the literature review and the interviews' content analysis. For each sub-criterion, items were devised concerning the areas addressing the EFQM Excellence Model and the specificity of the PA programmes for elderly people. Closed questions with multiple choice answers and Likert scales were used. 
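The interview-coding reliability procedure described above reports intercoder agreement with Bellack's formula and then checks it against Cohen's kappa to discount chance agreement. The sketch below is purely illustrative: the coder labels are hypothetical, and scikit-learn is assumed only for convenience (the study itself used QSR NVivo for coding and SPSS for statistics).

```python
# Hypothetical sub-criterion codes assigned by two coders to the same ten passages
from sklearn.metrics import cohen_kappa_score

coder_a = ["1a", "1a", "2b", "3c", "3c", "5d", "6a", "6a", "7b", "9a"]
coder_b = ["1a", "1a", "2b", "3c", "4a", "5d", "6a", "6a", "7b", "9a"]

# Bellack-style percent agreement: agreements / (agreements + disagreements) * 100
agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = 100.0 * agreements / len(coder_a)

# Cohen's kappa corrects the raw agreement for agreement expected by chance;
# for ordinal questionnaire items a weighted kappa (weights="linear") could be used.
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"Percent agreement: {percent_agreement:.1f}%")  # 90.0% for these labels
print(f"Cohen's kappa: {kappa:.2f}")
```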
The first draft of the questionnaire was submitted to a panel of experts (n = 5) in the field of PA programmes for elderly people and/or EFQM Excellence Model, to ensure the content validity. The experts pointed out their level of accordance with the relevance of the items, ease of understanding and adequacy as an instrument to characterise the management models of the PA programmes. Based on their suggestions, fourteen items were reframed and two were eliminated due to their irrelevance. Afterwards, the on-line questionnaire was tested among 15 PA programmes' coordinators, chosen from among the programmes that were not selected for the sample, for comments on readability. Some adjustments were made to make the questions clearer and more relevant to the PA programme case. The study design also included a test-retest reliability of the answers, performed with an interval of seven days. Agreement was estimated using kappa statistics (κ for categorical variables) and weighted kappa statistics (κw for ordinal variables). High levels of agreement (0.86 to 0.97) were found. The final version of the on-line questionnaire comprised 165 items and took a respondent about one hour to complete.\nIn addition, document analysis was carried out. Written documents, including procedures, budgets, flyers, e-mails, reports, minutes of meetings, specifications, print screens, publications, price lists, etc. were made available by some of the coordinators. Other information was gathered from the web page of the organization.\nWe used standard approaches to statistical analysis of data including frequencies and percentages for the categorical data, performed with the Statistical Package SPSS, version 17.0.\n[SUBTITLE] Data presentation [SUBSECTION] A set of the most relevant items concerning quality practices associated with the EFQM Excellence Model criteria was adapted from an original scale created to measure the nine criteria [27] and assigned to each EFQM sub-criterion based on its content domain. Several adjustments were made to reflect the specificity of the PA programmes for elderly people, according to collected data. The presence or absence of a particular quality practice was encoded as: addressed/measured = 1; not addressed/not measured = 0.", "In order to gather empirical evidence, methodological triangulation -- i.e. questionnaire surveys, semi-structured interviews and additional document analysis -- was employed.\nA preliminary on-line questionnaire was sent out to all mainland Portuguese municipalities (n = 278) in May of 2008. This brief questionnaire provided the following information: geographic localization, name and objectives of PA programmes, age of the PA programme, characteristics of age groups and participants' age, number of activities included in the PA programme, frequency of the programme (days/week), quality initiatives, organization name and the identification details of the PA programme's coordinator (Additional file 1).\nOf the 278 municipalities, a total of 97 valid questionnaires were answered. Since some municipalities provided more than a single programme, 125 PA programmes were identified.
Inclusion criteria for the purposive sample implied that at least one of the following conditions should be verified: i) programmes should belong to a District Capital in order to apply a geographic criterion; ii) programmes should include the following cumulative criteria: a) must have been in practice for 10 years or more [19], b) must have had two or more different types of activities [20,21], and c) must have had a frequency of two or more times a week [6]; iii) programmes that apply a quality initiative [14,16,22-25]. Therefore, 27 potentially eligible PA programmes for elderly people were identified, of which 18 were from a District Capital; eight were aged ten years or more, had two or more types of activities and a frequency of two or more times a week; and one had a quality initiative (Quality Certification). We screened each PA programme's coordinator by telephone to check eligibility, confirm willingness to participate and, accordingly, provide a written informed consent by email. At this stage, one programme was excluded because it did not meet any of the three conditions above. The characteristics of the 26 PA programmes included in our sample are described in Table 1.\nCharacteristics of the 26 PA programmes\nTo characterise the quality management models of the PA programmes, semi-structured face-to-face interviews with the PA programmes' coordinators (n = 26) were carried out between February and April of 2009. The questions were based on the EFQM Excellence Model's nine criteria and 32 sub-criteria. Before the 26 interviews, a pilot study was conducted among four PA programmes' coordinators, conveniently chosen from among the programmes that were not selected for the sample, to understand the process and evaluate the content understanding of the questions. As a result, some questions were adapted in accordance with respondents' comments. Afterwards, a standard interview guide was created and used for all interviews, which lasted 45 to 60 minutes and were tape-recorded and transcribed verbatim at a later date. Participants were asked about each sub-criterion of Leadership, Policy and Strategy, People, Partnerships and Resources, Processes, Customer Results, People Results, Society Results and Key Performance Results. A content analysis of the transcribed interviews was conducted. Two coding strategies were applied: (a) a priori categorisation of data based on the 32 sub-criteria and (b) a posteriori coding scheme, obtained directly from the data, using an inductive method to identify the themes and subthemes that emerged. To ensure rigour and reliability of analysis, the first three transcripts were coded in their entirety by two coders who achieved agreement through discussion and consensus. Two independent researchers double-coded two transcripts to assess the inter-rater reliability of coding. Intra-rater reliability was also conducted on a question of each criterion, within a 5-day interval. The inter-rater and intra-rater reliability were assured by the intercoder and intracoders' agreement, from Bellack's formula [26]. Both results obtained ranged from 95% to 100%, confirmed by Cohen's Kappa to eliminate the agreement by chance. Interscore reliability was in the range of 0.93 and above. To facilitate the coding process, we used the QSR NVivo software, which helps manage and organize qualitative data.\nAn on-line questionnaire was also administrated to the 26 PA programmes' coordinators, between June and July 2009. 
This new questionnaire, based on the EFQM Excellence Model's nine criteria and 32 sub-criteria, was generated according to the literature review and the interviews' content analysis. For each sub-criterion, items were devised concerning the areas addressing the EFQM Excellence Model and the specificity of the PA programmes for elderly people. Closed questions with multiple choice answers and Likert scales were used. The first draft of the questionnaire was submitted to a panel of experts (n = 5) in the field of PA programmes for elderly people and/or EFQM Excellence Model, to ensure the content validity. The experts pointed out their level of accordance with the relevance of the items, ease of understanding and adequacy as an instrument to characterise the management models of the PA programmes. Based on their suggestion, fourteen items were reframed and two were eliminated, due to its irrelevance. After, the on-line questionnaire was tested among 15 PA programmes' coordinators, chosen from among the programmes that were not selected for the sample, for comments on readability. Some adjustments were made to make the questions clearer and more relevant to the PA programme case. The study design also included a test-retest reliability of the answers, performed with an interval of seven days. Agreement was estimated using kappa statistics (κ for categorical variables) and weighted kappa statistics (κw for ordinal variables). High levels of agreement (0.86 to 0.97) were found. The final version of the on-line questionnaire comprised 165 items and took a respondent about one hour to complete.\nIn addition, document analysis was carried out. Written documents, including procedures, budgets, flyers, e-mails, reports, minutes of meetings, specifications, print screens, publications, price lists, etc. were made available by some of the coordinators. Other information was gathered from the web page of the organization.\nWe used standard approaches to statistical analysis of data including frequencies and percentages for the categorical data, performed with the Statistical Package SPSS, version 17.0.", "A set of the most relevant items concerning quality practices associated with the EFQM Excellence Model criteria was adapted from an original scale created to measure the nine criteria [27] and assigned to each EFQM sub-criterion based on its content domain. Several adjustments were made to reflect the specificity of the PA programmes for elderly people, according to collected data. The presence or absence of a particular quality practice was encoded as: addressed/measured = 1; not addressed/not measured = 0.", "Regarding Leadership, most of the coordinators who participated in this study revealed that they were personally involved in the development of a culture of Excellence, reinforcing a strong communicative culture throughout all areas of the organization (84,62%), encouraging people's empowerment and autonomy and ensuring that every member of the organization knows the role that the PA programme should play in society (both with 80,77%). 
Almost two-fifths (38,46%) of the coordinators ensured that people were capable of taking initiatives and fulfilling their responsibilities in the most appropriate way, and a single leader collaborated in quality training, since only his programme was involved in a quality scheme (3,85%) (Table 2).\nFrequencies and percentages of quality practices in the criterion Leadership\nConcerning Policy and Strategy, the issues related to quality initiatives, such as the measurement of quality and non-quality costs, quality strategies and quality objectives, were referenced by one coordinator (3,85%), the one whose programme was involved in a quality initiative. In contrast, 84,62% of the coordinators reported the identification of organizational processes and their interrelationships and 80,77% stated that all people are familiar with the mission and objectives of the PA programme (Table 3).\nFrequencies and percentages of quality practices in the criterion Policy and Strategy\nIn relation to the criterion People (i.e., employees/workers), 84,62% of the coordinators reported that People maintain fluid communication with one another; in contrast, 15,38% indicated that People voluntarily pass on useful information to other members of the organization. Two items related to quality initiatives appear with a very small percentage (3,85%), namely People's access to information about quality results and the quality training they are offered. The majority of the coordinators (80,77%) stated that formal processes were used to find out people's opinions (Table 4).\nFrequencies and percentages of quality practices in the criterion People\nWith reference to Partnerships and Resources, less than 20% of the PA programmes had formal communication procedures with partners and 11,54% of coordinators revealed that relationships with academic partners allow the organization to have access to scientific information. Nearly three quarters (73%) of respondents reported that the organization has the capacity for external cooperation. The most reported item was the one related to the recording of information and knowledge (88,46%) (Table 5).\nFrequencies and percentages of quality practices in the criterion Partnerships and Resources\nAnalysis of the Processes criterion showed that the items 'recommendations concerning exercise session phases' and 'standardized systems to deal with customer complaints' were accomplished by all PA programmes. We can also verify that most of the organizations advertised the PA programme and good accessibility was guaranteed (96,15%). Nonetheless, just 30,77% of organizations were oriented towards the fulfilment of customers' expectations and needs and only 19,23% kept documentation of work methods and organizational processes (Table 6).\nFrequencies and percentages of quality practices in the criterion Processes\nConcerning Customer results, 76,92% of the programmes evaluated customers' satisfaction and 34,62% had measures and/or indicators of customers' loyalty (Table 7).\nFrequencies and percentages of quality practices in the criterion Customer Results\nRelating to People results, 69,23% of the programmes evaluated people's absenteeism and 15,38% had measures and/or indicators of people's organizational commitment (Table 8).\nFrequencies and percentages of quality practices in the criterion People Results\nConcerning Society results, 15,38% of PA programmes had measures and/or indicators of their involvement in their target community.
23,07% of the coordinators confirmed that the organization had measures and/or indicators of the programme's impact in society (Table 9).\nFrequencies and percentages of quality practices in the criterion Society Results\nIn Key performance results, one coordinator mentioned assessments of the quality of the service delivered and 42,31% of the coordinators reported that the organization has measures and/or indicators of the financial results of the PA programme (Table 10).\nFrequencies and percentages of quality practices in the criterion Key Performance Results\nFigure 2 shows the average of the percentages related to quality practices associated to the EFQM Excellence Model criteria. Four criteria (three Enablers and one Result) had values over 50%: Processes (65,38%), Leadership (61,03%), Customer results (58,46) and People (51,28%). In contrast, the other two Enablers and three Results had percentages under 50%: Partnerships and resources (45,77%), People results (41,03%), Policy and strategy (37,91%), Key performance results (19,23%) and Society results (19,23%).\nAverage of the percentages related to quality practices of the EFQM Excellence Model's criteria.", "To our knowledge, this was the first study applying the EFQM Excellence Model criteria to PA programmes for elderly people.\nResults showed that Processes, Leadership, Customer results and People had high percentage occurrences of quality practices. In contrast, Partnerships and resources, People results, Policy and strategy, Key performance results and Society results had lower percentage occurrences.\nPA programmes for elderly people play a significant role in senior citizens' health, quality of life, autonomy and capability to face daily tasks. It is widely accepted that the benefits of such programmes depend upon adherence to exercise [28]. Higher attendance in PA programmes and activity levels are strongly influenced by degrees of enjoyment [29,30]. Therefore, continuous quality improvement of the PA programmes for elderly people can be useful, and even critical, for elderly satisfaction and adherence.\nLeadership is the key for driving forward quality improvement activities [31-33] and involves a process of social influence on a group of people. Our data suggests that the coordinators are particularly involved in developing the vision and mission, and enhance a strong culture of communication. These aspects are considered fundamental to quality management [34-36]. Indeed, other studies in different sectors have focused on leadership and have shown that the commitment of the leaders operates as the thrust of the quality improvement process [37-39]. Moreover, their physical presence, visibility and concern for quality improvement were associated with transformational leadership [40], i.e., leadership that creates valuable and positive change in its followers. Our study also revealed that most of the leaders interact with customers, partners and representatives of society. Trustworthy leadership increases partnership building and sustainability, essential to guarantee the success of PA promotion as a public health strategy, as demonstrated in some programmes [41]. Several studies have focused on customers [42-44] since listening them appears to be a priority for organizations that want to succeed. With regard to PA programmes, the CDC mention the importance of interacting with all stakeholders [13]. 
Specifically related to the PA programmes for elderly people, the British Heart Foundation (BHF) stated that participants or other stakeholders must be actively involved in all aspects of programme development, including planning, promotion and evaluation [45]. The ACSM also recognizes that PA leaders should work closely with individuals to design a PA regimen that reflects the person's preferences and capabilities [46]. In addition, our results indicate that coordinators neglect to run the PA programme as a set of interrelated processes. Although there are no studies on this issue for PA programmes for elderly people, some organizations have made recommendations for their specific programme, namely the American Association of Cardiovascular and Pulmonary Rehabilitation (AACVPR), which states that the programme leaders are responsible for directing, integrating and coordinating programme services, and recommending a central location for all policies, procedures and guidelines references [31]. Another interesting result of our data concerns the fact that most of the leaders are not involved in quality training in terms of teaching people at lower hierarchical levels, which might be related to the fact that only a single programme concerned itself with quality initiatives.\nPolicy and strategy is defined as how the organisation implements its mission and vision via a clear stakeholder-focused strategy, supported by relevant policies, plans, objectives, targets and processes [17]. Our results point out a modest concern about the opinions of different stakeholders in setting targets for the PA programme, which has been described as one of the crucial steps in the planning and evaluation of PA programmes, or as a good practice [13,45]. In addition, contrary to the guidelines [45], our study showed that a minority of programmes establish the objectives according to the participants' stated aims. Furthermore, this fact is in the opposite direction from the results of an European cross-national report on PA Programmes and promotion strategies for older people, in which most of the PA Programme's directors reported that their programmes were adjusted according to the participants' aims [19]. Another result that stands out in our data is the fact that just about two thirds of the programmes systematically assess their effectiveness in order to improve their continuous quality improvement process, which opposes the Benchmark 3 from Physical Activity and Health Branch (PAHB), at the CDC [14]. As indicated by the CDC, 'the evaluation is the systematic examination and assessment of features of an initiative and its effects, in order to produce information that can be used by those who have an interest in its improvement or effectiveness' (CDC 2002b [13] p.5), consequently an 'imperative', as stated before. Jackson argues that every effort must be made to engage the organisational members in continuous improvement activities [47]. However, no programme can be planned or evaluated oblivious of the context that surrounds it, especially when what drives most decisions on policy and practice in the public sector are considerations of the available evidence [45]. Institutional, community and public policies may have either supporting or antagonistic effects on programmes [48]. In addition, there are several factors that influence health behaviour [49]. Therefore, it is necessary to include pertinent information regarding the programme context [13,14] that must be absorbed in different ways [50]. 
In the present study, only 38,46% of PA programmes capture this information, which may reflect a limited knowledge on the part of most of the programmes about the context in which they operate. On the other hand, about two thirds of the analysed programmes have an annual plan that is regularly reviewed and used in an annual report. The data from this report helps to improve the new annual planning cycle of the PA programme. These procedures are in agreement with those found in other studies [51,52] or in accordance to different documents, such as content of the planning and evaluation of PA programmes [13,53] and health promotion programmes [54]. Still regarding this criterion, most of the leaders of our study reported that everybody had full access to the information about the mission and objectives of the PA programme. In the field of Higher Education, Calvo-Mora and collaborators [37] alleged that the leader's communication and involvement of all staff in policy and strategy were crucial to the processes management. Moreover, in accordance with the same author [37], our study found that processes were clearly identified, as well as their interrelationships. With regard to quality strategies, in our study only one PA programme had regularly used internal quality assessment and external audits. However, several studies have focused on the reasons for the use of quality schemes and pointed out the advantages of their implementation in improving services [24,55,56]. On the other hand, Ritchie and Dale suggest the existence of some obstacles to implementing these initiatives within the organizations [57]. Similarly, Davies and collaborators reviewed the aspects of culture/context, which were specific to the university academic context, and could impact negatively on the implementation of a quality framework [58].\nRegarding People criterion, that is an important feature for quality management [59], most of the participants in our study reported the existence of procedures to find out employees' opinions, which was also found in a study related to quality management in sports facilities [60]. This initiative is considered a quality practice to Connolly and Connolly [61]. In fact, organizations have recognized the need to understand employee opinions to identify their concerns, assess the impact of a variety of agendas and provide employees with different communication channels [62]. Regarding this issue, our data also show that employees from the majority of PA programmes have an open dialogue with all stakeholders, especially with one another (76,92%). Furthermore, although the results are less obvious with regard to autonomy and decision-making, our study demonstrates that most of the PA programmes involved and empowered people in various ways (e.g. opinions and suggestions put forward by people, and teamwork). These findings are not totally in line with the arguments of Wilkinson and collaborators, who emphasized the employee involvement as a key theme for quality management, namely autonomy, creativity, active cooperation and self-control for employees [63]. Also, Osseo-Asare and collaborators concluded that a conceptual framework for achieving and sustaining quality in UK higher education institutions could be developed based on a set of principles which includes staff empowerment through participation and commitment [38]. In their study, these authors found a discrepancy between what respondents think about the importance of staff empowerment and the real practice in the organizations. 
Even with regard to the management of people, most of the participants in our study gave emphasis to the recruitment of people with high skills; however, only 34,62% require a specialization in the area of PA and ageing for instructors. These results are similar to those found on the Cross-National Expert Survey Report on Physical Activity Programmes and Physical Activity Promotion Strategies for Older People [19]. In this report, the authors make recommendations on the importance of recruiting teachers who have high levels of qualification and reinforce the importance of continuous professional development. Regarding this issue, the International Curriculum Guidelines for Preparing Physical Activity Instructors of Older Adults outlines each of the major content areas that should be included in any entry-level training programme [64]. The PAHB, established that a PA programme should be run by highly skilled PA practitioners [14]. Regarding the continuous training of people, our study revealed that over three quarters of the PA programmes take this aspect into account. In contrast, Hughes and collaborators found that only 56% of the PA programmes for older people trained their instructors [65]. The Guidelines for Cardiac Rehabilitation and Secondary Prevention Programs also emphasises these points, and goes further, establishing that the 'polices and procedures should include provisions for a competency-based job description; required education, continuing education, experiences, licences and certifications; and an orientation checklist, a competency assessment and a regularly performed - at least annually - performance appraisal' (AACVPR 2004 [31] p.193). Once more, our data showed that the items related to quality initiatives have only a passing reference, which appears to be related to the fact that just a single programme is involved in quality schemes, as previously explained.\nDifferent studies reported that the opportunities that are provided by Partnerships and resources should be maximized [38,60,66,67]. In addition, the development and sustainment of the community partnerships is the first public health benchmarks for PA Programmes established by the PAHB at the CDC [14]. In our study, 73,08% PA programmes have established partnerships, which is in line with the emphasis that some authors [41,68,69] have put on the importance of forging effective partnerships, creating value and promoting cooperation agreements based on mutually beneficial joint synergies. Especially in the PA programmes for elderly, some organizations reinforce the importance and strength of these partnerships, since they provide additional resources in the form of funding, facilities and equipment and being able to access wide-ranging abilities and knowledge [3,45]. The most surprising result of our data concerns the few partnerships with Higher Education Institutions (11,54%). Indeed, these academic institutions contribute to the creation of knowledge and its dissemination, so we consider it a disadvantage for programmes to not have direct access to their counsel. Moreover, such partnerships would have reciprocal benefits, since the programme also could provide means for researchers to get their answers in a more practical way. Additionally, disseminating this knowledge may promote the development of new programmes or improve the programme itself [13]. 
When we analyzed the partnerships with health institutions, the results are better, but still far from what is supported by some authors or organizations, who advocate the active participation of healthcare professionals in counselling patients on PA [45,70-72] or encouraging them to accumulate moderate-intensity PA [73]. Similar results arise from the European Network for Action on Ageing and Physical Activity (EUNAAPA) study, where sixty percent of the PA programme directors reported that they build partnerships with local healthcare professionals or organisations [19]. With regard to finances, our results appear to indicate that there is not a strict control of these resources, since there is still a considerable percentage of programmes that do not manage them (65,38%). These results are quite different from those reported by Scott and colleagues [19], where sixty five percent of the PA programme directors were able to estimate the total cost of their programme. In fact, most of the monetary funds of these programmes come from the public finance, and thus it appears to us that leaders should control these funds even more strictly. Although the PA programmes are not-for-profit, the management of its financial resources should be identified as key-process, in order to consolidate the programme's financial structure and to ensure it can fulfil its mission in the present and in the future. Despite the maintenance plans of equipment and buildings should be periodically provided [66], just about one third of the interviewed coordinators reported that their programme had maintenance plans. Another study [19] found a higher percentage of programmes with maintenance plans (46%), but the results were still not consistent with the recommendations [31,74]. Otherwise, the recognition that information technology has been a catalyst for progress and prosperity [75] seems to be accepted by the coordinators of our study, since most of them implemented new technologies in their programmes. Concerning information management, although there are no recommendations in the field of PA programmes for elderly, the AACVPR advises that information management involves supervision of the storage, communication, utilization and tracking of information related to the programme and facility [31]. In this respect, the majority of the coordinators indicated that information, concerning to all aspects of the programme, was systematically recorded. On the contrary, the results related to the systematic pursuit of the latest scientific knowledge are quite modest, since less than one third of the coordinators refer to this quality practice. The reason for this unexpected result becomes somewhat clearer when we realise that very few programmes have established partnerships with higher education experts who are up to date on the latest scientific knowledge. In an American study [76] most states provided evidence of competency with regard to using data and scientific information to develop and prioritise their PA programming.\nAn excellent organization adopts a management philosophy based on Processes [77,78]. Although the majority of the coordinators of our study stated that the methods and processes were defined, only a minority operationalised it in terms of documentation. For the AACVPR, policies and procedures related to information management should include a wide range of records and should specify uniform standards for evaluation, intervention and outcome measurement [31]. 
Furthermore, processes should be systematically reviewed [17,79]. Specifically with regard to emergency protocols, about one third of the coordinators stated that they are carried out periodically. Related results arise from the EUNAAPA study, where half of PA programme directors reported having emergency protocols in place and that staff members were trained annually, at the very least, in these protocols [19]. Both results indicate that AHA/ACSM's recommendations have not been followed. In fact, it is emphasized that emergency policies and procedures must be reviewed and practiced regularly [74]. With regard to the design of services and tailoring the programme to the needs and interest of participants, the results differ. On the one hand, more than two-thirds of coordinators recognized that the services are designed according to customer needs; on the other hand, less than a third is geared towards the fulfilment of their expectations and needs. In the Scott and collaborators study, almost two thirds of PA directors reported that participants were formally surveyed for the aims of their involvement in the programme and most of these directors also reported that their programmes were adjusted according to participants' stated aims [19]. Physical activity leaders should work closely with individuals to design a PA regimen that reflects the person's preferences and capabilities [46]. In the same line, the BHF recommends the involvement of participants in this process (BHF 2007). Moreover, tailoring the exercise programme to the needs and interest of participants is associated with higher programme attendance [80,81]. With regard to the preparticipation screening, less than half of our PA programmes' coordinators reported that a health check was required to guarantee a safe participation of the customers. Results from EUNAAPA study [19] are slightly different since only half of the PA programme directors reported that a health check was required before a potential participant would be eligible to enter their programme. Screening of older adults prior to starting an exercise programme continues to be a controversial issue [82]. In fact, the ACSM endorses the perspective that medical clearance should not be required prior to encouraging older individuals to begin a light-intensity activity programme, since it may be a disincentive to increasing PA among these individuals [46]. For higher intensity levels, AHA/ACSM recommend a pre-participation screening, primarily to identify those at increased risk of an adverse cardiac event [74]. In our study, about two-thirds of the PA coordinators indicated that the exercise prescription includes aerobic, muscle strength, flexibility and balance exercises. Additionally, they also reported incorporating progression as part of their programme. These are consistent with the ACSM position's stand [6] and ACSM's Guidelines [83]. In our study we found an unanimous result concerning the components of the exercise training session, which is in line with the ACSM recommendations [83]. Our results about exercise prescription, progression and components of the session are more consistent with the ACSM recommendations than those disclosed in the EUNAAPA study [19]. Concerning to environmental conditions, more than half of the coordinators reported that they are guaranteed, i.e. temperature of sports facilities, safe and pleasant conditions of sports equipment and facilities, places with good acoustics and access to a water source are incorporated in the programme. 
This represents an adequate degree of concordance with the recommendations [31,83]. With regard to advertising, more than three quarters of the coordinators revealed that the programme was promoted. Some authors and organizations believe that social marketing and communication campaigns are a part of a set of actions required to increase PA [12,84,85]. In addition, the BHF makes recommendations on marketing and promotion strategies among older people [45]; however, no scientific evidence was found about the most effective method of promoting a PA programme for this target population. Across all programmes, 76,92% offer different forms of access to facilitate the enrolment of seniors. The Task Force on Community Preventive Services recommends the creation of or enhanced access to places for PA, combined with informational outreach activities to increase PA [12], even giving examples of how to reduce some environmental barriers. Good accessibility is also provided in almost all analysed programmes (96,15%), which is an essential aspect of programme planning [12,45,72]. The BHF emphasises the proximity of programmes to residences in a friendly and accessible way, ensuring well-lit paths and providing good public transports [45]. In this regard, a qualitative study in older and rural African American and white women found that PA programmes' enabling factors included transportation and free facilities [86]. A study by Booth and collaborators showed that for adults over 60, neighbourhood safety and access to local facilities were important predictors of being active [87]. In our study, all the programmes had an effective complaints handling system and more than half had suggestions through standardized processes. In addition to what was mentioned above about the importance of customer suggestions or opinions, customer complaint information can be also used as a basis for customer-focused process improvement [88]. In this particular case, our results suggest that organizations have a preference for reactive methods and delayed methods, such as complaint analysis, over proactive methods, contrary to what was found in another study [44]. An excellent service can only be achieved with a profound knowledge of evolving customer needs; therefore, a functional customer complaint management system should be implemented in every organization [89].\nWith respect to Customer results, organizations must measure and achieve them [17]. Similarly, PA interventions should be evaluated in terms of their processes as well as their outcomes [11]. There are many studies addressing the measurement of PA in order to identify current levels of activity and assess the effectiveness of intervention programmes. However, few PA intervention studies specifically target Customer retention or Customer satisfaction. Actually, the EFQM argues that excellent organisations achieve the best results for their customers and achieve high levels of customer satisfaction [17]. Furthermore, customers do not only provide input (suggestions or complaints), but they also take part in the service process, influencing both the process's performance and the perception of quality of the service produced [90]. One of the most commonly used techniques for listening to customers is satisfaction surveys [44]. More than three quarters of our PA programmes' coordinators assured that the satisfaction of participants in their programme was formally measured. 
Another key predictor of customer results is loyalty [36], but less than 35% of the programmes studied evaluate this item. A recent study about PA programmes for older adults in the United States found that 74% tracked attendance [91]. Also, complaints handling and management are essential for achieving customer retention and loyalty [92]. Besides this, though all programmes have a complaints system in place, only approximately 70% evaluated their resolution process. Contrary to complaints, all the programmes that have a standardized system of suggestions also carried out its assessment. Although the measurement process represents one of the most important components of customer results from an exercise programme [83], just 57,69% of our coordinators reported that objective outcome measures were recorded for participants at regular intervals.\nTo achieve excellence, organisations must also focus on the People results [17], since people involvement is one of the most important drivers of continuous improvement [77]. Nevertheless, most coordinators of our study revealed that the organization does not have information on its employees' motivation and commitment. This result is not surprising, especially because organizations rarely use instruments to obtain information about how their employees assess the motivational aspects of their workplace [93], compared with job satisfaction measurement. However, some meta-analysis studies [94,95] concluded that people's satisfaction is not enough to improve their performance - people must also be highly motivated [93]. Furthermore, without satisfied and motivated employees it is impossible to achieve satisfied and loyal customers [44]. An empirical study observed that employees' loyalty is significantly related to service quality, which in turn impacts customer satisfaction and customer loyalty [96]. Martin-Castilla and Rodriguez-Ruiz give examples of the different aspects that must be evaluated, both in terms of people's motivation and satisfaction, such as the development of professional careers, learning opportunities, definition of objectives, employment conditions, salary, relation between peers, organisational role in the community, and work environment, among others [78]. Additionally, one of the key indicators of people satisfaction includes absenteeism [36]. While the majority of our PA programmes' coordinators confirmed that there were indicators of people's absenteeism (69,23%), only a minority stated that the employees' loyalty was measured (26,92%) as well as people's satisfaction (38,46%). We believe that people who are satisfied with regard to the management, employment conditions, relationships between peers and the organisational role in the community will be more prone to improve the quality of the PA programme; therefore, the evaluation of theses issues should not be neglected. Also, people's achievement is an important indicator, not only with regard to the development of people, but also in their ability to solve problems and take initiatives. Nearly two thirds of our PA coordinators had indicators of people's performance, which is defended by the AACVPR [31], as discussed previously. 
This result stems from the fact that the majority of people with employment contracts in the public sector is evaluated by the Integrated System on the Evaluation of the Public Administration Performance (SIADAP).\nThe Society results criterion is based on what an organisation is achieving in satisfying the needs and expectations of the community [17]. The programme's visibility, engagement and reputation are recognized as a result of its activities and the active participation of the organisation as a responsible member of the community. However, few participants (19,23%) reported indicators of the involvement of their programmes in the community and less than one quarter of the programme's impact on society (23,07%). Furthermore, the CDC claims the importance of assessing the programme effects on organizations or communities [13], but this is not our case. In fact, it is not just the impact of the programme from the standpoint of public health, but also the perceptions that society has about the programme as a barometer of its action in society. Also, social responsibility is a vital part of the work and role of the programme, as it tries to respond to a problem of the society as a whole [77], but again, only nearly 20% of the PA programmes' coordinators had measures or indicators to track this issue. As recognized by some authors [45,97], community involvement in these programmes is critical to its success, so it is concerning that the most of the coordinators do not pay attention to these indicators.\nThe Key performance results represent the global organizational performance and the fulfilment of expectations. The mission of the PA programmes is linked to a significant impact on the promotion of PA in the elderly population. However, less than 12% of our coordinators declared they had indicators of process efficiency, i.e. obtaining the best outcomes from a set of actions. Also, regarding the quality of the service delivered, only one PA coordinator assumed that this assessment was performed. This result may be associated with the fact that only one programme performed a quality assessment/audit. In this respect, several studies [23-25,55] found that quality initiatives may improve process and outcomes. Finally, less than fifty percent of the PA coordinators indicated that the organisation's financial resources were properly managed. Recognising that most of the PA programmes have limited municipal funds, we believe that there is still a modest understanding of the need to achieve a certain level of profitability to contribute to the sustainability of the programme, and that all activities must be cost-accountable.\nThe 'evaluation is integral to success' (Schmid 2006 [11] p.115) so, regardless of sector, size, structure or maturity, organisations need to establish an appropriate management framework to be successful [98]. We believe that this premise is also valid for PA programmes. 
Thus, it will help to improve services and, at the same time, to increase access and the level of PA of elderly citizens.", "Our findings suggest that although there are some good practices in the PA programmes under analysis, specifically in criteria Processes, Leadership, Customer results and People, there are still relevant areas that require improvement, namely those related to Partnerships and resources, People results, Policy and strategy, Key performance results and Society results.\n[SUBTITLE] Strengths and Limitations [SUBSECTION] To our knowledge, this was the first study applying the EFQM Excellence Model criteria in PA programmes for elderly people.\nHowever, the study has certain limitations, which must be considered when interpreting its results.\nFirst, the study was based on the PA programmes coordinators' perceptions. Consequently, such perceptions may not provide a complete and accurate picture of the reality. Actually, the results are mainly based on self-reporting, which might also have contributed to a more favourable outcome. Conducting a study with the participation of different stakeholders of the PA programmes will be an asset in the future. Secondly, the research design employed was cross-sectional rather than longitudinal. In this regard, an evaluation of the quality practices is a process that develops over time and whose effects are only really appreciated in the long term. Therefore, it would be appropriate to follow a longitudinal approach in future studies. Finally, the external validity of the findings presented is low. Nevertheless, we are convinced that the study provides details about the management models of the PA programmes for elderly people developed by the Portuguese Local Administration, their strengths and weaknesses, in order to improve their quality.", "To our knowledge, this was the first study applying the EFQM Excellence Model criteria in PA programmes for elderly people.\nHowever, the study has certain limitations, which must be considered when interpreting its results.\nFirst, the study was based on the PA programmes coordinators' perceptions. Consequently, such perceptions may not provide a complete and accurate picture of the reality. Actually, the results are mainly based on self-reporting, which might also have contributed to a more favourable outcome. Conducting a study with the participation of different stakeholders of the PA programmes will be an asset in the future. Secondly, the research design employed was cross-sectional rather than longitudinal. In this regard, an evaluation of the quality practices is a process that develops over time and whose effects are only really appreciated in the long term. Therefore, it would be appropriate to follow a longitudinal approach in future studies. Finally, the external validity of the findings presented is low. Nevertheless, we are convinced that the study provides details about the management models of the PA programmes for elderly people developed by the Portuguese Local Administration, their strengths and weaknesses, in order to improve their quality.", "AIM participated in the acquisition and analysis of data and participated in drafting and editing the manuscript. MJR managed the data collection and analysis and supervised the drafting and editing of the manuscript. PS designed the study protocol and helped design the questionnaires/interviews. RS managed the data collection and analysis. JM participated in the coordination of the study and supervised the drafting and editing of the manuscript. JC participated in the design of the questionnaires/interviews and coordination and management of the study.\nAll authors read and approved the final manuscript.", "The study was approved by the Scientific Council and Ethics Committee of the Faculty of Sport - University of Porto.", "The authors declare that they have no competing interests.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/123/prepub\n", "Preliminary on-line questionnaire. Explanation of the structure and content of the preliminary on-line questionnaire\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Maternal characteristics during pregnancy and risk factors for positive HIV RNA at delivery: a single-cohort observational study (Brescia, Northern Italy).
21338498
Detectable HIV RNA in mothers at delivery is an important risk factor for HIV transmission to newborns. Our hypothesis was that, in migrant women, the risk of detectable HIV RNA at delivery is greater owing to late HIV diagnosis. Therefore, we examined pregnant women by regional provenance and measured variables that could be associated with detectable HIV RNA at delivery.
BACKGROUND
An observational retrospective study was conducted from January 1999 to May 2008. Univariate and multivariable regression analyses (generalized linear models) were used, with detectable HIV RNA at delivery as the dependent variable.
METHODS
The overall population comprised 154 women (46.8% migrants). Presentation was later in migrant women than Italians, as assessed by CD4-T-cell count at first contact (mean 417/mm³ versus 545/mm³, respectively; p = 0.003). Likewise, HIV diagnosis was made before pregnancy and HAART was already prescribed at the time of pregnancy in more Italians (91% and 75%, respectively) than migrants (61% and 42.8%, respectively). A subgroup of women with available HIV RNA close to term (i.e., ≤30 days before labour) was studied for risk factors of detectable HIV RNA (≥50 copies/ml) at delivery. Among 93 women, 25 (26.9%) had detectable HIV RNA. A trend toward an association between non-Italian nationality and detectable HIV RNA at delivery was demonstrated by univariate analysis (relative risk, RR = 1.86; p = 0.099). However, by multivariable regression analysis, the following factors appeared to be more important: lack of stable (i.e., ≥14 days) antiretroviral therapy at the time of HIV RNA testing (RR = 4.3; p < 0.0001), and higher CD4+ T-cell count at pregnancy (per 50/mm³, RR = 0.94; p = 0.038).
RESULTS
These results reinforce the importance of extensive screening for HIV infection, earlier initiation of antiretroviral therapy and stricter monitoring of pregnant women to reduce the risk of detectable HIV RNA at delivery. Public health interventions should be particularly targeted to migrant women who are frequently unaware of their HIV status at the time of pregnancy.
CONCLUSIONS
[ "Adult", "Antiretroviral Therapy, Highly Active", "CD4 Lymphocyte Count", "Cohort Studies", "Delivery, Obstetric", "Female", "HIV Seropositivity", "HIV-1", "Humans", "Italy", "Pregnancy", "Pregnancy Complications, Infectious", "RNA, Viral", "Regression Analysis", "Risk Factors", "Transients and Migrants" ]
3058020
null
null
Methods
[SUBTITLE] Characterisation of the overall population by geographical origin and management of pregnant women and newborns [SUBSECTION] A retrospective study was conducted on all pregnant women attending a large outpatient HIV clinic in Brescia (northern Italy) from January 1999 to July 2008. Pregnant women were initially referred to our clinic by general practitioners, obstetric services and services for sexually transmitted diseases at the time of HIV diagnosis, or were already on follow-up when they were found to be pregnant. On the basis of Italian guidelines for antiretroviral therapy available at the time of the study [7], the women were followed during pregnancy with examinations and clinical examinations every three months, using an integrated approach (HIV disease management and obstetric evaluation). Complete blood tests (including CD4+ T-cell count and HIV RNA) were performed, and the women were screened for latent tuberculosis infection, TORCH (T. gondii, others, Rubella, Cytomegalovirus, HIV) and malaria, if clinically indicated. HIV RNA was determined by branched DNA quantification (Bayer bDNA test, Versant HIV-1 RNA 3.0 assay [bDNA], Bayer Health Care). Antiretroviral treatment was commenced as soon as possible if indicated by the mothers' conditions [7]; otherwise, cART was initiated as soon as the mothers entered the 2nd trimester of pregnancy. Italian guidelines edited in 2005 recommended beginning antiretroviral therapy at CD4+ T-cell count ≤350 cells/mm3; treatment at 350 < CD4+ < 500 cells/mm3 was only suggested if HIV RNA was >100,000 copies/ml [7]. HIV-infected women were delivered by caesarean section, and zidovudine was infused as prophylaxis during delivery. Breastfeeding was discouraged. The children were followed at the Paediatric Clinic of the local hospital (Spedali Civili di Brescia) with clinical evaluation and complete blood tests including HIV antibody test (ELISA) and HIV RNA on day 0, week 4, week 8, week 24, week 48 and week 96. Children were considered to be negative for HIV infection if HIV RNA and HIV antibody tests were negative at week 48. This study was approved by the Ethical Committee of the Spedali Civili di Brescia. The patients signed the informed consent for participation to this study. A retrospective study was conducted on all pregnant women attending a large outpatient HIV clinic in Brescia (northern Italy) from January 1999 to July 2008. Pregnant women were initially referred to our clinic by general practitioners, obstetric services and services for sexually transmitted diseases at the time of HIV diagnosis, or were already on follow-up when they were found to be pregnant. On the basis of Italian guidelines for antiretroviral therapy available at the time of the study [7], the women were followed during pregnancy with examinations and clinical examinations every three months, using an integrated approach (HIV disease management and obstetric evaluation). Complete blood tests (including CD4+ T-cell count and HIV RNA) were performed, and the women were screened for latent tuberculosis infection, TORCH (T. gondii, others, Rubella, Cytomegalovirus, HIV) and malaria, if clinically indicated. HIV RNA was determined by branched DNA quantification (Bayer bDNA test, Versant HIV-1 RNA 3.0 assay [bDNA], Bayer Health Care). Antiretroviral treatment was commenced as soon as possible if indicated by the mothers' conditions [7]; otherwise, cART was initiated as soon as the mothers entered the 2nd trimester of pregnancy. 
Italian guidelines edited in 2005 recommended beginning antiretroviral therapy at CD4+ T-cell count ≤350 cells/mm3; treatment at 350 < CD4+ < 500 cells/mm3 was only suggested if HIV RNA was >100,000 copies/ml [7]. HIV-infected women were delivered by caesarean section, and zidovudine was infused as prophylaxis during delivery. Breastfeeding was discouraged. The children were followed at the Paediatric Clinic of the local hospital (Spedali Civili di Brescia) with clinical evaluation and complete blood tests including HIV antibody test (ELISA) and HIV RNA on day 0, week 4, week 8, week 24, week 48 and week 96. Children were considered to be negative for HIV infection if HIV RNA and HIV antibody tests were negative at week 48. This study was approved by the Ethical Committee of the Spedali Civili di Brescia. The patients signed the informed consent for participation to this study. [SUBTITLE] Factors associated with detectable HIV RNA at delivery (subgroup of mothers with available HIV RNA close to term) [SUBSECTION] For this analysis, we considered only patients with available HIV RNA measured close to term (i.e., ≤30 days before delivery). Log-normal generalized linear regression models were used to assess factors associated with the outcome, defined as detectable HIV RNA (i.e., ≥50 copies/ml) close to term. Variables were selected on the basis of plausibility of their association with the outcome and epidemiological relevance, and to avoid possible biases. For example, CD4+ T-cell counts at delivery were not included, since antiretroviral therapy and time of initiation were likely to influence these values and HIV RNA at the same time. Therefore, the following variables were selected: geographical provenance (Italy versus other countries), age, calendar year at pregnancy, CD4+ T-cell count at pregnancy, and presence/type of cART at the time of HIV RNA testing (no therapy versus two antiretroviral drugs versus at least three drugs in combination, i.e., HAART regimens). Therapy was considered to be present if it was initiated at least 14 days before HIV RNA testing. Since the 9 patients who were not prescribed cART at pregnancy were the same who were not on cART close to term, in the multivariable model we used a single dichotomy variable (no cART versus any cART close to term). Moreover, since all these 9 patients were non-Italians, there was an effect modification between country of origin and treatment exposure. For this reason, nationality was excluded from the main multivariable model. Lastly, in order to test whether nationality was correlated with detectable HIV RNA close to term, the 9 patients without treatment were excluded from a multivariable model that included nationality as variable. Risk ratios (RR) with corresponding 95% confidence intervals were used to describe the strength of the associations. For this analysis, we considered only patients with available HIV RNA measured close to term (i.e., ≤30 days before delivery). Log-normal generalized linear regression models were used to assess factors associated with the outcome, defined as detectable HIV RNA (i.e., ≥50 copies/ml) close to term. Variables were selected on the basis of plausibility of their association with the outcome and epidemiological relevance, and to avoid possible biases. For example, CD4+ T-cell counts at delivery were not included, since antiretroviral therapy and time of initiation were likely to influence these values and HIV RNA at the same time. 
Therefore, the following variables were selected: geographical provenance (Italy versus other countries), age, calendar year at pregnancy, CD4+ T-cell count at pregnancy, and presence/type of cART at the time of HIV RNA testing (no therapy versus two antiretroviral drugs versus at least three drugs in combination, i.e., HAART regimens). Therapy was considered to be present if it was initiated at least 14 days before HIV RNA testing. Since the 9 patients who were not prescribed cART at pregnancy were the same who were not on cART close to term, in the multivariable model we used a single dichotomy variable (no cART versus any cART close to term). Moreover, since all these 9 patients were non-Italians, there was an effect modification between country of origin and treatment exposure. For this reason, nationality was excluded from the main multivariable model. Lastly, in order to test whether nationality was correlated with detectable HIV RNA close to term, the 9 patients without treatment were excluded from a multivariable model that included nationality as variable. Risk ratios (RR) with corresponding 95% confidence intervals were used to describe the strength of the associations. [SUBTITLE] Additional definitions and statistical notes [SUBSECTION] The time of diagnosis of HIV infection was at the first positive HIV confirmatory test (western blotting), and the CD4+ T-cell count and HIV RNA at diagnosis were the first values obtained after HIV diagnosis. The time of pregnancy diagnosis was considered to be the last menstrual period, and CD4+ T-cell count and HIV RNA at pregnancy were the first values obtained after pregnancy diagnosis. Lastly, cART at delivery was the regimen at the last blood test before delivery. The chi square test and score test for trend were used when appropriate. All statistical tests were two sided. P-values <0.05 were considered to be significant. Data were collected in an electronic chart used in our Centre (NetCare, version 1.05.11 sp 26) and double-checked for consistency and completeness with paper charts. Statistical analyses were performed using STATA software (Stata Statistical Software release 9.2, 2007; Stata Corporation, College Station, Texas). The time of diagnosis of HIV infection was at the first positive HIV confirmatory test (western blotting), and the CD4+ T-cell count and HIV RNA at diagnosis were the first values obtained after HIV diagnosis. The time of pregnancy diagnosis was considered to be the last menstrual period, and CD4+ T-cell count and HIV RNA at pregnancy were the first values obtained after pregnancy diagnosis. Lastly, cART at delivery was the regimen at the last blood test before delivery. The chi square test and score test for trend were used when appropriate. All statistical tests were two sided. P-values <0.05 were considered to be significant. Data were collected in an electronic chart used in our Centre (NetCare, version 1.05.11 sp 26) and double-checked for consistency and completeness with paper charts. Statistical analyses were performed using STATA software (Stata Statistical Software release 9.2, 2007; Stata Corporation, College Station, Texas).
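The modelling strategy described above lends itself to a short illustration. The paper's analyses were run in Stata 9.2; the Python sketch below is therefore only an illustration of how adjusted risk ratios for detectable HIV RNA might be obtained from a log-link generalized linear model, and every variable and file name in it (detectable, no_cart, cd4_per50, age, year, delivery_cohort.csv) is hypothetical rather than taken from the study.

```python
# Illustrative sketch only -- not the authors' code. The paper fitted
# log-link generalized linear models in Stata 9.2; this Python version uses
# a Poisson family with robust standard errors ("modified Poisson"), a
# common way to estimate risk ratios for a binary outcome.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per woman with HIV RNA measured <=30 days before delivery:
#   detectable : 1 if HIV RNA >= 50 copies/ml close to term, else 0
#   no_cart    : 1 if no cART for >= 14 days at the time of HIV RNA testing
#   cd4_per50  : CD4+ T-cell count at pregnancy / 50 (RR is then per 50 cells/mm3)
#   age, year  : age and calendar year at pregnancy
df = pd.read_csv("delivery_cohort.csv")  # hypothetical data file

model = smf.glm(
    "detectable ~ no_cart + cd4_per50 + age + year",
    data=df,
    family=sm.families.Poisson(),  # log link by default -> coefficients are log-RRs
)
fit = model.fit(cov_type="HC1")    # robust (sandwich) standard errors

# Exponentiate coefficients and confidence limits to obtain adjusted RRs.
summary = pd.concat(
    [np.exp(fit.params).rename("RR"),
     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(summary)
```

Using a log link rather than a logit means the exponentiated coefficients are risk ratios, matching the RRs reported in the paper; the sensitivity analysis described above would amount to dropping the nine untreated (all non-Italian) rows and refitting with nationality added to the formula.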
null
null
null
null
[ "Background", "Characterisation of the overall population by geographical origin and management of pregnant women and newborns", "Factors associated with detectable HIV RNA at delivery (subgroup of mothers with available HIV RNA close to term)", "Additional definitions and statistical notes", "Results", "Characteristics of the population by geographical origin", "Factors associated with detectable HIV RNA at delivery", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "UNAIDS estimated that 31.3 million patients lived with HIV infection worldwide in 2008; among them, 15.7 million were women [1]. Before the introduction of combination antiretroviral therapy (cART), about 25% of infants born to HIV-infected women became HIV infected, but this risk was reduced by cART [1]. As a consequence, many women desire to have children despite HIV infection, while others become pregnant unaware that they are infected with HIV. For instance, more than 6,000 women living with HIV become pregnant in the United States every year [2].\nDetectable HIV RNA at delivery is the strongest predictor of mother-to-child HIV transmission [3]. Katz et al. [4] examined the association between cART, demographic factors and detectable HIV-RNA at delivery. Alarmingly, 32% of the women displayed detectable HIV RNA at delivery, which was significantly associated with lower CD4+ counts and higher HIV RNA during pregnancy, but not type of cART [4]. A recent study of the European Collaborative Cohort highlighted a group of pregnant women with detectable HIV RNA at delivery notwithstanding cART, suggesting the importance of strict monitoring and support for HIV-infected pregnant women [5].\nIn Italy, characteristics of HIV-infected pregnant women and possible determinants of positive HIV RNA at delivery have been poorly studied [6]. In particular, maternal characteristics of migrants in comparison to autochthonous populations and the possible impact of migration on the risk of positive HIV RNA at delivery have not yet been investigated. We hypothesized that HIV infection is diagnosed in migrants later than in Italians, with consequences for the risk of detectable HIV RNA at delivery. First, we compared maternal characteristics of Italian and non-Italian women to determine whether there were any differences in terms of timing of presentation and cART initiation; second, we selected the women for whom HIV RNA close to term was available to explore possible risk factors for detectable HIV RNA at delivery, including patient nationality.", "A retrospective study was conducted on all pregnant women attending a large outpatient HIV clinic in Brescia (northern Italy) from January 1999 to July 2008. Pregnant women were initially referred to our clinic by general practitioners, obstetric services and services for sexually transmitted diseases at the time of HIV diagnosis, or were already on follow-up when they were found to be pregnant. On the basis of Italian guidelines for antiretroviral therapy available at the time of the study [7], the women were followed during pregnancy with examinations and clinical examinations every three months, using an integrated approach (HIV disease management and obstetric evaluation). Complete blood tests (including CD4+ T-cell count and HIV RNA) were performed, and the women were screened for latent tuberculosis infection, TORCH (T. gondii, others, Rubella, Cytomegalovirus, HIV) and malaria, if clinically indicated. HIV RNA was determined by branched DNA quantification (Bayer bDNA test, Versant HIV-1 RNA 3.0 assay [bDNA], Bayer Health Care). Antiretroviral treatment was commenced as soon as possible if indicated by the mothers' conditions [7]; otherwise, cART was initiated as soon as the mothers entered the 2nd trimester of pregnancy. Italian guidelines edited in 2005 recommended beginning antiretroviral therapy at CD4+ T-cell count ≤350 cells/mm3; treatment at 350 < CD4+ < 500 cells/mm3 was only suggested if HIV RNA was >100,000 copies/ml [7]. 
HIV-infected women were delivered by caesarean section, and zidovudine was infused as prophylaxis during delivery. Breastfeeding was discouraged.\nThe children were followed at the Paediatric Clinic of the local hospital (Spedali Civili di Brescia) with clinical evaluation and complete blood tests including HIV antibody test (ELISA) and HIV RNA on day 0, week 4, week 8, week 24, week 48 and week 96. Children were considered to be negative for HIV infection if HIV RNA and HIV antibody tests were negative at week 48.\nThis study was approved by the Ethical Committee of the Spedali Civili di Brescia. The patients signed the informed consent for participation to this study.", "For this analysis, we considered only patients with available HIV RNA measured close to term (i.e., ≤30 days before delivery). Log-normal generalized linear regression models were used to assess factors associated with the outcome, defined as detectable HIV RNA (i.e., ≥50 copies/ml) close to term. Variables were selected on the basis of plausibility of their association with the outcome and epidemiological relevance, and to avoid possible biases. For example, CD4+ T-cell counts at delivery were not included, since antiretroviral therapy and time of initiation were likely to influence these values and HIV RNA at the same time. Therefore, the following variables were selected: geographical provenance (Italy versus other countries), age, calendar year at pregnancy, CD4+ T-cell count at pregnancy, and presence/type of cART at the time of HIV RNA testing (no therapy versus two antiretroviral drugs versus at least three drugs in combination, i.e., HAART regimens). Therapy was considered to be present if it was initiated at least 14 days before HIV RNA testing.\nSince the 9 patients who were not prescribed cART at pregnancy were the same who were not on cART close to term, in the multivariable model we used a single dichotomy variable (no cART versus any cART close to term). Moreover, since all these 9 patients were non-Italians, there was an effect modification between country of origin and treatment exposure. For this reason, nationality was excluded from the main multivariable model. Lastly, in order to test whether nationality was correlated with detectable HIV RNA close to term, the 9 patients without treatment were excluded from a multivariable model that included nationality as variable.\nRisk ratios (RR) with corresponding 95% confidence intervals were used to describe the strength of the associations.", "The time of diagnosis of HIV infection was at the first positive HIV confirmatory test (western blotting), and the CD4+ T-cell count and HIV RNA at diagnosis were the first values obtained after HIV diagnosis. The time of pregnancy diagnosis was considered to be the last menstrual period, and CD4+ T-cell count and HIV RNA at pregnancy were the first values obtained after pregnancy diagnosis. Lastly, cART at delivery was the regimen at the last blood test before delivery.\nThe chi square test and score test for trend were used when appropriate. All statistical tests were two sided. P-values <0.05 were considered to be significant. Data were collected in an electronic chart used in our Centre (NetCare, version 1.05.11 sp 26) and double-checked for consistency and completeness with paper charts. 
Statistical analyses were performed using STATA software (Stata Statistical Software release 9.2, 2007; Stata Corporation, College Station, Texas).", "[SUBTITLE] Characteristics of the population by geographical origin [SUBSECTION] One hundred and fifty-four pregnant women were included in the study (table 1). Most of the patients (46.7%) came from Italy or from Africa (42.4%), the remainder from Eastern Europe (6.3%) or from different countries (4.5%). The non-Italian patients had acquired HIV through heterosexual intercourse more frequently (p < 0.0001), and had received HIV diagnosis later (p < 0.0001) and during pregnancy (p < 0.0001), than the Italians. Two Italian women were infected vertically. Moreover, non-Italians had more previous pregnancies (p = 0.078), and presented with lower CD4+ T-cell count (p = 0.003), than Italians. Conversely, Italians were prescribed cART earlier than the non-Italians (p = 0.006).\nCharacteristics of the cohort (overall and sub-study population)\nN: the number of data available for the characteristics considered. IDVU: intravenous drug users; SD: standard deviation; NRTIs: nucleoside/nucleotide reverse transcriptase inhibitors; NNRTIs: non nucleoside/nucleotide reverse transcriptase inhibitors; PIs: protease inhibitors; cART: combination antiretroviral therapy; N.A.: not applicable.\nFigure 1 depicts the flow of patients. Thirteen women had voluntary interruptions of pregnancy, 12 had spontaneous abortions, one had therapeutic abortion, and three were lost to follow-up soon after the first contact with the Centre. Therefore 125 women delivered. None of the children acquired HIV infection in this cohort.\nFlow chart of the patients included in the study.\nOne hundred and fifty-four pregnant women were included in the study (table 1). Most of the patients (46.7%) came from Italy or from Africa (42.4%), the remainder from Eastern Europe (6.3%) or from different countries (4.5%). The non-Italian patients had acquired HIV through heterosexual intercourse more frequently (p < 0.0001), and had received HIV diagnosis later (p < 0.0001) and during pregnancy (p < 0.0001), than the Italians. Two Italian women were infected vertically. Moreover, non-Italians had more previous pregnancies (p = 0.078), and presented with lower CD4+ T-cell count (p = 0.003), than Italians. Conversely, Italians were prescribed cART earlier than the non-Italians (p = 0.006).\nCharacteristics of the cohort (overall and sub-study population)\nN: the number of data available for the characteristics considered. IDVU: intravenous drug users; SD: standard deviation; NRTIs: nucleoside/nucleotide reverse transcriptase inhibitors; NNRTIs: non nucleoside/nucleotide reverse transcriptase inhibitors; PIs: protease inhibitors; cART: combination antiretroviral therapy; N.A.: not applicable.\nFigure 1 depicts the flow of patients. Thirteen women had voluntary interruptions of pregnancy, 12 had spontaneous abortions, one had therapeutic abortion, and three were lost to follow-up soon after the first contact with the Centre. Therefore 125 women delivered. None of the children acquired HIV infection in this cohort.\nFlow chart of the patients included in the study.\n[SUBTITLE] Factors associated with detectable HIV RNA at delivery [SUBSECTION] Among the 125 women who delivered, 93 had HIV RNA determination close to term (i.e., ≤ 30 days before delivery). Characteristics of these 93 women are illustrated in table 1. 
Moreover, CD4+ T-cell counts at delivery were slightly lower among non-Italians (mean: 464.7/mm3, SD 228.6/mm3) than Italians (mean: 582.1/mm3, SD 237.7/mm3) in the subset of patients with available HIV RNA determination close to term (p = 0.018). Significant differences were found between the 93 women with HIV RNA determination close to term and the remaining 61 women who lacked HIV RNA for CD4+ T-cell count at first contact (mean: 439.1 cell/mm3,SD 253.7/mm3 versus 535.7 cell/mm3, SD 298.2/mm3; p = 0.032) and initiation of cART before pregnancy (80.3% versus 54.8%, p = 0.014). No significant differences were found for the other variables between the two groups.\nFactors considered for association with detectable HIV RNA at delivery and linear regression analysis are shown in table 2. Among the 93 women with available HIV RNA at delivery, 32/39 (82.0%) of the Italians versus 36/54 (66.7%) of the non-Italians had undetectable HIV RNA at delivery. Therefore, by univariate linear regression analysis, non-Italians had RR = 1.86 for detectable HIV RNA at delivery with respect to the Italians (p = 0.099). Age and calendar year were not statistically associated with the outcome.\nUnivariate and multivariable regression analyses\nAnalyses were on patients with plasma viral load measured at the time of delivery (within 30 days before delivery). cART: combination antiretroviral therapy; Ref.: reference category.\nAmong nine patients who were not on cART for ≥14 days at the time of HIV RNA testing, only one (11%) had undetectable HIV RNA. By contrast, among 78 patients who were on ≥3 antiretroviral drugs, 61 (78%) had undetectable HIV RNA and this percentage was not statistically different from the percentage of patients with undetectable HIV RNA among those who were prescribed two antiretroviral drugs (6/6 = 100%). Therefore, by univariate analysis, the RR of detectable HIV RNA was 4.4 (p < 0.0001) for lack of cART versus any cART at the time of HIV RNA testing. Similar results were obtained for presence/timing of initiation of cART, resulting in an identical RR of detectable HIV RNA for lack of cART versus any cART initiated before or during pregnancy.\nLastly, increasing CD4+ T-cell count at pregnancy appeared to reduce the risk of detectable HIV RNA at delivery by univariate analysis (RR = 0.91 per 50 cells/mm3; p = 0.037). All continuous variables included in the regression model were also studied as categorical variables to confirm the linear relationship; in particular, upper classes of CD4+ T-cell counts (200-349; 350-499; >500/mm3) were associated with a lower risk of detectable HIV RNA at delivery with respect to CD4+ <200/mm3 (data not shown).\nMultivariable analysis confirmed that the risk of detectable HIV-RNA at delivery was increased by lack of cART, while higher CD4+ T-cell count at the beginning of pregnancy appeared to be protective; by contrast, age and calendar year did not appear to be associated with HIV-RNA at delivery (see Table 2). After excluding from the analysis the 9 patients who were not receiving any cART, we found that nationality was not independently associated with the risk of detectable HIV RNA close to term (RR = 0.91; p = 0.8). Also, in this model, results for the other variables did not change significantly. In particular, the protective role of CD4+ T-cell count at pregnancy was confirmed (risk ratio = 0.87 per 50 cells/mm3; p = 0.049).\nAmong the 125 women who delivered, 93 had HIV RNA determination close to term (i.e., ≤ 30 days before delivery). 
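As a purely illustrative check (not taken from the paper), the unadjusted risk ratio of 1.86 for non-Italian nationality can be reproduced by hand from the counts reported above, and a per-50-cells RR such as 0.91 compounds multiplicatively over larger CD4+ differences. The snippet below assumes only the published counts; the confidence interval is a textbook Wald approximation and may differ slightly from the regression-based estimates in the paper.

```python
# Illustrative check only -- not taken from the paper. It reproduces the
# unadjusted risk ratio for non-Italian nationality from the reported counts
# and shows how a per-50-cells RR compounds over a larger CD4+ difference.
import math
from scipy.stats import chi2_contingency

# Detectable HIV RNA close to term, from the counts in the text:
#   non-Italians: 54 women, 36 undetectable -> 18 detectable
#   Italians:     39 women, 32 undetectable ->  7 detectable
a, n1 = 54 - 36, 54   # non-Italian (exposed): events, total
b, n0 = 39 - 32, 39   # Italian (reference):   events, total

rr = (a / n1) / (b / n0)   # ~1.86, as reported

# Textbook Wald 95% CI on the log scale; the regression-based interval in
# the paper may differ slightly.
se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
lo, hi = (math.exp(math.log(rr) + s * 1.96 * se) for s in (-1, 1))
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")

# Pearson chi-square on the same 2x2 table (no continuity correction) as a
# rough cross-check of the reported trend; the paper's p-value comes from
# the regression model, not from this test.
chi2, p, dof, _ = chi2_contingency([[a, n1 - a], [b, n0 - b]], correction=False)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")

# A per-unit RR is multiplicative: RR 0.91 per 50 cells/mm3 implies roughly
# 0.91 ** 5 (about 0.62) for a 250 cells/mm3 higher CD4+ count at pregnancy.
print(f"RR over 250 cells/mm3: {0.91 ** 5:.2f}")
```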
Characteristics of these 93 women are illustrated in table 1. Moreover, CD4+ T-cell counts at delivery were slightly lower among non-Italians (mean: 464.7/mm3, SD 228.6/mm3) than Italians (mean: 582.1/mm3, SD 237.7/mm3) in the subset of patients with available HIV RNA determination close to term (p = 0.018). Significant differences were found between the 93 women with HIV RNA determination close to term and the remaining 61 women who lacked HIV RNA for CD4+ T-cell count at first contact (mean: 439.1 cell/mm3,SD 253.7/mm3 versus 535.7 cell/mm3, SD 298.2/mm3; p = 0.032) and initiation of cART before pregnancy (80.3% versus 54.8%, p = 0.014). No significant differences were found for the other variables between the two groups.\nFactors considered for association with detectable HIV RNA at delivery and linear regression analysis are shown in table 2. Among the 93 women with available HIV RNA at delivery, 32/39 (82.0%) of the Italians versus 36/54 (66.7%) of the non-Italians had undetectable HIV RNA at delivery. Therefore, by univariate linear regression analysis, non-Italians had RR = 1.86 for detectable HIV RNA at delivery with respect to the Italians (p = 0.099). Age and calendar year were not statistically associated with the outcome.\nUnivariate and multivariable regression analyses\nAnalyses were on patients with plasma viral load measured at the time of delivery (within 30 days before delivery). cART: combination antiretroviral therapy; Ref.: reference category.\nAmong nine patients who were not on cART for ≥14 days at the time of HIV RNA testing, only one (11%) had undetectable HIV RNA. By contrast, among 78 patients who were on ≥3 antiretroviral drugs, 61 (78%) had undetectable HIV RNA and this percentage was not statistically different from the percentage of patients with undetectable HIV RNA among those who were prescribed two antiretroviral drugs (6/6 = 100%). Therefore, by univariate analysis, the RR of detectable HIV RNA was 4.4 (p < 0.0001) for lack of cART versus any cART at the time of HIV RNA testing. Similar results were obtained for presence/timing of initiation of cART, resulting in an identical RR of detectable HIV RNA for lack of cART versus any cART initiated before or during pregnancy.\nLastly, increasing CD4+ T-cell count at pregnancy appeared to reduce the risk of detectable HIV RNA at delivery by univariate analysis (RR = 0.91 per 50 cells/mm3; p = 0.037). All continuous variables included in the regression model were also studied as categorical variables to confirm the linear relationship; in particular, upper classes of CD4+ T-cell counts (200-349; 350-499; >500/mm3) were associated with a lower risk of detectable HIV RNA at delivery with respect to CD4+ <200/mm3 (data not shown).\nMultivariable analysis confirmed that the risk of detectable HIV-RNA at delivery was increased by lack of cART, while higher CD4+ T-cell count at the beginning of pregnancy appeared to be protective; by contrast, age and calendar year did not appear to be associated with HIV-RNA at delivery (see Table 2). After excluding from the analysis the 9 patients who were not receiving any cART, we found that nationality was not independently associated with the risk of detectable HIV RNA close to term (RR = 0.91; p = 0.8). Also, in this model, results for the other variables did not change significantly. 
In particular, the protective role of CD4+ T-cell count at pregnancy was confirmed (risk ratio = 0.87 per 50 cells/mm3; p = 0.049).", "One hundred and fifty-four pregnant women were included in the study (table 1). Most of the patients (46.7%) came from Italy or from Africa (42.4%), the remainder from Eastern Europe (6.3%) or from different countries (4.5%). The non-Italian patients had acquired HIV through heterosexual intercourse more frequently (p < 0.0001), and had received HIV diagnosis later (p < 0.0001) and during pregnancy (p < 0.0001), than the Italians. Two Italian women were infected vertically. Moreover, non-Italians had more previous pregnancies (p = 0.078), and presented with lower CD4+ T-cell count (p = 0.003), than Italians. Conversely, Italians were prescribed cART earlier than the non-Italians (p = 0.006).\nCharacteristics of the cohort (overall and sub-study population)\nN: the number of data available for the characteristics considered. IDVU: intravenous drug users; SD: standard deviation; NRTIs: nucleoside/nucleotide reverse transcriptase inhibitors; NNRTIs: non nucleoside/nucleotide reverse transcriptase inhibitors; PIs: protease inhibitors; cART: combination antiretroviral therapy; N.A.: not applicable.\nFigure 1 depicts the flow of patients. Thirteen women had voluntary interruptions of pregnancy, 12 had spontaneous abortions, one had therapeutic abortion, and three were lost to follow-up soon after the first contact with the Centre. Therefore 125 women delivered. None of the children acquired HIV infection in this cohort.\nFlow chart of the patients included in the study.", "Among the 125 women who delivered, 93 had HIV RNA determination close to term (i.e., ≤ 30 days before delivery). Characteristics of these 93 women are illustrated in table 1. Moreover, CD4+ T-cell counts at delivery were slightly lower among non-Italians (mean: 464.7/mm3, SD 228.6/mm3) than Italians (mean: 582.1/mm3, SD 237.7/mm3) in the subset of patients with available HIV RNA determination close to term (p = 0.018). Significant differences were found between the 93 women with HIV RNA determination close to term and the remaining 61 women who lacked HIV RNA for CD4+ T-cell count at first contact (mean: 439.1 cell/mm3,SD 253.7/mm3 versus 535.7 cell/mm3, SD 298.2/mm3; p = 0.032) and initiation of cART before pregnancy (80.3% versus 54.8%, p = 0.014). No significant differences were found for the other variables between the two groups.\nFactors considered for association with detectable HIV RNA at delivery and linear regression analysis are shown in table 2. Among the 93 women with available HIV RNA at delivery, 32/39 (82.0%) of the Italians versus 36/54 (66.7%) of the non-Italians had undetectable HIV RNA at delivery. Therefore, by univariate linear regression analysis, non-Italians had RR = 1.86 for detectable HIV RNA at delivery with respect to the Italians (p = 0.099). Age and calendar year were not statistically associated with the outcome.\nUnivariate and multivariable regression analyses\nAnalyses were on patients with plasma viral load measured at the time of delivery (within 30 days before delivery). cART: combination antiretroviral therapy; Ref.: reference category.\nAmong nine patients who were not on cART for ≥14 days at the time of HIV RNA testing, only one (11%) had undetectable HIV RNA. 
By contrast, among 78 patients who were on ≥3 antiretroviral drugs, 61 (78%) had undetectable HIV RNA and this percentage was not statistically different from the percentage of patients with undetectable HIV RNA among those who were prescribed two antiretroviral drugs (6/6 = 100%). Therefore, by univariate analysis, the RR of detectable HIV RNA was 4.4 (p < 0.0001) for lack of cART versus any cART at the time of HIV RNA testing. Similar results were obtained for presence/timing of initiation of cART, resulting in an identical RR of detectable HIV RNA for lack of cART versus any cART initiated before or during pregnancy.\nLastly, increasing CD4+ T-cell count at pregnancy appeared to reduce the risk of detectable HIV RNA at delivery by univariate analysis (RR = 0.91 per 50 cells/mm3; p = 0.037). All continuous variables included in the regression model were also studied as categorical variables to confirm the linear relationship; in particular, upper classes of CD4+ T-cell counts (200-349; 350-499; >500/mm3) were associated with a lower risk of detectable HIV RNA at delivery with respect to CD4+ <200/mm3 (data not shown).\nMultivariable analysis confirmed that the risk of detectable HIV-RNA at delivery was increased by lack of cART, while higher CD4+ T-cell count at the beginning of pregnancy appeared to be protective; by contrast, age and calendar year did not appear to be associated with HIV-RNA at delivery (see Table 2). After excluding from the analysis the 9 patients who were not receiving any cART, we found that nationality was not independently associated with the risk of detectable HIV RNA close to term (RR = 0.91; p = 0.8). Also, in this model, results for the other variables did not change significantly. In particular, the protective role of CD4+ T-cell count at pregnancy was confirmed (risk ratio = 0.87 per 50 cells/mm3; p = 0.049).", "Increasing numbers of HIV-infected women are becoming pregnant or are planning a pregnancy owing to the widespread use of cART and a decrease in HIV-related mortality and morbidity in developed countries [8]. In our HIV cohort we included 154 pregnant women from January 1999 to July 2008. More than half of them were migrants (especially from Africa), reflecting the burden of immigration in Italy in recent years [6,9]. Importantly, more than one quarter of the migrants were diagnosed with HIV at the time of pregnancy, underlining the fact that HIV would have been diagnosed before pregnancy if the screening policy were more extensively implemented, thus allowing cART to be initiated earlier, in line with current recommendations [2]. Indeed, current Italian guidelines suggest treatment irrespective of CD4+ count in case of pregnancy. At the same time, this finding indicates that pregnancy offers a unique opportunity for HIV screening, especially in high-risk populations of migrant women.\nLikewise, CD4+ T-cell counts at the first contact with our clinic were significantly lower among non-Italian women than Italians, reflecting more advanced stages of HIV infection and a later HIV diagnosis in the former. Furthermore, CD4+ T-cell counts at delivery were lower in the subset of non-Italian patients with available HIV RNA at delivery. It has already been demonstrated that lower CD4+ T-cell count at delivery is a risk factor for mother-to-child transmission of HIV [10]. 
Once again, these considerations reinforce the need for earlier diagnosis and treatment of HIV infection, particularly in migrant women.\nIt is important to note that, in about one quarter (31/125) of the women who delivered, HIV RNA was not measured in the 30 days before delivery, making it impossible to identify the risk of transmission to the newborn and to suggest appropriate interventions (e.g., treatment change or intensification, therapeutic drug monitoring to confirm low adherence and/or poor drug bioavailability, modification of drug dosage or counselling to improve adherence). The Italian guidelines for antiretroviral therapy edited in 2005 [7] suggested similar schedules of follow-up for pregnant women and for the other patients (every 2-3 months). The lack of HIV RNA determination close to term was due to the fact that 26/61 (42.6%) women had abortions and 3/61 were lost to follow-up. Furthermore, the lack of HIV RNA determination in the remaining 32/61 (52.5%) women was due to compliance with the previous guidelines, which suggested an infrequent schedule of monitoring, rather than to a lack of patient adherence to the scheduled follow-up. Although none of the mothers transmitted the infection, such infrequent follow-up constituted a potential risk precisely because HIV RNA was not determined, so the risk could not be identified and appropriately addressed. In view of these considerations, the most recent Italian guidelines for antiretroviral treatment [11] have suggested that the follow-up of pregnant women should be stricter, at least in those who have started or modified cART during pregnancy. Moreover, further HIV RNA determinations are now recommended between the 34th and the 36th weeks of gestation [11]. Our findings support these recommendations.\nDetectable HIV RNA values were found in 25/93 (26.9%) of the women in whom HIV RNA was tested close to term. Percentages of detectable HIV RNA at delivery were even greater in previous studies [4,5], highlighting the importance of studies on risk factors for detectable HIV RNA at delivery to optimize the clinical management of HIV in pregnant women.\nIn the subgroup of 93 women with available HIV RNA, a statistically significant association was found between the risk of detectable HIV RNA at delivery and a lower CD4+ T-cell count at pregnancy onset. This finding is not new, since it was previously demonstrated by Katz et al. [4], and it indicates that initiation of cART when the immune system is less compromised could help reduce HIV RNA to undetectable levels at delivery. Interestingly, the type of cART (dual or triple therapy) did not appear to influence the risk of detectable HIV RNA at delivery, while any cART had a significant protective effect, even when it was initiated during later stages of pregnancy. This finding is reassuring because it suggests that cART has a protective effect even when initiated late in pregnancy, but it applies only to cases in which earlier treatment is not feasible owing to late diagnosis of HIV infection. Indeed, our results must be interpreted with caution and must not be used to conclude that cART can be initiated later and with suboptimal regimens. In fact, dual therapy is not recommended [7]. Moreover, our sample size was extremely small for some categories (only six patients were treated with dual cART and only five started cART in the third trimester). 
Clearly, a larger number of patients might have altered the results.\nOur study was the first to investigate the potential effect of patient nationality on the risk of detectable HIV RNA at delivery. There was only a trend toward a statistically significant effect of non-Italian nationality by univariate analysis, and this disappeared on multivariable analysis. Once again, the small number of patients could have influenced the results, but it is possible that behavioural factors associated with nationality (i.e., late HIV diagnosis and late initiation of cART) or other factors not captured in this analysis (e.g., low patient adherence or unmeasured characteristics of the virus such as HIV drug resistance or viral subtype [12]) were more important than nationality \"per se\". Therefore our results suggest that early diagnosis and treatment, though prioritized in the migrants, should be extended to the overall population. A screening campaign in the general population and adherence to the recommendation to test pregnant women for HIV are mandatory.\nThis study has several limitations that need to be recognised. First, it was a retrospective analysis, so several items of information were not recorded. In particular, because they were not available for the earliest calendar years, HIV drug resistance testing, viral subtypes or therapeutic drug monitoring results were not considered. Second, as previously discussed, the sample size was small. Third, HIV RNA was available at delivery for only 93 women. Therefore, there should be further investigation of maternal characteristics and pregnancy outcomes in migrant and autochthonous women using prospective cohorts with completed data and larger sample size.", "Overall our results reinforce the recommendation to start HAART as soon as possible in pregnant women. Clearly, this is only achievable by testing for HIV as soon as possible. Migrant patients appeared to be a vulnerable population because they were more frequently unaware of their HIV status at the time of pregnancy than Italians. Since there is no nationwide legislation that imposes HIV-testing in pregnant women in Italy, our results have important public health implications. Moreover, they indicate the need for a strict virological and clinical follow-up of pregnant women to detect amendable causes of positive HIV RNA at delivery to minimize the risk of HIV transmission.", "CT and GC have received unrestricted educational grants (as speakers or for participation to conferences) from Abbott, Gilead, Merck, GSK, BMS, Schering Plough, Roche. The remaining authors declare no competing interests.", "CT, GC and II conceived of the study, participated in its design and helped to draft the manuscript. MAF, SC and EQ participated in design of the study and acquisition of data. MM participated in design of study and performed the statistical analysis. All authors read and approved the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/124/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Characterisation of the overall population by geographical origin and management of pregnant women and newborns", "Factors associated with detectable HIV RNA at delivery (subgroup of mothers with available HIV RNA close to term)", "Additional definitions and statistical notes", "Results", "Characteristics of the population by geographical origin", "Factors associated with detectable HIV RNA at delivery", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "UNAIDS estimated that 31.3 million patients lived with HIV infection worldwide in 2008; among them, 15.7 million were women [1]. Before the introduction of combination antiretroviral therapy (cART), about 25% of infants born to HIV-infected women became HIV infected, but this risk was reduced by cART [1]. As a consequence, many women desire to have children despite HIV infection, while others become pregnant unaware that they are infected with HIV. For instance, more than 6,000 women living with HIV become pregnant in the United States every year [2].\nDetectable HIV RNA at delivery is the strongest predictor of mother-to-child HIV transmission [3]. Katz et al. [4] examined the association between cART, demographic factors and detectable HIV-RNA at delivery. Alarmingly, 32% of the women displayed detectable HIV RNA at delivery, which was significantly associated with lower CD4+ counts and higher HIV RNA during pregnancy, but not type of cART [4]. A recent study of the European Collaborative Cohort highlighted a group of pregnant women with detectable HIV RNA at delivery notwithstanding cART, suggesting the importance of strict monitoring and support for HIV-infected pregnant women [5].\nIn Italy, characteristics of HIV-infected pregnant women and possible determinants of positive HIV RNA at delivery have been poorly studied [6]. In particular, maternal characteristics of migrants in comparison to autochthonous populations and the possible impact of migration on the risk of positive HIV RNA at delivery have not yet been investigated. We hypothesized that HIV infection is diagnosed in migrants later than in Italians, with consequences for the risk of detectable HIV RNA at delivery. First, we compared maternal characteristics of Italian and non-Italian women to determine whether there were any differences in terms of timing of presentation and cART initiation; second, we selected the women for whom HIV RNA close to term was available to explore possible risk factors for detectable HIV RNA at delivery, including patient nationality.", "[SUBTITLE] Characterisation of the overall population by geographical origin and management of pregnant women and newborns [SUBSECTION] A retrospective study was conducted on all pregnant women attending a large outpatient HIV clinic in Brescia (northern Italy) from January 1999 to July 2008. Pregnant women were initially referred to our clinic by general practitioners, obstetric services and services for sexually transmitted diseases at the time of HIV diagnosis, or were already on follow-up when they were found to be pregnant. On the basis of Italian guidelines for antiretroviral therapy available at the time of the study [7], the women were followed during pregnancy with examinations and clinical examinations every three months, using an integrated approach (HIV disease management and obstetric evaluation). Complete blood tests (including CD4+ T-cell count and HIV RNA) were performed, and the women were screened for latent tuberculosis infection, TORCH (T. gondii, others, Rubella, Cytomegalovirus, HIV) and malaria, if clinically indicated. HIV RNA was determined by branched DNA quantification (Bayer bDNA test, Versant HIV-1 RNA 3.0 assay [bDNA], Bayer Health Care). Antiretroviral treatment was commenced as soon as possible if indicated by the mothers' conditions [7]; otherwise, cART was initiated as soon as the mothers entered the 2nd trimester of pregnancy. 
Italian guidelines edited in 2005 recommended beginning antiretroviral therapy at CD4+ T-cell count ≤350 cells/mm3; treatment at 350 < CD4+ < 500 cells/mm3 was only suggested if HIV RNA was >100,000 copies/ml [7]. HIV-infected women were delivered by caesarean section, and zidovudine was infused as prophylaxis during delivery. Breastfeeding was discouraged.\nThe children were followed at the Paediatric Clinic of the local hospital (Spedali Civili di Brescia) with clinical evaluation and complete blood tests including HIV antibody test (ELISA) and HIV RNA on day 0, week 4, week 8, week 24, week 48 and week 96. Children were considered to be negative for HIV infection if HIV RNA and HIV antibody tests were negative at week 48.\nThis study was approved by the Ethical Committee of the Spedali Civili di Brescia. The patients signed the informed consent for participation to this study.\nA retrospective study was conducted on all pregnant women attending a large outpatient HIV clinic in Brescia (northern Italy) from January 1999 to July 2008. Pregnant women were initially referred to our clinic by general practitioners, obstetric services and services for sexually transmitted diseases at the time of HIV diagnosis, or were already on follow-up when they were found to be pregnant. On the basis of Italian guidelines for antiretroviral therapy available at the time of the study [7], the women were followed during pregnancy with examinations and clinical examinations every three months, using an integrated approach (HIV disease management and obstetric evaluation). Complete blood tests (including CD4+ T-cell count and HIV RNA) were performed, and the women were screened for latent tuberculosis infection, TORCH (T. gondii, others, Rubella, Cytomegalovirus, HIV) and malaria, if clinically indicated. HIV RNA was determined by branched DNA quantification (Bayer bDNA test, Versant HIV-1 RNA 3.0 assay [bDNA], Bayer Health Care). Antiretroviral treatment was commenced as soon as possible if indicated by the mothers' conditions [7]; otherwise, cART was initiated as soon as the mothers entered the 2nd trimester of pregnancy. Italian guidelines edited in 2005 recommended beginning antiretroviral therapy at CD4+ T-cell count ≤350 cells/mm3; treatment at 350 < CD4+ < 500 cells/mm3 was only suggested if HIV RNA was >100,000 copies/ml [7]. HIV-infected women were delivered by caesarean section, and zidovudine was infused as prophylaxis during delivery. Breastfeeding was discouraged.\nThe children were followed at the Paediatric Clinic of the local hospital (Spedali Civili di Brescia) with clinical evaluation and complete blood tests including HIV antibody test (ELISA) and HIV RNA on day 0, week 4, week 8, week 24, week 48 and week 96. Children were considered to be negative for HIV infection if HIV RNA and HIV antibody tests were negative at week 48.\nThis study was approved by the Ethical Committee of the Spedali Civili di Brescia. The patients signed the informed consent for participation to this study.\n[SUBTITLE] Factors associated with detectable HIV RNA at delivery (subgroup of mothers with available HIV RNA close to term) [SUBSECTION] For this analysis, we considered only patients with available HIV RNA measured close to term (i.e., ≤30 days before delivery). Log-normal generalized linear regression models were used to assess factors associated with the outcome, defined as detectable HIV RNA (i.e., ≥50 copies/ml) close to term. 
Variables were selected on the basis of plausibility of their association with the outcome and epidemiological relevance, and to avoid possible biases. For example, CD4+ T-cell counts at delivery were not included, since antiretroviral therapy and time of initiation were likely to influence these values and HIV RNA at the same time. Therefore, the following variables were selected: geographical provenance (Italy versus other countries), age, calendar year at pregnancy, CD4+ T-cell count at pregnancy, and presence/type of cART at the time of HIV RNA testing (no therapy versus two antiretroviral drugs versus at least three drugs in combination, i.e., HAART regimens). Therapy was considered to be present if it was initiated at least 14 days before HIV RNA testing.\nSince the 9 patients who were not prescribed cART at pregnancy were the same who were not on cART close to term, in the multivariable model we used a single dichotomy variable (no cART versus any cART close to term). Moreover, since all these 9 patients were non-Italians, there was an effect modification between country of origin and treatment exposure. For this reason, nationality was excluded from the main multivariable model. Lastly, in order to test whether nationality was correlated with detectable HIV RNA close to term, the 9 patients without treatment were excluded from a multivariable model that included nationality as variable.\nRisk ratios (RR) with corresponding 95% confidence intervals were used to describe the strength of the associations.\nFor this analysis, we considered only patients with available HIV RNA measured close to term (i.e., ≤30 days before delivery). Log-normal generalized linear regression models were used to assess factors associated with the outcome, defined as detectable HIV RNA (i.e., ≥50 copies/ml) close to term. Variables were selected on the basis of plausibility of their association with the outcome and epidemiological relevance, and to avoid possible biases. For example, CD4+ T-cell counts at delivery were not included, since antiretroviral therapy and time of initiation were likely to influence these values and HIV RNA at the same time. Therefore, the following variables were selected: geographical provenance (Italy versus other countries), age, calendar year at pregnancy, CD4+ T-cell count at pregnancy, and presence/type of cART at the time of HIV RNA testing (no therapy versus two antiretroviral drugs versus at least three drugs in combination, i.e., HAART regimens). Therapy was considered to be present if it was initiated at least 14 days before HIV RNA testing.\nSince the 9 patients who were not prescribed cART at pregnancy were the same who were not on cART close to term, in the multivariable model we used a single dichotomy variable (no cART versus any cART close to term). Moreover, since all these 9 patients were non-Italians, there was an effect modification between country of origin and treatment exposure. For this reason, nationality was excluded from the main multivariable model. 
Lastly, in order to test whether nationality was correlated with detectable HIV RNA close to term, the 9 patients without treatment were excluded from a multivariable model that included nationality as variable.\nRisk ratios (RR) with corresponding 95% confidence intervals were used to describe the strength of the associations.\n[SUBTITLE] Additional definitions and statistical notes [SUBSECTION] The time of diagnosis of HIV infection was at the first positive HIV confirmatory test (western blotting), and the CD4+ T-cell count and HIV RNA at diagnosis were the first values obtained after HIV diagnosis. The time of pregnancy diagnosis was considered to be the last menstrual period, and CD4+ T-cell count and HIV RNA at pregnancy were the first values obtained after pregnancy diagnosis. Lastly, cART at delivery was the regimen at the last blood test before delivery.\nThe chi square test and score test for trend were used when appropriate. All statistical tests were two sided. P-values <0.05 were considered to be significant. Data were collected in an electronic chart used in our Centre (NetCare, version 1.05.11 sp 26) and double-checked for consistency and completeness with paper charts. Statistical analyses were performed using STATA software (Stata Statistical Software release 9.2, 2007; Stata Corporation, College Station, Texas).\nThe time of diagnosis of HIV infection was at the first positive HIV confirmatory test (western blotting), and the CD4+ T-cell count and HIV RNA at diagnosis were the first values obtained after HIV diagnosis. The time of pregnancy diagnosis was considered to be the last menstrual period, and CD4+ T-cell count and HIV RNA at pregnancy were the first values obtained after pregnancy diagnosis. Lastly, cART at delivery was the regimen at the last blood test before delivery.\nThe chi square test and score test for trend were used when appropriate. All statistical tests were two sided. P-values <0.05 were considered to be significant. Data were collected in an electronic chart used in our Centre (NetCare, version 1.05.11 sp 26) and double-checked for consistency and completeness with paper charts. Statistical analyses were performed using STATA software (Stata Statistical Software release 9.2, 2007; Stata Corporation, College Station, Texas).", "A retrospective study was conducted on all pregnant women attending a large outpatient HIV clinic in Brescia (northern Italy) from January 1999 to July 2008. Pregnant women were initially referred to our clinic by general practitioners, obstetric services and services for sexually transmitted diseases at the time of HIV diagnosis, or were already on follow-up when they were found to be pregnant. On the basis of Italian guidelines for antiretroviral therapy available at the time of the study [7], the women were followed during pregnancy with examinations and clinical examinations every three months, using an integrated approach (HIV disease management and obstetric evaluation). Complete blood tests (including CD4+ T-cell count and HIV RNA) were performed, and the women were screened for latent tuberculosis infection, TORCH (T. gondii, others, Rubella, Cytomegalovirus, HIV) and malaria, if clinically indicated. HIV RNA was determined by branched DNA quantification (Bayer bDNA test, Versant HIV-1 RNA 3.0 assay [bDNA], Bayer Health Care). 
Antiretroviral treatment was commenced as soon as possible if indicated by the mothers' conditions [7]; otherwise, cART was initiated as soon as the mothers entered the 2nd trimester of pregnancy. Italian guidelines edited in 2005 recommended beginning antiretroviral therapy at CD4+ T-cell count ≤350 cells/mm3; treatment at 350 < CD4+ < 500 cells/mm3 was only suggested if HIV RNA was >100,000 copies/ml [7]. HIV-infected women were delivered by caesarean section, and zidovudine was infused as prophylaxis during delivery. Breastfeeding was discouraged.\nThe children were followed at the Paediatric Clinic of the local hospital (Spedali Civili di Brescia) with clinical evaluation and complete blood tests including HIV antibody test (ELISA) and HIV RNA on day 0, week 4, week 8, week 24, week 48 and week 96. Children were considered to be negative for HIV infection if HIV RNA and HIV antibody tests were negative at week 48.\nThis study was approved by the Ethical Committee of the Spedali Civili di Brescia. The patients signed the informed consent for participation to this study.", "For this analysis, we considered only patients with available HIV RNA measured close to term (i.e., ≤30 days before delivery). Log-normal generalized linear regression models were used to assess factors associated with the outcome, defined as detectable HIV RNA (i.e., ≥50 copies/ml) close to term. Variables were selected on the basis of plausibility of their association with the outcome and epidemiological relevance, and to avoid possible biases. For example, CD4+ T-cell counts at delivery were not included, since antiretroviral therapy and time of initiation were likely to influence these values and HIV RNA at the same time. Therefore, the following variables were selected: geographical provenance (Italy versus other countries), age, calendar year at pregnancy, CD4+ T-cell count at pregnancy, and presence/type of cART at the time of HIV RNA testing (no therapy versus two antiretroviral drugs versus at least three drugs in combination, i.e., HAART regimens). Therapy was considered to be present if it was initiated at least 14 days before HIV RNA testing.\nSince the 9 patients who were not prescribed cART at pregnancy were the same who were not on cART close to term, in the multivariable model we used a single dichotomy variable (no cART versus any cART close to term). Moreover, since all these 9 patients were non-Italians, there was an effect modification between country of origin and treatment exposure. For this reason, nationality was excluded from the main multivariable model. Lastly, in order to test whether nationality was correlated with detectable HIV RNA close to term, the 9 patients without treatment were excluded from a multivariable model that included nationality as variable.\nRisk ratios (RR) with corresponding 95% confidence intervals were used to describe the strength of the associations.", "The time of diagnosis of HIV infection was at the first positive HIV confirmatory test (western blotting), and the CD4+ T-cell count and HIV RNA at diagnosis were the first values obtained after HIV diagnosis. The time of pregnancy diagnosis was considered to be the last menstrual period, and CD4+ T-cell count and HIV RNA at pregnancy were the first values obtained after pregnancy diagnosis. Lastly, cART at delivery was the regimen at the last blood test before delivery.\nThe chi square test and score test for trend were used when appropriate. All statistical tests were two sided. 
P-values <0.05 were considered to be significant. Data were collected in an electronic chart used in our Centre (NetCare, version 1.05.11 sp 26) and double-checked for consistency and completeness with paper charts. Statistical analyses were performed using STATA software (Stata Statistical Software release 9.2, 2007; Stata Corporation, College Station, Texas).", "[SUBTITLE] Characteristics of the population by geographical origin [SUBSECTION] One hundred and fifty-four pregnant women were included in the study (table 1). Most of the patients (46.7%) came from Italy or from Africa (42.4%), the remainder from Eastern Europe (6.3%) or from different countries (4.5%). The non-Italian patients had acquired HIV through heterosexual intercourse more frequently (p < 0.0001), had received their HIV diagnosis later (p < 0.0001), and were more often diagnosed during pregnancy (p < 0.0001) than the Italians. Two Italian women were infected vertically. Moreover, non-Italians had more previous pregnancies (p = 0.078) and presented with a lower CD4+ T-cell count (p = 0.003) than Italians. Conversely, Italians were prescribed cART earlier than the non-Italians (p = 0.006).\nCharacteristics of the cohort (overall and sub-study population)\nN: the number of data available for the characteristics considered. IDVU: intravenous drug users; SD: standard deviation; NRTIs: nucleoside/nucleotide reverse transcriptase inhibitors; NNRTIs: non-nucleoside/nucleotide reverse transcriptase inhibitors; PIs: protease inhibitors; cART: combination antiretroviral therapy; N.A.: not applicable.\nFigure 1 depicts the flow of patients. Thirteen women had voluntary interruptions of pregnancy, 12 had spontaneous abortions, one had therapeutic abortion, and three were lost to follow-up soon after the first contact with the Centre. Therefore 125 women delivered.
None of the children acquired HIV infection in this cohort.\nFlow chart of the patients included in the study.\n[SUBTITLE] Factors associated with detectable HIV RNA at delivery [SUBSECTION] Among the 125 women who delivered, 93 had HIV RNA determination close to term (i.e., ≤ 30 days before delivery). Characteristics of these 93 women are illustrated in table 1. Moreover, CD4+ T-cell counts at delivery were slightly lower among non-Italians (mean: 464.7/mm3, SD 228.6/mm3) than Italians (mean: 582.1/mm3, SD 237.7/mm3) in the subset of patients with available HIV RNA determination close to term (p = 0.018). Significant differences were found between the 93 women with HIV RNA determination close to term and the remaining 61 women who lacked HIV RNA for CD4+ T-cell count at first contact (mean: 439.1 cell/mm3,SD 253.7/mm3 versus 535.7 cell/mm3, SD 298.2/mm3; p = 0.032) and initiation of cART before pregnancy (80.3% versus 54.8%, p = 0.014). No significant differences were found for the other variables between the two groups.\nFactors considered for association with detectable HIV RNA at delivery and linear regression analysis are shown in table 2. Among the 93 women with available HIV RNA at delivery, 32/39 (82.0%) of the Italians versus 36/54 (66.7%) of the non-Italians had undetectable HIV RNA at delivery. Therefore, by univariate linear regression analysis, non-Italians had RR = 1.86 for detectable HIV RNA at delivery with respect to the Italians (p = 0.099). Age and calendar year were not statistically associated with the outcome.\nUnivariate and multivariable regression analyses\nAnalyses were on patients with plasma viral load measured at the time of delivery (within 30 days before delivery). cART: combination antiretroviral therapy; Ref.: reference category.\nAmong nine patients who were not on cART for ≥14 days at the time of HIV RNA testing, only one (11%) had undetectable HIV RNA. By contrast, among 78 patients who were on ≥3 antiretroviral drugs, 61 (78%) had undetectable HIV RNA and this percentage was not statistically different from the percentage of patients with undetectable HIV RNA among those who were prescribed two antiretroviral drugs (6/6 = 100%). Therefore, by univariate analysis, the RR of detectable HIV RNA was 4.4 (p < 0.0001) for lack of cART versus any cART at the time of HIV RNA testing. Similar results were obtained for presence/timing of initiation of cART, resulting in an identical RR of detectable HIV RNA for lack of cART versus any cART initiated before or during pregnancy.\nLastly, increasing CD4+ T-cell count at pregnancy appeared to reduce the risk of detectable HIV RNA at delivery by univariate analysis (RR = 0.91 per 50 cells/mm3; p = 0.037). All continuous variables included in the regression model were also studied as categorical variables to confirm the linear relationship; in particular, upper classes of CD4+ T-cell counts (200-349; 350-499; >500/mm3) were associated with a lower risk of detectable HIV RNA at delivery with respect to CD4+ <200/mm3 (data not shown).\nMultivariable analysis confirmed that the risk of detectable HIV-RNA at delivery was increased by lack of cART, while higher CD4+ T-cell count at the beginning of pregnancy appeared to be protective; by contrast, age and calendar year did not appear to be associated with HIV-RNA at delivery (see Table 2). 
After excluding from the analysis the 9 patients who were not receiving any cART, we found that nationality was not independently associated with the risk of detectable HIV RNA close to term (RR = 0.91; p = 0.8). Also, in this model, results for the other variables did not change significantly. In particular, the protective role of CD4+ T-cell count at pregnancy was confirmed (risk ratio = 0.87 per 50 cells/mm3; p = 0.049).
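As a quick arithmetic check of the univariate nationality comparison reported above (32/39 Italians versus 36/54 non-Italians with undetectable HIV RNA at delivery), the sketch below recomputes the risk ratio for detectable HIV RNA from those counts. The 95% confidence interval is a standard Wald interval computed here for illustration only; it is not reported in the paper.

```python
# Recompute the univariate risk ratio for detectable HIV RNA by nationality
# from the counts reported in the text (undetectable: Italians 32/39, non-Italians 36/54).
import math

n_italian, undet_italian = 39, 32
n_foreign, undet_foreign = 54, 36
det_italian = n_italian - undet_italian            # 7 detectable
det_foreign = n_foreign - undet_foreign            # 18 detectable

risk_italian = det_italian / n_italian             # ~0.18
risk_foreign = det_foreign / n_foreign             # ~0.33
rr = risk_foreign / risk_italian                   # ~1.86, matching the reported RR

# Illustrative Wald 95% CI on the log scale (not reported in the paper).
se_log_rr = math.sqrt(1 / det_foreign - 1 / n_foreign + 1 / det_italian - 1 / n_italian)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # about 1.86 (0.86-4.01)
```

The interval crossing 1 is consistent with the non-significant univariate p-value of 0.099 reported for nationality.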
", "One hundred and fifty-four pregnant women were included in the study (table 1). Most of the patients (46.7%) came from Italy or from Africa (42.4%), the remainder from Eastern Europe (6.3%) or from different countries (4.5%). The non-Italian patients had acquired HIV through heterosexual intercourse more frequently (p < 0.0001), had received their HIV diagnosis later (p < 0.0001), and were more often diagnosed during pregnancy (p < 0.0001) than the Italians. Two Italian women were infected vertically. Moreover, non-Italians had more previous pregnancies (p = 0.078) and presented with a lower CD4+ T-cell count (p = 0.003) than Italians. Conversely, Italians were prescribed cART earlier than the non-Italians (p = 0.006).\nCharacteristics of the cohort (overall and sub-study population)\nN: the number of data available for the characteristics considered. IDVU: intravenous drug users; SD: standard deviation; NRTIs: nucleoside/nucleotide reverse transcriptase inhibitors; NNRTIs: non-nucleoside/nucleotide reverse transcriptase inhibitors; PIs: protease inhibitors; cART: combination antiretroviral therapy; N.A.: not applicable.\nFigure 1 depicts the flow of patients. Thirteen women had voluntary interruptions of pregnancy, 12 had spontaneous abortions, one had therapeutic abortion, and three were lost to follow-up soon after the first contact with the Centre. Therefore 125 women delivered. None of the children acquired HIV infection in this cohort.\nFlow chart of the patients included in the study.", "Among the 125 women who delivered, 93 had HIV RNA determination close to term (i.e., ≤30 days before delivery). Characteristics of these 93 women are illustrated in table 1. Moreover, CD4+ T-cell counts at delivery were slightly lower among non-Italians (mean: 464.7/mm3, SD 228.6/mm3) than Italians (mean: 582.1/mm3, SD 237.7/mm3) in the subset of patients with available HIV RNA determination close to term (p = 0.018). Significant differences were found between the 93 women with HIV RNA determination close to term and the remaining 61 women who lacked HIV RNA for CD4+ T-cell count at first contact (mean: 439.1 cells/mm3, SD 253.7/mm3 versus 535.7 cells/mm3, SD 298.2/mm3; p = 0.032) and initiation of cART before pregnancy (80.3% versus 54.8%, p = 0.014). No significant differences were found for the other variables between the two groups.\nFactors considered for association with detectable HIV RNA at delivery and linear regression analysis are shown in table 2.
Among the 93 women with available HIV RNA at delivery, 32/39 (82.0%) of the Italians versus 36/54 (66.7%) of the non-Italians had undetectable HIV RNA at delivery. Therefore, by univariate linear regression analysis, non-Italians had RR = 1.86 for detectable HIV RNA at delivery with respect to the Italians (p = 0.099). Age and calendar year were not statistically associated with the outcome.\nUnivariate and multivariable regression analyses\nAnalyses were on patients with plasma viral load measured at the time of delivery (within 30 days before delivery). cART: combination antiretroviral therapy; Ref.: reference category.\nAmong nine patients who were not on cART for ≥14 days at the time of HIV RNA testing, only one (11%) had undetectable HIV RNA. By contrast, among 78 patients who were on ≥3 antiretroviral drugs, 61 (78%) had undetectable HIV RNA and this percentage was not statistically different from the percentage of patients with undetectable HIV RNA among those who were prescribed two antiretroviral drugs (6/6 = 100%). Therefore, by univariate analysis, the RR of detectable HIV RNA was 4.4 (p < 0.0001) for lack of cART versus any cART at the time of HIV RNA testing. Similar results were obtained for presence/timing of initiation of cART, resulting in an identical RR of detectable HIV RNA for lack of cART versus any cART initiated before or during pregnancy.\nLastly, increasing CD4+ T-cell count at pregnancy appeared to reduce the risk of detectable HIV RNA at delivery by univariate analysis (RR = 0.91 per 50 cells/mm3; p = 0.037). All continuous variables included in the regression model were also studied as categorical variables to confirm the linear relationship; in particular, upper classes of CD4+ T-cell counts (200-349; 350-499; >500/mm3) were associated with a lower risk of detectable HIV RNA at delivery with respect to CD4+ <200/mm3 (data not shown).\nMultivariable analysis confirmed that the risk of detectable HIV-RNA at delivery was increased by lack of cART, while higher CD4+ T-cell count at the beginning of pregnancy appeared to be protective; by contrast, age and calendar year did not appear to be associated with HIV-RNA at delivery (see Table 2). After excluding from the analysis the 9 patients who were not receiving any cART, we found that nationality was not independently associated with the risk of detectable HIV RNA close to term (RR = 0.91; p = 0.8). Also, in this model, results for the other variables did not change significantly. In particular, the protective role of CD4+ T-cell count at pregnancy was confirmed (risk ratio = 0.87 per 50 cells/mm3; p = 0.049).", "Increasing numbers of HIV-infected women are becoming pregnant or are planning a pregnancy owing to the widespread use of cART and a decrease in HIV-related mortality and morbidity in developed countries [8]. In our HIV cohort we included 154 pregnant women from January 1999 to July 2008. More than half of them were migrants (especially from Africa), reflecting the burden of immigration in Italy in recent years [6,9]. Importantly, more than one quarter of the migrants were diagnosed with HIV at the time of pregnancy, underlining the fact that HIV would have been diagnosed before pregnancy if the screening policy were more extensively implemented, thus allowing cART to be initiated earlier, in line with current recommendations [2]. Indeed, current Italian guidelines suggest treatment irrespective of CD4+ count in case of pregnancy. 
At the same time, this finding indicates that pregnancy offers a unique opportunity for HIV screening, especially in high-risk populations of migrant women.\nLikewise, CD4+ T-cell counts at the first contact with our clinic were significantly lower among non-Italian women than Italians, reflecting more advanced stages of HIV infection and a later HIV diagnosis in the former. Furthermore, CD4+ T-cell counts at delivery were lower in the subset of non-Italian patients with available HIV RNA at delivery. It has already been demonstrated that lower CD4+ T-cell count at delivery is a risk factor for mother-to-child transmission of HIV [10]. Once again, these considerations reinforce the need for earlier diagnosis and treatment of HIV infection, particularly in migrant women.\nIt is important to note that, in about one quarter (31/125) of the women who delivered, HIV RNA was not measured in the 30 days before delivery, making it impossible to identify the risk of transmission to the newborn, suggesting appropriate interventions (e.g., treatment change or intensification, therapeutic drug monitoring to confirm low adherence and/or poor drug bioavailability, modification of drug dosage or counselling to improve adherence). The Italian guidelines for antiretroviral therapy edited in 2005 [7] suggested similar schedules of follow-up for pregnant women and for the other patients (every 2-3 months). The lack of HIV RNA determination close to term was due to the fact that 26/61 (42.6%) women had abortions and 3/61 were lost to follow-up. Furthermore, the lack of HIV RNA determination in the remaining 32/61 (52.5%) women was due to compliance with the previous guidelines, which suggested an infrequent schedule of monitoring rather than lack of patient adherence to the scheduled follow-up. Although none of the mothers transmitted the infection, such infrequent follow-up constituted a potential risk just because HIV RNA was not determined, so the risk was not identified and appropriately addressed. In view these considerations, the most recent Italian guidelines for antiretroviral treatment [11] has suggested that the follow-up of pregnant women should be stricter, at least in those who have started or modified cART during pregnancy. Moreover, further HIV RNA determinations are now recommended between the 34th and the 36th weeks of gestation [11]. Our findings support these recommendations.\nIn 25/93 (26.9%) women in whom HIV RNA was tested close to term, detectable HIV RNA values were found. Percentages of detectable HIV RNA at delivery were even greater in previous studies [4,5], highlighting the importance of studies on risk factors for detectable HIV RNA at delivery to optimize the clinical management of HIV in pregnant women.\nIn the subgroup of 93 women with available HIV RNA, a statistically significant association was found between risk of detectable HIV RNA at delivery and lower CD4+ T-cell count at pregnancy onset. This finding is not new since it was previously demonstrated by Katz et al. [4], indicating that initiation of cART when the immune system is less compromised could help reduce HIV RNA to undetectable levels at delivery. Interestingly, type of cART (dual or triple therapy) did not appear to influence the risk of detectable HIV RNA at delivery, while any cART had a significant protective effect, even though it was initiated during later stages of pregnancy. 
This finding is reassuring because it suggests that cART has a protective effect even though it is initiated late in pregnancy, but only in cases where earlier treatment is not feasible owing to late diagnosis of HIV infection. Indeed, our results must be interpreted with caution and must not be used to conclude that cART can be initiated later and with suboptimal regimens. In fact, dual therapy is not recommended [7]. Moreover, our sample size was extremely small for some categories (only six patients were treated with dual cART and only five started cART in the third trimester). Clearly, a larger number of patients might have altered the results.\nOur study was the first to investigate the potential effect of patient nationality on the risk of detectable HIV RNA at delivery. There was only a trend toward a statistically significant effect of non-Italian nationality by univariate analysis, and this disappeared on multivariable analysis. Once again, the small number of patients could have influenced the results, but it is possible that behavioural factors associated with nationality (i.e., late HIV diagnosis and late initiation of cART) or other factors not captured in this analysis (e.g., low patient adherence or unmeasured characteristics of the virus such as HIV drug resistance or viral subtype [12]) were more important than nationality \"per se\". Therefore our results suggest that early diagnosis and treatment, though prioritized in the migrants, should be extended to the overall population. A screening campaign in the general population and adherence to the recommendation to test pregnant women for HIV are mandatory.\nThis study has several limitations that need to be recognised. First, it was a retrospective analysis, so several items of information were not recorded. In particular, because they were not available for the earliest calendar years, HIV drug resistance testing, viral subtypes or therapeutic drug monitoring results were not considered. Second, as previously discussed, the sample size was small. Third, HIV RNA was available at delivery for only 93 women. Therefore, there should be further investigation of maternal characteristics and pregnancy outcomes in migrant and autochthonous women using prospective cohorts with completed data and larger sample size.", "Overall our results reinforce the recommendation to start HAART as soon as possible in pregnant women. Clearly, this is only achievable by testing for HIV as soon as possible. Migrant patients appeared to be a vulnerable population because they were more frequently unaware of their HIV status at the time of pregnancy than Italians. Since there is no nationwide legislation that imposes HIV-testing in pregnant women in Italy, our results have important public health implications. Moreover, they indicate the need for a strict virological and clinical follow-up of pregnant women to detect amendable causes of positive HIV RNA at delivery to minimize the risk of HIV transmission.", "CT and GC have received unrestricted educational grants (as speakers or for participation to conferences) from Abbott, Gilead, Merck, GSK, BMS, Schering Plough, Roche. The remaining authors declare no competing interests.", "CT, GC and II conceived of the study, participated in its design and helped to draft the manuscript. MAF, SC and EQ participated in design of the study and acquisition of data. MM participated in design of study and performed the statistical analysis. 
All authors read and approved the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/124/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null ]
[]
Poor mental health and sexual risk behaviours in Uganda: a cross-sectional population-based study.
21338500
Poor mental health predicts sexual risk behaviours in high-income countries, but little is known about this association in low-income settings in sub-Saharan Africa where HIV is prevalent. This study investigated whether depression, psychological distress and alcohol use are associated with sexual risk behaviours in young Ugandan adults.
BACKGROUND
Household sampling was performed in two Ugandan districts, with 646 men and women aged 18-30 years recruited. Hopkins Symptoms Checklist-25 was used to assess the presence of depression and psychological distress. Alcohol use was assessed using a question about self-reported heavy-episodic drinking. Information on sexual risk behaviour was obtained concerning number of lifetime sexual partners, ongoing concurrent sexual relationships and condom use.
METHOD
Depression was associated with a greater number of lifetime partners and with having concurrent partners among women. Psychological distress was associated with a greater number of lifetime partners in both men and women and was marginally associated (p = 0.05) with having concurrent partners among women. Psychological distress was associated with inconsistent condom use among men. Alcohol use was associated with a greater number of lifetime partners and with having concurrent partners in both men and women, with particularly strong associations for both outcome measures found among women.
RESULTS
Poor mental health is associated with sexual risk behaviours in a low-income sub-Saharan African setting. HIV preventive interventions should consider including mental health and alcohol use reduction components into their intervention packages, in settings where depression, psychological distress and alcohol use are common.
CONCLUSION
[ "Adolescent", "Adult", "Checklist", "Cross-Sectional Studies", "Female", "Humans", "Male", "Mental Disorders", "Poverty", "Sexuality", "Uganda", "Unsafe Sex", "Young Adult" ]
3056745
null
null
Method
[SUBTITLE] Participants [SUBSECTION] The study took place in Kampala (mainly Baganda ethnic group) and Mbarara district (mainly Banyankole ethnic group).
We performed a cross-sectional population-based study in nine purposively selected study areas representing varying degrees of urbanicity in Uganda: 3 divisions in Kampala city (urban), 3 divisions in Mbarara town (semi-urban), and 3 sub-counties in Mbarara district (rural). Persons aged 18-30 years, residing in these areas and not reporting or displaying overt signs of severe mental or physical illness, and not under the influence of alcohol or drugs at the time of contact, were eligible. This age group was selected because it was deemed to represent the segment of the adult population being most sexually active. In each study area, seven villages (in rural areas) or neighbourhoods (in semi-urban/urban areas) were purposively selected, in order to reflect the range of levels of economic development of villages/neighbourhoods in the study area. From a central location in each selected village/neighbourhood, a walk was performed in a random direction, towards the end of the village/neighbourhood. The random direction was obtained spinning a pen and observing in what direction the pen pointed after the spinning had stopped. Every third household encountered during the walk was visited, until ten interviews had been conducted, thus yielding an approximate sample size of 9 × 7 × 10 = 630 participants in total. All eligible persons at home were interviewed. If no eligible person was at home the next household was visited, and then, every third household. Follow-up visits were not performed. In four of the 21 selected neighbourhoods in Kampala, household sampling proved not to be feasible using our random walk procedure, given a low density of accessible households and a predominance of shops and businesses in the neighbourhood. In these neighbourhoods a pragmatic approach was adopted: Instead of households, shops and businesses were sampled using the same systematic sampling strategy as that used for households, with participants systematically recruited inside the selected establishments. Among the persons who were approached and invited for participation, refusals were rare (estimated at <5%). Data was collected between September 2004 and June 2005 by seven Ugandan field workers (university students). Initial training focussed on interviewing techniques and ethical considerations specific to interviews about mental problems and sexual behaviours. Interviewers and respondents were not sex-matched. Continuous supervision was provided throughout the field-work by two of the authors (G.R. and S.A.). Study languages were English, Luganda and Runyankole, depending on the proficiency of the respondent. Informed consent was sought and the risks, benefits and right to decline or withdraw were carefully explained. Although potential participants were sometimes approached in the vicinity of other persons, after informed consent had been obtained, the research assistant and the participant withdrew to a more secluded location where privacy was ensured. All study procedures were approved by the Institutional Ethical Review Committee of Mbarara University. [SUBTITLE] Measures [SUBSECTION] Socio-demographic background factors were: age (18-24 yrs vs. 25-30 yrs), level of education (up to primary school vs. more than primary school), relationship status (single/widowed/separated vs. married/cohabiting), and place of residence (urban vs. semi-urban/rural with urban referring to Kampala district). Depression was assessed using the depression sub-section of the Hopkins Symptoms Checklist (HSCL-25)[28]. 
This instrument was developed for cross-cultural use and consists of 25 questions assessing symptoms of anxiety (10 items) and depression (15 items) during the past week, with the response to each item graded from 1 = "not at all" to 4 = "extremely". The depression sub-section of the questionnaire (15 items) has been validated among the Baganda as a screening tool for probable depression, with a cut-off point established based on comparison with structured clinical interviews[26,29]. Thus, we categorized participants having a depression sub-scale total score of 31 or above as having probable depression[26], in order to be able to assess the association of clinical depression with sexual risk behaviours. The Chronbach's alpha for the depression sub-scale was 0.84, i.e. similar to that previously reported in Uganda[26]. Psychological distress was assessed by calculating a total score for the entire HSCL-25 instrument including both depression and anxiety items[28] as has been done previously[30,31]. Depression and psychological distress only partly overlap. Thus, many persons in the general population have psychological distress but do not fulfil diagnostic criteria for clinical depression. Nevertheless, such persons might potentially have increased sexual risk behaviours. Therefore, we used the HSCL-25 as a continuous measure of psychological distress and investigated whether this measure was associated with sexual risk behaviours. The continuous psychological distress variable was categorised into gender-specific quartiles instead of using e.g. a dichotomous measure, in order to capture sub-clinical psychological distress, and given that a non-linear relationship between psychological distress and sexual risk behaviour was deemed possible[13]. Gender-specific quartiles were used in order to maximise statistical power for within-gender analyses of the association between psychological distress and sexual behaviours. The 1st quartile (i.e. least psychological distress) was used as the reference category to which the other HSCL-25 quartiles were compared. The Chronbach's alpha for the entire instrument was 0.92, indicating excellent internal consistency, and suggesting that all HSCL-25 items could indeed be used to measure a common underlying construct. Alcohol use referred to heavy episodic drinking, a behaviour associated with sexual risk taking[21,32,33], and was operationalised as the number of times 'drunk on alcohol' per week (0 vs. 1 or more). Self-reports of heavy episodic drinking have previously been used in sub-Saharan African contexts[34] and may provide a viable alternative in settings where estimating the number of 'standard drinks' is difficult: In rural Uganda, a significant proportion of the alcohol consumed is locally produced, has variable alcohol content and is consumed from plastic bags, cups or a common pot. Sexual behaviours were assessed using three measures: (1) 'Number of lifetime sexual partners' Sexual partner was defined as a person with whom one has ever had sexual intercourse. The number of lifetime sexual partners was categorized as 0-3 vs. 4 or above, approximately corresponding to the 75th percentile in the current sample. (2) 'Number of current sexual partners' Current sexual partner was defined as a partner with whom one currently has sexual intercourse regularly. The number of current sexual partners was categorized as 0-1 vs. 2 or more. 
This measure thus targeted ongoing regular sexual relationships as has been done previously[35], in order to estimate the point prevalence of concurrency[36]. Two or more current sexual partners are hereafter referred to as concurrent sexual partners. (3) 'Frequency of condom use when having sex' This question assessed the conditional likelihood that the person uses a condom in case he or she has sexual intercourse, and therefore used a relative measure of condom use (always, sometimes, never) and did not have a specific reference time period. Conceptualizing condom use as a habit, and using relative instead of count measures may increase sensitivity when investigating the psychological correlates of unprotected sex[37]. Condom use was categorised as always (consistent) vs. sometimes/never (inconsistent) condom use. The study instrument was translated from English into Runyankole and Luganda and independently back-translated. The two versions were compared and necessary adjustments made (P.L., G.R., S.A. and translators). Care was taken in order not to introduce culturally alien or offensive notions or expressions, while ensuring that the intended construct was indeed being measured. The questions about sexual behaviours were asked at the end of the interview for increased acceptability. After back-translation and modification, the questionnaire was pre-tested on persons not participating in the study, with results indicating that the questions were acceptable and comprehensible. [SUBTITLE] Analyses [SUBSECTION] All analyses were a priori stratified by gender. Missing values were excluded from analyses and all percentages are percentages of valid answers.
For analyses pertaining to condom use, participants who reported never having had sex were excluded. Sexual risk behaviours were treated as outcome measures. Depression, psychological distress, and alcohol use were examined as possible predictors. However, depression, psychological distress, and alcohol use may all be inter-related. For instance, alcohol use may be both a consequence and a cause of psychological distress and depression. Thus, each of these measures was examined separately in relationship to sexual risk behaviour. Socio-demographic background factors were included in all models in order to adjust for confounding. Moreover, in explorative analyses we investigated whether the inter-relatedness between depression, psychological distress and alcohol use influenced their respective associations with the outcome. Thus, we examined whether the associations of depression and psychological distress with sexual risk behaviours were independent of alcohol use, and conversely, whether the association of alcohol use with sexual risk behaviours was independent of depression and psychological distress, respectively. The results of these explorative analyses are presented in the text, but not in the tables. We used a modified cluster sampling method, with the clusters being villages/neighbourhoods, as contrasted to a genuinely random sampling method. Thus, given that some statistical efficiency may be lost in case responses within clusters correlate, we investigated whether adjustment for such intra-cluster correlation substantially influenced the results obtained, using the Stata svy procedure. However, confidence intervals remained virtually unchanged after adjustment and the original confidence intervals are presented in this manuscript. Significance level was set at p < 0.05.
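Taken together, the Measures and Analyses descriptions above define a simple scoring pipeline: items scored 1-4, a probable-depression flag when the 15-item depression subscale total is 31 or above, and gender-specific quartiles of the 25-item total as the psychological distress exposure. The sketch below illustrates these rules; the file name, column names and the assignment of items to subscales are hypothetical placeholders (the actual item assignment follows the HSCL-25 instrument and is not reproduced here).

```python
# Hedged sketch of the HSCL-25 scoring rules described above.
# Assumptions: responses are stored as columns "hscl_1"..."hscl_25" scored 1-4,
# and the split into 10 anxiety and 15 depression items below is a placeholder.
import pandas as pd

df = pd.read_csv("uganda_survey.csv")                    # hypothetical data file
anx_items = [f"hscl_{i}" for i in range(1, 11)]          # placeholder: 10 anxiety items
dep_items = [f"hscl_{i}" for i in range(11, 26)]         # placeholder: 15 depression items

# Probable depression: depression subscale total >= 31 (validated Ugandan cut-off).
df["dep_total"] = df[dep_items].sum(axis=1)
df["probable_depression"] = df["dep_total"] >= 31

# Psychological distress: total of all 25 items, cut into gender-specific quartiles,
# with the lowest quartile (least distress) as the reference category.
df["distress_total"] = df[anx_items + dep_items].sum(axis=1)
df["distress_quartile"] = (
    df.groupby("sex")["distress_total"]
      .transform(lambda s: pd.qcut(s, 4, labels=["Q1", "Q2", "Q3", "Q4"]))
)
```

In practice, pd.qcut may need duplicates="drop" if many respondents share the same total score at a quartile boundary; the quartiles are then defined on the observed distribution within each gender, as described above.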
null
null
null
null
[ "Background", "Participants", "Measures", "Analyses", "Results", "Depression and sexual risk behaviours", "Psychological distress and sexual risk behaviours", "Alcohol use and sexual risk behaviours", "Discussion", "Main findings", "Interpretation", "Methodological considerations", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Is poor mental health associated with sexual risk behaviours in sub-Saharan Africa? This question has not been conclusively answered despite its relevance for HIV prevention[1,2].\nPoor mental health is a major cause of disability in low-income countries, with depression constituting the heaviest disease burden[3]. The HIV epidemic may contribute to increased depression rates in countries having high HIV prevalence: HIV may lead to depression both in persons who live with HIV[4,5] and in those who are indirectly affected[6-8]. An association between depression and sexual risk behaviours is thus particularly relevant in low-income countries with high HIV prevalence.\nIn high-income countries, poor mental health has been closely linked to risky sexual behaviours[9]. For instance, longitudinal studies from the United States suggest that depressive symptoms contribute to sexual risk behaviours (multiple sexual partners, unprotected sex) in the general population, with potential mechanisms including maladaptive coping, low self-efficacy, and self-destructiveness[10-12]. However, the relationship between depressive symptoms and sexual risk behaviours may be non-linear: persons having moderate, but not severe, depressive symptoms may engage in the most sexual risk behaviours[13].\nHowever, conclusions derived from findings in high-income countries may not be applicable to low-income settings. Sexual behaviours vary widely between and within countries, and are determined both by context and individual factors[14]. In many sub-Saharan African low-income countries, contextual factors such as poverty/wealth, mobility, and gender inequality heavily influence sexual behaviours[14-18]. In contrast, in high-income countries, personal choice may have a greater influence.\nNevertheless, to our knowledge only three studies have previously investigated the association between poor mental health and sexual risk behaviours in the general population in sub-Saharan African countries: In South Africa, depressive symptoms predicted transactional sex and intimate partner physical and sexual violence in women, and inconsistent condom use in men [19], while cross-sectional associations with sexual risk behaviours have also been found [19,20]. In Botswana, cross-sectional associations were found between depressive symptoms and having multiple partners among women, and with paying for sex among men [21]. In addition, South Africa and Botswana are both middle-income countries, and to our knowledge, no population-based study has investigated the association between poor mental health and sexual risk behaviours in a low-income sub-Saharan African setting.\nThus, we investigated the association of (1) depression, (2) psychological distress, i.e. depressive and anxiety symptoms, and (3) alcohol use, with sexual risk behaviours in young Ugandans in the general population. Uganda is a low-income country with a generalised HIV epidemic (prevalence in 2005: 6.4%[22]). Moreover, population-based surveys suggest that mental health problems are very common in the country (depression prevalence range: 10-50%)[23-26], and mental health services are sparse with 0.8 psychiatrists per one million population[27].", "The study took place in Kampala (mainly Baganda ethnic group) and Mbarara district (mainly Banyankole ethnic group). 
The Baganda and Banyankole are culturally and linguistically related.\nWe performed a cross-sectional population-based study in nine purposively selected study areas representing varying degrees of urbanicity in Uganda: 3 divisions in Kampala city (urban), 3 divisions in Mbarara town (semi-urban), and 3 sub-counties in Mbarara district (rural). Persons aged 18-30 years, residing in these areas and not reporting or displaying overt signs of severe mental or physical illness, and not under the influence of alcohol or drugs at the time of contact, were eligible. This age group was selected because it was deemed to represent the segment of the adult population being most sexually active.\nIn each study area, seven villages (in rural areas) or neighbourhoods (in semi-urban/urban areas) were purposively selected, in order to reflect the range of levels of economic development of villages/neighbourhoods in the study area. From a central location in each selected village/neighbourhood, a walk was performed in a random direction, towards the end of the village/neighbourhood. The random direction was obtained spinning a pen and observing in what direction the pen pointed after the spinning had stopped. Every third household encountered during the walk was visited, until ten interviews had been conducted, thus yielding an approximate sample size of 9 × 7 × 10 = 630 participants in total. All eligible persons at home were interviewed. If no eligible person was at home the next household was visited, and then, every third household. Follow-up visits were not performed. In four of the 21 selected neighbourhoods in Kampala, household sampling proved not to be feasible using our random walk procedure, given a low density of accessible households and a predominance of shops and businesses in the neighbourhood. In these neighbourhoods a pragmatic approach was adopted: Instead of households, shops and businesses were sampled using the same systematic sampling strategy as that used for households, with participants systematically recruited inside the selected establishments.\nAmong the persons who were approached and invited for participation, refusals were rare (estimated at <5%). Data was collected between September 2004 and June 2005 by seven Ugandan field workers (university students). Initial training focussed on interviewing techniques and ethical considerations specific to interviews about mental problems and sexual behaviours. Interviewers and respondents were not sex-matched. Continuous supervision was provided throughout the field-work by two of the authors (G.R. and S.A.). Study languages were English, Luganda and Runyankole, depending on the proficiency of the respondent. Informed consent was sought and the risks, benefits and right to decline or withdraw were carefully explained. Although potential participants were sometimes approached in the vicinity of other persons, after informed consent had been obtained, the research assistant and the participant withdrew to a more secluded location where privacy was ensured.\nAll study procedures were approved by the Institutional Ethical Review Committee of Mbarara University.", "Socio-demographic background factors were: age (18-24 yrs vs. 25-30 yrs), level of education (up to primary school vs. more than primary school), relationship status (single/widowed/separated vs. married/cohabiting), and place of residence (urban vs. 
semi-urban/rural with urban referring to Kampala district).\nDepression was assessed using the depression sub-section of the Hopkins Symptom Checklist (HSCL-25)[28]. This instrument was developed for cross-cultural use and consists of 25 questions assessing symptoms of anxiety (10 items) and depression (15 items) during the past week, with the response to each item graded from 1 = \"not at all\" to 4 = \"extremely\". The depression sub-section of the questionnaire (15 items) has been validated among the Baganda as a screening tool for probable depression, with a cut-off point established based on comparison with structured clinical interviews[26,29]. Thus, we categorized participants having a depression sub-scale total score of 31 or above as having probable depression[26], in order to be able to assess the association of clinical depression with sexual risk behaviours. Cronbach's alpha for the depression sub-scale was 0.84, i.e. similar to that previously reported in Uganda[26].\nPsychological distress was assessed by calculating a total score for the entire HSCL-25 instrument including both depression and anxiety items[28] as has been done previously[30,31]. Depression and psychological distress only partly overlap. Thus, many persons in the general population have psychological distress but do not fulfil diagnostic criteria for clinical depression. Nevertheless, such persons might potentially have increased sexual risk behaviours. Therefore, we used the HSCL-25 as a continuous measure of psychological distress and investigated whether this measure was associated with sexual risk behaviours. The continuous psychological distress variable was categorised into gender-specific quartiles instead of using e.g. a dichotomous measure, in order to capture sub-clinical psychological distress, and given that a non-linear relationship between psychological distress and sexual risk behaviour was deemed possible[13]. Gender-specific quartiles were used in order to maximise statistical power for within-gender analyses of the association between psychological distress and sexual behaviours. The 1st quartile (i.e. least psychological distress) was used as the reference category to which the other HSCL-25 quartiles were compared. Cronbach's alpha for the entire instrument was 0.92, indicating excellent internal consistency, and suggesting that all HSCL-25 items could indeed be used to measure a common underlying construct.\nAlcohol use referred to heavy episodic drinking, a behaviour associated with sexual risk taking[21,32,33], and was operationalised as the number of times 'drunk on alcohol' per week (0 vs. 1 or more). Self-reports of heavy episodic drinking have previously been used in sub-Saharan African contexts[34] and may provide a viable alternative in settings where estimating the number of 'standard drinks' is difficult: In rural Uganda, a significant proportion of the alcohol consumed is locally produced, has variable alcohol content and is consumed from plastic bags, cups or a common pot.\nSexual behaviours were assessed using three measures:\n(1) 'Number of lifetime sexual partners'\nSexual partner was defined as a person with whom one has ever had sexual intercourse. The number of lifetime sexual partners was categorized as 0-3 vs. 4 or above, approximately corresponding to the 75th percentile in the current sample.\n(2) 'Number of current sexual partners'\nCurrent sexual partner was defined as a partner with whom one currently has sexual intercourse regularly. 
The number of current sexual partners was categorized as 0-1 vs. 2 or more. This measure thus targeted ongoing regular sexual relationships as has been done previously[35], in order to estimate the point prevalence of concurrency[36]. Two or more current sexual partners are hereafter referred to as concurrent sexual partners.\n(3) 'Frequency of condom use when having sex'\nThis question assessed the conditional likelihood that the person uses a condom in case he or she has sexual intercourse, and therefore used a relative measure of condom use (always, sometimes, never) and did not have a specific reference time period. Conceptualizing condom use as a habit, and using relative instead of count measures may increase sensitivity when investigating the psychological correlates of unprotected sex[37]. Condom use was categorised as always (consistent) vs. sometimes/never (inconsistent) condom use.\nThe study instrument was translated from English into Runyankole and Luganda and independently back-translated. The two versions were compared and necessary adjustments made (P.L., G.R., S.A. and translators). Care was taken in order not to introduce culturally alien or offensive notions or expressions, while ensuring that the intended construct was indeed being measured. The questions about sexual behaviours were asked at the end of the interview for increased acceptability. After back-translation and modification, the questionnaire was pre-tested on persons not participating in the study, with results indicating that the questions were acceptable and comprehensible.", "All analyses were a priori stratified by gender. Missing values were excluded from analyses and all percentages are percentages of valid answers. For analyses pertaining to condom use, participants who reported never having had sex were excluded.\nSexual risk behaviours were treated as outcome measures. Depression, psychological distress, and alcohol use were examined as possible predictors. However, depression, psychological distress, and alcohol use may all be inter-related. For instance, alcohol use may be both a consequence and a cause of psychological distress and depression. Thus, each of these measures was examined separately in relationship to sexual risk behaviour. Socio-demographic background factors were included into all models in order to adjust for confounding.\nMoreover, in explorative analyses we investigated whether the inter-relatedness between depression, psychological distress and alcohol use influenced their respective associations with the outcome. Thus, we examined whether the associations of depression and psychological distress with sexual risk behaviours were independent of alcohol use, and conversely, whether the association of alcohol use with sexual risk behaviours was independent of depression and psychological distress, respectively. The results of these explorative analyses are presented in the text, but not in the tables.\nWe used a modified cluster sampling method, with the clusters being villages/neighbourhoods, as contrasted to a genuinely random sampling method. Thus, given that some statistical efficiency may be lost in case responses within clusters correlate, we investigated whether adjustment for such intra-cluster correlation substantially influenced the results obtained, using the Stata svy procedure. However, confidence intervals remained virtually unchanged after adjustment and the original confidence intervals are presented in this manuscript. 
Significance level was set at p < 0.05.", "88% of the men and 82% of the women reported ever having had sex. 37% (n = 123) of the men and 23% (n = 70) of the women reported four or more lifetime sexual partners. 15% (n = 49) of the men and 4.5% (n = 14) of the women reported having concurrent sexual partners. 73% (n = 204) of the men and 77% (n = 193) of the women who had ever had sex reported inconsistent condom use.\nThe prevalence of probable depression was 12.0% among men and 17.9% among women. Heavy episodic drinking at least once per week was reported by 35.4% (n = 118) of the men and 13.3% (n = 41) of the women. The distribution of socio-demographic, mental health and sexual risk behaviour variables are presented in Table 1.\nSample characteristics\n1 Values represent HSCL-25 quartiles cut-off scores.\n2 Alcohol use refers to self-reported heavy episodic drinking.\n3 Condom use was assessed only among those participants who had ever had sex.\n[SUBTITLE] Depression and sexual risk behaviours [SUBSECTION] Depression was independently associated with a greater number of lifetime sexual partners among women, but not among men, after simultaneous adjustment for all socio-demographic background variables, see Table 2. Similarly, depression was independently associated with having concurrent sexual partners among women, but not among men. Associations between depression and inconsistent condom use did not reach statistical significance in either men or women.\nAssociation of depression with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Adjustment for age, education, marital status and place of residence.\n2 Analyses pertain only to participants who had ever had sex.\nWhen the above results were adjusted for alcohol use in explorative analyses, the association between depression and concurrent sexual partners in women was somewhat attenuated (OR 3.03; 95% CI 0.90-10.24), indicating that alcohol use mediated or confounded some part of that association. However, associations of depression with number of lifetime sexual partners and with condom use, respectively, were independent of alcohol use in both men and women.\n[SUBTITLE] Psychological distress and sexual risk behaviours [SUBSECTION] Psychological distress was independently associated with a greater number of lifetime sexual partners in both men and women, after simultaneous adjustment for socio-demographic background variables, see Table 3. The association between psychological distress and having concurrent sexual partners did not reach statistical significance in either men or women, although in women the association approached significance (p = 0.05). Psychological distress was associated with inconsistent condom use in men, while in women results did not reach statistical significance.\nAssociation of psychological distress with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Numbers represent HSCL-25 quartiles.\n2 Adjustment for age, education, marital status and place of residence.\n3 Analyses pertain only to participants who had ever had sex.\nExplorative adjustment of the above associations for alcohol use suggested that the associations between psychological distress and sexual risk behaviours were virtually independent of alcohol use in both men and women.\n[SUBTITLE] Alcohol use and sexual risk behaviours [SUBSECTION] Alcohol use was independently associated with a greater number of lifetime sexual partners in both men and women, after simultaneous adjustment for socio-demographic variables, see Table 4. Moreover, alcohol use was independently associated with having concurrent sexual partners in both men and women. Associations between alcohol use and inconsistent condom use did not reach statistical significance in either men or women.\nAssociation of alcohol use with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Adjustment for age, education, marital status and place of residence.\n2 Analyses pertain only to participants who had ever had sex.\nWhen the above associations were adjusted for depression and psychological distress, respectively, the associations between alcohol use and sexual risk behaviours remained virtually unchanged in both men and women.", 
"[SUBTITLE] Main findings [SUBSECTION] Depression, psychological distress and alcohol use were all associated with having a greater number of lifetime sexual partners and with having concurrent sexual partners, with stronger associations found among women. Moreover, although less consistent, associations were found between mental health indicators and inconsistent condom use, with stronger associations found among men.\nAll the above associations were adjusted for confounding by socio-demographic variables. Explorative analyses suggested that the associations of depression and psychological distress with sexual risk behaviours were virtually independent of alcohol use. Conversely, the association between alcohol use and sexual risk behaviours was virtually independent of depression and psychological distress.\nThe prevalence of depression was in line with earlier Ugandan studies[23,24] but lower than the extremely high prevalence recently reported in war-torn Northern Uganda[25,26]. Heavy episodic drinking at least once per week was common, consistent with reports suggesting that Uganda has one of the world's highest alcohol per capita consumptions[38]. The mean number of lifetime partners among persons who had ever had sex was broadly in the same range as that reported in a national survey conducted at the time of this study[22]. The point prevalence of concurrency was relatively low, although consistent with previous Ugandan studies on concurrency[39,40]. Inconsistent condom use was reported by approximately 75% of those who had ever had sex, in agreement with reports of low rates of condom use at last sexual intercourse from the period when this study was performed[22].\n[SUBTITLE] Interpretation [SUBSECTION] Depression was associated with past and current multiple partners in women. A number of interpretations for these associations are possible. Firstly, depression and multiple partners may be indirectly linked through a common cause. For instance, poverty with food insecurity may cause poor mental health, and may also force women into transactional sexual relationships in order to obtain food[41]. Similarly, having an abusive partner may lead to depression, while also motivating women to look for extramarital or alternative partners. Secondly, depression might contribute to having multiple partners. Thus, fatalism and hopelessness may lead to disregard of the threat of HIV and thus hypothetically contribute to multiple sexual partnerships[42]. Moreover, depressed women may be more vulnerable to being coerced into sex, potentially leading to an increased number of sexual contacts[20]. Thirdly, having multiple partners might lead to depression. Thus, having multiple partners may cause HIV related worries[43]. Moreover, STD and HIV infection increase the risk of subsequent depression[4,10]. In addition, women with multiple partners in Uganda may have feelings of guilt and may fear being labelled as promiscuous[44]. More research is needed in order to clarify the meaning of the association between depression and multiple partners in the Ugandan context.\nSimilarly, the associations between psychological distress and past and current partners were stronger in women. Studies in high-income countries suggest that women having multiple partners may have particularly poor mental health[11,12]. Psychological distress and depression partly overlap, and factors contributing to multiple partners in depressed women could also be operating with regard to psychological distress. In addition, psychological distress may lead to casual sex as a coping strategy, and may also decrease self-efficacy for changing risky habits[45]. Lack of sense of the future may lead to non-planning and impulsivity[46]. In addition, psychological distress may cause destructiveness[29], potentially contributing to sexual risk behaviours.\nIn contrast, the associations of depression and psychological distress with inconsistent condom use were stronger among men. Men's mental state may have a greater influence than that of women on the condom-use decision, given that men generally decide over condom use in Uganda[47].\nWhile associations between alcohol use and sexual risk behaviours have previously been found in Uganda[48], the mechanisms explaining the association between alcohol use and sexual risk behaviours in this setting, or elsewhere, are not fully understood: Clearly, alcohol consumption causes behavioural disinhibition potentially leading to risky decisions. However, alcohol use may also be a marker for a psychological trait, or state, conducive to risky sexual behaviours[49,50], e.g. depression or psychological distress. In the context of this uncertainty, the current findings, although cross-sectional, do suggest that alcohol use and depression/psychological distress mainly influence sexual risk behaviours through mutually independent causal pathways.\nIn summary, the interrelationship between poor mental health and sexual risk behaviours in the Ugandan setting is likely complex, and may well be bi-directional.\n[SUBTITLE] Methodological considerations [SUBSECTION] The current study was cross-sectional and its aim was to assess association. Thus, the interpretations presented regarding the possible nature and direction of the associations should be viewed as hypotheses for further testing.\nOur measure of concurrency was crude, although standard definitions of concurrency are still under development[51]. The measure involved some portion of subjectivity given that what is perceived as a current regular sexual partner depends on expectations of future encounters. However, 'current' regular relationships have been assessed also in previous studies[35,39,40]. Moreover, the measure was not subject to recall bias since participants were not required to reconstruct past partnerships. In addition, recall and concentration is affected by depressive symptoms[52] and an assessment of dates and frequencies of sexual behaviours in the past would potentially have entailed significant risks of bias.\nAs in much survey research on sensitive topics, responses may be subject to social desirability bias. Having multiple sexual partners is often not socially desirable for women in Uganda, and the numbers of partners and rates of concurrency reported among women should be viewed as minimum estimates. However, the rates found are largely consistent with evidence from previous studies on concurrency from Uganda[39,40].\nThe sampling method used was not random, since villages/neighbourhoods were purposively selected. Thus, the socio-demographic profile of the sample does not necessarily reflect that of the districts where the study was performed. In the current sample, the proportion of persons having completed primary school was higher and the proportion of persons who were married or cohabiting was lower than in the general population in Uganda [53], although it should be noted that only persons aged 18-30 years were included in the study. While the prevalence estimates should thus be interpreted with some caution, the purposive sampling of villages/neighbourhoods representing a broad range of economic development arguably increases the extent to which the associations found may be applicable to external populations from different socio-economic strata.\nFollow-up visits were not performed for households where no one was at home at the time of the interviewer's visit. If persons who were not at home systematically differed from those at home with respect to mental health and sexual risk behaviours, a selection bias could hypothetically have been introduced into the study.\nNo measure of personal or household wealth was included. However, results were adjusted for a proxy measure of socio-economic status (i.e. education) and place of residence.\nThe alcohol measure was crude and did not capture nuances in alcohol consumption, such as the exact number of drinks per heavy episodic drinking occasion. However, when investigating the behavioural impact of heavy episodic drinking, solely assessing the number of standard drinks may have limitations, given that the number of drinks required for altering behaviour is individual[54]. Thus, combining self-reports of heavy episodic drinking with standard drink counts may provide the best option for research on alcohol and sexual risk behaviours, particularly in settings where the 'standard drink' concept is difficult to operationalise.", 
"To our knowledge, this is the first population-based study to demonstrate an association of depression and psychological distress with sexual risk behaviours in a low-income sub-Saharan African setting. Although preliminary, the current findings indicate that the association between poor mental health and sexual risk behaviours may be present across both high and low-income settings, despite radical contextual differences e.g. in terms of economic wealth, gender inequality, level of urbanisation, personal mobility and religious norms, i.e. 
structural factors of relevance for sexual risk behaviours[14,18].\nIndeed, the current findings are consistent with the notion that depression, psychological distress and alcohol use are risk factors for sexual risk behaviours also in sub-Saharan African low-income settings, although longitudinal studies are needed in order to confirm this. Depression, psychological distress and alcohol use are prevalent in many countries with generalised HIV epidemics[6,23-26,55,56]. Thus, assuming causality, these conditions could potentially have considerable impact on population rates of sexual risk behaviours in low-income countries with high HIV prevalence.\nImproving mental health may theoretically decrease sexual risk behaviours. Based on our findings and the evidence from high and middle-income countries[9-12,19-21,45] we support the call for mental health intervention trials to include sexual risk behaviour and biological variables as outcome measures[2], particularly in low-income settings with generalised HIV epidemics. Moreover, qualitative studies should further explore subjective experiences of how poor mental health and sexual risk behaviours are inter-connected in low-income settings[57].\nIrrespective of the direction of causality, the mere co-existence of poor mental health and sexual risk behaviours has implications for HIV prevention: Those with the greatest HIV prevention needs may not always be sufficiently psychologically fit to benefit from intervention messages. For instance, learning to be assertive when communicating about sex with one's partner may be difficult for a woman who is depressed and anxious. HIV intervention trials should assess to what extent participants' mental health at baseline influence intervention outcomes. Moreover, HIV preventive programmes may need to consider including mental health and alcohol use reduction components into their intervention packages, in settings and groups where depression, psychological distress and alcohol use are common.", "The authors declare that they have no competing interests.", "PL designed and coordinated the study, performed analyses and wrote the manuscript. GR and SA led the data collection. AT and PA contributed to data analysis and results interpretation. PO contributed to study design, data analysis and results interpretation. EC conceived of the study and contributed to all stages of the work. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/125/prepub\n" ]
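The Measures section above describes how the HSCL-25 was scored: a depression sub-scale total of 31 or above defined probable depression, the full 25-item total served as a continuous psychological distress measure split into gender-specific quartiles, and internal consistency was summarised with Cronbach's alpha (0.84 and 0.92). The sketch below is only an illustration of that scoring logic, not the authors' code; the column names (dep_1..dep_15, anx_1..anx_10, sex) are hypothetical placeholders for whatever the study dataset actually used.

```python
# Illustrative sketch of the HSCL-25 scoring described in the Measures section.
# Assumes items are coded 1 ("not at all") to 4 ("extremely"); column names are hypothetical.
import pandas as pd

DEP_ITEMS = [f"dep_{i}" for i in range(1, 16)]   # 15 depression items
ANX_ITEMS = [f"anx_{i}" for i in range(1, 11)]   # 10 anxiety items
ALL_ITEMS = DEP_ITEMS + ANX_ITEMS                # full HSCL-25

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def score_hscl(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Probable depression: depression sub-scale total of 31 or above (validated cut-off).
    out["dep_total"] = out[DEP_ITEMS].sum(axis=1)
    out["probable_depression"] = (out["dep_total"] >= 31).astype(int)
    # Psychological distress: total over all 25 items, then gender-specific quartiles
    # (quartile 1, least distress, is the reference category; ties may need handling in real data).
    out["distress_total"] = out[ALL_ITEMS].sum(axis=1)
    out["distress_quartile"] = (
        out.groupby("sex")["distress_total"]
           .transform(lambda s: pd.qcut(s, 4, labels=[1, 2, 3, 4]))
    )
    return out
```

On scored data, the mean of probable_depression within each gender would reproduce the kind of prevalence figures reported in the Results (12.0% in men, 17.9% in women).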
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
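The Analyses section describes gender-stratified logistic regression of each dichotomised sexual risk behaviour on depression (and, analogously, on psychological distress quartiles and alcohol use), adjusted for age group, education, relationship status and place of residence. A minimal Python sketch of that kind of adjusted model is shown below; the variable names are hypothetical and the original analyses were run in Stata, so this only illustrates the modelling approach, not the authors' implementation.

```python
# Illustrative sketch (not the authors' analysis code): adjusted odds ratio for one
# exposure-outcome pair, assuming hypothetical variable names and 0/1-coded binary variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

OUTCOMES = ["many_lifetime_partners",   # 4+ lifetime partners (binary)
            "concurrent_partners",      # 2+ current partners (binary)
            "inconsistent_condom_use"]  # sometimes/never vs. always (binary)

def adjusted_or(df: pd.DataFrame, outcome: str, exposure: str = "probable_depression"):
    """Fit the adjusted logistic model and return the exposure odds ratio with its 95% CI."""
    formula = (f"{outcome} ~ {exposure} + C(age_group) + C(education) "
               "+ C(relationship_status) + C(residence)")
    fit = smf.logit(formula, data=df).fit(disp=False)
    odds_ratio = np.exp(fit.params[exposure])
    lo, hi = np.exp(fit.conf_int().loc[exposure])
    return odds_ratio, (lo, hi)

# Usage sketch: stratify by gender, and restrict condom-use models to those who ever had sex.
# for sex, sub in data.groupby("sex"):
#     print(sex, adjusted_or(sub[sub["ever_had_sex"] == 1], "inconsistent_condom_use"))
```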
[ "Background", "Method", "Participants", "Measures", "Analyses", "Results", "Depression and sexual risk behaviours", "Psychological distress and sexual risk behaviours", "Alcohol use and sexual risk behaviours", "Discussion", "Main findings", "Interpretation", "Methodological considerations", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
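The Analyses section also reports a sensitivity check for intra-cluster correlation arising from the village/neighbourhood cluster sampling, performed with Stata's svy procedure. A rough Python analogue, under the assumption of a hypothetical cluster_id column identifying the sampled village or neighbourhood, is to refit the same model with cluster-robust standard errors and compare the confidence intervals, which in the study were reported to be virtually unchanged.

```python
# Illustrative sketch of a cluster-robustness check (a rough analogue of Stata's svy adjustment,
# not the authors' code). Assumes a hypothetical "cluster_id" column; rows with missing values
# should be dropped beforehand so the group labels stay aligned with the model's rows.
import statsmodels.formula.api as smf

def fit_with_cluster_se(df, formula: str, cluster_col: str = "cluster_id"):
    """Fit the logistic model with naive and with village/neighbourhood-clustered standard errors."""
    model = smf.logit(formula, data=df)
    naive = model.fit(disp=False)
    clustered = model.fit(disp=False,
                          cov_type="cluster",
                          cov_kwds={"groups": df[cluster_col]})
    # Comparing the two sets of confidence intervals shows whether clustering materially
    # changes the inference.
    return naive.conf_int(), clustered.conf_int()
```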
[ "Is poor mental health associated with sexual risk behaviours in sub-Saharan Africa? This question has not been conclusively answered despite its relevance for HIV prevention[1,2].\nPoor mental health is a major cause of disability in low-income countries, with depression constituting the heaviest disease burden[3]. The HIV epidemic may contribute to increased depression rates in countries having high HIV prevalence: HIV may lead to depression both in persons who live with HIV[4,5] and in those who are indirectly affected[6-8]. An association between depression and sexual risk behaviours is thus particularly relevant in low-income countries with high HIV prevalence.\nIn high-income countries, poor mental health has been closely linked to risky sexual behaviours[9]. For instance, longitudinal studies from the United States suggest that depressive symptoms contribute to sexual risk behaviours (multiple sexual partners, unprotected sex) in the general population, with potential mechanisms including maladaptive coping, low self-efficacy, and self-destructiveness[10-12]. However, the relationship between depressive symptoms and sexual risk behaviours may be non-linear: persons having moderate, but not severe, depressive symptoms may engage in the most sexual risk behaviours[13].\nHowever, conclusions derived from findings in high-income countries may not be applicable to low-income settings. Sexual behaviours vary widely between and within countries, and are determined both by context and individual factors[14]. In many sub-Saharan African low-income countries, contextual factors such as poverty/wealth, mobility, and gender inequality heavily influence sexual behaviours[14-18]. In contrast, in high-income countries, personal choice may have a greater influence.\nNevertheless, to our knowledge only three studies have previously investigated the association between poor mental health and sexual risk behaviours in the general population in sub-Saharan African countries: In South Africa, depressive symptoms predicted transactional sex and intimate partner physical and sexual violence in women, and inconsistent condom use in men [19], while cross-sectional associations with sexual risk behaviours have also been found [19,20]. In Botswana, cross-sectional associations were found between depressive symptoms and having multiple partners among women, and with paying for sex among men [21]. In addition, South Africa and Botswana are both middle-income countries, and to our knowledge, no population-based study has investigated the association between poor mental health and sexual risk behaviours in a low-income sub-Saharan African setting.\nThus, we investigated the association of (1) depression, (2) psychological distress, i.e. depressive and anxiety symptoms, and (3) alcohol use, with sexual risk behaviours in young Ugandans in the general population. Uganda is a low-income country with a generalised HIV epidemic (prevalence in 2005: 6.4%[22]). Moreover, population-based surveys suggest that mental health problems are very common in the country (depression prevalence range: 10-50%)[23-26], and mental health services are sparse with 0.8 psychiatrists per one million population[27].", "[SUBTITLE] Participants [SUBSECTION] The study took place in Kampala (mainly Baganda ethnic group) and Mbarara district (mainly Banyankole ethnic group). 
The Baganda and Banyankole are culturally and linguistically related.\nWe performed a cross-sectional population-based study in nine purposively selected study areas representing varying degrees of urbanicity in Uganda: 3 divisions in Kampala city (urban), 3 divisions in Mbarara town (semi-urban), and 3 sub-counties in Mbarara district (rural). Persons aged 18-30 years, residing in these areas and not reporting or displaying overt signs of severe mental or physical illness, and not under the influence of alcohol or drugs at the time of contact, were eligible. This age group was selected because it was deemed to represent the segment of the adult population being most sexually active.\nIn each study area, seven villages (in rural areas) or neighbourhoods (in semi-urban/urban areas) were purposively selected, in order to reflect the range of levels of economic development of villages/neighbourhoods in the study area. From a central location in each selected village/neighbourhood, a walk was performed in a random direction, towards the end of the village/neighbourhood. The random direction was obtained spinning a pen and observing in what direction the pen pointed after the spinning had stopped. Every third household encountered during the walk was visited, until ten interviews had been conducted, thus yielding an approximate sample size of 9 × 7 × 10 = 630 participants in total. All eligible persons at home were interviewed. If no eligible person was at home the next household was visited, and then, every third household. Follow-up visits were not performed. In four of the 21 selected neighbourhoods in Kampala, household sampling proved not to be feasible using our random walk procedure, given a low density of accessible households and a predominance of shops and businesses in the neighbourhood. In these neighbourhoods a pragmatic approach was adopted: Instead of households, shops and businesses were sampled using the same systematic sampling strategy as that used for households, with participants systematically recruited inside the selected establishments.\nAmong the persons who were approached and invited for participation, refusals were rare (estimated at <5%). Data was collected between September 2004 and June 2005 by seven Ugandan field workers (university students). Initial training focussed on interviewing techniques and ethical considerations specific to interviews about mental problems and sexual behaviours. Interviewers and respondents were not sex-matched. Continuous supervision was provided throughout the field-work by two of the authors (G.R. and S.A.). Study languages were English, Luganda and Runyankole, depending on the proficiency of the respondent. Informed consent was sought and the risks, benefits and right to decline or withdraw were carefully explained. Although potential participants were sometimes approached in the vicinity of other persons, after informed consent had been obtained, the research assistant and the participant withdrew to a more secluded location where privacy was ensured.\nAll study procedures were approved by the Institutional Ethical Review Committee of Mbarara University.\n[SUBTITLE] Measures [SUBSECTION] Socio-demographic background factors were: age (18-24 yrs vs. 25-30 yrs), level of education (up to primary school vs. more than primary school), relationship status (single/widowed/separated vs. married/cohabiting), and place of residence (urban vs. 
semi-urban/rural with urban referring to Kampala district).\nDepression was assessed using the depression sub-section of the Hopkins Symptoms Checklist (HSCL-25)[28]. This instrument was developed for cross-cultural use and consists of 25 questions assessing symptoms of anxiety (10 items) and depression (15 items) during the past week, with the response to each item graded from 1 = \"not at all\" to 4 = \"extremely\". The depression sub-section of the questionnaire (15 items) has been validated among the Baganda as a screening tool for probable depression, with a cut-off point established based on comparison with structured clinical interviews[26,29]. Thus, we categorized participants having a depression sub-scale total score of 31 or above as having probable depression[26], in order to be able to assess the association of clinical depression with sexual risk behaviours. The Chronbach's alpha for the depression sub-scale was 0.84, i.e. similar to that previously reported in Uganda[26].\nPsychological distress was assessed by calculating a total score for the entire HSCL-25 instrument including both depression and anxiety items[28] as has been done previously[30,31]. Depression and psychological distress only partly overlap. Thus, many persons in the general population have psychological distress but do not fulfil diagnostic criteria for clinical depression. Nevertheless, such persons might potentially have increased sexual risk behaviours. Therefore, we used the HSCL-25 as a continuous measure of psychological distress and investigated whether this measure was associated with sexual risk behaviours. The continuous psychological distress variable was categorised into gender-specific quartiles instead of using e.g. a dichotomous measure, in order to capture sub-clinical psychological distress, and given that a non-linear relationship between psychological distress and sexual risk behaviour was deemed possible[13]. Gender-specific quartiles were used in order to maximise statistical power for within-gender analyses of the association between psychological distress and sexual behaviours. The 1st quartile (i.e. least psychological distress) was used as the reference category to which the other HSCL-25 quartiles were compared. The Chronbach's alpha for the entire instrument was 0.92, indicating excellent internal consistency, and suggesting that all HSCL-25 items could indeed be used to measure a common underlying construct.\nAlcohol use referred to heavy episodic drinking, a behaviour associated with sexual risk taking[21,32,33], and was operationalised as the number of times 'drunk on alcohol' per week (0 vs. 1 or more). Self-reports of heavy episodic drinking have previously been used in sub-Saharan African contexts[34] and may provide a viable alternative in settings where estimating the number of 'standard drinks' is difficult: In rural Uganda, a significant proportion of the alcohol consumed is locally produced, has variable alcohol content and is consumed from plastic bags, cups or a common pot.\nSexual behaviours were assessed using three measures:\n(1) 'Number of lifetime sexual partners'\nSexual partner was defined as a person with whom one has ever had sexual intercourse. The number of lifetime sexual partners was categorized as 0-3 vs. 4 or above, approximately corresponding to the 75th percentile in the current sample.\n(2) 'Number of current sexual partners'\nCurrent sexual partner was defined as a partner with whom one currently has sexual intercourse regularly. 
The number of current sexual partners was categorized as 0-1 vs. 2 or more. This measure thus targeted ongoing regular sexual relationships as has been done previously[35], in order to estimate the point prevalence of concurrency[36]. Two or more current sexual partners are hereafter referred to as concurrent sexual partners.\n(3) 'Frequency of condom use when having sex'\nThis question assessed the conditional likelihood that the person uses a condom in case he or she has sexual intercourse, and therefore used a relative measure of condom use (always, sometimes, never) and did not have a specific reference time period. Conceptualizing condom use as a habit, and using relative instead of count measures may increase sensitivity when investigating the psychological correlates of unprotected sex[37]. Condom use was categorised as always (consistent) vs. sometimes/never (inconsistent) condom use.\nThe study instrument was translated from English into Runyankole and Luganda and independently back-translated. The two versions were compared and necessary adjustments made (P.L., G.R., S.A. and translators). Care was taken in order not to introduce culturally alien or offensive notions or expressions, while ensuring that the intended construct was indeed being measured. The questions about sexual behaviours were asked at the end of the interview for increased acceptability. After back-translation and modification, the questionnaire was pre-tested on persons not participating in the study, with results indicating that the questions were acceptable and comprehensible.\nSocio-demographic background factors were: age (18-24 yrs vs. 25-30 yrs), level of education (up to primary school vs. more than primary school), relationship status (single/widowed/separated vs. married/cohabiting), and place of residence (urban vs. semi-urban/rural with urban referring to Kampala district).\nDepression was assessed using the depression sub-section of the Hopkins Symptoms Checklist (HSCL-25)[28]. This instrument was developed for cross-cultural use and consists of 25 questions assessing symptoms of anxiety (10 items) and depression (15 items) during the past week, with the response to each item graded from 1 = \"not at all\" to 4 = \"extremely\". The depression sub-section of the questionnaire (15 items) has been validated among the Baganda as a screening tool for probable depression, with a cut-off point established based on comparison with structured clinical interviews[26,29]. Thus, we categorized participants having a depression sub-scale total score of 31 or above as having probable depression[26], in order to be able to assess the association of clinical depression with sexual risk behaviours. The Chronbach's alpha for the depression sub-scale was 0.84, i.e. similar to that previously reported in Uganda[26].\nPsychological distress was assessed by calculating a total score for the entire HSCL-25 instrument including both depression and anxiety items[28] as has been done previously[30,31]. Depression and psychological distress only partly overlap. Thus, many persons in the general population have psychological distress but do not fulfil diagnostic criteria for clinical depression. Nevertheless, such persons might potentially have increased sexual risk behaviours. Therefore, we used the HSCL-25 as a continuous measure of psychological distress and investigated whether this measure was associated with sexual risk behaviours. 
The continuous psychological distress variable was categorised into gender-specific quartiles instead of using e.g. a dichotomous measure, in order to capture sub-clinical psychological distress, and given that a non-linear relationship between psychological distress and sexual risk behaviour was deemed possible[13]. Gender-specific quartiles were used in order to maximise statistical power for within-gender analyses of the association between psychological distress and sexual behaviours. The 1st quartile (i.e. least psychological distress) was used as the reference category to which the other HSCL-25 quartiles were compared. The Chronbach's alpha for the entire instrument was 0.92, indicating excellent internal consistency, and suggesting that all HSCL-25 items could indeed be used to measure a common underlying construct.\nAlcohol use referred to heavy episodic drinking, a behaviour associated with sexual risk taking[21,32,33], and was operationalised as the number of times 'drunk on alcohol' per week (0 vs. 1 or more). Self-reports of heavy episodic drinking have previously been used in sub-Saharan African contexts[34] and may provide a viable alternative in settings where estimating the number of 'standard drinks' is difficult: In rural Uganda, a significant proportion of the alcohol consumed is locally produced, has variable alcohol content and is consumed from plastic bags, cups or a common pot.\nSexual behaviours were assessed using three measures:\n(1) 'Number of lifetime sexual partners'\nSexual partner was defined as a person with whom one has ever had sexual intercourse. The number of lifetime sexual partners was categorized as 0-3 vs. 4 or above, approximately corresponding to the 75th percentile in the current sample.\n(2) 'Number of current sexual partners'\nCurrent sexual partner was defined as a partner with whom one currently has sexual intercourse regularly. The number of current sexual partners was categorized as 0-1 vs. 2 or more. This measure thus targeted ongoing regular sexual relationships as has been done previously[35], in order to estimate the point prevalence of concurrency[36]. Two or more current sexual partners are hereafter referred to as concurrent sexual partners.\n(3) 'Frequency of condom use when having sex'\nThis question assessed the conditional likelihood that the person uses a condom in case he or she has sexual intercourse, and therefore used a relative measure of condom use (always, sometimes, never) and did not have a specific reference time period. Conceptualizing condom use as a habit, and using relative instead of count measures may increase sensitivity when investigating the psychological correlates of unprotected sex[37]. Condom use was categorised as always (consistent) vs. sometimes/never (inconsistent) condom use.\nThe study instrument was translated from English into Runyankole and Luganda and independently back-translated. The two versions were compared and necessary adjustments made (P.L., G.R., S.A. and translators). Care was taken in order not to introduce culturally alien or offensive notions or expressions, while ensuring that the intended construct was indeed being measured. The questions about sexual behaviours were asked at the end of the interview for increased acceptability. 
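To make the measure definitions above concrete, the following sketch shows one way the derived variables could be computed from an item-level dataset. It is an illustration only: the column names (sex, dep1-dep15, anx1-anx10, drunk_per_week, lifetime_partners, current_partners, condom_use) are hypothetical and not taken from the study; the cut-offs and groupings follow the definitions given in the text, and a Cronbach's alpha helper is included because the reported reliabilities (0.84 and 0.92) are coefficients of this kind.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def derive_measures(df: pd.DataFrame) -> pd.DataFrame:
    dep_items = [f"dep{i}" for i in range(1, 16)]   # 15 depression items, each scored 1-4
    anx_items = [f"anx{i}" for i in range(1, 11)]   # 10 anxiety items, each scored 1-4
    out = df.copy()

    # Probable depression: depression sub-scale total of 31 or above
    out["dep_score"] = out[dep_items].sum(axis=1)
    out["probable_depression"] = (out["dep_score"] >= 31).astype(int)

    # Psychological distress: total score over all 25 items, cut into
    # gender-specific quartiles (quartile 1 = least distress, the reference)
    out["distress_score"] = out[dep_items + anx_items].sum(axis=1)
    out["distress_quartile"] = (
        out.groupby("sex")["distress_score"]
           .transform(lambda s: pd.qcut(s, 4, labels=[1, 2, 3, 4]))
    )

    # Heavy episodic drinking: drunk on alcohol at least once per week
    out["alcohol_weekly"] = (out["drunk_per_week"] >= 1).astype(int)

    # Sexual behaviour outcomes
    out["many_lifetime_partners"] = (out["lifetime_partners"] >= 4).astype(int)   # 0-3 vs 4 or more
    out["concurrent_partners"] = (out["current_partners"] >= 2).astype(int)       # 0-1 vs 2 or more
    out["inconsistent_condom_use"] = (out["condom_use"] != "always").astype(int)  # always vs sometimes/never
    return out

How item-level missingness was handled (prorating versus exclusion of incomplete scales) is not stated in the text; the sketch assumes complete item responses.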
After back-translation and modification, the questionnaire was pre-tested on persons not participating in the study, with results indicating that the questions were acceptable and comprehensible.\n[SUBTITLE] Analyses [SUBSECTION] All analyses were a priori stratified by gender. Missing values were excluded from analyses and all percentages are percentages of valid answers. For analyses pertaining to condom use, participants who reported never having had sex were excluded.\nSexual risk behaviours were treated as outcome measures. Depression, psychological distress, and alcohol use were examined as possible predictors. However, depression, psychological distress, and alcohol use may all be inter-related. For instance, alcohol use may be both a consequence and a cause of psychological distress and depression. Thus, each of these measures was examined separately in relationship to sexual risk behaviour. Socio-demographic background factors were included into all models in order to adjust for confounding.\nMoreover, in explorative analyses we investigated whether the inter-relatedness between depression, psychological distress and alcohol use influenced their respective associations with the outcome. Thus, we examined whether the associations of depression and psychological distress with sexual risk behaviours were independent of alcohol use, and conversely, whether the association of alcohol use with sexual risk behaviours was independent of depression and psychological distress, respectively. The results of these explorative analyses are presented in the text, but not in the tables.\nWe used a modified cluster sampling method, with the clusters being villages/neighbourhoods, as contrasted to a genuinely random sampling method. Thus, given that some statistical efficiency may be lost in case responses within clusters correlate, we investigated whether adjustment for such intra-cluster correlation substantially influenced the results obtained, using the Stata svy procedure. However, confidence intervals remained virtually unchanged after adjustment and the original confidence intervals are presented in this manuscript. Significance level was set at p < 0.05.\nAll analyses were a priori stratified by gender. Missing values were excluded from analyses and all percentages are percentages of valid answers. For analyses pertaining to condom use, participants who reported never having had sex were excluded.\nSexual risk behaviours were treated as outcome measures. Depression, psychological distress, and alcohol use were examined as possible predictors. However, depression, psychological distress, and alcohol use may all be inter-related. For instance, alcohol use may be both a consequence and a cause of psychological distress and depression. Thus, each of these measures was examined separately in relationship to sexual risk behaviour. Socio-demographic background factors were included into all models in order to adjust for confounding.\nMoreover, in explorative analyses we investigated whether the inter-relatedness between depression, psychological distress and alcohol use influenced their respective associations with the outcome. Thus, we examined whether the associations of depression and psychological distress with sexual risk behaviours were independent of alcohol use, and conversely, whether the association of alcohol use with sexual risk behaviours was independent of depression and psychological distress, respectively. 
The results of these explorative analyses are presented in the text, but not in the tables.\nWe used a modified cluster sampling method, with the clusters being villages/neighbourhoods, as contrasted to a genuinely random sampling method. Thus, given that some statistical efficiency may be lost in case responses within clusters correlate, we investigated whether adjustment for such intra-cluster correlation substantially influenced the results obtained, using the Stata svy procedure. However, confidence intervals remained virtually unchanged after adjustment and the original confidence intervals are presented in this manuscript. Significance level was set at p < 0.05.", "The study took place in Kampala (mainly Baganda ethnic group) and Mbarara district (mainly Banyankole ethnic group). The Baganda and Banyankole are culturally and linguistically related.\nWe performed a cross-sectional population-based study in nine purposively selected study areas representing varying degrees of urbanicity in Uganda: 3 divisions in Kampala city (urban), 3 divisions in Mbarara town (semi-urban), and 3 sub-counties in Mbarara district (rural). Persons aged 18-30 years, residing in these areas and not reporting or displaying overt signs of severe mental or physical illness, and not under the influence of alcohol or drugs at the time of contact, were eligible. This age group was selected because it was deemed to represent the segment of the adult population being most sexually active.\nIn each study area, seven villages (in rural areas) or neighbourhoods (in semi-urban/urban areas) were purposively selected, in order to reflect the range of levels of economic development of villages/neighbourhoods in the study area. From a central location in each selected village/neighbourhood, a walk was performed in a random direction, towards the end of the village/neighbourhood. The random direction was obtained spinning a pen and observing in what direction the pen pointed after the spinning had stopped. Every third household encountered during the walk was visited, until ten interviews had been conducted, thus yielding an approximate sample size of 9 × 7 × 10 = 630 participants in total. All eligible persons at home were interviewed. If no eligible person was at home the next household was visited, and then, every third household. Follow-up visits were not performed. In four of the 21 selected neighbourhoods in Kampala, household sampling proved not to be feasible using our random walk procedure, given a low density of accessible households and a predominance of shops and businesses in the neighbourhood. In these neighbourhoods a pragmatic approach was adopted: Instead of households, shops and businesses were sampled using the same systematic sampling strategy as that used for households, with participants systematically recruited inside the selected establishments.\nAmong the persons who were approached and invited for participation, refusals were rare (estimated at <5%). Data was collected between September 2004 and June 2005 by seven Ugandan field workers (university students). Initial training focussed on interviewing techniques and ethical considerations specific to interviews about mental problems and sexual behaviours. Interviewers and respondents were not sex-matched. Continuous supervision was provided throughout the field-work by two of the authors (G.R. and S.A.). Study languages were English, Luganda and Runyankole, depending on the proficiency of the respondent. 
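As a small illustration of the sampling arithmetic described above (9 study areas × 7 villages or neighbourhoods × 10 interviews), the snippet below reproduces the expected sample size and mimics the every-third-household rule in simplified form; the household list is invented, and the field rule of skipping households with no eligible person at home is not modelled, so this is a toy version of the procedure rather than a reimplementation of it.

# Expected sample size implied by the design: 9 areas x 7 clusters x 10 interviews
n_areas, n_clusters_per_area, n_interviews_per_cluster = 9, 7, 10
print(n_areas * n_clusters_per_area * n_interviews_per_cluster)  # 630

def systematic_walk(households, step=3, target=10):
    """Select every `step`-th household encountered along the walk until
    `target` interviews have been reached (a simplified stand-in for the field rule)."""
    selected = []
    for i, hh in enumerate(households, start=1):
        if i % step == 0:
            selected.append(hh)
        if len(selected) == target:
            break
    return selected

print(systematic_walk(list(range(1, 61))))  # households 3, 6, ..., 30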
Informed consent was sought and the risks, benefits and right to decline or withdraw were carefully explained. Although potential participants were sometimes approached in the vicinity of other persons, after informed consent had been obtained, the research assistant and the participant withdrew to a more secluded location where privacy was ensured.\nAll study procedures were approved by the Institutional Ethical Review Committee of Mbarara University.", "Socio-demographic background factors were: age (18-24 yrs vs. 25-30 yrs), level of education (up to primary school vs. more than primary school), relationship status (single/widowed/separated vs. married/cohabiting), and place of residence (urban vs. semi-urban/rural with urban referring to Kampala district).\nDepression was assessed using the depression sub-section of the Hopkins Symptoms Checklist (HSCL-25)[28]. This instrument was developed for cross-cultural use and consists of 25 questions assessing symptoms of anxiety (10 items) and depression (15 items) during the past week, with the response to each item graded from 1 = \"not at all\" to 4 = \"extremely\". The depression sub-section of the questionnaire (15 items) has been validated among the Baganda as a screening tool for probable depression, with a cut-off point established based on comparison with structured clinical interviews[26,29]. Thus, we categorized participants having a depression sub-scale total score of 31 or above as having probable depression[26], in order to be able to assess the association of clinical depression with sexual risk behaviours. The Chronbach's alpha for the depression sub-scale was 0.84, i.e. similar to that previously reported in Uganda[26].\nPsychological distress was assessed by calculating a total score for the entire HSCL-25 instrument including both depression and anxiety items[28] as has been done previously[30,31]. Depression and psychological distress only partly overlap. Thus, many persons in the general population have psychological distress but do not fulfil diagnostic criteria for clinical depression. Nevertheless, such persons might potentially have increased sexual risk behaviours. Therefore, we used the HSCL-25 as a continuous measure of psychological distress and investigated whether this measure was associated with sexual risk behaviours. The continuous psychological distress variable was categorised into gender-specific quartiles instead of using e.g. a dichotomous measure, in order to capture sub-clinical psychological distress, and given that a non-linear relationship between psychological distress and sexual risk behaviour was deemed possible[13]. Gender-specific quartiles were used in order to maximise statistical power for within-gender analyses of the association between psychological distress and sexual behaviours. The 1st quartile (i.e. least psychological distress) was used as the reference category to which the other HSCL-25 quartiles were compared. The Chronbach's alpha for the entire instrument was 0.92, indicating excellent internal consistency, and suggesting that all HSCL-25 items could indeed be used to measure a common underlying construct.\nAlcohol use referred to heavy episodic drinking, a behaviour associated with sexual risk taking[21,32,33], and was operationalised as the number of times 'drunk on alcohol' per week (0 vs. 1 or more). 
Self-reports of heavy episodic drinking have previously been used in sub-Saharan African contexts[34] and may provide a viable alternative in settings where estimating the number of 'standard drinks' is difficult: In rural Uganda, a significant proportion of the alcohol consumed is locally produced, has variable alcohol content and is consumed from plastic bags, cups or a common pot.\nSexual behaviours were assessed using three measures:\n(1) 'Number of lifetime sexual partners'\nSexual partner was defined as a person with whom one has ever had sexual intercourse. The number of lifetime sexual partners was categorized as 0-3 vs. 4 or above, approximately corresponding to the 75th percentile in the current sample.\n(2) 'Number of current sexual partners'\nCurrent sexual partner was defined as a partner with whom one currently has sexual intercourse regularly. The number of current sexual partners was categorized as 0-1 vs. 2 or more. This measure thus targeted ongoing regular sexual relationships as has been done previously[35], in order to estimate the point prevalence of concurrency[36]. Two or more current sexual partners are hereafter referred to as concurrent sexual partners.\n(3) 'Frequency of condom use when having sex'\nThis question assessed the conditional likelihood that the person uses a condom in case he or she has sexual intercourse, and therefore used a relative measure of condom use (always, sometimes, never) and did not have a specific reference time period. Conceptualizing condom use as a habit, and using relative instead of count measures may increase sensitivity when investigating the psychological correlates of unprotected sex[37]. Condom use was categorised as always (consistent) vs. sometimes/never (inconsistent) condom use.\nThe study instrument was translated from English into Runyankole and Luganda and independently back-translated. The two versions were compared and necessary adjustments made (P.L., G.R., S.A. and translators). Care was taken in order not to introduce culturally alien or offensive notions or expressions, while ensuring that the intended construct was indeed being measured. The questions about sexual behaviours were asked at the end of the interview for increased acceptability. After back-translation and modification, the questionnaire was pre-tested on persons not participating in the study, with results indicating that the questions were acceptable and comprehensible.", "All analyses were a priori stratified by gender. Missing values were excluded from analyses and all percentages are percentages of valid answers. For analyses pertaining to condom use, participants who reported never having had sex were excluded.\nSexual risk behaviours were treated as outcome measures. Depression, psychological distress, and alcohol use were examined as possible predictors. However, depression, psychological distress, and alcohol use may all be inter-related. For instance, alcohol use may be both a consequence and a cause of psychological distress and depression. Thus, each of these measures was examined separately in relationship to sexual risk behaviour. Socio-demographic background factors were included into all models in order to adjust for confounding.\nMoreover, in explorative analyses we investigated whether the inter-relatedness between depression, psychological distress and alcohol use influenced their respective associations with the outcome. 
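The modelling strategy just described amounts to logistic regression fitted separately for men and women, with the socio-demographic background factors entered as covariates. The sketch below shows what such a model could look like; it is not the authors' code, the variable names (probable_depression, age_group, education, relationship_status, residence, cluster_id and the outcome columns) are hypothetical, the formula interface is from statsmodels, and the cluster-robust variant at the end only stands in for the intra-cluster-correlation check that the authors actually ran with the Stata svy procedure.

import statsmodels.formula.api as smf

ADJUSTED = " + age_group + education + relationship_status + residence"

def fit_by_gender(df, outcome, exposure="probable_depression"):
    """Fit one adjusted logistic model per gender and return the fitted results."""
    fits = {}
    formula = f"{outcome} ~ {exposure}" + ADJUSTED
    for sex, sub in df.groupby("sex"):
        fits[sex] = smf.logit(formula, data=sub).fit(disp=False)
    return fits

def fit_cluster_robust(df, outcome, exposure="probable_depression"):
    """Same model with standard errors clustered on village/neighbourhood,
    analogous to the svy-based sensitivity check described in the text."""
    formula = f"{outcome} ~ {exposure}" + ADJUSTED
    return smf.logit(formula, data=df).fit(
        disp=False, cov_type="cluster", cov_kwds={"groups": df["cluster_id"]})

Separate calls with the exposure replaced by the distress-quartile indicator (e.g. "C(distress_quartile)") or by the alcohol variable would correspond to the three predictor-specific sets of models described above.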
Thus, we examined whether the associations of depression and psychological distress with sexual risk behaviours were independent of alcohol use, and conversely, whether the association of alcohol use with sexual risk behaviours was independent of depression and psychological distress, respectively. The results of these explorative analyses are presented in the text, but not in the tables.\nWe used a modified cluster sampling method, with the clusters being villages/neighbourhoods, as contrasted to a genuinely random sampling method. Thus, given that some statistical efficiency may be lost in case responses within clusters correlate, we investigated whether adjustment for such intra-cluster correlation substantially influenced the results obtained, using the Stata svy procedure. However, confidence intervals remained virtually unchanged after adjustment and the original confidence intervals are presented in this manuscript. Significance level was set at p < 0.05.", "88% of the men and 82% of the women reported ever having had sex. 37% (n = 123) of the men and 23% (n = 70) of the women reported four or more lifetime sexual partners. 15% (n = 49) of the men and 4.5% (n = 14) of the women reported having concurrent sexual partners. 73% (n = 204) of the men and 77% (n = 193) of the women who had ever had sex reported inconsistent condom use.\nThe prevalence of probable depression was 12.0% among men and 17.9% among women. Heavy episodic drinking at least once per week was reported by 35.4% (n = 118) of the men and 13.3% (n = 41) of the women. The distribution of socio-demographic, mental health and sexual risk behaviour variables are presented in Table 1.\nSample characteristics\n1 Values represent HSCL-25 quartiles cut-off scores.\n2 Alcohol use refers to self-reported heavy episodic drinking.\n3 Condom use was assessed only among those participants who had ever had sex.\n[SUBTITLE] Depression and sexual risk behaviours [SUBSECTION] Depression was independently associated with a greater number of lifetime sexual partners among women, but not among men, after simultaneous adjustment for all socio-demographic background variables, see Table 2. Similarly, depression was independently associated with having concurrent sexual partners among women, but not among men. Associations between depression and inconsistent condom use did not reach statistical significance in either men or women.\nAssociation of depression with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Adjustment for age, education, marital status and place of residence.\n2 Analyses pertain only to participants who had ever had sex.\nWhen the above results were adjusted for alcohol use in explorative analyses, the association between depression and concurrent sexual partners in women was somewhat attenuated (OR 3.03; 95% CI 0.90-10.24), indicating that alcohol use mediated or confounded some part of that association. However, associations of depression with number of lifetime sexual partners and with condom use, respectively, were independent of alcohol use in both men and women.\nDepression was independently associated with a greater number of lifetime sexual partners among women, but not among men, after simultaneous adjustment for all socio-demographic background variables, see Table 2. Similarly, depression was independently associated with having concurrent sexual partners among women, but not among men. 
Associations between depression and inconsistent condom use did not reach statistical significance in either men or women.\nAssociation of depression with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Adjustment for age, education, marital status and place of residence.\n2 Analyses pertain only to participants who had ever had sex.\nWhen the above results were adjusted for alcohol use in explorative analyses, the association between depression and concurrent sexual partners in women was somewhat attenuated (OR 3.03; 95% CI 0.90-10.24), indicating that alcohol use mediated or confounded some part of that association. However, associations of depression with number of lifetime sexual partners and with condom use, respectively, were independent of alcohol use in both men and women.\n[SUBTITLE] Psychological distress and sexual risk behaviours [SUBSECTION] Psychological distress was independently associated with a greater number of lifetime sexual partners in both men and women, after simultaneous adjustment for socio-demographic background variables, see Table 3. The association between psychological distress and having concurrent sexual partners did not reach statistical significance in either men or women, although in women the association approached significance (p = 0.05). Psychological distress was associated with inconsistent condom use in men, while in women results did not reach statistical significance.\nAssociation of psychological distress with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Numbers represent HSCL-25 quartiles.\n2 Adjustment for age, education, marital status and place of residence.\n3 Analyses pertain only to participants who had ever had sex.\nExplorative adjustment of the above associations for alcohol use suggested that the associations between psychological distress and sexual risk behaviours were virtually independent of alcohol use in both men and women.\nPsychological distress was independently associated with a greater number of lifetime sexual partners in both men and women, after simultaneous adjustment for socio-demographic background variables, see Table 3. The association between psychological distress and having concurrent sexual partners did not reach statistical significance in either men or women, although in women the association approached significance (p = 0.05). Psychological distress was associated with inconsistent condom use in men, while in women results did not reach statistical significance.\nAssociation of psychological distress with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Numbers represent HSCL-25 quartiles.\n2 Adjustment for age, education, marital status and place of residence.\n3 Analyses pertain only to participants who had ever had sex.\nExplorative adjustment of the above associations for alcohol use suggested that the associations between psychological distress and sexual risk behaviours were virtually independent of alcohol use in both men and women.\n[SUBTITLE] Alcohol use and sexual risk behaviours [SUBSECTION] Alcohol use was independently associated with a greater number of lifetime sexual partners in both men and women, after simultaneous adjustment for socio-demographic variables, see Table 4. Moreover, alcohol use was independently associated with having concurrent sexual partners in both men and women. 
Associations between alcohol use and inconsistent condom use did not reach statistical significance in either men or women.\nAssociation of alcohol use with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Adjustment for age, education, marital status and place of residence.\n2 Analyses pertain only to participants who had ever had sex.\nWhen the above associations were adjusted for depression and psychological distress, respectively, the associations between alcohol use and sexual risk behaviours remained virtually unchanged in both men and women.\nAlcohol use was independently associated with a greater number of lifetime sexual partners in both men and women, after simultaneous adjustment for socio-demographic variables, see Table 4. Moreover, alcohol use was independently associated with having concurrent sexual partners in both men and women. Associations between alcohol use and inconsistent condom use did not reach statistical significance in either men or women.\nAssociation of alcohol use with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Adjustment for age, education, marital status and place of residence.\n2 Analyses pertain only to participants who had ever had sex.\nWhen the above associations were adjusted for depression and psychological distress, respectively, the associations between alcohol use and sexual risk behaviours remained virtually unchanged in both men and women.", "Depression was independently associated with a greater number of lifetime sexual partners among women, but not among men, after simultaneous adjustment for all socio-demographic background variables, see Table 2. Similarly, depression was independently associated with having concurrent sexual partners among women, but not among men. Associations between depression and inconsistent condom use did not reach statistical significance in either men or women.\nAssociation of depression with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Adjustment for age, education, marital status and place of residence.\n2 Analyses pertain only to participants who had ever had sex.\nWhen the above results were adjusted for alcohol use in explorative analyses, the association between depression and concurrent sexual partners in women was somewhat attenuated (OR 3.03; 95% CI 0.90-10.24), indicating that alcohol use mediated or confounded some part of that association. However, associations of depression with number of lifetime sexual partners and with condom use, respectively, were independent of alcohol use in both men and women.", "Psychological distress was independently associated with a greater number of lifetime sexual partners in both men and women, after simultaneous adjustment for socio-demographic background variables, see Table 3. The association between psychological distress and having concurrent sexual partners did not reach statistical significance in either men or women, although in women the association approached significance (p = 0.05). 
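The adjusted odds ratios and 95% confidence intervals reported in Tables 2-4 (for instance the attenuated OR of 3.03 with 95% CI 0.90-10.24 quoted earlier) are, in the usual way, exponentiated logistic-regression coefficients and Wald limits. The helper below shows that conversion; `fit` is assumed to be a fitted statsmodels logistic model such as those in the earlier sketch, and the exact estimation details of the original analysis are not stated in the text.

import numpy as np

def odds_ratios(fit, alpha=0.05):
    """OR = exp(beta); CI = exp(beta +/- z * SE), taken here from conf_int()."""
    params = fit.params
    ci = fit.conf_int(alpha=alpha)  # lower/upper limits on the log-odds scale
    table = np.exp(np.column_stack([params, ci[0], ci[1]]))
    return {name: tuple(row) for name, row in zip(params.index, table)}

A 95% confidence interval that includes 1 on the odds-ratio scale, as in the example quoted above, corresponds to a Wald p-value above 0.05.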
Psychological distress was associated with inconsistent condom use in men, while in women results did not reach statistical significance.\nAssociation of psychological distress with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Numbers represent HSCL-25 quartiles.\n2 Adjustment for age, education, marital status and place of residence.\n3 Analyses pertain only to participants who had ever had sex.\nExplorative adjustment of the above associations for alcohol use suggested that the associations between psychological distress and sexual risk behaviours were virtually independent of alcohol use in both men and women.", "Alcohol use was independently associated with a greater number of lifetime sexual partners in both men and women, after simultaneous adjustment for socio-demographic variables, see Table 4. Moreover, alcohol use was independently associated with having concurrent sexual partners in both men and women. Associations between alcohol use and inconsistent condom use did not reach statistical significance in either men or women.\nAssociation of alcohol use with sexual risk behaviours in 334 men and 312 women aged 18-30 years in the general population in Uganda\n1 Adjustment for age, education, marital status and place of residence.\n2 Analyses pertain only to participants who had ever had sex.\nWhen the above associations were adjusted for depression and psychological distress, respectively, the associations between alcohol use and sexual risk behaviours remained virtually unchanged in both men and women.", "[SUBTITLE] Main findings [SUBSECTION] Depression, psychological distress and alcohol use were all associated with having a greater number of lifetime sexual partners and with having concurrent sexual partners, with stronger associations found among women. Moreover, although less consistent, associations were found between mental health indicators and inconsistent condom use, with stronger associations found among men.\nAll the above associations were adjusted for confounding by socio-demographic variables. Explorative analyses suggested that the associations of depression and psychological distress with sexual risk behaviours were virtually independent of alcohol use. Conversely, the association between alcohol use and sexual risk behaviours was virtually independent of depression and psychological distress.\nThe prevalence of depression was in line with earlier Ugandan studies[23,24] but lower than the extremely high prevalence recently reported in war-torn Northern Uganda[25,26]. Heavy episodic drinking at least once per week was common, consistent with reports suggesting that Uganda has one of the world's highest alcohol per capita consumptions[38]. The mean number of lifetime partners among persons who had ever had sex was broadly in the same range as that reported in a national survey conducted at the time of this study[22]. The point prevalence of concurrency was relatively low, although consistent with previous Ugandan studies on concurrency[39,40]. Inconsistent condom use was reported by approximately 75% of those who had ever had sex, in agreement with reports of low rates of condom use at last sexual intercourse from the period when this study was performed[22].\nDepression, psychological distress and alcohol use were all associated with having a greater number of lifetime sexual partners and with having concurrent sexual partners, with stronger associations found among women. 
Moreover, although less consistent, associations were found between mental health indicators and inconsistent condom use, with stronger associations found among men.\nAll the above associations were adjusted for confounding by socio-demographic variables. Explorative analyses suggested that the associations of depression and psychological distress with sexual risk behaviours were virtually independent of alcohol use. Conversely, the association between alcohol use and sexual risk behaviours was virtually independent of depression and psychological distress.\nThe prevalence of depression was in line with earlier Ugandan studies[23,24] but lower than the extremely high prevalence recently reported in war-torn Northern Uganda[25,26]. Heavy episodic drinking at least once per week was common, consistent with reports suggesting that Uganda has one of the world's highest alcohol per capita consumptions[38]. The mean number of lifetime partners among persons who had ever had sex was broadly in the same range as that reported in a national survey conducted at the time of this study[22]. The point prevalence of concurrency was relatively low, although consistent with previous Ugandan studies on concurrency[39,40]. Inconsistent condom use was reported by approximately 75% of those who had ever had sex, in agreement with reports of low rates of condom use at last sexual intercourse from the period when this study was performed[22].\n[SUBTITLE] Interpretation [SUBSECTION] Depression was associated with past and current multiple partners in women. A number of interpretations for these associations are possible. Firstly, depression and multiple partners may be indirectly linked through a common cause. For instance, poverty with food insecurity may cause poor mental health, and may also force women into transactional sexual relationships in order to obtain food[41]. Similarly, having an abusive partner may lead to depression, while also motivating women to look for extramarital or alternative partners. Secondly, depression might contribute to having multiple partners. Thus, fatalism and hopelessness may lead to disregard of the threat of HIV and thus hypothetically contribute to multiple sexual partnerships[42]. Moreover, depressed women may be more vulnerable to being coerced into sex, potentially leading to an increased number of sexual contacts[20]. Thirdly, having multiple partners might lead to depression. Thus, having multiple partners may cause HIV related worries[43]. Moreover, STD and HIV infection increase the risk of subsequent depression[4,10]. In addition, women with multiple partners in Uganda may have feelings of guilt and may fear being labelled as promiscuous[44]. More research is needed in order to clarify the meaning of the association between depression and multiple partners in the Ugandan context.\nSimilarly, the associations between psychological distress and past and current partners were stronger in women. Studies in high-income countries suggest that women having multiple partners may have particularly poor mental health[11,12]. Psychological distress and depression partly overlap, and factors contributing to multiple partners in depressed women could also be operating with regard to psychological distress. In addition, psychological distress may lead to casual sex as a coping strategy, and may also decrease self-efficacy for changing risky habits[45]. Lack of sense of the future may lead to non-planning and impulsivity[46]. 
In addition, psychological distress may cause destructiveness[29], potentially contributing to sexual risk behaviours.\nIn contrast, the associations of depression and psychological distress with inconsistent condom use were stronger among men. Men's mental state may have a greater influence than that of women on the condom-use decision, given that men generally decide over condom use in Uganda[47].\nWhile associations between alcohol use and sexual risk behaviours have previously been found in Uganda[48], the mechanisms explaining the association between alcohol use and sexual risk behaviours in this setting, or elsewhere, are not fully understood: Clearly, alcohol consumption causes behavioural disinhibition potentially leading to risky decisions. However, alcohol use may also be a marker for a psychological trait, or state, conducive to risky sexual behaviours[49,50], e.g. depression or psychological distress. In the context of this uncertainty, the current findings, although cross-sectional, do suggest that alcohol use and depression/psychological distress mainly influence sexual risk behaviours through mutually independent causal pathways.\nIn summary, the interrelationship between poor mental health and sexual risk behaviours in the Ugandan setting is likely complex, and may well be bi-directional.\n[SUBTITLE] Methodological considerations [SUBSECTION] The current study was cross-sectional and its aim was to assess association. Thus, the interpretations presented regarding the possible nature and direction of the associations should be viewed as hypotheses for further testing.\nOur measure of concurrency was crude, although standard definitions of concurrency are still under development[51]. The measure involved some portion of subjectivity given that what is perceived as a current regular sexual partner depends on expectations of future encounters. However, 'current' regular relationships have been assessed also in previous studies[35,39,40]. Moreover, the measure was not subject to recall bias since participants were not required to reconstruct past partnerships. In addition, recall and concentration is affected by depressive symptoms[52] and an assessment of dates and frequencies of sexual behaviours in the past would potentially have entailed significant risks of bias.\nAs in much survey research on sensitive topics, responses may be subject to social desirability bias. Having multiple sexual partners is often not socially desirable for women in Uganda, and the numbers of partners and rates of concurrency reported among women should be viewed as minimum estimates. However, the rates found are largely consistent with evidence from previous studies on concurrency from Uganda[39,40].\nThe sampling method used was not random, since villages/neighbourhoods were purposively selected. Thus, the socio-demographic profile of the sample does not necessarily reflect that of the districts where the study was performed. In the current sample, the proportion of persons having completed primary school was higher and the proportion of persons who were married or cohabiting was lower than in the general population in Uganda [53], although it should be noted that only persons aged 18-30 years were included in the study. 
While the prevalence estimates should thus be interpreted with some caution, the purposive sampling of villages/neighbourhoods representing a broad range of economic development arguably increases the extent to which the associations found may be applicable to external populations from different socio-economic strata.\nFollow-up visits were not performed for households where no one was at home at the time of the interviewer's visit. If persons who were not at home systematically differed from those at home with respect to mental health and sexual risk behaviours, a selection bias could hypothetically have been introduced into the study.\nNo measure of personal or household wealth was included. However, results were adjusted for a proxy measure of socio-economic status (i.e. education) and place of residence.\nThe alcohol measure was crude and did not capture nuances in alcohol consumption, such as the exact number of drinks per heavy episodic drinking occasion. However, when investigating the behavioural impact of heavy episodic drinking, solely assessing the number of standard drinks may have limitations, given that the number of drinks required for altering behaviour is individual[54]. Thus, combining self-reports of heavy episodic drinking with standard drink counts may provide the best option for research on alcohol and sexual risk behaviours, particularly in settings where the 'standard drink' concept is difficult to operationalise.", "Depression, psychological distress and alcohol use were all associated with having a greater number of lifetime sexual partners and with having concurrent sexual partners, with stronger associations found among women. Moreover, although less consistent, associations were found between mental health indicators and inconsistent condom use, with stronger associations found among men.\nAll the above associations were adjusted for confounding by socio-demographic variables. Explorative analyses suggested that the associations of depression and psychological distress with sexual risk behaviours were virtually independent of alcohol use. Conversely, the association between alcohol use and sexual risk behaviours was virtually independent of depression and psychological distress.\nThe prevalence of depression was in line with earlier Ugandan studies[23,24] but lower than the extremely high prevalence recently reported in war-torn Northern Uganda[25,26]. Heavy episodic drinking at least once per week was common, consistent with reports suggesting that Uganda has one of the world's highest alcohol per capita consumptions[38]. The mean number of lifetime partners among persons who had ever had sex was broadly in the same range as that reported in a national survey conducted at the time of this study[22]. The point prevalence of concurrency was relatively low, although consistent with previous Ugandan studies on concurrency[39,40]. Inconsistent condom use was reported by approximately 75% of those who had ever had sex, in agreement with reports of low rates of condom use at last sexual intercourse from the period when this study was performed[22].", "Depression was associated with past and current multiple partners in women. A number of interpretations for these associations are possible. Firstly, depression and multiple partners may be indirectly linked through a common cause. For instance, poverty with food insecurity may cause poor mental health, and may also force women into transactional sexual relationships in order to obtain food[41]. 
Similarly, having an abusive partner may lead to depression, while also motivating women to look for extramarital or alternative partners. Secondly, depression might contribute to having multiple partners. Thus, fatalism and hopelessness may lead to disregard of the threat of HIV and thus hypothetically contribute to multiple sexual partnerships[42]. Moreover, depressed women may be more vulnerable to being coerced into sex, potentially leading to an increased number of sexual contacts[20]. Thirdly, having multiple partners might lead to depression. Thus, having multiple partners may cause HIV related worries[43]. Moreover, STD and HIV infection increase the risk of subsequent depression[4,10]. In addition, women with multiple partners in Uganda may have feelings of guilt and may fear being labelled as promiscuous[44]. More research is needed in order to clarify the meaning of the association between depression and multiple partners in the Ugandan context.\nSimilarly, the associations between psychological distress and past and current partners were stronger in women. Studies in high-income countries suggest that women having multiple partners may have particularly poor mental health[11,12]. Psychological distress and depression partly overlap, and factors contributing to multiple partners in depressed women could also be operating with regard to psychological distress. In addition, psychological distress may lead to casual sex as a coping strategy, and may also decrease self-efficacy for changing risky habits[45]. Lack of sense of the future may lead to non-planning and impulsivity[46]. In addition, psychological distress may cause destructiveness[29], potentially contributing to sexual risk behaviours.\nIn contrast, the associations of depression and psychological distress with inconsistent condom use were stronger among men. Men's mental state may have a greater influence than that of women on the condom-use decision, given that men generally decide over condom use in Uganda[47].\nWhile associations between alcohol use and sexual risk behaviours have previously been found in Uganda[48], the mechanisms explaining the association between alcohol use and sexual risk behaviours in this setting, or elsewhere, are not fully understood: Clearly, alcohol consumption causes behavioural disinhibition potentially leading to risky decisions. However, alcohol use may also be a marker for a psychological trait, or state, conducive to risky sexual behaviours[49,50], e.g. depression or psychological distress. In the context of this uncertainty, the current findings, although cross-sectional, do suggest that alcohol use and depression/psychological distress mainly influence sexual risk behaviours through mutually independent causal pathways.\nIn summary, the interrelationship between poor mental health and sexual risk behaviours in the Ugandan setting is likely complex, and may well be bi-directional.", "The current study was cross-sectional and its aim was to assess association. Thus, the interpretations presented regarding the possible nature and direction of the associations should be viewed as hypotheses for further testing.\nOur measure of concurrency was crude, although standard definitions of concurrency are still under development[51]. The measure involved some portion of subjectivity given that what is perceived as a current regular sexual partner depends on expectations of future encounters. However, 'current' regular relationships have been assessed also in previous studies[35,39,40]. 
Moreover, the measure was not subject to recall bias since participants were not required to reconstruct past partnerships. In addition, recall and concentration is affected by depressive symptoms[52] and an assessment of dates and frequencies of sexual behaviours in the past would potentially have entailed significant risks of bias.\nAs in much survey research on sensitive topics, responses may be subject to social desirability bias. Having multiple sexual partners is often not socially desirable for women in Uganda, and the numbers of partners and rates of concurrency reported among women should be viewed as minimum estimates. However, the rates found are largely consistent with evidence from previous studies on concurrency from Uganda[39,40].\nThe sampling method used was not random, since villages/neighbourhoods were purposively selected. Thus, the socio-demographic profile of the sample does not necessarily reflect that of the districts where the study was performed. In the current sample, the proportion of persons having completed primary school was higher and the proportion of persons who were married or cohabiting was lower than in the general population in Uganda [53], although it should be noted that only persons aged 18-30 years were included in the study. While the prevalence estimates should thus be interpreted with some caution, the purposive sampling of villages/neighbourhoods representing a broad range of economic development arguably increases the extent to which the associations found may be applicable to external populations from different socio-economic strata.\nFollow-up visits were not performed for households where no one was at home at the time of the interviewer's visit. If persons who were not at home systematically differed from those at home with respect to mental health and sexual risk behaviours, a selection bias could hypothetically have been introduced into the study.\nNo measure of personal or household wealth was included. However, results were adjusted for a proxy measure of socio-economic status (i.e. education) and place of residence.\nThe alcohol measure was crude and did not capture nuances in alcohol consumption, such as the exact number of drinks per heavy episodic drinking occasion. However, when investigating the behavioural impact of heavy episodic drinking, solely assessing the number of standard drinks may have limitations, given that the number of drinks required for altering behaviour is individual[54]. Thus, combining self-reports of heavy episodic drinking with standard drink counts may provide the best option for research on alcohol and sexual risk behaviours, particularly in settings where the 'standard drink' concept is difficult to operationalise.", "To our knowledge, this is the first population-based study to demonstrate an association of depression and psychological distress with sexual risk behaviours in a low-income sub-Saharan African setting. Although preliminary, the current findings indicate that the association between poor mental health and sexual risk behaviours may be present across both high and low-income settings, despite radical contextual differences e.g. in terms of economic wealth, gender inequality, level of urbanisation, personal mobility and religious norms, i.e. 
structural factors of relevance for sexual risk behaviours[14,18].\nIndeed, the current findings are consistent with the notion that depression, psychological distress and alcohol use are risk factors for sexual risk behaviours also in sub-Saharan African low-income settings, although longitudinal studies are needed in order to confirm this. Depression, psychological distress and alcohol use are prevalent in many countries with generalised HIV epidemics[6,23-26,55,56]. Thus, assuming causality, these conditions could potentially have considerable impact on population rates of sexual risk behaviours in low-income countries with high HIV prevalence.\nImproving mental health may theoretically decrease sexual risk behaviours. Based on our findings and the evidence from high and middle-income countries[9-12,19-21,45] we support the call for mental health intervention trials to include sexual risk behaviour and biological variables as outcome measures[2], particularly in low-income settings with generalised HIV epidemics. Moreover, qualitative studies should further explore subjective experiences of how poor mental health and sexual risk behaviours are inter-connected in low-income settings[57].\nIrrespective of the direction of causality, the mere co-existence of poor mental health and sexual risk behaviours has implications for HIV prevention: Those with the greatest HIV prevention needs may not always be sufficiently psychologically fit to benefit from intervention messages. For instance, learning to be assertive when communicating about sex with one's partner may be difficult for a woman who is depressed and anxious. HIV intervention trials should assess to what extent participants' mental health at baseline influence intervention outcomes. Moreover, HIV preventive programmes may need to consider including mental health and alcohol use reduction components into their intervention packages, in settings and groups where depression, psychological distress and alcohol use are common.", "The authors declare that they have no competing interests.", "PL designed and coordinated the study, performed analyses and wrote the manuscript. GR and SA led the data collection. AT and PA contributed to data analysis and results interpretation. PO contributed to study design, data analysis and results interpretation. EC conceived of the study and contributed to all stages of the work. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/125/prepub\n" ]
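The mutually adjusted analysis referred to above (sexual risk behaviours modelled on depression or psychological distress and alcohol use, adjusted for education as an SES proxy and for place of residence) can be illustrated with a small logistic-regression sketch. The snippet below is purely hypothetical: the variable names, the simulated data and the use of statsmodels are assumptions made for illustration and do not reproduce the study's dataset, models or estimates.

```python
# Hypothetical sketch: depression and heavy episodic drinking as mutually adjusted
# correlates of a sexual risk behaviour, adjusted for education and place of residence.
# All variable names and the simulated data are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "depressed":         rng.integers(0, 2, n),   # 1 = probable depression
    "heavy_drinking":    rng.integers(0, 2, n),   # 1 = heavy episodic drinking
    "completed_primary": rng.integers(0, 2, n),   # education proxy for SES
    "urban":             rng.integers(0, 2, n),   # place of residence
})
# Simulate an outcome (e.g. multiple sexual partners) that depends on both exposures.
logit_p = -1.5 + 0.6 * df["depressed"] + 0.5 * df["heavy_drinking"] - 0.2 * df["urban"]
df["multiple_partners"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Mutual adjustment: if both adjusted odds ratios remain elevated, neither association
# is explained by the other exposure, consistent (in a cross-sectional sense) with
# "mutually independent pathways" to sexual risk behaviour.
model = smf.logit(
    "multiple_partners ~ depressed + heavy_drinking + completed_primary + urban",
    data=df,
).fit(disp=False)
print(np.exp(model.params))              # adjusted odds ratios
print(model.conf_int().apply(np.exp))    # 95% CIs on the odds-ratio scale
```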
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Rotational IMRT techniques compared to fixed gantry IMRT and tomotherapy: multi-institutional planning study for head-and-neck cases.
21338501
Recent developments make it possible to deliver rotational IMRT with standard C-arm gantry based linear accelerators. This emerging treatment technique was benchmarked in a multi-centre treatment planning study against static gantry IMRT and ring-gantry-based rotational IMRT for a complex parotid-gland-sparing head-and-neck technique.
BACKGROUND
Treatment plans were created for 10 patients with head-and-neck tumours (oropharynx, hypopharynx, larynx) using the following treatment planning systems (TPS) for rotational IMRT: Monaco (ELEKTA VMAT solution), Eclipse (Varian RapidArc solution) and HiArt (Tomotherapy) for helical tomotherapy. Planning of static gantry IMRT was performed with KonRad, Pinnacle and Panther DAO for step&shoot IMRT delivery, and with Eclipse for sliding window IMRT. The prescribed doses were 65.1Gy or 60.9Gy for the high dose PTVs and 55.8Gy or 52.5Gy for the low dose PTVs, depending on resection status. Plan evaluation was based on target coverage, conformity and homogeneity, DVHs of OARs and the volume of normal tissue receiving more than 5Gy (V5Gy). Additionally, the cumulative monitor units (MUs) and treatment times of the different technologies were compared. All evaluation parameters were averaged over all 10 patients for each technique and planning modality.
METHODS
Depending on IMRT technique and TPS, the mean CI values across all patients ranged from 1.17 to 2.82, and mean HI values varied from 0.05 to 0.10. The mean values of the median doses to the spared parotid were 26.5Gy for RapidArc, 23Gy for VMAT and 14.1Gy for Tomo. For the fixed gantry techniques, 21Gy was achieved for step&shoot+KonRad, 17.0Gy for step&shoot+Panther DAO, 23.3Gy for step&shoot+Pinnacle and 18.6Gy for sliding window. V5Gy values were lowest for the sliding window IMRT technique (3499 ccm) and largest for RapidArc (5480 ccm). The lowest mean MU value of 408 was achieved by Panther DAO, compared to 1140 for sliding window IMRT.
RESULTS
All IMRT delivery technologies with their associated TPS provide plans with satisfactory target coverage while at the same time respecting the defined OAR criteria. Sliding window IMRT, RapidArc and Tomo techniques resulted in better target dose homogeneity than VMAT and step&shoot IMRT. Rotational IMRT based on C-arm linacs and Tomotherapy seem to be advantageous with respect to OAR sparing and treatment delivery efficiency, at the cost of a higher dose delivered to normal tissues. The overall treatment plan quality achieved with Tomo seems to be better than that of the other TPS-technology combinations.
CONCLUSIONS
[ "Algorithms", "Carcinoma", "Equipment Design", "Head and Neck Neoplasms", "Humans", "Organs at Risk", "Particle Accelerators", "Radiotherapy Dosage", "Radiotherapy Planning, Computer-Assisted", "Radiotherapy, Intensity-Modulated", "Rotation" ]
3050734
null
null
Methods
[SUBTITLE] Patients [SUBSECTION] Ten patients with complex shaped targets in the head-and-neck region (oropharynx, hypopharynx, larynx) suitable for an SIB technique were selected for this retrospective multi-centre treatment planning study. The characteristics of these patients are shown in Table 1. Overview of the patients [SUBTITLE] Treatment techniques [SUBSECTION] All PTVs and OARs were contoured in one TPS at the study coordination centre in Jena. CT data including structure sets of all patients were transferred to the different centres, each of which provided one of the following treatment technologies: Tomotherapy, VMAT, RapidArc, sliding window and step&shoot IMRT. More specifically, the following TPS were used: HiArt (Tomotherapy) for helical tomotherapy (Tomo); Monaco for rotational IMRT (VMAT) on an ELEKTA linac; and Eclipse for rotational IMRT on a Varian linac (RapidArc). For static gantry IMRT four TPS were used: KonRad (Siemens), Pinnacle (ADAC) and Panther DAO (Prowess) for step&shoot IMRT, and Eclipse (Varian) for sliding window IMRT. All treatment plans were calculated with a nominal energy of 6 MV. A detailed overview of the technologies, TPS, linacs etc. is given in Table 2. Overview of used technologies, TPS and versions, linacs, number of beams or arcs and energy The aim of the planning study was to achieve similar median doses in the PTVs for all ten patients. Depending on the therapy concept, which is based on the resection status, the prescribed median PTV dose was defined as 52.2Gy or 55.8Gy to the lymph node region (PTV2) and as 60.9Gy or 65.1Gy to the integrated boost volume (PTV1). The minimum criterion (93% of the prescribed dose to at least 99% of the PTV) was derived from the RTOG H0022 protocol. The maximum dose criterion was that no more than 1% of the PTV receives more than 110%. Additionally, the OAR objectives for the parotid glands (Dmedian < 26Gy), the mandible (Dmedian < 45Gy) and the spinal cord plus a 7 mm margin (Dmax < 43Gy) should be satisfied. Fulfilling the dose criteria for the PTV was given the highest priority in treatment planning, except where the criterion for the spinal cord could otherwise not be met. [SUBTITLE] Treatment plan evaluation [SUBSECTION] All doses in the evaluation are relative doses, normalised to the prescribed doses of PTV1 and PTV2. The evaluation was based on several criteria. The first criterion was the PTV coverage with 93% of the prescribed dose. The conformation of the PTVs (with respect to 93% of the prescribed dose) was described by the conformity index (CI = Volume93%/PTV). This specific formula was selected based on the minimum criterion that no more than 1% of any PTV should receive <93% of its prescribed dose, i.e. almost 100% of the PTV should receive at least 93% of the dose. Target dose heterogeneity was described by the homogeneity index (HI = [D5%-D95%]/Dmean), i.e. a small HI indicates a better plan in the comparison. Another main focus of the comparison was the DVHs of the OARs and the volume of healthy tissue receiving more than 5Gy (V5Gy). Finally, the cumulative monitor units (MUs) and treatment times of the different technologies were compared. For that purpose the different linac calibration conditions were normalised, except for the Tomotherapy machine. All evaluation parameters were averaged over the 10 patients for each technique and planning modality. The standard deviations of all evaluation values were calculated over the ten patients. [SUBTITLE] Results [SUBSECTION] All IMRT technologies with their respective TPSs were able to provide treatment plans which fulfilled the planning goals. Figure 1 shows, as an example, DVHs for one patient for both PTVs and all IMRT techniques. The coverage of the PTVs is shown in Figures 2 and 3. In these figures the dose given to 99% of the PTV is used as the criterion. These doses are in a range of 91% to 95% of the prescribed dose for PTV1 and between 84% and 93% for PTV2. The prescribed doses are 55.8 Gy to the low dose region and 65.1Gy to the high dose region. The PTV2 is a subset of PTV1. Dose at 99% of the PTV2 depending on technology and TPS. Dose at 99% of the PTV1 depending on technology and TPS. The median doses of the low and high dose PTVs are in a range of 99.9% (Tomo) to 104.9% (VMAT) for PTV2 and between 101.4% (KonRad) and 105.8% (VMAT) for PTV1, as seen in Figures 4 and 5. Median doses of the PTV2 depending on technology and TPS. Median doses of the PTV1 depending on technology and TPS. [SUBTITLE] Conformation evaluation [SUBSECTION] Figures 6 and 7 show the CI values. The best conformation was achieved with KonRad+step&shoot, with a mean CI of 1.17 for the PTV2. The CI values of the PTV2 were rather similar, with 1.30 for sliding window, 1.31 for Tomo, 1.32 for DAO+step&shoot and 1.33 for Pinnacle+step&shoot, while it was 1.38 for both VMAT and RapidArc. Conformity index of the PTV2 depending on technology and TPS. Conformity index of the PTV1 depending on technology and TPS. The conformation of the PTV1 was again best for KonRad+step&shoot (1.33). The second best result was achieved by the sliding window technique and Tomo (both 1.47), followed by RapidArc (1.63), DAO+step&shoot (1.68), VMAT (1.94) and Pinnacle+step&shoot with only 2.82. [SUBTITLE] Homogeneity evaluation [SUBSECTION] The HI values for PTV2 were not evaluated because not all TPS were able to provide a PTV2 excluding the boost PTV. HI values for PTV1 are shown in Figure 8. The best HI for the PTV1 was found with Tomo (0.047), followed by sliding window (0.062). Higher HI values were found for RapidArc (0.078), DAO+step&shoot (0.083) and VMAT (0.091) treatment plans, as well as for KonRad+step&shoot and Pinnacle+step&shoot plans (both 0.100). Homogeneity index of the PTV1 depending on technology and TPS. [SUBTITLE] Evaluation of OAR sparing [SUBSECTION] A summary of the results concerning OAR sparing is shown in Table 3. Not all TPS could reach the OAR objectives. The median doses of the parotids were 14.1Gy for Tomo, 17.0 Gy for step&shoot+DAO, 18.6Gy for sliding window, 21Gy for step&shoot+KonRad, 23Gy for VMAT, 23.3 Gy for step&shoot+Pinnacle and 26.5Gy for RapidArc. OAR doses depending on IMRT technology The maximal doses to the myelon plus 7 mm margin varied from 34.2Gy (Tomo), 40.6Gy (VMAT), 42 Gy (RapidArc), 42.4 Gy (step&shoot+DAO), 42.9Gy (KonRad+step&shoot) and 43.2 Gy (Pinnacle+step&shoot) to 44.9 Gy (sliding window). The median doses to the mandible were 36.1Gy (Tomo), 39.5Gy (Pinnacle+step&shoot), 40Gy (KonRad+step&shoot), 41.2Gy (RapidArc), 42.9Gy (step&shoot+DAO), 43.1Gy (VMAT) and 43.7Gy (sliding window). [SUBTITLE] Evaluation of low dose burden, MUs and treatment time [SUBSECTION] Table 4 summarises the volume receiving more than 5Gy (V5Gy), the MUs and the treatment times. The lowest V5Gy values were achieved with the sliding window technique with fixed gantry angles (3499 ccm). The other technologies gave the following values in increasing order: VMAT (4498 ccm), KonRad+step&shoot (4525 ccm), Pinnacle+step&shoot (5010 ccm), Tomo (5122 ccm), DAO+step&shoot (5332 ccm) and RapidArc (5480 ccm). MUs, treatment time, V5Gy depending on IMRT technology The comparison of the MUs for the different technologies showed a wide range. The normalised MUs were lowest for DAO+step&shoot (408), followed by RapidArc (437) and VMAT (501). The step&shoot technique planned with KonRad required on average 800 MU, whereas when planned with Pinnacle it increased to 1059 MU on average. The sliding window technique needs on average 1140 MU for IMRT delivery. The shortest mean treatment times were associated with RapidArc (2.5 min with 2 arcs), followed by DAO+step&shoot (7 min), Tomo (8 min), VMAT (9 min with 2 arcs), sliding window (10.5 min) and step&shoot with KonRad and Pinnacle (11 min).
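The plan-evaluation metrics defined in the methods text above (CI = Volume93%/PTV, HI = [D5%-D95%]/Dmean, V5Gy, and the D99/D1 coverage criteria) can be sketched in code. The following is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the fixed dose grid and the toy cumulative DVH are invented for the example.

```python
# Minimal, hypothetical sketch of the plan-quality metrics defined above, computed from
# cumulative DVHs sampled on a fixed dose grid. Function names, the dose grid and the
# toy DVH are illustrative assumptions, not the study's actual evaluation code.
import numpy as np

def dose_at_volume(dose_grid, cum_volume_frac, volume_frac):
    """Dx: highest dose still covering at least `volume_frac` of the structure.

    `cum_volume_frac[i]` is the fraction of the structure receiving >= dose_grid[i],
    so it decreases monotonically from ~1 towards 0.
    """
    idx = np.where(cum_volume_frac >= volume_frac)[0]
    return dose_grid[idx[-1]] if idx.size else dose_grid[0]

def conformity_index(dose_grid, body_cum_frac, body_volume_cc, ptv_volume_cc, prescribed_dose):
    """CI = (patient volume receiving >= 93% of the prescription) / PTV volume."""
    covered_frac = np.interp(0.93 * prescribed_dose, dose_grid, body_cum_frac)
    return (body_volume_cc * covered_frac) / ptv_volume_cc

def homogeneity_index(dose_grid, ptv_cum_frac, mean_dose):
    """HI = (D5% - D95%) / Dmean; a smaller HI indicates a more homogeneous plan."""
    d5 = dose_at_volume(dose_grid, ptv_cum_frac, 0.05)
    d95 = dose_at_volume(dose_grid, ptv_cum_frac, 0.95)
    return (d5 - d95) / mean_dose

def v5gy_cc(dose_grid, body_cum_frac, body_volume_cc):
    """Absolute volume (ccm) of tissue receiving more than 5 Gy."""
    return body_volume_cc * np.interp(5.0, dose_grid, body_cum_frac)

# Toy check of the coverage criteria used in the study (93% of the prescription to at
# least 99% of the PTV; no more than 1% of the PTV above 110%), for a 65.1 Gy boost PTV.
dose_grid = np.linspace(0.0, 75.0, 751)                    # Gy
ptv_cum = 1.0 / (1.0 + np.exp((dose_grid - 67.0) / 0.8))   # synthetic cumulative DVH
d99 = dose_at_volume(dose_grid, ptv_cum, 0.99)
d1 = dose_at_volume(dose_grid, ptv_cum, 0.01)
print(d99 >= 0.93 * 65.1, d1 <= 1.10 * 65.1)               # expect: True True
```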
null
null
null
null
[ "Background", "Patients", "Treatment techniques", "Treatment plan evaluation", "Results", "Conformation evaluation", "Homogeneity evaluation", "Evaluation of OAR sparing", "Evaluation of low dose burden, MUs and treatment time", "Discussion", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Today intensity-modulated radiation therapy (IMRT) is the method of choice for the treatment of patients with complex-shaped planning target volumes (PTV) targets, especially when concave targets are close to a larger number of organs-at-risk (OAR) with different dose constraints and for multiple integrated targets with different dose prescriptions e.g. simultaneous integrated boost (SIB) treatments. The advantage of IMRT for head-and-neck cancer patients is the dose reduction in the parotid glands which implies less xerostomia and therefore has a big impact on the quality of life. Besides all these advantages of IMRT there are some disadvantages too. The delivery of complex plans with traditional IMRT techniques takes extra time and the dose distribution in the PTV is more inhomogeneous compared to conformal techniques. Another important aspect is the higher number of monitor units (MU) in comparison with non-wedged conformal plans. These higher numbers of MUs result in increased peripheral dose, which adds to the generally increased low dose region when applying IMRT [1-3]. Different factors that influence the quality and the complexity of IMRT plans have been investigated by various authors [4-10].\nFurthermore, there are some extra requirements for the delivery of IMRT, for instance the high mechanical and dosimetric accuracy of the treatment machine and a TPS with a powerful optimisation and segmentation algorithm.\nDuring the last years new rotational IMRT treatment technologies have become available. These technologies utilize a higher number of degrees of freedom for dose sculpting, i.e. the beam is on during gantry rotation, and at the same time gantry speed, leaf positions, leaf speed and dose rate may be varied. Helical tomotherapy (HT) (Tomotherapy) and rotational IMRT techniques like volumetric-modulated arc therapy (VMAT/Elekta) or RapidArc (Varian) are the most prominent examples. These new technologies enable to achieve treatment plans of similar or better quality compared to static IMRT [11-25]. VMAT and RapidArc can be delivered with standard C-arm gantry linacs. Several authors investigated the plan quality and other parameters in comparisons of these new IMRT modalities with HT or standard IMRT with fixed gantry angles.\nAlthough several papers were published on comparing static with rotational IMRT, they were limited mostly to two treatment planning systems and were usually performed in one institution, i.e. they were limited by planning traditions. To overcome this limitation it was the aim of the present study to benchmark as many upcoming rotational IMRT techniques as possible against a wide range of commonly practised static IMRT and dynamic IMRT techniques using one of the most complex treatment situations in today's clinical practice, a parotid gland sparing head-and-neck technique with simultaneous integrated boost (SIB). The influence of different optimisation algorithms (3 different algorithms for step&shoot) was integral part of this multi-institutional study, but the influence of the dose calculation algorithms was not taken into account for current comparison.", "Ten patients with complex shaped targets in the head-and-neck region (orpharynx, hypopharynx, larynx) suitable for an SIB technique were selected for this retrospective multi-centre treatment planning study. The characteristics of these patients are shown in Table 1.\nOverview of the patients", "All PTVs and OARs were contoured in one TPS at the study coordination centre in Jena. 
CT data including structure sets of all patients were transferred to different centres which provided one of the following treatment technologies: Tomotherapy, VMAT, RapidArc, sliding window and step&shoot IMRT. More specifically, the following TPS were used: the TPS HiArt (Tomotherapy) was used for the helical tomotherapy (Tomo); rotational IMRT (VMAT) for an ELEKTA linac was planned with the TPS Monaco while rotational IMRT performed with a Varian linac (RadpidArc) was planned with Eclipse. For the static gantry IMRT four TPS were used: for step&shoot IMRT the KonRad (Siemens) system, the TPS Pinacle (ADAC) and the Panther DAO (Prowess), and finally for sliding window IMRT the Eclipse (Varian) system. All treatment plans were calculated with a nominal energy of 6 MV. The detailed overview about the used technologies, the TPS, linac e.t.c. is shown in table 2.\nOverview of used technologies, TPS and versions, linacs, number of beams or arcs and energy\nThe aim of the planning study was to achieve similar median doses in the PTVs for all ten patients. Dependent on the therapy concept which is based on the status of resection, the prescribed median PTV dose was defined as 52.2Gy or 55.8Gy to the lymph node region (PTV2) and as 60.9Gy or 65.1Gy to the integrated boost volume (PTV1). The minimal criterium (93% of the prescribed dose to minimal 99% of the PTV) was deduced from the RTOG H0022 protocol. The maximum dose criterion was defined as maximal 1% of the PTV receives maximal 110%. Additionally, the OAR objective for the parotid glands (Dmedian < 26Gy), for the mandibular (Dmedian < 45Gy) and the spinal cord plus a 7 mm margin (Dmax < 43Gy) should be satisfied. Fulfilling of the dose criteria for the PTV is given highest priority for treatment planning, except the criteria for the spinal cord could not be met.", "All doses in the evaluation are relative doses, normalised to the prescribed doses of PTV1 and PTV2. The evaluation was based on several criteria. The first criterium was the PTV coverage with 93% of the prescribed dose. The conformation of the PTVs (with respect to 93% of the prescribed dose) was described by the conformity index (CI = Volume93%/PTV). This specific formula was selected based on the assumption that no more than 1% of any PTV should receive <93% of its prescribed dose as minimum criteria, i.e. almost 100% of the PTV should received at least 93% of the dose. Target dose heterogeneity was described by the homogeneity index (HI=[D5%-D95%]/Dmean), i.e. a small HI indicates a better plan in the comparison. Another main focus of the comparison was put on the DVHs of the OARs and the volume of healthy tissue receiving more than 5Gy (V5Gy). Finally, the cumulative monitor units (MUs) and treatment times of the different technologies were compared. For that purpose the different linac calibrations conditions were normalised except the Tomotherapy machine.\nAll evaluation parameters were averaged over the 10 patients for each technique and planning modality. The standard deviations for all evaluation values were calculated over the ten patients.", "All IMRT technologies with their respective TPSs were able to provide treatment plans which fulfilled the planning goals. Figure 1 shows as an example DVHs for one patient for both PTVs and all IMRT techniques. The coverage of the PTVs is seen in figure 2 and 3. In that figures the doses which is given to 99% of the PTVs is used as criterium. 
These doses are in a range of 91% till 95% of the prescribed dose for PTV1 and between 84% and 93% for the PTV2.\nThe prescribed doses are 55.8 Gy to the low dose region and 65.1Gy to the high dose region. The PTV2 is a subset of PTV1.\nDose at 99% of the PTV2 dependend on technology and TPS.\nDose at 99% of the PTV1 dependend on technology and TPS.\nThe median doses of the low and high dose PTVs are in a range of 99.9% (Tomo) and 104.9% (VMAT) for PTV2 and between 101.4% (Konrad) and 105.8% (VMAT) for PTV1 as seen in figure 4 and figure 5.\nMedian doses of the PTV2 dependend on technologie and TPS.\nMedian doses of the PTV1 dependend on technologie and TPS.", "Figure 6 and figure 7 show the CI values. The best conformation was achieved with the KonRad+step&shoot with a mean CI of 1.17 for the PTV2. The CI values of the PTV2 were rather similar with 1.30 for sliding window, 1.31. for Tomo, 1.32 for DAO+step&shoot and 1.33 for Pinacle+step&shoot, while it was 1.38 for both VMAT and RapidArc.\nConformity index of the PTV2 dependend on technologie and TPS.\nConformity index of the PTV1 dependend on technologie and TPS.\nThe conformation of the PTV1 was again best for KonRad+Step&shoot (1.33). The second best result was achieved by the sliding window technique and Tomo (both 1.47), followed by RapiArc (1.63), DAO+step&shoot (1.68), VMAT (1.94) and Pinacle+step&shoot only with 2.82.", "The HI values for PTV2 were not evaluated because not all TPS were able to provide PTV2 excluded the Boost PTV. HI values for PTV1 are shown in figure 8. The best HI for the PTV1 was found with Tomo (0.047), followed by sliding window (0.062). Higher HI values were found for RapidArc (0.078) and DAO+step&shoot (0.083), VMAT (0.091) treatment plans, as well as for KonRad+step&shoot and Pinacle+step&shoot plans (both 0.100).\nHomogeneity index of the PTV1 dependend on technologie and TPS.", "A summary of the results concerning OAR sparing is shown in table 3. Not all TPS could reach the OAR objectives. The median doses of the parotids were 14.1Gy for Tomo, 17.0 Gy for step&shoot+DAO, 18.6Gy for sliding window, 21Gy for step&shoot+KonRad, 23Gy for VMAT, 23.3 Gy for step&shoot+Pinnacle and 26.5.Gy for RapidArc.\nOAR doses dependend on IMRT technology\nThe maximal doses to the myelon plus 7 mm margin varied between 34.2Gy (Tomo), 40.6Gy (VMAT), 42 Gy (RapidArc), 42.4 Gy (step&shoot+DAO), 42.9Gy (KonRad+step&shoot), 43.2 Gy (Pinnacle+step&shoot), to 44.9 Gy (sliding window).\nThe median doses to the mandible were 36.1Gy (Tomo), 39.5 (Pinnacle+step&shoot), 40Gy (KonRad+step&shoot), 41.2Gy (RapidArc), 42.9Gy (step&shoot+DAO), 43.1Gy (VMAT), 43.7Gy (sliding window).", "Table 4 summarized the results of the volume receiving more than 5Gy (V5Gy), the MU and treatment time, respectively. The lowest V5Gy values were achieved with the sliding window technique with fixed gantry angles (3499 ccm). The other technologies present the following values in increasing order: VMAT (4498 ccm), KonRad+step&shoot (4525 ccm), Pinacle+step&shoot (5010 ccm), Tomo (5122 ccm), DAO+step&shoot (5332 ccm) and RapidArc (5480 ccm).\nMUs, treatment time, V5Gy dependend on IMRT technology\nThe comparison of the MUs for the different technologies showed a wide range. The normalised MUs were lowest for DAO+step&shoot (408), followed by RapidArc (437) and VMAT (501). The step&shoot technique planned with KonRad required on average 800 MU, but when planned with Pinnacle it increased up to 1059 MU on average. 
The sliding window technique needs on average 1140 MU for IMRT delivery.\nThe shortest mean treatment times were associated with RapidArc (2.5 min with 2 arcs), followed by DAO+step&shoot (7 min), Tomo (8 min), VMAT (9 min with 2 arcs), sliding window (10.5 min) and step&shoot with KonRad and Pinnacle (11 min).", "The present study is a multi-institutional study; this implies that some "subjective" factors depend on the planning philosophy of the respective hospital, e.g. the number of beam directions, the number of segments and arcs, the limitations of the MLCs, and the relative weighting of PTV and OAR objectives. The level of experience of the planners in the different centres also plays a role, which is why experienced users were selected for every technology and TPS combination. Ultimately, the results of this multi-institutional study show that all IMRT technologies used, together with their TPSs, are able to provide treatment plans with satisfying target coverage while at the same time respecting the defined OAR criteria. However, there is no single best technology with respect to all evaluation parameters, i.e. every technique comes with certain advantages and disadvantages. As far as treatment planning is concerned, there were substantial differences in how easily the planning goals for the different volumes could be specified. It would be of great help for treatment planning if TPS functions were available that excluded intersections automatically, or that allowed priorities to be assigned to different PTVs with intersections.\nThe results are in good agreement with published data [26-29] regarding volumetric arc therapy. Only the results of our study obtained with sliding window are much better than those in [17]. Separating the patients into two groups (post-operative patients and primary RT) did not show significant differences in the results.\nAll treatment plans offer very good coverage of the PTV1 and good coverage of the PTV2. The lowest dose to the PTV2, with clearly inferior results compared to the other techniques, was achieved with the Pinnacle step&shoot combination. The median doses for the PTV2 and the PTV1 were in a range between 100% and 106%. This implies that the planners of the participating institutes improved the coverage of the PTVs by increasing the median dose. The requirements of the H0022 protocol are largely fulfilled. ICRU recommendations for prescribing, reporting and recording IMRT have just been published, which will be helpful in the future to harmonise IMRT practice [30].\nSliding window, RapidArc and Tomo techniques resulted in better target dose homogeneity for the PTV1 compared to VMAT and step&shoot with Panther DAO, Pinnacle and KonRad.\nAll technology-TPS combinations fulfil the OAR constraints. Only the high maximal dose to the myelon obtained with sliding window stands out (but, with a margin of 7 mm, it is clinically acceptable). The highest median dose to the spared parotid, obtained with RapidArc, is also conspicuous.\nThe volume which receives 5Gy or more is lowest with the sliding window technique (3800 ccm), followed by VMAT and KonRad step&shoot (about 4500 ccm). Pinnacle step&shoot, Tomo, Panther DAO and RapidArc deliver doses of 5Gy or more to volumes of 5000 ccm or larger. It is of interest that neither the "classic IMRT" with fixed gantry angles nor the rotation-based IMRT is clearly the superior solution. It seems that rotational IMRT techniques do not automatically generate a larger volume receiving 5Gy or more. This volume could probably be reduced further by using higher photon beam energies.\nThe treatment delivery times obtained in the present study were shortest for the RapidArc solution. The delivery times for Tomo and Panther DAO were in the medium range, while VMAT, step&shoot with KonRad or Pinnacle, and sliding window were characterised by the longest ones. As far as the VMAT results on delivery efficiency are concerned, it needs to be emphasised that Monaco version 2.01 was used in the present study; a new, improved sequencer is available in successive versions of this TPS.\nThe MUs are considerably reduced for DAO step&shoot (408MU), RapidArc (437MU) and VMAT (501MU). The number of MUs needed for a step&shoot KonRad plan lies in the middle (about 800MU). Pinnacle step&shoot needs 1060MU and sliding window requires the highest number, 1140MU. It is known that the number of MUs is one factor influencing the peripheral dose, but there are other factors such as the linac head shielding and collimation system (shape, thickness, material), the focus-to-body distance and the spectrum of the beam. The peripheral dose is undoubtedly important, but in this particular case it is subordinate to the treatment plan quality.", "This is the first multi-institutional study to determine the influence of seven different combinations of treatment technologies and TPS on the planning of head and neck cancer treatments with a simultaneous integrated boost technique. The results presented above indicate that all IMRT delivery technologies with their associated TPS provide IMRT plans with satisfying target coverage while at the same time mostly respecting the defined OAR criteria.\nSliding window, RapidArc and Tomo techniques provide better target dose homogeneity compared to VMAT and step&shoot with Panther DAO, Pinnacle and KonRad. The conformity reached was best for KonRad for both the high and the low dose PTV, with a remarkable distance to all other IMRT techniques. The overall treatment plan quality using Tomo, regarding target coverage, HI, CI and OAR sparing, seems to be better than that of the other TPS-technology combinations. For the parotid gland, clear median dose differences were observed between the different IMRT techniques. Rotational IMRT and Tomo seem to be advantageous in some respects regarding OAR sparing and treatment delivery efficiency, at the cost of a higher dose burden (>5Gy) to normal tissues. The application times are shortest for RapidArc, with some concessions, e.g. in parotid sparing. The combination of Panther DAO and step&shoot shows that a segmentation algorithm optimised for time-saving applications reduces the treatment time, likewise with some concessions in plan quality. The applications need the most time with VMAT, with step&shoot with KonRad or Pinnacle, and with sliding window.\nWe expect the results of our study to be of medical relevance, e.g. regarding partial underdosage, differences in OAR sparing and the dose burden of 5Gy or more, but this should be investigated in prospective studies.", "The authors declare that they have no competing interests.", "TW coordinated the entire study. Patient accrual and clinical data collection was done by TGW. Treatment planning was conducted by TW, EB, IF, GH, MK, GL, KS, HS, DW.\nData collection was worked out by TB. Data analysis was done by TW and TB.\nThe manuscript was prepared by TW. 
Corrections and/or improvements were suggested by DG, IF, HS, KS and TGW. Major revisions were done by TW. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients", "Treatment techniques", "Treatment plan evaluation", "Results", "Conformation evaluation", "Homogeneity evaluation", "Evaluation of OAR sparing", "Evaluation of low dose burden, MUs and treatment time", "Discussion", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Today intensity-modulated radiation therapy (IMRT) is the method of choice for the treatment of patients with complex-shaped planning target volumes (PTV) targets, especially when concave targets are close to a larger number of organs-at-risk (OAR) with different dose constraints and for multiple integrated targets with different dose prescriptions e.g. simultaneous integrated boost (SIB) treatments. The advantage of IMRT for head-and-neck cancer patients is the dose reduction in the parotid glands which implies less xerostomia and therefore has a big impact on the quality of life. Besides all these advantages of IMRT there are some disadvantages too. The delivery of complex plans with traditional IMRT techniques takes extra time and the dose distribution in the PTV is more inhomogeneous compared to conformal techniques. Another important aspect is the higher number of monitor units (MU) in comparison with non-wedged conformal plans. These higher numbers of MUs result in increased peripheral dose, which adds to the generally increased low dose region when applying IMRT [1-3]. Different factors that influence the quality and the complexity of IMRT plans have been investigated by various authors [4-10].\nFurthermore, there are some extra requirements for the delivery of IMRT, for instance the high mechanical and dosimetric accuracy of the treatment machine and a TPS with a powerful optimisation and segmentation algorithm.\nDuring the last years new rotational IMRT treatment technologies have become available. These technologies utilize a higher number of degrees of freedom for dose sculpting, i.e. the beam is on during gantry rotation, and at the same time gantry speed, leaf positions, leaf speed and dose rate may be varied. Helical tomotherapy (HT) (Tomotherapy) and rotational IMRT techniques like volumetric-modulated arc therapy (VMAT/Elekta) or RapidArc (Varian) are the most prominent examples. These new technologies enable to achieve treatment plans of similar or better quality compared to static IMRT [11-25]. VMAT and RapidArc can be delivered with standard C-arm gantry linacs. Several authors investigated the plan quality and other parameters in comparisons of these new IMRT modalities with HT or standard IMRT with fixed gantry angles.\nAlthough several papers were published on comparing static with rotational IMRT, they were limited mostly to two treatment planning systems and were usually performed in one institution, i.e. they were limited by planning traditions. To overcome this limitation it was the aim of the present study to benchmark as many upcoming rotational IMRT techniques as possible against a wide range of commonly practised static IMRT and dynamic IMRT techniques using one of the most complex treatment situations in today's clinical practice, a parotid gland sparing head-and-neck technique with simultaneous integrated boost (SIB). The influence of different optimisation algorithms (3 different algorithms for step&shoot) was integral part of this multi-institutional study, but the influence of the dose calculation algorithms was not taken into account for current comparison.", "[SUBTITLE] Patients [SUBSECTION] Ten patients with complex shaped targets in the head-and-neck region (orpharynx, hypopharynx, larynx) suitable for an SIB technique were selected for this retrospective multi-centre treatment planning study. 
The characteristics of these patients are shown in Table 1.\nOverview of the patients\nTen patients with complex shaped targets in the head-and-neck region (orpharynx, hypopharynx, larynx) suitable for an SIB technique were selected for this retrospective multi-centre treatment planning study. The characteristics of these patients are shown in Table 1.\nOverview of the patients\n[SUBTITLE] Treatment techniques [SUBSECTION] All PTVs and OARs were contoured in one TPS at the study coordination centre in Jena. CT data including structure sets of all patients were transferred to different centres which provided one of the following treatment technologies: Tomotherapy, VMAT, RapidArc, sliding window and step&shoot IMRT. More specifically, the following TPS were used: the TPS HiArt (Tomotherapy) was used for the helical tomotherapy (Tomo); rotational IMRT (VMAT) for an ELEKTA linac was planned with the TPS Monaco while rotational IMRT performed with a Varian linac (RadpidArc) was planned with Eclipse. For the static gantry IMRT four TPS were used: for step&shoot IMRT the KonRad (Siemens) system, the TPS Pinacle (ADAC) and the Panther DAO (Prowess), and finally for sliding window IMRT the Eclipse (Varian) system. All treatment plans were calculated with a nominal energy of 6 MV. The detailed overview about the used technologies, the TPS, linac e.t.c. is shown in table 2.\nOverview of used technologies, TPS and versions, linacs, number of beams or arcs and energy\nThe aim of the planning study was to achieve similar median doses in the PTVs for all ten patients. Dependent on the therapy concept which is based on the status of resection, the prescribed median PTV dose was defined as 52.2Gy or 55.8Gy to the lymph node region (PTV2) and as 60.9Gy or 65.1Gy to the integrated boost volume (PTV1). The minimal criterium (93% of the prescribed dose to minimal 99% of the PTV) was deduced from the RTOG H0022 protocol. The maximum dose criterion was defined as maximal 1% of the PTV receives maximal 110%. Additionally, the OAR objective for the parotid glands (Dmedian < 26Gy), for the mandibular (Dmedian < 45Gy) and the spinal cord plus a 7 mm margin (Dmax < 43Gy) should be satisfied. Fulfilling of the dose criteria for the PTV is given highest priority for treatment planning, except the criteria for the spinal cord could not be met.\nAll PTVs and OARs were contoured in one TPS at the study coordination centre in Jena. CT data including structure sets of all patients were transferred to different centres which provided one of the following treatment technologies: Tomotherapy, VMAT, RapidArc, sliding window and step&shoot IMRT. More specifically, the following TPS were used: the TPS HiArt (Tomotherapy) was used for the helical tomotherapy (Tomo); rotational IMRT (VMAT) for an ELEKTA linac was planned with the TPS Monaco while rotational IMRT performed with a Varian linac (RadpidArc) was planned with Eclipse. For the static gantry IMRT four TPS were used: for step&shoot IMRT the KonRad (Siemens) system, the TPS Pinacle (ADAC) and the Panther DAO (Prowess), and finally for sliding window IMRT the Eclipse (Varian) system. All treatment plans were calculated with a nominal energy of 6 MV. The detailed overview about the used technologies, the TPS, linac e.t.c. is shown in table 2.\nOverview of used technologies, TPS and versions, linacs, number of beams or arcs and energy\nThe aim of the planning study was to achieve similar median doses in the PTVs for all ten patients. 
Dependent on the therapy concept which is based on the status of resection, the prescribed median PTV dose was defined as 52.2Gy or 55.8Gy to the lymph node region (PTV2) and as 60.9Gy or 65.1Gy to the integrated boost volume (PTV1). The minimal criterium (93% of the prescribed dose to minimal 99% of the PTV) was deduced from the RTOG H0022 protocol. The maximum dose criterion was defined as maximal 1% of the PTV receives maximal 110%. Additionally, the OAR objective for the parotid glands (Dmedian < 26Gy), for the mandibular (Dmedian < 45Gy) and the spinal cord plus a 7 mm margin (Dmax < 43Gy) should be satisfied. Fulfilling of the dose criteria for the PTV is given highest priority for treatment planning, except the criteria for the spinal cord could not be met.\n[SUBTITLE] Treatment plan evaluation [SUBSECTION] All doses in the evaluation are relative doses, normalised to the prescribed doses of PTV1 and PTV2. The evaluation was based on several criteria. The first criterium was the PTV coverage with 93% of the prescribed dose. The conformation of the PTVs (with respect to 93% of the prescribed dose) was described by the conformity index (CI = Volume93%/PTV). This specific formula was selected based on the assumption that no more than 1% of any PTV should receive <93% of its prescribed dose as minimum criteria, i.e. almost 100% of the PTV should received at least 93% of the dose. Target dose heterogeneity was described by the homogeneity index (HI=[D5%-D95%]/Dmean), i.e. a small HI indicates a better plan in the comparison. Another main focus of the comparison was put on the DVHs of the OARs and the volume of healthy tissue receiving more than 5Gy (V5Gy). Finally, the cumulative monitor units (MUs) and treatment times of the different technologies were compared. For that purpose the different linac calibrations conditions were normalised except the Tomotherapy machine.\nAll evaluation parameters were averaged over the 10 patients for each technique and planning modality. The standard deviations for all evaluation values were calculated over the ten patients.\nAll doses in the evaluation are relative doses, normalised to the prescribed doses of PTV1 and PTV2. The evaluation was based on several criteria. The first criterium was the PTV coverage with 93% of the prescribed dose. The conformation of the PTVs (with respect to 93% of the prescribed dose) was described by the conformity index (CI = Volume93%/PTV). This specific formula was selected based on the assumption that no more than 1% of any PTV should receive <93% of its prescribed dose as minimum criteria, i.e. almost 100% of the PTV should received at least 93% of the dose. Target dose heterogeneity was described by the homogeneity index (HI=[D5%-D95%]/Dmean), i.e. a small HI indicates a better plan in the comparison. Another main focus of the comparison was put on the DVHs of the OARs and the volume of healthy tissue receiving more than 5Gy (V5Gy). Finally, the cumulative monitor units (MUs) and treatment times of the different technologies were compared. For that purpose the different linac calibrations conditions were normalised except the Tomotherapy machine.\nAll evaluation parameters were averaged over the 10 patients for each technique and planning modality. The standard deviations for all evaluation values were calculated over the ten patients.\n[SUBTITLE] Results [SUBSECTION] All IMRT technologies with their respective TPSs were able to provide treatment plans which fulfilled the planning goals. 
Figure 1 shows, as an example, the DVHs of one patient for both PTVs and all IMRT techniques. The coverage of the PTVs is shown in figures 2 and 3. In these figures the dose delivered to 99% of the PTV is used as the criterion. These doses range from 91% to 95% of the prescribed dose for PTV1 and from 84% to 93% for PTV2.\nThe prescribed doses are 55.8 Gy to the low dose region and 65.1 Gy to the high dose region. The PTV2 is a subset of PTV1.\nDose at 99% of PTV2 as a function of technology and TPS.\nDose at 99% of PTV1 as a function of technology and TPS.\nThe median doses of the low and high dose PTVs range from 99.9% (Tomo) to 104.9% (VMAT) for PTV2 and from 101.4% (KonRad) to 105.8% (VMAT) for PTV1, as shown in figures 4 and 5.\nMedian dose of PTV2 as a function of technology and TPS.\nMedian dose of PTV1 as a function of technology and TPS.\n[SUBTITLE] Conformation evaluation [SUBSECTION] Figures 6 and 7 show the CI values. The best conformation was achieved with KonRad+step&shoot, with a mean CI of 1.17 for PTV2. The CI values for PTV2 were otherwise rather similar: 1.30 for sliding window, 1.31 for Tomo, 1.32 for DAO+step&shoot and 1.33 for Pinnacle+step&shoot, while it was 1.38 for both VMAT and RapidArc.\nConformity index of PTV2 as a function of technology and TPS.\nConformity index of PTV1 as a function of technology and TPS.\nThe conformation of PTV1 was again best for KonRad+step&shoot (1.33). The second best results were achieved by the sliding window technique and Tomo (both 1.47), followed by RapidArc (1.63), DAO+step&shoot (1.68), VMAT (1.94) and, last, Pinnacle+step&shoot with 2.82.\n[SUBTITLE] Homogeneity evaluation [SUBSECTION] The HI values for PTV2 were not evaluated because not all TPS were able to provide PTV2 with the boost PTV excluded. HI values for PTV1 are shown in figure 8. The best HI for PTV1 was found with Tomo (0.047), followed by sliding window (0.062). Higher HI values were found for the RapidArc (0.078), DAO+step&shoot (0.083) and VMAT (0.091) treatment plans, as well as for the KonRad+step&shoot and Pinnacle+step&shoot plans (both 0.100).\nHomogeneity index of PTV1 as a function of technology and TPS.\n[SUBTITLE] Evaluation of OAR sparing [SUBSECTION] A summary of the results concerning OAR sparing is given in table 3. Not all TPS could meet the OAR objectives. The median doses to the parotids were 14.1 Gy for Tomo, 17.0 Gy for step&shoot+DAO, 18.6 Gy for sliding window, 21 Gy for step&shoot+KonRad, 23 Gy for VMAT, 23.3 Gy for step&shoot+Pinnacle and 26.5 Gy for RapidArc.\nOAR doses as a function of IMRT technology\nThe maximum doses to the spinal cord plus a 7 mm margin varied from 34.2 Gy (Tomo), 40.6 Gy (VMAT), 42 Gy (RapidArc), 42.4 Gy (step&shoot+DAO), 42.9 Gy (KonRad+step&shoot) and 43.2 Gy (Pinnacle+step&shoot) to 44.9 Gy (sliding window).\nThe median doses to the mandible were 36.1 Gy (Tomo), 39.5 Gy (Pinnacle+step&shoot), 40 Gy (KonRad+step&shoot), 41.2 Gy (RapidArc), 42.9 Gy (step&shoot+DAO), 43.1 Gy (VMAT) and 43.7 Gy (sliding window).\n[SUBTITLE] Evaluation of low dose burden, MUs and treatment time [SUBSECTION] Table 4 summarizes the results for the volume receiving more than 5 Gy (V5Gy), the MUs and the treatment time. The lowest V5Gy was achieved with the sliding window technique with fixed gantry angles (3499 ccm). The other technologies gave the following values, in increasing order: VMAT (4498 ccm), KonRad+step&shoot (4525 ccm), Pinnacle+step&shoot (5010 ccm), Tomo (5122 ccm), DAO+step&shoot (5332 ccm) and RapidArc (5480 ccm).\nMUs, treatment time and V5Gy as a function of IMRT technology\nThe comparison of the MUs for the different technologies showed a wide range. The normalised MUs were lowest for DAO+step&shoot (408), followed by RapidArc (437) and VMAT (501). The step&shoot technique planned with KonRad required on average 800 MU, but when planned with Pinnacle this increased to 1059 MU on average. The sliding window technique needed on average 1140 MU for IMRT delivery.\nThe shortest mean treatment times were associated with RapidArc (2.5 min with 2 arcs), followed by DAO+step&shoot (7 min), Tomo (8 min), VMAT (9 min with 2 arcs), sliding window (10.5 min) and step&shoot with KonRad and Pinnacle (11 min).", "Ten patients with complex-shaped targets in the head-and-neck region (oropharynx, hypopharynx, larynx) suitable for an SIB technique were selected for this retrospective multi-centre treatment planning study. The characteristics of these patients are shown in Table 1.\nOverview of the patients", "All PTVs and OARs were contoured in one TPS at the study coordination centre in Jena. CT data including the structure sets of all patients were transferred to the different centres, each of which provided one of the following treatment technologies: Tomotherapy, VMAT, RapidArc, sliding window or step&shoot IMRT. More specifically, the following TPS were used: the TPS HiArt (Tomotherapy) was used for helical tomotherapy (Tomo); rotational IMRT (VMAT) for an ELEKTA linac was planned with the TPS Monaco, while rotational IMRT delivered with a Varian linac (RapidArc) was planned with Eclipse. For static gantry IMRT four TPS were used: for step&shoot IMRT the KonRad (Siemens) system, the TPS Pinnacle (ADAC) and the Panther DAO (Prowess), and for sliding window IMRT the Eclipse (Varian) system. All treatment plans were calculated with a nominal energy of 6 MV. A detailed overview of the technologies used, the TPS and versions, the linacs etc. is given in table 2.\nOverview of used technologies, TPS and versions, linacs, number of beams or arcs and energy\nThe aim of the planning study was to achieve similar median doses in the PTVs for all ten patients. Depending on the therapy concept, which is based on the resection status, the prescribed median PTV dose was defined as 52.2 Gy or 55.8 Gy to the lymph node region (PTV2) and as 60.9 Gy or 65.1 Gy to the integrated boost volume (PTV1). The minimum dose criterion (93% of the prescribed dose to at least 99% of the PTV) was taken from the RTOG H0022 protocol. The maximum dose criterion required that no more than 1% of the PTV receives more than 110% of the prescribed dose. Additionally, the OAR objectives for the parotid glands (Dmedian < 26 Gy), the mandible (Dmedian < 45 Gy) and the spinal cord plus a 7 mm margin (Dmax < 43 Gy) should be satisfied. Fulfilling the PTV dose criteria was given the highest priority in treatment planning, except where this would have meant violating the spinal cord criterion.", "All doses in the evaluation are relative doses, normalised to the prescribed doses of PTV1 and PTV2. The evaluation was based on several criteria. The first criterion was the PTV coverage with 93% of the prescribed dose. The conformation of the PTVs (with respect to 93% of the prescribed dose) was described by the conformity index (CI = Volume93%/PTV). This specific formula was selected based on the assumption that no more than 1% of any PTV should receive less than 93% of its prescribed dose as the minimum criterion, i.e. almost 100% of the PTV should receive at least 93% of the dose. Target dose heterogeneity was described by the homogeneity index (HI = [D5% - D95%]/Dmean), i.e. a smaller HI indicates a better plan in the comparison. Another main focus of the comparison was put on the DVHs of the OARs and on the volume of healthy tissue receiving more than 5 Gy (V5Gy). Finally, the cumulative monitor units (MUs) and treatment times of the different technologies were compared. For this purpose the different linac calibration conditions were normalised, except for the Tomotherapy machine.\nAll evaluation parameters were averaged over the 10 patients for each technique and planning modality. The standard deviations of all evaluation values were calculated over the ten patients."
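As a minimal sketch of how the evaluation criteria defined above (D99% coverage, D1% maximum, CI and HI) can be computed from a cumulative DVH, the following Python example may help; the DVH shape, mean dose and volumes used here are hypothetical illustration values, not data from the study.

import numpy as np

def dose_at_volume(dose_bins, volume_percent, v):
    """Dose received by at least v% of the structure (e.g. v=99 gives D99%)."""
    # volume_percent is the cumulative DVH: % of the structure receiving >= dose_bins[i]
    return np.interp(v, volume_percent[::-1], dose_bins[::-1])

def conformity_index(volume_covered_by_93pct_isodose, ptv_volume):
    """CI = Volume93% / PTV, as defined in the evaluation section."""
    return volume_covered_by_93pct_isodose / ptv_volume

def homogeneity_index(dose_bins, volume_percent, mean_dose):
    """HI = (D5% - D95%) / Dmean; a smaller value indicates a more homogeneous plan."""
    d5 = dose_at_volume(dose_bins, volume_percent, 5.0)
    d95 = dose_at_volume(dose_bins, volume_percent, 95.0)
    return (d5 - d95) / mean_dose

# Hypothetical PTV1 DVH, expressed in relative dose (% of the 65.1 Gy prescription)
dose_bins = np.linspace(0.0, 115.0, 500)                              # relative dose [%]
volume_percent = 100.0 / (1.0 + np.exp((dose_bins - 103.0) / 1.5))    # toy cumulative DVH

d99 = dose_at_volume(dose_bins, volume_percent, 99.0)
d1 = dose_at_volume(dose_bins, volume_percent, 1.0)
print(f"D99% = {d99:.1f}%  (criterion: >= 93%)")
print(f"D1%  = {d1:.1f}%  (criterion: <= 110%)")
print("CI =", conformity_index(volume_covered_by_93pct_isodose=660.0, ptv_volume=500.0))
print("HI =", round(homogeneity_index(dose_bins, volume_percent, mean_dose=103.0), 3))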
, "All IMRT technologies with their respective TPSs were able to provide treatment plans that fulfilled the planning goals. Figure 1 shows, as an example, the DVHs of one patient for both PTVs and all IMRT techniques. The coverage of the PTVs is shown in figures 2 and 3. In these figures the dose delivered to 99% of the PTV is used as the criterion. These doses range from 91% to 95% of the prescribed dose for PTV1 and from 84% to 93% for PTV2.\nThe prescribed doses are 55.8 Gy to the low dose region and 65.1 Gy to the high dose region. The PTV2 is a subset of PTV1.\nDose at 99% of PTV2 as a function of technology and TPS.\nDose at 99% of PTV1 as a function of technology and TPS.\nThe median doses of the low and high dose PTVs range from 99.9% (Tomo) to 104.9% (VMAT) for PTV2 and from 101.4% (KonRad) to 105.8% (VMAT) for PTV1, as shown in figures 4 and 5.\nMedian dose of PTV2 as a function of technology and TPS.\nMedian dose of PTV1 as a function of technology and TPS.", "Figures 6 and 7 show the CI values. The best conformation was achieved with KonRad+step&shoot, with a mean CI of 1.17 for PTV2. The CI values for PTV2 were otherwise rather similar: 1.30 for sliding window, 1.31 for Tomo, 1.32 for DAO+step&shoot and 1.33 for Pinnacle+step&shoot, while it was 1.38 for both VMAT and RapidArc.\nConformity index of PTV2 as a function of technology and TPS.\nConformity index of PTV1 as a function of technology and TPS.\nThe conformation of PTV1 was again best for KonRad+step&shoot (1.33). The second best results were achieved by the sliding window technique and Tomo (both 1.47), followed by RapidArc (1.63), DAO+step&shoot (1.68), VMAT (1.94) and, last, Pinnacle+step&shoot with 2.82."
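A small worked example may make the relative figures above easier to read; the prescription (65.1 Gy), the VMAT median dose (105.8%) and the KonRad CI (1.33) are taken from the text, whereas the PTV1 volume is a hypothetical value chosen only for illustration.

# Worked example (illustrative values only)
prescription_ptv1_gy = 65.1
median_relative = 105.8 / 100.0            # VMAT median dose for PTV1, as reported above
print("VMAT median PTV1 dose:", round(prescription_ptv1_gy * median_relative, 1), "Gy")

ptv1_volume_ccm = 500.0                    # hypothetical PTV1 volume
ci = 1.33                                  # KonRad+step&shoot, PTV1
print("Volume enclosed by the 93% isodose:", round(ci * ptv1_volume_ccm), "ccm")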
, "The HI values for PTV2 were not evaluated because not all TPS were able to provide PTV2 with the boost PTV excluded. HI values for PTV1 are shown in figure 8. The best HI for PTV1 was found with Tomo (0.047), followed by sliding window (0.062). Higher HI values were found for the RapidArc (0.078), DAO+step&shoot (0.083) and VMAT (0.091) treatment plans, as well as for the KonRad+step&shoot and Pinnacle+step&shoot plans (both 0.100).\nHomogeneity index of PTV1 as a function of technology and TPS.", "A summary of the results concerning OAR sparing is given in table 3. Not all TPS could meet the OAR objectives. The median doses to the parotids were 14.1 Gy for Tomo, 17.0 Gy for step&shoot+DAO, 18.6 Gy for sliding window, 21 Gy for step&shoot+KonRad, 23 Gy for VMAT, 23.3 Gy for step&shoot+Pinnacle and 26.5 Gy for RapidArc.\nOAR doses as a function of IMRT technology\nThe maximum doses to the spinal cord plus a 7 mm margin varied from 34.2 Gy (Tomo), 40.6 Gy (VMAT), 42 Gy (RapidArc), 42.4 Gy (step&shoot+DAO), 42.9 Gy (KonRad+step&shoot) and 43.2 Gy (Pinnacle+step&shoot) to 44.9 Gy (sliding window).\nThe median doses to the mandible were 36.1 Gy (Tomo), 39.5 Gy (Pinnacle+step&shoot), 40 Gy (KonRad+step&shoot), 41.2 Gy (RapidArc), 42.9 Gy (step&shoot+DAO), 43.1 Gy (VMAT) and 43.7 Gy (sliding window).", "Table 4 summarizes the results for the volume receiving more than 5 Gy (V5Gy), the MUs and the treatment time. The lowest V5Gy was achieved with the sliding window technique with fixed gantry angles (3499 ccm). The other technologies gave the following values, in increasing order: VMAT (4498 ccm), KonRad+step&shoot (4525 ccm), Pinnacle+step&shoot (5010 ccm), Tomo (5122 ccm), DAO+step&shoot (5332 ccm) and RapidArc (5480 ccm).\nMUs, treatment time and V5Gy as a function of IMRT technology\nThe comparison of the MUs for the different technologies showed a wide range. The normalised MUs were lowest for DAO+step&shoot (408), followed by RapidArc (437) and VMAT (501). The step&shoot technique planned with KonRad required on average 800 MU, but when planned with Pinnacle this increased to 1059 MU on average. The sliding window technique needed on average 1140 MU for IMRT delivery.\nThe shortest mean treatment times were associated with RapidArc (2.5 min with 2 arcs), followed by DAO+step&shoot (7 min), Tomo (8 min), VMAT (9 min with 2 arcs), sliding window (10.5 min) and step&shoot with KonRad and Pinnacle (11 min)."
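The low-dose burden V5Gy reported above is simply the absolute volume receiving 5 Gy or more. A minimal Python sketch of how such a value can be estimated from a 3D dose grid is given below; the grid size, voxel size and dose values are hypothetical and do not correspond to the clinical planning systems used in the study.

import numpy as np

def v5gy_ccm(dose_grid_gy, voxel_size_mm, body_mask):
    """Volume (in ccm) inside the body outline receiving >= 5 Gy."""
    voxel_volume_ccm = np.prod(voxel_size_mm) / 1000.0          # mm^3 -> cm^3
    return np.count_nonzero((dose_grid_gy >= 5.0) & body_mask) * voxel_volume_ccm

# Hypothetical example: 100 x 100 x 80 dose grid with 3 mm voxels
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 60.0, size=(100, 100, 80))
body = np.ones_like(dose, dtype=bool)                           # toy body outline
print(f"V5Gy = {v5gy_ccm(dose, voxel_size_mm=(3.0, 3.0, 3.0), body_mask=body):.0f} ccm")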
, "The present study is a multi-institutional study; this implies that there are some "subjective" factors which depend on the planning philosophy of the respective hospital, e.g. the number of beam directions, the number of segments and arcs, limitations of the MLCs, and the relative weighting of PTV and OAR objectives. The level of experience of the planners in the different centres also plays a role, which is why we selected experienced users for every technology and TPS combination. Ultimately, the results of this multi-institutional study show that all of the IMRT technologies used, together with their TPSs, are able to provide treatment plans with satisfying target coverage while at the same time respecting the defined OAR criteria. However, there is no single best technology with respect to all evaluation parameters, i.e. all techniques come with certain advantages and disadvantages. As far as treatment planning is concerned, there were substantial differences in terms of usability when specifying the planning goals for the different volumes. It would be of great help for treatment planning if TPS functions were available that excluded intersections automatically or that allowed priorities to be assigned to different, intersecting PTVs.\nThe results are in good agreement with published data [26-29] regarding volumetric arc therapy. Only the sliding window results of our study are considerably better than those reported in [17]. A differentiation of the patients into two groups (post-operative patients and primary RT) did not show significant differences in the results.\nAll treatment plans offer very good coverage of PTV1 and good coverage of PTV2. The lowest dose to PTV2, with clearly inferior results compared to the other techniques, was obtained with the Pinnacle step&shoot combination. The median doses for PTV2 and PTV1 were in a range between 100% and 106%. This implies that the planners of the participating institutes improved the coverage of the PTVs by increasing the median dose. The requirements demanded by the H0022 protocol are thus more or less fulfilled. ICRU recommendations for prescribing, reporting and recording IMRT have just been published, which will be helpful in the future to harmonise IMRT practice [30].\nThe sliding window, RapidArc and Tomo techniques resulted in better target dose homogeneity for PTV1 compared to VMAT and step&shoot with Panther DAO, Pinnacle and KonRad.\nAll technology-TPS combinations fulfil the OAR constraints. Only the high maximum dose to the spinal cord obtained with sliding window stands out (but, with a margin of 7 mm, it remains clinically acceptable). The comparatively high median dose to the spared parotid obtained with RapidArc is also noteworthy.\nThe volume receiving 5 Gy or more is lowest with the sliding window technique (3800 ccm), followed by VMAT and KonRad step&shoot (about 4500 ccm). Pinnacle step&shoot, Tomo, Panther DAO and RapidArc deliver 5 Gy or more to volumes of 5000 ccm or larger. It is of interest that neither the "classic IMRT" with fixed gantry angles nor the rotation-based IMRT is clearly the superior solution. It seems that rotational IMRT techniques do not automatically generate a larger volume receiving 5 Gy or more. This volume could probably be reduced even further by using higher photon beam energies.\nThe treatment delivery times obtained in the present study were shortest for the RapidArc solution. The delivery times for Tomo and Panther DAO were in the medium range, while VMAT, step&shoot with KonRad or Pinnacle, and sliding window were characterised by the longest times. As far as the VMAT results on delivery efficiency are concerned, it needs to be emphasised that Monaco version 2.01 was used in the present study; a new sequencer, available in successive versions of this TPS, has recently improved its efficiency.\nThe MUs are significantly reduced for DAO step&shoot (408 MU), RapidArc (437 MU) and VMAT (501 MU). The number of MUs needed for a step&shoot KonRad plan lies in the middle of the range (about 800 MU). Pinnacle step&shoot needs 1060 MU and sliding window requires the highest number, 1140 MU. It is known that the number of MUs is one factor influencing the peripheral dose, but there are other factors as well, such as the linac head shielding and collimation system (shape, thickness, material), the focus-to-body distance and the beam spectrum. The peripheral dose is undoubtedly important, but in the present context it is subordinate to the overall treatment plan quality."
, "This is the first multi-institutional study to determine the influence of seven different combinations of treatment technology and TPS on the planning of head and neck cancer treatments with a simultaneous integrated boost technique. The results presented above indicate that all IMRT delivery technologies with their associated TPS provide IMRT plans with satisfying target coverage while at the same time largely respecting the defined OAR criteria.\nThe sliding window, RapidArc and Tomo techniques provide better target dose homogeneity than VMAT and step&shoot with Panther DAO, Pinnacle and KonRad. The conformity achieved was best for KonRad for both the high and the low dose PTV, with a remarkable margin over all other IMRT techniques. The overall treatment plan quality obtained with Tomo, in terms of target coverage, HI, CI and OAR sparing, appears better than that of the other technology-TPS combinations. For the parotid gland, clear median dose differences were observed between the different IMRT techniques. Rotational IMRT and Tomo appear advantageous in some respects regarding OAR sparing and treatment delivery efficiency, at the cost of a higher dose burden (>5 Gy) to normal tissues. The application times are shortest for RapidArc, with some concessions, e.g. in parotid sparing. The combination of Panther DAO and step&shoot shows that a segmentation algorithm optimised for time-saving applications reduces the treatment time, although likewise with concessions in plan quality. The application times are longest with VMAT, with step&shoot using KonRad or Pinnacle, and with sliding window.\nWe expect the results of our study, e.g. the partial underdosage, the differences in OAR sparing and the dose burden of 5 Gy or more, to be of clinical relevance; but this should be investigated in prospective studies.", "The authors declare that they have no competing interests.", "TW coordinated the entire study. Patient accrual and clinical data collection were done by TGW. Treatment planning was conducted by TW, EB, IF, GH, MK, GL, KS, HS and DW.\nData collection was carried out by TB. Data analysis was done by TW and TB.\nThe manuscript was prepared by TW. Corrections and/or improvements were suggested by DG, IF, HS, KS and TGW. Major revisions were done by TW. All authors read and approved the final manuscript." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery.
21338504
A precise placement of dental implants is a crucial step to optimize both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool to control the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure relies on the lack of accuracy in transferring CT planning information to surgical field through custom-made stereo-lithographic surgical guides.
BACKGROUND
In this work, a novel methodology is proposed for monitoring loss of accuracy in transferring CT dental information into periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes.
METHODS
A clinical case involving a patient with a fully edentulous jaw has been used as a test case to assess the accuracy of the various steps involved in manufacturing surgical guides. In particular, a surgical guide has been designed to place implants in the bone structure of the patient. The analysis of the results has allowed the clinician to monitor the errors introduced at each step of manufacturing the physical templates.
RESULTS
The use of an optical scanner, which has a higher resolution and accuracy than CT scanning, has proved to be a valid support for checking the precision of the various physical models adopted and for pointing out possible error sources. A case study regarding a fully edentulous patient has confirmed the feasibility of the proposed methodology.
CONCLUSIONS
[ "Algorithms", "Dental Implantation", "Humans", "Image Enhancement", "Image Interpretation, Computer-Assisted", "Imaging, Three-Dimensional", "Pattern Recognition, Automated", "Reproducibility of Results", "Sensitivity and Specificity", "Subtraction Technique", "Surgery, Computer-Assisted", "Tomography, X-Ray Computed" ]
3047291
null
null
Methods
The proposed methodology is based on the combined use of CT scan data and a structured light vision system. In particular, the data acquisition phase involves two different scanning technologies: radiological scanning and optical scanning.\nA clinical case of a fully edentulous patient has been used as a test case to assess the feasibility of the proposed methodology. Ethics approval was obtained from the Human Research Ethics Committee at the Sassari Hospital (n° 971) and written approval was obtained from the patient.\n[SUBTITLE] Optical scanning [SUBSECTION] The 3D optical scanner used in this work is based on a stereo vision approach with structured coded light projection [13]. The optical unit is composed of a monochrome digital camera (CCD - 1280 × 960 pixels) and a multimedia white light projector (DLP - 1024 × 768 pixels) that are used as active devices for a triangulation process. The digitizer is integrated with a rotary axis, automatically controlled by a stepper motor with a resolution of 400 steps per round (Figure 1). The scanner is capable of measuring about 1 million 3D points within the field of view (100 mm × 80 mm), with a spatial resolution of 0.1 mm and an overall accuracy of 0.01 mm [13].\nOptical scanner. 3D optical scanner used to capture dental models\n[SUBTITLE] CT scan data [SUBSECTION] CT scanning of the maxillofacial region is based on the acquisition of several slices of the jaw bone at each turn of a helical movement of an x-ray source and a reciprocating area detector. The acquired data can be stored in DICOM format.\nIn this work, CT scanning has been performed using a Toshiba Aquilion system by Toshiba Medical Systems, Japan, with 0.5 mm slice thickness. 3D models have been reconstructed by processing DICOM images with 3D Slicer (version 3.2), a freely available open source software package initially developed as a joint effort between the Surgical Planning Lab at Brigham and Women's Hospital and the MIT Artificial Intelligence Lab. The software has now evolved into a national platform supported by a variety of federal funding sources [14]. 3D Slicer is an end-user application to process medical images and to generate 3D volumetric data sets, which can be used to provide primary reconstruction images in three orthogonal planes (axial, sagittal and coronal). 3D models of anatomical structures can be generated through a powerful and robust segmentation tool on the basis of a semi-automated approach. The displayed gray level of the voxels representing hard tissues can be dynamically altered to provide the most realistic appearance of the bone structure, minimizing soft tissues and the superimposition of metal artifacts (Figure 2). Initial segmentation of CT data can then be obtained by threshold segmentation. This involves the manual selection of a threshold value that can be dynamically adjusted to provide the optimal filling of the structure of interest in all the slices acquired.\nCT data. Maxilla CT data in the axial, sagittal and coronal planes and a full 3D view
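The following short Python sketch illustrates the principle of threshold segmentation described above and why the chosen threshold changes the reconstructed volume; it is a simplified illustration with a synthetic CT volume and hypothetical Hounsfield values, not the 3D Slicer pipeline itself.

import numpy as np

def segment_volume_ccm(ct_hu, threshold_hu, voxel_size_mm):
    """Binary threshold segmentation; returns the mask and the segmented volume in ccm."""
    mask = ct_hu >= threshold_hu
    return mask, np.count_nonzero(mask) * np.prod(voxel_size_mm) / 1000.0

# Toy CT volume: a bright sphere (template/bone-like) embedded in a darker background
z, y, x = np.mgrid[0:60, 0:60, 0:60]
ct = np.where((x - 30) ** 2 + (y - 30) ** 2 + (z - 30) ** 2 < 20 ** 2, 600.0, -200.0)
ct += np.random.default_rng(1).normal(0.0, 150.0, ct.shape)      # acquisition noise

for thr in (100.0, 250.0, 400.0):                                # candidate threshold values
    _, vol = segment_volume_ccm(ct, thr, voxel_size_mm=(0.5, 0.5, 0.5))
    print(f"threshold {thr:5.0f} HU -> segmented volume {vol:6.2f} ccm")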
[SUBTITLE] 3D reconstructions [SUBSECTION] The accuracy of 3D reconstructions based on CT data analysis may be affected by several factors that should be considered in surgical treatment planning. A reduction of image quality may be caused by metallic artifacts and/or patient motion. Moreover, the influence of an appropriate segmentation on the final 3D representation is a matter of utmost importance [15]. The segmentation process typically depends on the adopted mathematical algorithm, on the spatial and contrast resolution of the slice images, and on the technical skill of the operator in selecting the optimal threshold value. Metal restorations, as well as tissues not belonging to the structure of interest (i.e. antagonistic teeth), must be carefully cleaned up from the CT scan images when models for interactive planning are prepared. This process can lead to different volume reconstructions due to the operator's selection of threshold values, even if proven and patented software is used. In particular, the detection of the optimal threshold value is not straightforward when images presenting smooth intensity distributions are processed (Figure 3). For this reason, a methodology to verify the accuracy of the 3D reconstruction of CT-derived images would be necessary for clinical applications.\n(A-D) CT data segmentation process. (A) DICOM image of the radiographic template with the associated grey intensity levels, (B-D) segmentation with three different threshold values.\nIn this work, a validation process for 3D reconstructions of radiographic templates used in implant guided surgery has been developed using the optical scanner. As previously illustrated, the radiological template (Figure 4A) is manually manufactured on the basis of the diagnostic wax-up, to take into account the prosthesis design, and on the gypsum dental cast (Figure 4B), to assure the optimal fitting of the mating surfaces. The 3D model of the radiographic template is reconstructed by processing the DICOM images (Figure 5A). The radiographic template is also acquired by the optical scanner. The 3D model obtained by the structured light scanning system (Figure 5B) is used as the gold standard to improve the accuracy of the CT reconstruction. The comparison between the CT reconstructed and the optically captured models gives the information needed to optimize the parameters of the DICOM image segmentation process. The data acquired by the optical scanner are aligned to the model obtained by the CT reconstruction through a point-based registration technique. Corresponding pairs of points are manually selected on the two different models and the rigid transformation between the two objects is determined by applying the singular value decomposition (SVD) method [5]. The alignment is then refined by applying a surface-based registration technique through best fitting algorithms [16].\n(A-B) Preoperative and anatomical dental models. (A) Radiographic template with gutta-percha markers, (B) gypsum dental cast.\n(A-B) Digital models of the radiographic template. 3D digital models of the radiographic template obtained by CT data (A) and by the optical scanner (B).\nFigure 6 shows the full-field 3D comparison of three different reconstructions of the radiological guide, obtained by varying the threshold values, with respect to the model obtained by the optical scanner. The distribution of discrepancies between the datasets obtained using the two scanning technologies, with both positive and negative deviations, quantifies the dimensional difference of the CT-based reconstruction, which can turn out to be smaller (Figure 6A) or greater (Figure 6C). The search for the optimal threshold value can therefore be made by minimizing the absolute mean of the distances between the two models (Figure 6B). Histogram plots of these distributions are reported in Figure 6D, whereas Table 1 reports the associated statistical data (mean and standard deviation).\nStatistical data relative to different DICOM reconstructions\nMean and standard deviation of the discrepancies in the three different cases reported in Figure 6 and relative to the threshold values used in Figure 3 (B-D)\n(A-D) Full-field 3D comparisons of three different reconstructions of the radiographic template. Full-field 3D comparison of three different DICOM reconstructions of the radiographic template with respect to the model obtained by the optical scanner and relative histogram plots (D). The DICOM model (Figure 5A) turns out to be smaller than (A), comparable to (B) and greater than (C) the one obtained by the optical scanner (Figure 5B).
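The point-based registration step mentioned above is commonly implemented with the standard Arun/Kabsch SVD solution for the least-squares rigid transformation between paired landmarks; the sketch below is assumed to correspond to that approach, and the landmark coordinates are hypothetical values chosen only so that the example runs.

import numpy as np

def rigid_transform_svd(source_pts, target_pts):
    """Least-squares rotation R and translation t such that R @ source + t ~= target."""
    src_centroid = source_pts.mean(axis=0)
    tgt_centroid = target_pts.mean(axis=0)
    h = (source_pts - src_centroid).T @ (target_pts - tgt_centroid)   # covariance matrix
    u, _, vt = np.linalg.svd(h)
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:          # guard against a reflection solution
        vt[-1, :] *= -1
        r = vt.T @ u.T
    t = tgt_centroid - r @ src_centroid
    return r, t

# Hypothetical landmark pairs (mm), e.g. picked on corresponding gutta-percha markers
source = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 12.0, 0.0], [3.0, 4.0, 8.0]])
true_r = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
target = source @ true_r.T + np.array([5.0, -2.0, 1.5])

r, t = rigid_transform_svd(source, target)
residual = np.linalg.norm((source @ r.T + t) - target, axis=1)
print("max landmark residual [mm]:", residual.max())   # ~0 for a consistent rigid motion

In practice, a surface-based refinement (best fitting, e.g. ICP-type algorithms as cited in [16]) would follow this initial landmark alignment.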
null
null
null
null
[ "Background", "Optical scanning", "CT scan data", "3D reconstructions", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Over the last few years, dental prostheses supported by osseointegrated implants have progressively replaced the use of removable dentures in the treatment of edentulous patients. The restoration of missing teeth must provide a patient with aesthetical, biomechanical and functional requirements of natural dentition, particularly concerning chewing functions. When conventional implantation techniques are used, the clinical outcome is often unpredictable, since it greatly relies on skills and experience of dental surgeons.\nThe placement of endosseous implants is based on invasive procedures which require a long time to be completed. Recently, many different implant planning procedures have been developed to support oral implant positioning. Number, size, position of implants must be related to bone morphology, as well as to the accompanying vital structures (e.g. neurovascular bundles). Complex surgical interventions can be performed using preoperative planning based on 3D imaging. The developments in computer-assisted surgery have brought to the definition of effective operating procedures in dental implantology. Several systems have been designed to guide treatment-planning processes: from simulation environments to surgical fields [1]. The guided approaches are generally based on three-dimensional reconstructions of patient anatomies processing data obtained by either Computed Tomography (CT) or Cone-Beam Computed Tomography (CBCT) [2]. These methodologies allow more accurate assessments of surgical difficulties through less invasive procedures and operating time reductions. In particular, radiographic data (depth and proximity to anatomical landmarks) and restorative requirements are crucial for a complete transfer of implant planning (positioning, trajectory and distribution) to surgical field [3]. Virtual planning processes provide digital models of drill guides, which are typically manufactured by stereo-lithography and used as surgical guidance in the preparation of implant receptor sites.\nIn the past decade, a methodology based on the use of two different guides and a double CT scan procedure, has been introduced [4] and later commercialized as NobelGuide® by NobelBiocare (Zurich, Switzerland). This procedure involves an intermediate template (radiographic template) that is used to refer the soft tissues with respect to the bone structure derived from patient CT scan data. The guide is manufactured on the basis of diagnostic wax-up reproducing the desired prosthetic end result. The diagnostic wax-up is obtained starting from the dental cast, produced from the impression of the patient's mouth, and helps in the definition of a proper dental prosthesis design. Moreover, the radiographic template is made of a non radio-opaque material, usually acrylic resin, to avoid image disturbs when CT scans of patients are carried. Then, the template is separately scanned changing radiological parameters in order to visualize the acrylic resin. The computer-based alignment of the prosthetic model with respect to the maxillofacial structure is obtained by small radio-opaque gutta-percha spheres inserted within the radiographic template. These gutta-percha markers are visible in both the different CT scans and can be used as references to register the two data sets through point-based rigid registration techniques [5].\nSpecific 3D image-based software programs for implant surgery planning, based on CT scan data, have been recently developed and clinically approved by many manufacturers. 
These software applications allow surgeons to locate implant receptor sites and simulate implant placement [6]. The planned implant positions are then transferred to the surgical field by means of a surgical guide made by stereo-lithographic techniques. Surgical guides can be bone-supported, tooth-supported or mucosa-supported depending on the specific patient's conditions. Bone-supported guides are designed to fit on the jawbone and can be used for partially or fully edentulous cases, while tooth-supported guides are tailored to fit directly on the teeth. The latters are mostly effective for single tooth and partially edentulous cases. Mucosa-supported surgical guides are rather designed for placement on soft tissues and are recommended for fully edentulous patients when minimally invasive surgery is required.\nThe surgical guide is then placed within the patient's mouth and can be anchored, especially when mucosa-supported guides are used, to the jawbone by stabilizing pins (Anchor Pins).\nThe weak point of the whole procedure relies on the accuracy in transferring information deriving from CT data into surgical planning. Geometrical deviations of implant positions between planning and intervention stages could cause irreversible damages of anatomical structure, such as sensory nerves. The surgical guide should closely fit with the hard and/or soft tissue surface in a unique and stable position in order to accurately transfer the pre-operative treatment plan. If the surgical template is not accurate, the fit will be improper, compromising the implant placement. Even small angular errors in the placement of perforation guides can, indeed, propagate in considerable horizontal deviations due to the depth of the implant.\nA previous in ex vivo study to assess the accuracy of 10-15 mm-long implant positioning using CBCT, revealed a mean angular deviation of 2° (SD ± 0.8, range 0.7° ÷ 4°) and a mean linear deviation of 1.1 mm (SD ± 0.7 mm, range 0.3 ÷ 2.3 mm) at the hexagon and 2 mm (SD ± 0.7 mm, range 0.7 ÷ 2.4 mm) at the tip [7].\nSarment et al. [8] compared the accuracy of a stereo-lithographic surgical template to conventional surgical template in vitro. An average linear deviation of 1.5 mm at the entrance, and 2.1 mm at the apex for the conventional template, as compared with 0.9 and 1.0 mm for the stereo-lithographic surgical template was reported.\nDi Giacomo et al. [9] published a preliminary study involving the placement of 21 implants using a stereo-lithographic surgical template, showing an angular deviation of 7.25° between planned and actual implant axes, whereas the linear deviation was 1.45 mm.\nIn a recent study [10], the accuracy of a surgical template in transferring planned implant position to the real patient surgery has been assessed. The mean mesio-distal angular deviation of the planned to the actual was 0.17° (SD ± 5.02°) ranging from 0.262° to 12.2°, though, the mean bucco-lingual angular deviation was 0.46° (SD ± 4.48°) ranging from 0.085° to 7.67°.\nThese studies confirm that the error could be high, especially in neurovascular anatomical districts, such as the mandibular nerve. In this anatomical area, a moderate damage may also result in severe symptoms. 
For example, the lesion of the mandibular nerve is of the Wallerian degenerative type [11], which is a slow degenerative process and the diagnosis by laser-evoked potentials and trigeminal reflexes would allow early decompression [12].\nDeviations between planning and postoperative outcome may reflect the sum of many error sources. For instance, CT scan quality and processing of DICOM (Digital Imaging and Communication in Medicine) images affect the creation of the corresponding 3D digital models. Misalignment errors can also be introduced during the arrangement of the radiographic template within the maxillofacial structures by the gutta-percha markers. Moreover, further inaccuracies can be introduced in manufacturing physical models by stereo-lithographic techniques.\nThis paper concerns the development of an innovative methodology to evaluate the accuracy in transferring CT based implant planning into surgical fields for oral rehabilitation.", "The 3D optical scanner used in this work is based on a stereo vision approach with structured coded light projection [13]. The optical unit is composed of a monochrome digital camera (CCD - 1280 × 960 pixels) and a multimedia white light projector (DLP - 1024 × 768 pixels) that are used as active devices for a triangulation process. The digitizer is integrated with a rotary axis, automatically controlled by a stepper motor with a resolution of 400 steps per round (Figure 1). The scanner is capable of measuring about 1 million 3D points within the field of view (100 mm × 80 mm), with a spatial resolution of 0.1 mm and an overall accuracy of 0.01 mm [13].\nOptical scanner. 3D optical scanner used to capture dental models", "CT scanning of maxillofacial region is based on the acquisition of several slices of the jaw bone at each turn of a helical movement of an x-ray source and a reciprocating area detector. The acquired data can be stored in DICOM format.\nIn this work, CT scanning has been performed using a system Toshiba Aquilion by Toshiba Medical Systems, Japan, with 0.5 mm slice thickness. 3D models have been reconstructed processing DICOM images by means of 3D Slicer (version 3.2), a freely available open source software initially developed as a joint effort between the Surgical Planning Lab at Brigham and Women's Hospital and the MIT Artificial Intelligence Lab. The software has now evolved into a national platform supported by a variety of federal funding sources [14]. 3D Slicer is an end-user application to process medical images and to generate 3D volumetric data set, which can be used to provide primary reconstruction images in three orthogonal planes (axial, sagittal and coronal). 3D models of anatomical structure can be generated through a powerful and robust segmentation tool on the basis of a semi-automated approach. The displayed gray level of the voxels representing hard tissues can be dynamically altered to provide the most realistic appearance of the bone structure, minimizing soft tissues and the superimposition of metal artifacts (Figure 2). Initial segmentation of CT data can then be obtained by threshold segmentation. This involves the manual selection of a threshold value that can be dynamically adjusted to provide the optimal filling of the interested structure in all the slices acquired.\nCT data. 
Maxilla CT data in the axial, sagittal and coronal planes and a fully 3D vision", "The accuracy of 3D reconstruction based on CT data analysis may be affected by several factors that should be considered in surgical treatment planning. A reduction of image quality may be caused by metallic artifacts and/or patient motions. Moreover, the influence of an appropriate segmentation on the final 3D representation is a matter of utmost importance [15]. The segmentation process typically relies on the adopted mathematical algorithm, on spatial and contrast resolution of the slice images, on technical skills of the operator in selecting the optimal threshold value. Metal restorations as well as tissues not belonging to the structure of interest (i.e. antagonistic teeth) must be carefully cleaned up from the CT scan images when models for interactive planning are prepared. This process can lead to different volume reconstructions due to the operator's selection of threshold values, even if proved and patented software is used. In particular, the detection of the optimal threshold value is not straightforward when images presenting smooth intensity distributions are processed (Figure 3). For this reason, a methodology to verify the accuracy of the 3D reconstruction of CT derived images would be necessary for clinical applications.\n(A-D) CT data segmentation process. (A) DICOM image of the radiographic template with associated a row grey intensity level, (B-D) segmentation with three different threshold values.\nIn this work, a validation process for 3D reconstructions of radiographic templates used in implant guided surgery has been developed using the optical scanner. As previously illustrated, the radiological template (Figure 4A) is manually manufactured on the basis of the diagnostic wax-up to take into account prosthesis design, and on the gypsum dental cast (Figure 4B) to assure the optimal fitting of the mating surfaces. The 3D model of the radiographic template is reconstructed processing the DICOM images (Figure 5A). The radiographic template is also acquired by the optical scanner. The 3D model as obtained by the structured light scanning system (Figure 5B) is used as the gold standard to improve the accuracy of the CT reconstruction. The comparison between the CT reconstructed and the optically captured models gives the information to optimize the parameters of the DICOM images segmentation process. The data acquired by the optical scanner are aligned to the model obtained by the CT reconstruction through a point-based registration technique. Correspondent pairs of points are manually selected on the two different models and the rigid transformation between the two objects is determined by applying the singular value decomposition (SVD) method [5]. The alignment is then refined by applying a surface-based registration technique through best fitting algorithms [16].\n(A-B) Preoperative and anatomical dental models. (A) Radiographic template with gutta-percha markers, (B) gypsum dental cast.\n(A-B) Digital models of the radiographic template. 3D digital models of the radiographic template obtained by CT data (A) and by the optical scanner (B).\nFigure 6 shows the full-field 3D compare of three different reconstructions of the radiological guide, obtained varying the threshold values, with respect to the model obtained by the optical scanner. 
The distribution of discrepancies between the datasets obtained using the two scanning technologies, with both positive and negative deviations, quantifies the dimensional difference of the CT based reconstruction that can turn out to be smaller (Figure 6A) or greater (Figure 6C). The search of the optimal threshold value can therefore be made by minimizing the absolute mean of the distances between the two models (Figure 6B). Histogram plots of these distributions are reported in Figure 6D, whereas Table 1 reports the associated statistical data (mean and standard deviation).\nStatistical data relative to different DICOM reconstructions\nMean and standard deviation of the discrepancies in the three different cases reported in Figure 6 and relative to the threshold values used in Figure 3 (B-D)\n(A-D) Full-field 3D comparisons of three different reconstructions of the radiographic template. Full-field 3D compare of three different DICOM reconstructions of the radiographic template with respect to the model obtained by the optical scanner and relative histogram plots (D). The DICOM model (Figure 5A) results smaller (A), comparable (B) and greater (C) than the one obtained by the optical scanner (Figure 5B).", "In the present work, a clinical case, relative to a fully edentulous patient, has been used as test case to assess the accuracy of the various steps concurring in manufacturing surgical guides. A study surgical template (Figure 7B), called Duplicate Radiographic Template (D.R.T) and based on the same CT data used to fabricate the mucosa-supported surgical guide, has been manufactured by a stereo-lithographic process. This template does not present the holes to hold the drill guides since the first requirement was just the reproduction of the only functional areas to wearing the guide. All the physical models (impression, cast, radiographic template, study surgical template) have been acquired by the optical scanner. The 3D digital models have been realigned by best fitting techniques in order to evaluate the discrepancies between the different shapes. The virtual alignments have been conducted by only referring the mating surfaces of the various models, since the crucial problem regards the proper fit between the final surgical guide and the patient's mucosa.\n(A-B) Impression and radiographic template. Patient's mouth impression (A) and Duplicate Radiographic Template (B)\nFigure 8 shows the 3D compare between the patient mouth's impression (Figure 7A) and the relative study cast (mean value -0.004 mm, SD 0.067 mm). The manufacturing of the gypsum cast is the first critical step of the whole process that can be verified, since the accuracy in detecting the impression is not measurable. Mismatch between the impression and the gypsum cast may cause improper fitting of the radiographic template, which could result stable on the cast, but floating or not wearable in the patient's mouth.\nFull-field 3D comparison between impression and cast. 3D compare between the impression and the gypsum cast models obtained by optical scanning.\nIn Figure 9, the distributions of the optical measurement discrepancies between corresponding points of the gypsum cast and, respectively, the radiological guide (Figure 9B) (mean value -0.009 mm, SD 0.069 mm) and the surgical guide or Duplicate Radiographic Template (Figure 9C) (mean value 0.013 mm, SD 0.141 mm) are reported. 
Moreover, the fitting of the radiological guide model, obtained by processing DICOM images on the gypsum cast has been verified (Figure 9A) (mean value -0.004 mm, SD 0.082 mm). Table 2 summarizes the same results in terms of mean value and standard deviation of the misalignments. Histogram plots relative to these distributions are reported in Figure 9D.\nStatistical data relative to discrepancies between cast and dental models\nMean and standard deviation of the discrepancies reported in Figure 8 and Figure 9\n(A-D) Full-field 3D comparisons between cast and dental models. Full-field distributions of the measurements discrepancies between gypsum cast model and, respectively, the radiological guide model as obtained by DICOM processing (A), the radiological guide model (B) and the Duplicate radiographic Template \"D.R.T.\" (C) as obtained by the optical scanner. (D) Relative histogram plots.", "The analysis of the results allows the detection of possible errors occurred in manufacturing surgical guides. Low discrepancy values between the impression and cast models prove the correctness in the manufacturing process of the gypsum cast. The almost perfect superimposition between the radiological template and the study cast should have been expected since the radiological template is customized by manually fitting it on the cast. The transfer from the radiological to the surgical guides involves two distinct processes: the reconstruction of the radiological guide model by CT scanning and the manufacturing of the surgical guide starting from this digital model. The accuracy of the first step has been verified aligning the model obtained by processing the DICOM images with the gypsum cast. The fine adjustment of the threshold value in the segmentation process, using the model obtained by optical scanning as the anatomical truth, has allowed the minimization of the deviations with respect to the cast. For this reason, the high misalignment errors regarding the surgical template can be attributed to the stereo-lithographic process, which has been used to manufacture the surgical guide. The geometrical differences of the surfaces mating with the gypsum cast, certainly affect the overall accuracy in the implant placement positions. As a further proof, the surgical guide has demonstrated to improperly fit the physical model of the dental gypsum cast. This could lead the surgeon to anchor the template in the wrong way, compromising the desired implant placement.\nA thorough study of the effect of these discrepancies on the maximum deviations obtained between the planned positions of the implants and the postoperative result should be done.", "In this paper, a methodology to evaluate the transfer accuracy of CT dental information into periodontal surgical field has been proposed. The procedure is based on the integration of a structured light vision system within the CT scan based preoperative planning process. The use of the optical scanner, having a higher resolution and accuracy than CT scanning, has demonstrated to be a valid support to evaluate the precision of the various physical models adopted and to point out possible error sources. Optical scanning of the radiological guide, mounted on the gypsum cast, could be furthermore helpful for the integration of the prosthetic data within the bone structure. 
In case of not fully edentulous patients, the acquisition of teeth's shape could be used, in addition to gutta-percha markers, to optimize or verify the positioning of the radiological guide with respect to the maxillofacial structure. Moreover, the accurate digital model of the mouth impression could be the base for the direct design of the radiological guide using CAD/CAM technologies, without passing through manufacturing the gypsum cast, drastically reducing errors and planning time.", "The authors declare that they have no competing interests.", "GF, GC, SB, AP, AR and FF participated to the conception and design of the work, to the acquisition of data, wrote the paper, participated in the analysis and interpretation of data and reviewed the manuscript. All the authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2342/11/5/prepub\n" ]
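To make the full-field comparisons reported above more concrete, the following sketch shows one simple way to derive mean and standard deviation of the deviations between two already-registered surface point clouds; the point clouds and the noise level are hypothetical, the distances are unsigned (unlike the signed means reported in the study), and the sketch is not the inspection software actually used.

import numpy as np
from scipy.spatial import cKDTree

def deviation_statistics(reference_pts, test_pts):
    """Mean and SD of nearest-neighbour distances (mm) from test points to the reference cloud."""
    distances, _ = cKDTree(reference_pts).query(test_pts)
    return distances.mean(), distances.std()

# Hypothetical clouds: the 'test' model is the reference plus a small amount of surface noise
rng = np.random.default_rng(2)
reference = rng.uniform(0.0, 50.0, size=(20000, 3))
test = reference + rng.normal(0.0, 0.07, size=reference.shape)

mean_dev, sd_dev = deviation_statistics(reference, test)
print(f"mean deviation {mean_dev:.3f} mm, SD {sd_dev:.3f} mm")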
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Optical scanning", "CT scan data", "3D reconstructions", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Over the last few years, dental prostheses supported by osseointegrated implants have progressively replaced the use of removable dentures in the treatment of edentulous patients. The restoration of missing teeth must provide a patient with aesthetical, biomechanical and functional requirements of natural dentition, particularly concerning chewing functions. When conventional implantation techniques are used, the clinical outcome is often unpredictable, since it greatly relies on skills and experience of dental surgeons.\nThe placement of endosseous implants is based on invasive procedures which require a long time to be completed. Recently, many different implant planning procedures have been developed to support oral implant positioning. Number, size, position of implants must be related to bone morphology, as well as to the accompanying vital structures (e.g. neurovascular bundles). Complex surgical interventions can be performed using preoperative planning based on 3D imaging. The developments in computer-assisted surgery have brought to the definition of effective operating procedures in dental implantology. Several systems have been designed to guide treatment-planning processes: from simulation environments to surgical fields [1]. The guided approaches are generally based on three-dimensional reconstructions of patient anatomies processing data obtained by either Computed Tomography (CT) or Cone-Beam Computed Tomography (CBCT) [2]. These methodologies allow more accurate assessments of surgical difficulties through less invasive procedures and operating time reductions. In particular, radiographic data (depth and proximity to anatomical landmarks) and restorative requirements are crucial for a complete transfer of implant planning (positioning, trajectory and distribution) to surgical field [3]. Virtual planning processes provide digital models of drill guides, which are typically manufactured by stereo-lithography and used as surgical guidance in the preparation of implant receptor sites.\nIn the past decade, a methodology based on the use of two different guides and a double CT scan procedure, has been introduced [4] and later commercialized as NobelGuide® by NobelBiocare (Zurich, Switzerland). This procedure involves an intermediate template (radiographic template) that is used to refer the soft tissues with respect to the bone structure derived from patient CT scan data. The guide is manufactured on the basis of diagnostic wax-up reproducing the desired prosthetic end result. The diagnostic wax-up is obtained starting from the dental cast, produced from the impression of the patient's mouth, and helps in the definition of a proper dental prosthesis design. Moreover, the radiographic template is made of a non radio-opaque material, usually acrylic resin, to avoid image disturbs when CT scans of patients are carried. Then, the template is separately scanned changing radiological parameters in order to visualize the acrylic resin. The computer-based alignment of the prosthetic model with respect to the maxillofacial structure is obtained by small radio-opaque gutta-percha spheres inserted within the radiographic template. These gutta-percha markers are visible in both the different CT scans and can be used as references to register the two data sets through point-based rigid registration techniques [5].\nSpecific 3D image-based software programs for implant surgery planning, based on CT scan data, have been recently developed and clinically approved by many manufacturers. 
These software applications allow surgeons to locate implant receptor sites and simulate implant placement [6]. The planned implant positions are then transferred to the surgical field by means of a surgical guide made by stereo-lithographic techniques. Surgical guides can be bone-supported, tooth-supported or mucosa-supported depending on the specific patient's conditions. Bone-supported guides are designed to fit on the jawbone and can be used for partially or fully edentulous cases, while tooth-supported guides are tailored to fit directly on the teeth. The latter are most effective for single tooth and partially edentulous cases. Mucosa-supported surgical guides are instead designed for placement on soft tissues and are recommended for fully edentulous patients when minimally invasive surgery is required.\nThe surgical guide is then placed within the patient's mouth and can be anchored, especially when mucosa-supported guides are used, to the jawbone by stabilizing pins (Anchor Pins).\nThe weak point of the whole procedure lies in the accuracy with which information derived from CT data is transferred into surgical planning. Geometrical deviations of implant positions between planning and intervention stages could cause irreversible damage to anatomical structures, such as sensory nerves. The surgical guide should closely fit the hard and/or soft tissue surface in a unique and stable position in order to accurately transfer the pre-operative treatment plan. If the surgical template is not accurate, the fit will be improper, compromising the implant placement. Even small angular errors in the placement of perforation guides can, indeed, propagate into considerable horizontal deviations because of the depth of the implant (for example, a 2° angular error over a 13 mm implant corresponds to roughly 13 × tan 2° ≈ 0.45 mm of horizontal displacement at the apex).\nA previous ex vivo study assessing the accuracy of 10-15 mm-long implant positioning using CBCT revealed a mean angular deviation of 2° (SD ± 0.8, range 0.7° ÷ 4°) and a mean linear deviation of 1.1 mm (SD ± 0.7 mm, range 0.3 ÷ 2.3 mm) at the hexagon and 2 mm (SD ± 0.7 mm, range 0.7 ÷ 2.4 mm) at the tip [7].\nSarment et al. [8] compared the accuracy of a stereo-lithographic surgical template to a conventional surgical template in vitro. Average linear deviations of 1.5 mm at the entrance and 2.1 mm at the apex were reported for the conventional template, as compared with 0.9 and 1.0 mm for the stereo-lithographic surgical template.\nDi Giacomo et al. [9] published a preliminary study involving the placement of 21 implants using a stereo-lithographic surgical template, showing an angular deviation of 7.25° between planned and actual implant axes, whereas the linear deviation was 1.45 mm.\nIn a recent study [10], the accuracy of a surgical template in transferring planned implant positions to the real patient surgery has been assessed. The mean mesio-distal angular deviation between planned and actual positions was 0.17° (SD ± 5.02°), ranging from 0.262° to 12.2°, whereas the mean bucco-lingual angular deviation was 0.46° (SD ± 4.48°), ranging from 0.085° to 7.67°.\nThese studies confirm that the error can be large, which is especially critical in neurovascular anatomical districts, such as the mandibular nerve. In this anatomical area, even moderate damage may result in severe symptoms. 
For example, the lesion of the mandibular nerve is of the Wallerian degenerative type [11], which is a slow degenerative process and the diagnosis by laser-evoked potentials and trigeminal reflexes would allow early decompression [12].\nDeviations between planning and postoperative outcome may reflect the sum of many error sources. For instance, CT scan quality and processing of DICOM (Digital Imaging and Communication in Medicine) images affect the creation of the corresponding 3D digital models. Misalignment errors can also be introduced during the arrangement of the radiographic template within the maxillofacial structures by the gutta-percha markers. Moreover, further inaccuracies can be introduced in manufacturing physical models by stereo-lithographic techniques.\nThis paper concerns the development of an innovative methodology to evaluate the accuracy in transferring CT based implant planning into surgical fields for oral rehabilitation.", "The proposed methodology is based on the combined use of CT scan data and a structured light vision system. In particular, the data acquisition phase regards two different scanning technologies: radiological scanning and optical scanning.\nA clinical case, relative to a fully edentulous patient, has been used as test case to assess the feasibility of the proposed methodology. The ethics approval was obtained by Human Research Ethics Committee at the Sassari Hospital (n° 971) and written form approval was obtained by the patient.\n[SUBTITLE] Optical scanning [SUBSECTION] The 3D optical scanner used in this work is based on a stereo vision approach with structured coded light projection [13]. The optical unit is composed of a monochrome digital camera (CCD - 1280 × 960 pixels) and a multimedia white light projector (DLP - 1024 × 768 pixels) that are used as active devices for a triangulation process. The digitizer is integrated with a rotary axis, automatically controlled by a stepper motor with a resolution of 400 steps per round (Figure 1). The scanner is capable of measuring about 1 million 3D points within the field of view (100 mm × 80 mm), with a spatial resolution of 0.1 mm and an overall accuracy of 0.01 mm [13].\nOptical scanner. 3D optical scanner used to capture dental models\nThe 3D optical scanner used in this work is based on a stereo vision approach with structured coded light projection [13]. The optical unit is composed of a monochrome digital camera (CCD - 1280 × 960 pixels) and a multimedia white light projector (DLP - 1024 × 768 pixels) that are used as active devices for a triangulation process. The digitizer is integrated with a rotary axis, automatically controlled by a stepper motor with a resolution of 400 steps per round (Figure 1). The scanner is capable of measuring about 1 million 3D points within the field of view (100 mm × 80 mm), with a spatial resolution of 0.1 mm and an overall accuracy of 0.01 mm [13].\nOptical scanner. 3D optical scanner used to capture dental models\n[SUBTITLE] CT scan data [SUBSECTION] CT scanning of maxillofacial region is based on the acquisition of several slices of the jaw bone at each turn of a helical movement of an x-ray source and a reciprocating area detector. The acquired data can be stored in DICOM format.\nIn this work, CT scanning has been performed using a system Toshiba Aquilion by Toshiba Medical Systems, Japan, with 0.5 mm slice thickness. 
3D models have been reconstructed processing DICOM images by means of 3D Slicer (version 3.2), a freely available open source software initially developed as a joint effort between the Surgical Planning Lab at Brigham and Women's Hospital and the MIT Artificial Intelligence Lab. The software has now evolved into a national platform supported by a variety of federal funding sources [14]. 3D Slicer is an end-user application to process medical images and to generate 3D volumetric data set, which can be used to provide primary reconstruction images in three orthogonal planes (axial, sagittal and coronal). 3D models of anatomical structure can be generated through a powerful and robust segmentation tool on the basis of a semi-automated approach. The displayed gray level of the voxels representing hard tissues can be dynamically altered to provide the most realistic appearance of the bone structure, minimizing soft tissues and the superimposition of metal artifacts (Figure 2). Initial segmentation of CT data can then be obtained by threshold segmentation. This involves the manual selection of a threshold value that can be dynamically adjusted to provide the optimal filling of the interested structure in all the slices acquired.\nCT data. Maxilla CT data in the axial, sagittal and coronal planes and a fully 3D vision\nCT scanning of maxillofacial region is based on the acquisition of several slices of the jaw bone at each turn of a helical movement of an x-ray source and a reciprocating area detector. The acquired data can be stored in DICOM format.\nIn this work, CT scanning has been performed using a system Toshiba Aquilion by Toshiba Medical Systems, Japan, with 0.5 mm slice thickness. 3D models have been reconstructed processing DICOM images by means of 3D Slicer (version 3.2), a freely available open source software initially developed as a joint effort between the Surgical Planning Lab at Brigham and Women's Hospital and the MIT Artificial Intelligence Lab. The software has now evolved into a national platform supported by a variety of federal funding sources [14]. 3D Slicer is an end-user application to process medical images and to generate 3D volumetric data set, which can be used to provide primary reconstruction images in three orthogonal planes (axial, sagittal and coronal). 3D models of anatomical structure can be generated through a powerful and robust segmentation tool on the basis of a semi-automated approach. The displayed gray level of the voxels representing hard tissues can be dynamically altered to provide the most realistic appearance of the bone structure, minimizing soft tissues and the superimposition of metal artifacts (Figure 2). Initial segmentation of CT data can then be obtained by threshold segmentation. This involves the manual selection of a threshold value that can be dynamically adjusted to provide the optimal filling of the interested structure in all the slices acquired.\nCT data. Maxilla CT data in the axial, sagittal and coronal planes and a fully 3D vision\n[SUBTITLE] 3D reconstructions [SUBSECTION] The accuracy of 3D reconstruction based on CT data analysis may be affected by several factors that should be considered in surgical treatment planning. A reduction of image quality may be caused by metallic artifacts and/or patient motions. Moreover, the influence of an appropriate segmentation on the final 3D representation is a matter of utmost importance [15]. 
The segmentation process typically relies on the adopted mathematical algorithm, on spatial and contrast resolution of the slice images, on technical skills of the operator in selecting the optimal threshold value. Metal restorations as well as tissues not belonging to the structure of interest (i.e. antagonistic teeth) must be carefully cleaned up from the CT scan images when models for interactive planning are prepared. This process can lead to different volume reconstructions due to the operator's selection of threshold values, even if proved and patented software is used. In particular, the detection of the optimal threshold value is not straightforward when images presenting smooth intensity distributions are processed (Figure 3). For this reason, a methodology to verify the accuracy of the 3D reconstruction of CT derived images would be necessary for clinical applications.\n(A-D) CT data segmentation process. (A) DICOM image of the radiographic template with associated a row grey intensity level, (B-D) segmentation with three different threshold values.\nIn this work, a validation process for 3D reconstructions of radiographic templates used in implant guided surgery has been developed using the optical scanner. As previously illustrated, the radiological template (Figure 4A) is manually manufactured on the basis of the diagnostic wax-up to take into account prosthesis design, and on the gypsum dental cast (Figure 4B) to assure the optimal fitting of the mating surfaces. The 3D model of the radiographic template is reconstructed processing the DICOM images (Figure 5A). The radiographic template is also acquired by the optical scanner. The 3D model as obtained by the structured light scanning system (Figure 5B) is used as the gold standard to improve the accuracy of the CT reconstruction. The comparison between the CT reconstructed and the optically captured models gives the information to optimize the parameters of the DICOM images segmentation process. The data acquired by the optical scanner are aligned to the model obtained by the CT reconstruction through a point-based registration technique. Correspondent pairs of points are manually selected on the two different models and the rigid transformation between the two objects is determined by applying the singular value decomposition (SVD) method [5]. The alignment is then refined by applying a surface-based registration technique through best fitting algorithms [16].\n(A-B) Preoperative and anatomical dental models. (A) Radiographic template with gutta-percha markers, (B) gypsum dental cast.\n(A-B) Digital models of the radiographic template. 3D digital models of the radiographic template obtained by CT data (A) and by the optical scanner (B).\nFigure 6 shows the full-field 3D compare of three different reconstructions of the radiological guide, obtained varying the threshold values, with respect to the model obtained by the optical scanner. The distribution of discrepancies between the datasets obtained using the two scanning technologies, with both positive and negative deviations, quantifies the dimensional difference of the CT based reconstruction that can turn out to be smaller (Figure 6A) or greater (Figure 6C). The search of the optimal threshold value can therefore be made by minimizing the absolute mean of the distances between the two models (Figure 6B). 
Histogram plots of these distributions are reported in Figure 6D, whereas Table 1 reports the associated statistical data (mean and standard deviation).\nStatistical data relative to different DICOM reconstructions\nMean and standard deviation of the discrepancies in the three different cases reported in Figure 6 and relative to the threshold values used in Figure 3 (B-D)\n(A-D) Full-field 3D comparisons of three different reconstructions of the radiographic template. Full-field 3D compare of three different DICOM reconstructions of the radiographic template with respect to the model obtained by the optical scanner and relative histogram plots (D). The DICOM model (Figure 5A) results smaller (A), comparable (B) and greater (C) than the one obtained by the optical scanner (Figure 5B).", "The 3D optical scanner used in this work is based on a stereo vision approach with structured coded light projection [13]. The optical unit is composed of a monochrome digital camera (CCD - 1280 × 960 pixels) and a multimedia white light projector (DLP - 1024 × 768 pixels) that are used as active devices for a triangulation process. The digitizer is integrated with a rotary axis, automatically controlled by a stepper motor with a resolution of 400 steps per round (Figure 1). The scanner is capable of measuring about 1 million 3D points within the field of view (100 mm × 80 mm), with a spatial resolution of 0.1 mm and an overall accuracy of 0.01 mm [13].\nOptical scanner. 3D optical scanner used to capture dental models", "CT scanning of maxillofacial region is based on the acquisition of several slices of the jaw bone at each turn of a helical movement of an x-ray source and a reciprocating area detector. The acquired data can be stored in DICOM format.\nIn this work, CT scanning has been performed using a system Toshiba Aquilion by Toshiba Medical Systems, Japan, with 0.5 mm slice thickness. 3D models have been reconstructed processing DICOM images by means of 3D Slicer (version 3.2), a freely available open source software initially developed as a joint effort between the Surgical Planning Lab at Brigham and Women's Hospital and the MIT Artificial Intelligence Lab. The software has now evolved into a national platform supported by a variety of federal funding sources [14]. 3D Slicer is an end-user application to process medical images and to generate 3D volumetric data set, which can be used to provide primary reconstruction images in three orthogonal planes (axial, sagittal and coronal). 
3D models of anatomical structure can be generated through a powerful and robust segmentation tool on the basis of a semi-automated approach. The displayed gray level of the voxels representing hard tissues can be dynamically altered to provide the most realistic appearance of the bone structure, minimizing soft tissues and the superimposition of metal artifacts (Figure 2). Initial segmentation of CT data can then be obtained by threshold segmentation. This involves the manual selection of a threshold value that can be dynamically adjusted to provide the optimal filling of the interested structure in all the slices acquired.\nCT data. Maxilla CT data in the axial, sagittal and coronal planes and a fully 3D vision", "The accuracy of 3D reconstruction based on CT data analysis may be affected by several factors that should be considered in surgical treatment planning. A reduction of image quality may be caused by metallic artifacts and/or patient motions. Moreover, the influence of an appropriate segmentation on the final 3D representation is a matter of utmost importance [15]. The segmentation process typically relies on the adopted mathematical algorithm, on spatial and contrast resolution of the slice images, on technical skills of the operator in selecting the optimal threshold value. Metal restorations as well as tissues not belonging to the structure of interest (i.e. antagonistic teeth) must be carefully cleaned up from the CT scan images when models for interactive planning are prepared. This process can lead to different volume reconstructions due to the operator's selection of threshold values, even if proved and patented software is used. In particular, the detection of the optimal threshold value is not straightforward when images presenting smooth intensity distributions are processed (Figure 3). For this reason, a methodology to verify the accuracy of the 3D reconstruction of CT derived images would be necessary for clinical applications.\n(A-D) CT data segmentation process. (A) DICOM image of the radiographic template with associated a row grey intensity level, (B-D) segmentation with three different threshold values.\nIn this work, a validation process for 3D reconstructions of radiographic templates used in implant guided surgery has been developed using the optical scanner. As previously illustrated, the radiological template (Figure 4A) is manually manufactured on the basis of the diagnostic wax-up to take into account prosthesis design, and on the gypsum dental cast (Figure 4B) to assure the optimal fitting of the mating surfaces. The 3D model of the radiographic template is reconstructed processing the DICOM images (Figure 5A). The radiographic template is also acquired by the optical scanner. The 3D model as obtained by the structured light scanning system (Figure 5B) is used as the gold standard to improve the accuracy of the CT reconstruction. The comparison between the CT reconstructed and the optically captured models gives the information to optimize the parameters of the DICOM images segmentation process. The data acquired by the optical scanner are aligned to the model obtained by the CT reconstruction through a point-based registration technique. Correspondent pairs of points are manually selected on the two different models and the rigid transformation between the two objects is determined by applying the singular value decomposition (SVD) method [5]. 
The alignment is then refined by applying a surface-based registration technique through best fitting algorithms [16].\n(A-B) Preoperative and anatomical dental models. (A) Radiographic template with gutta-percha markers, (B) gypsum dental cast.\n(A-B) Digital models of the radiographic template. 3D digital models of the radiographic template obtained by CT data (A) and by the optical scanner (B).\nFigure 6 shows the full-field 3D compare of three different reconstructions of the radiological guide, obtained varying the threshold values, with respect to the model obtained by the optical scanner. The distribution of discrepancies between the datasets obtained using the two scanning technologies, with both positive and negative deviations, quantifies the dimensional difference of the CT based reconstruction that can turn out to be smaller (Figure 6A) or greater (Figure 6C). The search of the optimal threshold value can therefore be made by minimizing the absolute mean of the distances between the two models (Figure 6B). Histogram plots of these distributions are reported in Figure 6D, whereas Table 1 reports the associated statistical data (mean and standard deviation).\nStatistical data relative to different DICOM reconstructions\nMean and standard deviation of the discrepancies in the three different cases reported in Figure 6 and relative to the threshold values used in Figure 3 (B-D)\n(A-D) Full-field 3D comparisons of three different reconstructions of the radiographic template. Full-field 3D compare of three different DICOM reconstructions of the radiographic template with respect to the model obtained by the optical scanner and relative histogram plots (D). The DICOM model (Figure 5A) results smaller (A), comparable (B) and greater (C) than the one obtained by the optical scanner (Figure 5B).", "In the present work, a clinical case, relative to a fully edentulous patient, has been used as test case to assess the accuracy of the various steps concurring in manufacturing surgical guides. A study surgical template (Figure 7B), called Duplicate Radiographic Template (D.R.T) and based on the same CT data used to fabricate the mucosa-supported surgical guide, has been manufactured by a stereo-lithographic process. This template does not present the holes to hold the drill guides since the first requirement was just the reproduction of the only functional areas to wearing the guide. All the physical models (impression, cast, radiographic template, study surgical template) have been acquired by the optical scanner. The 3D digital models have been realigned by best fitting techniques in order to evaluate the discrepancies between the different shapes. The virtual alignments have been conducted by only referring the mating surfaces of the various models, since the crucial problem regards the proper fit between the final surgical guide and the patient's mucosa.\n(A-B) Impression and radiographic template. Patient's mouth impression (A) and Duplicate Radiographic Template (B)\nFigure 8 shows the 3D compare between the patient mouth's impression (Figure 7A) and the relative study cast (mean value -0.004 mm, SD 0.067 mm). The manufacturing of the gypsum cast is the first critical step of the whole process that can be verified, since the accuracy in detecting the impression is not measurable. 
Mismatch between the impression and the gypsum cast may cause improper fitting of the radiographic template, which could result stable on the cast, but floating or not wearable in the patient's mouth.\nFull-field 3D comparison between impression and cast. 3D compare between the impression and the gypsum cast models obtained by optical scanning.\nIn Figure 9, the distributions of the optical measurement discrepancies between corresponding points of the gypsum cast and, respectively, the radiological guide (Figure 9B) (mean value -0.009 mm, SD 0.069 mm) and the surgical guide or Duplicate Radiographic Template (Figure 9C) (mean value 0.013 mm, SD 0.141 mm) are reported. Moreover, the fitting of the radiological guide model, obtained by processing DICOM images on the gypsum cast has been verified (Figure 9A) (mean value -0.004 mm, SD 0.082 mm). Table 2 summarizes the same results in terms of mean value and standard deviation of the misalignments. Histogram plots relative to these distributions are reported in Figure 9D.\nStatistical data relative to discrepancies between cast and dental models\nMean and standard deviation of the discrepancies reported in Figure 8 and Figure 9\n(A-D) Full-field 3D comparisons between cast and dental models. Full-field distributions of the measurements discrepancies between gypsum cast model and, respectively, the radiological guide model as obtained by DICOM processing (A), the radiological guide model (B) and the Duplicate radiographic Template \"D.R.T.\" (C) as obtained by the optical scanner. (D) Relative histogram plots.", "The analysis of the results allows the detection of possible errors occurred in manufacturing surgical guides. Low discrepancy values between the impression and cast models prove the correctness in the manufacturing process of the gypsum cast. The almost perfect superimposition between the radiological template and the study cast should have been expected since the radiological template is customized by manually fitting it on the cast. The transfer from the radiological to the surgical guides involves two distinct processes: the reconstruction of the radiological guide model by CT scanning and the manufacturing of the surgical guide starting from this digital model. The accuracy of the first step has been verified aligning the model obtained by processing the DICOM images with the gypsum cast. The fine adjustment of the threshold value in the segmentation process, using the model obtained by optical scanning as the anatomical truth, has allowed the minimization of the deviations with respect to the cast. For this reason, the high misalignment errors regarding the surgical template can be attributed to the stereo-lithographic process, which has been used to manufacture the surgical guide. The geometrical differences of the surfaces mating with the gypsum cast, certainly affect the overall accuracy in the implant placement positions. As a further proof, the surgical guide has demonstrated to improperly fit the physical model of the dental gypsum cast. This could lead the surgeon to anchor the template in the wrong way, compromising the desired implant placement.\nA thorough study of the effect of these discrepancies on the maximum deviations obtained between the planned positions of the implants and the postoperative result should be done.", "In this paper, a methodology to evaluate the transfer accuracy of CT dental information into periodontal surgical field has been proposed. 
The procedure is based on the integration of a structured light vision system within the CT scan based preoperative planning process. The use of the optical scanner, which has a higher resolution and accuracy than CT scanning, has proved to be a valid support for evaluating the precision of the various physical models adopted and for pointing out possible error sources. Optical scanning of the radiological guide, mounted on the gypsum cast, could furthermore be helpful for the integration of the prosthetic data within the bone structure. In the case of partially edentulous patients, the acquired tooth shapes could be used, in addition to gutta-percha markers, to optimize or verify the positioning of the radiological guide with respect to the maxillofacial structure. Moreover, the accurate digital model of the mouth impression could form the basis for the direct design of the radiological guide using CAD/CAM technologies, without the need to manufacture the gypsum cast, drastically reducing errors and planning time.", "The authors declare that they have no competing interests.", "GF, GC, SB, AP, AR and FF participated in the conception and design of the work and in the acquisition of data, wrote the paper, participated in the analysis and interpretation of data and reviewed the manuscript. All the authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2342/11/5/prepub\n" ]
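The registration step described in the Methods above (manually picked corresponding point pairs, a rigid transformation estimated by the SVD method [5], then refined by surface-based best fitting [16]) can be made concrete with a short numerical sketch. The snippet below is an illustrative Python/NumPy implementation of the closed-form SVD (Arun/Kabsch) estimate only; it is not the authors' code, and the marker coordinates are synthetic values invented for the example.

```python
import numpy as np

def rigid_transform_svd(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    paired 3D landmarks src -> dst, via SVD (Arun/Kabsch method)."""
    src_c = src - src.mean(axis=0)          # centre both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correct a possible reflection so that det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# synthetic check: recover a known rotation/translation from 6 landmark pairs
rng = np.random.default_rng(0)
landmarks_ct = rng.uniform(0, 50, size=(6, 3))      # e.g. gutta-percha marker positions (mm)
angle = np.deg2rad(12.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([4.0, -2.5, 1.0])
landmarks_optical = landmarks_ct @ R_true.T + t_true  # same landmarks in the optical-scan frame

R, t = rigid_transform_svd(landmarks_optical, landmarks_ct)  # align optical -> CT frame
aligned = landmarks_optical @ R.T + t
print("residual RMS (mm):", np.sqrt(((aligned - landmarks_ct) ** 2).sum(axis=1).mean()))
```

In the workflow described above this closed-form estimate would only provide the initial alignment between the optically scanned model and the CT reconstruction; the final correspondence is obtained with the iterative surface-based (best-fit) refinement cited as reference [16].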
[ null, "methods", null, null, null, null, null, null, null, null, null ]
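The "3D reconstructions" subsection selects the segmentation threshold by comparing each candidate CT-derived surface with the optically scanned reference and minimizing the mean absolute distance between the two models (Figure 6, Table 1). The sketch below illustrates that selection loop under simplifying assumptions: a blurred synthetic sphere stands in for the CT volume, boundary voxels of the thresholded mask stand in for the reconstructed surface, and nearest-neighbour distances to a synthetic reference cloud provide the mean/SD statistics. All data and function names are illustrative and are not taken from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_erosion
from scipy.spatial import cKDTree

def boundary_points(volume, threshold):
    """Voxel-centre coordinates on the boundary of the thresholded region."""
    mask = volume >= threshold
    boundary = mask & ~binary_erosion(mask)
    return np.argwhere(boundary).astype(float)

# synthetic "CT volume": a blurred sphere of radius 20 voxels centred at (32, 32, 32)
grid = np.indices((64, 64, 64)).astype(float)
r = np.sqrt(((grid - 32.0) ** 2).sum(axis=0))
volume = gaussian_filter((r <= 20).astype(float), sigma=2.0)

# synthetic "optical reference": points sampled on the true sphere surface
rng = np.random.default_rng(1)
d = rng.normal(size=(5000, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
reference = 32.0 + 20.0 * d
tree = cKDTree(reference)

# evaluate candidate thresholds and keep the one with the minimal mean |distance|
stats = {}
for thr in (0.2, 0.35, 0.5, 0.65, 0.8):
    dists, _ = tree.query(boundary_points(volume, thr))
    stats[thr] = (dists.mean(), dists.std())
    print(f"threshold {thr:.2f}: mean |d| = {dists.mean():.2f} voxels, SD = {dists.std():.2f}")
best = min(stats, key=lambda k: stats[k][0])
print("selected threshold:", best)
```

Low thresholds produce an oversized reconstruction and high thresholds an undersized one, mirroring the smaller/comparable/greater cases shown in Figure 6A-C; the threshold with the smallest mean absolute distance plays the role of the optimum reported in Table 1.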
[]
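The accuracy figures cited in the Background (angular deviation between planned and placed implant axes, linear deviation at the entrance/hexagon and at the tip/apex) reduce to simple vector geometry once the planned and actual implant positions are expressed in a common reference frame. The short sketch below shows that computation; the coordinates are invented for illustration and do not correspond to any measurement reported in the paper.

```python
import numpy as np

def implant_deviation(entry_plan, axis_plan, entry_act, axis_act, length_mm):
    """Angular deviation between implant axes and linear deviations at the
    entrance and at the apex, for a planned vs. an actually placed implant."""
    a_p = axis_plan / np.linalg.norm(axis_plan)
    a_a = axis_act / np.linalg.norm(axis_act)
    angle = np.degrees(np.arccos(np.clip(np.dot(a_p, a_a), -1.0, 1.0)))
    apex_plan = entry_plan + length_mm * a_p
    apex_act = entry_act + length_mm * a_a
    dev_entry = np.linalg.norm(entry_act - entry_plan)
    dev_apex = np.linalg.norm(apex_act - apex_plan)
    return angle, dev_entry, dev_apex

# illustrative 13 mm implant with a ~0.6 mm entry offset and a ~2 degree tilt
angle, dev_entry, dev_apex = implant_deviation(
    entry_plan=np.array([0.0, 0.0, 0.0]),
    axis_plan=np.array([0.0, 0.0, 1.0]),
    entry_act=np.array([0.5, 0.3, 0.0]),
    axis_act=np.array([0.03, 0.01, 1.0]),
    length_mm=13.0)
print(f"angular deviation: {angle:.2f} deg, entrance: {dev_entry:.2f} mm, apex: {dev_apex:.2f} mm")
```

As the printed values show, even a small angular error grows into a noticeably larger linear deviation at the apex than at the entrance, which is the error-propagation point made in the Background.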
Effects of rasagiline, its metabolite aminoindan and selegiline on glutamate receptor mediated signalling in the rat hippocampus slice in vitro.
21338509
Rasagiline, a new drug developed to treat Parkinson's disease, is known to inhibit monoamine oxidase B. However, its metabolite R-(-)-aminoindan does not show this kind of activity. The present series of in vitro experiments using the rat hippocampal slice preparation examines the effects of both compounds on the pyramidal cell response after electrical stimulation of the Schaffer Collaterals, in comparison to selegiline, another MAO B inhibitor.
BACKGROUND
Stimulation of the Schaffer Collaterals by single stimuli (SS) or theta burst stimulation (TBS) resulted in stable responses of pyramidal cells measured as population spike amplitude (about 1 mV under control SS conditions or about 2 mV after TBS).
METHOD
During the first series, this response was attenuated in the presence of rasagiline and aminoindan - and to a lesser degree selegiline - in a concentration-dependent manner (5-50 μM) after single stimuli as well as under TBS. During oxygen/glucose deprivation for 10 min the amplitude of the population spike broke down by 75%. The presence of rasagiline and aminoindan, but rarely the presence of selegiline, prevented this breakdown. When glutamate receptor mediated enhancements of neuronal transmission were examined in a second series of experiments, very clear differences could be observed in comparison to the action of selegiline: NMDA receptor, AMPA receptor as well as metabotropic glutamate receptor mediated increases of transmission were concentration-dependently (0.3-2 μM) antagonized by rasagiline and aminoindan, but not by selegiline. In contrast, only selegiline attenuated kainate receptor mediated increases of excitability. Thus, both monoamine oxidase (MAO) B inhibitors attenuate glutamatergic transmission in the hippocampus but interfere with different receptor mediated excitatory modulations at low concentrations.
RESULTS
Since aminoindan does not induce MAO B inhibition, these effects must be regarded as being independent of MAO B inhibition. The results provide strong evidence for a neuroprotective activity of rasagiline and aminoindan and support a possible extension of their clinical indication towards other diseases such as Alzheimer's disease or stroke.
CONCLUSIONS
[ "Action Potentials", "Animals", "Dose-Response Relationship, Drug", "Electric Stimulation", "Hippocampus", "In Vitro Techniques", "Indans", "Male", "Monoamine Oxidase Inhibitors", "Neuroprotective Agents", "Pyramidal Cells", "Rats", "Rats, Sprague-Dawley", "Receptors, AMPA", "Receptors, Glutamate", "Selegiline", "Signal Transduction" ]
3051903
null
null
Methods
Hippocampus slices were obtained from 43 adult male Sprague-Dawley rats (Charles River Wiga, Sulzbach, Germany). Rats were kept under a reversed day/night cycle for 2 weeks prior to the start of the experiments, to allow recording of in vitro activity from slices during the active phase of their circadian rhythm [10,11]. Animals were exsanguinated under ether anaesthesia, the brain was removed and the hippocampal formation was isolated under microstereoscopic sight. The midsection of the hippocampus was fixed to the table of a vibrating microtome (Rhema Labortechnik, Hofheim, Germany) using a cyanoacrylate adhesive, submerged in chilled bicarbonate-buffered saline (artificial cerebrospinal fluid (ACSF): NaCl: 124 mM, KCl: 5 mM, CaCl2: 2 mM, MgSO4: 2 mM, NaHCO3: 26 mM, glucose: 10 mM), and cut into slices of 400 μm thickness. All slices were pre-incubated for at least 1 h in Carbogen saturated ACSF (pH 7.4) in a pre-chamber before use [12]. During the experiment the slices were held and treated in a special superfusion chamber (List Electronics, Darmstadt, Germany) according to [13] at 35°C [14]. Five slices per rat were used. The preparation was superfused with ACSF at 220 ml/h. Electrical stimulation (200 μA constant current pulses of 200 μs pulse width) of the Schaffer Collaterals within the CA2 area and recording of extracellular field potentials from the pyramidal cell layer of CA1 [12] were performed according to conventional electrophysiological methods using the "Labteam" Computer system "NeuroTool" software package (MediSyst GmbH, Linden, Germany). Measurements were performed at 10 min intervals in order to avoid potentiation mechanisms after single stimuli (first recording at 10 min is discarded for stability purposes). Four stimulations-each 20 s apart-were averaged for each time point. After averaging the last three of four responses to single stimuli (SS) to give one value, potentiation was induced by applying a theta burst type pattern (TBS; [7]). The mean amplitudes of three signals 20 seconds apart were averaged to give the mean of absolute voltage values (microvolt) ± standard error of the mean for each experimental condition (single stimulus or theta burst stimulation). Electrical stimulation of the Schaffer Collaterals within the CA2 area with single stimuli resulted in stable responses of the pyramidal cells in the form of population spikes with an amplitude of about 1 mV and about 2 mV after theta burst stimulation (TBS) (a representative example is given in Figure 1). Oxygen and glucose deprivation (OGD) was performed in analogy to [15] by shutting off oxygen and glucose for 10 minutes. In this case glucose was replaced by sucrose. Documentation of original signals showing the effects of using single stimuli (SS) or theta burst stimulation (TBS) in control slices (left panel) or in the presence of rasagiline (right panel) diluted in artificial cerebro-spinal fluid (ACSF). The amplitude is calculated from baseline to the downward deflection of the signal (shaded area). Stimulus artefacts are omitted for the sake of clarity. Scales: Time is given in milliseconds (ms), amplitude in millivolt (mV). 
For stimulation of glutamate receptors (NMDA, AMPA, Kainate and metabotropic receptor) four agonists were used, respectively: trans-1-Aminocyclobutan-1,3-dicarboxylic acid (ACBD; [16]), (S)-(-)-α-Amino-5-fluoro-3,4-dihydro-2,4-dioxo-1(2H)-pyrimidinepropanoic acid (S-Fluorowillardiine; [17-19]), (RS)-2-Amino-3-(3-hydroxy-5-tert-butylisoxazol-4-yl)propanoic acid (ATPA; [20-23]) and (±)-1-Aminocyclopentane-trans-1,3-dicarboxylic acid (t-ACPD; [24-26]). All agonists were tested in pilot experiments in order to detect a concentration leading to strong increases of population spike amplitude in the presence of single stimuli (SS) and theta burst stimulation (TBS). Origin of the chemicals is given in Table 1. Permission to keep animals for this purpose was obtained from the governmental authorities, dated 2009-09-01 under the document Nr. 0200052529. Experiments were performed in accordance with the German Animal Protection Law. Compounds used. Origin of chemical compounds used for this experimental series.
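The recording protocol above summarizes each experimental condition as the mean population spike amplitude in microvolt ± standard error of the mean, after first averaging the sweeps recorded 20 s apart within each slice. A minimal sketch of that summary step on made-up amplitude values is given below; it is only intended to make the averaging explicit and is unrelated to the commercial acquisition software mentioned in the text.

```python
import numpy as np

def summarize_condition(amplitudes_uv):
    """Mean +- SEM of population spike amplitudes (microvolt).

    amplitudes_uv: array of shape (n_slices, n_sweeps); the sweeps recorded
    20 s apart at one time point are first averaged within each slice."""
    per_slice = np.asarray(amplitudes_uv, dtype=float).mean(axis=1)
    mean = per_slice.mean()
    sem = per_slice.std(ddof=1) / np.sqrt(per_slice.size)
    return mean, sem

# illustrative values: 4 slices x 4 sweeps, single-stimulus condition (~1 mV)
single_stim = [[980, 1015, 1002, 991],
               [1100, 1088, 1123, 1107],
               [940, 955, 962, 948],
               [1050, 1041, 1066, 1049]]
mean, sem = summarize_condition(single_stim)
print(f"population spike amplitude: {mean:.0f} +- {sem:.0f} microvolt (n = 4 slices)")
```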
null
null
null
null
[ "Background", "Results", "a) Neurophysiological evidence for neuroprotective effects", "b) Functional interference with NMDA receptor activation", "c) Functional interference with AMPA receptor activation", "d) Functional interference with Kainate receptor activation", "e) Functional interference with metabotropic glutamate receptor activation", "Discussion", "Conclusions", "Authors' contributions" ]
[ "Rasagiline (N-propargyl-1-(R)-aminoindan) and selegiline are drugs prescribed for the treatment of Parkinson's disease. Both are believed to act by inhibition of monoamine oxidase B (MAO B). However, both are metabolized in a different way: rasagiline gives rise to aminoindan, a compound reported to have neuroprotective capabilities of its own, whereas selegiline gives rise to the neurotoxic metabolite methamphetamine [1,2]. Similar electropharmacograms obtained by quantitative brain field potential analysis were obtained from freely moving rats in the presence of rasagiline and its metabolite aminoindan (not inhibiting monoamine oxidase B). Selegiline-on the other hand-produced a time dependent biphasic action presumably due to the action of its active metabolites [3]. Available evidence suggests an additional mechanism of action for these drugs independently from MAO B inhibition.\nFor example, a neuroprotective action unrelated to MAO inhibition has been reported by [4] for rasagiline as well as for its major metabolite 1-(R)-aminoindan [5]. For review of neuroprotective effects of rasagiline and aminoindan see [6]. But again, no final mechanism has been reported to explain the proposed neuroprotective action. There is solid evidence of an involvement of glutamatergic transmission in neuroprotection. This calls for an experimental setup to dissect the possible interference of these compounds within the glutamatergic system. To our knowledge, no neurophysiological techniques have been applied up to now to characterize the effects of these compounds on glutamatergic transmission in the hippocampus. This model should be suitable since the communication between Schaffer-Collaterals and the hippocampal pyramidal cells takes place by using glutamate as transmitter.\nThe hippocampus slice preparation is a validated model for direct analysis of interaction of substances with living neuronal tissue [7,8]. Due to the preservation of the three dimensional structure of the hippocampus, drug effects on the excitability of pyramidal cells can be studied in a unique manner. Electric stimulation of Schaffer Collaterals leads to release of glutamate resulting in excitation of the postsynaptic pyramidal cells. The result of the electrical stimulation can be recorded as a so-called population spike (pop-spike). The amplitude of the resulting population spike represents the number of recruited pyramidal cells and relates to the extent of glutamatergic transmission. The advantage of the model not only consists in the possibility of physiological recording in vitro during 8 hours but also to modify the excitability of the system in order to create pathophysiological conditions like transient oxygen and glucose deprivation (OGD) [9].\nThe first part of the present investigation aimed at the characterization of the effects of rasagiline and its metabolite aminoindan in comparison to selegiline on glutamatergic transmission within a physiological environment and under pathophysiological conditions. The principle of the second part of the investigation was to use the enhancement of the pyramidal cell response (increased amplitudes of population spike) in the presence of highly specific and selective agonists of different glutamate receptors as a challenge. Accordingly, these responses were followed in the presence of several concentrations of rasagiline, aminoindan and selegiline. 
This approach should reveal great similarities between rasagiline and aminoindan on one side and a great difference to the action of selegiline on the other side.", "[SUBTITLE] a) Neurophysiological evidence for neuroprotective effects [SUBSECTION] Using single stimulus administration rasagiline and - to a lesser degree-selegiline attenuated the pyramidal cells response significantly at a concentration of 30 μM. In the presence of aminoindan, however, significant attenuation was observed already at 15 μM. At a concentration of 50 μM rasagiline and aminoindan reduced the amplitude by about 60%, selegiline by about 40%. The course of the concentration dependence is given in Figure 2 for all three compounds. Under the condition of theta burst stimuli, rasagiline was able to reduce the signal amplitude significantly at 10 μM, whereas the effect of selegiline reached statistical significance at a concentration of 15 μM. The effects of aminoindan became statistically significant already at a concentration of 7.5 μM. Thus, in the presence of rasagiline, aminoindan and selegiline a concentration dependent decrease of the amplitudes of the population spike could be observed during single shock stimulation as well as during theta burst stimulation. Effects of selegiline were weakest (Figure 2).\nConcentration dependent effects of rasagiline, aminoindan and selegiline on pyramidal cell activity in terms of changes of population spike amplitude. Results from single slices as obtained after single stimuli (SS) and after theta burst stimuli (TBS). Data are given in microvolt for a mean of four slices and standard error of the mean. Stars indicate statistical significance of p < 0.05 in comparison to control.\nIn order to proof, that this attenuation of glutamatergic transmission could be related to neuroprotective features of the compounds, a pathophysiological situation was created in slices by turning off oxygen and glucose for 10 minutes. This procedure succeeded in a breakdown of the signal amplitudes after electrical single stimuli by about 75%. This breakdown was nearly totally prevented (p < 0.05) by the presence of a concentration of 5 μM rasagiline or aminoindan in the superfusion medium but rarely by selegiline (p < 0.1). Time courses of the experiments are depicted in Figure 3. This effect was still visible but not statistically significant from control at time period 60 and 70 minutes after start of the experiment. Thus, rasagiline and aminoindan showed a clearly better neuroprotective effect than selegiline in this model.\nComplete time course of experiments. Bar indicates 10 min of oxygen and glucose deprivation (OGD) before measurement. Nearly complete prevention of OGD-induced break down of population spike amplitude (s. control) by rasagiline and aminoindan but only to a minor degree by selegiline (p < 0.1). Stars indicate statistical significance of p < 0.05 in comparison to control.\nUsing single stimulus administration rasagiline and - to a lesser degree-selegiline attenuated the pyramidal cells response significantly at a concentration of 30 μM. In the presence of aminoindan, however, significant attenuation was observed already at 15 μM. At a concentration of 50 μM rasagiline and aminoindan reduced the amplitude by about 60%, selegiline by about 40%. The course of the concentration dependence is given in Figure 2 for all three compounds. 
Under the condition of theta burst stimuli, rasagiline was able to reduce the signal amplitude significantly at 10 μM, whereas the effect of selegiline reached statistical significance at a concentration of 15 μM. The effects of aminoindan became statistically significant already at a concentration of 7.5 μM. Thus, in the presence of rasagiline, aminoindan and selegiline a concentration dependent decrease of the amplitudes of the population spike could be observed during single shock stimulation as well as during theta burst stimulation. Effects of selegiline were weakest (Figure 2).\nConcentration dependent effects of rasagiline, aminoindan and selegiline on pyramidal cell activity in terms of changes of population spike amplitude. Results from single slices as obtained after single stimuli (SS) and after theta burst stimuli (TBS). Data are given in microvolt for a mean of four slices and standard error of the mean. Stars indicate statistical significance of p < 0.05 in comparison to control.\nIn order to proof, that this attenuation of glutamatergic transmission could be related to neuroprotective features of the compounds, a pathophysiological situation was created in slices by turning off oxygen and glucose for 10 minutes. This procedure succeeded in a breakdown of the signal amplitudes after electrical single stimuli by about 75%. This breakdown was nearly totally prevented (p < 0.05) by the presence of a concentration of 5 μM rasagiline or aminoindan in the superfusion medium but rarely by selegiline (p < 0.1). Time courses of the experiments are depicted in Figure 3. This effect was still visible but not statistically significant from control at time period 60 and 70 minutes after start of the experiment. Thus, rasagiline and aminoindan showed a clearly better neuroprotective effect than selegiline in this model.\nComplete time course of experiments. Bar indicates 10 min of oxygen and glucose deprivation (OGD) before measurement. Nearly complete prevention of OGD-induced break down of population spike amplitude (s. control) by rasagiline and aminoindan but only to a minor degree by selegiline (p < 0.1). Stars indicate statistical significance of p < 0.05 in comparison to control.\n[SUBTITLE] b) Functional interference with NMDA receptor activation [SUBSECTION] In order to test a possible interference of rasagiline, aminoindan or selegiline with NMDA receptor activation, glutamatergic neurotransmission was modulated by ACBD, a very potent and selective NMDA receptor agonist. A concentration of 50 nM induced a significant enhancement of the population spike amplitude. Under the condition of single stimuli increase of the amplitude from 1106 to 1940 μV (176% of control value) could be observed (Figure 4). In the presence of rasagiline the amplitude remained at control value (changing from 1102 to 1185 μV). Statistically significant differences to the ACBD induced increase were already observed with a concentration of 1 μM of rasagiline (p < 0.01).\nConcentration dependent effects of rasagiline, aminoindan and selegiline in the presence of single stimuli (SS) or theta burst stimuli (TBS) after stimulation of the NMDA glutamate receptor by ACBD (n = 4 slices +- SEM). Statistically significant attenuation of pop-spike amplitude in comparison to ACBD-induced increases were obtained in the presence of 1 μM of rasagiline or aminoindan following single stimuli (SS). 
During theta burst stimulation (TBS) already a concentration of 0.3 μM of rasagiline or aminoindan became statistically significant. Stars indicate statistical significance of p < 0.05 in comparison to control.\nSimilar results were obtained in the presence of theta burst stimulation. Presence of ACBD in the superfusion medium increased the amplitude to 3173 μV. Rasagiline at a concentration of 5 μM attenuated the ACBD-induced signal down to 2074 μV (about control value). A statistically significant difference to ACBD-induced values was obtained at the very low concentration of 300 nM of rasagiline and aminoindan (p < 0.01). Thus, a concentration dependent attenuation of NMDA receptor induced increases of population spike amplitudes was recognized. Nearly identical results were seen in the presence of aminoindan (s. Figure 4). On the opposite, virtually no effect could be seen in the presence of selegiline up to a concentration of 5 μM. Thus, a clear difference could be observed between rasagiline and aminoindan on one site and selegiline on the other side with respect to functional antagonism of NMDA glutamate receptor stimulation.\nIn order to test a possible interference of rasagiline, aminoindan or selegiline with NMDA receptor activation, glutamatergic neurotransmission was modulated by ACBD, a very potent and selective NMDA receptor agonist. A concentration of 50 nM induced a significant enhancement of the population spike amplitude. Under the condition of single stimuli increase of the amplitude from 1106 to 1940 μV (176% of control value) could be observed (Figure 4). In the presence of rasagiline the amplitude remained at control value (changing from 1102 to 1185 μV). Statistically significant differences to the ACBD induced increase were already observed with a concentration of 1 μM of rasagiline (p < 0.01).\nConcentration dependent effects of rasagiline, aminoindan and selegiline in the presence of single stimuli (SS) or theta burst stimuli (TBS) after stimulation of the NMDA glutamate receptor by ACBD (n = 4 slices +- SEM). Statistically significant attenuation of pop-spike amplitude in comparison to ACBD-induced increases were obtained in the presence of 1 μM of rasagiline or aminoindan following single stimuli (SS). During theta burst stimulation (TBS) already a concentration of 0.3 μM of rasagiline or aminoindan became statistically significant. Stars indicate statistical significance of p < 0.05 in comparison to control.\nSimilar results were obtained in the presence of theta burst stimulation. Presence of ACBD in the superfusion medium increased the amplitude to 3173 μV. Rasagiline at a concentration of 5 μM attenuated the ACBD-induced signal down to 2074 μV (about control value). A statistically significant difference to ACBD-induced values was obtained at the very low concentration of 300 nM of rasagiline and aminoindan (p < 0.01). Thus, a concentration dependent attenuation of NMDA receptor induced increases of population spike amplitudes was recognized. Nearly identical results were seen in the presence of aminoindan (s. Figure 4). On the opposite, virtually no effect could be seen in the presence of selegiline up to a concentration of 5 μM. 
Thus, a clear difference could be observed between rasagiline and aminoindan on one side and selegiline on the other with respect to functional antagonism of NMDA glutamate receptor stimulation.\n[SUBTITLE] c) Functional interference with AMPA receptor activation [SUBSECTION] To test for a possible interference of rasagiline, aminoindan or selegiline with AMPA receptor activation, glutamatergic neurotransmission was stimulated with fluorowillardiine, a potent and selective AMPA receptor agonist. A concentration of 100 nM induced a significant enhancement of the population spike amplitude: under single-stimulus conditions the amplitude increased from 1135 to 1692 μV (151% of the control value) (Figure 5). In the presence of 5 μM rasagiline the amplitude remained at the control value (changing from 1089 to 1137 μV).\nConcentration dependent effects of rasagiline, aminoindan and selegiline in the presence of single stimuli (SS) or theta burst stimuli (TBS) after stimulation of the AMPA glutamate receptor by fluorowillardiine (n = 4 slices ± SEM). Statistically significant attenuation of pop-spike amplitude in comparison to the fluorowillardiine-induced increase was obtained in the presence of 1 μM of rasagiline or aminoindan following single stimuli (SS). During theta burst stimulation (TBS) a concentration of 1 μM of rasagiline or aminoindan was already statistically significant. Stars indicate statistical significance of p < 0.05 in comparison to control.\nStatistically significant differences from the effect of fluorowillardiine were observed with 2.5 μM of rasagiline (p < 0.02) and aminoindan (p < 0.01). Similar results were obtained with theta burst stimulation: fluorowillardiine increased the amplitude to 2873 μV, and rasagiline at a concentration of 5 μM attenuated the fluorowillardiine-induced signal to a control value of 1950 μV. A statistically significant difference from the fluorowillardiine-induced values was obtained at the very low concentration of 1 μM of rasagiline (p < 0.05).\nEven stronger effects were seen in the presence of aminoindan (see Figure 5). Aminoindan attenuated the amplitude of the population spike from 2888 μV down to 1152 μV, which is well below the control value. Statistical significance in comparison to AMPA receptor stimulation was obtained already at 1 μM of aminoindan. Thus, a concentration dependent attenuation of the AMPA receptor induced increase of population spike amplitude was recognized for rasagiline and even more so for its metabolite aminoindan. In contrast, virtually no effect could be seen in the presence of selegiline up to a concentration of 5 μM. Thus, a clear difference could be observed between rasagiline and aminoindan on one side and selegiline on the other, also with respect to functional antagonism of AMPA glutamate receptor stimulation.\n[SUBTITLE] d) Functional interference with Kainate receptor activation [SUBSECTION] To test for a possible interference of rasagiline, aminoindan or selegiline with Kainate receptor activation, glutamatergic neurotransmission was stimulated with ATPA, a potent and selective Kainate receptor agonist. A concentration of 50 nM induced a significant enhancement of the population spike amplitude: under single-stimulus conditions the amplitude increased from 1097 to 1904 μV (174% of the control value) (Table 2). Virtually no effect on this signal could be seen in the presence of rasagiline or aminoindan up to a concentration of 5 μM. In the presence of selegiline, however, the amplitude remained at control values (changing from 1083 to 1257 μV). Statistically significant differences from the ATPA-induced increase were observed already with 2.5 μM of selegiline (p < 0.01). Similar results were obtained with theta burst stimulation: ATPA increased the amplitude to 3055 μV, and selegiline at a concentration of 5 μM attenuated the ATPA-induced signal down to 2134 μV. Thus, a concentration dependent attenuation of the Kainate receptor induced increase of population spike amplitude was recognized only for selegiline, not for rasagiline or aminoindan. Again a clear difference could be observed between the effects of rasagiline and aminoindan on one side and selegiline on the other, but in the reversed direction.\nAmplitudes of population spike\nEffect of selegiline on the Kainate receptor dependent increase of population spike amplitude, but lack of effect of rasagiline or aminoindan. Values are given as mean of n = 4 slices ± S.E.M. Statistical significance versus the ATPA signal is given as p-value.\n[SUBTITLE] e) Functional interference with metabotropic glutamate receptor activation [SUBSECTION] To test for a possible interference of rasagiline, aminoindan or selegiline with metabotropic glutamate receptor activation, ACPD, a potent and selective metabotropic glutamate receptor agonist, was used to enhance pyramidal cell responses. A concentration of 25 nM induced a significant enhancement of the population spike amplitude: under single-stimulus conditions the amplitude increased from 1068 to 2003 μV (188% of the control value) (Figure 6). In the presence of rasagiline under SS conditions the amplitude remained at the control value (changing from 1111 to 1134 μV). Statistically significant differences from the ACPD-induced increase were observed with 1 μM of rasagiline (p < 0.01). Similar results were obtained with theta burst stimulation: ACPD increased the amplitude to 3027 μV, and rasagiline at a concentration of 5 μM attenuated the ACPD-induced signal down to the control value (2050 μV). Thus, a concentration dependent attenuation of the metabotropic glutamate receptor induced increase of population spike amplitude was recognized. Nearly identical results were seen in the presence of aminoindan (see Figure 6). In contrast, virtually no effect could be seen in the presence of selegiline at a concentration of 5 μM. Thus, a clear difference could be observed between rasagiline and aminoindan on one side and selegiline on the other with respect to functional antagonism of metabotropic glutamate receptor stimulation.\nConcentration dependent effect of rasagiline, aminoindan or selegiline in the presence of single stimuli (SS) or theta burst stimuli (TBS) after stimulation of the metabotropic glutamate receptor by ACPD. Data are presented as the mean of n = 4 slices ± SEM. Statistically significant attenuation of pop-spike amplitude in comparison to the ACPD-induced increase was obtained in the presence of 2.5 μM of rasagiline and aminoindan following single stimuli (SS) or theta burst stimulation. No effect was observed with selegiline. Stars indicate statistical significance of p < 0.05 in comparison to control.", 
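The percent-of-control figures quoted in these subsections are simple ratios of the agonist-enhanced population spike amplitude to the baseline amplitude, and the attenuation statements compare how much of the agonist-induced increase remains in the presence of a drug. The short Python sketch below illustrates that arithmetic only; the helper names are ours and the drug value in the attenuation example is hypothetical, so this is a worked illustration rather than the authors' analysis code.

```python
# Illustrative arithmetic only: percent of control and percent attenuation of the
# agonist-induced increase, using the single-stimulus ATPA values quoted above.

def percent_of_control(baseline_uv: float, treated_uv: float) -> float:
    """Population spike amplitude after treatment as a percentage of baseline."""
    return 100.0 * treated_uv / baseline_uv

def percent_attenuation(baseline_uv: float, agonist_uv: float, agonist_plus_drug_uv: float) -> float:
    """Fraction of the agonist-induced increase removed in the presence of the drug (%)."""
    increase = agonist_uv - baseline_uv
    remaining = agonist_plus_drug_uv - baseline_uv
    return 100.0 * (1.0 - remaining / increase)

baseline, atpa = 1097.0, 1904.0  # μV, single stimuli (Table 2)
print(f"ATPA response: {percent_of_control(baseline, atpa):.0f}% of control")  # ~174%
print(f"Attenuation for a hypothetical drug value of 1257 μV: "
      f"{percent_attenuation(baseline, atpa, 1257.0):.0f}% of the ATPA increase")
```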
"Using single stimulus administration, rasagiline and, to a lesser degree, selegiline attenuated the pyramidal cell response significantly at a concentration of 30 μM. In the presence of aminoindan, however, significant attenuation was observed already at 15 μM. At a concentration of 50 μM rasagiline and aminoindan reduced the amplitude by about 60%, selegiline by about 40%. The course of the concentration dependence is given in Figure 2 for all three compounds. Under the condition of theta burst stimuli, rasagiline reduced the signal amplitude significantly at 10 μM, whereas the effect of selegiline reached statistical significance at a concentration of 15 μM. The effect of aminoindan became statistically significant already at a concentration of 7.5 μM. Thus, in the presence of rasagiline, aminoindan and selegiline a concentration dependent decrease of the population spike amplitude could be observed during single shock stimulation as well as during theta burst stimulation. The effects of selegiline were weakest (Figure 2).\nConcentration dependent effects of rasagiline, aminoindan and selegiline on pyramidal cell activity in terms of changes of population spike amplitude. Results from single slices as obtained after single stimuli (SS) and after theta burst stimuli (TBS). Data are given in microvolt as the mean of four slices and standard error of the mean. Stars indicate statistical significance of p < 0.05 in comparison to control.\nTo prove that this attenuation of glutamatergic transmission could be related to neuroprotective features of the compounds, a pathophysiological situation was created in the slices by turning off oxygen and glucose for 10 minutes. This procedure resulted in a breakdown of the signal amplitudes after electrical single stimuli by about 75%. The breakdown was almost completely prevented (p < 0.05) by the presence of 5 μM rasagiline or aminoindan in the superfusion medium, but barely by selegiline (p < 0.1). Time courses of the experiments are depicted in Figure 3. The effect was still visible, but no longer statistically significant versus control, at 60 and 70 minutes after the start of the experiment. Thus, rasagiline and aminoindan showed a clearly better neuroprotective effect than selegiline in this model.\nComplete time course of the experiments. The bar indicates 10 min of oxygen and glucose deprivation (OGD) before measurement. Nearly complete prevention of the OGD-induced breakdown of population spike amplitude (see control) by rasagiline and aminoindan, but only to a minor degree by selegiline (p < 0.1). Stars indicate statistical significance of p < 0.05 in comparison to control.", "To test for a possible interference of rasagiline, aminoindan or selegiline with NMDA receptor activation, glutamatergic neurotransmission was modulated with ACBD, a potent and selective NMDA receptor agonist. A concentration of 50 nM induced a significant enhancement of the population spike amplitude: under single-stimulus conditions the amplitude increased from 1106 to 1940 μV (176% of the control value) (Figure 4). In the presence of rasagiline the amplitude remained at the control value (changing from 1102 to 1185 μV). Statistically significant differences from the ACBD-induced increase were observed already with 1 μM of rasagiline (p < 0.01).\nConcentration dependent effects of rasagiline, aminoindan and selegiline in the presence of single stimuli (SS) or theta burst stimuli (TBS) after stimulation of the NMDA glutamate receptor by ACBD (n = 4 slices ± SEM). Statistically significant attenuation of pop-spike amplitude in comparison to the ACBD-induced increase was obtained in the presence of 1 μM of rasagiline or aminoindan following single stimuli (SS). During theta burst stimulation (TBS) a concentration of 0.3 μM of rasagiline or aminoindan was already statistically significant. Stars indicate statistical significance of p < 0.05 in comparison to control.\nSimilar results were obtained with theta burst stimulation: the presence of ACBD in the superfusion medium increased the amplitude to 3173 μV, and rasagiline at a concentration of 5 μM attenuated the ACBD-induced signal down to 2074 μV (about the control value). A statistically significant difference from the ACBD-induced values was obtained at the very low concentration of 300 nM of rasagiline and aminoindan (p < 0.01). Thus, a concentration dependent attenuation of the NMDA receptor induced increase of population spike amplitude was recognized. Nearly identical results were seen in the presence of aminoindan (see Figure 4). In contrast, virtually no effect could be seen in the presence of selegiline up to a concentration of 5 μM. Thus, a clear difference could be observed between rasagiline and aminoindan on one side and selegiline on the other with respect to functional antagonism of NMDA glutamate receptor stimulation.", "To test for a possible interference of rasagiline, aminoindan or selegiline with AMPA receptor activation, glutamatergic neurotransmission was stimulated with fluorowillardiine, a potent and selective AMPA receptor agonist. A concentration of 100 nM induced a significant enhancement of the population spike amplitude: under single-stimulus conditions the amplitude increased from 1135 to 1692 μV (151% of the control value) (Figure 5). In the presence of 5 μM of rasagiline the amplitude remained at the control value (changing from 1089 to 1137 μV).\nConcentration dependent effects of rasagiline, aminoindan and selegiline in the presence of single stimuli (SS) or theta burst stimuli (TBS) after stimulation of the AMPA glutamate receptor by fluorowillardiine (n = 4 slices ± SEM). Statistically significant attenuation of pop-spike amplitude in comparison to the fluorowillardiine-induced increase was obtained in the presence of 1 μM of rasagiline or aminoindan following single stimuli (SS). During theta burst stimulation (TBS) a concentration of 1 μM of rasagiline or aminoindan was already statistically significant. Stars indicate statistical significance of p < 0.05 in comparison to control.\nStatistically significant differences from the effect of fluorowillardiine were observed with 2.5 μM of rasagiline (p < 0.02) and aminoindan (p < 0.01). Similar results were obtained with theta burst stimulation: fluorowillardiine increased the amplitude to 2873 μV, and rasagiline at a concentration of 5 μM attenuated the fluorowillardiine-induced signal to a control value of 1950 μV. A statistically significant difference from the fluorowillardiine-induced values was obtained at the very low concentration of 1 μM of rasagiline (p < 0.05).\nEven stronger effects were seen in the presence of aminoindan (see Figure 5). Aminoindan attenuated the amplitude of the population spike from 2888 μV down to 1152 μV, which is well below the control value. Statistical significance in comparison to AMPA receptor stimulation was obtained already at 1 μM of aminoindan. Thus, a concentration dependent attenuation of the AMPA receptor induced increase of population spike amplitude was recognized for rasagiline and even more so for its metabolite aminoindan. In contrast, virtually no effect could be seen in the presence of selegiline up to a concentration of 5 μM. Thus, a clear difference could be observed between rasagiline and aminoindan on one side and selegiline on the other, also with respect to functional antagonism of AMPA glutamate receptor stimulation.", 
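All group values in these subsections are means over n = 4 slices with the standard error of the mean, compared against the agonist-alone response. The report does not state which statistical test produced the quoted p-values, so the sketch below, with invented per-slice amplitudes and Welch's t-test chosen purely as an example, only shows one plausible way such a comparison could be computed; it is not the authors' analysis.

```python
# Minimal sketch: mean ± SEM over n = 4 slices and a comparison against the
# agonist-alone condition. Per-slice values are hypothetical, and the use of
# Welch's t-test is an assumption, not taken from the original report.
import numpy as np
from scipy import stats

agonist_alone = np.array([1850.0, 1990.0, 1920.0, 2000.0])      # μV, hypothetical
agonist_plus_drug = np.array([1150.0, 1210.0, 1090.0, 1180.0])  # μV, hypothetical

for name, values in (("agonist alone", agonist_alone), ("agonist + drug", agonist_plus_drug)):
    sem = values.std(ddof=1) / np.sqrt(values.size)
    print(f"{name}: {values.mean():.0f} ± {sem:.0f} μV (n = {values.size})")

t_stat, p_value = stats.ttest_ind(agonist_plus_drug, agonist_alone, equal_var=False)
print(f"Welch's t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```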
"To test for a possible interference of rasagiline, aminoindan or selegiline with Kainate receptor activation, glutamatergic neurotransmission was stimulated with ATPA, a potent and selective Kainate receptor agonist. A concentration of 50 nM induced a significant enhancement of the population spike amplitude: under single-stimulus conditions the amplitude increased from 1097 to 1904 μV (174% of the control value) (Table 2). Virtually no effect on this signal could be seen in the presence of rasagiline or aminoindan up to a concentration of 5 μM. In the presence of selegiline, however, the amplitude remained at control values (changing from 1083 to 1257 μV). Statistically significant differences from the ATPA-induced increase were observed already with 2.5 μM of selegiline (p < 0.01). Similar results were obtained with theta burst stimulation: ATPA increased the amplitude to 3055 μV, and selegiline at a concentration of 5 μM attenuated the ATPA-induced signal down to 2134 μV. Thus, a concentration dependent attenuation of the Kainate receptor induced increase of population spike amplitude was recognized only for selegiline, not for rasagiline or aminoindan. Again a clear difference could be observed between the effects of rasagiline and aminoindan on one side and selegiline on the other, but in the reversed direction.\nAmplitudes of population spike\nEffect of selegiline on the Kainate receptor dependent increase of population spike amplitude, but lack of effect of rasagiline or aminoindan. Values are given as mean of n = 4 slices ± S.E.M. Statistical significance versus the ATPA signal is given as p-value.", "To test for a possible interference of rasagiline, aminoindan or selegiline with metabotropic glutamate receptor activation, ACPD, a potent and selective metabotropic glutamate receptor agonist, was used to enhance pyramidal cell responses. A concentration of 25 nM induced a significant enhancement of the population spike amplitude: under single-stimulus conditions the amplitude increased from 1068 to 2003 μV (188% of the control value) (Figure 6). In the presence of rasagiline under SS conditions the amplitude remained at the control value (changing from 1111 to 1134 μV). Statistically significant differences from the ACPD-induced increase were observed with 1 μM of rasagiline (p < 0.01). Similar results were obtained with theta burst stimulation: ACPD increased the amplitude to 3027 μV, and rasagiline at a concentration of 5 μM attenuated the ACPD-induced signal down to the control value (2050 μV). Thus, a concentration dependent attenuation of the metabotropic glutamate receptor induced increase of population spike amplitude was recognized. Nearly identical results were seen in the presence of aminoindan (see Figure 6). In contrast, virtually no effect could be seen in the presence of selegiline at a concentration of 5 μM. Thus, a clear difference could be observed between rasagiline and aminoindan on one side and selegiline on the other with respect to functional antagonism of metabotropic glutamate receptor stimulation.\nConcentration dependent effect of rasagiline, aminoindan or selegiline in the presence of single stimuli (SS) or theta burst stimuli (TBS) after stimulation of the metabotropic glutamate receptor by ACPD. Data are presented as the mean of n = 4 slices ± SEM. Statistically significant attenuation of pop-spike amplitude in comparison to the ACPD-induced increase was obtained in the presence of 2.5 μM of rasagiline and aminoindan following single stimuli (SS) or theta burst stimulation. No effect was observed with selegiline. Stars indicate statistical significance of p < 0.05 in comparison to control.", "The rat hippocampal in vitro slice preparation has been used under physiological and pathophysiological conditions. Two monoamine oxidase B inhibitors (rasagiline and selegiline) and one compound lacking monoamine oxidase B inhibition (aminoindan) have been compared with respect to their ability to attenuate glutamatergic transmission, as represented by decreasing responses of pyramidal cells to electric stimulation. This result is interpreted as functional neuroprotection against massive glutamatergic excitation.\nSince simulation of ischemic conditions by oxygen-glucose deprivation (OGD) likewise showed that rasagiline and aminoindan prevented the breakdown of excitability, these effects probably also relate to neuroprotection (for selegiline this could be shown only to a minor degree). The term neuroprotection is usually taken to describe effects of drugs which might result in disease modifying actions during the course of Alzheimer's or Parkinson's disease. With respect to the latter, better neuroprotective and neurorestorative actions have been described for rasagiline in comparison to selegiline against lactacystin-induced nigrostriatal dopaminergic degeneration [27]. Also in a tissue culture model using PC12 cells under oxygen-glucose deprivation, rasagiline was clearly more effective than selegiline [28]. In addition, these authors could show that the neuroprotective effects of selegiline were blocked by its metabolite l-methamphetamine, whereas aminoindan added to the effects of rasagiline. Taken together, these findings suggest that the aminoindan moiety might be more important for neuroprotection than the propargyl moiety, as suspected earlier [29]. Our results are therefore in line with earlier preclinical evidence for a neuroprotective action of rasagiline and its metabolite aminoindan. The functional impairment of glutamate dependent transmission is obviously not dependent on inhibition of monoamine oxidase B. However, a link between indirect inhibition of monoamine oxidase B and blockade of glyceraldehyde-3-phosphate dehydrogenase has recently been reported, which could also serve as an explanation for the neuroprotective effects of rasagiline, selegiline and aminoindan [30].\nThe second part of the present investigation provides solid evidence that both rasagiline and selegiline interact functionally with glutamatergic receptor mediated transmission in addition to their known effects on MAO B, but by a different mechanism of action. 
The effects must be independent of the enzyme inhibition for the following reasons: firstly, aminoindan does not inhibit MAO B; secondly, the two MAO inhibitors, rasagiline and selegiline, produce different receptor-mediated functional consequences within the glutamatergic system. This implies that rasagiline and its metabolite aminoindan probably develop clinical properties different from those of selegiline.\nA hypothesis exists that particular glutamate receptors of the N-methyl-D-aspartate type are over-activated in a tonic rather than a phasic manner, which under chronic conditions leads to neuronal damage [31]. Another clinical implication can be suspected from the combined attenuation of NMDA and AMPA receptor dependent effects: simultaneous administration of sub-threshold dosages of NMDA and AMPA antagonists had a positive influence on the development of L-dopa induced dyskinesias in rats and monkeys [32]. These data are corroborated by earlier findings showing glutamate supersensitivity in the putamen of Parkinson patients treated chronically with L-dopa [33]. A common disadvantage of currently available, rather unselective NMDA receptor antagonists is the occurrence of adverse effects like hallucinations [34]. Therefore rasagiline and its metabolite aminoindan, which do not induce such side effects (unlike selegiline, with methamphetamine as its metabolite), should have a positive effect on motor fluctuations in Parkinson patients.\nWith respect to the involvement of metabotropic glutamate receptors in Parkinson's disease, there is evidence that they are involved in the pathologically altered circuitry of the basal ganglia. Several antagonists at this receptor alleviated L-dopa induced dyskinesia in 6-OH-DA-lesioned rats [35]. Spontaneous firing of neurons in the primate pallidum was increased by the metabotropic glutamate receptor agonist DHPG and decreased by selective antagonists [36], which is in line with our results. Since the glutamatergic input from the subthalamic nucleus shows over-activity during the disease, antagonists could very well compensate for this.", "Taking the effects of rasagiline and aminoindan together, not only could neuroprotective effects be measured but also attenuation of NMDA, AMPA and metabotropic receptor mediated over-excitability of the glutamatergic system; motor complications in Parkinson's disease that are induced by imbalance of the glutamatergic system should therefore also be ameliorated by a monotherapy with rasagiline. In addition, the newly discovered mechanism of action of rasagiline and aminoindan should be considered in the light of an extension of the clinical indication, i.e. to treat Alzheimer's disease (for the relation between Alzheimer's disease and the glutamatergic system see [37,38]). Last but not least, over-activation of the glutamatergic system is also one of the consequences of stroke, amyotrophic lateral sclerosis, Huntington's disease and neuropathic pain [39]. It remains to be tested whether pharmacological intervention with rasagiline and its metabolite aminoindan provides a valuable therapeutic strategy for the treatment of these diseases in addition to Parkinson's disease.", "WD provided the electrophysiological technology, supervised the performance of the experiments, interpreted the results and wrote the manuscript. JAH initiated the study and made major contributions to the design. He also provided important information on the pharmacology of the preparation. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "a) Neurophysiological evidence for neuroprotective effects", "b) Functional interference with NMDA receptor activation", "c) Functional interference with AMPA receptor activation", "d) Functional interference with Kainate receptor activation", "e) Functional interference with metabotropic glutamate receptor activation", "Discussion", "Conclusions", "Authors' contributions" ]
[ "Rasagiline (N-propargyl-1-(R)-aminoindan) and selegiline are drugs prescribed for the treatment of Parkinson's disease. Both are believed to act by inhibition of monoamine oxidase B (MAO B). However, both are metabolized in a different way: rasagiline gives rise to aminoindan, a compound reported to have neuroprotective capabilities of its own, whereas selegiline gives rise to the neurotoxic metabolite methamphetamine [1,2]. Similar electropharmacograms obtained by quantitative brain field potential analysis were obtained from freely moving rats in the presence of rasagiline and its metabolite aminoindan (not inhibiting monoamine oxidase B). Selegiline-on the other hand-produced a time dependent biphasic action presumably due to the action of its active metabolites [3]. Available evidence suggests an additional mechanism of action for these drugs independently from MAO B inhibition.\nFor example, a neuroprotective action unrelated to MAO inhibition has been reported by [4] for rasagiline as well as for its major metabolite 1-(R)-aminoindan [5]. For review of neuroprotective effects of rasagiline and aminoindan see [6]. But again, no final mechanism has been reported to explain the proposed neuroprotective action. There is solid evidence of an involvement of glutamatergic transmission in neuroprotection. This calls for an experimental setup to dissect the possible interference of these compounds within the glutamatergic system. To our knowledge, no neurophysiological techniques have been applied up to now to characterize the effects of these compounds on glutamatergic transmission in the hippocampus. This model should be suitable since the communication between Schaffer-Collaterals and the hippocampal pyramidal cells takes place by using glutamate as transmitter.\nThe hippocampus slice preparation is a validated model for direct analysis of interaction of substances with living neuronal tissue [7,8]. Due to the preservation of the three dimensional structure of the hippocampus, drug effects on the excitability of pyramidal cells can be studied in a unique manner. Electric stimulation of Schaffer Collaterals leads to release of glutamate resulting in excitation of the postsynaptic pyramidal cells. The result of the electrical stimulation can be recorded as a so-called population spike (pop-spike). The amplitude of the resulting population spike represents the number of recruited pyramidal cells and relates to the extent of glutamatergic transmission. The advantage of the model not only consists in the possibility of physiological recording in vitro during 8 hours but also to modify the excitability of the system in order to create pathophysiological conditions like transient oxygen and glucose deprivation (OGD) [9].\nThe first part of the present investigation aimed at the characterization of the effects of rasagiline and its metabolite aminoindan in comparison to selegiline on glutamatergic transmission within a physiological environment and under pathophysiological conditions. The principle of the second part of the investigation was to use the enhancement of the pyramidal cell response (increased amplitudes of population spike) in the presence of highly specific and selective agonists of different glutamate receptors as a challenge. Accordingly, these responses were followed in the presence of several concentrations of rasagiline, aminoindan and selegiline. 
This approach should reveal great similarities between rasagiline and aminoindan on one side and a great difference from the action of selegiline on the other side.", "Hippocampus slices were obtained from 43 adult male Sprague-Dawley rats (Charles River Wiga, Sulzbach, Germany). Rats were kept under a reversed day/night cycle for 2 weeks prior to the start of the experiments, to allow recording of in vitro activity from slices during the active phase of their circadian rhythm [10,11]. Animals were exsanguinated under ether anaesthesia, the brain was removed and the hippocampal formation was isolated under microstereoscopic sight. The midsection of the hippocampus was fixed to the table of a vibrating microtome (Rhema Labortechnik, Hofheim, Germany) using a cyanoacrylate adhesive, submerged in chilled bicarbonate-buffered saline (artificial cerebrospinal fluid (ACSF): NaCl 124 mM, KCl 5 mM, CaCl2 2 mM, MgSO4 2 mM, NaHCO3 26 mM, glucose 10 mM), and cut into slices of 400 μm thickness. All slices were pre-incubated for at least 1 h in carbogen-saturated ACSF (pH 7.4) in a pre-chamber before use [12].\nDuring the experiment the slices were held and treated in a special superfusion chamber (List Electronics, Darmstadt, Germany) according to [13] at 35°C [14]. Five slices per rat were used. The preparation was superfused with ACSF at 220 ml/h. Electrical stimulation (200 μA constant current pulses of 200 μs pulse width) of the Schaffer Collaterals within the CA2 area and recording of extracellular field potentials from the pyramidal cell layer of CA1 [12] were performed according to conventional electrophysiological methods using the "Labteam" computer system and the "NeuroTool" software package (MediSyst GmbH, Linden, Germany). Measurements were performed at 10 min intervals in order to avoid potentiation mechanisms after single stimuli (the first recording at 10 min was discarded for stability purposes). Four stimulations, each 20 s apart, were delivered at each time point. After averaging the last three of the four responses to single stimuli (SS) to give one value, potentiation was induced by applying a theta burst type pattern (TBS; [7]). The amplitudes of three signals 20 seconds apart were averaged to give the mean of the absolute voltage values (microvolt) ± standard error of the mean for each experimental condition (single stimulus or theta burst stimulation). Electrical stimulation of the Schaffer Collaterals within the CA2 area with single stimuli resulted in stable responses of the pyramidal cells in the form of population spikes with an amplitude of about 1 mV after single stimuli and about 2 mV after theta burst stimulation (TBS) (a representative example is given in Figure 1). Oxygen and glucose deprivation (OGD) was performed in analogy to [15] by shutting off oxygen and glucose for 10 minutes; in this case glucose was replaced by sucrose.\nDocumentation of original signals showing the effects of using single stimuli (SS) or theta burst stimulation (TBS) in control slices (left panel) or in the presence of rasagiline (right panel) diluted in artificial cerebrospinal fluid (ACSF). The amplitude is calculated from baseline to the downward deflection of the signal (shaded). Stimulus artefacts are omitted for the sake of clarity. Scales: time is given in milliseconds (ms), amplitude in millivolt (mV).\nFor stimulation of the glutamate receptors (NMDA, AMPA, Kainate and metabotropic receptors) four agonists were used, respectively: trans-1-aminocyclobutane-1,3-dicarboxylic acid (ACBD; [16]), (S)-(-)-α-amino-5-fluoro-3,4-dihydro-2,4-dioxo-1(2H)-pyrimidinepropanoic acid (S-fluorowillardiine; [17-19]), (RS)-2-amino-3-(3-hydroxy-5-tert-butylisoxazol-4-yl)propanoic acid (ATPA; [20-23]) and (±)-1-aminocyclopentane-trans-1,3-dicarboxylic acid (t-ACPD; [24-26]). All agonists were tested in pilot experiments in order to detect a concentration leading to strong increases of population spike amplitude in the presence of single stimuli (SS) and theta burst stimulation (TBS). The origin of the chemicals is given in Table 1. The permission to keep animals for this purpose was obtained from the governmental authorities (dated 2009-09-01, document no. 0200052529). Experiments were performed in accordance with the German Animal Protection Law.\nCompounds used\nOrigin of chemical compounds used for this experimental series.", 
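The Methods describe recording four responses 20 s apart at each 10-min time point, averaging the last three of them, and measuring each amplitude from the baseline down to the negative deflection of the population spike. The Python sketch below illustrates that bookkeeping on synthetic traces; the simple peak measure and the invented traces are assumptions made for illustration, not the original acquisition or analysis software.

```python
# Illustrative sketch of the per-time-point averaging described above: four sweeps
# recorded 20 s apart, amplitude taken from the pre-stimulus baseline to the
# negative peak, and the last three sweeps averaged. Traces are synthetic.
import numpy as np

def pop_spike_amplitude(trace_uv: np.ndarray, baseline_samples: int = 10) -> float:
    """Amplitude (μV) from the pre-stimulus level down to the negative peak."""
    baseline = trace_uv[:baseline_samples].mean()
    return baseline - trace_uv.min()

rng = np.random.default_rng(0)
time = np.arange(100)
sweeps = [-1000.0 * np.exp(-0.5 * ((time - 40) / 5.0) ** 2) + rng.normal(0.0, 20.0, time.size)
          for _ in range(4)]

amplitudes = [pop_spike_amplitude(sweep) for sweep in sweeps]
reported = float(np.mean(amplitudes[1:]))  # discard the first sweep, average the last three
print("single-sweep amplitudes (μV):", [round(a) for a in amplitudes])
print(f"value reported for this time point: {reported:.0f} μV")
```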
"Using single stimulus administration, rasagiline and, to a lesser degree, selegiline attenuated the pyramidal cell response significantly at a concentration of 30 μM. In the presence of aminoindan, however, significant attenuation was observed already at 15 μM. At a concentration of 50 μM rasagiline and aminoindan reduced the amplitude by about 60%, selegiline by about 40%. The course of the concentration dependence is given in Figure 2 for all three compounds. Under the condition of theta burst stimuli, rasagiline reduced the signal amplitude significantly at 10 μM, whereas the effect of selegiline reached statistical significance at a concentration of 15 μM. The effect of aminoindan became statistically significant already at a concentration of 7.5 μM. Thus, in the presence of rasagiline, aminoindan and selegiline a concentration dependent decrease of the population spike amplitude could be observed during single shock stimulation as well as during theta burst stimulation. The effects of selegiline were weakest (Figure 2).\nConcentration dependent effects of rasagiline, aminoindan and selegiline on pyramidal cell activity in terms of changes of population spike amplitude. Results from single slices as obtained after single stimuli (SS) and after theta burst stimuli (TBS). Data are given in microvolt as the mean of four slices and standard error of the mean. Stars indicate statistical significance of p < 0.05 in comparison to control.\nTo prove that this attenuation of glutamatergic transmission could be related to neuroprotective features of the compounds, a pathophysiological situation was created in the slices by turning off oxygen and glucose for 10 minutes. This procedure resulted in a breakdown of the signal amplitudes after electrical single stimuli by about 75%. The breakdown was almost completely prevented (p < 0.05) by the presence of 5 μM rasagiline or aminoindan in the superfusion medium, but barely by selegiline (p < 0.1). Time courses of the experiments are depicted in Figure 3. The effect was still visible, but no longer statistically significant versus control, at 60 and 70 minutes after the start of the experiment. Thus, rasagiline and aminoindan showed a clearly better neuroprotective effect than selegiline in this model.\nComplete time course of the experiments. The bar indicates 10 min of oxygen and glucose deprivation (OGD) before measurement. Nearly complete prevention of the OGD-induced breakdown of population spike amplitude (see control) by rasagiline and aminoindan, but only to a minor degree by selegiline (p < 0.1). Stars indicate statistical significance of p < 0.05 in comparison to control.", "To test for a possible interference of rasagiline, aminoindan or selegiline with NMDA receptor activation, glutamatergic neurotransmission was modulated with ACBD, a potent and selective NMDA receptor agonist. A concentration of 50 nM induced a significant enhancement of the population spike amplitude: under single-stimulus conditions the amplitude increased from 1106 to 1940 μV (176% of the control value) (Figure 4). In the presence of rasagiline the amplitude remained at the control value (changing from 1102 to 1185 μV). 
Statistically significant differences from the ACBD-induced increase were already observed with a concentration of 1 μM of rasagiline (p < 0.01).\nConcentration dependent effects of rasagiline, aminoindan and selegiline in the presence of single stimuli (SS) or theta burst stimuli (TBS) after stimulation of the NMDA glutamate receptor by ACBD (n = 4 slices ± SEM). Statistically significant attenuation of pop-spike amplitude in comparison to ACBD-induced increases was obtained in the presence of 1 μM of rasagiline or aminoindan following single stimuli (SS). During theta burst stimulation (TBS), a concentration of as little as 0.3 μM of rasagiline or aminoindan was already statistically significant. Stars indicate statistical significance of p < 0.05 in comparison to control.\nSimilar results were obtained in the presence of theta burst stimulation. The presence of ACBD in the superfusion medium increased the amplitude to 3173 μV. Rasagiline at a concentration of 5 μM attenuated the ACBD-induced signal down to 2074 μV (approximately the control value). A statistically significant difference from ACBD-induced values was obtained at the very low concentration of 300 nM of rasagiline and aminoindan (p < 0.01). Thus, a concentration dependent attenuation of NMDA receptor induced increases of population spike amplitudes was recognized. Nearly identical results were seen in the presence of aminoindan (see Figure 4). In contrast, virtually no effect could be seen in the presence of selegiline up to a concentration of 5 μM. Thus, a clear difference could be observed between rasagiline and aminoindan on one side and selegiline on the other side with respect to functional antagonism of NMDA glutamate receptor stimulation.", "In order to test a possible interference of rasagiline, aminoindan or selegiline with AMPA receptor activation, glutamatergic neurotransmission was stimulated by fluorowillardiine, a very potent and selective AMPA receptor agonist. A concentration of 100 nM induced a significant enhancement of the population spike amplitude. Under single-stimulus conditions an increase of the amplitude from 1135 to 1692 μV (151% of the control value) was observed (Figure 5). In the presence of 5 μM of rasagiline the amplitude remained at the control value (changing from 1089 to 1137 μV).\nConcentration dependent effects of rasagiline, aminoindan and selegiline in the presence of single stimuli (SS) or theta burst stimuli (TBS) after stimulation of the AMPA glutamate receptor by fluorowillardiine (n = 4 slices ± SEM). Statistically significant attenuation of pop-spike amplitude in comparison to fluorowillardiine-induced increases was obtained in the presence of 1 μM of rasagiline or aminoindan following single stimuli (SS). During theta burst stimulation (TBS), a concentration of 1 μM of rasagiline or aminoindan was already statistically significant. Stars indicate statistical significance of p < 0.05 in comparison to control.\nStatistically significant differences from the effect of fluorowillardiine were observed with 2.5 μM of rasagiline (p < 0.02) and aminoindan (p < 0.01). Similar results were obtained in the presence of theta burst stimulation. Fluorowillardiine increased the amplitude to 2873 μV. Rasagiline at a concentration of 5 μM attenuated the fluorowillardiine-induced signal down to approximately the control value (1950 μV). 
A statistically significant difference from fluorowillardiine-induced values was obtained at the very low concentration of 1 μM of rasagiline (p < 0.05).\nEven stronger effects were seen in the presence of aminoindan (see Figure 2). Aminoindan attenuated the amplitude of the population spike from 2888 μV down to 1152 μV, which is well below the control values. Statistical significance in comparison to AMPA receptor stimulation was obtained already at 1 μM of aminoindan. Thus, a concentration dependent attenuation of AMPA receptor induced increases of population spike amplitudes was recognized for rasagiline and even more so for its metabolite aminoindan. In contrast, virtually no effect could be seen in the presence of selegiline up to a concentration of 5 μM. Thus, a clear difference could be observed between rasagiline and aminoindan on one side and selegiline on the other side with respect to functional antagonism also of AMPA glutamate receptor stimulation.", "In order to test a possible interference of rasagiline, aminoindan or selegiline with Kainate receptor activation, glutamatergic neurotransmission was stimulated by ATPA, a very potent and selective Kainate receptor agonist. A concentration of 50 nM induced a significant enhancement of the population spike amplitude. Under single-stimulus conditions an increase of the amplitude from 1097 to 1904 μV (174% of the control value) was observed (Table 2). Virtually no effect on this signal could be seen in the presence of rasagiline or aminoindan up to a concentration of 5 μM. However, in the presence of selegiline the amplitude remained at control values (changing from 1083 to 1257 μV). Statistically significant differences from the ATPA-induced increase were observed already with 2.5 μM of selegiline (p < 0.01). Similar results were obtained in the presence of theta burst stimulation. ATPA increased the amplitude to 3055 μV. Selegiline at a concentration of 5 μM attenuated the ATPA-induced signal down to 2134 μV. Thus, a concentration dependent attenuation of Kainate receptor induced increases of population spike amplitudes was recognized only for selegiline, but not for rasagiline or aminoindan. Again a clear difference could be observed between the effects of rasagiline and aminoindan on one side and selegiline on the other side, but in a reversed manner.\nAmplitudes of population spike\nEffect of selegiline on the Kainate receptor dependent increase of population spike amplitude, but lack of effect by rasagiline or aminoindan. Values are given as mean of n = 4 slices ± S.E.M. Statistical significance versus the ATPA signal is given as p-value.", "In order to test a possible interference of rasagiline, aminoindan or selegiline with metabotropic glutamate receptor activation, ACPD, a very potent and selective metabotropic glutamate receptor agonist, was used to enhance pyramidal cell responses. A concentration of 25 nM induced a significant enhancement of the population spike amplitude. Under single-stimulus conditions an increase of the amplitude from 1068 to 2003 μV (188% of the control value) was observed (Figure 6). In the presence of rasagiline under SS conditions the amplitude remained at the control value (changing from 1111 to 1134 μV). Statistically significant differences from the ACPD-induced increase were observed with 1 μM of rasagiline (p < 0.01). Similar results were obtained in the presence of theta burst stimulation. ACPD increased the amplitude to 3027 μV. 
Rasagiline at a concentration of 5 μM attenuated the ACPD-induced signal down to the control value (2050 μV). Thus, a concentration dependent attenuation of the metabotropic glutamate receptor induced increases of population spike amplitudes was recognized. Nearly identical results were seen in the presence of aminoindan (see Figure 6). In contrast, virtually no effect could be seen in the presence of selegiline at a concentration of 5 μM. Thus, a clear difference could be observed between rasagiline and aminoindan on one side and selegiline on the other side with respect to functional antagonism of metabotropic glutamate receptor stimulation.\nConcentration dependent effect of rasagiline, aminoindan or selegiline in the presence of single stimuli (SS) or theta burst stimuli (TBS) after stimulation of the metabotropic glutamate receptor by ACPD. Data are presented as the mean of n = 4 slices ± SEM. Statistically significant attenuation of pop-spike amplitude in comparison to ACPD-induced increases was obtained in the presence of 2.5 μM of rasagiline and aminoindan following single stimuli (SS) or theta burst stimulation. No effect was observed with selegiline. Stars indicate statistical significance of p < 0.05 in comparison to control.", "The rat hippocampal in vitro slice preparation has been used under physiological and pathophysiological conditions. Two monoamine oxidase B inhibitors (rasagiline and selegiline) and one compound lacking monoamine oxidase B inhibition (aminoindan) have been compared with respect to their ability to attenuate glutamatergic transmission, as reflected by decreased responses of pyramidal cells to electrical stimulation. This result is interpreted as functional neuroprotection against massive glutamatergic excitation.\nSince simulated ischemic conditions produced by oxygen-glucose deprivation (OGD) likewise showed that rasagiline and aminoindan prevented the breakdown of excitability, these effects probably also relate to neuroprotection (for selegiline this could be shown only to a minor degree). The term neuroprotection is usually taken to describe effects of drugs which might result in disease modifying actions during the course of Alzheimer's or Parkinson's disease. With respect to the latter, better neuroprotective and neurorestorative actions have been described for rasagiline in comparison to selegiline against lactacystin-induced nigrostriatal dopaminergic degeneration [27]. Also, in a tissue culture model using PC12 cells under oxygen-glucose deprivation, rasagiline was clearly more effective than selegiline [28]. In addition, these authors could show that the neuroprotective effects of selegiline were blocked by its metabolite l-methamphetamine, whereas aminoindan added to the effects of rasagiline. Taken together, all these findings suggest that the aminoindan moiety might be more important for neuroprotection than the propargyl moiety, as suspected earlier [29]. Our results are therefore in line with earlier preclinical evidence for a neuroprotective action of rasagiline and its metabolite aminoindan. The functional impairment of glutamate dependent transmission is obviously not dependent on inhibition of monoamine oxidase B. 
However, a link between indirect inhibition of monoamine oxidase B and blockade of glyceraldehyde-3-phosphate dehydrogenase has recently been reported, which could also serve as an explanation for the neuroprotective effects of rasagiline, selegiline and aminoindan [30].\nThe second part of the present investigation provides solid evidence that both rasagiline and selegiline interact functionally with glutamatergic receptor mediated transmission in addition to their known effects on MAO B, but by a different mechanism of action. The effects must be independent of the enzyme inhibition for the following reasons: firstly, aminoindan does not inhibit MAO B; secondly, the two MAO inhibitors, rasagiline and selegiline, produce different receptor-mediated functional consequences within the glutamatergic system. This implies that rasagiline and its metabolite aminoindan probably develop clinical properties different from those of selegiline.\nA hypothesis exists that particular glutamate receptors of the N-methyl-D-aspartate type are over-activated in a tonic rather than a phasic manner, which under chronic conditions leads to neuronal damage [31]. Another clinical implication could be suspected from the combined attenuation of NMDA and AMPA receptor dependent effects: simultaneous administration of sub-threshold dosages of NMDA and AMPA antagonists had a positive influence on the development of L-dopa induced dyskinesias in rats and monkeys [32]. These data are corroborated by earlier findings showing glutamate supersensitivity in the putamen of Parkinson patients treated chronically with L-dopa [33]. A common disadvantage of currently available, rather unselective NMDA receptor antagonists is the occurrence of adverse effects such as hallucinations [34]. Therefore, rasagiline and its metabolite aminoindan, which do not induce such side effects, but not selegiline with methamphetamine as its metabolite, should have a positive effect on motor fluctuations in Parkinson patients.\nWith respect to the involvement of metabotropic glutamate receptors in Parkinson's disease, there is evidence that they are involved in the pathologically altered circuitry of the basal ganglia. Several antagonists at this receptor alleviated L-dopa induced dyskinesia in 6-OH DA-lesioned rats [35]. Spontaneous firing of neurons in the primate pallidum was increased by the metabotropic glutamate receptor agonist DHPG and decreased by selective antagonists [36], which is in line with our results. Since glutamatergic input from the subthalamic nucleus shows over-activity during the disease, antagonists could very well compensate for this.", "Taking the effects of rasagiline and aminoindan together, not only could neuroprotective effects be measured, but also attenuation of NMDA, AMPA and metabotropic receptor mediated over-excitability of the glutamatergic system; thus, motor complications in Parkinson's disease that are induced by imbalance of the glutamatergic system should also be ameliorated by a monotherapy with rasagiline. In addition, the newly discovered mechanism of action of rasagiline and aminoindan should be considered in the light of an extension of the clinical indication, i.e. to treat Alzheimer's disease (for the relation between Alzheimer's disease and the glutamatergic system, see [37,38]). Last but not least, over-activation of the glutamatergic system is also one of the consequences of stroke, amyotrophic lateral sclerosis, Huntington's disease and neuropathic pain [39]. 
It remains to be tested if pharmacological intervention by rasagiline and its metabolite aminoindan provides a valuable therapeutic strategy for treatment of these diseases in addition to treatment of Parkinson's disease.", "WD provided the electrophysiological technology, supervised the performance of the experiments, gave interpretation of the results and wrote the manuscript. JAH initiated the study and made major contributions to the design. He also provided important information on the pharmacology of the preparation. All authors read and approved the final manuscript." ]
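The percent-of-control figures and the "significant difference from the agonist-induced increase" statements quoted throughout this record follow a simple computational pattern: the mean population spike amplitude of four slices is expressed relative to the control mean, and each drug concentration is compared against the agonist-only group. The following Python sketch illustrates that pattern with hypothetical amplitude values; it is not the authors' analysis code, and the numbers, the rasagiline label and the use of an unpaired t test are assumptions made only for illustration.

```python
# Minimal sketch, not the authors' analysis code: how percent-of-control
# population spike amplitudes and per-concentration significance tests of the
# kind reported in this record can be computed. All amplitudes (in microvolts,
# n = 4 hypothetical slices per condition) and the choice of an unpaired t test
# are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

control = np.array([1050.0, 1080.0, 1062.0, 1080.0])       # single stimuli, no drug, no agonist
agonist_alone = np.array([1990.0, 2040.0, 1975.0, 2007.0])  # e.g. 25 nM ACPD alone
agonist_plus_drug = {                                       # e.g. ACPD + rasagiline (hypothetical)
    0.3: np.array([1900.0, 1960.0, 1880.0, 1940.0]),
    1.0: np.array([1500.0, 1450.0, 1530.0, 1480.0]),
    2.5: np.array([1250.0, 1210.0, 1280.0, 1240.0]),
    5.0: np.array([1120.0, 1150.0, 1100.0, 1165.0]),
}

print(f"agonist alone: {agonist_alone.mean() / control.mean() * 100:.0f}% of control amplitude")

for conc_um, amps in sorted(agonist_plus_drug.items()):
    # Compare each drug concentration against the agonist-only group, mirroring
    # the "difference from the agonist-induced increase" statements in the text.
    t_stat, p_val = ttest_ind(amps, agonist_alone)
    pct = amps.mean() / control.mean() * 100
    print(f"{conc_um:>4} uM drug: {pct:5.0f}% of control, p = {p_val:.4f} vs agonist alone")
```

The same loop applies unchanged to the NMDA (ACBD), AMPA (fluorowillardiine), Kainate (ATPA) and OGD data by swapping in the corresponding amplitude arrays.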
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[]
Reduced immunomodulation potential of bone marrow-derived mesenchymal stem cells induced CCR4+CCR6+ Th/Treg cell subset imbalance in ankylosing spondylitis.
21338515
Ankylosing spondylitis (AS) is a chronic autoimmune disease, and the precise pathogenesis is largely unknown at present. Bone marrow-derived mesenchymal stem cells (BMSCs) with immunosuppressive and anti-inflammatory potential and Th17/Treg cells with a reciprocal relationship regulated by BMSCs have been reported to be involved in some autoimmune disorders. Here we studied the biological and immunological characteristics of BMSCs, the frequency and phenotype of CCR4+CCR6+ Th/Treg cells and their interaction in vitro in AS.
INTRODUCTION
The biological and immunomodulation characteristics of BMSCs were examined by induced multiple-differentiation and two-way mixed peripheral blood mononuclear cell (PBMC) reactions or after stimulation with phytohemagglutinin, respectively. The interactions of BMSCs and PBMCs were detected with a direct-contact co-culturing system. CCR4+CCR6+ Th/Treg cells and surface markers of BMSCs were assayed using flow cytometry.
METHODS
The AS-BMSCs at active stage showed normal proliferation, cell viability, surface markers and multiple differentiation characteristics, but significantly reduced immunomodulation potential (decreased 68 ± 14%); the frequencies of Treg and Fox-P3+ cells in AS-PBMCs decreased, while CCR4+CCR6+ Th cells increased, compared with healthy donors. Moreover, the AS-BMSCs induced imbalance in the ratio of CCR4+CCR6+ Th/Treg cells by reducing Treg/PBMCs and increasing CCR4+CCR6+ Th/PBMCs, and also reduced Fox-P3+ cells when co-cultured with PBMCs. Correlation analysis showed that the immunomodulation potential of BMSCs has significant negative correlations with the ratio of CCR4+CCR6+ Th to Treg cells in peripheral blood.
RESULTS
The immunomodulation potential of BMSCs is reduced and the ratio of CCR4+CCR6+ Th/Treg cells is imbalanced in AS. The BMSCs with reduced immunomodulation potential may play a novel role in AS pathogenesis by inducing CCR4+CCR6+ Th/Treg cell imbalance.
CONCLUSIONS
[ "Adolescent", "Bone Marrow Cells", "Cell Communication", "Cell Differentiation", "Cell Proliferation", "Cell Separation", "Cell Survival", "Cells, Cultured", "Coculture Techniques", "Female", "Flow Cytometry", "Humans", "Immunomodulation", "Male", "Mesenchymal Stem Cells", "Receptors, CCR4", "Receptors, CCR6", "Spondylitis, Ankylosing", "T-Lymphocyte Subsets", "T-Lymphocytes, Regulatory", "Young Adult" ]
3241373
null
null
null
null
Results
[SUBTITLE] Growth characteristics and cell viability of AS-BMSCs are normal [SUBSECTION] To evaluate the biological properties of AS-BMSCs compared with those of HD-BMSCs, growth characteristics, cell viability and multiple differentiation potentials were studied in vitro. The AS-BMSC growth curves have the same tendency as those for HD-BMSCs. The BMSC proliferation data of the two groups at each day (1 to 12 days) were tested by unpaired Student's t test, and the statistical result indicated that there was no statistically significant difference in BMSC growth characteristics between ASp and HDs (HD1 and HD2) (P > 0.05, Student's t test for independent samples). Established cultures (12 days) of BMSCs exhibited close, even equivalent, cell viability at each time point from 1 to 12 days, as determined by cellular viability assays, and the difference in OD at 490 nm between ASp and HDs (HD1 and HD2) at each day (1 to 12 days) was also not statistically significant (P > 0.05, Student's t test for independent samples). The cultures had similar purities: (QL [the lower point of the interquartile range], QU [the upper point of the interquartile range]) = (95%, 99%) for AS-BMSCs, (QL, QU) = (96%, 98%) for HD1-BMSCs and (QL, QU) = (96%, 99%) for HD2-BMSCs. [SUBTITLE] Triple differentiation potentials of AS-BMSCs in vitro were not changed [SUBSECTION] 
To explore whether the multiple differentiation potentials of BMSCs in AS were abnormal, we investigated the osteogenic, adipogenic and chondrogenic differentiation potentials of AS-BMSCs and HD-BMSCs in the present study. 
Obvious differentiated osteocytes and adipocytes were detected as early as the 7th day after being induced for osteogenic and adipogenic differentiation, and obvious differentiated chondrocytes were seen at about 14 days since induction (Figure 1A to 1C). Bone marrow-derived mensenchymal stem cell triple differentiation potentials from ankylosing spondylitis patients and healthy donors. (A), (B), (C) Morphological characteristics of bone marrow-derived mensenchymal stem cells (BMSCs) for osteogenic, adipogenic and chondrogenic differentiation evaluated by the inverted phase contrast microscope; ankylosing spondylitis (AS)-BMSCs have the same morphological properties as the HLA-B27-negative healthy donor (HD1)-BMSCs and HLA-B27-positive healthy donor (HD2)-BMSCs. Osteocytes were stained for calcium deposition using Alizarin Red-S (A1, B1, C1: x400) and for alkaline phosphatase (ALP) with the Cell Alkaline Phosphatase-S assay (A2, B2, C2: x200). Adipocytes were filled with many fat vacuoles, and Red Oil O was used to stain the fat vacuoles of adipocytes (A3, B3, C3: x200). Chondroblast differentiation from BMSCs was identified with Alcian blue staining (A4, B4, C4: x200). (D) General photographs of BMSCs for osteogenic differentiation, stained with Alizarin Red-S. (E) ALP activities of AS-BMSCs, HD1-BMSCs and HD2-BMSCs were 644 ± 45, 655 ± 49 and 646 ± 51, respectively; differences not statistically significant (P > 0.05). Data presented as mean ± standard deviation. MLR, mixed peripheral blood mononuclear cell reaction; PHA, phytohemagglutinin. There appeared to be two stages in the BMSC differentiation process for both ASp and HDs. In the early stage, only a few osteocytes, adipocytes and chondrocytes were found within the undifferentiated BMSCs. Gradually, these three kinds of cells increased; simultaneously, the cell's body got bigger and cytoplasm became more abundant because, for example, osteocytes made closer contact, fat vacuoles of adipocytes multiplied and grew bigger, and chondrocytes began to gain many collagen fibers. In the later stage, these three kinds of cells increased rapidly and nearly predominated. For the adipocytes, osteocytes and chondrocytes derived from BMSCs, the purities were (QL, QU) = (90%, 97%), (QL, QU) = (91%, 96%) and (QL, QU) = (88%, 95%) for ASp, (QL, QU) = (88%, 98%), (QL, QU) = (90%, 97%) and (QL, QU) = (90%, 96%) for HD1, and (QL, QU) = (86%, 95%), (QL, QU) = (89%, 98%) and (QL, QU) = (92%, 97%) for HD2, respectively. The calcium nodules were stained to present a red color (Figure 1A1 to 1D1), after Alizarin Red staining for calcium deposits of osteocytes was performed to determine the mineralization of BMSCs. For the adipogenic differentiation, the mass fat vacuoles of adipocytes were also stained to present a red color by Oil Red O staining (Figure 1A3 to 1C3). The well-differentiated chondrocytes were Alcian Blue-positive, and presented a bright blue color after staining (Figure 1A4 to 1C4). The ALP activity, normalized to DNA concentration, is plotted in Figure 1E. The ALP activity (mean ± SD) was 644 ± 45 (mM p-nitrophenyl phosphate/minute per mg DNA) for AS-BMSCs (n = 51), which is lower than the 655 ± 49 for HD1-BMSCs (n = 37) (P > 0.05) and the 646 ± 51 for HD2-BMSCs (n = 12) (P > 0.05). All three values were much higher than those of the baseline ALP for BMSCs of ASp, HD1 and HD2 (85 ± 40, 88 ± 48 and 82 ± 13, respectively) (P < 0.001) in control medium without the osteogenic factors. 
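The ALP readout described above is a reaction rate normalised to DNA content (mM p-nitrophenyl phosphate per minute per mg DNA), which is then compared between induced and non-induced cultures. A minimal sketch of that normalisation and comparison is shown below; all values are hypothetical stand-ins, and the unpaired t test is assumed as the comparison, mirroring the style of the P values reported for Figure 1E rather than reproducing them.

```python
# Minimal sketch with hypothetical values (not the study data): ALP activity is
# a reaction rate normalised to DNA content (mM p-nitrophenyl phosphate per
# minute per mg DNA); induced cultures are then compared with baseline cultures
# kept in control medium. The unpaired t test is an illustrative assumption.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

raw_rate_induced = rng.normal(6.4, 0.45, size=20)    # mM pNPP per minute, osteogenic medium
raw_rate_baseline = rng.normal(0.85, 0.15, size=20)  # mM pNPP per minute, control medium
dna_mg = 0.01                                        # mg DNA per well (hypothetical constant)

alp_induced = raw_rate_induced / dna_mg    # mM pNPP/minute per mg DNA
alp_baseline = raw_rate_baseline / dna_mg

t_stat, p_val = ttest_ind(alp_induced, alp_baseline)
print(f"induced: {alp_induced.mean():.0f} ± {alp_induced.std(ddof=1):.0f}, "
      f"baseline: {alp_baseline.mean():.0f} ± {alp_baseline.std(ddof=1):.0f}, p = {p_val:.1e}")
```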
ALP staining was performed on the 14th day to investigate the maturity degree of osteocytes in the groups of ASp, HD1 and HD2 (Figure 1A2 to 1C2). [SUBTITLE] Phenotype of bone marrow-derived mesenchymal stem cells [SUBSECTION] The AS-BMSCs and HD-BMSCs were then examined for typical MSC phenotypic surface markers. Flow cytometric analysis showed that the AS-BMSCs and HD-BMSCs (HD1-BMSCs and HD2-BMSCs) have the same phenotypic surface markers as typical MSCs. The samples all express high levels of the surface markers CD105, CD73 and CD90, and lack expression of CD45, CD34, CD14 and HLA-DR surface molecules (Figure 2). Phenotyping of bone marrow-derived mensenchymal stem cells for typical mensenchymal stromal cell surface markers. Single-parameter histograms for (A1) to (A3), (B1) to (B3), (C1) to (C3) individual mensenchymal stromal cell (MSC) markers and (A4) to (A7), (B4) to (B7), (C4) to (C7) MSC exclusion markers, representative of samples from patients with ankylosing spondylitis (AS) and from healthy donors (blue lines). Red lines indicate background fluorescence obtained with isotype control IgG. x axis, fluorescence intensity; y axis, cell counts. BMSC, bone marrow-derived mensenchymal stem cell; HD1, HLA-B27-negative healthy donors; HD2, HLA-B27-positive healthy donors. [SUBTITLE] Decreased suppressive potential of AS-BMSCs on either two-way MLR or PBMC proliferation stimulated with PHA [SUBSECTION] Under the condition that the proliferation characteristics, cell viability, multiple-differentiation potentials and surface markers of AS-BMSCs were normal, compared with HD-BMSCs, the immunomodulation potential of AS-BMSCs was evaluated in the present study. The effects of BMSCs from ASp (n = 51), HD1 (n = 37) and HD2 (n = 12) on two-way MLR or PBMC proliferation in the presence of PHA were evaluated by mixing BMSCs with mixed PBMCs for the two-way MLR, or with PBMCs from a third healthy volunteer in the presence of PHA for the PBMC proliferation assay, at five BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1, respectively. [SUBTITLE] Two-way mixed PBMC reaction [SUBSECTION] The differences of absorbance between 0 days and 5 days were not statistically significant for the allogeneic PBMCs from a healthy volunteer (P = 0.351) and the PBMCs from another unrelated healthy volunteer (P = 0.418) (Figure 3A1). 
For the mixed PBMCs, however, the absorbance at 5 days was significantly higher than the value at 0 days (P < 0.001, Figure 3A1). As shown in Figure 3A2,A3, there was a statistically significant reduction in suppressive potential (% inhibition) of BMSCs from ASp on two-way MLR at all five ratios, compared with the percentage inhibition of HD-BMSCs (P < 0.001, Figure 3A2,A3). Reduced suppressive potential of bone marrow-derived mensenchymal stem cells of patients with ankylosing spondylitis. (A1) Absorbance of mixed peripheral blood mononuclear cells (PBMCs) at 5 days (0.194 ± 0.038) was significantly higher than the value at 0 days (0.104 ± 0.023) (*P < 0.001), showing the significant mixed PBMC reaction. (A2), (A3) Compared with healthy donor (HD)-bone marrow-derived mensenchymal stem cells (BMSCs), the decreased percentage inhibition of ankylosing spondylitis (AS)-BMSCs on two-way mixed peripheral blood mononuclear cell reaction (MLR) at different ratios showed that suppressive potentials of AS-BMSCs were reduced (% inhibition reduced, *P < 0.001). (B1) PBMCs derived from a healthy volunteer could proliferate significantly the presence of phytohemagglutinin (PHA) in vitro (P < 0.001). (B2), (B3) Percentage inhibition of AS-BMSCs on PBMC proliferation induced by PHA was significantly lower than the values of HD-BMSCs at varied BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1 (% inhibition reduced, *P < 0.001). Data expressed as mean ± standard deviation of triplicates of three separate experiments. (A2), (B2) and (A3), (B3) were performed by MTT assay and 3H-TdR assay respectively. There were no statistically significant differences of suppressive potentials between (A2), (A3) HLA-B27-negative healthy donor (HD1)-BMSCs and (B2), (B3) HLA-B27-positive healthy donor (HD2)-BMSCs. OD, optical density. The differences of absorbance between 0 days and 5 days were not statistically significant for the allogeneic PBMCs from a healthy volunteer (P = 0.351) and the PBMCs from another unrelated healthy volunteer (P = 0.418) (Figure 3A1). For the mixed PBMCs, however, the absorbance at 5 days was significantly higher than the value at 0 days (P < 0.001, Figure 3A1). As shown in Figure 3A2,A3, there was a statistically significant reduction in suppressive potential (% inhibition) of BMSCs from ASp on two-way MLR at all five ratios, compared with the percentage inhibition of HD-BMSCs (P < 0.001, Figure 3A2,A3). Reduced suppressive potential of bone marrow-derived mensenchymal stem cells of patients with ankylosing spondylitis. (A1) Absorbance of mixed peripheral blood mononuclear cells (PBMCs) at 5 days (0.194 ± 0.038) was significantly higher than the value at 0 days (0.104 ± 0.023) (*P < 0.001), showing the significant mixed PBMC reaction. (A2), (A3) Compared with healthy donor (HD)-bone marrow-derived mensenchymal stem cells (BMSCs), the decreased percentage inhibition of ankylosing spondylitis (AS)-BMSCs on two-way mixed peripheral blood mononuclear cell reaction (MLR) at different ratios showed that suppressive potentials of AS-BMSCs were reduced (% inhibition reduced, *P < 0.001). (B1) PBMCs derived from a healthy volunteer could proliferate significantly the presence of phytohemagglutinin (PHA) in vitro (P < 0.001). (B2), (B3) Percentage inhibition of AS-BMSCs on PBMC proliferation induced by PHA was significantly lower than the values of HD-BMSCs at varied BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1 (% inhibition reduced, *P < 0.001). 
Data expressed as mean ± standard deviation of triplicates of three separate experiments. (A2), (B2) and (A3), (B3) were performed by MTT assay and 3H-TdR assay respectively. There were no statistically significant differences of suppressive potentials between (A2), (A3) HLA-B27-negative healthy donor (HD1)-BMSCs and (B2), (B3) HLA-B27-positive healthy donor (HD2)-BMSCs. OD, optical density. [SUBTITLE] Allogeneic PBMC proliferation assay [SUBSECTION] Similarly, when PBMC proliferation was elicited by means of PHA, the addition of BMSCs from ASp also produced a statistically significant decreased inhibitory effect on PBMC proliferation (Figure 3B2,B3; P < 0.001, Student's t test for independent data), ranging from a BMSC:PBMC ratio of 1:20 to 1:1. Additionally, the differences of absorbance between 0 days and 5 days without PHA were not significant (P = 0.223), while the value for 5 days with PHA was significantly higher than the value at 0 days (P < 0.001, Figure 3B1). Furthermore, in either the MLR (Figure 3A3) or the PBMC proliferation assay stimulated with PHA (Figure 3B3), the 3H-TdR assay data suggested a significant relationship between dose and suppression of immunoreactivity of BMSCs for ASp, HD1 and HD2. The MTT assay also presented this phenomenon basically, but not clearly. Similarly, when PBMC proliferation was elicited by means of PHA, the addition of BMSCs from ASp also produced a statistically significant decreased inhibitory effect on PBMC proliferation (Figure 3B2,B3; P < 0.001, Student's t test for independent data), ranging from a BMSC:PBMC ratio of 1:20 to 1:1. Additionally, the differences of absorbance between 0 days and 5 days without PHA were not significant (P = 0.223), while the value for 5 days with PHA was significantly higher than the value at 0 days (P < 0.001, Figure 3B1). Furthermore, in either the MLR (Figure 3A3) or the PBMC proliferation assay stimulated with PHA (Figure 3B3), the 3H-TdR assay data suggested a significant relationship between dose and suppression of immunoreactivity of BMSCs for ASp, HD1 and HD2. The MTT assay also presented this phenomenon basically, but not clearly. Under the condition that the proliferation characteristics, cell viability, multiple-differentiation potentials and surface markers of AS-BMSCs were normal, compared with HD-BMSCs, the immunomodulation potential of AS-BMSCs was evaluated in the present study. The effects of BMSCs from ASp (n = 51), HD1 (n = 37) and HD2 (n = 12) on two-way MLR or PBMC proliferation in the presence of PHA were evaluated by mixing BMSCs and mixed PBMCs for two-way MLR, or by PBMCs from a third healthy volunteer in the presence of PHA for PBMC proliferation assay at five BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1, respectively. [SUBTITLE] Two-way mixed PBMC reaction [SUBSECTION] The differences of absorbance between 0 days and 5 days were not statistically significant for the allogeneic PBMCs from a healthy volunteer (P = 0.351) and the PBMCs from another unrelated healthy volunteer (P = 0.418) (Figure 3A1). For the mixed PBMCs, however, the absorbance at 5 days was significantly higher than the value at 0 days (P < 0.001, Figure 3A1). As shown in Figure 3A2,A3, there was a statistically significant reduction in suppressive potential (% inhibition) of BMSCs from ASp on two-way MLR at all five ratios, compared with the percentage inhibition of HD-BMSCs (P < 0.001, Figure 3A2,A3). 
Reduced suppressive potential of bone marrow-derived mensenchymal stem cells of patients with ankylosing spondylitis. (A1) Absorbance of mixed peripheral blood mononuclear cells (PBMCs) at 5 days (0.194 ± 0.038) was significantly higher than the value at 0 days (0.104 ± 0.023) (*P < 0.001), showing the significant mixed PBMC reaction. (A2), (A3) Compared with healthy donor (HD)-bone marrow-derived mensenchymal stem cells (BMSCs), the decreased percentage inhibition of ankylosing spondylitis (AS)-BMSCs on two-way mixed peripheral blood mononuclear cell reaction (MLR) at different ratios showed that suppressive potentials of AS-BMSCs were reduced (% inhibition reduced, *P < 0.001). (B1) PBMCs derived from a healthy volunteer could proliferate significantly the presence of phytohemagglutinin (PHA) in vitro (P < 0.001). (B2), (B3) Percentage inhibition of AS-BMSCs on PBMC proliferation induced by PHA was significantly lower than the values of HD-BMSCs at varied BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1 (% inhibition reduced, *P < 0.001). Data expressed as mean ± standard deviation of triplicates of three separate experiments. (A2), (B2) and (A3), (B3) were performed by MTT assay and 3H-TdR assay respectively. There were no statistically significant differences of suppressive potentials between (A2), (A3) HLA-B27-negative healthy donor (HD1)-BMSCs and (B2), (B3) HLA-B27-positive healthy donor (HD2)-BMSCs. OD, optical density. The differences of absorbance between 0 days and 5 days were not statistically significant for the allogeneic PBMCs from a healthy volunteer (P = 0.351) and the PBMCs from another unrelated healthy volunteer (P = 0.418) (Figure 3A1). For the mixed PBMCs, however, the absorbance at 5 days was significantly higher than the value at 0 days (P < 0.001, Figure 3A1). As shown in Figure 3A2,A3, there was a statistically significant reduction in suppressive potential (% inhibition) of BMSCs from ASp on two-way MLR at all five ratios, compared with the percentage inhibition of HD-BMSCs (P < 0.001, Figure 3A2,A3). Reduced suppressive potential of bone marrow-derived mensenchymal stem cells of patients with ankylosing spondylitis. (A1) Absorbance of mixed peripheral blood mononuclear cells (PBMCs) at 5 days (0.194 ± 0.038) was significantly higher than the value at 0 days (0.104 ± 0.023) (*P < 0.001), showing the significant mixed PBMC reaction. (A2), (A3) Compared with healthy donor (HD)-bone marrow-derived mensenchymal stem cells (BMSCs), the decreased percentage inhibition of ankylosing spondylitis (AS)-BMSCs on two-way mixed peripheral blood mononuclear cell reaction (MLR) at different ratios showed that suppressive potentials of AS-BMSCs were reduced (% inhibition reduced, *P < 0.001). (B1) PBMCs derived from a healthy volunteer could proliferate significantly the presence of phytohemagglutinin (PHA) in vitro (P < 0.001). (B2), (B3) Percentage inhibition of AS-BMSCs on PBMC proliferation induced by PHA was significantly lower than the values of HD-BMSCs at varied BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1 (% inhibition reduced, *P < 0.001). Data expressed as mean ± standard deviation of triplicates of three separate experiments. (A2), (B2) and (A3), (B3) were performed by MTT assay and 3H-TdR assay respectively. There were no statistically significant differences of suppressive potentials between (A2), (A3) HLA-B27-negative healthy donor (HD1)-BMSCs and (B2), (B3) HLA-B27-positive healthy donor (HD2)-BMSCs. OD, optical density. 
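The immunomodulation readout used above is a percentage inhibition derived from the proliferation signal (OD for MTT, cpm for 3H-TdR) with and without BMSCs added. The exact formula is not spelled out in this excerpt, so the sketch below uses the common convention of expressing the BMSC-containing signal relative to the BMSC-free signal, with hypothetical triplicate values and an assumed unpaired t test for the AS versus HD comparison.

```python
# Minimal sketch of a percentage-inhibition readout (hypothetical values; the
# exact formula is not given in this excerpt, so the common convention below is
# an assumption): the proliferation signal with BMSCs added is expressed
# relative to the same stimulation without BMSCs.
import numpy as np
from scipy.stats import ttest_ind

def percent_inhibition(signal_with_bmsc, signal_without_bmsc):
    """Percentage by which added BMSCs reduce the proliferation signal (OD or cpm)."""
    return (1.0 - signal_with_bmsc / signal_without_bmsc) * 100.0

od_mlr_alone = 0.194                                # mixed PBMCs at 5 days, no BMSCs added
od_with_hd_bmsc = np.array([0.120, 0.115, 0.125])   # hypothetical triplicates, HD-BMSCs (1:10)
od_with_as_bmsc = np.array([0.165, 0.170, 0.160])   # hypothetical triplicates, AS-BMSCs (1:10)

inh_hd = percent_inhibition(od_with_hd_bmsc, od_mlr_alone)
inh_as = percent_inhibition(od_with_as_bmsc, od_mlr_alone)
t_stat, p_val = ttest_ind(inh_as, inh_hd)
print(f"HD-BMSCs: {inh_hd.mean():.0f}% inhibition, AS-BMSCs: {inh_as.mean():.0f}%, p = {p_val:.3f}")
```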
[SUBTITLE] Allogeneic PBMC proliferation assay [SUBSECTION] Similarly, when PBMC proliferation was elicited by means of PHA, the addition of BMSCs from ASp also produced a statistically significant decreased inhibitory effect on PBMC proliferation (Figure 3B2,B3; P < 0.001, Student's t test for independent data), ranging from a BMSC:PBMC ratio of 1:20 to 1:1. Additionally, the differences of absorbance between 0 days and 5 days without PHA were not significant (P = 0.223), while the value for 5 days with PHA was significantly higher than the value at 0 days (P < 0.001, Figure 3B1). Furthermore, in either the MLR (Figure 3A3) or the PBMC proliferation assay stimulated with PHA (Figure 3B3), the 3H-TdR assay data suggested a significant relationship between dose and suppression of immunoreactivity of BMSCs for ASp, HD1 and HD2. The MTT assay also presented this phenomenon basically, but not clearly. Similarly, when PBMC proliferation was elicited by means of PHA, the addition of BMSCs from ASp also produced a statistically significant decreased inhibitory effect on PBMC proliferation (Figure 3B2,B3; P < 0.001, Student's t test for independent data), ranging from a BMSC:PBMC ratio of 1:20 to 1:1. Additionally, the differences of absorbance between 0 days and 5 days without PHA were not significant (P = 0.223), while the value for 5 days with PHA was significantly higher than the value at 0 days (P < 0.001, Figure 3B1). Furthermore, in either the MLR (Figure 3A3) or the PBMC proliferation assay stimulated with PHA (Figure 3B3), the 3H-TdR assay data suggested a significant relationship between dose and suppression of immunoreactivity of BMSCs for ASp, HD1 and HD2. The MTT assay also presented this phenomenon basically, but not clearly. [SUBTITLE] Increased CCR4+CCR6+ Th and decreased Treg populations in peripheral blood of patients with AS [SUBSECTION] Recent studies have independently revealed enhanced Th17 response and weakened Treg response in some autoimmune diseases [38,39], so we also examined the frequencies of CCR4+CCR6+ Th and Treg cells in PBMCs of ASp and HDs (Figure 4). The PBMCs from ASp and HDs were examined for the subset populations using flow cytometry, defined as the percentages of CCR4+CCR6+ Th cells (CCR4/CCR6 double-positive) [32] and Treg cells (CD4/CD25/Fox-P3 triple-positive) accounting for the total CD4-positive Th cells (CCR4+CCR6+ Th/Th, Treg/Th), CD3-positive T cells (CCR4+CCR6+ Th/T, Treg/T), lymphocytes (CCR4+CCR6+ Th/L, Treg/L), and peripheral blood mononuclear cells (CCR4+CCR6+ Th/PBMCs, Treg/PBMCs) respectively. The proportions of Fox-P3-positive occupying CD4/CD25 double-positive cells (Fox-P3+/CD4+CD25+) and PBMCs (Fox-P3+/PBMCs) were also tested. Compared with healthy donors (HD1 and HD2), the CCR4+CCR6+ Th population of ASp was significantly increased (P < 0.001, Table 3 and Figure 5), whereas Treg cells and Fox-P3-positive cells were found to be significantly decreased (P < 0.001, Student's t test for independent data, Table 3 and Figure 5). There were no significant differences between HD1 and HD2. Representative plots of CCR4+CCR6+ T-helper and regulatory T cells. Representative plots of peripheral circulating populations of (A1) to (A3) CCR4+CCR6+ T-helper (Th) cells (green) and (B1) to (B3) regulatory T (Treg) cells (red) in peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp) (A1, B1), HLA-B27-negative healthy donors (HD1) (A2, B2) and HLA-B27-positive healthy donors (HD2) (A3, B3). 
FOX-P3, forkhead box P3; FSH, forward scatter-height; SSH, side scatter-height. Percentages of CCR4+/6+ Th and Treg cells in appropriate cell subsets Data presented as the percentage mean ± standard deviation. The differences for all percentages between patients with ankylosing spondylitis (ASp) and either HLA-B27-negative healthy donors (HD1) or HLA-B27-positive healthy donors (HD2) were significant (P < 0.001, according to a two-tailed significant level of 0.05). There were no significant differences between HD1 and HD2 (P > 0.05). Th, CD4+ T-helper cells; T, T lymphocytes; L, lymphocytes; Treg, forkhead box P3-positive regulatory T cells. Percentages of CCR4+CCR6+ T-helper, regulatory T and Fox-P3-positive cells in peripheral blood mononuclear cells. Statistical multiple bar graphs showing percentages of increased CCR4+CCR6+ T-helper (Th) cells, reduced regulatory T (Treg) cells and reduced forkhead box P3 (Fox-P3)-positive cells in the peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp), compared with values for healthy donors (HLA-B27-negative healthy donors (HD1) and HLA-B27-positive healthy donors (HD2)). *P < 0.001, Student's t test for independent data. Recent studies have independently revealed enhanced Th17 response and weakened Treg response in some autoimmune diseases [38,39], so we also examined the frequencies of CCR4+CCR6+ Th and Treg cells in PBMCs of ASp and HDs (Figure 4). The PBMCs from ASp and HDs were examined for the subset populations using flow cytometry, defined as the percentages of CCR4+CCR6+ Th cells (CCR4/CCR6 double-positive) [32] and Treg cells (CD4/CD25/Fox-P3 triple-positive) accounting for the total CD4-positive Th cells (CCR4+CCR6+ Th/Th, Treg/Th), CD3-positive T cells (CCR4+CCR6+ Th/T, Treg/T), lymphocytes (CCR4+CCR6+ Th/L, Treg/L), and peripheral blood mononuclear cells (CCR4+CCR6+ Th/PBMCs, Treg/PBMCs) respectively. The proportions of Fox-P3-positive occupying CD4/CD25 double-positive cells (Fox-P3+/CD4+CD25+) and PBMCs (Fox-P3+/PBMCs) were also tested. Compared with healthy donors (HD1 and HD2), the CCR4+CCR6+ Th population of ASp was significantly increased (P < 0.001, Table 3 and Figure 5), whereas Treg cells and Fox-P3-positive cells were found to be significantly decreased (P < 0.001, Student's t test for independent data, Table 3 and Figure 5). There were no significant differences between HD1 and HD2. Representative plots of CCR4+CCR6+ T-helper and regulatory T cells. Representative plots of peripheral circulating populations of (A1) to (A3) CCR4+CCR6+ T-helper (Th) cells (green) and (B1) to (B3) regulatory T (Treg) cells (red) in peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp) (A1, B1), HLA-B27-negative healthy donors (HD1) (A2, B2) and HLA-B27-positive healthy donors (HD2) (A3, B3). FOX-P3, forkhead box P3; FSH, forward scatter-height; SSH, side scatter-height. Percentages of CCR4+/6+ Th and Treg cells in appropriate cell subsets Data presented as the percentage mean ± standard deviation. The differences for all percentages between patients with ankylosing spondylitis (ASp) and either HLA-B27-negative healthy donors (HD1) or HLA-B27-positive healthy donors (HD2) were significant (P < 0.001, according to a two-tailed significant level of 0.05). There were no significant differences between HD1 and HD2 (P > 0.05). Th, CD4+ T-helper cells; T, T lymphocytes; L, lymphocytes; Treg, forkhead box P3-positive regulatory T cells. 
Percentages of CCR4+CCR6+ T-helper, regulatory T and Fox-P3-positive cells in peripheral blood mononuclear cells. Statistical multiple bar graphs showing percentages of increased CCR4+CCR6+ T-helper (Th) cells, reduced regulatory T (Treg) cells and reduced forkhead box P3 (Fox-P3)-positive cells in the peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp), compared with values for healthy donors (HLA-B27-negative healthy donors (HD1) and HLA-B27-positive healthy donors (HD2)). *P < 0.001, Student's t test for independent data. [SUBTITLE] BMSCs of patients with AS-induced CCR4+CCR6+ Th/Treg imbalance [SUBSECTION] We performed the direct contact co-culture of BMSCs and PBMCs to explore whether the reduced immunomodulation potential of AS-BMSCs altered the balance of CCR4+CCR6+ Th/Treg cells. The PBMCs were collected to be assayed by flow cytometry for the CCR4+CCR6+ Th and Treg cells after co-culture with BMSCs of ASp and HDs (HD1 and HD2) for 3 days. The percentages of Treg cells (0.63 ± 0.23%) and Fox-P3-positive cells (0.74 ± 0.11%) in PBMCs after co-culture with AS-BMSCs for 3 days reduced significantly, whereas the percentages of CCR4+CCR6+ Th cells (1.87 ± 0.29%) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly, compared with these values of groups, including 3-day HD1, 3-day HD2, 3-day control, and 0 days (P < 0.001, Figure 6). Impressively, the ratio of CCR4+CCR6+ Th cells to Treg cells (CCR4+CCR6+ Th/Treg) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly (P < 0.001, Figure 6). Ratios of Fox-P3/PBMCs, CCR4+CCR6+ Th/PBMCs, Treg/PBMCs and CCR4+CCR6+ Th/Treg. Peripheral blood mononuclear cells (PBMCs) were collected to be assayed by flow cytometry for the CCR4+CCR6+ T-helper (Th) cells, regulatory T (Treg) cells and forkhead box P3 (Fox-P3)-positive cells after co-culture with or without (3 day-control) ankylosing spondylitis (AS)-bone marrow-derived mensenchymal stem cells (BMSCs) (3 day-ASp), HLA-B27-negative healthy donors (HD1)-BMSCs (3 day-HD1) or HLA-B27-positive healthy donors (HD2)-BMSCs (3 day-HD2) for 72 hours. Statistical multiple bar graphs show that AS-BMSCs induced the ratio of CCR4+CCR6+ Th/Treg imbalance via reducing Treg/PBMCs but increasing CCR4+CCR6+ Th/PBMCs; it also produced a significant reduction of Fox-P3-positive cells in PBMCs (*P < 0.001, Student's t test for independent data). We performed the direct contact co-culture of BMSCs and PBMCs to explore whether the reduced immunomodulation potential of AS-BMSCs altered the balance of CCR4+CCR6+ Th/Treg cells. The PBMCs were collected to be assayed by flow cytometry for the CCR4+CCR6+ Th and Treg cells after co-culture with BMSCs of ASp and HDs (HD1 and HD2) for 3 days. The percentages of Treg cells (0.63 ± 0.23%) and Fox-P3-positive cells (0.74 ± 0.11%) in PBMCs after co-culture with AS-BMSCs for 3 days reduced significantly, whereas the percentages of CCR4+CCR6+ Th cells (1.87 ± 0.29%) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly, compared with these values of groups, including 3-day HD1, 3-day HD2, 3-day control, and 0 days (P < 0.001, Figure 6). Impressively, the ratio of CCR4+CCR6+ Th cells to Treg cells (CCR4+CCR6+ Th/Treg) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly (P < 0.001, Figure 6). Ratios of Fox-P3/PBMCs, CCR4+CCR6+ Th/PBMCs, Treg/PBMCs and CCR4+CCR6+ Th/Treg. 
Peripheral blood mononuclear cells (PBMCs) were collected to be assayed by flow cytometry for the CCR4+CCR6+ T-helper (Th) cells, regulatory T (Treg) cells and forkhead box P3 (Fox-P3)-positive cells after co-culture with or without (3 day-control) ankylosing spondylitis (AS)-bone marrow-derived mensenchymal stem cells (BMSCs) (3 day-ASp), HLA-B27-negative healthy donors (HD1)-BMSCs (3 day-HD1) or HLA-B27-positive healthy donors (HD2)-BMSCs (3 day-HD2) for 72 hours. Statistical multiple bar graphs show that AS-BMSCs induced the ratio of CCR4+CCR6+ Th/Treg imbalance via reducing Treg/PBMCs but increasing CCR4+CCR6+ Th/PBMCs; it also produced a significant reduction of Fox-P3-positive cells in PBMCs (*P < 0.001, Student's t test for independent data). [SUBTITLE] Negative correlations between percentages of CCR4+CCR6+ Th/Treg cells and the suppressive ratio of BMSCs [SUBSECTION] When examining data from all subjects tested, we observed positive correlations between the percentage inhibition of BMSCs (MLR) and the percentage inhibition of BMSCs (PHA) for all of the 51 ASp, 37 HD1 and 12 HD2. Interestingly, however, for all of the ASp (Figure 7A), HD1 (Figure 7B) and HD2 (Figure 7C) there were significantly negative correlations between the ratios of CCR4+CCR6+ Th cells to Treg cells in the peripheral blood and either percentage inhibition of BMSCs (MLR) or percentage inhibition of BMSCs (PHA) at all five ratios (P < 0.01, respectively). Correlation analysis between CCR4+CCR6+ Th/Treg ratio and immunomodulation potential of bone marrow-derived mensenchymal stem cells. (A) All patients with ankylosing spondylitis (ASp), (B) HLA-B27-negative healthy donors (HD1) and (C) HLA-B27-positive healthy donors (HD2) presented significantly negative correlations between the ratio of CCR4+CCR6+ T-helper (Th) cells to regulatory T (Treg) cells in the peripheral blood mononuclear cells (PBMCs) and percentage inhibition either from two-way mixed PBMC reaction (MLR) (upper panel) or PBMC proliferation induced by phytohemagglutinin (PHA) (lower panel) at a bone marrow-derived mensenchymal stem cell (BMSC):PBMC ratio of 1:10 (P < 0.01, respectively). When examining data from all subjects tested, we observed positive correlations between the percentage inhibition of BMSCs (MLR) and the percentage inhibition of BMSCs (PHA) for all of the 51 ASp, 37 HD1 and 12 HD2. Interestingly, however, for all of the ASp (Figure 7A), HD1 (Figure 7B) and HD2 (Figure 7C) there were significantly negative correlations between the ratios of CCR4+CCR6+ Th cells to Treg cells in the peripheral blood and either percentage inhibition of BMSCs (MLR) or percentage inhibition of BMSCs (PHA) at all five ratios (P < 0.01, respectively). Correlation analysis between CCR4+CCR6+ Th/Treg ratio and immunomodulation potential of bone marrow-derived mensenchymal stem cells. (A) All patients with ankylosing spondylitis (ASp), (B) HLA-B27-negative healthy donors (HD1) and (C) HLA-B27-positive healthy donors (HD2) presented significantly negative correlations between the ratio of CCR4+CCR6+ T-helper (Th) cells to regulatory T (Treg) cells in the peripheral blood mononuclear cells (PBMCs) and percentage inhibition either from two-way mixed PBMC reaction (MLR) (upper panel) or PBMC proliferation induced by phytohemagglutinin (PHA) (lower panel) at a bone marrow-derived mensenchymal stem cell (BMSC):PBMC ratio of 1:10 (P < 0.01, respectively).
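The correlation analysis described above combines two derived quantities per subject: the CCR4+CCR6+ Th/Treg ratio from the flow cytometry percentages, and the BMSC percentage inhibition from the MLR or PHA assay. A minimal sketch of that computation follows; the per-subject values are simulated placeholders with a built-in negative relationship so the printout is illustrative, and Pearson correlation is an assumed choice, since the text reports negative correlations without naming the coefficient.

```python
# Minimal sketch with simulated per-subject values: the CCR4+CCR6+ Th/Treg
# ratio is formed from flow cytometry percentages and related to the BMSC
# percentage inhibition. A negative relationship is built into the fake data so
# the printout is illustrative; Pearson correlation is an assumed choice.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects = 51

pct_inhibition = rng.uniform(20, 80, size=n_subjects)                      # BMSC % inhibition (MLR)
th_pct = 2.0 - 0.010 * pct_inhibition + rng.normal(0, 0.10, n_subjects)    # CCR4+CCR6+ Th, % of PBMCs
treg_pct = 0.5 + 0.005 * pct_inhibition + rng.normal(0, 0.05, n_subjects)  # Treg, % of PBMCs

th_treg_ratio = th_pct / treg_pct
r, p = pearsonr(th_treg_ratio, pct_inhibition)
print(f"Pearson r = {r:.2f}, p = {p:.1e} (negative: stronger BMSC suppression, lower Th/Treg ratio)")
```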
Conclusions
The reduced immunomodulation potential of BMSCs may be an initiating factor in AS pathogenesis, and may play a novel role in triggering the onset of AS by inducing the CCR4+CCR6+ Th/Treg cell subset imbalance. BMSCs may therefore be an interesting therapeutic target in AS, and the use of BMSCs from HDs in the disease is suggested.
[ "Introduction", "Patients and controls", "Bone marrow aspiration, human BMSCs and PBMCs", "Cell viability and proliferation test for BMSCs", "In vitro differentiation potential assay of BMSCs", "Alkaline phosphatase measurement", "Immunomodulation potential of BMSCs", "Two-way mixed PBMC reaction", "Allogeneic PBMC proliferation assay", "Direct contact co-culture of BMSCs and PBMCs", "Antibodies and flow cytometry", "Statistical analysis", "Growth characteristics and cell viability of AS-BMSCs are normal", "Triple differentiation potentials of AS-BMSCs in vitro were not changed", "Phenotype of bone marrow-derived mesenchymal stem cells", "Decreased suppressive potential of AS-BMSCs on either two-way MLR or PBMC proliferation stimulated with PHA", "Two-way mixed PBMC reaction", "Allogeneic PBMC proliferation assay", "Increased CCR4+CCR6+ Th and decreased Treg populations in peripheral blood of patients with AS", "BMSCs of patients with AS-induced CCR4+CCR6+ Th/Treg imbalance", "Negative correlations between percentages of CCR4+CCR6+ Th/Treg cells and the suppressive ratio of BMSCs", "Abbreviations", "Competing interests", "Authors' contributions" ]
[ "Ankylosing spondylitis (AS) is a chronic autoimmune inflammatory disease, the prototypic seronegative spondylarthritis that primarily affects the sacroiliac joints and the axial skeleton, which was characterized by inflammatory back pain, enthesitis, and specific organ involvement [1]. AS is a complex multifactorial disease; several pathogenetic factors, including infection [1,2], environmental triggers [1], genetic susceptibility such as HLA-B27 positivity [3,4] and HLA-E gene polymorphism [5], and in particular, autoimmune disorders [1] have been reported to potentially trigger the onset or maintain the pathogenesis progress of AS. Additionally, the genome-wide association study of AS identifies non-MHC susceptibility loci [6], such as IL-23R (rs11209026) and ERAP1 (rs27434). There were also, however, some controversies; for example, no candidate bacteria were detected by PCR in biopsies from sacroiliac joints [7] and most HLA B27-positive individuals remain healthy [1]. The precise pathogenesis of AS is therefore largely unknown at present. Nowadays, more and more studies have focused on the immunological factors for AS.\nMesenchymal stromal cells (MSCs) isolated from a variety of adult tissues, including the bone marrow, have multiple differentiation potentials in different cell types, and also display immunosuppressive (in vitro [8,9], in vivo [10-12]) and anti-inflammatory properties [13], so their putative therapeutic role in a variety of inflammatory autoimmune diseases is currently under investigation. Recently, many findings indicate that MSC immunomodulation potential plays a critical role in severe aplastic anemia [14]. Simultaneously, substantial disorders and abnormalities of MSCs exist in many autoimmune diseases [15]. Few studies, however, have so far focused on whether there were some abnormalities in bone marrow-derived mesenchymal stem cells (BMSCs) of patients with ankylosing spondylitis (ASp) with regard to the biological and immunological properties.\nMore recently, two additional subsets, the forkhead box P3 (Fox-P3)-positive regulatory subset (Treg) and the IL-17-producing subset (Th17) [16-19], have emerged and together with Th1 and Th2 cells, formed a functional quartet of CD4+ T cells that provides a closer insight into the mechanisms of immune-mediated diseases such as AS. Autoimmune diseases are thought to arise from a breakdown of immunological self-tolerance leading to aberrant immune responses to self-antigen. Ordinarily, regulatory T (Treg) cells - including both natural and induced Treg cells - control these self-reactive cells [20]. Several studies of patients with connective tissue diseases found reduced [21] or functionally impaired [22] Treg cells, and Treg cells of autoimmune hepatitis patients have reduced expression of Fox-P3 and CTLA-4, which may lead to impaired suppressor activity [23]. On the contrary, these proinflammatory Th17 cells are implicated in different autoimmune disease models [24-26]. Furthermore, these cells typically express IL-23R on their membrane [27], and recent studies in AS [28-30] show an important genetic contribution for polymorphisms in the gene that codes for this IL-23R. The active polymorphisms in the IL-23R gene could thus indicate an important role for this pathogenic T-cell subset (Th17) in the development and maintenance of AS. 
The involvement of Treg and Th17 cells in AS, however, has not yet been clearly established.\nAs previously described, skewing of responses towards Th17 and away from BMSCs or Treg cells may be responsible for the development and/or progression of AS [31]. Furthermore, CCR6 and CCR4 identified true Th17 memory cells producing IL-17 [32] and the majority of Th17 cells were CCR6+CCR4+ [33]. Aimed at investigating the puzzling issues above, the present study was designed to examine the biological and immunological properties of BMSCs, to examine the frequencies and phenotypes of Treg cells and proinflammatory CCR4+CCR6+ Th memory cells, and to study the interactions between BMSCs and CCR4+CCR6+ Th/Treg cells in peripheral blood mononuclear cells (PBMCs) for AS.", "The present study was approved by the ethics committee of the Sun Yat-Sen Memorial Hospital of Sun Yat-Sen University, Guangzhou, China. In addition, informed consent was obtained from all patients and all healthy donors (HDs). Fifty-one ASp (eight women and 43 men) with an average age of 29.4 years (17 to 45 years) and 49 HDs (eight women and 41 men) with an average age of 27.3 years (18 to 39 years) were included in the study. All of the AS patients were diagnosed according to the New York modified criteria [34] and were HLA-B27-positive; conversely, 37 healthy donors were HLA-B27-negative (HD1) and 12 healthy donors were HLA-B27-positive (HD2). Sixteen patients were diagnosed for the first time, and the research samples from all ASp were taken at the active stage (all Bath Ankylosing Spondylitis Disease Activity Index ≥4) and without taking any medicine for at least 2 weeks.", "After being informed regarding the scientific contributions, possible risks and complications and the corresponding prevention and treating measures for bone marrow aspirations, all of the healthy controls and ASp expressed approval and signed the informed consent. The bone marrow aspirations were all performed by skilled allied health professionals strictly according to the international standardized procedure for bone marrow aspirations. The bone marrow samples of AS patients and HDs were diluted with DMEM (low-glucose DMEM) containing 10% FBS. The mononuclear cells were prepared by gradient centrifugation at 900 × g for 30 minutes on Percoll (Pharmacia Biotech, Uppsala, Sweden) of density 1.073 g/ml. The cells were washed, counted, seeded at 2 × 106 cells/cm2 in 25-cm2 flasks containing low-glucose DMEM supplemented with 10% FBS and cultured at 37°C, 5% carbon dioxide. Medium was replaced and the cells in suspension were removed at 48 hours and every 3 or 4 days thereafter. BMSCs were recovered using 0.25% Trypsin-ethylenediamine tetraacetic acid and replated at a density of 5 × 103 to 6 × 103 cells/cm2 surface area as passage 1 cells when the culture reached 90% confluency. BMSCs after the third subculture were used for described experiments. PBMCs were obtained by the Ficoll-Hypaque (Pharmacia Biotech, Uppsala, Sweden) gradient separation of the buffy coat of ASp and HDs.", "BMSCs were seeded in 96-well plates at a concentration of 1 × 104/ml, in a final volume of 100 μl fresh medium (10% FBS + low-glucose DMEM), and three wells of each sample were digested using 0.25% Trypsin-ethylenediamine tetraacetic acid for cell counting per day up to 12 days. The BMSC growth curves were made using the data for cell proliferation obtained above. Using MTT (5 mg/ml; Sigma-Aldrich Co., St. 
Louis, MO, USA), dimethyl sulphoxide (Sigma) and an EL800 microplate reader (BioTek Instruments, Winooski, VT, USA) used to determine absorbance at 490 nm, the cell viability curves for BMSCs were acquired in the same way, from the daily absorbance readings. The BMSC proliferation ability was also examined by 3H-TdR assay. Fresh medium was used as a negative control.", "To induce osteogenic differentiation, BMSCs were initially seeded in six-well plates at a concentration of 10⁴/cm². After preculturing for 24 hours, the BMSCs were allowed to grow in osteogenic medium (high-glucose DMEM supplemented with 10% FBS, 50 mg/l ascorbic acid, 10 mM β-glycerolphosphate and 10 nM dexamethasone; all these inducing reagents from Sigma). The BMSCs were then incubated in 5% carbon dioxide at 37°C, according to the experimental requirements for up to 14 days, and the medium was replaced every 3 days before harvest. The alkaline phosphatase (ALP) and mineralization of BMSCs were assayed using the Cell Alkaline Phosphatase Staining assay (Sigma) and Alizarin Red staining (AR-S, 40 mmol/l, pH 4.2; Sigma) on the 14th day, respectively.\nTo induce adipogenic differentiation, the BMSCs were seeded in six-well plates at a concentration of 10⁴/cm². After preculturing for 24 hours, the BMSCs were shifted to adipogenic medium (low-glucose DMEM supplemented with 10% FBS, 1 μM dexamethasone, 10 μg/ml insulin, 0.5 mM 3-isobutyl-1-methylxanthine and 0.2 mM indomethacin; all these inducing reagents from Sigma). The BMSCs were then incubated in 5% carbon dioxide at 37°C, and the medium was replaced every 3 days before harvest. Intracellular lipid accumulation, as an indicator of adipogenesis, was visualized on the 14th day by Oil Red O staining after the cells had been fixed with 4% cold paraformaldehyde in PBS (pH 7.4) and washed with distilled water.\nTo induce chondrogenic differentiation, aliquots of 2.5 × 10⁵ BMSCs were centrifuged at 1,000 rpm for 5 minutes in 15-ml polypropylene conical tubes to form pellets, which were then cultured in high-glucose DMEM supplemented with 1% ITS-Premix (Becton-Dickinson, Mountain View, CA, USA), 50 mg/ml ascorbic acid (Sigma), 10⁻³ M sodium pyruvate (Sigma), 10⁻⁷ M dexamethasone (Sigma), and 10 ng/ml transforming growth factor-β3 (R&D Systems, Minneapolis, MN, USA) for 28 days. The pellets were then fixed with 4% paraformaldehyde, embedded in paraffin, and subjected to Alcian blue staining to confirm chondrogenic differentiation.\nThe BMSCs in fresh medium (high-glucose DMEM supplemented with 10% FBS) without these differentiation-inducing factors were used as the experimental control, and fresh medium without any cells was used as a negative control. All measurements were performed in triplicate. The images were visualized using an inverted phase contrast microscope (Nikon Eclipse Ti-S, Nikon Corporation, Tokyo Prefecture, Japan).", "On the 14th day the osteogenic medium was removed, and then 1.0 ml Triton X-100 (Sigma) was added to each well. A cell scraper was used to remove the BMSCs from the well bottom, and then the 1.0 ml of cell lysate was placed in a 1.5-ml centrifuge tube. The samples were then processed through two freeze-thaw cycles (-70°C and room temperature, 45 minutes each) to rupture the cell membrane and extract the proteins and DNA from the cells. A p-nitrophenyl phosphate liquid substrate system (Stanbio, Boerne, TX, USA) was used to analyze the ALP concentration from the cells of each group.
Then 10 μl each cell lysate solution was added to 190 μl p-nitrophenyl phosphate substrate and incubated in the dark at room temperature for 1 minute. The absorbance was read using a plate reader (M5 SpectraMax; Molecular Devices, Sunnyvale, CA, USA) at 405 nm and normalized to the PicoGreen assay [35]. DNA was quantified using the Quant-iT PicoGreen Kit (Invitrogen, Carlsbad, CA, USA) following standard protocols. Briefly, 100 μl each cell lysate solution was added to 100 μl PicoGreen reagent and incubated in the dark at room temperature for 5 minutes. The absorbance was read at an excitation/emission of 480 to 520 nm on the plate reader.", "The inhibitory effects of BMSCs on mixed PBMC reaction (MLR) and PBMC proliferation stimulated by phytohemagglutinin (PHA) (4 μg/ml; Roche, Mannheim, Germany) were measured using the MTT assay [36] and the 3H-TdR assay [10] as described previously. Briefly, BMSCs were seeded in V-bottomed, 96-well culture plates for 4 hours for adherence, and then irradiated (30 Gy) with Co60 before being cultured with the mixed PBMCs or the PBMCs stimulated by PHA.\n[SUBTITLE] Two-way mixed PBMC reaction [SUBSECTION] For the two-way MLR, allogeneic PBMCs (15 × 104 cells/cm2) from a healthy volunteer were mixed in a 1:1 ratio with PBMCs from another unrelated healthy volunteer (third-party setting). The mixed PBMCs were then mixed with different amounts (15 × 103 cells/cm2 = 1:20 BMSC:PBMC ratio, 3 × 104 cells/cm2 = 1:10, 6 × 104 cells/cm2 = 1:5, 15 × 104 cells/cm2 = 1:2, 3 × 105 cells/cm2 = 1:1) of BMSCs (experiment wells) or without BMSCs (blank wells) in V-bottomed, 96-well culture plates to ensure efficient cell-cell contact for 5 days in 0.2 ml modified RPMI-1640 medium (Gibco, BRL, Grand Island, NY, USA) supplemented with 10% FBS.\nFor the two-way MLR, allogeneic PBMCs (15 × 104 cells/cm2) from a healthy volunteer were mixed in a 1:1 ratio with PBMCs from another unrelated healthy volunteer (third-party setting). The mixed PBMCs were then mixed with different amounts (15 × 103 cells/cm2 = 1:20 BMSC:PBMC ratio, 3 × 104 cells/cm2 = 1:10, 6 × 104 cells/cm2 = 1:5, 15 × 104 cells/cm2 = 1:2, 3 × 105 cells/cm2 = 1:1) of BMSCs (experiment wells) or without BMSCs (blank wells) in V-bottomed, 96-well culture plates to ensure efficient cell-cell contact for 5 days in 0.2 ml modified RPMI-1640 medium (Gibco, BRL, Grand Island, NY, USA) supplemented with 10% FBS.\n[SUBTITLE] Allogeneic PBMC proliferation assay [SUBSECTION] Compared with the MLR, the allogeneic PBMC proliferation assay only uses one allogeneic PBMC reaction (30 × 104 cells/cm2) from a healthy volunteer stimulated with PHA, instead of two PBMC reactions. Inhibitory effects were measured on the 5th day using the MTT assay with an EL800 microplate reader at 570 nm and the 3H-TdR assay with a microplate scintillation and luminescence counter (Packard NXT, Meriden, CT, USA). Results were expressed as mean absorbance (optical density (OD)) ± standard deviation (SD) and as mean counts per minute (CPM) ± SD, respectively. 
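The ALP readout described earlier in this section is an absorbance at 405 nm, converted to a rate over the 1-minute incubation and normalized to the DNA content measured by the PicoGreen assay. A minimal sketch of that normalization chain is given below; the function name and all calibration constants are placeholders for illustration, not values taken from the study.

# Sketch (not the authors' code) of normalizing an ALP absorbance reading to DNA content.
def alp_activity_per_mg_dna(od_405, od_per_mm_substrate, incubation_min, dna_mg):
    """Return ALP activity as mM substrate converted per minute per mg DNA."""
    substrate_mm = od_405 / od_per_mm_substrate      # absorbance -> concentration via an assumed standard curve
    rate_mm_per_min = substrate_mm / incubation_min  # rate over the 1-minute incubation described above
    return rate_mm_per_min / dna_mg                  # normalize to PicoGreen-quantified DNA

# Hypothetical numbers, used only to show the arithmetic
print(alp_activity_per_mg_dna(od_405=0.40, od_per_mm_substrate=0.08, incubation_min=1.0, dna_mg=0.005))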
All measurements were performed in triplicate.\nThe data are presented as percentage inhibition values calculated using the following formulae (Table 1):\nDetails regarding the formula for percentage inhibition in the present study\nBMSC, bone marrow-derived mensenchymal stem cell; PBMC, peripheral blood mononuclear cell; HD-PBMC, peripheral blood mononuclear cell of healthy donor; PHA, phytohemagglutinin; MLR, mixed PBMC reaction.\nOD(exp), OD(adj) and OD(bla) represent the mean absorbance of experiment wells, adjusted wells (only BMSCs) and blank wells, respectively, and CPM(exp), CPM(adj) and CPM(bla) represent the mean counts per minute of the corresponding wells. Depending on the experimental design, there were some wells used for controlling. Results were expressed as the mean (% inhibition) ± SD.\nCompared with the MLR, the allogeneic PBMC proliferation assay only uses one allogeneic PBMC reaction (30 × 104 cells/cm2) from a healthy volunteer stimulated with PHA, instead of two PBMC reactions. Inhibitory effects were measured on the 5th day using the MTT assay with an EL800 microplate reader at 570 nm and the 3H-TdR assay with a microplate scintillation and luminescence counter (Packard NXT, Meriden, CT, USA). Results were expressed as mean absorbance (optical density (OD)) ± standard deviation (SD) and as mean counts per minute (CPM) ± SD, respectively. All measurements were performed in triplicate.\nThe data are presented as percentage inhibition values calculated using the following formulae (Table 1):\nDetails regarding the formula for percentage inhibition in the present study\nBMSC, bone marrow-derived mensenchymal stem cell; PBMC, peripheral blood mononuclear cell; HD-PBMC, peripheral blood mononuclear cell of healthy donor; PHA, phytohemagglutinin; MLR, mixed PBMC reaction.\nOD(exp), OD(adj) and OD(bla) represent the mean absorbance of experiment wells, adjusted wells (only BMSCs) and blank wells, respectively, and CPM(exp), CPM(adj) and CPM(bla) represent the mean counts per minute of the corresponding wells. Depending on the experimental design, there were some wells used for controlling. Results were expressed as the mean (% inhibition) ± SD.", "For the two-way MLR, allogeneic PBMCs (15 × 104 cells/cm2) from a healthy volunteer were mixed in a 1:1 ratio with PBMCs from another unrelated healthy volunteer (third-party setting). The mixed PBMCs were then mixed with different amounts (15 × 103 cells/cm2 = 1:20 BMSC:PBMC ratio, 3 × 104 cells/cm2 = 1:10, 6 × 104 cells/cm2 = 1:5, 15 × 104 cells/cm2 = 1:2, 3 × 105 cells/cm2 = 1:1) of BMSCs (experiment wells) or without BMSCs (blank wells) in V-bottomed, 96-well culture plates to ensure efficient cell-cell contact for 5 days in 0.2 ml modified RPMI-1640 medium (Gibco, BRL, Grand Island, NY, USA) supplemented with 10% FBS.", "Compared with the MLR, the allogeneic PBMC proliferation assay only uses one allogeneic PBMC reaction (30 × 104 cells/cm2) from a healthy volunteer stimulated with PHA, instead of two PBMC reactions. Inhibitory effects were measured on the 5th day using the MTT assay with an EL800 microplate reader at 570 nm and the 3H-TdR assay with a microplate scintillation and luminescence counter (Packard NXT, Meriden, CT, USA). Results were expressed as mean absorbance (optical density (OD)) ± standard deviation (SD) and as mean counts per minute (CPM) ± SD, respectively. 
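The exact percentage-inhibition formulae are those given in Table 1 of the source, which is not reproduced in this record. The sketch below shows one commonly used form that is consistent with the well definitions above (experiment wells contain BMSCs plus PBMCs, adjusted wells contain BMSCs only, blank wells contain PBMCs without BMSCs); the specific formula is an assumption, and Table 1 remains the authoritative definition.

# Sketch of a percentage-inhibition calculation from OD (MTT) or CPM (3H-TdR) readings.
def percent_inhibition(signal_exp, signal_adj, signal_bla):
    """Inhibition (%) of PBMC proliferation by BMSCs; works for OD or CPM values."""
    corrected = signal_exp - signal_adj        # subtract the contribution of BMSCs alone
    return (1.0 - corrected / signal_bla) * 100.0

# Hypothetical OD values at a single BMSC:PBMC ratio
print(percent_inhibition(signal_exp=0.55, signal_adj=0.10, signal_bla=0.60))  # -> 25.0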
All measurements were performed in triplicate.\nThe data are presented as percentage inhibition values calculated using the following formulae (Table 1):\nDetails regarding the formula for percentage inhibition in the present study\nBMSC, bone marrow-derived mensenchymal stem cell; PBMC, peripheral blood mononuclear cell; HD-PBMC, peripheral blood mononuclear cell of healthy donor; PHA, phytohemagglutinin; MLR, mixed PBMC reaction.\nOD(exp), OD(adj) and OD(bla) represent the mean absorbance of experiment wells, adjusted wells (only BMSCs) and blank wells, respectively, and CPM(exp), CPM(adj) and CPM(bla) represent the mean counts per minute of the corresponding wells. Depending on the experimental design, there were some wells used for controlling. Results were expressed as the mean (% inhibition) ± SD.", "BMSCs were trypsinized and then irradiated (30 Gy) with Co60 before being co-cultured with PBMCs from a healthy volunteer in the presence of PHA (4 μg/ml; Roche) in 24-well plates (Nunclon, Roskilde, Denmark) and plated at a ratio of 1:10 in a total volume of 2 ml/well in triplicate for 72 hours. The cell density was 5 × 104/cm2 BMSCs and 5 × 105/cm2 PBMCs in a mix. Phorbol myristate acetate (50 ng/ml; Sigma, St Louis, MO, USA) and calcium ionomycin (1 μg/ml; Sigma) were added 6 hours prior to the end of the 72-hour co-culture. All of the PBMCs were then collected to be assayed by flow cytometry for the CCR4+CCR6+ Th and Treg cells. PBMCs were also grown alone in BMSC-free medium and used as control.", "To detect the surface markers [37] of BMSCs and the frequency of CCR4+CCR6+ Th and Treg cells in PBMCs, the antibodies (Table 2) - including CD105(FITC), CD73(FITC), CD90(FITC), CD34(FITC), CD45(FITC), CD14(PE) and HLA-DR(FITC) for BMSCs; CCR4(PE-Cy7), CD196(CCR6)(PE) and CD4(FITC) for CCR4+CCR6+ Th cells [29]; and CD4(FITC), CD25(APC) and Fox-P3(PE) antibodies for Treg cells - were used according to the manufacturers' recommendations. BMSCs and PBMCs marked with appropriate antibodies were measured with a FACScan laser flow cytometry system (Becton Dickinson) immediately. In each experiment, control staining with the appropriate isotype monoclonal antibodies was included. Results were expressed as the mean (frequency, %) ± SD.\nAntibodies used to detect CCR4+CCR6+ Th and Treg cells in PBMCs and phenotype BMSCs by flow cytometry\naAntibodies used to phenotype bone marrow-derived mensenchymal stem cells (BMSCs) by flow cytometry. bAntibodies used to detect CCR4+CCR6+ CD4+ T-helper (Th) cells and forkhead box P3-positive regulatory T (Treg) cells by flow cytometry. PBMC, peripheral blood mononuclear cell.", "Data are expressed as the mean ± SD, and the significance of the results was determined using the unpaired Student's t test. The product-moment correlation coefficient was used to test the correlations between the suppression ratios of BMSCs and the ratio of CCR4+CCR6+ Th cells to Treg cells in peripheral blood. Statistical analysis was performed using the SPSS computer program (SPSS Inc., Chicago, IL, USA). P < 0.05 was considered statistically significant.", "To evaluate the biological properties of AS-BMSCs, compared with those of HD-BMSCs, the studies for growth characteristics, cell viability and multiple differentiation potentials in vitro were performed. The AS-BMSC growth curves have the same tendency as those for HD-BMSCs. 
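The statistical analysis described above (unpaired Student's t test for group differences, the product-moment correlation coefficient for associations, significance at P < 0.05) was carried out in SPSS. For readers who want to reproduce the same tests outside SPSS, a minimal sketch with scipy is shown below; the data are simulated placeholders, not the study's measurements.

# Sketch of the statistical procedures named above (unpaired t test, Pearson correlation).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
as_values = rng.normal(loc=0.50, scale=0.05, size=51)  # placeholder measurements, 51 AS patients
hd_values = rng.normal(loc=0.51, scale=0.05, size=49)  # placeholder measurements, 49 healthy donors

t_stat, p_ttest = stats.ttest_ind(as_values, hd_values)      # unpaired Student's t test (equal variances)
r, p_corr = stats.pearsonr(as_values, rng.normal(size=51))   # product-moment correlation coefficient

print(f"t = {t_stat:.2f}, p = {p_ttest:.3f}; r = {r:.2f}, p = {p_corr:.3f}")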
The BMSC proliferation data of these two groups at each day (1 to 12 days) were tested by unpaired Student's t test, and the statistical result indicated that there was no statistically significant difference in BMSC growth characteristics between ASp and HDs (HD1 and HD2) (P > 0.05, Student's t test for independent samples). Established cultures (12 days) of BMSCs exhibited close, even equivalent, cell viability at each time point from day 1 to day 12, as determined by cellular viability assays, and the difference in OD at 490 nm between ASp and HDs (HD1 and HD2) at each day (1 to 12 days) was also not statistically significant (P > 0.05, Student's t test for independent samples). The cultures had similar purities: (QL [the lower point of the interquartile range], QU [the upper point of the interquartile range]) = (95%, 99%) for AS-BMSCs, (QL, QU) = (96%, 98%) for HD1-BMSCs and (QL, QU) = (96%, 99%) for HD2-BMSCs.", "To explore whether the multiple differentiation potentials of BMSCs in AS were abnormal, we investigated the osteogenic, adipogenic and chondrogenic differentiation potentials of AS-BMSCs and HD-BMSCs in the present study. Clearly differentiated osteocytes and adipocytes were detected as early as the 7th day after induction of osteogenic and adipogenic differentiation, and clearly differentiated chondrocytes were seen at about 14 days after induction (Figure 1A to 1C).\nBone marrow-derived mesenchymal stem cell triple differentiation potentials from ankylosing spondylitis patients and healthy donors. (A), (B), (C) Morphological characteristics of bone marrow-derived mesenchymal stem cells (BMSCs) for osteogenic, adipogenic and chondrogenic differentiation evaluated under the inverted phase contrast microscope; ankylosing spondylitis (AS)-BMSCs have the same morphological properties as the HLA-B27-negative healthy donor (HD1)-BMSCs and HLA-B27-positive healthy donor (HD2)-BMSCs. Osteocytes were stained for calcium deposition using Alizarin Red-S (A1, B1, C1: ×400) and for alkaline phosphatase (ALP) with the Cell Alkaline Phosphatase-S assay (A2, B2, C2: ×200). Adipocytes were filled with many fat vacuoles, and Oil Red O was used to stain the fat vacuoles of adipocytes (A3, B3, C3: ×200). Chondroblast differentiation from BMSCs was identified with Alcian blue staining (A4, B4, C4: ×200). (D) General photographs of BMSCs for osteogenic differentiation, stained with Alizarin Red-S. (E) ALP activities of AS-BMSCs, HD1-BMSCs and HD2-BMSCs were 644 ± 45, 655 ± 49 and 646 ± 51, respectively; differences not statistically significant (P > 0.05). Data presented as mean ± standard deviation. MLR, mixed peripheral blood mononuclear cell reaction; PHA, phytohemagglutinin.\nThere appeared to be two stages in the BMSC differentiation process for both ASp and HDs. In the early stage, only a few osteocytes, adipocytes and chondrocytes were found among the undifferentiated BMSCs. Gradually, these three cell types increased in number; at the same time, the cells enlarged and their cytoplasm became more abundant: osteocytes made closer contact with one another, the fat vacuoles of adipocytes multiplied and grew larger, and chondrocytes began to accumulate collagen fibers. In the later stage, these three cell types increased rapidly and came to predominate.
For the adipocytes, osteocytes and chondrocytes derived from BMSCs, the purities were (QL, QU) = (90%, 97%), (QL, QU) = (91%, 96%) and (QL, QU) = (88%, 95%) for ASp, (QL, QU) = (88%, 98%), (QL, QU) = (90%, 97%) and (QL, QU) = (90%, 96%) for HD1, and (QL, QU) = (86%, 95%), (QL, QU) = (89%, 98%) and (QL, QU) = (92%, 97%) for HD2, respectively.\nThe calcium nodules were stained to present a red color (Figure 1A1 to 1D1), after Alizarin Red staining for calcium deposits of osteocytes was performed to determine the mineralization of BMSCs. For the adipogenic differentiation, the mass fat vacuoles of adipocytes were also stained to present a red color by Oil Red O staining (Figure 1A3 to 1C3). The well-differentiated chondrocytes were Alcian Blue-positive, and presented a bright blue color after staining (Figure 1A4 to 1C4).\nThe ALP activity, normalized to DNA concentration, is plotted in Figure 1E. The ALP activity (mean ± SD) was 644 ± 45 (mM p-nitrophenyl phosphate/minute per mg DNA) for AS-BMSCs (n = 51), which is lower than the 655 ± 49 for HD1-BMSCs (n = 37) (P > 0.05) and the 646 ± 51 for HD2-BMSCs (n = 12) (P > 0.05). All three values were much higher than those of the baseline ALP for BMSCs of ASp, HD1 and HD2 (85 ± 40, 88 ± 48 and 82 ± 13, respectively) (P < 0.001) in control medium without the osteogenic factors. ALP staining was performed on the 14th day to investigate the maturity degree of osteocytes in the groups of Asp, HD1 and HD2 (Figure 1A2 to 1C2).", "The AS-BMSCs and HD-BMSCs were then examined for typical MSC phenotypic surface markers. Flow cytometric analysis showed that the AS-BMSCs and HD-BMSCs (HD1-BMSCs and HD2-BMSCs) have the same phenotypic surface markers, just as the typical MSCs did. The samples all express high levels of the surface markers CD105, CD73 and CD90, and lack expression of CD45, CD34, CD14 and HLA-DR surface molecules (Figure 2).\nPhenotyping of bone marrow-derived mensenchymal stem cells for typical mensenchymal stromal cell surface markers. Single-parameter histograms for (A1) to (A3), (B1) to (B3), (C1) to (C3) individual mensenchymal stromal cell (MSC) markers and (A4) to (A7), (B4) to (B7), (C4) to (C7) MSC exclusion markers, representative of samples from patients with ankylosing spondylitis (AS) and from healthy donors (blue lines). Red lines indicate background fluorescence obtained with isotype control IgG. x axis, fluorescence intensity; y axis, cell counts. BMSC, bone marrow-derived mensenchymal stem cell; HD1, HLA-B27-negative healthy donors; HD2, HLA-B27-positive healthy donors.", "Under the condition that the proliferation characteristics, cell viability, multiple-differentiation potentials and surface markers of AS-BMSCs were normal, compared with HD-BMSCs, the immunomodulation potential of AS-BMSCs was evaluated in the present study. The effects of BMSCs from ASp (n = 51), HD1 (n = 37) and HD2 (n = 12) on two-way MLR or PBMC proliferation in the presence of PHA were evaluated by mixing BMSCs and mixed PBMCs for two-way MLR, or by PBMCs from a third healthy volunteer in the presence of PHA for PBMC proliferation assay at five BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1, respectively.\n[SUBTITLE] Two-way mixed PBMC reaction [SUBSECTION] The differences of absorbance between 0 days and 5 days were not statistically significant for the allogeneic PBMCs from a healthy volunteer (P = 0.351) and the PBMCs from another unrelated healthy volunteer (P = 0.418) (Figure 3A1). 
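As a numerical cross-check of the ALP comparison reported above, the published summary statistics (644 ± 45 for the 51 AS patients versus 655 ± 49 for the 37 HLA-B27-negative donors) can be fed directly into a two-sample t test computed from summary data. This is an editorial illustration rather than the authors' analysis; it simply shows that a difference of this size is compatible with the reported P > 0.05.

# Two-sample t test from the reported ALP summary statistics (AS-BMSCs vs HD1-BMSCs).
from scipy import stats

t, p = stats.ttest_ind_from_stats(mean1=644, std1=45, nobs1=51,
                                  mean2=655, std2=49, nobs2=37,
                                  equal_var=True)
print(f"t = {t:.2f}, p = {p:.2f}")  # p is well above 0.05, consistent with the text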
For the mixed PBMCs, however, the absorbance at 5 days was significantly higher than the value at 0 days (P < 0.001, Figure 3A1). As shown in Figure 3A2,A3, there was a statistically significant reduction in the suppressive potential (% inhibition) of BMSCs from ASp on two-way MLR at all five ratios, compared with the percentage inhibition of HD-BMSCs (P < 0.001, Figure 3A2,A3).\nReduced suppressive potential of bone marrow-derived mesenchymal stem cells of patients with ankylosing spondylitis. (A1) Absorbance of mixed peripheral blood mononuclear cells (PBMCs) at 5 days (0.194 ± 0.038) was significantly higher than the value at 0 days (0.104 ± 0.023) (*P < 0.001), showing the significant mixed PBMC reaction. (A2), (A3) Compared with healthy donor (HD)-bone marrow-derived mesenchymal stem cells (BMSCs), the decreased percentage inhibition of ankylosing spondylitis (AS)-BMSCs on two-way mixed peripheral blood mononuclear cell reaction (MLR) at different ratios showed that the suppressive potentials of AS-BMSCs were reduced (% inhibition reduced, *P < 0.001). (B1) PBMCs derived from a healthy volunteer proliferated significantly in the presence of phytohemagglutinin (PHA) in vitro (P < 0.001). (B2), (B3) Percentage inhibition of AS-BMSCs on PBMC proliferation induced by PHA was significantly lower than the values of HD-BMSCs at varied BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1 (% inhibition reduced, *P < 0.001). Data expressed as mean ± standard deviation of triplicates of three separate experiments. (A2), (B2) and (A3), (B3) were performed by MTT assay and 3H-TdR assay, respectively. There were no statistically significant differences in suppressive potential between (A2), (A3) HLA-B27-negative healthy donor (HD1)-BMSCs and (B2), (B3) HLA-B27-positive healthy donor (HD2)-BMSCs. OD, optical density.\n[SUBTITLE] Allogeneic PBMC proliferation assay [SUBSECTION] Similarly, when PBMC proliferation was elicited by means of PHA, the addition of BMSCs from ASp also produced a significantly decreased inhibitory effect on PBMC proliferation (Figure 3B2,B3; P < 0.001, Student's t test for independent data), at BMSC:PBMC ratios ranging from 1:20 to 1:1. Additionally, the differences of absorbance between 0 days and 5 days without PHA were not significant (P = 0.223), while the value at 5 days with PHA was significantly higher than the value at 0 days (P < 0.001, Figure 3B1).\nFurthermore, in either the MLR (Figure 3A3) or the PBMC proliferation assay stimulated with PHA (Figure 3B3), the 3H-TdR assay data suggested a significant relationship between dose and suppression of immunoreactivity of BMSCs for ASp, HD1 and HD2. The MTT assay showed the same trend, though less clearly.", "The differences of absorbance between 0 days and 5 days were not statistically significant for the allogeneic PBMCs from a healthy volunteer (P = 0.351) and the PBMCs from another unrelated healthy volunteer (P = 0.418) (Figure 3A1). For the mixed PBMCs, however, the absorbance at 5 days was significantly higher than the value at 0 days (P < 0.001, Figure 3A1). As shown in Figure 3A2,A3, there was a statistically significant reduction in the suppressive potential (% inhibition) of BMSCs from ASp on two-way MLR at all five ratios, compared with the percentage inhibition of HD-BMSCs (P < 0.001, Figure 3A2,A3).\nReduced suppressive potential of bone marrow-derived mesenchymal stem cells of patients with ankylosing spondylitis. (A1) Absorbance of mixed peripheral blood mononuclear cells (PBMCs) at 5 days (0.194 ± 0.038) was significantly higher than the value at 0 days (0.104 ± 0.023) (*P < 0.001), showing the significant mixed PBMC reaction. (A2), (A3) Compared with healthy donor (HD)-bone marrow-derived mesenchymal stem cells (BMSCs), the decreased percentage inhibition of ankylosing spondylitis (AS)-BMSCs on two-way mixed peripheral blood mononuclear cell reaction (MLR) at different ratios showed that the suppressive potentials of AS-BMSCs were reduced (% inhibition reduced, *P < 0.001).
(B1) PBMCs derived from a healthy volunteer proliferated significantly in the presence of phytohemagglutinin (PHA) in vitro (P < 0.001). (B2), (B3) Percentage inhibition of AS-BMSCs on PBMC proliferation induced by PHA was significantly lower than the values of HD-BMSCs at varied BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1 (% inhibition reduced, *P < 0.001). Data expressed as mean ± standard deviation of triplicates of three separate experiments. (A2), (B2) and (A3), (B3) were performed by MTT assay and 3H-TdR assay, respectively. There were no statistically significant differences in suppressive potential between (A2), (A3) HLA-B27-negative healthy donor (HD1)-BMSCs and (B2), (B3) HLA-B27-positive healthy donor (HD2)-BMSCs. OD, optical density.", "Similarly, when PBMC proliferation was elicited by means of PHA, the addition of BMSCs from ASp also produced a significantly decreased inhibitory effect on PBMC proliferation (Figure 3B2,B3; P < 0.001, Student's t test for independent data), at BMSC:PBMC ratios ranging from 1:20 to 1:1. Additionally, the differences of absorbance between 0 days and 5 days without PHA were not significant (P = 0.223), while the value at 5 days with PHA was significantly higher than the value at 0 days (P < 0.001, Figure 3B1).\nFurthermore, in either the MLR (Figure 3A3) or the PBMC proliferation assay stimulated with PHA (Figure 3B3), the 3H-TdR assay data suggested a significant relationship between dose and suppression of immunoreactivity of BMSCs for ASp, HD1 and HD2. The MTT assay showed the same trend, though less clearly.", "Recent studies have independently revealed an enhanced Th17 response and a weakened Treg response in some autoimmune diseases [38,39], so we also examined the frequencies of CCR4+CCR6+ Th and Treg cells in the PBMCs of ASp and HDs (Figure 4). The PBMCs from ASp and HDs were examined for these subset populations using flow cytometry; frequencies were defined as the percentages of CCR4+CCR6+ Th cells (CCR4/CCR6 double-positive) [32] and Treg cells (CD4/CD25/Fox-P3 triple-positive) within total CD4-positive Th cells (CCR4+CCR6+ Th/Th, Treg/Th), CD3-positive T cells (CCR4+CCR6+ Th/T, Treg/T), lymphocytes (CCR4+CCR6+ Th/L, Treg/L), and peripheral blood mononuclear cells (CCR4+CCR6+ Th/PBMCs, Treg/PBMCs), respectively. The proportions of Fox-P3-positive cells among CD4/CD25 double-positive cells (Fox-P3+/CD4+CD25+) and among PBMCs (Fox-P3+/PBMCs) were also determined. Compared with healthy donors (HD1 and HD2), the CCR4+CCR6+ Th population of ASp was significantly increased (P < 0.001, Table 3 and Figure 5), whereas Treg cells and Fox-P3-positive cells were significantly decreased (P < 0.001, Student's t test for independent data, Table 3 and Figure 5). There were no significant differences between HD1 and HD2.\nRepresentative plots of CCR4+CCR6+ T-helper and regulatory T cells. Representative plots of peripheral circulating populations of (A1) to (A3) CCR4+CCR6+ T-helper (Th) cells (green) and (B1) to (B3) regulatory T (Treg) cells (red) in peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp) (A1, B1), HLA-B27-negative healthy donors (HD1) (A2, B2) and HLA-B27-positive healthy donors (HD2) (A3, B3). FOX-P3, forkhead box P3; FSH, forward scatter-height; SSH, side scatter-height.\nPercentages of CCR4+/6+ Th and Treg cells in appropriate cell subsets\nData presented as mean percentage ± standard deviation.
The differences for all percentages between patients with ankylosing spondylitis (ASp) and either HLA-B27-negative healthy donors (HD1) or HLA-B27-positive healthy donors (HD2) were significant (P < 0.001, according to a two-tailed significant level of 0.05). There were no significant differences between HD1 and HD2 (P > 0.05). Th, CD4+ T-helper cells; T, T lymphocytes; L, lymphocytes; Treg, forkhead box P3-positive regulatory T cells.\nPercentages of CCR4+CCR6+ T-helper, regulatory T and Fox-P3-positive cells in peripheral blood mononuclear cells. Statistical multiple bar graphs showing percentages of increased CCR4+CCR6+ T-helper (Th) cells, reduced regulatory T (Treg) cells and reduced forkhead box P3 (Fox-P3)-positive cells in the peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp), compared with values for healthy donors (HLA-B27-negative healthy donors (HD1) and HLA-B27-positive healthy donors (HD2)). *P < 0.001, Student's t test for independent data.", "We performed the direct contact co-culture of BMSCs and PBMCs to explore whether the reduced immunomodulation potential of AS-BMSCs altered the balance of CCR4+CCR6+ Th/Treg cells. The PBMCs were collected to be assayed by flow cytometry for the CCR4+CCR6+ Th and Treg cells after co-culture with BMSCs of ASp and HDs (HD1 and HD2) for 3 days. The percentages of Treg cells (0.63 ± 0.23%) and Fox-P3-positive cells (0.74 ± 0.11%) in PBMCs after co-culture with AS-BMSCs for 3 days reduced significantly, whereas the percentages of CCR4+CCR6+ Th cells (1.87 ± 0.29%) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly, compared with these values of groups, including 3-day HD1, 3-day HD2, 3-day control, and 0 days (P < 0.001, Figure 6). Impressively, the ratio of CCR4+CCR6+ Th cells to Treg cells (CCR4+CCR6+ Th/Treg) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly (P < 0.001, Figure 6).\nRatios of Fox-P3/PBMCs, CCR4+CCR6+ Th/PBMCs, Treg/PBMCs and CCR4+CCR6+ Th/Treg. Peripheral blood mononuclear cells (PBMCs) were collected to be assayed by flow cytometry for the CCR4+CCR6+ T-helper (Th) cells, regulatory T (Treg) cells and forkhead box P3 (Fox-P3)-positive cells after co-culture with or without (3 day-control) ankylosing spondylitis (AS)-bone marrow-derived mensenchymal stem cells (BMSCs) (3 day-ASp), HLA-B27-negative healthy donors (HD1)-BMSCs (3 day-HD1) or HLA-B27-positive healthy donors (HD2)-BMSCs (3 day-HD2) for 72 hours. Statistical multiple bar graphs show that AS-BMSCs induced the ratio of CCR4+CCR6+ Th/Treg imbalance via reducing Treg/PBMCs but increasing CCR4+CCR6+ Th/PBMCs; it also produced a significant reduction of Fox-P3-positive cells in PBMCs (*P < 0.001, Student's t test for independent data).", "When examining data from all subjects tested, we observed positive correlations between the percentage inhibition of BMSCs (MLR) and the percentage inhibition of BMSCs (PHA) for all of the 51 ASp, 37 HD1 and 12 HD2. Interestingly, however, for all of the ASp (Figure 7A), HD1 (Figure 7B) and HD2 (Figure 7C) there were significantly negative correlations between the ratios of CCR4+CCR6+ Th cells to Treg cells in the peripheral blood and either percentage inhibition of BMSCs (MLR) or percentage inhibition of BMSCs (PHA) at all five ratios (P < 0.01, respectively).\nCorrelation analysis between CCR4+CCR6+ Th/Treg ratio and immunomodulation potential of bone marrow-derived mensenchymal stem cells. 
(A) All patients with ankylosing spondylitis (ASp), (B) HLA-B27-negative healthy donors (HD1) and (C) HLA-B27-positive healthy donors (HD2) presented significantly negative correlations between the ratio of CCR4+CCR6+ T-helper (Th) cells to regulatory T (Treg) cells in the peripheral blood mononuclear cells (PBMCs) and percentage inhibition either from two-way mixed PBMC reaction (MLR) (upper panel) or PBMC proliferation induced by phytohemagglutinin (PHA) (lower panel) at a bone marrow-derived mensenchymal stem cell (BMSC):PBMC ratio of 1:10 (P < 0.01, respectively).", "ALP: alkaline phosphatase; AS: ankylosing spondylitis; AS-BMSC: bone marrow-derived mensenchymal stem cell of patient with AS; ASp: patients with ankylosing spondylitis; AS-PBMC: peripheral blood mononuclear cell of patient with AS; BMSC: bone marrow-derived mensenchymal stem cell; CPM: counts per minute; DMEM: Dulbecco's modified Eagle's medium; FBS: fetal bovine serum; Fox-P3: forkhead box P3; 3H-TdR: 3H-thymidine; HD: healthy donor; HD1: HLA-B27-negative healthy donors; HD2: HLA-B27-positive healthy donors; HD-BMSC: bone marrow-derived mensenchymal stem cell of healthy donor; HD-PBMC: peripheral blood mononuclear cell of healthy donor; IL: interleukin; MLR: mixed peripheral blood mononuclear cell reaction; MSC: mensenchymal stromal cell; MTT: methyl thiazolyl tetrazolium; OD: optical density; PBMC: peripheral blood mononuclear cell; PBS: phosphate-buffered saline; PCR: polymerase chain reaction; PHA: phytohemagglutinin; QL: the lower point of interquartile range; QU: the upper point of interquartile range; SD: standard deviation; TGFβ: transforming growth factor beta; Th: T-helper; TNF: tumor necrosis factor; Treg: regulatory T.", "The authors declare that they have no competing interests.", "RY, XL, YM and YT carried out the experimental work and the data collection and interpretation. LH, KC and JY participated in the design and coordination of experimental work, and the acquisition of data. MR and YW participated in the study design, data collection, analysis of data and preparation of the manuscript. PW and HS carried out the study design, the analysis and interpretation of data and drafted the manuscript. All authors read and approved the final manuscript." ]
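The correlation analysis reported above relates the peripheral CCR4+CCR6+ Th/Treg ratio to the percentage inhibition exerted by the matching BMSCs using the product-moment correlation coefficient. A minimal sketch of that computation with scipy is given below; the data are simulated placeholders with a built-in negative trend, used only to show the mechanics rather than to reproduce the study's values.

# Sketch of the ratio-versus-inhibition correlation described above (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
th_treg_ratio = rng.uniform(1.0, 6.0, size=51)                       # hypothetical per-subject Th/Treg ratios
pct_inhibition = 60.0 - 8.0 * th_treg_ratio + rng.normal(0, 5, 51)   # hypothetical % inhibition at a 1:10 ratio

r, p = stats.pearsonr(th_treg_ratio, pct_inhibition)                 # product-moment correlation coefficient
print(f"r = {r:.2f}, p = {p:.4g}")  # a negative r mirrors the negative correlations reported in the text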
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Patients and controls", "Bone marrow aspiration, human BMSCs and PBMCs", "Cell viability and proliferation test for BMSCs", "In vitro differentiation potential assay of BMSCs", "Alkaline phosphatase measurement", "Immunomodulation potential of BMSCs", "Two-way mixed PBMC reaction", "Allogeneic PBMC proliferation assay", "Direct contact co-culture of BMSCs and PBMCs", "Antibodies and flow cytometry", "Statistical analysis", "Results", "Growth characteristics and cell viability of AS-BMSCs are normal", "Triple differentiation potentials of AS-BMSCs in vitro were not changed", "Phenotype of bone marrow-derived mesenchymal stem cells", "Decreased suppressive potential of AS-BMSCs on either two-way MLR or PBMC proliferation stimulated with PHA", "Two-way mixed PBMC reaction", "Allogeneic PBMC proliferation assay", "Increased CCR4+CCR6+ Th and decreased Treg populations in peripheral blood of patients with AS", "BMSCs of patients with AS-induced CCR4+CCR6+ Th/Treg imbalance", "Negative correlations between percentages of CCR4+CCR6+ Th/Treg cells and the suppressive ratio of BMSCs", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors' contributions" ]
[ "Ankylosing spondylitis (AS) is a chronic autoimmune inflammatory disease, the prototypic seronegative spondylarthritis that primarily affects the sacroiliac joints and the axial skeleton, which was characterized by inflammatory back pain, enthesitis, and specific organ involvement [1]. AS is a complex multifactorial disease; several pathogenetic factors, including infection [1,2], environmental triggers [1], genetic susceptibility such as HLA-B27 positivity [3,4] and HLA-E gene polymorphism [5], and in particular, autoimmune disorders [1] have been reported to potentially trigger the onset or maintain the pathogenesis progress of AS. Additionally, the genome-wide association study of AS identifies non-MHC susceptibility loci [6], such as IL-23R (rs11209026) and ERAP1 (rs27434). There were also, however, some controversies; for example, no candidate bacteria were detected by PCR in biopsies from sacroiliac joints [7] and most HLA B27-positive individuals remain healthy [1]. The precise pathogenesis of AS is therefore largely unknown at present. Nowadays, more and more studies have focused on the immunological factors for AS.\nMesenchymal stromal cells (MSCs) isolated from a variety of adult tissues, including the bone marrow, have multiple differentiation potentials in different cell types, and also display immunosuppressive (in vitro [8,9], in vivo [10-12]) and anti-inflammatory properties [13], so their putative therapeutic role in a variety of inflammatory autoimmune diseases is currently under investigation. Recently, many findings indicate that MSC immunomodulation potential plays a critical role in severe aplastic anemia [14]. Simultaneously, substantial disorders and abnormalities of MSCs exist in many autoimmune diseases [15]. Few studies, however, have so far focused on whether there were some abnormalities in bone marrow-derived mesenchymal stem cells (BMSCs) of patients with ankylosing spondylitis (ASp) with regard to the biological and immunological properties.\nMore recently, two additional subsets, the forkhead box P3 (Fox-P3)-positive regulatory subset (Treg) and the IL-17-producing subset (Th17) [16-19], have emerged and together with Th1 and Th2 cells, formed a functional quartet of CD4+ T cells that provides a closer insight into the mechanisms of immune-mediated diseases such as AS. Autoimmune diseases are thought to arise from a breakdown of immunological self-tolerance leading to aberrant immune responses to self-antigen. Ordinarily, regulatory T (Treg) cells - including both natural and induced Treg cells - control these self-reactive cells [20]. Several studies of patients with connective tissue diseases found reduced [21] or functionally impaired [22] Treg cells, and Treg cells of autoimmune hepatitis patients have reduced expression of Fox-P3 and CTLA-4, which may lead to impaired suppressor activity [23]. On the contrary, these proinflammatory Th17 cells are implicated in different autoimmune disease models [24-26]. Furthermore, these cells typically express IL-23R on their membrane [27], and recent studies in AS [28-30] show an important genetic contribution for polymorphisms in the gene that codes for this IL-23R. The active polymorphisms in the IL-23R gene could thus indicate an important role for this pathogenic T-cell subset (Th17) in the development and maintenance of AS. 
The involvement of Treg and Th17 cells in AS, however, has not yet been clearly established.\nAs previously described, skewing of responses towards Th17 and away from BMSCs or Treg cells may be responsible for the development and/or progression of AS [31]. Furthermore, CCR6 and CCR4 identified true Th17 memory cells producing IL-17 [32] and the majority of Th17 cells were CCR6+CCR4+ [33]. Aimed at investigating the puzzling issues above, the present study was designed to examine the biological and immunological properties of BMSCs, to examine the frequencies and phenotypes of Treg cells and proinflammatory CCR4+CCR6+ Th memory cells, and to study the interactions between BMSCs and CCR4+CCR6+ Th/Treg cells in peripheral blood mononuclear cells (PBMCs) for AS.", "[SUBTITLE] Patients and controls [SUBSECTION] The present study was approved by the ethics committee of the Sun Yat-Sen Memorial Hospital of Sun Yat-Sen University, Guangzhou, China. In addition, informed consent was obtained from all patients and all healthy donors (HDs). Fifty-one ASp (eight women and 43 men) with an average age of 29.4 years (17 to 45 years) and 49 HDs (eight women and 41 men) with an average age of 27.3 years (18 to 39 years) were included in the study. All of the AS patients were diagnosed according to the New York modified criteria [34] and were HLA-B27-positive; conversely, 37 healthy donors were HLA-B27-negative (HD1) and 12 healthy donors were HLA-B27-positive (HD2). Sixteen patients were diagnosed for the first time, and the research samples from all ASp were taken at the active stage (all Bath Ankylosing Spondylitis Disease Activity Index ≥4) and without taking any medicine for at least 2 weeks.\nThe present study was approved by the ethics committee of the Sun Yat-Sen Memorial Hospital of Sun Yat-Sen University, Guangzhou, China. In addition, informed consent was obtained from all patients and all healthy donors (HDs). Fifty-one ASp (eight women and 43 men) with an average age of 29.4 years (17 to 45 years) and 49 HDs (eight women and 41 men) with an average age of 27.3 years (18 to 39 years) were included in the study. All of the AS patients were diagnosed according to the New York modified criteria [34] and were HLA-B27-positive; conversely, 37 healthy donors were HLA-B27-negative (HD1) and 12 healthy donors were HLA-B27-positive (HD2). Sixteen patients were diagnosed for the first time, and the research samples from all ASp were taken at the active stage (all Bath Ankylosing Spondylitis Disease Activity Index ≥4) and without taking any medicine for at least 2 weeks.\n[SUBTITLE] Bone marrow aspiration, human BMSCs and PBMCs [SUBSECTION] After being informed regarding the scientific contributions, possible risks and complications and the corresponding prevention and treating measures for bone marrow aspirations, all of the healthy controls and ASp expressed approval and signed the informed consent. The bone marrow aspirations were all performed by skilled allied health professionals strictly according to the international standardized procedure for bone marrow aspirations. The bone marrow samples of AS patients and HDs were diluted with DMEM (low-glucose DMEM) containing 10% FBS. The mononuclear cells were prepared by gradient centrifugation at 900 × g for 30 minutes on Percoll (Pharmacia Biotech, Uppsala, Sweden) of density 1.073 g/ml. 
The cells were washed, counted, seeded at 2 × 106 cells/cm2 in 25-cm2 flasks containing low-glucose DMEM supplemented with 10% FBS and cultured at 37°C, 5% carbon dioxide. Medium was replaced and the cells in suspension were removed at 48 hours and every 3 or 4 days thereafter. BMSCs were recovered using 0.25% Trypsin-ethylenediamine tetraacetic acid and replated at a density of 5 × 103 to 6 × 103 cells/cm2 surface area as passage 1 cells when the culture reached 90% confluency. BMSCs after the third subculture were used for described experiments. PBMCs were obtained by the Ficoll-Hypaque (Pharmacia Biotech, Uppsala, Sweden) gradient separation of the buffy coat of ASp and HDs.\nAfter being informed regarding the scientific contributions, possible risks and complications and the corresponding prevention and treating measures for bone marrow aspirations, all of the healthy controls and ASp expressed approval and signed the informed consent. The bone marrow aspirations were all performed by skilled allied health professionals strictly according to the international standardized procedure for bone marrow aspirations. The bone marrow samples of AS patients and HDs were diluted with DMEM (low-glucose DMEM) containing 10% FBS. The mononuclear cells were prepared by gradient centrifugation at 900 × g for 30 minutes on Percoll (Pharmacia Biotech, Uppsala, Sweden) of density 1.073 g/ml. The cells were washed, counted, seeded at 2 × 106 cells/cm2 in 25-cm2 flasks containing low-glucose DMEM supplemented with 10% FBS and cultured at 37°C, 5% carbon dioxide. Medium was replaced and the cells in suspension were removed at 48 hours and every 3 or 4 days thereafter. BMSCs were recovered using 0.25% Trypsin-ethylenediamine tetraacetic acid and replated at a density of 5 × 103 to 6 × 103 cells/cm2 surface area as passage 1 cells when the culture reached 90% confluency. BMSCs after the third subculture were used for described experiments. PBMCs were obtained by the Ficoll-Hypaque (Pharmacia Biotech, Uppsala, Sweden) gradient separation of the buffy coat of ASp and HDs.\n[SUBTITLE] Cell viability and proliferation test for BMSCs [SUBSECTION] BMSCs were seeded in 96-well plates at a concentration of 1 × 104/ml, in a final volume of 100 μl fresh medium (10% FBS + low-glucose DMEM), and three wells of each sample were digested using 0.25% Trypsin-ethylenediamine tetraacetic acid for cell counting per day up to 12 days. The BMSC growth curves were made using the data for cell proliferation obtained above. Using MTT (5 mg/ml; Sigma-Aldrich Co., St. Louis, MO, USA), dimethyl sulphoxide (Sigma) and an EL800 microplate reader (BioTek Instruments, Winooski, VT, USA) that was to determine absorbance at 490 nm, the cell viability curves for BMSCs were acquired in the same way according to the day and the absorbance. The BMSC proliferation ability was also examined by 3H-TdR assay. Fresh medium was used as a negative control.\nBMSCs were seeded in 96-well plates at a concentration of 1 × 104/ml, in a final volume of 100 μl fresh medium (10% FBS + low-glucose DMEM), and three wells of each sample were digested using 0.25% Trypsin-ethylenediamine tetraacetic acid for cell counting per day up to 12 days. The BMSC growth curves were made using the data for cell proliferation obtained above. Using MTT (5 mg/ml; Sigma-Aldrich Co., St. 
Louis, MO, USA), dimethyl sulphoxide (Sigma) and an EL800 microplate reader (BioTek Instruments, Winooski, VT, USA) that was to determine absorbance at 490 nm, the cell viability curves for BMSCs were acquired in the same way according to the day and the absorbance. The BMSC proliferation ability was also examined by 3H-TdR assay. Fresh medium was used as a negative control.\n[SUBTITLE] In vitro differentiation potential assay of BMSCs [SUBSECTION] To induce osteogenic differentiation, BMSCs were initially seeded in six-well plates at a concentration of 104/cm2. After preculturing for 24 hours, the BMSCs were allowed to grow in osteogenic medium (high-glucose DMEM supplemented with 10% FBS, 50 mg/l ascorbic acid, 10 mM β-glycerolphosphate and 10 nM dexamethasone; all these inducing reagents from Sigma). The BMSCs were then incubated in 5% carbon dioxide at 37°C, according to the experimental requirements for up to 14 days, and the medium was replaced every 3 days before harvest. The alkaline phosphatase (ALP) and mineralization of BMSCs were assayed using Cell Alkaline Phosphatase Staining assay (Sigma) and Alizarin Red staining (AR-S, 40 mmol/l, pH 4.2; Sigma) on the 14th day, respectively.\nTo induce adipogenic differentiation, the BMSCs were seeded in six-well plates at a concentration of 104/cm2. After preculturing for 24 hours, the BMSCs were shifted to adipogenic medium (low-glucose DMEM supplemented with 10% FBS, 1 μM dexamethasone, 10 μg/ml insulin, 0.5 mM 3-isobutyl-1-methylxanthine and 0.2 mM indomethacin; all these inducing reagents from Sigma). The BMSCs were then incubated in 5% carbon dioxide at 37°C, and the medium was replaced every 3 days before harvest. The intracellular lipid accumulation as an indicator was visualized on the 14th day by Oil Red O staining after fixed with 4% cold paraformaldehyde in PBS (pH 7.4) and washed with distilled water.\nTo induce chondrogenic differentiation, aliquots of 2.5 × 105 BMSCs were centrifuged at 1,000 rpm for 5 minutes in 15-ml polypropylene conical tubes to form pellets, which were then cultured in high-glucose DMEM supplemented with 1% ITS-Premix (Becton-Dickinson, Mountain View, CA, USA), 50 mg/ml ascorbic acid (Sigma), 10-3 M sodium pyruvate (Sigma), 10-7 M dexametazone (Sigma), and 10 ng/ml transforming growth factor-β3 (R&D Systems, Minneapolis, MN, USA) for 28 days. The pellets were then fixed with 4% paraformaldehyde, embedded in paraffin, and subjected to Alcian blue staining to confirm chondrogenic differentiation.\nThe BMSCs in fresh medium (high-glucose DMEM supplemented with 10% FBS) without these differentiation-inducing factors were used as the experimental control, and fresh medium without any cells was used as a negative control. All measurements were performed in triplicate. The images were visualized using an inverted phase contrast microscope (Nikon Eclipse Ti-S, Nikon Corporation, Tokyo Prefecture, Japan).\nTo induce osteogenic differentiation, BMSCs were initially seeded in six-well plates at a concentration of 104/cm2. After preculturing for 24 hours, the BMSCs were allowed to grow in osteogenic medium (high-glucose DMEM supplemented with 10% FBS, 50 mg/l ascorbic acid, 10 mM β-glycerolphosphate and 10 nM dexamethasone; all these inducing reagents from Sigma). The BMSCs were then incubated in 5% carbon dioxide at 37°C, according to the experimental requirements for up to 14 days, and the medium was replaced every 3 days before harvest. 
The alkaline phosphatase (ALP) and mineralization of BMSCs were assayed using Cell Alkaline Phosphatase Staining assay (Sigma) and Alizarin Red staining (AR-S, 40 mmol/l, pH 4.2; Sigma) on the 14th day, respectively.\nTo induce adipogenic differentiation, the BMSCs were seeded in six-well plates at a concentration of 104/cm2. After preculturing for 24 hours, the BMSCs were shifted to adipogenic medium (low-glucose DMEM supplemented with 10% FBS, 1 μM dexamethasone, 10 μg/ml insulin, 0.5 mM 3-isobutyl-1-methylxanthine and 0.2 mM indomethacin; all these inducing reagents from Sigma). The BMSCs were then incubated in 5% carbon dioxide at 37°C, and the medium was replaced every 3 days before harvest. The intracellular lipid accumulation as an indicator was visualized on the 14th day by Oil Red O staining after fixed with 4% cold paraformaldehyde in PBS (pH 7.4) and washed with distilled water.\nTo induce chondrogenic differentiation, aliquots of 2.5 × 105 BMSCs were centrifuged at 1,000 rpm for 5 minutes in 15-ml polypropylene conical tubes to form pellets, which were then cultured in high-glucose DMEM supplemented with 1% ITS-Premix (Becton-Dickinson, Mountain View, CA, USA), 50 mg/ml ascorbic acid (Sigma), 10-3 M sodium pyruvate (Sigma), 10-7 M dexametazone (Sigma), and 10 ng/ml transforming growth factor-β3 (R&D Systems, Minneapolis, MN, USA) for 28 days. The pellets were then fixed with 4% paraformaldehyde, embedded in paraffin, and subjected to Alcian blue staining to confirm chondrogenic differentiation.\nThe BMSCs in fresh medium (high-glucose DMEM supplemented with 10% FBS) without these differentiation-inducing factors were used as the experimental control, and fresh medium without any cells was used as a negative control. All measurements were performed in triplicate. The images were visualized using an inverted phase contrast microscope (Nikon Eclipse Ti-S, Nikon Corporation, Tokyo Prefecture, Japan).\n[SUBTITLE] Alkaline phosphatase measurement [SUBSECTION] On the 14th day the osteogenic medium was removed, and then 1.0 ml Triton X-100 (Sigma) was added to each well. A cell scraper was used to remove the BMSCs from the well bottom, and then the 1.0 ml cell lysate were placed in a 1.5-ml centrifuge tube. The samples were then processed through two freeze-thaw cycles (-70°C and room temperature, 45 minutes each) to rupture the cell membrane and extract the proteins and DNA from the cells. A p-nitrophenyl phosphate liquid substrate system (Stanbio, Boerne, TX, USA) was used to analyze the ALP concentration from the cells of each group. Then 10 μl each cell lysate solution was added to 190 μl p-nitrophenyl phosphate substrate and incubated in the dark at room temperature for 1 minute. The absorbance was read using a plate reader (M5 SpectraMax; Molecular Devices, Sunnyvale, CA, USA) at 405 nm and normalized to the PicoGreen assay [35]. DNA was quantified using the Quant-iT PicoGreen Kit (Invitrogen, Carlsbad, CA, USA) following standard protocols. Briefly, 100 μl each cell lysate solution was added to 100 μl PicoGreen reagent and incubated in the dark at room temperature for 5 minutes. The absorbance was read at an excitation/emission of 480 to 520 nm on the plate reader.\nOn the 14th day the osteogenic medium was removed, and then 1.0 ml Triton X-100 (Sigma) was added to each well. A cell scraper was used to remove the BMSCs from the well bottom, and then the 1.0 ml cell lysate were placed in a 1.5-ml centrifuge tube. 
Immunomodulation potential of BMSCs

The inhibitory effects of BMSCs on the mixed PBMC reaction (MLR) and on PBMC proliferation stimulated by phytohemagglutinin (PHA) (4 μg/ml; Roche, Mannheim, Germany) were measured using the MTT assay [36] and the 3H-TdR assay [10] as described previously. Briefly, BMSCs were seeded in V-bottomed, 96-well culture plates for 4 hours for adherence, and then irradiated (30 Gy) with Co60 before being cultured with the mixed PBMCs or with the PBMCs stimulated by PHA.

Two-way mixed PBMC reaction

For the two-way MLR, allogeneic PBMCs (15 × 10^4 cells/cm^2) from a healthy volunteer were mixed in a 1:1 ratio with PBMCs from another, unrelated healthy volunteer (third-party setting). The mixed PBMCs were then combined with different amounts of BMSCs (15 × 10^3 cells/cm^2 = 1:20 BMSC:PBMC ratio, 3 × 10^4 cells/cm^2 = 1:10, 6 × 10^4 cells/cm^2 = 1:5, 15 × 10^4 cells/cm^2 = 1:2, 3 × 10^5 cells/cm^2 = 1:1) (experiment wells) or without BMSCs (blank wells) in V-bottomed, 96-well culture plates, to ensure efficient cell-cell contact, for 5 days in 0.2 ml modified RPMI-1640 medium (Gibco BRL, Grand Island, NY, USA) supplemented with 10% FBS.

Allogeneic PBMC proliferation assay

In contrast to the MLR, the allogeneic PBMC proliferation assay uses only one allogeneic PBMC population (30 × 10^4 cells/cm^2) from a healthy volunteer, stimulated with PHA, instead of two PBMC populations. Inhibitory effects were measured on the 5th day using the MTT assay with an EL800 microplate reader at 570 nm and the 3H-TdR assay with a microplate scintillation and luminescence counter (Packard NXT, Meriden, CT, USA). Results were expressed as mean absorbance (optical density (OD)) ± standard deviation (SD) and as mean counts per minute (CPM) ± SD, respectively. All measurements were performed in triplicate.

The data are presented as percentage inhibition values calculated using the formulae given in Table 1.

Table 1. Details regarding the formula for percentage inhibition in the present study. BMSC, bone marrow-derived mesenchymal stem cell; PBMC, peripheral blood mononuclear cell; HD-PBMC, peripheral blood mononuclear cell of healthy donor; PHA, phytohemagglutinin; MLR, mixed PBMC reaction.

OD(exp), OD(adj) and OD(bla) represent the mean absorbance of the experiment wells, the adjusted wells (BMSCs only) and the blank wells, respectively, and CPM(exp), CPM(adj) and CPM(bla) represent the mean counts per minute of the corresponding wells. Depending on the experimental design, some wells were used as controls. Results were expressed as the mean (% inhibition) ± SD.
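Table 1 itself is not reproduced here, but a common way of expressing such an inhibition index, consistent with the well definitions above, is sketched below. This is an illustrative assumption rather than the published formula, which may differ in detail.

```latex
% Illustrative (assumed) form of the percentage-inhibition index; the exact
% formulae tabulated in Table 1 of the study may differ.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
\[
  \%\text{ inhibition (MTT)} =
    \left( 1 - \frac{OD_{\text{exp}} - OD_{\text{adj}}}{OD_{\text{bla}}} \right) \times 100
\]
\[
  \%\text{ inhibition } ({}^{3}\text{H-TdR}) =
    \left( 1 - \frac{CPM_{\text{exp}} - CPM_{\text{adj}}}{CPM_{\text{bla}}} \right) \times 100
\]
\end{document}
```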
Direct contact co-culture of BMSCs and PBMCs

BMSCs were trypsinized and then irradiated (30 Gy) with Co60 before being co-cultured with PBMCs from a healthy volunteer in the presence of PHA (4 μg/ml; Roche) in 24-well plates (Nunclon, Roskilde, Denmark), plated at a BMSC:PBMC ratio of 1:10 in a total volume of 2 ml/well, in triplicate, for 72 hours. The cell densities in the mix were 5 × 10^4/cm^2 BMSCs and 5 × 10^5/cm^2 PBMCs. Phorbol myristate acetate (50 ng/ml; Sigma, St Louis, MO, USA) and calcium ionomycin (1 μg/ml; Sigma) were added 6 hours prior to the end of the 72-hour co-culture. All of the PBMCs were then collected and assayed by flow cytometry for CCR4+CCR6+ Th and Treg cells. PBMCs grown alone in BMSC-free medium were used as the control.

Antibodies and flow cytometry

To detect the surface markers [37] of BMSCs and the frequencies of CCR4+CCR6+ Th and Treg cells in PBMCs, the antibodies listed in Table 2 - CD105(FITC), CD73(FITC), CD90(FITC), CD34(FITC), CD45(FITC), CD14(PE) and HLA-DR(FITC) for BMSCs; CCR4(PE-Cy7), CD196(CCR6)(PE) and CD4(FITC) for CCR4+CCR6+ Th cells [29]; and CD4(FITC), CD25(APC) and Fox-P3(PE) for Treg cells - were used according to the manufacturers' recommendations. BMSCs and PBMCs labelled with the appropriate antibodies were measured immediately with a FACScan laser flow cytometry system (Becton Dickinson). In each experiment, control staining with the appropriate isotype monoclonal antibodies was included. Results were expressed as the mean (frequency, %) ± SD.

Table 2. Antibodies used to detect CCR4+CCR6+ Th and Treg cells in PBMCs and to phenotype BMSCs by flow cytometry. aAntibodies used to phenotype bone marrow-derived mesenchymal stem cells (BMSCs) by flow cytometry. bAntibodies used to detect CCR4+CCR6+ CD4+ T-helper (Th) cells and forkhead box P3-positive regulatory T (Treg) cells by flow cytometry. PBMC, peripheral blood mononuclear cell.
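The text does not specify how marker-positivity gates were set relative to the isotype controls. One common convention is to place the gate at a high percentile of the isotype-control fluorescence and score the fraction of stained cells above it, as sketched below; the fluorescence arrays are simulated placeholders, not study data, and this is an assumed convention rather than the procedure used here.

```python
# Sketch of one common positivity-gating convention against the isotype control:
# gate at the 99th percentile of the isotype fluorescence, then count stained cells above it.
# The simulated distributions below are placeholders, not data from this study.
import numpy as np

rng = np.random.default_rng(0)
isotype_fluorescence = rng.lognormal(mean=2.0, sigma=0.4, size=10_000)  # background staining
cd105_fluorescence = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)    # stained sample

threshold = np.percentile(isotype_fluorescence, 99)            # positivity gate
pct_positive = 100.0 * np.mean(cd105_fluorescence > threshold)
print(f"CD105-positive: {pct_positive:.1f}% (gate at fluorescence {threshold:.1f})")
```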
Statistical analysis

Data are expressed as the mean ± SD, and the significance of the results was determined using the unpaired Student's t test. The product-moment correlation coefficient was used to test the correlations between the suppression ratios of BMSCs and the ratio of CCR4+CCR6+ Th cells to Treg cells in peripheral blood. Statistical analysis was performed using the SPSS software package (SPSS Inc., Chicago, IL, USA). P < 0.05 was considered statistically significant.
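The two procedures named above are standard tests; a minimal sketch of how they could be run outside SPSS is shown below, using hypothetical per-donor values (the variable names and numbers are placeholders, not data from this study).

```python
# Minimal sketch of the two tests described above, on hypothetical placeholder data.
from scipy import stats

# Example: percentage inhibition at one BMSC:PBMC ratio, AS patients vs. healthy donors.
inhibition_as = [32.1, 28.4, 35.0, 30.2, 27.8]   # hypothetical values
inhibition_hd = [51.3, 48.9, 55.2, 50.1, 53.6]   # hypothetical values

t_stat, p_value = stats.ttest_ind(inhibition_as, inhibition_hd)  # unpaired Student's t test
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")

# Example: correlation between the BMSC suppression ratio and the CCR4+CCR6+ Th/Treg ratio.
suppression_ratio = [0.30, 0.35, 0.28, 0.42, 0.25]   # hypothetical values
th_treg_ratio = [3.1, 2.8, 3.5, 2.2, 3.9]            # hypothetical values

r, p = stats.pearsonr(suppression_ratio, th_treg_ratio)  # product-moment correlation
print(f"r = {r:.2f}, P = {p:.4f}")
```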
Growth characteristics and cell viability of AS-BMSCs are normal

To evaluate the biological properties of AS-BMSCs compared with those of HD-BMSCs, their growth characteristics, cell viability and multiple differentiation potentials in vitro were examined. The AS-BMSC growth curves showed the same tendency as those of HD-BMSCs. The BMSC proliferation data of the two groups on each day (days 1 to 12) were compared by unpaired Student's t test, and there was no statistically significant difference in BMSC growth characteristics between ASp and HDs (HD1 and HD2) (P > 0.05, Student's t test for independent samples). Established cultures (12 days) of BMSCs exhibited similar, even equivalent, cell viability at each time point from day 1 to day 12, as determined by cellular viability assays, and the difference in OD at 490 nm between ASp and HDs (HD1 and HD2) on each day (days 1 to 12) was also not statistically significant (P > 0.05, Student's t test for independent samples). The cultures had similar purities: (QL [the lower point of the interquartile range], QU [the upper point of the interquartile range]) = (95%, 99%) for AS-BMSCs, (QL, QU) = (96%, 98%) for HD1-BMSCs and (QL, QU) = (96%, 99%) for HD2-BMSCs.

Triple differentiation potentials of AS-BMSCs in vitro were not changed

To explore whether the multiple differentiation potentials of BMSCs in AS were abnormal, we investigated the osteogenic, adipogenic and chondrogenic differentiation potentials of AS-BMSCs and HD-BMSCs in the present study. Clearly differentiated osteocytes and adipocytes were detected as early as the 7th day after induction of osteogenic and adipogenic differentiation, and clearly differentiated chondrocytes were seen at about 14 days after induction (Figure 1A to 1C).

Figure 1. Bone marrow-derived mesenchymal stem cell triple differentiation potentials from ankylosing spondylitis patients and healthy donors. (A), (B), (C) Morphological characteristics of bone marrow-derived mesenchymal stem cells (BMSCs) undergoing osteogenic, adipogenic and chondrogenic differentiation, evaluated by inverted phase contrast microscopy; ankylosing spondylitis (AS)-BMSCs have the same morphological properties as the HLA-B27-negative healthy donor (HD1)-BMSCs and HLA-B27-positive healthy donor (HD2)-BMSCs. Osteocytes were stained for calcium deposition using Alizarin Red-S (A1, B1, C1: x400) and for alkaline phosphatase (ALP) with the Cell Alkaline Phosphatase-S assay (A2, B2, C2: x200). Adipocytes were filled with many fat vacuoles, and Oil Red O was used to stain the fat vacuoles of adipocytes (A3, B3, C3: x200). Chondroblast differentiation from BMSCs was identified with Alcian blue staining (A4, B4, C4: x200). (D) General photographs of BMSCs after osteogenic differentiation, stained with Alizarin Red-S. (E) ALP activities of AS-BMSCs, HD1-BMSCs and HD2-BMSCs were 644 ± 45, 655 ± 49 and 646 ± 51, respectively; the differences were not statistically significant (P > 0.05). Data presented as mean ± standard deviation. MLR, mixed peripheral blood mononuclear cell reaction; PHA, phytohemagglutinin.

There appeared to be two stages in the BMSC differentiation process for both ASp and HDs. In the early stage, only a few osteocytes, adipocytes and chondrocytes were found among the undifferentiated BMSCs. Gradually these three cell types increased in number; at the same time the cell bodies became larger and the cytoplasm more abundant, as, for example, osteocytes made closer contact, the fat vacuoles of adipocytes multiplied and grew bigger, and chondrocytes began to acquire many collagen fibers. In the later stage, these three cell types increased rapidly and came to predominate. For the adipocytes, osteocytes and chondrocytes derived from BMSCs, the purities were (QL, QU) = (90%, 97%), (91%, 96%) and (88%, 95%) for ASp, (88%, 98%), (90%, 97%) and (90%, 96%) for HD1, and (86%, 95%), (89%, 98%) and (92%, 97%) for HD2, respectively.

The calcium nodules stained red (Figure 1A1 to 1D1) after Alizarin Red staining for the calcium deposits of osteocytes was performed to determine the mineralization of BMSCs. For adipogenic differentiation, the mass of fat vacuoles in adipocytes also stained red with Oil Red O (Figure 1A3 to 1C3). Well-differentiated chondrocytes were Alcian Blue-positive and presented a bright blue color after staining (Figure 1A4 to 1C4).

The ALP activity, normalized to DNA concentration, is plotted in Figure 1E. The ALP activity (mean ± SD) was 644 ± 45 (mM p-nitrophenyl phosphate/minute per mg DNA) for AS-BMSCs (n = 51), which was lower than the 655 ± 49 for HD1-BMSCs (n = 37) (P > 0.05) and the 646 ± 51 for HD2-BMSCs (n = 12) (P > 0.05). All three values were much higher than the baseline ALP values for BMSCs of ASp, HD1 and HD2 (85 ± 40, 88 ± 48 and 82 ± 13, respectively) (P < 0.001) cultured in control medium without the osteogenic factors. ALP staining was performed on the 14th day to assess the degree of maturity of the osteocytes in the ASp, HD1 and HD2 groups (Figure 1A2 to 1C2).
Phenotype of bone marrow-derived mesenchymal stem cells

The AS-BMSCs and HD-BMSCs were then examined for typical MSC phenotypic surface markers. Flow cytometric analysis showed that the AS-BMSCs and HD-BMSCs (HD1-BMSCs and HD2-BMSCs) displayed the same phenotypic surface markers as typical MSCs. All samples expressed high levels of the surface markers CD105, CD73 and CD90, and lacked expression of the CD45, CD34, CD14 and HLA-DR surface molecules (Figure 2).

Figure 2. Phenotyping of bone marrow-derived mesenchymal stem cells for typical mesenchymal stromal cell surface markers. Single-parameter histograms for (A1) to (A3), (B1) to (B3), (C1) to (C3) individual mesenchymal stromal cell (MSC) markers and (A4) to (A7), (B4) to (B7), (C4) to (C7) MSC exclusion markers, representative of samples from patients with ankylosing spondylitis (AS) and from healthy donors (blue lines). Red lines indicate background fluorescence obtained with isotype control IgG. x axis, fluorescence intensity; y axis, cell counts. BMSC, bone marrow-derived mesenchymal stem cell; HD1, HLA-B27-negative healthy donors; HD2, HLA-B27-positive healthy donors.

Decreased suppressive potential of AS-BMSCs on either two-way MLR or PBMC proliferation stimulated with PHA

Given that the proliferation characteristics, cell viability, multiple differentiation potentials and surface markers of AS-BMSCs were normal compared with HD-BMSCs, the immunomodulation potential of AS-BMSCs was evaluated next. The effects of BMSCs from ASp (n = 51), HD1 (n = 37) and HD2 (n = 12) on the two-way MLR or on PBMC proliferation in the presence of PHA were evaluated by mixing BMSCs with the mixed PBMCs (two-way MLR) or with PBMCs from a third healthy volunteer in the presence of PHA (PBMC proliferation assay), at BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1.

Two-way mixed PBMC reaction

The differences in absorbance between 0 days and 5 days were not statistically significant for the allogeneic PBMCs from one healthy volunteer (P = 0.351) or for the PBMCs from the other, unrelated healthy volunteer (P = 0.418) (Figure 3A1).
For the mixed PBMCs, however, the absorbance at 5 days was significantly higher than the value at 0 days (P < 0.001, Figure 3A1). As shown in Figure 3A2,A3, there was a statistically significant reduction in the suppressive potential (% inhibition) of BMSCs from ASp on the two-way MLR at all five ratios, compared with the percentage inhibition of HD-BMSCs (P < 0.001, Figure 3A2,A3).

Figure 3. Reduced suppressive potential of bone marrow-derived mesenchymal stem cells of patients with ankylosing spondylitis. (A1) Absorbance of mixed peripheral blood mononuclear cells (PBMCs) at 5 days (0.194 ± 0.038) was significantly higher than the value at 0 days (0.104 ± 0.023) (*P < 0.001), demonstrating a robust mixed PBMC reaction. (A2), (A3) Compared with healthy donor (HD)-bone marrow-derived mesenchymal stem cells (BMSCs), the decreased percentage inhibition of ankylosing spondylitis (AS)-BMSCs on the two-way mixed peripheral blood mononuclear cell reaction (MLR) at different ratios showed that the suppressive potential of AS-BMSCs was reduced (% inhibition reduced, *P < 0.001). (B1) PBMCs derived from a healthy volunteer proliferated significantly in the presence of phytohemagglutinin (PHA) in vitro (P < 0.001). (B2), (B3) Percentage inhibition of AS-BMSCs on PBMC proliferation induced by PHA was significantly lower than the values for HD-BMSCs at BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1 (% inhibition reduced, *P < 0.001). Data expressed as mean ± standard deviation of triplicates of three separate experiments. (A2), (B2) and (A3), (B3) were obtained by MTT assay and 3H-TdR assay, respectively. There were no statistically significant differences in suppressive potential between HLA-B27-negative healthy donor (HD1)-BMSCs and HLA-B27-positive healthy donor (HD2)-BMSCs. OD, optical density.

Allogeneic PBMC proliferation assay

Similarly, when PBMC proliferation was elicited with PHA, the addition of BMSCs from ASp also produced a statistically significantly decreased inhibitory effect on PBMC proliferation (Figure 3B2,B3; P < 0.001, Student's t test for independent data) over the range of BMSC:PBMC ratios from 1:20 to 1:1. In addition, the differences in absorbance between 0 days and 5 days without PHA were not significant (P = 0.223), whereas the value at 5 days with PHA was significantly higher than the value at 0 days (P < 0.001, Figure 3B1).

Furthermore, in both the MLR (Figure 3A3) and the PBMC proliferation assay stimulated with PHA (Figure 3B3), the 3H-TdR assay data suggested a significant relationship between dose and suppression of immunoreactivity of the BMSCs for ASp, HD1 and HD2. The MTT assay showed essentially the same tendency, although less clearly.
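The dose-suppression relationship noted above can be checked numerically by correlating percentage inhibition with the BMSC:PBMC ratio. The sketch below does this on hypothetical values (the numbers are placeholders, not data from the study), taking the ratio on a logarithmic scale.

```python
# Sketch of a dose-response check: does % inhibition rise with the BMSC:PBMC ratio?
# All numbers below are hypothetical placeholders.
import numpy as np
from scipy import stats

bmsc_pbmc_ratio = np.array([1/20, 1/10, 1/5, 1/2, 1/1])      # the five ratios tested
inhibition_pct = np.array([18.0, 27.0, 35.0, 48.0, 60.0])    # hypothetical mean % inhibition

# Product-moment correlation between log(ratio) and % inhibition.
r, p = stats.pearsonr(np.log(bmsc_pbmc_ratio), inhibition_pct)
print(f"r = {r:.2f}, P = {p:.4f}")
```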
Increased CCR4+CCR6+ Th and decreased Treg populations in peripheral blood of patients with AS

Recent studies have independently revealed an enhanced Th17 response and a weakened Treg response in some autoimmune diseases [38,39], so we also examined the frequencies of CCR4+CCR6+ Th and Treg cells in the PBMCs of ASp and HDs (Figure 4). The PBMCs from ASp and HDs were examined for these subset populations by flow cytometry, defined as the percentages of CCR4+CCR6+ Th cells (CCR4/CCR6 double-positive) [32] and Treg cells (CD4/CD25/Fox-P3 triple-positive) among the total CD4-positive Th cells (CCR4+CCR6+ Th/Th, Treg/Th), CD3-positive T cells (CCR4+CCR6+ Th/T, Treg/T), lymphocytes (CCR4+CCR6+ Th/L, Treg/L) and peripheral blood mononuclear cells (CCR4+CCR6+ Th/PBMCs, Treg/PBMCs), respectively. The proportions of Fox-P3-positive cells among CD4/CD25 double-positive cells (Fox-P3+/CD4+CD25+) and among PBMCs (Fox-P3+/PBMCs) were also determined. Compared with healthy donors (HD1 and HD2), the CCR4+CCR6+ Th population of ASp was significantly increased (P < 0.001, Table 3 and Figure 5), whereas Treg cells and Fox-P3-positive cells were significantly decreased (P < 0.001, Student's t test for independent data, Table 3 and Figure 5). There were no significant differences between HD1 and HD2.

Figure 4. Representative plots of CCR4+CCR6+ T-helper and regulatory T cells. Representative plots of peripheral circulating populations of (A1) to (A3) CCR4+CCR6+ T-helper (Th) cells (green) and (B1) to (B3) regulatory T (Treg) cells (red) in peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp) (A1, B1), HLA-B27-negative healthy donors (HD1) (A2, B2) and HLA-B27-positive healthy donors (HD2) (A3, B3). FOX-P3, forkhead box P3; FSH, forward scatter-height; SSH, side scatter-height.

Table 3. Percentages of CCR4+/6+ Th and Treg cells in the appropriate cell subsets. Data presented as the percentage mean ± standard deviation. The differences for all percentages between patients with ankylosing spondylitis (ASp) and either HLA-B27-negative healthy donors (HD1) or HLA-B27-positive healthy donors (HD2) were significant (P < 0.001, at a two-tailed significance level of 0.05). There were no significant differences between HD1 and HD2 (P > 0.05). Th, CD4+ T-helper cells; T, T lymphocytes; L, lymphocytes; Treg, forkhead box P3-positive regulatory T cells.

Figure 5. Percentages of CCR4+CCR6+ T-helper, regulatory T and Fox-P3-positive cells in peripheral blood mononuclear cells. Statistical multiple bar graphs showing the percentages of increased CCR4+CCR6+ T-helper (Th) cells, reduced regulatory T (Treg) cells and reduced forkhead box P3 (Fox-P3)-positive cells in the peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp), compared with the values for healthy donors (HLA-B27-negative healthy donors (HD1) and HLA-B27-positive healthy donors (HD2)). *P < 0.001, Student's t test for independent data.
The differences for all percentages between patients with ankylosing spondylitis (ASp) and either HLA-B27-negative healthy donors (HD1) or HLA-B27-positive healthy donors (HD2) were significant (P < 0.001, according to a two-tailed significant level of 0.05). There were no significant differences between HD1 and HD2 (P > 0.05). Th, CD4+ T-helper cells; T, T lymphocytes; L, lymphocytes; Treg, forkhead box P3-positive regulatory T cells.\nPercentages of CCR4+CCR6+ T-helper, regulatory T and Fox-P3-positive cells in peripheral blood mononuclear cells. Statistical multiple bar graphs showing percentages of increased CCR4+CCR6+ T-helper (Th) cells, reduced regulatory T (Treg) cells and reduced forkhead box P3 (Fox-P3)-positive cells in the peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp), compared with values for healthy donors (HLA-B27-negative healthy donors (HD1) and HLA-B27-positive healthy donors (HD2)). *P < 0.001, Student's t test for independent data.\n[SUBTITLE] BMSCs of patients with AS-induced CCR4+CCR6+ Th/Treg imbalance [SUBSECTION] We performed the direct contact co-culture of BMSCs and PBMCs to explore whether the reduced immunomodulation potential of AS-BMSCs altered the balance of CCR4+CCR6+ Th/Treg cells. The PBMCs were collected to be assayed by flow cytometry for the CCR4+CCR6+ Th and Treg cells after co-culture with BMSCs of ASp and HDs (HD1 and HD2) for 3 days. The percentages of Treg cells (0.63 ± 0.23%) and Fox-P3-positive cells (0.74 ± 0.11%) in PBMCs after co-culture with AS-BMSCs for 3 days reduced significantly, whereas the percentages of CCR4+CCR6+ Th cells (1.87 ± 0.29%) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly, compared with these values of groups, including 3-day HD1, 3-day HD2, 3-day control, and 0 days (P < 0.001, Figure 6). Impressively, the ratio of CCR4+CCR6+ Th cells to Treg cells (CCR4+CCR6+ Th/Treg) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly (P < 0.001, Figure 6).\nRatios of Fox-P3/PBMCs, CCR4+CCR6+ Th/PBMCs, Treg/PBMCs and CCR4+CCR6+ Th/Treg. Peripheral blood mononuclear cells (PBMCs) were collected to be assayed by flow cytometry for the CCR4+CCR6+ T-helper (Th) cells, regulatory T (Treg) cells and forkhead box P3 (Fox-P3)-positive cells after co-culture with or without (3 day-control) ankylosing spondylitis (AS)-bone marrow-derived mensenchymal stem cells (BMSCs) (3 day-ASp), HLA-B27-negative healthy donors (HD1)-BMSCs (3 day-HD1) or HLA-B27-positive healthy donors (HD2)-BMSCs (3 day-HD2) for 72 hours. Statistical multiple bar graphs show that AS-BMSCs induced the ratio of CCR4+CCR6+ Th/Treg imbalance via reducing Treg/PBMCs but increasing CCR4+CCR6+ Th/PBMCs; it also produced a significant reduction of Fox-P3-positive cells in PBMCs (*P < 0.001, Student's t test for independent data).\nWe performed the direct contact co-culture of BMSCs and PBMCs to explore whether the reduced immunomodulation potential of AS-BMSCs altered the balance of CCR4+CCR6+ Th/Treg cells. The PBMCs were collected to be assayed by flow cytometry for the CCR4+CCR6+ Th and Treg cells after co-culture with BMSCs of ASp and HDs (HD1 and HD2) for 3 days. 
The percentages of Treg cells (0.63 ± 0.23%) and Fox-P3-positive cells (0.74 ± 0.11%) in PBMCs after co-culture with AS-BMSCs for 3 days reduced significantly, whereas the percentages of CCR4+CCR6+ Th cells (1.87 ± 0.29%) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly, compared with these values of groups, including 3-day HD1, 3-day HD2, 3-day control, and 0 days (P < 0.001, Figure 6). Impressively, the ratio of CCR4+CCR6+ Th cells to Treg cells (CCR4+CCR6+ Th/Treg) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly (P < 0.001, Figure 6).\nRatios of Fox-P3/PBMCs, CCR4+CCR6+ Th/PBMCs, Treg/PBMCs and CCR4+CCR6+ Th/Treg. Peripheral blood mononuclear cells (PBMCs) were collected to be assayed by flow cytometry for the CCR4+CCR6+ T-helper (Th) cells, regulatory T (Treg) cells and forkhead box P3 (Fox-P3)-positive cells after co-culture with or without (3 day-control) ankylosing spondylitis (AS)-bone marrow-derived mensenchymal stem cells (BMSCs) (3 day-ASp), HLA-B27-negative healthy donors (HD1)-BMSCs (3 day-HD1) or HLA-B27-positive healthy donors (HD2)-BMSCs (3 day-HD2) for 72 hours. Statistical multiple bar graphs show that AS-BMSCs induced the ratio of CCR4+CCR6+ Th/Treg imbalance via reducing Treg/PBMCs but increasing CCR4+CCR6+ Th/PBMCs; it also produced a significant reduction of Fox-P3-positive cells in PBMCs (*P < 0.001, Student's t test for independent data).\n[SUBTITLE] Negative correlations between percentages of CCR4+CCR6+ Th/Treg cells and the suppressive ratio of BMSCs [SUBSECTION] When examining data from all subjects tested, we observed positive correlations between the percentage inhibition of BMSCs (MLR) and the percentage inhibition of BMSCs (PHA) for all of the 51 ASp, 37 HD1 and 12 HD2. Interestingly, however, for all of the ASp (Figure 7A), HD1 (Figure 7B) and HD2 (Figure 7C) there were significantly negative correlations between the ratios of CCR4+CCR6+ Th cells to Treg cells in the peripheral blood and either percentage inhibition of BMSCs (MLR) or percentage inhibition of BMSCs (PHA) at all five ratios (P < 0.01, respectively).\nCorrelation analysis between CCR4+CCR6+ Th/Treg ratio and immunomodulation potential of bone marrow-derived mensenchymal stem cells. (A) All patients with ankylosing spondylitis (ASp), (B) HLA-B27-negative healthy donors (HD1) and (C) HLA-B27-positive healthy donors (HD2) presented significantly negative correlations between the ratio of CCR4+CCR6+ T-helper (Th) cells to regulatory T (Treg) cells in the peripheral blood mononuclear cells (PBMCs) and percentage inhibition either from two-way mixed PBMC reaction (MLR) (upper panel) or PBMC proliferation induced by phytohemagglutinin (PHA) (lower panel) at a bone marrow-derived mensenchymal stem cell (BMSC):PBMC ratio of 1:10 (P < 0.01, respectively).\nWhen examining data from all subjects tested, we observed positive correlations between the percentage inhibition of BMSCs (MLR) and the percentage inhibition of BMSCs (PHA) for all of the 51 ASp, 37 HD1 and 12 HD2. Interestingly, however, for all of the ASp (Figure 7A), HD1 (Figure 7B) and HD2 (Figure 7C) there were significantly negative correlations between the ratios of CCR4+CCR6+ Th cells to Treg cells in the peripheral blood and either percentage inhibition of BMSCs (MLR) or percentage inhibition of BMSCs (PHA) at all five ratios (P < 0.01, respectively).\nCorrelation analysis between CCR4+CCR6+ Th/Treg ratio and immunomodulation potential of bone marrow-derived mensenchymal stem cells. 
(A) All patients with ankylosing spondylitis (ASp), (B) HLA-B27-negative healthy donors (HD1) and (C) HLA-B27-positive healthy donors (HD2) presented significantly negative correlations between the ratio of CCR4+CCR6+ T-helper (Th) cells to regulatory T (Treg) cells in the peripheral blood mononuclear cells (PBMCs) and percentage inhibition either from two-way mixed PBMC reaction (MLR) (upper panel) or PBMC proliferation induced by phytohemagglutinin (PHA) (lower panel) at a bone marrow-derived mensenchymal stem cell (BMSC):PBMC ratio of 1:10 (P < 0.01, respectively).", "To evaluate the biological properties of AS-BMSCs, compared with those of HD-BMSCs, the studies for growth characteristics, cell viability and multiple differentiation potentials in vitro were performed. The AS-BMSC growth curves have the same tendency as those for HD-BMSCs. The BMSC proliferation data of these two groups at each day (1 to 12 days) were tested by unpaired Student's t test, and the statistical result indicates that there was no statistically significant difference in BMSC growth characteristics between ASp and HDs (HD1 and HD2) (P > 0.05, Student's t test for independent samples). Established cultures (12 days) of BMSCs exhibited close, even equivalent, cell viability at each point of time from 1 to 12 days, as determined by cellular viability assays, and the difference of OD at 490 nm between ASp and HDs (HD1 and HD2) at each day (1 to 12 days) was not also statistically significant (P > 0.05, Student's t test for independent samples). The cultures have similar purities: (QL [the lower point of interquartile range], QU [the upper point of interquartile range]) = (95%, 99%) for AS-BMSCs, (QL, QU) = (96%, 98%) for HD1-BMSCs and (QL, QU) = (96%, 99%) for HD2-BMSCs.", "To explore whether the multiple differentiation potentials of BMSCs in AS were abnormal, we investigate the osteogenic, adipogenic and chondrogenic differentiation potentials of AS-BMSCs and HD-BMSCs in the present study. Obvious differentiated osteocytes and adipocytes were detected as early as the 7th day after being induced for osteogenic and adipogenic differentiation, and obvious differentiated chondrocytes were seen at about 14 days since induction (Figure 1A to 1C).\nBone marrow-derived mensenchymal stem cell triple differentiation potentials from ankylosing spondylitis patients and healthy donors. (A), (B), (C) Morphological characteristics of bone marrow-derived mensenchymal stem cells (BMSCs) for osteogenic, adipogenic and chondrogenic differentiation evaluated by the inverted phase contrast microscope; ankylosing spondylitis (AS)-BMSCs have the same morphological properties as the HLA-B27-negative healthy donor (HD1)-BMSCs and HLA-B27-positive healthy donor (HD2)-BMSCs. Osteocytes were stained for calcium deposition using Alizarin Red-S (A1, B1, C1: x400) and for alkaline phosphatase (ALP) with the Cell Alkaline Phosphatase-S assay (A2, B2, C2: x200). Adipocytes were filled with many fat vacuoles, and Red Oil O was used to stain the fat vacuoles of adipocytes (A3, B3, C3: x200). Chondroblast differentiation from BMSCs was identified with Alcian blue staining (A4, B4, C4: x200). (D) General photographs of BMSCs for osteogenic differentiation, stained with Alizarin Red-S. (E) ALP activities of AS-BMSCs, HD1-BMSCs and HD2-BMSCs were 644 ± 45, 655 ± 49 and 646 ± 51, respectively; differences not statistically significant (P > 0.05). Data presented as mean ± standard deviation. 
MLR, mixed peripheral blood mononuclear cell reaction; PHA, phytohemagglutinin.\nThere appeared to be two stages in the BMSC differentiation process for both ASp and HDs. In the early stage, only a few osteocytes, adipocytes and chondrocytes were found within the undifferentiated BMSCs. Gradually, these three kinds of cells increased; simultaneously, the cell's body got bigger and cytoplasm became more abundant because, for example, osteocytes made closer contact, fat vacuoles of adipocytes multiplied and grew bigger, and chondrocytes began to gain many collagen fibers. In the later stage, these three kinds of cells increased rapidly and nearly predominated. For the adipocytes, osteocytes and chondrocytes derived from BMSCs, the purities were (QL, QU) = (90%, 97%), (QL, QU) = (91%, 96%) and (QL, QU) = (88%, 95%) for ASp, (QL, QU) = (88%, 98%), (QL, QU) = (90%, 97%) and (QL, QU) = (90%, 96%) for HD1, and (QL, QU) = (86%, 95%), (QL, QU) = (89%, 98%) and (QL, QU) = (92%, 97%) for HD2, respectively.\nThe calcium nodules were stained to present a red color (Figure 1A1 to 1D1), after Alizarin Red staining for calcium deposits of osteocytes was performed to determine the mineralization of BMSCs. For the adipogenic differentiation, the mass fat vacuoles of adipocytes were also stained to present a red color by Oil Red O staining (Figure 1A3 to 1C3). The well-differentiated chondrocytes were Alcian Blue-positive, and presented a bright blue color after staining (Figure 1A4 to 1C4).\nThe ALP activity, normalized to DNA concentration, is plotted in Figure 1E. The ALP activity (mean ± SD) was 644 ± 45 (mM p-nitrophenyl phosphate/minute per mg DNA) for AS-BMSCs (n = 51), which is lower than the 655 ± 49 for HD1-BMSCs (n = 37) (P > 0.05) and the 646 ± 51 for HD2-BMSCs (n = 12) (P > 0.05). All three values were much higher than those of the baseline ALP for BMSCs of ASp, HD1 and HD2 (85 ± 40, 88 ± 48 and 82 ± 13, respectively) (P < 0.001) in control medium without the osteogenic factors. ALP staining was performed on the 14th day to investigate the maturity degree of osteocytes in the groups of Asp, HD1 and HD2 (Figure 1A2 to 1C2).", "The AS-BMSCs and HD-BMSCs were then examined for typical MSC phenotypic surface markers. Flow cytometric analysis showed that the AS-BMSCs and HD-BMSCs (HD1-BMSCs and HD2-BMSCs) have the same phenotypic surface markers, just as the typical MSCs did. The samples all express high levels of the surface markers CD105, CD73 and CD90, and lack expression of CD45, CD34, CD14 and HLA-DR surface molecules (Figure 2).\nPhenotyping of bone marrow-derived mensenchymal stem cells for typical mensenchymal stromal cell surface markers. Single-parameter histograms for (A1) to (A3), (B1) to (B3), (C1) to (C3) individual mensenchymal stromal cell (MSC) markers and (A4) to (A7), (B4) to (B7), (C4) to (C7) MSC exclusion markers, representative of samples from patients with ankylosing spondylitis (AS) and from healthy donors (blue lines). Red lines indicate background fluorescence obtained with isotype control IgG. x axis, fluorescence intensity; y axis, cell counts. BMSC, bone marrow-derived mensenchymal stem cell; HD1, HLA-B27-negative healthy donors; HD2, HLA-B27-positive healthy donors.", "Under the condition that the proliferation characteristics, cell viability, multiple-differentiation potentials and surface markers of AS-BMSCs were normal, compared with HD-BMSCs, the immunomodulation potential of AS-BMSCs was evaluated in the present study. 
The effects of BMSCs from ASp (n = 51), HD1 (n = 37) and HD2 (n = 12) on two-way MLR or PBMC proliferation in the presence of PHA were evaluated by mixing BMSCs and mixed PBMCs for two-way MLR, or by PBMCs from a third healthy volunteer in the presence of PHA for PBMC proliferation assay at five BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1, respectively.\n[SUBTITLE] Two-way mixed PBMC reaction [SUBSECTION] The differences of absorbance between 0 days and 5 days were not statistically significant for the allogeneic PBMCs from a healthy volunteer (P = 0.351) and the PBMCs from another unrelated healthy volunteer (P = 0.418) (Figure 3A1). For the mixed PBMCs, however, the absorbance at 5 days was significantly higher than the value at 0 days (P < 0.001, Figure 3A1). As shown in Figure 3A2,A3, there was a statistically significant reduction in suppressive potential (% inhibition) of BMSCs from ASp on two-way MLR at all five ratios, compared with the percentage inhibition of HD-BMSCs (P < 0.001, Figure 3A2,A3).\nReduced suppressive potential of bone marrow-derived mensenchymal stem cells of patients with ankylosing spondylitis. (A1) Absorbance of mixed peripheral blood mononuclear cells (PBMCs) at 5 days (0.194 ± 0.038) was significantly higher than the value at 0 days (0.104 ± 0.023) (*P < 0.001), showing the significant mixed PBMC reaction. (A2), (A3) Compared with healthy donor (HD)-bone marrow-derived mensenchymal stem cells (BMSCs), the decreased percentage inhibition of ankylosing spondylitis (AS)-BMSCs on two-way mixed peripheral blood mononuclear cell reaction (MLR) at different ratios showed that suppressive potentials of AS-BMSCs were reduced (% inhibition reduced, *P < 0.001). (B1) PBMCs derived from a healthy volunteer could proliferate significantly the presence of phytohemagglutinin (PHA) in vitro (P < 0.001). (B2), (B3) Percentage inhibition of AS-BMSCs on PBMC proliferation induced by PHA was significantly lower than the values of HD-BMSCs at varied BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1 (% inhibition reduced, *P < 0.001). Data expressed as mean ± standard deviation of triplicates of three separate experiments. (A2), (B2) and (A3), (B3) were performed by MTT assay and 3H-TdR assay respectively. There were no statistically significant differences of suppressive potentials between (A2), (A3) HLA-B27-negative healthy donor (HD1)-BMSCs and (B2), (B3) HLA-B27-positive healthy donor (HD2)-BMSCs. OD, optical density.\nThe differences of absorbance between 0 days and 5 days were not statistically significant for the allogeneic PBMCs from a healthy volunteer (P = 0.351) and the PBMCs from another unrelated healthy volunteer (P = 0.418) (Figure 3A1). For the mixed PBMCs, however, the absorbance at 5 days was significantly higher than the value at 0 days (P < 0.001, Figure 3A1). As shown in Figure 3A2,A3, there was a statistically significant reduction in suppressive potential (% inhibition) of BMSCs from ASp on two-way MLR at all five ratios, compared with the percentage inhibition of HD-BMSCs (P < 0.001, Figure 3A2,A3).\nReduced suppressive potential of bone marrow-derived mensenchymal stem cells of patients with ankylosing spondylitis. (A1) Absorbance of mixed peripheral blood mononuclear cells (PBMCs) at 5 days (0.194 ± 0.038) was significantly higher than the value at 0 days (0.104 ± 0.023) (*P < 0.001), showing the significant mixed PBMC reaction. 
(A2), (A3) Compared with healthy donor (HD)-bone marrow-derived mensenchymal stem cells (BMSCs), the decreased percentage inhibition of ankylosing spondylitis (AS)-BMSCs on two-way mixed peripheral blood mononuclear cell reaction (MLR) at different ratios showed that suppressive potentials of AS-BMSCs were reduced (% inhibition reduced, *P < 0.001). (B1) PBMCs derived from a healthy volunteer could proliferate significantly the presence of phytohemagglutinin (PHA) in vitro (P < 0.001). (B2), (B3) Percentage inhibition of AS-BMSCs on PBMC proliferation induced by PHA was significantly lower than the values of HD-BMSCs at varied BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1 (% inhibition reduced, *P < 0.001). Data expressed as mean ± standard deviation of triplicates of three separate experiments. (A2), (B2) and (A3), (B3) were performed by MTT assay and 3H-TdR assay respectively. There were no statistically significant differences of suppressive potentials between (A2), (A3) HLA-B27-negative healthy donor (HD1)-BMSCs and (B2), (B3) HLA-B27-positive healthy donor (HD2)-BMSCs. OD, optical density.\n[SUBTITLE] Allogeneic PBMC proliferation assay [SUBSECTION] Similarly, when PBMC proliferation was elicited by means of PHA, the addition of BMSCs from ASp also produced a statistically significant decreased inhibitory effect on PBMC proliferation (Figure 3B2,B3; P < 0.001, Student's t test for independent data), ranging from a BMSC:PBMC ratio of 1:20 to 1:1. Additionally, the differences of absorbance between 0 days and 5 days without PHA were not significant (P = 0.223), while the value for 5 days with PHA was significantly higher than the value at 0 days (P < 0.001, Figure 3B1).\nFurthermore, in either the MLR (Figure 3A3) or the PBMC proliferation assay stimulated with PHA (Figure 3B3), the 3H-TdR assay data suggested a significant relationship between dose and suppression of immunoreactivity of BMSCs for ASp, HD1 and HD2. The MTT assay also presented this phenomenon basically, but not clearly.\nSimilarly, when PBMC proliferation was elicited by means of PHA, the addition of BMSCs from ASp also produced a statistically significant decreased inhibitory effect on PBMC proliferation (Figure 3B2,B3; P < 0.001, Student's t test for independent data), ranging from a BMSC:PBMC ratio of 1:20 to 1:1. Additionally, the differences of absorbance between 0 days and 5 days without PHA were not significant (P = 0.223), while the value for 5 days with PHA was significantly higher than the value at 0 days (P < 0.001, Figure 3B1).\nFurthermore, in either the MLR (Figure 3A3) or the PBMC proliferation assay stimulated with PHA (Figure 3B3), the 3H-TdR assay data suggested a significant relationship between dose and suppression of immunoreactivity of BMSCs for ASp, HD1 and HD2. The MTT assay also presented this phenomenon basically, but not clearly.", "The differences of absorbance between 0 days and 5 days were not statistically significant for the allogeneic PBMCs from a healthy volunteer (P = 0.351) and the PBMCs from another unrelated healthy volunteer (P = 0.418) (Figure 3A1). For the mixed PBMCs, however, the absorbance at 5 days was significantly higher than the value at 0 days (P < 0.001, Figure 3A1). 
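The suppression readouts above are given as percentage inhibition at five BMSC:PBMC ratios (1:20 to 1:1), with the 3H-TdR data suggesting dose-dependent suppression. The exact formula is not restated in this excerpt, so the sketch below is an illustration only: it assumes the common definition — % inhibition = (1 − signal with BMSCs / signal without BMSCs) × 100, applied to OD490 (MTT) or CPM (3H-TdR) readings — and checks whether suppression rises monotonically with BMSC dose. Names and numbers are hypothetical, not the authors' code or data.

```python
# Minimal sketch (not the authors' code): percentage inhibition of PBMC
# proliferation by BMSCs at graded BMSC:PBMC ratios. Assumes the common
# definition %inhibition = (1 - signal_with_BMSC / signal_without_BMSC) * 100,
# where "signal" is an OD490 reading (MTT) or CPM (3H-TdR). Values are invented.
from statistics import mean

RATIOS = ["1:20", "1:10", "1:5", "1:2", "1:1"]  # increasing BMSC dose

def percent_inhibition(signal_with_bmsc, signal_without_bmsc):
    """Suppression of proliferation relative to the BMSC-free control."""
    return (1.0 - signal_with_bmsc / signal_without_bmsc) * 100.0

def inhibition_profile(replicates_by_ratio, control_replicates):
    """Mean percentage inhibition at each BMSC:PBMC ratio (triplicate readings)."""
    control = mean(control_replicates)
    return {ratio: percent_inhibition(mean(reads), control)
            for ratio, reads in replicates_by_ratio.items()}

def is_dose_dependent(profile):
    """True if suppression never falls as the BMSC dose rises from 1:20 to 1:1."""
    values = [profile[r] for r in RATIOS]
    return all(a <= b for a, b in zip(values, values[1:]))

if __name__ == "__main__":
    # Hypothetical OD490 triplicates for one donor's BMSCs at each ratio.
    readings = {"1:20": [0.180, 0.176, 0.182], "1:10": [0.168, 0.170, 0.165],
                "1:5": [0.150, 0.148, 0.152], "1:2": [0.128, 0.131, 0.126],
                "1:1": [0.110, 0.108, 0.112]}
    profile = inhibition_profile(readings, control_replicates=[0.194, 0.190, 0.198])
    print(profile)
    print("dose-dependent suppression:", is_dose_dependent(profile))
```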
As shown in Figure 3A2,A3, there was a statistically significant reduction in suppressive potential (% inhibition) of BMSCs from ASp on two-way MLR at all five ratios, compared with the percentage inhibition of HD-BMSCs (P < 0.001, Figure 3A2,A3).\nReduced suppressive potential of bone marrow-derived mesenchymal stem cells of patients with ankylosing spondylitis. (A1) Absorbance of mixed peripheral blood mononuclear cells (PBMCs) at 5 days (0.194 ± 0.038) was significantly higher than the value at 0 days (0.104 ± 0.023) (*P < 0.001), showing the significant mixed PBMC reaction. (A2), (A3) Compared with healthy donor (HD)-bone marrow-derived mesenchymal stem cells (BMSCs), the decreased percentage inhibition of ankylosing spondylitis (AS)-BMSCs on the two-way mixed peripheral blood mononuclear cell reaction (MLR) at different ratios showed that the suppressive potential of AS-BMSCs was reduced (% inhibition reduced, *P < 0.001). (B1) PBMCs derived from a healthy volunteer proliferated significantly in the presence of phytohemagglutinin (PHA) in vitro (P < 0.001). (B2), (B3) Percentage inhibition of AS-BMSCs on PBMC proliferation induced by PHA was significantly lower than the values of HD-BMSCs at varied BMSC:PBMC ratios of 1:20, 1:10, 1:5, 1:2 and 1:1 (% inhibition reduced, *P < 0.001). Data expressed as mean ± standard deviation of triplicates of three separate experiments. (A2), (B2) and (A3), (B3) were performed by MTT assay and 3H-TdR assay, respectively. There were no statistically significant differences in suppressive potential between (A2), (A3) HLA-B27-negative healthy donor (HD1)-BMSCs and (B2), (B3) HLA-B27-positive healthy donor (HD2)-BMSCs. OD, optical density.", "Similarly, when PBMC proliferation was elicited by means of PHA, the addition of BMSCs from ASp also produced a significantly weaker inhibitory effect on PBMC proliferation (Figure 3B2,B3; P < 0.001, Student's t test for independent data), ranging from a BMSC:PBMC ratio of 1:20 to 1:1. Additionally, the differences in absorbance between 0 days and 5 days without PHA were not significant (P = 0.223), while the value at 5 days with PHA was significantly higher than the value at 0 days (P < 0.001, Figure 3B1).\nFurthermore, in both the MLR (Figure 3A3) and the PBMC proliferation assay stimulated with PHA (Figure 3B3), the 3H-TdR data indicated a significant dose-dependent suppression of immunoreactivity by BMSCs for ASp, HD1 and HD2. The MTT assay showed a similar trend, although less clearly.", "Recent studies have independently revealed an enhanced Th17 response and a weakened Treg response in some autoimmune diseases [38,39], so we also examined the frequencies of CCR4+CCR6+ Th and Treg cells in PBMCs of ASp and HDs (Figure 4). The PBMCs from ASp and HDs were examined for the subset populations using flow cytometry, defined as the percentages of CCR4+CCR6+ Th cells (CCR4/CCR6 double-positive) [32] and Treg cells (CD4/CD25/Fox-P3 triple-positive) relative to total CD4-positive Th cells (CCR4+CCR6+ Th/Th, Treg/Th), CD3-positive T cells (CCR4+CCR6+ Th/T, Treg/T), lymphocytes (CCR4+CCR6+ Th/L, Treg/L), and peripheral blood mononuclear cells (CCR4+CCR6+ Th/PBMCs, Treg/PBMCs), respectively. The proportions of Fox-P3-positive cells among CD4/CD25 double-positive cells (Fox-P3+/CD4+CD25+) and among PBMCs (Fox-P3+/PBMCs) were also tested.
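The subset measures just defined are ratios of gated event counts over several denominators (total CD4+ Th cells, CD3+ T cells, lymphocytes, and PBMCs), together with the Fox-P3+ fractions and, later in the report, the CCR4+CCR6+ Th/Treg ratio. As a rough sketch of how such a report could be tabulated from gate counts — the field names and event numbers below are invented and do not reflect the authors' gating strategy:

```python
# Minimal sketch (assumed layout, not the authors' analysis code): tabulate the
# subset percentages described above from gated flow-cytometry event counts.
from dataclasses import dataclass

@dataclass
class GateCounts:
    pbmc: int            # total PBMC events acquired
    lymphocytes: int     # lymphocyte gate
    t_cells: int         # CD3+ events
    th_cells: int        # CD4+ T-helper events
    ccr4_ccr6_th: int    # CCR4+CCR6+ events within the CD4+ gate
    treg: int            # CD4+CD25+Fox-P3+ events
    cd4_cd25: int        # CD4+CD25+ events
    foxp3: int           # all Fox-P3+ events

def pct(numerator: int, denominator: int) -> float:
    return 100.0 * numerator / denominator if denominator else float("nan")

def subset_report(g: GateCounts) -> dict:
    """Percentages over each denominator used in the text, plus the Th/Treg ratio."""
    return {
        "CCR4+CCR6+ Th/Th": pct(g.ccr4_ccr6_th, g.th_cells),
        "CCR4+CCR6+ Th/T": pct(g.ccr4_ccr6_th, g.t_cells),
        "CCR4+CCR6+ Th/L": pct(g.ccr4_ccr6_th, g.lymphocytes),
        "CCR4+CCR6+ Th/PBMCs": pct(g.ccr4_ccr6_th, g.pbmc),
        "Treg/Th": pct(g.treg, g.th_cells),
        "Treg/T": pct(g.treg, g.t_cells),
        "Treg/L": pct(g.treg, g.lymphocytes),
        "Treg/PBMCs": pct(g.treg, g.pbmc),
        "Fox-P3+/CD4+CD25+": pct(g.foxp3, g.cd4_cd25),
        "Fox-P3+/PBMCs": pct(g.foxp3, g.pbmc),
        "CCR4+CCR6+ Th/Treg ratio": (g.ccr4_ccr6_th / g.treg) if g.treg else float("nan"),
    }

if __name__ == "__main__":
    # Invented counts for a single hypothetical sample.
    sample = GateCounts(pbmc=100_000, lymphocytes=70_000, t_cells=45_000,
                        th_cells=28_000, ccr4_ccr6_th=1_900, treg=650,
                        cd4_cd25=2_400, foxp3=780)
    for name, value in subset_report(sample).items():
        print(f"{name}: {value:.2f}")
```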
Compared with healthy donors (HD1 and HD2), the CCR4+CCR6+ Th population of ASp was significantly increased (P < 0.001, Table 3 and Figure 5), whereas Treg cells and Fox-P3-positive cells were found to be significantly decreased (P < 0.001, Student's t test for independent data, Table 3 and Figure 5). There were no significant differences between HD1 and HD2.\nRepresentative plots of CCR4+CCR6+ T-helper and regulatory T cells. Representative plots of peripheral circulating populations of (A1) to (A3) CCR4+CCR6+ T-helper (Th) cells (green) and (B1) to (B3) regulatory T (Treg) cells (red) in peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp) (A1, B1), HLA-B27-negative healthy donors (HD1) (A2, B2) and HLA-B27-positive healthy donors (HD2) (A3, B3). FOX-P3, forkhead box P3; FSH, forward scatter-height; SSH, side scatter-height.\nPercentages of CCR4+/6+ Th and Treg cells in appropriate cell subsets\nData presented as the percentage mean ± standard deviation. The differences for all percentages between patients with ankylosing spondylitis (ASp) and either HLA-B27-negative healthy donors (HD1) or HLA-B27-positive healthy donors (HD2) were significant (P < 0.001, according to a two-tailed significant level of 0.05). There were no significant differences between HD1 and HD2 (P > 0.05). Th, CD4+ T-helper cells; T, T lymphocytes; L, lymphocytes; Treg, forkhead box P3-positive regulatory T cells.\nPercentages of CCR4+CCR6+ T-helper, regulatory T and Fox-P3-positive cells in peripheral blood mononuclear cells. Statistical multiple bar graphs showing percentages of increased CCR4+CCR6+ T-helper (Th) cells, reduced regulatory T (Treg) cells and reduced forkhead box P3 (Fox-P3)-positive cells in the peripheral blood mononuclear cells (PBMCs) of patients with ankylosing spondylitis (ASp), compared with values for healthy donors (HLA-B27-negative healthy donors (HD1) and HLA-B27-positive healthy donors (HD2)). *P < 0.001, Student's t test for independent data.", "We performed the direct contact co-culture of BMSCs and PBMCs to explore whether the reduced immunomodulation potential of AS-BMSCs altered the balance of CCR4+CCR6+ Th/Treg cells. The PBMCs were collected to be assayed by flow cytometry for the CCR4+CCR6+ Th and Treg cells after co-culture with BMSCs of ASp and HDs (HD1 and HD2) for 3 days. The percentages of Treg cells (0.63 ± 0.23%) and Fox-P3-positive cells (0.74 ± 0.11%) in PBMCs after co-culture with AS-BMSCs for 3 days reduced significantly, whereas the percentages of CCR4+CCR6+ Th cells (1.87 ± 0.29%) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly, compared with these values of groups, including 3-day HD1, 3-day HD2, 3-day control, and 0 days (P < 0.001, Figure 6). Impressively, the ratio of CCR4+CCR6+ Th cells to Treg cells (CCR4+CCR6+ Th/Treg) in PBMCs after co-culture with AS-BMSCs for 3 days increased significantly (P < 0.001, Figure 6).\nRatios of Fox-P3/PBMCs, CCR4+CCR6+ Th/PBMCs, Treg/PBMCs and CCR4+CCR6+ Th/Treg. Peripheral blood mononuclear cells (PBMCs) were collected to be assayed by flow cytometry for the CCR4+CCR6+ T-helper (Th) cells, regulatory T (Treg) cells and forkhead box P3 (Fox-P3)-positive cells after co-culture with or without (3 day-control) ankylosing spondylitis (AS)-bone marrow-derived mensenchymal stem cells (BMSCs) (3 day-ASp), HLA-B27-negative healthy donors (HD1)-BMSCs (3 day-HD1) or HLA-B27-positive healthy donors (HD2)-BMSCs (3 day-HD2) for 72 hours. 
Statistical multiple bar graphs show that AS-BMSCs induced the ratio of CCR4+CCR6+ Th/Treg imbalance via reducing Treg/PBMCs but increasing CCR4+CCR6+ Th/PBMCs; it also produced a significant reduction of Fox-P3-positive cells in PBMCs (*P < 0.001, Student's t test for independent data).", "When examining data from all subjects tested, we observed positive correlations between the percentage inhibition of BMSCs (MLR) and the percentage inhibition of BMSCs (PHA) for all of the 51 ASp, 37 HD1 and 12 HD2. Interestingly, however, for all of the ASp (Figure 7A), HD1 (Figure 7B) and HD2 (Figure 7C) there were significantly negative correlations between the ratios of CCR4+CCR6+ Th cells to Treg cells in the peripheral blood and either percentage inhibition of BMSCs (MLR) or percentage inhibition of BMSCs (PHA) at all five ratios (P < 0.01, respectively).\nCorrelation analysis between CCR4+CCR6+ Th/Treg ratio and immunomodulation potential of bone marrow-derived mensenchymal stem cells. (A) All patients with ankylosing spondylitis (ASp), (B) HLA-B27-negative healthy donors (HD1) and (C) HLA-B27-positive healthy donors (HD2) presented significantly negative correlations between the ratio of CCR4+CCR6+ T-helper (Th) cells to regulatory T (Treg) cells in the peripheral blood mononuclear cells (PBMCs) and percentage inhibition either from two-way mixed PBMC reaction (MLR) (upper panel) or PBMC proliferation induced by phytohemagglutinin (PHA) (lower panel) at a bone marrow-derived mensenchymal stem cell (BMSC):PBMC ratio of 1:10 (P < 0.01, respectively).", "In the present study, we found that AS-BMSCs showed normal proliferation, cell viability, surface markers and multiple differentiations characteristics, but significantly reduced immunomodulation potential; also, the frequencies of Treg and Fox-P3+ cells in AS-PBMCs decreased, but CCR4+CCR6+ Th cells increased. Moreover, the AS-BMSCs induced the ratio of CCR4+CCR6+ Th/Treg cell imbalance when co-cultured with PBMCs. Additionally, no differences were found between HD1 and HD2. Impressively, the immunomodulation potential of BMSCs has negative correlation with the ratios of CCR4+CCR6+ Th to Treg cells in peripheral blood.\nCharacteristic symptoms of AS are spinal stiffness, ankylosis and syndesmophytes [1], which are explained by spinal inflammation, structural damage, or both [40]. As the ankylosis of the spine or even spinal stiffness was probably initiated by the heterotopic ossification of osteoblasts, and most of these osteoblasts derived from BMSCs [41,42], and, simultaneously, there were some abnormalities with the biological properties, including the multiple differentiation potentials in some autoimmune disorders, such as severe aplastic anemia [14], we performed research to examine the biology properties of AS-BMSCs. We did not, however, detect any abnormality about the biological characteristics of AS-BMSCs in vitro, including the proliferation ability, cell viability, morphological features, differentiation potentials and surface markers. Especially, the activity of osteogenic differentiation and mineralization capacity are totally normal. In addition, Braun and colleagues reported that immunohistological studies on sacroiliac joint biopsies have shown cellular infiltrates, including T cells and macrophages, and that TNFα is overexpressed in sacroiliac joints [43]. 
These events indicated that the endogenous osteogenic differentiation potential of BMSCs may be not the real murderer, which was thought to induce MSCs to produce heterotopic ossification; the appropriate cell activity and cytokine function [44] existing in the internal environments, which maintain BMSCs in vivo, may play a critical role in the process of BMSC heterotopic ossification.\nThere appears to be a reciprocal relationship between the development of Treg cells and Th17 cells. Recent studies have independently revealed enhanced Th17 response and weakened Treg response in some autoimmune diseases [38,39], indicating an important role for Th17/Treg imbalance in the pathogenesis of autoimmunity. The present study revealed that the Th17/Treg imbalance existed in the peripheral blood of ASp, suggesting its potential role in the breakdown of immune self-tolerance in AS. Moreover, the physiological frequency of Fox-P3+ and Treg cells can suppress autoimmune disorders, but the reduction or even depletion of Fox-P3+ cells could lead to induction of autoimmunity by specific ablation of Treg cells in genetically targeted mice [45], these results indicated that the reduction of Treg cells probably enhanced the pathological process of AS. The balance between Treg and Th17 cells is dependent on the localized cytokine milieu including levels of IL-2, IL-6 [46] and transforming growth factor beta (TGFβ) [47], and the differentiation of both Treg and Th17 cells required TGFβ, but depends on opposing activities: at low concentrations, TGFβ synergizes with IL-6 and IL-21 to promote IL-23R expression, favoring Th17 cell differentiation, while high concentrations of TGFβ repress IL-23R expression and favor Fox-P3+ Treg cells [48].\nRivino and colleagues reported that the combination of CCR4 and CCR6 does not uniquely define Th17 cells; it also demarcated an IL-10-producing population of T cells [49]. There are several reasons why the Th17 cells were defined with the combination of CCR4 and CCR6 in this study. At first, CCR4 had been shown to mark skin-homing T cells [50], expression of which has been associated with the ability of cells to traffic into peripheral tissues [51]; in addition, the percentage of CD4+/CCR4+ T cells showed significant positive correlations with the Bath Ankylosing Spondylitis Disease Activity Index in AS [52]. Furthermore, the findings of Napolitani and colleagues provide a functional link between CCR6 and IL-17 [32], which have been independently associated with tissue pathology. CCR6 has been shown to be involved in the recruitment of pathogenic T cells in rheumatoid arthritis [53], experimental autoimmune encephalitis [54] and psoriasis [55,56], and Th17 cells are increasingly recognized as essential mediators of those diseases [25,57-61]. Besides, just like CCR4, CCR6 also mediates T-cell homing to skin and mucosal tissues [62], and its expression facilitates the recruitment of both dendritic cells and T cells in different diseases [60]. These findings illustrated that the Th17 cells CCR4+CCR6+ were the most active and aggressive pathogenic ones. Second, only a fraction of IL-10-producing CCR6+ T cells co-expresses CCR4 [49]. Finally, Hill Gaston and colleagues reported increased frequency of IL-17-producing T cells in AS [33]; these findings were consistent with our study's results. 
Moreover, they did not detect any differences in the frequency of IL-10-positive CD4+ T cells between patients with arthritis and control subjects, and none of the IL-17-positive cells co-expressed IL-10. This means that the increased frequency of IL-17-producing T cells in AS was not compensated for by an increased frequency of IL-10-producing cells.\nIn the present study, we failed to find any significant differences between HLA-B27-negative and HLA-B27-positive healthy donors, which were essentially the same in all respects we had studied. These findings indicated that HLA-B27-positivity may be not responsible for those abnormalities of ASp. Inflammation is one important link within the pathogenesis of AS [1], while few studies reported whether inflammation be responsible for the altered properties of AS. Rheumatoid arthritis is a typical inflammatory disease, One study reported that the number of IL-17+ Th cells and CD4+CD25+ Treg cells in the peripheral blood of patients with rheumatoid arthritis is elevated compared with that of healthy individuals [33,63], whereas other studies suggest no differences between these two groups [64,65]. Two groups have reported that peripheral blood Treg cells isolated from patients with rheumatoid arthritis and from control individuals showed no difference in their ability to suppress effector T-cell proliferation [38]. Another group, however, reported a striking defect in the capacity of Treg cells from patients with rheumatoid arthritis to suppress effector T-cell proliferation [66]. These divergent results could reflect differences in the populations of patients, the methods used to purify Treg and Th17 cells, or how the suppression assays were performed.\nRecently, a study indicated that an alteration in the balance of Th1, Th2, Th17, and Treg cells contributes to the development of experimental autoimmune myastheia gravis, and that the administration of BMSCs can ameliorate the severity and, in a process dependent on the secretion of TGF-β, presenting to inhibit the proliferation of antigen-specific T cells, normalize the distribution of the four T-helper subsets and their corresponding cytokines [67]. In vitro experiments have shown that human MSCs can induce the generation of CD4+ T-cell subsets displaying a regulatory phenotype (Treg) [8,68]. These results demonstrated that administration of BMSCs from healthy donors to ASp may be a novelty therapeutic strategy for AS.\nThe imbalance of Th17/Treg cell subsets may contribute to the inflammatory responses [69] and heterotopic ossification of MSCs [44] in AS by secreting the proinflammatory T-cell cytokines. There is therefore a potential mechanism of AS that reduced the immunomodulation potential of BMSC induced CCR4+CCR6+ Th/Treg imbalance and led to excessive activation of T cells, and then to the increased proinflammatory CCR4+CCR6+ Th cells and reduced Treg cells. Fox-P3+ cells compounded by the synergistic actions of activated T-cell cytokines drive the local BMSCs into both osteoblasts and osteoclasts at localized sites of inflammation, and then the induced BMSCs result in syndesmophytes, fusion of the sacroiliac joint and even spinal stiffness via heterotopic ossification. Finally, AS occurs.\nIncreasing evidence suggests that MSCs might be a suitable cell population for immunosuppressive therapy in solid organ transplantation and may be strong candidates for cell therapy against human autoimmune diseases [70-72]. 
The advantages of MSCs are obvious: they can be easily harvested from a multitude of tissues, can be cultured to nearly unlimited extent, and have very promising immunomodulation effects [73]. The immunoregulatory function of BMSCs thus appears to represent a promising strategy for cytotherapy of autoimmune diseases, such as AS, which is central to human health and disease, and provides novel insights into new therapeutic interventions. The retinoic acid receptor-alpha [74] could be another considered candidate for the treatment of autoimmunity, because signaling through a specific nuclear retinoic acid receptor can favor the decision to adopt the Treg cell fate at the expense of the Th17 cell fate. The further elucidation of the precise mechanism may aid in the identification of targets for future immunomodulatory therapy of AS.", "The reduced immunomodulation potential of BMSCs may be an initiating factor for AS pathogenesis, and may play a novelty role in triggering the onset of AS via inducing the CCR4+CCR6+ Th/Treg cell subset imbalance. BMSCs may therefore be an interesting therapeutic target in AS, suggesting the use of BMSCs from HDs in the disease.", "ALP: alkaline phosphatase; AS: ankylosing spondylitis; AS-BMSC: bone marrow-derived mensenchymal stem cell of patient with AS; ASp: patients with ankylosing spondylitis; AS-PBMC: peripheral blood mononuclear cell of patient with AS; BMSC: bone marrow-derived mensenchymal stem cell; CPM: counts per minute; DMEM: Dulbecco's modified Eagle's medium; FBS: fetal bovine serum; Fox-P3: forkhead box P3; 3H-TdR: 3H-thymidine; HD: healthy donor; HD1: HLA-B27-negative healthy donors; HD2: HLA-B27-positive healthy donors; HD-BMSC: bone marrow-derived mensenchymal stem cell of healthy donor; HD-PBMC: peripheral blood mononuclear cell of healthy donor; IL: interleukin; MLR: mixed peripheral blood mononuclear cell reaction; MSC: mensenchymal stromal cell; MTT: methyl thiazolyl tetrazolium; OD: optical density; PBMC: peripheral blood mononuclear cell; PBS: phosphate-buffered saline; PCR: polymerase chain reaction; PHA: phytohemagglutinin; QL: the lower point of interquartile range; QU: the upper point of interquartile range; SD: standard deviation; TGFβ: transforming growth factor beta; Th: T-helper; TNF: tumor necrosis factor; Treg: regulatory T.", "The authors declare that they have no competing interests.", "RY, XL, YM and YT carried out the experimental work and the data collection and interpretation. LH, KC and JY participated in the design and coordination of experimental work, and the acquisition of data. MR and YW participated in the study design, data collection, analysis of data and preparation of the manuscript. PW and HS carried out the study design, the analysis and interpretation of data and drafted the manuscript. All authors read and approved the final manuscript." ]
[ null, "materials|methods", null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, null, null, "discussion", "conclusions", null, null, null ]
[]
Using a computerized provider order entry system to meet the unique prescribing needs of children: description of an advanced dosing model.
21338518
It is well known that the information requirements necessary to safely treat children with therapeutic medications cannot be met with the same approaches used in adults. Over a 1-year period, Duke University Hospital engaged in the challenging task of enhancing an established computerized provider order entry (CPOE) system to address the unique medication dosing needs of pediatric patients.
BACKGROUND
An advanced dosing model (ADM) was designed to interact with our existing CPOE application to provide decision support enabling complex pediatric dose calculations based on chronological age, gestational age, weight, care area in the hospital, indication, and level of renal impairment. Given that weight is a critical component of medication dosing that may change over time, alerting logic was added to guard against erroneous entry or outdated weight information.
METHODS
Pediatric CPOE was deployed in a staggered fashion across 6 care areas over a 14-month period. Safeguards to prevent miskeyed values became important in allowing providers the flexibility to override the ADM logic if desired. Methods to guard against over- and under-dosing were added. The modular nature of our model allows us to easily add new dosing scenarios for specialized populations as the pediatric population and formulary change over time.
RESULTS
The medical needs of pediatric patients vary greatly from those of adults, and the information systems that support those needs require tailored approaches to design and implementation. When a single CPOE system is used for both adults and pediatrics, safeguards such as redirection and suppression must be used to protect children from inappropriate adult medication dosing content. Unlike other pediatric dosing systems, our model provides active dosing assistance and dosing process management, not just static dosing advice.
CONCLUSIONS
[ "Child", "Drug Therapy, Computer-Assisted", "Hospitals, University", "Humans", "Medical Order Entry Systems", "Medication Errors", "Medication Systems, Hospital", "Pharmaceutical Preparations" ]
3048480
null
null
Methods
[SUBTITLE] Setting and implementation period [SUBSECTION] Duke Children's Hospital (DCH) is a tertiary care facility within DUH that comprises 7 inpatient pediatric service locations: 2 general care medical wards, a pediatric intensive care unit (PICU), a neonatal intensive care unit (NICU), a bone marrow transplant (BMT) unit, a pediatric cardiac ICU, and a transitional care (i.e., step-down) unit. DCH averages 7000 pediatric admissions per year across 187 inpatient beds, approximately 50% of which are located in critical care wards. DCH employs approximately 197 attending physicians and 50 pediatric residents across 20 clinical service areas. Table 1 details release of the CPOE application, which included the ADM functionality described in this report, in order of deployment over a 14-month period across pediatric units (Table 1). Deployment of pediatric CPOE at Duke Children's Hospital *Number of beds in service as of 1/29/2010. †New pediatric location introduced to DCH after full CPOE deployment on 3/10/2008. Duke Children's Hospital (DCH) is a tertiary care facility within DUH that comprises 7 inpatient pediatric service locations: 2 general care medical wards, a pediatric intensive care unit (PICU), a neonatal intensive care unit (NICU), a bone marrow transplant (BMT) unit, a pediatric cardiac ICU, and a transitional care (i.e., step-down) unit. DCH averages 7000 pediatric admissions per year across 187 inpatient beds, approximately 50% of which are located in critical care wards. DCH employs approximately 197 attending physicians and 50 pediatric residents across 20 clinical service areas. Table 1 details release of the CPOE application, which included the ADM functionality described in this report, in order of deployment over a 14-month period across pediatric units (Table 1). Deployment of pediatric CPOE at Duke Children's Hospital *Number of beds in service as of 1/29/2010. †New pediatric location introduced to DCH after full CPOE deployment on 3/10/2008. [SUBTITLE] CPOE architecture at Duke University Hospital [SUBSECTION] At DUH, the Horizon Expert Orders (HEO) CPOE system (McKesson Corporation, San Francisco, CA) is a comprehensive order management system that spans medical disciplines and offers real-time decision support and guidance for order entry. This product was deployed on all DUH adult floors by April, 2006. Providers interact with a Java-based desktop client (Java version 1.42.09) that queries an Oracle 10 database (Oracle Corporation, Redwood Shores, CA) holding both the clinical content tables (e.g., orderables, order sets, and clinical decision support information) and patient information tables (e.g., patient identity, care area, diagnoses, existing orders). New provider choices made through the CPOE client are saved to the patient information table and executed as HL7 messages to other hospital information technology (IT) applications that fulfill the orders. At DUH, the Horizon Expert Orders (HEO) CPOE system (McKesson Corporation, San Francisco, CA) is a comprehensive order management system that spans medical disciplines and offers real-time decision support and guidance for order entry. This product was deployed on all DUH adult floors by April, 2006. 
Providers interact with a Java-based desktop client (Java version 1.42.09) that queries an Oracle 10 database (Oracle Corporation, Redwood Shores, CA) holding both the clinical content tables (e.g., orderables, order sets, and clinical decision support information) and patient information tables (e.g., patient identity, care area, diagnoses, existing orders). New provider choices made through the CPOE client are saved to the patient information table and executed as HL7 messages to other hospital information technology (IT) applications that fulfill the orders. [SUBTITLE] Medication challenges at Duke Children's Hospital [SUBSECTION] DCH's pre-implementation medication vulnerabilities were similar to those described by other tertiary care institutions and have been reported previously in an analysis of both voluntarily reported events and ADEs detected by computerized surveillance [24]. Briefly, DCH sees approximately 18.0 medication-related safety incidents per 1000 patient days as detected by voluntary reporting. Of these, approximately 10.9% result in some level of patient harm. Computerized surveillance, a complementary incident detection method that behaves as an automated trigger tool, found 1.6 ADEs per 1000 patient days. As expected, event density is higher in critical care units than in general care areas. The most common problem areas are failures in the medication use process such as incorrect drug dose or rate followed by drug omissions. Antibiotics in particular were a drug class identified for enhanced surveillance targeting. We reported that the safety profile of pediatrics was distinct from that of adults, underscoring the importance of pediatric-specific clinical content for dosing guidance [25]. DCH's pre-implementation medication vulnerabilities were similar to those described by other tertiary care institutions and have been reported previously in an analysis of both voluntarily reported events and ADEs detected by computerized surveillance [24]. Briefly, DCH sees approximately 18.0 medication-related safety incidents per 1000 patient days as detected by voluntary reporting. Of these, approximately 10.9% result in some level of patient harm. Computerized surveillance, a complementary incident detection method that behaves as an automated trigger tool, found 1.6 ADEs per 1000 patient days. As expected, event density is higher in critical care units than in general care areas. The most common problem areas are failures in the medication use process such as incorrect drug dose or rate followed by drug omissions. Antibiotics in particular were a drug class identified for enhanced surveillance targeting. We reported that the safety profile of pediatrics was distinct from that of adults, underscoring the importance of pediatric-specific clinical content for dosing guidance [25]. [SUBTITLE] Needs assessment [SUBSECTION] Because DCH serves challenging, critically ill pediatric patients, the deployment of CPOE in this environment was deferred until the end of the adult CPOE implementation plan. A needs assessment was performed by a multidisciplinary clinical advisory workgroup of physicians, nurses, pharmacists, and safety directors to define the features and work flow requirements for a pediatric CPOE product deployed to DCH. These individuals made broad decisions that would affect clinician work flow, and their input was critical to ensure operational acceptance. 
It was immediately recognized that a pediatric CPOE application would require high flexibility to approach the wide variety of nuanced, sometimes novel, pediatric therapies in place at DCH. Given that the existing CPOE system provides adult dosing guidance via a specific set of clinical content tables, it was recognized that the needs of pediatric dosing could be satisfied through adding an additional set of tables for that population while maintaining the existing adult-based infrastructure. As a result, the most efficient plan was to partner with McKesson to enhance the adult product for pediatric usage instead of implementing a new, vended solution. McKesson and DUH agreed to a joint development project to incorporate clinical content from the pediatric WizOrder tool [26], acquired by McKesson from Vanderbilt University Medical Center, into the HEO commercial product. The clinical advisory committee continued to meet weekly for 6 months prior to system release to discuss clinical issues, understand technological implications of the application, and act as liaisons to technical developers at McKesson for all areas of pediatric CPOE design. Because DCH serves challenging, critically ill pediatric patients, the deployment of CPOE in this environment was deferred until the end of the adult CPOE implementation plan. A needs assessment was performed by a multidisciplinary clinical advisory workgroup of physicians, nurses, pharmacists, and safety directors to define the features and work flow requirements for a pediatric CPOE product deployed to DCH. These individuals made broad decisions that would affect clinician work flow, and their input was critical to ensure operational acceptance. It was immediately recognized that a pediatric CPOE application would require high flexibility to approach the wide variety of nuanced, sometimes novel, pediatric therapies in place at DCH. Given that the existing CPOE system provides adult dosing guidance via a specific set of clinical content tables, it was recognized that the needs of pediatric dosing could be satisfied through adding an additional set of tables for that population while maintaining the existing adult-based infrastructure. As a result, the most efficient plan was to partner with McKesson to enhance the adult product for pediatric usage instead of implementing a new, vended solution. McKesson and DUH agreed to a joint development project to incorporate clinical content from the pediatric WizOrder tool [26], acquired by McKesson from Vanderbilt University Medical Center, into the HEO commercial product. The clinical advisory committee continued to meet weekly for 6 months prior to system release to discuss clinical issues, understand technological implications of the application, and act as liaisons to technical developers at McKesson for all areas of pediatric CPOE design. [SUBTITLE] Functional design of the pediatric Advanced Dosing Model (ADM) [SUBSECTION] By the broadest definition, the ADM as it relates to HEO is a combination of clinical content tables and decision tree logic layered on top of the existing adult CPOE application to provide extensive, content-driven, drug-by-drug clinical decision support for pediatric medication dosing. Based on the clinical advisory committee's recommendations, an ADM focus group was convened to specifically address its functional requirements and design details. 
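As a rough illustration of the kind of logic this section describes — decision support keyed on patient-specific criteria layered over clinical content tables — the sketch below encodes the pediatric age/weight rule stated later in this section and a first-match dosing-region lookup over a subset of the patient parameters the report lists (chronological age, care area, indication, renal impairment; weight enters at the dose-calculation step). All class and field names are hypothetical and are not taken from HEO or the production ADM schema; the example doses are placeholders, not clinical guidance.

```python
# Illustrative sketch only (invented names, not the HEO/ADM schema): a pediatric
# eligibility check using the age/weight rule described in this section, and a
# first-match dosing-region lookup over a subset of the patient parameters
# listed in the report.
from dataclasses import dataclass
from typing import Optional

def is_pediatric(age_years: float, weight_kg: float) -> bool:
    """Rule from the text: <14 years of age, or <18 years and <45 kg (99 lbs)."""
    return age_years < 14 or (age_years < 18 and weight_kg < 45)

@dataclass
class DosingRegion:
    drug: str
    min_age_days: float
    max_age_days: float
    indication: Optional[str] = None         # None = any indication
    care_areas: Optional[frozenset] = None   # None = any care area
    renal_impairment: Optional[bool] = None  # None = dose unaffected by renal function
    dose_mg_per_kg: float = 0.0
    interval_hours: int = 0

def match_region(regions, *, drug, age_days, care_area, indication, renal_impaired):
    """Return the first dosing region whose criteria all match the patient."""
    for r in regions:
        if (r.drug == drug
                and r.min_age_days <= age_days < r.max_age_days
                and (r.indication is None or r.indication == indication)
                and (r.care_areas is None or care_area in r.care_areas)
                and (r.renal_impairment is None or r.renal_impairment == renal_impaired)):
            return r
    return None  # no match -> fall back to manual dosing / pharmacist review

if __name__ == "__main__":
    # Placeholder numbers for illustration only, NOT clinical dosing guidance.
    regions = [
        DosingRegion("exampledrug", 0, 28, indication="sepsis",
                     dose_mg_per_kg=50, interval_hours=8),
        DosingRegion("exampledrug", 28, 6570, indication="sepsis",
                     dose_mg_per_kg=50, interval_hours=6),
    ]
    print(is_pediatric(age_years=9, weight_kg=30))
    print(match_region(regions, drug="exampledrug", age_days=400,
                       care_area="PICU", indication="sepsis", renal_impaired=False))
```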
This group included 15 pediatric pharmacists charged with defining the detailed patient-specific criteria that shape the dosing scenarios for each drug. To facilitate this, a temporary web application and supporting database was built to store their design decisions and manage the pertinent clinical content. Instead of simply facilitating weight-based calculations familiar to pediatric dosing, we sought to create a broad clinical decision support system that incorporates up to 6 patient-specific parameters (Table 2) and safety alerts. The major functional design decisions from both the clinical advisory committee and ADM focus group were: Pediatric patient parameters reasoned over by the Advanced Dosing Model logic • The ADM should provide clinical guidance and targeted medication dosing recommendations based on 1 or more patient-centric criteria at the time of order entry. • The clinical decision support information surrounding medication dosing that populates CPOE clinical content tables should address the diverse needs of general and specialized pediatric populations. In this way, these tables can act as a centralized body of complex yet agreed-upon clinical dosing standards for the hospital. • The ADM processing logic should be designed to proactively alert clinicians when changes in patient criteria might warrant dosing changes based on the configured clinical knowledge base. When a single CPOE system is used for both children and adults, the system must differentiate between the 2 populations so that it knows when to implement logic that applies only to children. This is especially important on surgical services where residents care for both adults and children, or in cases of off-service placement where children are cared for on adult floors due to space limitations or unusual practice patterns. After careful multidisciplinary analysis, the advisory group used age solely to define a pediatric patient (< 14 years of age or <18 years and <45 kg [99 lbs]) in order to capture individuals regardless of location. By the broadest definition, the ADM as it relates to HEO is a combination of clinical content tables and decision tree logic layered on top of the existing adult CPOE application to provide extensive, content-driven, drug-by-drug clinical decision support for pediatric medication dosing. Based on the clinical advisory committee's recommendations, an ADM focus group was convened to specifically address its functional requirements and design details. This group included 15 pediatric pharmacists charged with defining the detailed patient-specific criteria that shape the dosing scenarios for each drug. To facilitate this, a temporary web application and supporting database was built to store their design decisions and manage the pertinent clinical content. Instead of simply facilitating weight-based calculations familiar to pediatric dosing, we sought to create a broad clinical decision support system that incorporates up to 6 patient-specific parameters (Table 2) and safety alerts. The major functional design decisions from both the clinical advisory committee and ADM focus group were: Pediatric patient parameters reasoned over by the Advanced Dosing Model logic • The ADM should provide clinical guidance and targeted medication dosing recommendations based on 1 or more patient-centric criteria at the time of order entry. 
[SUBTITLE] Development of pediatric clinical content [SUBSECTION] To build pediatric clinical content as part of the ADM, we identified the unique patient scenarios that may drive a dosing end point for a specific medication. Each unique dosing scenario is termed a medication "dosing region"; that is, the constellation of clinical characteristics that stipulates a specific pediatric dosage for a given drug. A drug that has a variety of clinical usage scenarios therefore has multiple dosing regions. A dosing region can be based on as few or as many criteria as are needed to achieve the required specificity, and a medication may have numerous associated dosing regions depending on the complexity of the dosing permutations given the patient-specific criteria. For example, Table 3 displays dosing regions built for several ampicillin indications based on different patient scenarios.

Table 3 (caption): Dosing regions for several ampicillin indications. *The dose is administered once a patient is called to the operating room.

Creating the formulary of adult drugs in the initial launch of adult CPOE was very labor-intensive even though each drug could contain only 1 set of dosing options. We therefore sought to limit initial pediatric development to high-use medications and to those with serious safety concerns. To identify this subset, we pulled drug utilization information for a 1-year period from the pharmacy medication management program. A list of the 100 most commonly prescribed drugs on pediatric floors was sent to the pediatric pharmacist workgroup. Collective clinical review reduced this list by removing orderables that already had clinical decision support through elaborate advisor interfaces (e.g., insulin prescribing or intravenous fluids). Additionally, chemotherapy is not currently handled within CPOE at Duke University Hospital because it is paper-based and protocol-driven. The workgroup then added medications to the list, regardless of usage frequency, if any of the following conditions were met: a) the drug had a high risk of a severe prescribing error; b) the drug had high seasonal usage; or c) the drug appeared on care area "pocket cards." Nurses working in intensive care and BMT units carry pocket cards that give dosing suggestions for commonly used medications in their care areas.
If a medication appeared on a pocket card, it was required to undergo dosing region development. The full drug list was then grouped by American Hospital Formulary Service (AHFS) drug class and prioritized for dosing region development. Pediatric dosing references [27-29], literature review, and insight from at least 2 clinical pharmacists serving on the ADM task force were used to define dosing regions. Once all dosing regions were designed, a final review was conducted by a sub-specialty physician. By 1 month after ADM deployment, 1200 dosing regions had been built for 175 medications; as of July 2009, the knowledge base had expanded to over 2300 dosing regions for 375 medications. Dosing regions are reviewed and updated against new clinical evidence every 2 years.

Because dosing is often weight-based and involves calculation, it was important to create logic that facilitates prescribing drugs in practical amounts. To this end, each dosing region was assigned a rounding method by linkage to a specific data table within the ADM database schema that defines a type of rounding behavior. The "round to 10%" data table lists dosing values; the dispensable dose is rounded to the closest value in the table, which will ultimately be within 10% of the weight-based calculation. If a calculated dose falls outside the values in the table, no rounding occurs. A second "round to 5%" data table functions similarly and was put in place for dosing regions requiring tighter control, such as those for narcotics, sedatives, and steroids. Finally, specialized rounding tables were created on a drug-by-drug basis for cases where high customization is necessary; for example, several suppository rounding tables reflect commonly dispensable partial suppository amounts for children. The creation of these tables had the effect of forcing selection of drugs in an easily dispensable form.
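As an illustration of the table-driven rounding just described, the sketch below rounds a calculated dose to the nearest value in a configured table only when that value stays within the table's tolerance (10% or 5% of the calculated dose), and otherwise leaves the dose unrounded. The class name, method name, and example table contents are assumptions for illustration, not the ADM schema.

```java
import java.util.List;

// Illustrative sketch of table-driven dose rounding; not the ADM implementation.
public final class DoseRounding {

    /**
     * Rounds a weight-based calculated dose to the closest value in the rounding
     * table, but only if that value lies within the allowed fractional tolerance
     * (0.10 for the "round to 10%" table, 0.05 for the "round to 5%" table).
     * Otherwise the calculated dose is returned unchanged (no rounding).
     */
    public static double round(double calculatedDose, List<Double> table, double tolerance) {
        double best = Double.NaN;
        double bestDiff = Double.MAX_VALUE;
        for (double candidate : table) {
            double diff = Math.abs(candidate - calculatedDose);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = candidate;
            }
        }
        if (!Double.isNaN(best) && bestDiff <= tolerance * calculatedDose) {
            return best;
        }
        return calculatedDose; // no table value close enough: leave unrounded
    }

    public static void main(String[] args) {
        List<Double> hypotheticalMgTable = List.of(125.0, 250.0, 500.0, 1000.0);
        System.out.println(round(260.0, hypotheticalMgTable, 0.10)); // 250.0 (within 10%)
        System.out.println(round(330.0, hypotheticalMgTable, 0.10)); // 330.0 (unrounded)
    }
}
```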
[SUBTITLE] Designation of patient parameters describing dosing regions [SUBSECTION] Each dosing region specifies appropriate dosing advice within the context of 6 patient-specific parameters (Table 2). These parameters were defined by the clinical advisory workgroup discussed previously. One of the more complex issues addressed was patient weight, and it became clear that multiple weights would be required for the system to be clinically relevant to pediatric patients. The concept of "dosing weight" was made distinct from "actual weight" to account for excesses or deficiencies in weight due to fluid imbalances, infections, or feeding issues.
Thus the dosing weight is the weight on which dosing recommendations should be based, and the clinician must actively decide whether the actual weight is appropriate or whether a dosing weight should be entered to reflect either transient physiologic changes, such as excess body water, or chronic conditions such as pediatric obesity.

With respect to age, the ADM was designed to provide unit translation for the clinician. In older children it is appropriate to express chronological age in whole-number years (e.g., 8 years rather than 8.25 years); however, infants may have age stored in months (or days for newborns). Depending on the age of the child, the ADM may also prompt the clinician for gestational age if it is needed to identify the correct dosing region for a medication.

Renal impairment was included as a potential dosing parameter based upon a qualitative assessment by the ordering provider (i.e., "impaired" or "not impaired"), which is requested when nephrotoxic drugs are dosed. This assessment results in a suggestion of a different dosage of the drug. Including this parameter also serves as an important reminder to the ordering provider that the drug is renally cleared or accumulates metabolites and therefore requires consideration of dose adjustment.

The ADM dosing region selection process may require knowledge of a patient's indication. If so, the provider is prompted to select an indication within the CPOE client. Common indication-based dose adjustments include those used in cases of meningitis, osteomyelitis, or other infectious diseases. A second pathway to dosing by indication is to use an indication-specific order set for specialized dosing in support of unique disease states or clinical conditions such as sickle cell anemia, cystic fibrosis, or organ transplantation. The ADM infers the patient's indication from the order set identity and presents the user with the appropriate, specific spectrum of dosing regions; the user may be prompted further if additional indications are needed to define the dosing region.
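As a rough illustration of how a dosing region might be represented and matched against these patient parameters, the sketch below treats a region as a set of constraints and returns a recommendation only when exactly one region matches, mirroring the single-match fallback behaviour described for the ADM workflow later in this report. The record and field names are assumptions; the actual six parameters are those listed in Table 2, and real dosing regions also carry dose, frequency, rounding, and advisory content.

```java
import java.util.List;
import java.util.Optional;

// Illustrative sketch of dosing-region matching; not the ADM schema or logic.
public final class DosingRegionSelector {

    public record Patient(double ageYears, Double gestationalAgeWeeks, double dosingWeightKg,
                          String careArea, String indication, boolean renalImpairment) {}

    public record DosingRegion(String drug, double minAgeYears, double maxAgeYears,
                               String requiredIndication, Boolean requiresRenalImpairment,
                               String doseAdvice) {

        // A null constraint means "this criterion does not apply to the region".
        boolean matches(Patient p, String requestedDrug) {
            return drug.equals(requestedDrug)
                    && p.ageYears() >= minAgeYears && p.ageYears() < maxAgeYears
                    && (requiredIndication == null || requiredIndication.equals(p.indication()))
                    && (requiresRenalImpairment == null
                        || requiresRenalImpairment == p.renalImpairment());
        }
    }

    /** Returns the single matching region; an empty result stands in for the
     *  fallback to manual dose entry. */
    public static Optional<DosingRegion> select(List<DosingRegion> knowledgeBase,
                                                Patient patient, String drug) {
        List<DosingRegion> matches = knowledgeBase.stream()
                .filter(r -> r.matches(patient, drug))
                .toList();
        return matches.size() == 1 ? Optional.of(matches.get(0)) : Optional.empty();
    }
}
```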
[SUBTITLE] Error prevention measures [SUBSECTION] It was recognized early in the planning process that most of the safety features of pediatric CPOE would be undermined if the wrong weight were entered for a patient. Pediatric patients tend to undergo changes in weight more often, and to a larger degree, than adults, and each manual update of that changing weight is an opportunity for error. To address this, the ADM was configured with several weight-based "pop-up" alerts (a sketch of two of these checks follows the list):
• The user cannot enter any medication without being prompted for a dosing weight if one is not already present. Patient weights are compared to pediatric or neonatal growth curves stored within the CPOE Oracle database clinical content tables, and users are warned if a patient falls below the 3rd percentile or above the 97th percentile.
• The user is alerted if there is an extreme change in weight (typically a change of 10% from the previously recorded value); this threshold is configurable by care area.
• The user is warned if there is an extreme variance between the actual and dosing weights; the variance threshold is configurable by care area.
• If a medication order was dosed off a weight that has since been updated, a reminder to "weight adjust" the medication dose is issued.
• Any patient parameter that is expected to change over time (i.e., dosing weight or actual weight) raises a less invasive alert (a colored text tag next to the variable in question) if it has not been updated after a configurable number of days. Our expectation is that actual weights are updated daily and that dosing weights are updated according to unit protocol and clinical condition.
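The following sketch illustrates two of the weight-based checks in the list above: the growth-curve percentile bounds and the configurable extreme-change threshold. Method names and the percentile input are illustrative assumptions; in the real system the percentile would be derived from the growth curves stored in the clinical content tables.

```java
// Illustrative sketch of two weight-based alert conditions; not the HEO alerting code.
public final class WeightAlerts {

    /**
     * True if the new weight differs from the previously recorded weight by more
     * than the care-area threshold (e.g., 0.10 for a 10% change).
     */
    public static boolean extremeWeightChange(double previousKg, double newKg, double threshold) {
        if (previousKg <= 0) {
            return false; // no prior weight to compare against
        }
        return Math.abs(newKg - previousKg) / previousKg > threshold;
    }

    /** True if the weight-for-age percentile falls below the 3rd or above the 97th percentile. */
    public static boolean outsideGrowthCurve(double percentile) {
        return percentile < 3.0 || percentile > 97.0;
    }
}
```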
Because the pediatric CPOE system was based on a prior adult implementation, adult-specific decision support remains embedded in medication orderables throughout the system. Most orderables contain, for example, adult options for dose, route, and frequency, along with instructional text describing appropriate dose recommendations and other considerations specific to that drug. We recognized that it was critically important that pediatricians not be inadvertently presented with adult dosing advice, and we therefore used a 2-pronged strategy (suppression and redirection) for handling adult-oriented material that might be presented to the end user.

Suppression functionality was implemented for cases where pediatric dosing regions are not available for certain medications but adult-based material is available in the clinical content tables. Any dosing guidance present in the base form of the medication--which is assumed not to have been approved for use in pediatrics--is suppressed in the CPOE application code whenever the patient fits our definition of "pediatric." A countermeasure (i.e., a way to "suppress the suppression") was implemented for unique clinical situations where the same dosing principles apply regardless of whether CPOE considers the patient "pediatric" or "adult." The obstetrics service, for example, uses the same labor-and-delivery order sets whether the patient is 12 or 32 years old; such order sets are designated safe for use in both adult and pediatric populations and are therefore exempt from the suppressive logic.

Redirection functionality was implemented for cases where it is appropriate for the pediatric prescriber to avoid selecting a particular drug entirely. Insulin prescribing, for example, is a complicated endeavor that involves multiple drug orderables because it requires options for different forms of scheduled and supplemental insulin. Clinicians caring for adult patients have always had a full-page, graphical module ("Subcutaneous Insulin Advisor") that presents formal decision support. When the analogous pediatric interface was created, we recognized the risk of a pediatric clinician inadvertently activating the adult-oriented insulin advisor (e.g., with a misdirected mouse click). Therefore, we created a logic module, triggered by the user's selection of the adult insulin advisor, that determines whether the patient meets the definition of "pediatric" and, if so, redirects the user to the pediatric version.
[SUBTITLE] Analysis of voluntarily reported safety events [SUBSECTION] The voluntary Safety Reporting System (SRS) allows staff members to report any perceived safety issue within any DUH care environment, including DCH. DCH staff enter approximately 80 pediatric reports per month. Incidents are assigned to 1 of 9 event categories, and all incidents in the medications category are reviewed by a team of medication safety pharmacists and scored for patient severity. Events with a severity of 3 or more (i.e., the event increased the patient's length of stay) are considered ADEs. A full description of the severity algorithm is reported elsewhere [24].
We compared the rate of harmful events (i.e., ADEs) per 1000 patient days, and the fraction of total reported events that were ADEs, before and after deployment for critical care areas. Pearson's chi-square test was used to assess statistical significance, and binomial 95% confidence intervals for proportions were calculated.
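For concreteness, the sketch below shows one way the pre/post comparison could be computed: a 2x2 chi-square statistic for the fraction of reported events that were ADEs, and a normal-approximation (Wald) 95% confidence interval for a proportion. The Wald interval is an assumption for illustration; the report does not state which binomial interval method was used.

```java
// Illustrative sketch of the pre/post statistics; not the analysis code used for the paper.
public final class AdeRateComparison {

    /** Chi-square statistic (1 df) for a 2x2 table: rows = pre/post, columns = ADE/non-ADE. */
    public static double chiSquare(long adePre, long nonAdePre, long adePost, long nonAdePost) {
        double n = adePre + nonAdePre + adePost + nonAdePost;
        double[][] observed = {{adePre, nonAdePre}, {adePost, nonAdePost}};
        double[] rowTotals = {adePre + nonAdePre, adePost + nonAdePost};
        double[] colTotals = {adePre + adePost, nonAdePre + nonAdePost};
        double statistic = 0.0;
        for (int r = 0; r < 2; r++) {
            for (int c = 0; c < 2; c++) {
                double expected = rowTotals[r] * colTotals[c] / n;
                statistic += Math.pow(observed[r][c] - expected, 2) / expected;
            }
        }
        return statistic; // compare against 3.841 for p < 0.05 with 1 degree of freedom
    }

    /** 95% Wald confidence interval for a proportion successes/n. */
    public static double[] waldInterval95(long successes, long n) {
        double p = (double) successes / n;
        double halfWidth = 1.96 * Math.sqrt(p * (1 - p) / n);
        return new double[]{Math.max(0.0, p - halfWidth), Math.min(1.0, p + halfWidth)};
    }
}
```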
[ "Background", "Setting and implementation period", "CPOE architecture at Duke University Hospital", "Medication challenges at Duke Children's Hospital", "Needs assessment", "Functional design of the pediatric Advanced Dosing Model (ADM)", "Development of pediatric clinical content", "Designation of patient parameters describing dosing regions", "Error prevention measures", "Analysis of voluntarily reported safety events", "Results and Discussion", "Clinician workflow for medication ordering within the CPOE interface", "Guarding against over- and under-dosing", "Limited evaluation of CPOE implementation using organizational safety data", "Comparison to other pediatric dosing models", "Limitations and lessons learned", "Addressing patient weight in the provider work flow", "Overreliance on technology", "The importance of team composition", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Medication management in children poses distinctive challenges [1,2], as pediatricians need to calculate doses based on weight, age, gestational age, and indication, which may increase the risk of mathematical errors (such as the serious and common 10-fold overdose) [3]. Pediatric pharmacists often work with adult formulations and must manually compound suspensions for use in pediatric patients. Most importantly, childhood is an inherently dynamic period during which children experience rapidly changing weights and physiologic fluctuations that place them at high risk for incorrect dosing. Given the limited physiologic reserve of pediatric patients, small miscalculations caused by technical glitches or improper system design can cause significant morbidity and mortality [2,4].\nSome of these challenges can be addressed using computerized provider order entry (CPOE) systems, and the American Academy of Pediatrics policy statement on the prevention of medication errors strongly recommends the use of \"computerized systems\" whenever feasible [5]. Research on the epidemiology of adverse drug events (ADEs) in pediatric inpatients reveals that most ADEs originate at the drug ordering stage, with the smallest and most critically ill patients at highest risk [6-9]. In adults, CPOE has been shown to decrease the incidence of medication errors by 55% [10], and CPOE with advanced clinical decision support can decrease error rates by 83% [5,11]. Several studies of pediatric inpatients have demonstrated decreases in medication errors after the implementation of CPOE [8,12,13], and 1 study of pediatric intensive care unit (ICU) patients found a 95% reduction in medication errors and a 40% reduction in potential ADEs following CPOE implementation [14]. However, few studies have shown improvements in post-CPOE implementation outcome measures such as actual (not potential) ADEs, even after meta-analysis of multiple studies [15] or an advanced time-series analysis is performed [16,17]. Furthermore, the landmark study in 2005 by Han reporting an unexpected increase in ICU pediatric mortality related to a pediatric CPOE implementation [18] sparked a nationwide dialogue about the relationship between technology and patient safety. Del Beccaro et al. provided another perspective on this topic by similarly evaluating the same CPOE product in a separate tertiary care facility [19]. They found no mortality increase in critical care areas, and attributed this to a more careful implementation process and the design of ICU-specific order sets. Recently, another group demonstrated a reduction of mortality after implementing a vended but locally modified product at an academic children's hospital [20]. In this time of increased awareness, it is essential that pediatric centers share their experiences to improve large-scale pediatric deployment of CPOE.\nUnfortunately, there is significant variability in the level of pediatric-specific medication dosing functionality built into today's CPOE structures. An institution wishing to implement pediatric CPOE often faces a difficult choice between replacing its legacy system with a dedicated pediatric CPOE application [21-23] versus enhancing the existing functionality of an adult-focused CPOE application. At Duke University Hospital (DUH), we chose the latter option, working with our CPOE vendor (Horizon Expert Orders, McKesson Corporation, San Francisco) to adapt our existing, adult-focused system to provide pediatric CPOE. 
In doing so, we created the "Advanced Dosing Model" (ADM) within the CPOE application to address the unique dosing needs of children. The ADM uses broad clinical decision support to incorporate many criteria into medication dosing, such as weight, age, and indication, together with safety alerts built into the clinical content. We feel the details of this novel model are sufficiently complex to warrant a report of its own, distinct from other elements of pediatric CPOE.

[SUBTITLE] Setting and implementation period [SUBSECTION] Duke Children's Hospital (DCH) is a tertiary care facility within DUH that comprises 7 inpatient pediatric service locations: 2 general care medical wards, a pediatric intensive care unit (PICU), a neonatal intensive care unit (NICU), a bone marrow transplant (BMT) unit, a pediatric cardiac ICU, and a transitional care (i.e., step-down) unit. DCH averages 7000 pediatric admissions per year across 187 inpatient beds, approximately 50% of which are located in critical care wards. DCH employs approximately 197 attending physicians and 50 pediatric residents across 20 clinical service areas. Table 1 details the release of the CPOE application, which included the ADM functionality described in this report, in order of deployment over a 14-month period across the pediatric units.

Table 1 (caption): Deployment of pediatric CPOE at Duke Children's Hospital. *Number of beds in service as of 1/29/2010. †New pediatric location introduced to DCH after full CPOE deployment on 3/10/2008.

[SUBTITLE] CPOE architecture at Duke University Hospital [SUBSECTION] At DUH, the Horizon Expert Orders (HEO) CPOE system (McKesson Corporation, San Francisco, CA) is a comprehensive order management system that spans medical disciplines and offers real-time decision support and guidance for order entry. This product was deployed on all DUH adult floors by April 2006. Providers interact with a Java-based desktop client (Java version 1.4.2_09) that queries an Oracle 10 database (Oracle Corporation, Redwood Shores, CA) holding both the clinical content tables (e.g., orderables, order sets, and clinical decision support information) and patient information tables (e.g., patient identity, care area, diagnoses, existing orders). New provider choices made through the CPOE client are saved to the patient information tables and executed as HL7 messages to other hospital information technology (IT) applications that fulfill the orders.

[SUBTITLE] Medication challenges at Duke Children's Hospital [SUBSECTION] DCH's pre-implementation medication vulnerabilities were similar to those described by other tertiary care institutions and have been reported previously in an analysis of both voluntarily reported events and ADEs detected by computerized surveillance [24]. Briefly, DCH sees approximately 18.0 medication-related safety incidents per 1000 patient days as detected by voluntary reporting; of these, approximately 10.9% result in some level of patient harm. Computerized surveillance, a complementary incident detection method that behaves as an automated trigger tool, found 1.6 ADEs per 1000 patient days. As expected, event density is higher in critical care units than in general care areas. The most common problem areas are failures in the medication use process, such as incorrect drug dose or rate, followed by drug omissions. Antibiotics in particular were a drug class identified for enhanced surveillance targeting.
We reported that the safety profile of pediatrics was distinct from that of adults, underscoring the importance of pediatric-specific clinical content for dosing guidance [25].
Results and Discussion

[SUBTITLE] Clinician workflow for medication ordering within the CPOE interface [SUBSECTION] Users enter the CPOE interface through the organizational electronic health record. The process of ordering a medication can be initiated either by searching for it in a dialog box within the application or by choosing an order set developed for a specific patient profile and then clicking on the hyperlink for an ADM-enabled medication.
When a patient is admitted to DCH, details regarding the patient's identity, age, and current physical location are transmitted to the CPOE application from the hospital's admission, discharge, and transfer system. The ADM uses this information, along with prompted provider input and its underlying clinical content tables, to determine whether pediatric medication dosing logic is required (Figure 1).

Figure 1 (caption): Processing logic of the Advanced Dosing Model. A clinician initiates an order for a medication. ADM pre-processing logic verifies whether dosing regions exist for this drug. If so, the system retrieves the available patient information (chronological age, weight, and care area) and prompts the user for any additional information needed. Once all patient parameters are collected, the decision support algorithm resolves the list of all potential dosing regions from the clinical knowledge base. If the algorithm successfully identifies 1 dosing region, it is presented to the clinician and made available for calculations and screening. If there are no dosing regions available for the requested medication and patient parameters, or if multiple dosing regions are found, the system exits the ADM and the clinician is prompted to enter a medication dose manually.

Once within the ADM logic, the top half of the screen displays all patient parameters and any relevant considerations or warnings for the drug; the bottom half displays the actions to be taken. If an allergy to a medication is known, warnings are shown at this step, and the user must override the allergy alert twice in the bottom half of the screen before being permitted to move forward with medication ordering. If there are no warnings, the ADM prompts for any additional information (e.g., indication) and presents the proper dose/frequency combinations determined by the patient parameters. For example, when ampicillin is chosen, the system automatically narrows down the available dosing regions based on the patient's age, weight, and care intensity as defined by location. Because ampicillin dosing regions differ by indication, the system asks the user for this information (e.g., meningitis) and then provides appropriate dosing guidance, automatically displaying the correct dose and interval to be ordered (Figure 2). During this process, the ADM has calculated the dosage based on weight, applied custom numeric rounding, and enforced maximum and minimum doses. Unlike other CPOE applications that present all possible ordering combinations, our application uses the ADM to display a limited subset of recommendations that includes only options deemed appropriate for the patient in question. This greatly reduces the display "noise" associated with long lists and helps prevent potential errors due to incorrect selection.

Figure 2 (caption): Screenshot of pediatric medication dosing. This screenshot shows the presentation of an ampicillin dosing region for an infant at Duke Children's Hospital. The left panel displays the current orders for the patient. The top right panel presents the recommended dosing region based on the patient's dosing weight, age, physical location, and indication. The provider may click and select the suggested value (50 mg/kg) or enter his or her own dosage manually as an override.

The user can either select this dosing region or manually enter the desired dosage (i.e., an override).
From this point, the user is taken through a series of screens where he or she may enter dose form, start time, duration, and any additional comments for the pharmacy department, which are stored in a text field. The last screen shows a summary of the order and all selections made, at which point the user may accept, save as a draft, modify, or exit without saving or ordering. After this terminal step, the order is electronically routed to the pharmacy department for processing. If dosing regions are undefined or do not fit the patient-specific criteria, the system exits the ADM and prompts the user to manually enter the desired medication order--similar to the process that routinely occurred on paper prior to CPOE deployment.
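The dose-preparation steps mentioned above (weight-based calculation, custom rounding, and minimum/maximum enforcement) can be pictured as a small pipeline. The sketch below is a hedged illustration only, with assumed names, and the rounding step simply reuses the table-driven idea sketched earlier in this report.

```java
import java.util.function.DoubleUnaryOperator;

// Illustrative sketch of the dose computation pipeline; not the ADM implementation.
public final class DoseComputation {

    public record DoseCheck(double dose, boolean outsideRegionLimits) {}

    /**
     * Computes a weight-based dose, applies a rounding rule (for example the
     * table-driven rounding sketched earlier), and screens the result against
     * the dosing region's minimum and maximum doses.
     */
    public static DoseCheck compute(double mgPerKg, double dosingWeightKg,
                                    DoubleUnaryOperator rounding,
                                    double minMg, double maxMg) {
        double calculated = mgPerKg * dosingWeightKg;
        double rounded = rounding.applyAsDouble(calculated);
        boolean outside = rounded < minMg || rounded > maxMg;
        return new DoseCheck(rounded, outside);
    }
}
```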
Guarding against over- and under-dosing

In allowing providers to override the dosing region logic, it became clear during the testing phase that a provider could mistype a dose and order far too much or too little drug. As a result, we programmed extra safeguards into the dosing region knowledge base: the ADM alerts the user whenever a value is entered that exceeds the minimum or maximum drug doses permitted by any dosing region associated with that medication. To override the alert, the user must enter the exact "aberrant" value a second time before the ordering process will move forward. We chose this route, rather than requiring the user to enter a reason for the override, to guard against cases where an override is justified and yet an unintentional, excessively extreme dosage is still entered. We believe this level of active participation by the user (as opposed to the oft-used passive alerting, e.g., having to click an "OK" button) strikes a reasonable balance between preventing errors and minimizing annoyance. In every case within the CPOE application where alerting was used, we recognized that over-alerting (such that warnings no longer command the user's attention) is as ineffective as no warnings at all, and we attempted to set alert thresholds accordingly.
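A sketch of this double-entry safeguard is shown below; the limits and the way the second entry is collected are simplified assumptions for illustration, not the production alerting code.

```python
# Sketch of the out-of-range manual-dose safeguard: a dose outside every dosing
# region's limits must be re-typed exactly before the order can proceed.
def confirm_out_of_range_dose(entered_mg, region_limits, reenter):
    """region_limits: (min_mg, max_mg) tuples for the drug's dosing regions.
    reenter: callable that asks the user to type the dose again."""
    if any(lo <= entered_mg <= hi for lo, hi in region_limits):
        return True                              # within some region; no alert
    # Active confirmation: the exact "aberrant" value must be entered again.
    return float(reenter()) == entered_mg

# Scripted second entries stand in for the clinician (limits are hypothetical):
limits = [(100.0, 2000.0)]
print(confirm_out_of_range_dose(5000.0, limits, reenter=lambda: "5000"))  # True
print(confirm_out_of_range_dose(5000.0, limits, reenter=lambda: "500"))   # False
```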
Limited evaluation of CPOE implementation using organizational safety data

Given the extensive literature discussing the unintended patient harm that CPOE may cause in critical care areas [12,13,15,18,19], we felt compelled to evaluate our intervention using available safety and quality resources. We examined data from our organizational voluntary Safety Reporting System (SRS) to ask whether rates of adverse drug events increased in pediatric critical care after CPOE deployment. Full details of the SRS have been described previously [24,25]. We acknowledge that relying solely on voluntarily reported events can be problematic because of well-known issues of reporting bias, volume, seasonality, and anonymity [30]. However, voluntary reporting data have been used elsewhere to evaluate pediatric CPOE systems and to better understand the effects of an implementation when a more rigorous prospective study is not possible [12]. Furthermore, the SRS is well established within our health system and has become an integral part of the culture of safety at Duke Medicine.

With these caveats in mind, we collected all reported harmful ADEs (i.e., at minimum, transient adverse patient effects that required some corrective therapy or increased the length of stay) [24,25]. The ADE rate decreased by 42.9% (p = 0.012) in the PICU and by 46.4% (p = 0.006) in the NICU. Similarly, the percentage of total reports that were severe ADEs decreased significantly in each unit (Table 4). We cannot rule out the effects of reporter bias, and event volume is too low to examine the reports by category of system failure and attributable cause. Pediatrician review of the event narratives entered by the medication safety pharmacists suggests that there may be fewer PICU reports within 2 primary areas of acknowledged weakness in medication processing (incorrect ordering and order transcription), although the data are sparse.
Although we were unable to prospectively study the impact of the Advanced Dosing Model on patient safety, this retrospective analysis of voluntarily reported safety data suggests that we have improved the safety of medication dosing in our pediatric critical care population.

Table 4. Voluntarily reported adverse drug events pre- and post-deployment of the pediatric Advanced Dosing Model. The pre-period begins on 9/17/2004 and ends the day before the CPOE deployment date for each unit described in Table 1; the post-period begins on the day of CPOE deployment and ends on 12/31/2009. CI = confidence interval. *Significant by 2-way Pearson's chi-square test.
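For readers interested in how such a pre/post comparison could be computed, the snippet below illustrates a 2-way Pearson chi-square test on a contingency table. The counts are placeholders invented for the example; only the rate decreases and p-values reported above come from our data.

```python
# Illustration of a 2-way Pearson chi-square test on a pre/post contingency table.
# The counts below are placeholders, not the actual DCH safety data.
from scipy.stats import chi2_contingency

# rows: pre- vs post-deployment; columns: harmful ADE reports vs other reports
picu_counts = [[40, 360],
               [23, 377]]
chi2, p, dof, expected = chi2_contingency(picu_counts, correction=False)
pre_rate = picu_counts[0][0] / sum(picu_counts[0])
post_rate = picu_counts[1][0] / sum(picu_counts[1])
print(f"relative change = {(post_rate - pre_rate) / pre_rate:+.1%}, p = {p:.3f}")
```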
Comparison to other pediatric dosing models

A comprehensive comparison to other systems is hampered by the lack of rigorous reports on CPOE dosing rule development in the formal literature. Commercial CPOE systems, including Horizon Expert Orders, often use formulary references such as First Databank [31] or Lexicon-Multum [32] for static dosing advice (e.g., drug-drug interaction or allergy alerts) but not for active dosing assistance or management of the dosing process, in which tailored doses are suggested based on the patient's clinical profile. Although true clinical decision support knowledge bases are available, they are focused on adults, require extensive manipulation for use in the hospital setting, and are underdeveloped for pediatrics [33].

WizOrder, the predecessor of our CPOE system, did evolve to include weight-based dosing [26], but the lack of a full report describing its implementation is a barrier to comparative analysis. In a 1-page conference proceeding, the authors describe the concept of a dosing weight as well as a process by which existing orders are reviewed when the patient's weight changes significantly. Our weight-based dosing also checks the new value against growth curves to ensure that the change from the prior weight is plausible. When a new drug is dosed on an updated weight, the patient's other active drug orders are automatically checked to determine whether their dosing regions are still appropriate or should be updated.

Killelea and colleagues published a description of their pediatric dosing decision support rules for a large teaching hospital [34]. Like our ADM, their method included designing rules by committee based on medication, age, and weight. However, they provide little detail on how weights are managed, nor do they describe alerting functionality around weight as we have at DCH. Additionally, their handling of indication is limited to displaying the dosing guidance for the default indication and providing pop-up windows that describe dosing rules for other, less common indications. Patient-specific indication details are not considered by their system, and location-based customization (i.e., care intensity) is not used when presenting a dosing suggestion. Rounding is configurable for each medication rule, but their system as reported is not configurable per location, as it is at DCH.

Our incorporation of location thus comes into focus as an important, novel aspect of our model. We use location as a critical identifier of care intensity, which is especially important given our large BMT and ICU population. These patients often receive augmented doses of medications because of the severity of their illness, doses that could be severely harmful to other pediatric patients. It is therefore critical that the decision support inherent to the dosing regions "locks" this content to the BMT and ICU units only. Similarly, cystic fibrosis patients have equally unique medication needs, so the inclusion of an indication parameter in the ADM allows us to focus clinical content on this specialized population. Overall, the modular nature of the dosing region model allows us to easily develop new dosing scenarios for specialized populations, especially if the pediatric population profile at DCH changes over time.
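The weight-update behaviour described above (a plausibility check against the prior weight and an automatic re-check of the patient's other active orders) might look like the following sketch. The plausibility rule is a crude stand-in for the growth-curve check, and the data shapes and dummy resolver are assumptions for illustration only.

```python
# Sketch of the weight-update behaviour: plausibility check plus re-resolution
# of the patient's other active orders; thresholds and shapes are illustrative.
def weight_change_plausible(previous_kg, new_kg, max_relative_change=0.15):
    """Rough stand-in for the growth-curve check on a changed weight."""
    if previous_kg is None:
        return True
    return abs(new_kg - previous_kg) / previous_kg <= max_relative_change

def recheck_active_orders(active_orders, resolve, new_weight_kg):
    """Return orders whose previously chosen dosing region no longer matches."""
    needs_review = []
    for order in active_orders:
        region = resolve(order["drug"], order["age_days"], new_weight_kg,
                         order["care_area"], order["indication"])
        if region is None or region is not order["region"]:
            needs_review.append(order)
    return needs_review

# Demo with a dummy resolver that only accepts weights under 10 kg:
dummy_region = object()
def dummy_resolve(drug, age_days, weight_kg, care_area, indication):
    return dummy_region if weight_kg < 10 else None

orders = [{"drug": "ampicillin", "age_days": 200, "care_area": "PICU",
           "indication": "meningitis", "region": dummy_region}]
print(weight_change_plausible(8.0, 11.0))                  # False: >15% jump
print(recheck_active_orders(orders, dummy_resolve, 11.0))  # order flagged for review
```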
Limitations and lessons learned

The ADM represents a unique, modular approach to managing the logistics of adding increasingly complex clinical decision support to an existing CPOE application. This allowed us to tailor an adult tool to the unique needs of pediatrics.
However, in practice there are practical and clinical considerations regarding whether a subpopulation warrants specialized dosing through the ADM. There can be significant resource management, implementation, and maintenance trade-offs between over-defining and under-defining patient subpopulations. Thus, a core limitation of the dosing region model of medication ordering is that it is not practical to develop a dosing region for every conceivable scenario of pediatric drug dosing. As a result, there will always be some medications without dosing regions, and dosing region gaps for which the ADM is unable to suggest an appropriate dose. The use of dosing regions therefore requires that end-user clinicians resolve these gaps. In every case where the ADM cannot suggest a dose, the patient parameters, medications involved, and manually entered dosages are recorded in an audit file for later review to determine whether systematic needs are going unmet.
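As a simple illustration of this audit trail, a fallback to manual dosing could append a structured record for later review along the following lines; the file name and record fields are hypothetical, not the production audit format.

```python
# Sketch of the audit record written when the ADM cannot suggest a dose and the
# clinician enters one manually; file name and fields are illustrative.
import json, datetime

def audit_manual_dose(path, patient_params, drug, entered_dose_mg):
    record = {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "drug": drug,
        "entered_dose_mg": entered_dose_mg,
        "patient": patient_params,   # e.g., age, weight, care area, indication
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

audit_manual_dose("adm_gap_audit.jsonl",
                  {"age_days": 120, "weight_kg": 6.4, "care_area": "PICU"},
                  drug="ampicillin", entered_dose_mg=320.0)
```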
Addressing patient weight in the provider work flow

Because the ADM relies so heavily on patient weight in pediatric dosing, it became critically important to study the workflow processes that shape how a provider interprets the definition of weight, in order to guard against unexpected results. The potential for unintended consequences is significant in a culture that thinks in pounds but doses in kilograms: a weight entered in pounds where kilograms are expected translates into more than double the intended dose of medication. Providing alerts at both extremes of the possible weight continuum addresses this issue but brings its own challenges. DCH has a large population of patients with disease states or chronic conditions that often result in extremely low weights due to poor growth, and even in the general care areas the increasing prevalence of obese children requires frequent adjustments to commonly accepted dosing paradigms. As a result, crafting alerts that remind the provider while still avoiding alert fatigue was extremely important. Once a patient's weight was entered in the CPOE application, it also became necessary to define policies for how weight is defined and who is responsible for updating it during the course of a patient's stay. Although nurses updated the actual weight in their stand-alone documentation system, it was important that this weight be updated simultaneously in the CPOE system. Because of the impact of weight on medication dosing, it was decided that only physicians or physician extenders could enter or update this value.
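One way the weight-entry alerts described above could be framed is sketched below: a new weight is checked against a plausible range and against the prior value, with a ratio near 2.2 flagged as a possible pounds-for-kilograms entry. The thresholds are assumptions for illustration, not DCH's configured values.

```python
# Sketch of weight-entry checks: plausible-range bounds plus a heuristic for the
# pounds-versus-kilograms confusion described above; thresholds are illustrative.
KG_PER_LB = 0.45359237

def weight_entry_warnings(new_weight_kg, last_weight_kg=None,
                          low_kg=0.4, high_kg=150.0):
    warnings = []
    if not (low_kg <= new_weight_kg <= high_kg):
        warnings.append("weight outside plausible pediatric range")
    if last_weight_kg:
        ratio = new_weight_kg / last_weight_kg
        # A value ~2.2x the prior weight suggests pounds entered as kilograms.
        if abs(ratio - 1 / KG_PER_LB) <= 0.2:
            warnings.append("possible pounds-for-kilograms entry (~2.2x prior weight)")
    return warnings

print(weight_entry_warnings(17.6, last_weight_kg=8.0))  # flags a likely unit error
print(weight_entry_warnings(8.2, last_weight_kg=8.0))   # no warnings
```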
Overreliance on technology

One unexpected side effect of the dosing region model was that providers quickly became accustomed to the concept and were concerned when no dosing region was present. Frontline users expected the computer to prevent any bad decision, yet the clinical advisory committee was reluctant to impose restrictions on dosing regions that would compromise a provider's flexibility in ordering. Providers had to be educated to think critically about the dosing recommendations for each patient and were reminded that the thought process for manually entering a drug dosage without a dosing region within CPOE is nearly identical to the prior paper-based ordering process.

The importance of team composition

Finally, computerized decision support design is not a typical knowledge area for most clinical practitioners, and, conversely, IT system developers often do not possess an understanding of the dynamic health care environment in which their applications are used. Because CPOE is technically sophisticated and has immense clinical impact, it is extremely important that the design team include individuals who can bridge the traditional divide between IT and the clinical specialties. Such individuals review functional and technical specifications with an eye for clinical impact, potential functionality conflicts, and knowledge base gaps.
Conclusion

In this study, we have described the implementation of a pediatric Advanced Dosing Model that acts as an enhancement to an adult-centric, vended CPOE system in order to meet the unique challenges of pediatric care.
Despite some limitations, the ADM provides a powerful way to guide pediatricians through the medication ordering process. The model uses knowledge of the patient's state to reason over care parameters and suggest an appropriate dose. Enhancing an adult-focused CPOE system for safe pediatric medication management is a daunting task. When undertaking such a project, it is essential that physicians and pharmacists with formal informatics training serve as an interface between the clinicians and the development team. We hope that the strategies described here will guide other pediatric institutions as they develop their own plans for implementing pediatric computerized provider order entry.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JMF conceived the project, designed the technical and functional specifications, made critical design decisions, oversaw application deployment, and drafted the manuscript. JJ and PS designed the dosing region functionality and supported frontline deployment. CMD reviewed reported safety incidents and advised on the manuscript. MH performed safety data analysis, interpreted the results, and drafted the final manuscript. AA supported project design and deployment.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1472-6947/11/14/prepub
[ "Medication management in children poses distinctive challenges [1,2], as pediatricians need to calculate doses based on weight, age, gestational age, and indication, which may increase the risk of mathematical errors (such as the serious and common 10-fold overdose) [3]. Pediatric pharmacists often work with adult formulations and must manually compound suspensions for use in pediatric patients. Most importantly, childhood is an inherently dynamic period during which children experience rapidly changing weights and physiologic fluctuations that place them at high risk for incorrect dosing. Given the limited physiologic reserve of pediatric patients, small miscalculations caused by technical glitches or improper system design can cause significant morbidity and mortality [2,4].\nSome of these challenges can be addressed using computerized provider order entry (CPOE) systems, and the American Academy of Pediatrics policy statement on the prevention of medication errors strongly recommends the use of \"computerized systems\" whenever feasible [5]. Research on the epidemiology of adverse drug events (ADEs) in pediatric inpatients reveals that most ADEs originate at the drug ordering stage, with the smallest and most critically ill patients at highest risk [6-9]. In adults, CPOE has been shown to decrease the incidence of medication errors by 55% [10], and CPOE with advanced clinical decision support can decrease error rates by 83% [5,11]. Several studies of pediatric inpatients have demonstrated decreases in medication errors after the implementation of CPOE [8,12,13], and 1 study of pediatric intensive care unit (ICU) patients found a 95% reduction in medication errors and a 40% reduction in potential ADEs following CPOE implementation [14]. However, few studies have shown improvements in post-CPOE implementation outcome measures such as actual (not potential) ADEs, even after meta-analysis of multiple studies [15] or an advanced time-series analysis is performed [16,17]. Furthermore, the landmark study in 2005 by Han reporting an unexpected increase in ICU pediatric mortality related to a pediatric CPOE implementation [18] sparked a nationwide dialogue about the relationship between technology and patient safety. Del Beccaro et al. provided another perspective on this topic by similarly evaluating the same CPOE product in a separate tertiary care facility [19]. They found no mortality increase in critical care areas, and attributed this to a more careful implementation process and the design of ICU-specific order sets. Recently, another group demonstrated a reduction of mortality after implementing a vended but locally modified product at an academic children's hospital [20]. In this time of increased awareness, it is essential that pediatric centers share their experiences to improve large-scale pediatric deployment of CPOE.\nUnfortunately, there is significant variability in the level of pediatric-specific medication dosing functionality built into today's CPOE structures. An institution wishing to implement pediatric CPOE often faces a difficult choice between replacing its legacy system with a dedicated pediatric CPOE application [21-23] versus enhancing the existing functionality of an adult-focused CPOE application. At Duke University Hospital (DUH), we chose the latter option, working with our CPOE vendor (Horizon Expert Orders, McKesson Corporation, San Francisco) to adapt our existing, adult-focused system to provide pediatric CPOE. 
In doing so, we created the \"Advanced Dosing Model\" (ADM) within the CPOE application to address the unique dosing needs of children. The ADM uses broad clinical decision support to incorporate many criteria into medication dosing such as weight, age, indication, and safety alerts that are built into clinical content. We feel the details of this novel model are sufficiently complex to warrant its own report distinct from other elements of pediatric CPOE.", "[SUBTITLE] Setting and implementation period [SUBSECTION] Duke Children's Hospital (DCH) is a tertiary care facility within DUH that comprises 7 inpatient pediatric service locations: 2 general care medical wards, a pediatric intensive care unit (PICU), a neonatal intensive care unit (NICU), a bone marrow transplant (BMT) unit, a pediatric cardiac ICU, and a transitional care (i.e., step-down) unit. DCH averages 7000 pediatric admissions per year across 187 inpatient beds, approximately 50% of which are located in critical care wards. DCH employs approximately 197 attending physicians and 50 pediatric residents across 20 clinical service areas. Table 1 details release of the CPOE application, which included the ADM functionality described in this report, in order of deployment over a 14-month period across pediatric units (Table 1).\nDeployment of pediatric CPOE at Duke Children's Hospital\n*Number of beds in service as of 1/29/2010.\n†New pediatric location introduced to DCH after full CPOE deployment on 3/10/2008.\nDuke Children's Hospital (DCH) is a tertiary care facility within DUH that comprises 7 inpatient pediatric service locations: 2 general care medical wards, a pediatric intensive care unit (PICU), a neonatal intensive care unit (NICU), a bone marrow transplant (BMT) unit, a pediatric cardiac ICU, and a transitional care (i.e., step-down) unit. DCH averages 7000 pediatric admissions per year across 187 inpatient beds, approximately 50% of which are located in critical care wards. DCH employs approximately 197 attending physicians and 50 pediatric residents across 20 clinical service areas. Table 1 details release of the CPOE application, which included the ADM functionality described in this report, in order of deployment over a 14-month period across pediatric units (Table 1).\nDeployment of pediatric CPOE at Duke Children's Hospital\n*Number of beds in service as of 1/29/2010.\n†New pediatric location introduced to DCH after full CPOE deployment on 3/10/2008.\n[SUBTITLE] CPOE architecture at Duke University Hospital [SUBSECTION] At DUH, the Horizon Expert Orders (HEO) CPOE system (McKesson Corporation, San Francisco, CA) is a comprehensive order management system that spans medical disciplines and offers real-time decision support and guidance for order entry. This product was deployed on all DUH adult floors by April, 2006. Providers interact with a Java-based desktop client (Java version 1.42.09) that queries an Oracle 10 database (Oracle Corporation, Redwood Shores, CA) holding both the clinical content tables (e.g., orderables, order sets, and clinical decision support information) and patient information tables (e.g., patient identity, care area, diagnoses, existing orders). 
New provider choices made through the CPOE client are saved to the patient information table and executed as HL7 messages to other hospital information technology (IT) applications that fulfill the orders.\nAt DUH, the Horizon Expert Orders (HEO) CPOE system (McKesson Corporation, San Francisco, CA) is a comprehensive order management system that spans medical disciplines and offers real-time decision support and guidance for order entry. This product was deployed on all DUH adult floors by April, 2006. Providers interact with a Java-based desktop client (Java version 1.42.09) that queries an Oracle 10 database (Oracle Corporation, Redwood Shores, CA) holding both the clinical content tables (e.g., orderables, order sets, and clinical decision support information) and patient information tables (e.g., patient identity, care area, diagnoses, existing orders). New provider choices made through the CPOE client are saved to the patient information table and executed as HL7 messages to other hospital information technology (IT) applications that fulfill the orders.\n[SUBTITLE] Medication challenges at Duke Children's Hospital [SUBSECTION] DCH's pre-implementation medication vulnerabilities were similar to those described by other tertiary care institutions and have been reported previously in an analysis of both voluntarily reported events and ADEs detected by computerized surveillance [24]. Briefly, DCH sees approximately 18.0 medication-related safety incidents per 1000 patient days as detected by voluntary reporting. Of these, approximately 10.9% result in some level of patient harm. Computerized surveillance, a complementary incident detection method that behaves as an automated trigger tool, found 1.6 ADEs per 1000 patient days. As expected, event density is higher in critical care units than in general care areas. The most common problem areas are failures in the medication use process such as incorrect drug dose or rate followed by drug omissions. Antibiotics in particular were a drug class identified for enhanced surveillance targeting. We reported that the safety profile of pediatrics was distinct from that of adults, underscoring the importance of pediatric-specific clinical content for dosing guidance [25].\nDCH's pre-implementation medication vulnerabilities were similar to those described by other tertiary care institutions and have been reported previously in an analysis of both voluntarily reported events and ADEs detected by computerized surveillance [24]. Briefly, DCH sees approximately 18.0 medication-related safety incidents per 1000 patient days as detected by voluntary reporting. Of these, approximately 10.9% result in some level of patient harm. Computerized surveillance, a complementary incident detection method that behaves as an automated trigger tool, found 1.6 ADEs per 1000 patient days. As expected, event density is higher in critical care units than in general care areas. The most common problem areas are failures in the medication use process such as incorrect drug dose or rate followed by drug omissions. Antibiotics in particular were a drug class identified for enhanced surveillance targeting. 
We reported that the safety profile of pediatrics was distinct from that of adults, underscoring the importance of pediatric-specific clinical content for dosing guidance [25].\n[SUBTITLE] Needs assessment [SUBSECTION] Because DCH serves challenging, critically ill pediatric patients, the deployment of CPOE in this environment was deferred until the end of the adult CPOE implementation plan. A needs assessment was performed by a multidisciplinary clinical advisory workgroup of physicians, nurses, pharmacists, and safety directors to define the features and work flow requirements for a pediatric CPOE product deployed to DCH. These individuals made broad decisions that would affect clinician work flow, and their input was critical to ensure operational acceptance. It was immediately recognized that a pediatric CPOE application would require high flexibility to approach the wide variety of nuanced, sometimes novel, pediatric therapies in place at DCH. Given that the existing CPOE system provides adult dosing guidance via a specific set of clinical content tables, it was recognized that the needs of pediatric dosing could be satisfied through adding an additional set of tables for that population while maintaining the existing adult-based infrastructure. As a result, the most efficient plan was to partner with McKesson to enhance the adult product for pediatric usage instead of implementing a new, vended solution. McKesson and DUH agreed to a joint development project to incorporate clinical content from the pediatric WizOrder tool [26], acquired by McKesson from Vanderbilt University Medical Center, into the HEO commercial product. The clinical advisory committee continued to meet weekly for 6 months prior to system release to discuss clinical issues, understand technological implications of the application, and act as liaisons to technical developers at McKesson for all areas of pediatric CPOE design.\nBecause DCH serves challenging, critically ill pediatric patients, the deployment of CPOE in this environment was deferred until the end of the adult CPOE implementation plan. A needs assessment was performed by a multidisciplinary clinical advisory workgroup of physicians, nurses, pharmacists, and safety directors to define the features and work flow requirements for a pediatric CPOE product deployed to DCH. These individuals made broad decisions that would affect clinician work flow, and their input was critical to ensure operational acceptance. It was immediately recognized that a pediatric CPOE application would require high flexibility to approach the wide variety of nuanced, sometimes novel, pediatric therapies in place at DCH. Given that the existing CPOE system provides adult dosing guidance via a specific set of clinical content tables, it was recognized that the needs of pediatric dosing could be satisfied through adding an additional set of tables for that population while maintaining the existing adult-based infrastructure. As a result, the most efficient plan was to partner with McKesson to enhance the adult product for pediatric usage instead of implementing a new, vended solution. McKesson and DUH agreed to a joint development project to incorporate clinical content from the pediatric WizOrder tool [26], acquired by McKesson from Vanderbilt University Medical Center, into the HEO commercial product. 
The clinical advisory committee continued to meet weekly for 6 months prior to system release to discuss clinical issues, understand technological implications of the application, and act as liaisons to technical developers at McKesson for all areas of pediatric CPOE design.\n[SUBTITLE] Functional design of the pediatric Advanced Dosing Model (ADM) [SUBSECTION] By the broadest definition, the ADM as it relates to HEO is a combination of clinical content tables and decision tree logic layered on top of the existing adult CPOE application to provide extensive, content-driven, drug-by-drug clinical decision support for pediatric medication dosing. Based on the clinical advisory committee's recommendations, an ADM focus group was convened to specifically address its functional requirements and design details. This group included 15 pediatric pharmacists charged with defining the detailed patient-specific criteria that shape the dosing scenarios for each drug. To facilitate this, a temporary web application and supporting database was built to store their design decisions and manage the pertinent clinical content.\nInstead of simply facilitating weight-based calculations familiar to pediatric dosing, we sought to create a broad clinical decision support system that incorporates up to 6 patient-specific parameters (Table 2) and safety alerts. The major functional design decisions from both the clinical advisory committee and ADM focus group were:\nPediatric patient parameters reasoned over by the Advanced Dosing Model logic\n• The ADM should provide clinical guidance and targeted medication dosing recommendations based on 1 or more patient-centric criteria at the time of order entry.\n• The clinical decision support information surrounding medication dosing that populates CPOE clinical content tables should address the diverse needs of general and specialized pediatric populations. In this way, these tables can act as a centralized body of complex yet agreed-upon clinical dosing standards for the hospital.\n• The ADM processing logic should be designed to proactively alert clinicians when changes in patient criteria might warrant dosing changes based on the configured clinical knowledge base.\nWhen a single CPOE system is used for both children and adults, the system must differentiate between the 2 populations so that it knows when to implement logic that applies only to children. This is especially important on surgical services where residents care for both adults and children, or in cases of off-service placement where children are cared for on adult floors due to space limitations or unusual practice patterns. After careful multidisciplinary analysis, the advisory group used age solely to define a pediatric patient (< 14 years of age or <18 years and <45 kg [99 lbs]) in order to capture individuals regardless of location.\nBy the broadest definition, the ADM as it relates to HEO is a combination of clinical content tables and decision tree logic layered on top of the existing adult CPOE application to provide extensive, content-driven, drug-by-drug clinical decision support for pediatric medication dosing. Based on the clinical advisory committee's recommendations, an ADM focus group was convened to specifically address its functional requirements and design details. This group included 15 pediatric pharmacists charged with defining the detailed patient-specific criteria that shape the dosing scenarios for each drug. 
Development of pediatric clinical content

To build pediatric clinical content as part of the ADM, unique patient scenarios were identified that may drive a dosing end point for a specific medication. Each unique dosing scenario is termed a medication "dosing region"; that is, the constellation of clinical characteristics that stipulates a specific pediatric dosage for a given drug. A drug that has a variety of clinical usage scenarios therefore has multiple dosing regions. A dosing region can be based on as few or as many criteria as are needed to achieve the required specificity. Similarly, a medication may have numerous associated dosing regions depending on the complexity of the dosing permutations given the patient-specific criteria. For example, Table 3 displays dosing regions built for several ampicillin indications based on different patient scenarios.

Table 3. Dosing regions for several ampicillin indications. *The dose is administered once a patient is called to the operating room.
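Conceptually, a dosing region is a structured record of matching criteria plus a dosing end point. The sketch below shows one possible shape for such a record; the field names are ours, and the values are illustrative only (loosely echoing the 50 mg/kg infant example shown later in Figure 2), not actual DCH dosing content.

// Illustrative only: not the ADM database schema and not clinical guidance.
record DosingRegion(
        String drug,
        String indication,      // e.g., "meningitis"; null when not indication-specific
        double minAgeYears,
        double maxAgeYears,
        double minWeightKg,
        double maxWeightKg,
        String careArea,        // e.g., "PICU"; null when any care area applies
        double dosePerKgMg,     // weight-based dose in mg/kg
        int intervalHours,
        double maxSingleDoseMg) {

    public static void main(String[] args) {
        DosingRegion example = new DosingRegion(
                "ampicillin", "meningitis", 0.08, 1.0, 2.0, 12.0, null, 50.0, 6, 2000.0);
        System.out.println(example);
    }
}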
Creating the formulary of adult drugs in the initial launch of adult CPOE was very labor-intensive even when each drug could contain only 1 set of dosing options. We therefore sought to limit initial development to high-use medications and to those with serious safety concerns. To identify this subset, we pulled drug utilization information for a 1-year period from the pharmacy medication management program. A list of the top 100 most commonly prescribed drugs on pediatric floors was sent to the pediatric pharmacist workgroup. Collective clinical review reduced this list by removing orderables that already had clinical decision support through elaborate advisor interfaces (e.g., insulin prescribing or intravenous fluids). Additionally, chemotherapy is not currently handled within CPOE at Duke University Hospital, as it is paper-based and protocol-driven. The workgroup then added medications to the list, regardless of usage frequency, if any of the following conditions were met: a) the drug had a high risk of a severe prescribing error; b) the drug had high seasonal usage; or c) the drug appeared on care area "pocket cards." Nurses working in intensive care and BMT units all carry pocket cards that give dosing suggestions for medications commonly used in their care areas; any medication on a card was required to undergo dosing region development. The full drug list was then grouped by American Hospital Formulary Service (AHFS) drug class and prioritized for dosing region development.

Pediatric dosing references [27-29], literature review, and insight from at least 2 clinical pharmacists serving on the ADM task force were used to define dosing regions. Once all dosing regions were designed, a final review was conducted by a sub-specialty physician. By 1 month after ADM deployment, 1200 dosing regions had been built for 175 medications. As of July 2009, the knowledge base had expanded to include over 2300 dosing regions for 375 medications. Dosing regions are reviewed and updated against new clinical evidence once every 2 years.

Because dosing is usually weight-based and involves calculation, it was important to create logic that facilitates prescribing drugs in practical amounts. To this end, each dosing region was assigned a rounding method by linkage to a specific data table within the ADM database schema that defines a rounding behavior. The "round to 10%" data table lists dispensable dose values; a calculated dose is rounded to the closest value in the table provided that value lies within 10% of the weight-based calculation, and no rounding occurs otherwise. A second "round to 5%" data table functions similarly and was put into place for dosing regions requiring tighter control, such as those for narcotics, sedatives, and steroids. Finally, specialized rounding tables were created on a drug-by-drug basis for cases where high customization is necessary; for example, several suppository rounding tables were created to reflect commonly dispensable partial suppository amounts for children. The creation of these tables had the effect of forcing selection of drugs in an easily dispensable form.
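A minimal sketch of this rounding behavior follows, assuming the dispensable values are supplied as a simple list and the tolerance is 0.10 or 0.05 depending on the table; the class and method names are illustrative.

import java.util.List;

public final class DoseRounding {

    // Round a weight-based calculation to the closest dispensable value, but only
    // if that value is within the given tolerance (10% or 5%) of the calculation.
    public static double roundToDispensable(double calculatedDose,
                                            List<Double> dispensableDoses,
                                            double tolerance) {
        double best = calculatedDose;
        double bestDelta = Double.MAX_VALUE;
        for (double candidate : dispensableDoses) {
            double delta = Math.abs(candidate - calculatedDose);
            if (delta < bestDelta) {
                bestDelta = delta;
                best = candidate;
            }
        }
        return (bestDelta <= tolerance * calculatedDose) ? best : calculatedDose;
    }

    public static void main(String[] args) {
        List<Double> table = List.of(125.0, 187.5, 250.0, 375.0, 500.0);
        System.out.println(roundToDispensable(240.0, table, 0.10)); // 250.0: within 10%
        System.out.println(roundToDispensable(300.0, table, 0.10)); // 300.0: no value close enough
    }
}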
Designation of patient parameters describing dosing regions

Each dosing region specifies appropriate dosing advice within the context of 6 patient-specific parameters (Table 2). These parameters were defined by the clinical advisory workgroup discussed previously. One of the more complex issues addressed was patient weight, and it became clear that multiple weights would be required for the system to be clinically relevant to pediatric patients. The concept of "dosing weight" was made distinct from "actual weight" to account for excesses or deficiencies in weight due to fluid imbalances, infections, or feeding issues. Thus, the dosing weight is the weight on which dosing recommendations should be based, and the clinician should actively decide whether the actual weight is appropriate or whether a dosing weight should be entered to reflect either transient physiologic changes, such as excess body water, or chronic conditions such as pediatric obesity.

When considering age for medication dosing, the ADM was designed to provide unit translation for the clinician. In older children, it is appropriate to express chronological age in whole-number years (e.g., 8 years rather than 8.25 years); however, infants may have age stored in months (or days for newborns). Depending on the age of the child, the ADM may prompt the clinician for gestational age if it is needed to identify the correct dosing region for a medication.

Renal impairment was included as a potential dosing parameter based upon a qualitative assessment by the ordering provider (i.e., "impaired" or "not impaired"), which is requested when nephrotoxic drugs are dosed. This assessment results in a suggestion of a different dosage of the drug. Including this among the dosing region parameters also serves as an important reminder to the ordering provider that the drug is renally cleared or accumulates metabolites and therefore requires consideration of dose adjustment.

The ADM dosing region selection process may require knowledge of a patient's indication. If so, the provider is prompted to select an indication within the CPOE client. Common indication-based dose adjustments include those used in cases of meningitis, osteomyelitis, or other infectious diseases. A second pathway to dosing by indication is to use an indication-specific order set for specialized dosing in support of unique disease states or clinical conditions such as sickle cell anemia, cystic fibrosis, or organ transplant. The ADM automatically infers the patient's indication from the order set identity and presents the user with the appropriate, specific spectrum of dosing regions. The user may be prompted further if additional indications are needed to define the dosing region.
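The age-unit translation might look like the following sketch; the cutoffs for switching among days, months, and whole-number years are assumptions made for illustration and are not the ADM's actual rules.

public final class AgeUnits {

    // Assumed display cutoffs: days for newborns, months for infants, whole years otherwise.
    public static String displayAge(int ageInDays) {
        if (ageInDays < 30) {
            return ageInDays + " days";
        } else if (ageInDays < 730) {
            return (ageInDays / 30) + " months";
        } else {
            return (ageInDays / 365) + " years"; // e.g., "8 years" rather than "8.25 years"
        }
    }

    public static void main(String[] args) {
        System.out.println(displayAge(12));   // 12 days
        System.out.println(displayAge(400));  // 13 months
        System.out.println(displayAge(3010)); // 8 years
    }
}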
Error prevention measures

When a single CPOE system is used for both children and adults, the system must differentiate between the 2 populations so that it knows when to apply the correct logic. It was recognized early in the planning process that most of the safety features of pediatric CPOE would be undermined if the wrong weight were entered for a patient. Pediatric patients tend to undergo changes in weight more often, and to a larger degree, than adults, and each manual update of that changing weight is an opportunity for error. To address this, the ADM was configured with several weight-based "pop-up" alerts (the first two checks are sketched in code after this list):

• The user cannot enter any medication without being prompted for a dosing weight if one is not already present. Patient weights are compared to either pediatric or neonatal growth curves stored within the CPOE Oracle database clinical content tables, and users are warned if a patient falls below the 3rd percentile or above the 97th percentile.

• The user is alerted if there is an extreme change in weight, typically a change of 10% from the previously recorded value. This threshold is configurable by care area.

• The user is warned if there is an extreme variance between the actual and dosing weights; this variance threshold is also configurable by care area.

• If a medication order was dosed using a weight that has since been updated, a reminder to "weight adjust" the medication dose is issued.

• Any patient parameter that is expected to change over time (i.e., dosing weight or actual weight) raises a less invasive alert (a colorful text-based tag next to the variable in question) if it has not been updated after a configurable number of days. Our expectation is that actual weights are updated daily and that dosing weights are updated based on the unit protocol and clinical condition.
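A rough sketch of the first two checks is shown below, assuming the growth-curve percentile has already been looked up from the stored curves; the method shape and message wording are ours.

import java.util.ArrayList;
import java.util.List;

public final class WeightAlerts {

    // Returns alert messages for a newly entered weight, or an empty list if none apply.
    public static List<String> checkNewWeight(double previousKg, double newKg,
                                              double changeThreshold,   // e.g., 0.10, configurable by care area
                                              double growthPercentile) { // from the stored growth curves
        List<String> alerts = new ArrayList<>();
        if (previousKg > 0 && Math.abs(newKg - previousKg) / previousKg > changeThreshold) {
            alerts.add("Extreme change from the previously recorded weight; please verify.");
        }
        if (growthPercentile < 3.0 || growthPercentile > 97.0) {
            alerts.add("Weight falls outside the 3rd-97th percentile on the growth curve.");
        }
        return alerts;
    }

    public static void main(String[] args) {
        // Hypothetical example: a 12 kg toddler re-weighed at 14 kg (a ~17% change).
        System.out.println(checkNewWeight(12.0, 14.0, 0.10, 55.0));
    }
}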
Because the pediatric CPOE system was based on a prior adult implementation, adult-specific decision support remains embedded in medication orderables throughout the system. Most orderables contain, for example, adult options for dose, route, and frequency, along with instructional text describing appropriate dose recommendations and other considerations specific to that drug. We recognized that it was critically important that pediatricians not be inadvertently presented with adult dosing advice and therefore used a 2-pronged strategy (suppression and redirection) for handling adult-oriented material that might be presented to the end user.

Suppression functionality was implemented for cases where pediatric dosing regions are not available for certain medications but adult-based material is available in the clinical content tables. Any dosing guidance present in the base form of the medication--which is assumed not to have been approved for use in pediatrics--is suppressed in the CPOE application code whenever the patient fits our definition of "pediatric." A countermeasure (i.e., a way to "suppress the suppression") was implemented for unique clinical situations where the same dosing principles apply regardless of whether CPOE considers the patient "pediatric" or "adult." The obstetrics service, for example, uses the same labor-and-delivery order sets whether the patient is 12 or 32 years old. Such order sets are designated safe for use in both adult and pediatric populations and are thus exempt from the suppressive logic.

Redirection functionality was implemented for cases where it is appropriate for the pediatric prescriber to avoid selecting a particular drug entirely. Insulin prescribing, for example, is a complicated endeavor that involves multiple drug orderables because it requires options for different forms of scheduled and supplemental insulin. Clinicians caring for adult patients have always had a full-page, graphical module ("Subcutaneous Insulin Advisor") that presents formal decision support. When the analogous pediatric interface was created, we recognized the risk of a pediatric clinician inadvertently activating the adult-oriented insulin advisor (e.g., with a misdirected mouse-click). We therefore created a logic module, triggered by the user's selection of the adult insulin advisor, that determines whether the patient meets the definition of "pediatric" and, if so, redirects the user to the pediatric version.
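A minimal sketch of the redirection idea, reusing the age- and weight-based definition of "pediatric" given earlier; the advisor identifiers are placeholders, not actual HEO module names.

public final class AdvisorRedirect {

    // Placeholder identifiers for illustration only.
    static final String ADULT_INSULIN_ADVISOR = "adult-subcutaneous-insulin";
    static final String PEDIATRIC_INSULIN_ADVISOR = "pediatric-subcutaneous-insulin";

    // If a pediatric patient triggers the adult insulin advisor, route to the pediatric version.
    public static String resolveAdvisor(String requestedAdvisor, boolean patientIsPediatric) {
        if (ADULT_INSULIN_ADVISOR.equals(requestedAdvisor) && patientIsPediatric) {
            return PEDIATRIC_INSULIN_ADVISOR;
        }
        return requestedAdvisor;
    }

    public static void main(String[] args) {
        System.out.println(resolveAdvisor(ADULT_INSULIN_ADVISOR, true));  // redirected
        System.out.println(resolveAdvisor(ADULT_INSULIN_ADVISOR, false)); // unchanged
    }
}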
Clinicians caring for adult patients have always had a full-page, graphical module (\"Subcutaneous Insulin Advisor\") that presents formal decision support. When the analogous pediatric interface was created, we recognized the risk of a pediatric clinician inadvertently activating the adult-oriented insulin advisor (e.g., with a misdirected mouse-click). Therefore, we created a logic module, triggered by the user's selection of the adult insulin advisor, that determines whether the patient meets the definition of \"pediatric\" and, if so, redirects the user to the pediatric version.\n[SUBTITLE] Analysis of voluntarily reported safety events [SUBSECTION] The voluntary Safety Reporting System (SRS) allows staff members to report any perceived safety issues within any DUH care environment, including DCH. DCH staff enters approximately 80 pediatric reports per month. Incidents are reported as being 1 of 9 event categories, and all incidents in the medications category are reviewed by a team of medication safety pharmacists and scored for patient severity. Events of severity of 3 or more (i.e., patient length of stay was increased by the event) are considered ADEs. A full description of the severity algorithm is reported elsewhere [24]. We compared the rate of harmful events (i.e., ADEs) per 1000 patient days and the fraction of total reported events that were ADEs pre- and post- deployment for critical care areas. The Pearson's chi-square test was used to assess statistical significance, and binomial 95% confidence intervals for proportions were calculated.\nThe voluntary Safety Reporting System (SRS) allows staff members to report any perceived safety issues within any DUH care environment, including DCH. DCH staff enters approximately 80 pediatric reports per month. Incidents are reported as being 1 of 9 event categories, and all incidents in the medications category are reviewed by a team of medication safety pharmacists and scored for patient severity. Events of severity of 3 or more (i.e., patient length of stay was increased by the event) are considered ADEs. A full description of the severity algorithm is reported elsewhere [24]. We compared the rate of harmful events (i.e., ADEs) per 1000 patient days and the fraction of total reported events that were ADEs pre- and post- deployment for critical care areas. The Pearson's chi-square test was used to assess statistical significance, and binomial 95% confidence intervals for proportions were calculated.", "Duke Children's Hospital (DCH) is a tertiary care facility within DUH that comprises 7 inpatient pediatric service locations: 2 general care medical wards, a pediatric intensive care unit (PICU), a neonatal intensive care unit (NICU), a bone marrow transplant (BMT) unit, a pediatric cardiac ICU, and a transitional care (i.e., step-down) unit. DCH averages 7000 pediatric admissions per year across 187 inpatient beds, approximately 50% of which are located in critical care wards. DCH employs approximately 197 attending physicians and 50 pediatric residents across 20 clinical service areas. 
Clinician workflow for medication ordering within the CPOE interface

Users enter the CPOE interface through the organizational electronic health record. The process of ordering medications using CPOE can be initiated either by searching for them using a dialog box within the application or by choosing an order set developed for a specific patient profile and then clicking on the hyperlink for an ADM-enabled medication. When a patient is admitted to DCH, details regarding the patient's identity, age, and current physical location are transmitted to the CPOE application from the hospital's admission, discharge, and transfer system. The ADM uses this information, along with prompted provider input and its underlying clinical content tables, to determine whether pediatric medication dosing logic is required (Figure 1).

Figure 1. Processing logic of the Advanced Dosing Model. A clinician initiates an order for a medication. ADM pre-processing logic verifies whether dosing regions exist for this drug. If so, the system retrieves the available patient information (chronological age, weight, and care area) and prompts the user for any additional information needed. Once all patient parameters are collected, the decision support algorithm resolves the list of all potential dosing regions from the clinical knowledge base. If the algorithm successfully identifies 1 dosing region, it is presented to the clinician and made available for calculations and screening. If no dosing region is available for the requested medication and patient parameters, or if multiple dosing regions are found, the system exits the ADM and the clinician is prompted to enter a medication dose manually.
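Read as pseudocode, the selection step in Figure 1 amounts to filtering the drug's dosing regions by the collected patient parameters and proceeding only when exactly one remains. The self-contained sketch below uses simplified stand-in types and invented matching criteria; it is not the ADM's actual resolution algorithm.

import java.util.List;
import java.util.Optional;

public final class RegionResolver {

    // Simplified stand-ins for the patient parameters and dosing regions discussed above.
    record Patient(double ageYears, double weightKg, String careArea, String indication) {}
    record Region(String indication, double minAgeYears, double maxAgeYears, String careArea,
                  double dosePerKgMg, int intervalHours) {}

    // Returns the single matching region, or empty to signal that manual dose entry is required.
    public static Optional<Region> resolve(List<Region> regionsForDrug, Patient p) {
        List<Region> matches = regionsForDrug.stream()
                .filter(r -> p.ageYears() >= r.minAgeYears() && p.ageYears() < r.maxAgeYears())
                .filter(r -> r.careArea() == null || r.careArea().equals(p.careArea()))
                .filter(r -> r.indication() == null || r.indication().equals(p.indication()))
                .toList();
        return matches.size() == 1 ? Optional.of(matches.get(0)) : Optional.empty();
    }

    public static void main(String[] args) {
        // Hypothetical regions; the numbers are illustrative, not clinical guidance.
        List<Region> ampicillin = List.of(
                new Region("meningitis", 0.08, 18.0, null, 50.0, 6),
                new Region("mild to moderate infection", 0.08, 18.0, null, 25.0, 6));
        Patient infant = new Patient(0.5, 7.2, "general ward", "meningitis");
        System.out.println(resolve(ampicillin, infant)); // exactly one region matches
    }
}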
Once within the ADM logic, the top half of the screen displays all patient parameters and any relevant considerations or warnings for the drug; the bottom half displays the actions to be taken. If an allergy to the medication is known, warnings are shown at this step, and the user must override the allergy alert twice in the bottom half of the screen before being permitted to move forward with medication ordering. If there are no warnings, the ADM prompts for any additional information required (e.g., indication) and presents the appropriate dose/frequency combinations determined by the patient parameters. For example, when ampicillin is chosen, the system automatically narrows the available dosing regions based on the patient's age, weight, and care intensity as defined by location. Because ampicillin dosing regions differ by indication, the system asks the user for this information (e.g., meningitis) and then provides appropriate dosing guidance, automatically displaying the correct dose and interval to be ordered (Figure 2). During this process, the ADM has calculated the dosage based on weight, applied custom numeric rounding, and enforced maximum and minimum doses. Unlike other CPOE applications that present all possible combinations of ordering choices, our application uses the ADM to display a limited subset of recommendations that includes only the options deemed appropriate for the patient in question. This greatly reduces the display "noise" associated with long lists and helps prevent potential errors due to incorrect selection.

Figure 2. Screenshot of pediatric medication dosing. This screenshot shows the presentation of an ampicillin dosing region for an infant at Duke Children's Hospital. The left panel displays the current orders for the patient. The top right panel presents the recommended dosing region based on the patient's dosing weight, age, physical location, and indication. The provider may click and select the suggested value (50 mg/kg) or enter his or her own dosage manually as an override.

The user can either select this dosing region or manually enter the desired dosage (i.e., an override). From this point, the user is taken through a series of screens where he or she may enter the dose form, start time, duration, and any additional comments for the pharmacy department, which are stored in a text field. The last screen shows a summary of the order and all selections made, at which point the user may accept, save as a draft, modify, or exit without saving or ordering. After this terminal step, the order is electronically routed to the pharmacy department for processing. If dosing regions are undefined or do not fit the patient-specific criteria, the system exits the ADM and prompts the user to manually enter the desired medication order--similar to the process that routinely occurred on paper prior to CPOE deployment.
Guarding against over- and under-dosing

In allowing providers to override the dosing region logic, it became clear during the testing phase that a provider could mistype the dose and order far too much or too little drug. As a result, we programmed extra safeguards into the dosing region knowledge base by having the ADM alert the user whenever a value is entered that falls outside the minimum or maximum drug doses permitted by any dosing region associated with that medication. To override this, the user must enter the exact "aberrant" value a second time before the ordering process will move forward. We chose this route, rather than requiring the user to enter a reason for the override, to guard against cases where an override is justified and yet an unintentional, excessively extreme dosage is still entered. We believe this level of active participation by the user (as opposed to the oft-used passive alerting, e.g., having to click an "OK" button) strikes a reasonable balance between preventing errors and minimizing annoyance. In every case within the CPOE application where alerting methods were used, we recognized that over-alerting (such that the warnings no longer command the user's attention) is as ineffective as no warnings at all and attempted to set alert thresholds accordingly.
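One way to read this safeguard as code is shown below: doses within the bounds permitted by the drug's dosing regions pass through, while an out-of-bounds dose is accepted only if the identical value is entered a second time. The method shape is our own simplification of the interaction.

public final class ExtremeDoseGuard {

    // Accept an override when it is within bounds, or when the prescriber has confirmed
    // an out-of-bounds dose by re-entering exactly the same "aberrant" value.
    public static boolean acceptDose(double firstEntry, Double confirmationEntry,
                                     double minPermitted, double maxPermitted) {
        boolean withinBounds = firstEntry >= minPermitted && firstEntry <= maxPermitted;
        if (withinBounds) {
            return true; // no confirmation needed
        }
        return confirmationEntry != null && confirmationEntry == firstEntry;
    }

    public static void main(String[] args) {
        // Hypothetical bounds of 100-500 mg across all dosing regions for a drug.
        System.out.println(acceptDose(300.0, null, 100.0, 500.0));  // true: within bounds
        System.out.println(acceptDose(800.0, 800.0, 100.0, 500.0)); // true: re-entered exactly
        System.out.println(acceptDose(800.0, 80.0, 100.0, 500.0));  // false: confirmation differs
    }
}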
We acknowledge that relying solely upon voluntarily reported events may be problematic due to the well-known issues of reporting bias, volume, seasonality, and anonymity concerns [30]. However, voluntary reporting data have been used elsewhere to evaluate pediatric CPOE systems to better understand the effects of the implementation when a more rigorous prospective study is not possible [12]. Furthermore, SRS is well established within our health system and has become an integral part of the culture of safety at Duke Medicine.\nWith these caveats in mind, we collected all reported harmful ADEs (i.e., at the minimum, transient adverse patient effects occurred that required some corrective therapy or increased length of stay) [24,25]. The ADE rate decreased 42.9% (p = 0.012) and 46.4% (p = 0.006) in the PICU and NICU units, respectively. Similarly, the percentage of total reports that were severe ADEs decreased significantly in each unit (Table 4). We cannot rule out the effects of reporter bias, and event volume is too low to look at the reports in terms of categories of system failures and attributable causes. Pediatrician review of the event narratives entered by the medication safety pharmacists suggests that there may be fewer PICU reports within 2 primary areas of acknowledged weakness in medication processing--incorrect ordering and order transcription--although the data are sparse. Although we were unable to prospectively study the impact of the advanced dosing model on patient safety, this retrospective analysis of voluntarily reported safety data suggests that we have improved the safety of medication dosing in our pediatric critical care population.\nVoluntarily reported adverse drug events pre- and post-deployment of the pediatric Advanced Dosing Model\nPre-period begins on 9/17/2004 and ends the day before the CPOE deployment dates for each unit described in Table 1. Post-period begins the day of CPOE deployment and ends 12/31/2009. CI = confidence interval. *Significant by 2-way Pearson's chi-square test.
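For readers who want to reproduce the style of pre/post comparison summarized in Table 4, the sketch below shows how a two-way Pearson chi-square test on report counts could be run in Python with SciPy. The counts are placeholders invented only to illustrate the structure of the test; the study's actual Table 4 values are not reproduced here.

```python
from scipy.stats import chi2_contingency

# Placeholder counts: [harmful ADE reports, other reports] for pre- and post-deployment.
# These are illustrative values, not the study's data.
pre_post = [[40, 360],    # pre-CPOE period
            [23, 377]]    # post-CPOE period

chi2, p, dof, expected = chi2_contingency(pre_post, correction=False)

pre_rate = pre_post[0][0] / sum(pre_post[0])
post_rate = pre_post[1][0] / sum(pre_post[1])
relative_change = (post_rate - pre_rate) / pre_rate

print(f"ADE proportion pre: {pre_rate:.1%}, post: {post_rate:.1%} "
      f"(relative change {relative_change:+.1%}), chi-square p = {p:.3f}")
```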
[SUBTITLE] Comparison to other pediatric dosing models [SUBSECTION] A comprehensive comparison to other systems is hampered by the lack of rigorous reports on CPOE dosing rule development in the formal literature. Commercial CPOE systems, including Horizon Expert Orders, often use formulary references such as First Databank [31] or Lexicon-Multum [32] for static dosing advice (e.g., drug-drug interactions or allergy alerts) but not for active dosing assistance or management of the dosing process where tailored doses are suggested based on the patient's clinical profile. Although true clinical decision support knowledge bases are available, these are focused on adults, require much manipulation for use in the hospital setting, and are underdeveloped for pediatrics [33].\nWizOrder, the predecessor of our CPOE system, did evolve to include weight-based dosing [26], but the lack of a full report that describes its implementation is a barrier to comparative analysis. In a 1-page conference proceeding, the authors include the concept of dosing weight, as well as a process by which existing orders are reviewed when the patient's weight changes significantly. Our weight-based dosing also includes checking against growth curves to ensure that the change in weight from the prior value is logical. When 1 new drug is dosed on an updated weight, the other active drug orders for the patient are automatically checked to see if the dosing region is still appropriate or should be updated.\nKillelea and colleagues published a description of their pediatric dosing decision support rules for a large teaching hospital [34]. Like our ADM, their method included designing rules by committee based on medication, age, and weight. However, they do not go into much detail regarding how the weights are managed, nor do they describe alerting functionality surrounding weight as we have at DCH. Additionally, their consideration of indication is limited to displaying the dosing guidance for the default indication and providing pop-up windows that describe dosing rules for other less common indications. Patient-specific indication details are not considered by the system, and location-based customization (i.e., care intensity) is not included for presenting a dosing suggestion.
Rounding is configurable on a per-medication rule level, but their system as reported is not configurable per location as is the case at DCH.\nOur incorporation of location thus comes into focus as an important, novel aspect of our model. We use location as a critical identifier of care intensity, which is especially important given our large BMT and ICU population. These patients often receive augmented doses of medications given the severity of their illness--doses that may be severely harmful to other pediatric patients. It is therefore critical that the decision support inherent to the dosing regions \"locks\" this content to only the BMT and ICU units. Similarly, cystic fibrosis patients have equally unique medication needs, and so the inclusion of an indication parameter in the ADM allows us to focus clinical content just to this specialized population. Overall, the modular nature of the dosing region model allows us to easily develop new dosing scenarios for specialized populations, especially if the pediatric population profile at DCH changes over time.\n[SUBTITLE] Limitations and lessons learned [SUBSECTION] The ADM represents a unique, modular approach to manage the logistics of adding increasingly complex clinical decision support information to an existing CPOE application. This allowed us to tailor the adult tool to the unique needs of pediatrics. However, in practice, there are practical and clinical considerations regarding whether a subpopulation warrants specialized dosing using an ADM approach. There can be significant resource management, implementation, and maintenance trade-offs between over-defining and under-defining patient subpopulations. Thus, a core limitation with the dosing region model of medication ordering is that it is not practical to develop a dosing region for every conceivable scenario of pediatric drug dosing. As a result, there will always be certain medications without any dosing regions or instances of dosing region gaps where the ADM is unable to suggest an appropriate dose. Use of dosing regions, therefore, requires that end user clinicians resolve these gaps. In all cases where the ADM is not able to suggest unique dosing, the patient parameters, medications involved, and manually entered dosages are recorded in an audit file for further review to determine if systematic needs are being unmet.\n[SUBTITLE] Addressing patient weight in the provider work flow [SUBSECTION] Because the ADM relies so heavily on patient weight in pediatric dosing, it became critically important to study the workflow processes that shape how a provider interprets the definition of weight in order to guard against unexpected results. Potential for unintended consequences is significant in a culture that thinks in pounds but doses in kilograms: a weight recorded in pounds but entered as kilograms would translate into greater than double the intended dose of medication. Providing alerts to address both extremes of the possible weight continuum addresses this issue but has its own challenges. DCH has a large patient population with disease states or chronic conditions that often result in cases of extremely low weight due to poor growth. Even among the general care areas, the increasing prevalence of obese children requires frequent adjustments to commonly accepted dosing paradigms. As a result, crafting alerts that would remind the provider and still avoid alert fatigue was extremely important. Once a patient's weight was entered in the CPOE application, it soon became necessary to define policies surrounding weight definition and responsibilities for its updating during the course of a patient's stay. Although nurses updated the actual weight in their stand-alone documentation system, it was important that this weight was simultaneously updated in the CPOE system. Because of the impact of weight on medication dosing, it was decided that only physicians or physician extenders could enter or update this value.\n[SUBTITLE] Overreliance on technology [SUBSECTION] One unexpected side effect of the dosing region model was that providers quickly became accustomed to the concept and were concerned when no dosing region was present.
Frontline users expected the computer to prevent any bad decisions, and yet, the clinical advisory committee was reluctant to make any restriction on dosing regions that would compromise a provider's flexibility in ordering. Providers had to be educated to think critically about the dosing recommendations for each patient, and were reminded that the thought process in manually entering a drug dosage without a dosing region within CPOE is nearly identical to the prior paper-based ordering process.\n[SUBTITLE] The importance of team composition [SUBSECTION] Finally, computerized decision support design is not a typical knowledge area for most clinical practitioners, and, conversely, IT system developers often do not possess an understanding of the dynamic health care environment in which their applications are used. Because CPOE is technically sophisticated with immense clinical impact, it is extremely important that the design team include individuals who can bridge the traditional IT and clinical specialty divide. Such individuals review functional and technical specifications with an eye for clinical impact, potential functionality conflicts, and knowledge base gaps.", "Users enter the CPOE interface through the organizational electronic health record. The process of ordering medications using CPOE can be initiated by either searching for them using a dialog box within the application or choosing an order set developed for a specific patient profile and then clicking on the hyperlink for an ADM-enabled medication. When a patient is admitted to DCH, details regarding the patient's identity, age, and current physical location are transmitted to the CPOE application from the hospital's admission, discharge, and transfer system. The ADM uses this information, along with prompted provider input and its underlying clinical content tables, to determine whether pediatric medication dosing logic is required (Figure 1).\nProcessing logic of the Advanced Dosing Model. A clinician initiates an order for a medication. ADM pre-processing logic verifies if dosing regions exist for this drug.
If so, the system will retrieve the available patient information (chronological age, weight, and care area) and prompt the user for any additional information needed. Once all patient parameters are collected, the decision support algorithm will resolve the list of all potential dosing regions from the clinical knowledge base. If the algorithm successfully identifies 1 dosing region, it is presented to the clinician and made available for calculations and screening. If there are no dosing regions available for the requested medication and patient parameters, or multiple dosing regions are found, the system exits the ADM model and the clinician is prompted to enter a medication dose manually.\nOnce within the ADM logic, the top half of the screen will display all patient parameters and any relevant considerations or warnings for the drug. The bottom half displays actions to be taken. If an allergy to a medication is known, warnings will be shown at this step, and the user must override the allergy alert twice in the bottom half of the screen before being permitted to move forward with medication ordering. If there are no warnings, the ADM will prompt for any additional information (i.e., indication) and presents the proper dose/frequency combinations determined by the patient parameters. For example, when ampicillin is chosen, the system automatically narrows down the available dosing regions based on the patient's age, weight, and care intensity as defined by location. Because ampicillin dosing regions differ based on indication, it asks the user for this information (e.g., meningitis), and then the system provides the user with appropriate dosing guidance and automatically displays the correct dose and interval to be ordered (Figure 2). During this process, the ADM has calculated dosage based on weight, applied custom numeric rounding, and enforced maximum and minimum doses. Unlike other CPOE applications that present all possible choices of ordering combinations, our application uses the ADM to display a limited subset of recommendations that only includes options deemed appropriate for the patient in question. This greatly reduces display \"noise\" associated with long lists and helps prevent potential errors due to incorrect selection.\nScreenshot of pediatric medication dosing. This screenshot shows the presentation of an ampicillin dosing region for an infant at Duke Children's Hospital. The left panel displays the current orders for the patient. The right top panel presents the recommended dosing region based on the patient's dosing weight, age, physical location, and indication. The provider may click and select the suggested value (50 mg/kg) or enter his or her own dosage manually as an override.\nThe user can either select this dosing region or manually enter the dosage desired (i.e., an override). From this point, the user will be taken through a series of screens where he or she may enter dose form, start time, duration, and any additional comments for the pharmacy department that are stored in a text field. The last screen will show a summary of the order and all selections made, at which point the user may accept, save as a draft, modify, or exit without saving or ordering. After this terminal step, the order is electronically routed to the pharmacy department for processing. 
If dosing regions are undefined or do not fit the patient-specific criteria, the system will exit the ADM and prompt the user to manually enter the desired medication order--similar to the process that routinely occurred on paper prior to CPOE deployment.", "In allowing providers to override the dosing region logic, it became clear during the testing phase that there is potential for a provider to mistype the dose and order far too much or too little drug. As a result, we programmed extra safeguards into the dosing region knowledge base by having the ADM alert the user if a value is entered that exceeds the minimal or maximal drug doses permitted by any dosing region associated with that medication. To override this, the user must enter the exact \"aberrant\" value a second time before the ordering process will move forward. We chose this route as opposed to requiring that the user enter a reason for the override to guard against cases where an override is justified and yet an unintentional, excessively extreme dosage is still entered. We believe this level of active participation by the user (as opposed to the oft-used passive alerting; e.g., having to click an \"OK\" button) strikes a reasonable balance in preventing errors with minimal annoyance. In every case within the CPOE application where alerting methods were used, we recognized that over-alerting (such that the warnings no longer command the user's attention) is as ineffective as no warnings at all and attempted to set alert thresholds accordingly.", "Given the extensive literature available that discusses the impact that CPOE may have in terms of unintended patient harm in critical care areas [12,13,15,18,19], we felt compelled to evaluate our intervention using available safety and quality resources. We examined data from our organizational voluntary Safety Reporting System (SRS) to ask whether the rates of adverse drug events increased in pediatric critical care after CPOE deployment. Full details regarding the SRS system have been thoroughly described previously [24,25]. We acknowledge that relying solely upon voluntarily reported events may be problematic due to the well-known issues of reporting bias, volume, seasonality, and anonymity concerns [30]. However, voluntary reporting data have been used elsewhere to evaluate pediatric CPOE systems to better understand the effects of the implementation when a more rigorous prospective study is not possible [12]. Furthermore, SRS is well established within our health system and has become an integral part of the culture of safety at Duke Medicine.\nWith these caveats in mind, we collected all reported harmful ADEs (i.e., at the minimum, transient adverse patient effects occurred that required some corrective therapy or increased length of stay) [24,25]. The ADE rate decreased 42.9% (p = 0.012) and 46.4% (p = 0.006) in the PICU and NICU units, respectively. Similarly, the percentage of total reports that were severe ADEs decreased significantly in each unit (Table 4). We cannot rule out the effects of reporter bias, and event volume is too low to look at the reports in terms of categories of system failures and attributable causes. Pediatrician review of the event narratives entered by the medication safety pharmacists suggests that there may be fewer PICU reports within 2 primary areas of acknowledged weakness in medication processing--incorrect ordering and order transcription--although the data are sparse. 
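The double-entry override described above is essentially a small confirmation protocol. The following Python sketch, with hypothetical function names and a pluggable prompt callable, shows one way such a check could be wired up; it is illustrative only and not the ADM's production logic.

```python
def confirm_out_of_range_dose(entered_mg, region_min_mg, region_max_mg, prompt):
    """Return the dose to order, or None if the override is abandoned.

    If the entered value falls outside every dosing region's min/max for the
    drug, the user must re-type the exact same value to proceed (active
    confirmation rather than a passive 'OK' click).
    """
    if region_min_mg <= entered_mg <= region_max_mg:
        return entered_mg                      # within permitted bounds, no alert
    reentered = prompt(
        f"{entered_mg} mg is outside the permitted range "
        f"({region_min_mg}-{region_max_mg} mg). Re-enter the exact dose to confirm: ")
    try:
        return entered_mg if float(reentered) == entered_mg else None
    except ValueError:
        return None

# Example: simulate a user confirming an intentionally high dose.
result = confirm_out_of_range_dose(450.0, 50.0, 400.0, prompt=lambda msg: "450")
print(result)   # 450.0 -- confirmed; a mismatch or non-numeric entry would return None
```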
Although we were unable to prospectively study the impact of the advanced dosing model on patient safety, this retrospective analysis of voluntarily reported safety data suggests that we have improved the safety of medication dosing in our pediatric critical care population.\nVoluntarily reported adverse drug events pre- and post-deployment of the pediatric Advanced Dosing Model\nPre-period begins on 9/17/2004 and ends the day before the CPOE deployment dates for each unit described in Table 1. Post-period begins the day of CPOE deployment and ends 12/31/2009. CI = confidence interval. *Significant by 2-way Pearson's chi-square test.", "A comprehensive comparison to other systems is hampered by the lack of rigorous reports on CPOE dosing rule development in the formal literature. Commercial CPOE systems, including Horizon Expert Orders, often use formulary references such as First Databank [31] or Lexicon-Multum [32] for static dosing advice (e.g., drug-drug interactions or allergy alerts) but not for active dosing assistance or management of the dosing process where tailored doses are suggested based on the patient's clinical profile. Although true clinical decision support knowledge bases are available, these are focused on adults, require much manipulation for use in the hospital setting, and are underdeveloped for pediatrics [33].\nWizOrder, the predecessor of our CPOE system, did evolve to include weight-based dosing [26], but the lack of a full report that describes its implementation is a barrier to comparative analysis. In a 1-page conference proceeding, the authors include the concept of dosing weight, as well as a process by which existing orders are reviewed when the patient's weight changes significantly. Our weight-based dosing also includes checking against growth curves to ensure that the change in weight from the prior value is logical. When 1 new drug is dosed on an updated weight, the other drug orders live on a patient are automatically checked to see if the dosing region is still appropriate or should be updated.\nKillelea and colleagues published a description of their pediatric dosing decision support rules for a large teaching hospital [34]. Like our ADM, their method included designing rules by committee based on medication, age, and weight. However, they do not go into much detail regarding how the weights are managed, nor do they describe alerting functionality surrounding weight as we have at DCH. Additionally, their consideration of indication is limited to displaying the dosing guidance for the default indication and providing pop-up windows that describe dosing rules for other less common indications. Patient-specific indication details are not considered by the system, and location-based customization (i.e., care intensity) is not included for presenting a dosing suggestion. Rounding is configurable on a per-medication rule level, but their system as reported is not configurable per location as is the case at DCH.\nOur incorporation of location thus comes into focus as an important, novel aspect of our model. We use location as a critical identifier of care intensity, which is especially important given our large BMT and ICU population. These patients often receive augmented doses of medications given the severity of their illness--doses that may be severely harmful to other pediatric patients. It is therefore critical that the decision support inherent to the dosing regions \"locks\" this content to only the BMT and ICU units. 
Similarly, cystic fibrosis patients have equally unique medication needs, and so the inclusion of an indication parameter in the ADM allows us to focus clinical content just to this specialized population. Overall, the modular nature of the dosing region model allows us to easily develop new dosing scenarios for specialized populations, especially if the pediatric population profile at DCH changes over time.", "The ADM represents a unique, modular approach to manage the logistics of adding increasingly complex clinical decision support information to an existing CPOE application. This allowed us to tailor the adult tool to the unique needs of pediatrics. However, in practice, there are practical and clinical considerations regarding whether a subpopulation warrants specialized dosing using an ADM approach. There can be significant resource management, implementation, and maintenance trade-offs between over-defining and under-defining patient subpopulations. Thus, a core limitation with the dosing region model of medication ordering is that it is not practical to develop a dosing region for every conceivable scenario of pediatric drug dosing. As a result, there will always be certain medications without any dosing regions or instances of dosing region gaps where the ADM is unable to suggest an appropriate dose. Use of dosing regions, therefore, requires that end user clinicians resolve these gaps. In all cases where the ADM is not able to suggest unique dosing, the patient parameters, medications involved, and manually entered dosages are recorded in an audit file for further review to determine if systematic needs are being unmet.", "Because the ADM relies so heavily on patient weight in pediatric dosing, it became critically important to study the workflow processes that shape how a provider interprets the definition of weight in order to guard against unexpected results. Potential for unintended consequences is significant in a culture that thinks in pounds but doses in kilograms, which would translate into greater than double the intended dose of medication. Providing alerts to address both extremes of the possible weight continuum addresses this issue but has its own challenges. DCH has a large patient population with disease states or chronic conditions that often result in cases of extremely low weight due to poor growth. Even among the general care areas, the increasing prevalence of obese children requires frequent adjustments to commonly accepted dosing paradigms. As a result, crafting alerts that would remind the provider and still avoid alert fatigue was extremely important. Once a patient's weight was entered in the CPOE application, it soon became necessary to define policies surrounding weight definition and responsibilities for its updating during the course of a patient's stay. Although nurses updated the actual weight in their stand-alone documentation system, it was important that this weight was simultaneously updated in the CPOE system. Because of the impact of weight on medication dosing, it was decided that only physicians or physician extenders could enter or update this value.", "One unexpected side effect of the dosing region model was that providers quickly became accustomed to the concept and were concerned when no dosing region was present. Frontline users expected the computer to prevent any bad decisions, and yet, the clinical advisory committee was reluctant to make any restriction on dosing regions that would compromise a provider's flexibility in ordering. 
Providers had to be educated to think critically about the dosing recommendations for each patient, and were reminded that the thought process in manually entering a dosage drug without a dosing region within CPOE is nearly identical to the prior paper-based ordering process.", "Finally, computerized decision support design is not a typical knowledge area for most clinical practitioners, and, conversely, IT system developers often do not possess an understanding of the dynamic health care environment in which their applications are used. Because CPOE is technically sophisticated with immense clinical impact, it is extremely important that the design team include individuals who can bridge the traditional IT and clinical specialty divide. Such individuals review functional and technical specifications with an eye for clinical impact, potential functionality conflicts, and knowledge base gaps.", "In this study, we describe the implementation of a pediatric Advanced Dosing Model that acts as an enhancement to an adult-centric, vended CPOE system in order to meet the unique challenges of pediatric care. Despite some limitations, the ADM provides a powerful way to guide pediatricians through the medication ordering process. The model uses knowledge of the patient's state to deliberate on care parameters and suggest the appropriate dose.\nEnhancing an adult-focused CPOE system for safe pediatric medication management is a daunting task. When undertaking such a project, it is essential that physicians and pharmacists with formal informatics training serve as an interface between the clinicians and the development team. We hope that the strategies described here will serve to guide other pediatric institutions as they develop their own plans for the implementation of pediatric computerized provider order entry.", "The authors declare that they have no competing interests.", "JMF conceived the project, designed the technical and functional specifications, made critical design decisions, oversaw application deployment, and drafted the manuscript. JJ and PS designed the dosing region functionality and supported frontline deployment. CMD reviewed reported safety incidents and advised on the manuscript. MH performed safety data analysis, interpreted the results, and drafted the final manuscript. AA supported project design and deployment.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6947/11/14/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
A multi-centre cohort study shows no association between experienced violence and labour dystocia in nulliparous women at term.
21338523
Although both labour dystocia and domestic violence during pregnancy are associated with adverse maternal and fetal outcome, evidence in support of a possible association between experiences of domestic violence and labour dystocia is sparse. The aim of this study was to investigate whether self-reported history of violence or experienced violence during pregnancy is associated with increased risk of labour dystocia in nulliparous women at term.
BACKGROUND
A population-based multi-centre cohort study. A self-administrated questionnaire collected at 37 weeks of gestation from nine obstetric departments in Denmark. The total cohort comprised 2652 nulliparous women, among whom 985 (37.1%) met the protocol criteria for dystocia.
METHODS
Among the total cohort, 940 (35.4%) women reported experience of violence, and among these, 66 (2.5%) women reported exposure to violence during their first pregnancy. Further, 39.5% (n = 26) of those had never been exposed to violence before. Univariate logistic regression analysis showed no association between history of violence or experienced violence during pregnancy and labour dystocia at term, crude OR 0.91, 95% CI (0.77-1.08), OR 0.90, 95% CI (0.54-1.50), respectively. However, violence exposed women consuming alcoholic beverages during late pregnancy had increased odds of labour dystocia, crude OR 1.45, 95% CI (1.07-1.96).
RESULTS
Our findings indicate that nulliparous women who have a history of violence or experienced violence during pregnancy do not appear to have a higher risk of labour dystocia at term, according to the definition of labour dystocia in this study. Additional research on this topic would be beneficial, including further evaluation of the criteria for labour dystocia.
CONCLUSIONS
[ "Adolescent", "Adult", "Alcohol Drinking", "Birth Weight", "Cohort Studies", "Denmark", "Domestic Violence", "Dystocia", "Female", "Humans", "Logistic Models", "Odds Ratio", "Parity", "Pregnancy", "Risk Factors", "Self Report", "Surveys and Questionnaires", "Young Adult" ]
3052209
null
null
Methods
The material used in this study originates from the Danish Dystocia Study (DDS), a population-based multi-centre cohort study, and 8099 nulliparous women were potentially eligible for inclusion in the study [8-10]. However, 6356 women were invited to the DDS study (external drop-out was 21.5%) and 5484 women accepted participation. For the current sub-study, a data set on 2652 nulliparous women who fulfilled the inclusion criteria (showed below) was available for analyses of exposure to violence before and during pregnancy. Among these, 985 (37.1%) met the protocol criteria for labour dystocia (Table 1). These diagnostic criteria are in accordance with the American College of Obstetrics and Gynecology (ACOG) criteria for dystocia in labour's second stage [6] and also with the criteria for labour dystocia in first and second stage described by the Danish Society for Obstetrics and Gynecology [39,40]. The diagnosis prompted augmentation (i.e. with oxytocin stimulation) [8-10]. Definition of stages and phases of labour and diagnostic criteria for dystocia for current sub-study [8-10]. Data were collected prospectively between May 2004 and July 2005. Participants were recruited from nine obstetric departments in Denmark with annual birth rates between 850-5400 per year. The departments were four large university hospitals, three county hospitals, and two local district departments. Recruitment of the women took place in the antenatal clinics at 33 gestational weeks, and baseline information was collected at 37 gestational weeks. Inclusion criteria were Danish speaking (i.e. reading/understanding) nulliparous women at 18 years of age or older, with a singleton pregnancy in cephalic presentation and no planned elective caesarean section or induction of labour. Exclusion criteria were nulliparous women with a delivery < 37 or > 42 weeks of gestation, induction, elective caesarean section and breech presentation (n = 1115 or 17.5% in DDS). All data were based on a self-administrated questionnaire and on information contained in obstetric records filled out by the midwives at admission and postpartum. Forty percent of the questionnaires were completed in an internet version. Fourteen (0.5%) of the 2652 women did not answer the questions about violence and were classified as having no exposure to violence. Eight items in the questionnaire dealt with violence and originated from the short form of the Conflict Tactics Scale (CTS2S) [41]. This instrument has been used in large population-based studies in Denmark, and translation from English to Danish and back translation to English were performed prior to the Danish Health and Morbidity survey 2000 [42]. The questions were adapted for a pregnant cohort in the DDS [8-10]. Three alternatives were provided as possible answers to the various exposure questions: 'yes during this pregnancy', 'yes earlier', and 'no never'. Women were not required to provide information concerning the number of episodes of violence that had occurred (Additional file 1). 'History of violence' was defined as experience of violence ever in lifetime before and/or during pregnancy, 'Violence before pregnancy' as experienced violence ever in lifetime before pregnancy, 'Violence during pregnancy' as experienced violence during pregnancy (with or without violence before pregnancy) and 'Violence for the first time during pregnancy' as experienced violence during pregnancy without experienced violence before pregnancy. 
Further, for the purpose of analysis, violence was categorized as i) threat of violence, ii) physical violence, iii) sexual violence, and iv) serious violence. However, a more detailed description of the prevalence of violence will be published elsewhere by another research group. For the purpose of the current sub-study, the concept domestic violence was defined as exposure to psychological and/or physical abuse by 'Your husband/Co-habitant' or 'A person you know very well in your family', according to the first two alternatives in question 9 in the questionnaire (Additional file 1). Background and lifestyle factors were classified as follows. Maternal age was classified as 18-24, 25-29, 30-34 and >34 years. Country of origin was classified according to whether the woman was born in Denmark, in another Nordic country, or in other country. Cohabiting status was divided into yes or no. Educational status was dichotomised as ≤ 10 years or > 10 years and employment status as employed or unemployed (including voluntary unemployed or studying). Smoking status was classified as "yes" (if the woman was a daily smoker or was smoking at some point during pregnancy) or "no" (never smoked or alternatively, if she had ceased before pregnancy) and use of alcohol as "yes" (if the woman had been drinking alcohol during pregnancy at the time when the questionnaire was administered) or "no" (if the woman had been drinking solely alcohol-free drinks). Body mass index (BMI) was calculated from maternal weight and height before the pregnancy and classified as normal or low weight if BMI was ≤ 25, or overweight when > 25. Infant birth weight was dichotomised as < 3500 g or ≥ 3500 g and delivery mode as partus normalis (PN) or instrumental delivery, including caesarean section and vacuum extraction (VE). [SUBTITLE] Ethics [SUBSECTION] Since no invasive procedures were applied in the study, no Ethics Committee System approval was required by Danish law. The policy of the Helsinki Declaration was followed throughout the data collection and analyses. Written consent was obtained and person-specific data were protected by codes. Permission to establish the database was obtained from the Danish Data Protection Agency (j. no. 2004-41-3995). Since no invasive procedures were applied in the study, no Ethics Committee System approval was required by Danish law. The policy of the Helsinki Declaration was followed throughout the data collection and analyses. Written consent was obtained and person-specific data were protected by codes. Permission to establish the database was obtained from the Danish Data Protection Agency (j. no. 2004-41-3995). [SUBTITLE] Statistical methods [SUBSECTION] Chi-square analysis was used to investigate differences in background characteristics between women who were exposed to violence and women not exposed to violence. Odds ratios (OR) and 95% confidence intervals (95% CI) were calculated for the crude associations between various background- and lifestyle characteristics and labour dystocia, with dystocia as the dependent variable for logistic regression. Age was dichotomised as ≤ 24 or >24 years and country of origin as Danish or non-Danish. Univariate logistic regression was used to analyse the crude odds ratios for dystocia in relation to various background- and lifestyle characteristics and self-reported history of violence. 
Further, multiple regression was used to analyse domestic violence (solely) and history of violence as independent variables (two different analysis) together with the other well-documented maternal factors (maternal age, BMI and smoking) associated with dystocia. Odds ratios were used as estimates of relative risk. Statistical significance was accepted at p < 0.05. Statistical analyses were performed using the Statistical Package for Social Sciences (SPSS) version 16.0 for Windows. Chi-square analysis was used to investigate differences in background characteristics between women who were exposed to violence and women not exposed to violence. Odds ratios (OR) and 95% confidence intervals (95% CI) were calculated for the crude associations between various background- and lifestyle characteristics and labour dystocia, with dystocia as the dependent variable for logistic regression. Age was dichotomised as ≤ 24 or >24 years and country of origin as Danish or non-Danish. Univariate logistic regression was used to analyse the crude odds ratios for dystocia in relation to various background- and lifestyle characteristics and self-reported history of violence. Further, multiple regression was used to analyse domestic violence (solely) and history of violence as independent variables (two different analysis) together with the other well-documented maternal factors (maternal age, BMI and smoking) associated with dystocia. Odds ratios were used as estimates of relative risk. Statistical significance was accepted at p < 0.05. Statistical analyses were performed using the Statistical Package for Social Sciences (SPSS) version 16.0 for Windows.
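Although the analyses were run in SPSS 16.0, the structure of the univariate (crude) and multiple logistic regression models described above can be illustrated with standard tooling. The sketch below uses statsmodels on simulated data; the variable names (dystocia, history_of_violence, age_gt24, bmi_gt25, smoker) and the simulated values are assumptions made only to show the modelling steps, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2652  # cohort size, used only to shape the simulated example data
df = pd.DataFrame({
    "dystocia": rng.integers(0, 2, n),             # 1 = met protocol criteria for dystocia
    "history_of_violence": rng.integers(0, 2, n),  # 1 = violence before and/or during pregnancy
    "age_gt24": rng.integers(0, 2, n),
    "bmi_gt25": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
})

# Univariate (crude) model: dystocia ~ history of violence
crude = smf.logit("dystocia ~ history_of_violence", data=df).fit(disp=False)

# Multiple model: exposure plus known maternal risk factors
adjusted = smf.logit("dystocia ~ history_of_violence + age_gt24 + bmi_gt25 + smoker",
                     data=df).fit(disp=False)

for name, model in [("crude", crude), ("adjusted", adjusted)]:
    or_ = np.exp(model.params["history_of_violence"])
    lo, hi = np.exp(model.conf_int().loc["history_of_violence"])
    print(f"{name} OR for history of violence: {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```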
null
null
null
null
[ "Background", "Ethics", "Statistical methods", "Results", "Discussion", "Methodological discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Accumulating knowledge suggests that domestic violence occurring during pregnancy is a serious public health issue due to the risk for adverse maternal and fetal health outcomes [1-3]. Labour dystocia, another serious complication in obstetrics, has also been increasingly highlighted during the past decades [4-9]. Labour dystocia is defined as a slow or difficult labour or childbirth. According to Kjaergaard et al. [10] the term 'dystocia' is frequently used in clinical practice, yet there is no consistency in the use of terminology for prolonged labour or labour dystocia [4,6,11,12]. However, labour dystocia accounts for most interventions during labour [4,6,7]. Although both labour dystocia [4,7] and domestic violence during pregnancy [1,2] are associated with adverse maternal and fetal outcome, evidence in support of a possible association between experiences of violence and labour dystocia is sparse. One recent study from Iran has shown an association between experienced abuse by an intimate partner and labour dystocia, and such abuse included psychological threats as well as physical, or sexual abuse [13].\nAlthough the demographic background of women exposed to domestic violence may vary widely, some women are more vulnerable and at increased risk [14]. Disadvantaged women, with low socio-economic status [15-17] and younger age, [18] as well as single women at younger age, [15-17] certain ethnic groups [15,17,19] and even women with a partner born outside Europe [17] are more likely to be exposed to domestic violence. Also unhealthy maternal behaviour such as smoking [20-23] and use of alcohol and drugs during pregnancy are more common among women who live in violent relationships [20,21]. Pregnant women exposed to violence have a greater risk of delivering babies with low birth weight, [20,22,24] premature labour, [22,25] abruption of placenta [25] and fetal trauma [22,24,25] or death [22,24,26] and are also at increased risk of caesarean section [25].\nSome identified risk factors for dystocia are high maternal age, [10,11] short maternal height, [27,28] overweight, [10] obesity [29] and smoking [30]. Also, high fetal weight increases the risk for prolonged labour [31] and labour dystocia [32]. Further, up to 50% of unplanned caesarean sections among nulliparous women are related to labour dystocia [4,6].\nAlready thirty years ago, Lederman et al. [33] showed that physical and psychosocial characteristics of the woman, such as maternal emotional stress related to pregnancy and motherhood, partner and family relationships, and fears of labour were significantly associated with less efficient uterine function, higher state of anxiety, higher epinephrine levels in plasma and longer length of labour. The higher levels of epinephrine may disrupt the normal progress in labour or the coordinated uterine contractions explained by an adrenoreceptor theory [34]. Subsequently, Alehagen et al. [35] confirmed significantly increased levels of all three stress hormones from pregnancy to labour and drastically increased levels of epinephrine and cortisol compared with nor-epinephrine, indicating that mental stress is more dominant than physical stress during labour. Maternal psychosocial stress, family functioning and fear of childbirth may have an association with specific complications such as prolonged labour or caesarean section [36]. 
History of sexual violence in adult life is associated with an increased risk of extreme fear during labour, [37] and fear of childbirth in the third trimester has been shown to increase the risk of prolonged labour and emergency caesarean section [38]. Thus, the current body of evidence in this area would support the hypothesis that experience of violence before and/or during pregnancy increases the risk of labour dystocia.\nThe aim of this study was to investigate whether self-reported history of violence or experienced violence during pregnancy is associated with increased risk of labour dystocia in nulliparous women at term.", "Since no invasive procedures were applied in the study, no Ethics Committee System approval was required by Danish law. The policy of the Helsinki Declaration was followed throughout the data collection and analyses. Written consent was obtained and person-specific data were protected by codes. Permission to establish the database was obtained from the Danish Data Protection Agency (j. no. 2004-41-3995).", "Chi-square analysis was used to investigate differences in background characteristics between women who were exposed to violence and women not exposed to violence. Odds ratios (OR) and 95% confidence intervals (95% CI) were calculated for the crude associations between various background- and lifestyle characteristics and labour dystocia, with dystocia as the dependent variable for logistic regression. Age was dichotomised as ≤ 24 or >24 years and country of origin as Danish or non-Danish. Univariate logistic regression was used to analyse the crude odds ratios for dystocia in relation to various background- and lifestyle characteristics and self-reported history of violence. Further, multiple regression was used to analyse domestic violence (solely) and history of violence as independent variables (two different analysis) together with the other well-documented maternal factors (maternal age, BMI and smoking) associated with dystocia. Odds ratios were used as estimates of relative risk. Statistical significance was accepted at p < 0.05. Statistical analyses were performed using the Statistical Package for Social Sciences (SPSS) version 16.0 for Windows.", "Table 2 provides a descriptive overview of the maternal characteristics for the total cohort of 2652 women, with and without self-reported experience of 'history of violence', 'violence before pregnancy' and 'violence during pregnancy'.\nDescriptive overview of maternal characteristics in nulliparous women who have reported experienced violence before and/or during pregnancy compared to women not exposed to violence (n = 2652).\nStatistical significance is accepted at p < 0.05\n† Same women can occur in more than one group\nAmong the 940 (35.4%) women who reported experience of 'history of violence', 914 (97.2%) reported experienced 'violence before pregnancy'. Also, 66 (2.5%) women reported violence during current pregnancy (Table 2). Of these women, 26 (39.5%) were exposed to 'violence for the first time during pregnancy'. All women exposed to violence for the first time during their first pregnancy were Danish, three (11.5%) women in the age group 18 - 24 years, 17 (65.4%) at age 25- 29, five (19.2%) at age 30-34 and one (3.8%) >34 years. 
Three (11.5%) women were not cohabiting, five (19.2%) had ≤ 10 years education, eight (30.8%) were unemployed, seven (26.9%) were smokers, ten (38.4%) were alcohol consumers at the 37th week of gestation, and five (19.2%) had BMI > 25.\nOf the 940 women who had a 'history of violence', 697 (77%) answered a question concerning whom the perpetrator was. Thirty-seven percent had been exposed to domestic violence. A further 22% had been exposed to violence by someone they knew very well (not a family member) and 15% by someone they knew superficially (family or other). The perpetrator was a stranger in 26% of the cases. Of the 66 women who had been exposed to violence during pregnancy, 53 (80%) answered the question about the perpetrator, and in 23 (43.0%) cases they were exposed to domestic violence.\nThe median age of all nulliparous women was 28 years. Significantly more women in all three violence exposure groups were in the 18-24 years age category (p < 0.001, p < 0.001, p = 0.020). No differences in exposure to violence were found in relation to country of origin. In the total sample, 94.9% of the women (n = 2517) were cohabiting. Across all categories of exposure to violence, such exposure was proportionally more often reported by non-cohabiting women (p = 0.004, p = 0.003 and p < 0.001, respectively), albeit only 16 (0.6%) of the women were not cohabiting. Slightly more than eighty percent (80.3%) of the women had more than 10 years of schooling. Exposure to 'history of violence' and 'violence before pregnancy' was more frequently reported by women who had a lower educational level (≤ 10 years) compared to women not exposed (p < 0.001), as was 'violence during pregnancy' (p < 0.006). Over two-thirds (69.7%) of the women were employed. The exposed group differed from the non-exposed group before pregnancy in that more women were unemployed (p < 0.001). However, there was no significant difference in employment status among the group of 66 (2.5%) women who were violence-exposed during pregnancy (Table 2).\nMore than twenty-four percent (24.3%) of these nulliparous women were smokers at term or at some point during pregnancy. Exposure to violence was proportionally more often reported by smokers than by non-smokers across all categories (p < 0.001, p < 0.001, p = 0.022). Twenty-four percent of the nulliparous women reported that they consumed alcohol during pregnancy, at the 37th week of gestation (Table 2). The quantity ranged from 1 to 10 units of alcoholic beverages per week. However, there were no significant differences in alcohol consumption between violence-exposed and unexposed women. No differences in exposure to violence were found in relation to BMI.\nCrude odds ratios showed no association between experiences of 'history of violence' and dystocia (n = 940) OR 0.91, 95% CI (0.77-1.08), 'violence before pregnancy' and dystocia (n = 914) OR 0.90, 95% CI (0.77-1.07), 'violence during pregnancy' and dystocia (n = 66) OR 0.90, 95% CI (0.54-1.50), or 'first time violence during pregnancy' and dystocia (n = 26) OR 1.24, 95% CI (0.56-2.71).
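As a worked illustration of how crude odds ratios and 95% confidence intervals of the kind reported above are obtained, the helper below computes a crude OR with a Wald interval from a 2x2 table in Python. The cell counts are placeholders chosen to be consistent with the reported marginal totals (940 exposed, 985 with dystocia, 2652 in total) but are not the study's actual cells, so the output will not exactly reproduce the published estimates.

```python
import math

def crude_or_with_ci(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Crude odds ratio with a Wald 95% confidence interval."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_noncases
                          + 1 / unexposed_cases + 1 / unexposed_noncases)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, lo, hi

# Placeholder 2x2 counts (exposure = history of violence, outcome = dystocia).
or_, lo, hi = crude_or_with_ci(exposed_cases=340, exposed_noncases=600,
                               unexposed_cases=645, unexposed_noncases=1067)
print(f"crude OR {or_:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```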
Crude odds ratios showed no association between 'history of violence' and dystocia (n = 940; OR 0.91, 95% CI 0.77-1.08), 'violence before pregnancy' and dystocia (n = 914; OR 0.90, 95% CI 0.77-1.07), 'violence during pregnancy' and dystocia (n = 66; OR 0.90, 95% CI 0.54-1.50), or 'first time violence during pregnancy' and dystocia (n = 26; OR 1.24, 95% CI 0.56-2.71). Moreover, no significant associations were found between dystocia at term and any of the categories of violence: i) 'threat of violence' OR 0.97, 95% CI 0.79-1.18; ii) 'physical violence' OR 0.93, 95% CI 0.78-1.11; iii) 'sexual violence' OR 1.18, 95% CI 0.85-1.62; and iv) 'serious violence' OR 1.00, 95% CI 0.81-1.23.

Multiple logistic regression with 'domestic violence' (alone) as an independent variable, together with the established risk factors maternal age, BMI and smoking, showed no significant association with dystocia at term (OR 1.23, 95% CI 0.89-1.69). Women older than 24 years and women who were overweight before pregnancy had a significantly increased risk of dystocia at term (OR 1.53, 95% CI 1.16-2.00 and OR 1.31, 95% CI 1.07-1.62, respectively). Multiple regression with 'history of violence' as an independent variable, together with age, BMI and smoking, likewise showed no association with dystocia at term (OR 0.98, 95% CI 0.81-1.18).

Table 3 shows the relationship between background and lifestyle characteristics and the risk (crude odds ratios) of dystocia in women with and without a 'history of violence'. Women older than 24 years had a significantly increased risk of dystocia at term irrespective of exposure to violence (exposed: OR 1.64, 95% CI 1.16-2.30; unexposed: OR 1.36, 95% CI 1.02-1.83). Women who consumed alcohol during pregnancy and had a 'history of violence' also had an increased risk of dystocia at term (OR 1.45, 95% CI 1.07-1.96).

Table 3. Maternal background characteristics as risk factors for dystocia in nulliparous women with and without a history of violence, shown as crude odds ratios (OR) with 95% confidence intervals.

Women giving birth to an infant with a birth weight of 3500 g or more (n = 1231) had a significantly increased risk of dystocia irrespective of exposure to violence (exposed, n = 424: OR 2.0, 95% CI 1.49-2.69; unexposed, n = 807: OR 1.39, 95% CI 1.12-1.71). Women with dystocia had a significantly increased risk of instrumental delivery (n = 632) compared with normal delivery, irrespective of exposure to violence (exposed, n = 221: OR 4.45, 95% CI 3.23-6.11; unexposed, n = 410: OR 4.21, 95% CI 3.33-5.33).
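The adjusted odds ratios reported above come from multiple logistic regression with dystocia as the outcome and a violence indicator plus maternal age, BMI and smoking as covariates. A minimal sketch of such a model is shown below; the column names and the input file are hypothetical placeholders, and the original analyses were performed in SPSS 16.0 rather than with this code.

```python
# Illustrative sketch of the adjusted analysis: logistic regression of dystocia
# on history of violence plus dichotomised maternal age, BMI and smoking,
# reported as adjusted odds ratios with 95% CIs. Data and column names are
# hypothetical placeholders; the study itself used SPSS 16.0.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("dds_subset.csv")  # hypothetical file: one row per woman

X = pd.DataFrame({
    "history_of_violence": df["history_of_violence"].astype(int),
    "age_over_24": (df["age"] > 24).astype(int),   # dichotomised as in the Methods
    "bmi_over_25": (df["bmi"] > 25).astype(int),
    "smoker": df["smoker"].astype(int),
})
y = df["dystocia"].astype(int)

result = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
odds_ratios = np.exp(result.params).rename("OR")
ci = np.exp(result.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, ci], axis=1))
```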
Discussion

More than one third (35.4%) of the women in this study had been exposed to violence at some point in their lifetime, i.e. before and/or during pregnancy. However, no association was found between experienced violence and labour dystocia in nulliparous women at term. Our findings therefore suggest that women who have been exposed to violence before and/or during pregnancy are not at higher risk of a prolonged delivery at term. As this is, to our knowledge, the first study designed specifically to examine the potential association between history of violence and labour dystocia, the current results should be regarded as preliminary, and further research is needed to confirm these apparently negative findings. Recent findings by Khodakarami et al. [13] did show an association between experienced intimate partner violence and labour dystocia; however, they did not define dystocia, and our definition of experienced domestic violence is somewhat broader, which makes the results difficult to compare. In our study, the odds of dystocia among women exposed solely to domestic violence were increased by 23%, albeit not significantly. These two major challenges in obstetrics thus appear largely to have different underlying risk factors, although smoking is common to both exposure to violence [20-23,30] and prolonged labour [30], which can in turn lead to labour dystocia.

The women in our study were primarily Danish (92.5%), i.e. born in Denmark and of Danish ethnicity. For ethical reasons, and in keeping with Danish law on autonomy, women younger than 18 years were excluded, because parental consent would otherwise have been required for participation.

The mean age of the nulliparous women was rather high, 28 years. In accordance with previous studies [16-18], younger women (< 24 years) constitute a risk group for exposure to violence. Our results showed that women older than 24 years, with or without experience of violence, had a significantly increased risk of dystocia at term, although in the non-exposed group the association may be regarded as only marginally significant given the lower limit of the confidence interval. Earlier studies have shown a strong association between increasing maternal age and labour dystocia [10,11].

Women exposed to violence were more often smokers, in line with several international studies [21-23], even though smoking has been decreasing in Denmark during the last decade, especially in the 25-44 years age group [42]. A nationwide Danish study reported a smoking prevalence at some point in pregnancy of 16% in 2005 [43]. Our study used the same definition of smoking as Egebjerg Jensen et al. [43], yet the prevalence of smoking during pregnancy was higher, 24.3%. It would be alarming if the prevalence of smoking during pregnancy is increasing.

Another background variable that might be of importance for an association between exposure to violence and labour dystocia is alcohol. In the current study, women who had experienced violence and who also consumed alcohol in late pregnancy had a higher risk of dystocia at term than non-exposed women. The calculated odds ratio was significant (p = 0.017), although the strength of the association is perhaps best regarded as modest in this context, since these are crude odds ratios, unadjusted for other background characteristics. In accordance with earlier results [20,21], unhealthy maternal behaviours such as use of alcohol and drugs during pregnancy are more common among women who live in violent relationships. To our knowledge, however, an association between alcohol consumption during the third trimester combined with experience of violence and the risk of labour dystocia has not previously been described in the literature. These findings are difficult to interpret and need further investigation.

In the present study, 2.5% (n = 66) of the nulliparous women were exposed to violence during pregnancy, and 39.5% (n = 26) of them had never been exposed to violence previously; thus, for these women the violence began during their first pregnancy. This group was small, however, and these results need to be investigated further. Transition into a new social role can be a very stressful event for the father-to-be [44] and may increase pre-existing strains in the couple's relationship to such an extent that the partner uses psychological or physical violence towards the mother-to-be.
However, our definition of 'history of violence' in this study includes all experienced violence before and during pregnancy, and intimate partner violence is therefore only one possible component.

It should be noted that the current results regarding the prevalence of exposure to violence may represent an underestimate of the true rates. Technical errors affected the internet data collection (40% of the material), such that women were unable to report whether or not they had been exposed to violence during the current pregnancy; more specifically, they were offered only two response alternatives in the questionnaire instead of three. The true prevalence of physical and psychological abuse in pregnant women is also difficult to estimate, since women exposed to violence may be afraid to report it for fear of escalating abuse [24]. A first pregnancy may intensify existing stressors in the couple's relationship, which can lead to psychological or physical abuse, and this in turn may result in prolonged labour [33-36]. Nevertheless, in the current study there was no association between exposure to first-time violence during pregnancy and dystocia. There were, however, only 26 women in this group; despite its limited size, the odds of dystocia were increased by almost 25%, albeit not significantly. The question therefore remains whether a significant association between dystocia and first-time violence during pregnancy would be found in a larger sample. A potential weakness of the current study is the small number of individuals in some of the sub-group analyses.

In the current study, pre-pregnancy overweight was associated with a significantly increased risk (more than 30%) of dystocia at term, irrespective of whether the model included domestic violence alone or history of violence. Kjaergaard et al. [10] have already identified overweight as a risk factor for labour dystocia in the DDS [8-10].

Some potential obstetric risk factors for dystocia were also analysed in relation to violence. Delivering a baby with a birth weight ≥ 3500 g was associated with dystocia at term, without any relation to exposure to violence. Kjærgaard et al. [8] have already shown, using the DDS material, that expecting a child with a birth weight > 4000 g was associated with an increased risk of dystocia. Indeed, high birth weight as a predisposing factor for prolonged labour and labour dystocia is well described in the literature [31,32]. Women exposed to violence more often give birth to low birth weight babies [20,22,24]. However, birth weight is probably not the sole explanation for labour dystocia, and women may have a prolonged second stage without any correlation to birth weight [45]. It should also be noted that some studies have found no association between violence and low birth weight [14,46]. Furthermore, unmeasured factors such as psychosocial stress may also be important in this context, although Nystedt et al. [47] found no link between a low level of psychosocial resources in early pregnancy and an increased risk of prolonged labour. The etiology of labour dystocia thus appears to be multifaceted and complex.

In addition, although instrumental delivery is a well-known consequence of dystocia [4,6], we did not find any association between instrumental delivery, experience of violence and labour dystocia.
Women with labour dystocia had a significantly increased risk of instrumental delivery irrespective of exposure to violence, which is an unsurprising finding. Previous studies have found that women reporting physical violence during pregnancy are more likely to be delivered by caesarean section than women not exposed to physical violence [25,48]. It should be kept in mind, however, that the current sample included only nulliparous women at term, so all premature deliveries were excluded.

Methodological discussion

The results of this study might potentially be biased by selection or misclassification. However, we find no reason to believe that systematic selection bias or misclassification occurred. The cohort design, based on prospectively collected data, enabled comparison of the risk of labour dystocia between women exposed and unexposed to violence during the same time period. The population consisted only of nulliparous women, which made the cohort homogeneous in that respect. The concept of 'dystocia' was also well defined, in accordance with the ACOG criteria for dystocia in the second stage of labour [6] and with the criteria for dystocia in the first and second stages described by the Danish Society for Obstetrics and Gynecology [39,40], so that the group defined as having labour dystocia is homogeneous. However, our results raise the question of whether these criteria are relevant for the diagnosis. Labour dystocia is still a poorly defined phenomenon that might be categorised with respect to clinical diagnosis [12]. It may well be that the current definition, with a time span of four hours, is too short, so that the prevalence of dystocia is overestimated. A longer time criterion would probably reduce the number of cases diagnosed as dystocia but yield a more accurate estimate; whether this would in turn strengthen the association between experienced violence and labour dystocia is unknown.

The internal non-response rate for the questions about violence was only 0.5%; that is, only 14 women in the cohort did not answer the violence questions at all. This small number of women with missing information on violence exposure is unlikely to have affected the results in any major way, and we can only speculate as to whether these women were exposed to violence or not. However, as mentioned above, technical errors in the internet-based data collection (40% of the answers at baseline) meant that only two response alternatives regarding violence exposure were offered, i.e. 'yes earlier' or 'no never', instead of three. Misclassification of responses could therefore have led to under-reporting of exposure to violence during the current pregnancy. MacMillan et al. [49] found that computer-based screening did not increase prevalence, and that written screening methods yielded the fewest missing data.
The questions measuring violence used in this sub-study have previously been validated and used in a Danish general population [42]. However, since the questions had not previously been adapted to a pregnant cohort, this may have influenced the findings. It is also possible that the rather broad time frame for experienced violence investigated here is not relevant for a study of obstetric outcome. On the other hand, according to Eberhard-Gran et al. [37], a history of sexual violence in adult life is associated with an increased risk of extreme fear during labour, and in our hypothetical model excessive stress, fear and anxiety are related to dysfunctional labour. Screening for violence is not routine in all countries. If the midwife and the obstetrician knew before delivery that a woman had been exposed to excessive stress due to domestic violence before or during pregnancy, health care practitioners could provide closer monitoring throughout pregnancy and delivery, and the caring process could be more carefully tailored to the individual woman's needs. However, the extent to which closer monitoring would decrease the risk of labour dystocia remains an open question.
Conclusions

The hypothesis that nulliparous women who have been exposed to violence are more prone to labour dystocia at term was not confirmed. Given the current scarcity of studies exploring a possible association between violence and labour dystocia, two major contributors to adverse maternal and fetal outcome, the extent to which such a relationship might exist needs further investigation. In this regard, it would also be beneficial to further evaluate the criteria used to define dystocia.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

All authors contributed to the planning of the study. Analyses were planned by all authors. HF performed the analyses and all authors interpreted the results. HF wrote the drafts of the manuscript, which the other authors commented on and discussed. All authors approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2393/11/14/prepub
[ "Accumulating knowledge suggests that domestic violence occurring during pregnancy is a serious public health issue due to the risk for adverse maternal and fetal health outcomes [1-3]. Labour dystocia, another serious complication in obstetrics, has also been increasingly highlighted during the past decades [4-9]. Labour dystocia is defined as a slow or difficult labour or childbirth. According to Kjaergaard et al. [10] the term 'dystocia' is frequently used in clinical practice, yet there is no consistency in the use of terminology for prolonged labour or labour dystocia [4,6,11,12]. However, labour dystocia accounts for most interventions during labour [4,6,7]. Although both labour dystocia [4,7] and domestic violence during pregnancy [1,2] are associated with adverse maternal and fetal outcome, evidence in support of a possible association between experiences of violence and labour dystocia is sparse. One recent study from Iran has shown an association between experienced abuse by an intimate partner and labour dystocia, and such abuse included psychological threats as well as physical, or sexual abuse [13].\nAlthough the demographic background of women exposed to domestic violence may vary widely, some women are more vulnerable and at increased risk [14]. Disadvantaged women, with low socio-economic status [15-17] and younger age, [18] as well as single women at younger age, [15-17] certain ethnic groups [15,17,19] and even women with a partner born outside Europe [17] are more likely to be exposed to domestic violence. Also unhealthy maternal behaviour such as smoking [20-23] and use of alcohol and drugs during pregnancy are more common among women who live in violent relationships [20,21]. Pregnant women exposed to violence have a greater risk of delivering babies with low birth weight, [20,22,24] premature labour, [22,25] abruption of placenta [25] and fetal trauma [22,24,25] or death [22,24,26] and are also at increased risk of caesarean section [25].\nSome identified risk factors for dystocia are high maternal age, [10,11] short maternal height, [27,28] overweight, [10] obesity [29] and smoking [30]. Also, high fetal weight increases the risk for prolonged labour [31] and labour dystocia [32]. Further, up to 50% of unplanned caesarean sections among nulliparous women are related to labour dystocia [4,6].\nAlready thirty years ago, Lederman et al. [33] showed that physical and psychosocial characteristics of the woman, such as maternal emotional stress related to pregnancy and motherhood, partner and family relationships, and fears of labour were significantly associated with less efficient uterine function, higher state of anxiety, higher epinephrine levels in plasma and longer length of labour. The higher levels of epinephrine may disrupt the normal progress in labour or the coordinated uterine contractions explained by an adrenoreceptor theory [34]. Subsequently, Alehagen et al. [35] confirmed significantly increased levels of all three stress hormones from pregnancy to labour and drastically increased levels of epinephrine and cortisol compared with nor-epinephrine, indicating that mental stress is more dominant than physical stress during labour. Maternal psychosocial stress, family functioning and fear of childbirth may have an association with specific complications such as prolonged labour or caesarean section [36]. 
History of sexual violence in adult life is associated with an increased risk of extreme fear during labour, [37] and fear of childbirth in the third trimester has been shown to increase the risk of prolonged labour and emergency caesarean section [38]. Thus, the current body of evidence in this area would support the hypothesis that experience of violence before and/or during pregnancy increases the risk of labour dystocia.\nThe aim of this study was to investigate whether self-reported history of violence or experienced violence during pregnancy is associated with increased risk of labour dystocia in nulliparous women at term.", "The material used in this study originates from the Danish Dystocia Study (DDS), a population-based multi-centre cohort study, and 8099 nulliparous women were potentially eligible for inclusion in the study [8-10]. However, 6356 women were invited to the DDS study (external drop-out was 21.5%) and 5484 women accepted participation. For the current sub-study, a data set on 2652 nulliparous women who fulfilled the inclusion criteria (showed below) was available for analyses of exposure to violence before and during pregnancy. Among these, 985 (37.1%) met the protocol criteria for labour dystocia (Table 1). These diagnostic criteria are in accordance with the American College of Obstetrics and Gynecology (ACOG) criteria for dystocia in labour's second stage [6] and also with the criteria for labour dystocia in first and second stage described by the Danish Society for Obstetrics and Gynecology [39,40]. The diagnosis prompted augmentation (i.e. with oxytocin stimulation) [8-10].\nDefinition of stages and phases of labour and diagnostic criteria for dystocia for current sub-study [8-10].\nData were collected prospectively between May 2004 and July 2005. Participants were recruited from nine obstetric departments in Denmark with annual birth rates between 850-5400 per year. The departments were four large university hospitals, three county hospitals, and two local district departments. Recruitment of the women took place in the antenatal clinics at 33 gestational weeks, and baseline information was collected at 37 gestational weeks. Inclusion criteria were Danish speaking (i.e. reading/understanding) nulliparous women at 18 years of age or older, with a singleton pregnancy in cephalic presentation and no planned elective caesarean section or induction of labour. Exclusion criteria were nulliparous women with a delivery < 37 or > 42 weeks of gestation, induction, elective caesarean section and breech presentation (n = 1115 or 17.5% in DDS). All data were based on a self-administrated questionnaire and on information contained in obstetric records filled out by the midwives at admission and postpartum. Forty percent of the questionnaires were completed in an internet version. Fourteen (0.5%) of the 2652 women did not answer the questions about violence and were classified as having no exposure to violence.\nEight items in the questionnaire dealt with violence and originated from the short form of the Conflict Tactics Scale (CTS2S) [41]. This instrument has been used in large population-based studies in Denmark, and translation from English to Danish and back translation to English were performed prior to the Danish Health and Morbidity survey 2000 [42]. The questions were adapted for a pregnant cohort in the DDS [8-10]. Three alternatives were provided as possible answers to the various exposure questions: 'yes during this pregnancy', 'yes earlier', and 'no never'. 
Women were not required to provide information concerning the number of episodes of violence that had occurred (Additional file 1).\n'History of violence' was defined as experience of violence ever in lifetime before and/or during pregnancy, 'Violence before pregnancy' as experienced violence ever in lifetime before pregnancy, 'Violence during pregnancy' as experienced violence during pregnancy (with or without violence before pregnancy) and 'Violence for the first time during pregnancy' as experienced violence during pregnancy without experienced violence before pregnancy.\nFurther, for the purpose of analysis, violence was categorized as i) threat of violence, ii) physical violence, iii) sexual violence, and iv) serious violence. However, a more detailed description of the prevalence of violence will be published elsewhere by another research group.\nFor the purpose of the current sub-study, the concept domestic violence was defined as exposure to psychological and/or physical abuse by 'Your husband/Co-habitant' or 'A person you know very well in your family', according to the first two alternatives in question 9 in the questionnaire (Additional file 1).\nBackground and lifestyle factors were classified as follows. Maternal age was classified as 18-24, 25-29, 30-34 and >34 years. Country of origin was classified according to whether the woman was born in Denmark, in another Nordic country, or in other country. Cohabiting status was divided into yes or no. Educational status was dichotomised as ≤ 10 years or > 10 years and employment status as employed or unemployed (including voluntary unemployed or studying). Smoking status was classified as \"yes\" (if the woman was a daily smoker or was smoking at some point during pregnancy) or \"no\" (never smoked or alternatively, if she had ceased before pregnancy) and use of alcohol as \"yes\" (if the woman had been drinking alcohol during pregnancy at the time when the questionnaire was administered) or \"no\" (if the woman had been drinking solely alcohol-free drinks). Body mass index (BMI) was calculated from maternal weight and height before the pregnancy and classified as normal or low weight if BMI was ≤ 25, or overweight when > 25. Infant birth weight was dichotomised as < 3500 g or ≥ 3500 g and delivery mode as partus normalis (PN) or instrumental delivery, including caesarean section and vacuum extraction (VE).\n[SUBTITLE] Ethics [SUBSECTION] Since no invasive procedures were applied in the study, no Ethics Committee System approval was required by Danish law. The policy of the Helsinki Declaration was followed throughout the data collection and analyses. Written consent was obtained and person-specific data were protected by codes. Permission to establish the database was obtained from the Danish Data Protection Agency (j. no. 2004-41-3995).\nSince no invasive procedures were applied in the study, no Ethics Committee System approval was required by Danish law. The policy of the Helsinki Declaration was followed throughout the data collection and analyses. Written consent was obtained and person-specific data were protected by codes. Permission to establish the database was obtained from the Danish Data Protection Agency (j. no. 2004-41-3995).\n[SUBTITLE] Statistical methods [SUBSECTION] Chi-square analysis was used to investigate differences in background characteristics between women who were exposed to violence and women not exposed to violence. 
Odds ratios (OR) and 95% confidence intervals (95% CI) were calculated for the crude associations between various background- and lifestyle characteristics and labour dystocia, with dystocia as the dependent variable for logistic regression. Age was dichotomised as ≤ 24 or >24 years and country of origin as Danish or non-Danish. Univariate logistic regression was used to analyse the crude odds ratios for dystocia in relation to various background- and lifestyle characteristics and self-reported history of violence. Further, multiple regression was used to analyse domestic violence (solely) and history of violence as independent variables (two different analysis) together with the other well-documented maternal factors (maternal age, BMI and smoking) associated with dystocia. Odds ratios were used as estimates of relative risk. Statistical significance was accepted at p < 0.05. Statistical analyses were performed using the Statistical Package for Social Sciences (SPSS) version 16.0 for Windows.\nChi-square analysis was used to investigate differences in background characteristics between women who were exposed to violence and women not exposed to violence. Odds ratios (OR) and 95% confidence intervals (95% CI) were calculated for the crude associations between various background- and lifestyle characteristics and labour dystocia, with dystocia as the dependent variable for logistic regression. Age was dichotomised as ≤ 24 or >24 years and country of origin as Danish or non-Danish. Univariate logistic regression was used to analyse the crude odds ratios for dystocia in relation to various background- and lifestyle characteristics and self-reported history of violence. Further, multiple regression was used to analyse domestic violence (solely) and history of violence as independent variables (two different analysis) together with the other well-documented maternal factors (maternal age, BMI and smoking) associated with dystocia. Odds ratios were used as estimates of relative risk. Statistical significance was accepted at p < 0.05. Statistical analyses were performed using the Statistical Package for Social Sciences (SPSS) version 16.0 for Windows.", "Since no invasive procedures were applied in the study, no Ethics Committee System approval was required by Danish law. The policy of the Helsinki Declaration was followed throughout the data collection and analyses. Written consent was obtained and person-specific data were protected by codes. Permission to establish the database was obtained from the Danish Data Protection Agency (j. no. 2004-41-3995).", "Chi-square analysis was used to investigate differences in background characteristics between women who were exposed to violence and women not exposed to violence. Odds ratios (OR) and 95% confidence intervals (95% CI) were calculated for the crude associations between various background- and lifestyle characteristics and labour dystocia, with dystocia as the dependent variable for logistic regression. Age was dichotomised as ≤ 24 or >24 years and country of origin as Danish or non-Danish. Univariate logistic regression was used to analyse the crude odds ratios for dystocia in relation to various background- and lifestyle characteristics and self-reported history of violence. Further, multiple regression was used to analyse domestic violence (solely) and history of violence as independent variables (two different analysis) together with the other well-documented maternal factors (maternal age, BMI and smoking) associated with dystocia. 
Odds ratios were used as estimates of relative risk. Statistical significance was accepted at p < 0.05. Statistical analyses were performed using the Statistical Package for Social Sciences (SPSS) version 16.0 for Windows.", "Table 2 provides a descriptive overview of the maternal characteristics for the total cohort of 2652 women, with and without self-reported experience of 'history of violence', 'violence before pregnancy' and 'violence during pregnancy'.\nDescriptive overview of maternal characteristics in nulliparous women who have reported experienced violence before and/or during pregnancy compared to women not exposed to violence (n = 2652).\nStatistical significance is accepted at p < 0.05\n† Same women can occur in more than one group\nAmong the 940 (35.4%) women who reported experience of 'history of violence', 914 (97.2%) reported experienced 'violence before pregnancy'. Also, 66 (2.5%) women reported violence during current pregnancy (Table 2). Of these women, 26 (39.5%) were exposed to 'violence for the first time during pregnancy'. All women exposed to violence for the first time during their first pregnancy were Danish, three (11.5%) women in the age group 18 - 24 years, 17 (65.4%) at age 25- 29, five (19.2%) at age 30-34 and one (3.8%) >34 years. Three (11.5%) women were not cohabiting, five (19.2%) had ≤ 10 years education, eight (30.8%) were unemployed, seven women were smokers (26.9%), ten (38.4%) were alcohol consumers at the 37th week of gestation, and five (19.2) had BMI > 25.\nOf the 940 women who had a 'history of violence', 697 (77%) answered a question concerning whom the perpetrator was. Thirty-seven percent had been exposed to domestic violence. Further, 22% to violence by someone they knew very well (not family member) and 15% by someone they knew superficially (family or other). The perpetrator was a stranger in 26% of the cases. Of the 66 women who had been exposed to violence during pregnancy, 53 (80%) answered the question about the perpetrator, and in 23 (43.0%) cases they were exposed to domestic violence.\nThe median age of all nulliparous women was 28 years. In the violence-exposed group significantly more women were in the 18-24 age categories in all three violence exposure groups (p < 0.001, p < 0.001, p = 0.020). No differences in exposure to violence were found in relation to country of origin. In the total sample, 94.9% of the women (n = 2517) were cohabiting. Across all categories of exposure to violence, such exposure was proportionally more often reported by non-cohabiting women (p = 0.004, p = 0.003 respectively p < 0.001) albeit only 16 (0.6%) of the women were not cohabiting. Slightly more than eighty percent (80.3%) of the women had more than 10 years of schooling. Exposure to 'history of violence' and 'violence before pregnancy' was more frequently reported by women who had a lower educational level (≤ 10 years) compared to women not exposed (p < 0.001), as well as in the group 'violence during pregnancy' (p < 0.006). Over two-thirds (69.7%) of the women were employed. The exposed group differed from the non-exposed group before pregnancy in that more women were unemployed (p < 0.001). However, there was no significant difference in employment status among the group of 66 (2.5%) women who were violence-exposed during pregnancy (Table 2).\nMore than twenty-four percent (24.3%) of these nulliparous women were smokers at term or at some point during pregnancy. 
Exposure to violence was proportionally more often reported by smokers than by non-smokers across all categories (p < 0.001, p < 0.001, p = 0.022). Twenty-four percent of the nulliparous reported that they consumed alcohol during pregnancy, in 37th week of pregnancy (Table 2). The quantity ranged between 1 to 10 units of alcoholic beverages per week. However, there were no significant differences in alcohol consumption between violence-exposed or unexposed women. No differences in exposure to violence were found in relation to BMI.\nCrude odds ratios showed no association between experiences of 'history of violence' and dystocia (n = 940) OR 0.91, 95% CI (0.77-1.08), 'violence before pregnancy' and dystocia (n = 914) OR 0.90, 95% CI (0.77-1.07), 'violence during pregnancy' and dystocia (n = 66) OR 0.90, 95% CI (0.54-1.50), or 'first time violence during pregnancy' (n = 26) OR 1.24, 95% CI (0.56-2.71) and dystocia. Moreover, no significant associations were found between dystocia at term and any of the various categorizations of violence: i) 'threat of violence' OR 0.97, 95%CI (0.79-1.18), ii) 'physical violence' OR 0.93, 95%CI (0.78-1.11), iii) 'sexual violence' OR 1.18, 95%CI (0.85-1.62) and iv) 'serious violence' OR 1.00, 95%CI (0.81-1.23).\nA multiple regression done with 'domestic violence' (solely) as an independent variable together with already known factors as maternal age, BMI and smoking associated with dystocia showed no significant association to dystocia at term, OR 1.23 95% CI (0.89 - 1.69). Women older than 24 years and women with pre pregnancy overweight had significantly increased risk for dystocia at term with OR 1.53 95% CI (1.16 -2.00) respectively OR 1.31 95% CI (1.07-1.62). Further, multiple regression with 'history of violence' as an independent variable together with age, BMI and smoking showed no association to dystocia at term with OR 0.98 95% CI (0.81-1.18).\nTable 3 shows the relationship between background and lifestyle characteristics and the risk (crude odds ratios) for dystocia in women with and without exposure to 'history of violence'. Women older than 24 years had significantly increased risk for dystocia at term, irrespective of exposure to violence (exposed: OR 1.64, 95% CI: 1.16-2.30; unexposed: OR 1.36, 95% CI: 1.02-1.83). Also, women who consumed alcohol during pregnancy and had experienced exposure to 'history of violence' had an increased risk for dystocia at term (exposed: OR 1.45, 95% CI: 1.07-1.96).\nMaternal background characteristics as risk factors for dystocia in nulliparous women with and without experience of history of violence, as shown by crude odds ratios (OR) and 95% confidence intervals.\nWomen giving birth to an infant with a birth weight of 3500 g or more (n = 1231) had significantly increased risk of dystocia irrespective of exposure to violence (exposed (n = 424): OR 2.0, 95% CI: 1.49-2.69; unexposed (n = 807): OR 1.39, 95% CI: 1.12-1.71). Women with dystocia had significantly increased risk for instrumental deliveries (n = 632) compared to normal deliveries, irrespective of exposure to violence (exposed (n = 221): OR 4.45, 95% CI: 3.23-6.11; unexposed (n = 410): OR 4.21, 95% CI: 3.33-5.33).", "More than one third (35.4%) of the women in this study had been exposed to violence ever in their lifetime, i.e. before and/or during pregnancy. However, no association was found between experienced violence and labour dystocia in nulliparous women at term. 
Therefore, our findings suggest that women who have been exposed to violence ever in lifetime before and/or during pregnancy are not at a higher risk of prolonged delivery process at term. However, as this is the first study ever with the specific aim to examine the potential association between history of violence and labour dystocia, the current results should be regarded as only preliminary, and further research is needed in order to confirm these apparently negative findings. Nevertheless, recent findings by Khodakarami et al.[13] did show an association between experienced intimate partner violence and labour dystocia. However, Khodakarami et al.[13] did not define dystocia, and also, our definition of experienced domestic violence is somewhat broader, which makes it difficult to compare the results. Yet, in our study, the odds of having dystocia if exposed solely to domestic violence were increased by 23%, albeit not significantly. These two major challenges in obstetrics thus appear mostly to have different underlying risk factors, although smoking is common to both exposure to violence [20-23,30] and prolonged labour [30], which can in turn lead to labour dystocia.\nThe subjects investigated in our study are primarily Danish women (92.5%), i.e. they were born in Denmark and have Danish ethnicity. Due to ethical considerations, women younger than 18 years were excluded in this study in respect for Danish law regarding autonomy, because otherwise parental consent would have been necessary for participation in the study.\nThe mean age of the nulliparous women was rather high, i.e. 28 years. In accordance with results from previous studies, [16-18] younger age (< 24 years) is a risk group for exposure to violence. The results in our study showed that women older than 24 years with or without experience of violence had significantly increased risk for dystocia at term, although in the non-violence exposed group, the association may be regarded as marginally significant due to the lower limits of the confidence interval. Earlier studies have shown that increasing maternal age has a strong association with labour dystocia [10,11].\nWomen exposed to violence were more often smokers, in accordance with what several international studies have shown, [21-23] even though smoking has been decreasing in Denmark during the last decade, especially in the age-group 25-44 years [42]. A nation-wide study in Denmark showed that in the year 2005, smoking prevalence at some point in pregnancy was 16% [43]. However, our study had the same definition of smoking as in the study of Egebjerg Jensen et al.[43], and the prevalence of smoking during pregnancy was higher, i.e., 24.3% in our study. It is alarming if the smoking prevalence is increasing during pregnancy.\nAnother background variable that might be of importance for an association between exposure to violence and labour dystocia is alcohol. In the current study, women who had experience of violence and who also were alcohol consumers during late pregnancy had higher risk of dystocia at term compared to non-violence exposed women. The calculated odds ratio was significant (p = 0.017), albeit the strength of the association may perhaps best be regarded as modest in the current context, in that these are crude odds ratios, i.e. unadjusted for any other background characteristics. 
In accordance with earlier results, [20,21] unhealthy maternal behaviour such as use of alcohol and drugs during pregnancy are more common among women who live in violent relationships. Yet, to our knowledge associations between consumption of alcohol during the third trimester in pregnancy and experience of violence as a risk factor for labour dystocia have not been described in the literature before. These findings are difficult to interpret and need further investigation.\nIn the present study 2.5% (n = 66) of nulliparous women were exposed to violence during the pregnancy and 39.5% (n = 26) of them had never been exposed to violence previously. Thus, the violence was initiated during their first pregnancy. The size of this group was however limited and these results would need to be investigated further. Transition into a new social role can be experienced as a very stressful event for the father to-be [44] and may lead to increased pre-existing strains in the couple's relationship to such an extent that the partner uses psychological or physical violence towards the mother to-be. However, our definition of 'history of violence' in this study includes all experienced violence during and before pregnancy, and thus, intimate partner violence is only one possible component.\nIt should be noted that the current results regarding prevalence of exposure to violence may conceivably represent an underestimate of the true rates. Technical errors affected the internet data collection (40% of the material), such that women were unable to report whether they were exposed to violence during current pregnancy or not. More specifically, they were only provided with two alternatives of answers in the questionnaire, instead of three. Also, the true prevalence of physical and psychological abuse in pregnant women is difficult to estimate since women who are exposed to violence may be afraid to report such violence in fear of abuse escalation [24]. First time pregnancy may escalate existing stressors in the couple's relationship which can lead to psychological or physical abuse and this in turn may result in prolonged labour [33-36]. Nevertheless, in the current study, there was no association between exposure to 'first time violence during pregnancy and dystocia'. However, there were only 26 women in this group. Despite the limited size of this group, the odds of having dystocia were increased by almost 25%, albeit not significantly. Thus, the question remains as to whether a significant association between dystocia and exposure to first time violence during pregnancy would be obtained in a larger sample. A potential weakness in the current study is the small number of individuals in some of the sub-group analyses.\nIn current study overweight pre pregnancy showed significant increased risk of more than 30% to having dystocia at term irrespective if exposed solely to domestic violence or to history of violence. Kjaergaard et al.[10] has already presented overweight as a riskfactor for labour dystocia from the DDS [8-10].\nSome potential obstetrical risk factors for dystocia were also analysed in relation to violence. Our findings showed that delivering a baby with a birth weight ≥ 3500 g was associated with dystocia at term without any association with exposure of violence. Yet, Kjærgaard et al.[8] have already shown on the DDS material that expecting a child with a birth weight > 4000 g was associated with increased risk of dystocia. 
Indeed, high birth weight as a predisposing factor for prolonged labour and labour dystocia is well-described in the literature [31,32]. Women exposed to violence more often give birth to low birth weight babies [20,22,24]. However, birth weight is probably not the sole explanation for labour dystocia, and women may have prolonged second stage without any correlation to birth weight [45]. It should also be noted that some studies have found no association between violence and low birth weight [14,46]. Furthermore, unknown factors such as psychosocial stress may also have some importance in this context. However, Nystedt et al. [47] could not find a link between a low level of psychosocial resources in early pregnancy and increased risk for prolonged labour. The etiology of the diagnosis labour dystocia appears to be multifaceted and therefore complex.\nIn addition, although instrumental delivery is a well-known independent consequence of dystocia, [4,6] we did not find any association between instrumental delivery and experience of violence with labour dystocia. Women with labour dystocia had significantly increased risk for instrumental deliveries, irrespective of exposure to violence or not, a finding which is unremarkable. Previous studies have found that women reporting physical violence during pregnancy are more likely to be delivered by caesarean section than those who are not exposed to physical violence [25,48]. However, it is important to keep in mind that in the current sample, only nulliparous women at term were included and thus all premature deliveries were excluded.\n[SUBTITLE] Methodological discussion [SUBSECTION] The results of this study might potentially be biased due to selection or misclassification. However, we do not find any reason to believe that systematic selection bias or misclassification occurred. The current cohort design based upon prospectively collected data enabled the comparison of risk of labour dystocia among women exposed and un-exposed to violence during the same time period. The population in this study consisted only of nulliparous women which made the cohort a homogeneous group in that respect. Also, the concept 'dystocia' was very well defined, in accordance with ACOG criteria for dystocia in labour's second stage [6] and with the criteria for dystocia in the first and second stage described by the Danish Society for Obstetrics and Gynecology, [39,40] which means that the composition of the group defined with labour dystocia is homogeneous. However, our results raise the question as to whether these criteria for labour dystocia are relevant for the diagnosis. Labour dystocia is still a poorly defined phenomenon which might be categorized with respect to clinical diagnosis [12]. It may well be that the current definition with a time span of four hours is too short, and therefore the prevalence of dystocia may be overestimated. The use of a lengthier time criteria might lead to a reduced number of cases diagnosed as dystocia, but would probably yield a more accurate estimate. The extent to which this in turn might lead to a stronger association between experienced violence and labour dystocia is unknown.\nThe internal non-response rate of the questions about violence was only 0.5% that is, only 14 women in this cohort did not answer the violence questions at all. 
The limited number of women with missing information on violence exposure is unlikely to have affected the results in any major way, and we can only speculate as to whether these women were exposed to violence or not. However, as mentioned above, technical errors due to the use of the internet for data collection (40% of the answers at baseline) provided only two alternatives for answers regarding violence exposure, i.e. 'yes earlier', or 'no never', instead of three alternatives. Misclassification of responses could potentially have led to an underreporting of exposure to violence during pregnancy at term. MacMillan et al.[49] found that computer-based screening did not increase prevalence, and that written screening methods yielded fewest missing data.\nThe questions measuring violence used for this sub-study have been previously validated and used in a Danish general population [42]. However, since the questions have not been adapted to a pregnant cohort before, this may have influenced the findings obtained. Further, it is possible that the rather broad time frame for experienced violence investigated in the current study is not relevant for a study of obstetric outcome. However, according to Eberhard-Gran et al., [37] history of sexual violence in adult life is associated with an increased risk of extreme fear during labour. In our hypothetical model excessive stress, fear and anxiety are related to dysfunctional labour. Screening for violence is not a routine in all countries. If it could be known for the midwife and the obstetrician prior to delivery that the woman had been exposed to excessive stress due to domestic violence before or during pregnancy, then health care practitioners could provide closer monitoring throughout pregnancy and during delivery. The caring process could be more carefully scrutinised to the unique woman's needs. However, the extent to which closer monitoring would decrease risk for labour dystocia is still an unanswered question.\nThe results of this study might potentially be biased due to selection or misclassification. However, we do not find any reason to believe that systematic selection bias or misclassification occurred. The current cohort design based upon prospectively collected data enabled the comparison of risk of labour dystocia among women exposed and un-exposed to violence during the same time period. The population in this study consisted only of nulliparous women which made the cohort a homogeneous group in that respect. Also, the concept 'dystocia' was very well defined, in accordance with ACOG criteria for dystocia in labour's second stage [6] and with the criteria for dystocia in the first and second stage described by the Danish Society for Obstetrics and Gynecology, [39,40] which means that the composition of the group defined with labour dystocia is homogeneous. However, our results raise the question as to whether these criteria for labour dystocia are relevant for the diagnosis. Labour dystocia is still a poorly defined phenomenon which might be categorized with respect to clinical diagnosis [12]. It may well be that the current definition with a time span of four hours is too short, and therefore the prevalence of dystocia may be overestimated. The use of a lengthier time criteria might lead to a reduced number of cases diagnosed as dystocia, but would probably yield a more accurate estimate. 
"The hypothesis that nulliparous women who have been exposed to violence are more prone to labour dystocia during childbirth at term has not been confirmed. Due to the current scarcity of studies exploring a possible association between violence and labour dystocia, two major contributors to adverse maternal and fetal outcomes, the extent to which a relationship might exist would need further investigation. In this regard, it would also be beneficial if the criteria for the definition of dystocia could be further evaluated.", "The authors declare that they have no competing interests.", "All authors contributed to the planning of the study. Analyses were planned by all authors. HF performed the analysis and all authors interpreted the results. HF wrote the drafts of the manuscript, which the other authors commented on and discussed. 
All authors approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2393/11/14/prepub\n", "Appendix. Questions concerning violence used in the current study." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Comparing methods to estimate treatment effects on a continuous outcome in multicentre randomized controlled trials: a simulation study.
21338524
Multicentre randomized controlled trials (RCTs) routinely use randomization and analysis stratified by centre to control for differences between centres and to improve precision. No consensus has been reached on how to best analyze correlated continuous outcomes in such settings. Our objective was to investigate the properties of commonly used statistical models at various levels of clustering in the context of multicentre RCTs.
BACKGROUND
Assuming no treatment by centre interaction, we compared six methods (ignoring centre effects, including centres as fixed effects, including centres as random effects, generalized estimating equation (GEE), and fixed- and random-effects centre-level analysis) to analyze continuous outcomes in multicentre RCTs using simulations over a wide spectrum of intraclass correlation (ICC) values, and varying numbers of centres and centre size. The performance of models was evaluated in terms of bias, precision, mean squared error of the point estimator of treatment effect, empirical coverage of the 95% confidence interval, and statistical power of the procedure.
METHODS
While all methods yielded unbiased estimates of treatment effect, ignoring centres led to inflation of standard error and loss of statistical power when within centre correlation was present. Mixed-effects model was most efficient and attained nominal coverage of 95% and 90% power in almost all scenarios. Fixed-effects model was less precise when the number of centres was large and treatment allocation was subject to chance imbalance within centre. GEE approach underestimated standard error of the treatment effect when the number of centres was small. The two centre-level models led to more variable point estimates and relatively low interval coverage or statistical power depending on whether or not heterogeneity of treatment contrasts was considered in the analysis.
RESULTS
All six models produced unbiased estimates of treatment effect in the context of multicentre trials. Adjusting for centre as a random intercept led to the most efficient treatment effect estimation across all simulations under the normality assumption, when there was no treatment by centre interaction.
CONCLUSIONS
[ "Algorithms", "Bias", "Computer Simulation", "Confidence Intervals", "Humans", "Likelihood Functions", "Linear Models", "Monte Carlo Method", "Multicenter Studies as Topic", "Randomized Controlled Trials as Topic", "Treatment Outcome" ]
3056845
null
null
Methods
[SUBTITLE] Approaches assessing treatment effects [SUBSECTION] We investigated six statistical approaches to evaluating the effect of an experimental treatment on a continuous outcome compared with the control, for multicentre RCTs. Assuming baseline prognostic characteristics are approximately balanced between the treatment and control groups via randomization, we do not consider covariates other than centre effects in the models. The first four approaches use the individual patient as the unit of analysis, while the centre is the unit of analysis in the last two approaches. [SUBTITLE] Simple linear regression (Model A) [SUBSECTION] This approach models the impact of treatment (X) on outcome (Y) via a regression technique (Equation 1). In the context of a two-arm trial, this approach is the same as a two-sample t-test [6]. (1) Yij = β0 + β1Xij + eij, where Yij is the outcome of the i-th patient in the j-th centre, Xij stands for the treatment assignment (Xij = 1 for the treatment, Xij = 0 for the control), and eij is the random error assumed to follow a normal distribution with mean 0 and variance σe2. The intercept, β0, represents the mean outcome for the control group in all participating centres, and the slope β1 represents the effect of the treatment on the mean outcome. [SUBTITLE] Fixed-effects regression (Model B) [SUBSECTION] This model (Equation 2) allows a separate intercept for each centre (β0j) as a fixed effect, restricting the scope of statistical inference to the sample of participating centres in a RCT. The interpretation of β1 remains the same as in Model A. Models A and B were fitted using the linear model procedure 'lm( )' in R. (2) Yij = β0j + β1Xij + eij
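To make the first two specifications concrete, the following R sketch (illustrative only, not the authors' code) fits Models A and B to a hypothetical data frame d with a numeric 0/1 treatment indicator x, a centre factor centre and outcome y:

    ## Model A: simple linear regression, centre ignored (equivalent to a two-sample t-test)
    fitA <- lm(y ~ x, data = d)
    ## Model B: a separate fixed intercept for each centre
    fitB <- lm(y ~ x + centre, data = d)
    ## treatment-effect estimate (beta1) and its standard error from each fit
    summary(fitA)$coefficients["x", c("Estimate", "Std. Error")]
    summary(fitB)$coefficients["x", c("Estimate", "Std. Error")]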
[SUBTITLE] Mixed-effects regression (Model C) [SUBSECTION] Similar to Model B, the mixed-effects regression model allows a separate intercept for each centre, but assumes that the intercept β0j = β0 + b0j follows a normal distribution N(β0, σb2) and is thus a random effect. In Equation 3, b0j is the random deviation from the mean intercept β0, specific to each centre. (3) Yij = β0 + b0j + β1Xij + eij As in the previous models, the within-centre variability is reflected by σe2. The between-centre variability of the outcome is captured by σb2 in Model C. The variance and covariance of outcomes in the same or different centres can be expressed as: Var(Yij) = σb2 + σe2, Cov(Yij, Yi'j) = σb2, Cov(Yij, Yi'j') = 0. The intraclass correlation, which measures the correlation among outcomes within a centre, is given by ICC = σb2/(σe2 + σb2) and is assumed equal across all centres. We fitted Model C in R via the linear mixed-effects procedure 'lme( )' using the restricted maximum likelihood (REML) method [17,18]. [SUBTITLE] Generalized estimating equations (Model D) [SUBSECTION] The GEE method has gained increasing popularity among health science researchers for its availability in most statistical software. As opposed to the mixed-effects method, which estimates the treatment difference between arms and individual centre effects, the GEE approach models the marginal population-average treatment effect in two steps: 1) it fits a naïve linear regression assuming independence between observations within and across centres, and 2) it estimates the parameters of the working correlation matrix using residuals from the naïve model and refits the regression model to adjust the standard error and confidence interval for within-centre dependence [19]. As a result, the estimated impact of treatment on the outcome in the GEE model reflects the "combined" within- and between-centre relationship. GEE employs quasi-likelihood to estimate the regression coefficients iteratively, and a working correlation needs to be supplied to approximate the within-centre correlation. When the working correlation is mis-specified, the sandwich-based covariance estimator will lead to a robust yet less efficient estimate of the treatment effect in the GEE model [9]. Recently, statisticians found that the variance of the estimated treatment effect could be underestimated when the number of centres was small [20]. We therefore assessed the efficiency of GEE models using the procedure 'gee( )' in library(gee) in R. As in the mixed-effects model, an exchangeable correlation structure was assumed in fitting Model D.
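A corresponding sketch for Models C and D, again assuming a hypothetical data frame d ordered by centre (object names are illustrative and not taken from the authors' code):

    ## Model C: random centre intercepts, fitted by REML with nlme::lme()
    library(nlme)
    fitC <- lme(y ~ x, random = ~ 1 | centre, data = d, method = "REML")
    summary(fitC)$tTable["x", ]          # Value, Std.Error, DF, t-value, p-value for beta1
    ## Model D: GEE with an exchangeable working correlation via gee::gee()
    library(gee)
    fitD <- gee(y ~ x, id = centre, data = d, family = gaussian, corstr = "exchangeable")
    summary(fitD)$coefficients["x", ]    # estimate with naive and robust (sandwich) standard errors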
[SUBTITLE] Centre-level fixed-effects model (Model E - 1) [SUBSECTION] The centre-level model is a stratified analysis performed on the mean difference in outcome between the treatment and control arms within each centre. The overall treatment effect is estimated by a weighted average of the individual mean differences across all centres. The principle of inverse-variance weighting is often used (Figure 1). This model is essentially a centre-level inverse-variance weighted paired t-test (i.e. the treatment arm is paired to the control arm in the same centre) to account for within-centre correlation [10]. In the absence of intraclass correlation and under the assumption of equal sampling variation at the patient level, the inverse-variance weight reduces to ntj·ncj/(ntj + ncj) for the j-th centre, which can be further simplified to the size of the centre, nj = ntj + ncj, given equal numbers of patients in the two arms. Here ntj and ncj represent the number of patients in the treatment and control group, respectively, in the j-th centre. This form of the weighted analysis (without adjustment for covariates) was discussed extensively by many researchers [21-23]. We implemented Model E - 1 using the fixed-effects method for meta-analysis provided by the 'metacont( )' procedure in R. (Figure 1. A schematic of fixed- and random-effects centre-level models.)
[SUBTITLE] Centre-level random-effects model (Model E - 2) [SUBSECTION] A random-effects approach is used to aggregate mean effect differences not only across all participating centres but also across a population of centres represented by the sample. This model factors heterogeneity of the treatment effect among centres (i.e. random treatment by centre interaction) into its weighting scheme and captures within- and between-centre variation of the outcome. One should not confuse this method with the mixed-effects model using patient-level data (Model C). For Model E-2, the underlying true treatment effects are not a fixed single value for all centres; rather, they are considered random effects, normally distributed around a mean treatment effect with between-centre variation. Model C, on the other hand, treats centres as random intercepts and postulates the same treatment effect across all centres. Model E-2 does not serve as a fair comparator to the alternatives listed here, which assume no treatment by centre interaction. Preliminary investigation suggested E-2 could outperform E-1 in some situations; we therefore included E-2 in the study to advance understanding of these models. Details of the centre-level models are provided in Figure 1. Model E-2 was carried out using the DerSimonian-Laird random-effects method [24] via the 'metacont( )' procedure in R. The confidence interval for Model E-2 was constructed based on the within- and between-centre variances.
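The two centre-level analyses can be approximated with the meta package; the sketch below (an assumed workflow, not the authors' code) first builds per-centre arm summaries from the hypothetical data frame d (this presumes at least two patients per arm in every centre, as in the balanced designs) and then pools the within-centre mean differences:

    library(meta)
    ## per-centre summaries of the treatment (e) and control (c) arms
    agg <- do.call(rbind, lapply(split(d, d$centre), function(s) data.frame(
      n.e = sum(s$x == 1), mean.e = mean(s$y[s$x == 1]), sd.e = sd(s$y[s$x == 1]),
      n.c = sum(s$x == 0), mean.c = mean(s$y[s$x == 0]), sd.c = sd(s$y[s$x == 0]))))
    ## inverse-variance pooling of within-centre mean differences; the fixed-effect result
    ## corresponds to Model E-1 and the DerSimonian-Laird random-effects result to Model E-2
    mE <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c, data = agg,
                   sm = "MD", method.tau = "DL")
    summary(mE)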
[SUBTITLE] Study data simulation [SUBSECTION] We used Monte Carlo simulation to assess the performance of statistical models for analyzing parallel-group multicentre RCTs with a continuous outcome. We simulated the outcome, Y, using the mixed-effects linear regression model (Model C): Yij = β0 + b0j + β1Xij + eij for the i-th patient in the j-th centre, where Xij (= 0, 1) is the dummy variable for treatment allocation (i = 1...mj, j = 1...J). We generated the random error, e, from N(0, σe2 = 1). We set the true treatment effect (β1) to be 0.5 of the residual standard deviation (σe), an effect size suggested by the COMPETE II trial. This corresponds to a medium effect size according to Cohen's criterion [25]. To simulate centre effects, we employed the relationship between the ICC and σb2: ICC = σb2/(σe2 + σb2). To fully study the behaviour of the candidate models at various ICC levels, we considered the following values of ICC for completeness: 0.00, 0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50 and 0.75. This in turn set the corresponding σb2 values to be 0, 1/99, 1/19, 1/9, 3/17, 1/4, 1/3, 3/7, 7/13, 2/3, 9/11, 1 and 3. However, we focused interpretation of the results on lower values of ICC as they are more likely to occur in practice [26-28]. The original sample size was determined to be 84 per arm using a two-sided two-sample t-test (Model A) to ensure 90% power to detect a standardized effect size of 0.5 at a 5% type I error rate. We increased the final sample size to 90 per arm (power increases to 91.8%) to accommodate more combinations of the number and size of participating centres.
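As an illustration of this data-generating process, the following R sketch (our own illustrative code, with hypothetical function and argument names) simulates one balanced trial of design (a), converting a target ICC into the between-centre variance via σb2 = ICC·σe2/(1 − ICC):

    simulate_trial <- function(J = 18, m = 10, beta0 = 0, beta1 = 0.5,
                               sigma_e = 1, icc = 0.05) {
      sigma_b <- sqrt(icc / (1 - icc)) * sigma_e            # from ICC = sigma_b^2 / (sigma_e^2 + sigma_b^2)
      centre  <- factor(rep(seq_len(J), each = m))          # J centres of m patients each
      b0j     <- rnorm(J, 0, sigma_b)[as.integer(centre)]   # random centre intercepts
      x       <- as.vector(replicate(J, sample(rep(0:1, m / 2))))  # 1:1 allocation fixed within centre
      y       <- beta0 + b0j + beta1 * x + rnorm(J * m, 0, sigma_e)
      data.frame(centre = centre, x = x, y = y)
    }
    d <- simulate_trial(J = 18, m = 10, icc = 0.05)         # e.g. the scenario with 18 centres of 10 patients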
We assumed patients were randomly allocated to the two groups with a ratio of 1:1, the most common and efficient choice. We generated data in nine scenarios (Table 1) to assess model performance in three designs: (a) balanced studies where equal numbers of patients are enrolled from the study centres and the numbers of patients in the two arms are the same (fixed by design); (b) unbalanced studies where equal numbers of patients are enrolled from the study centres but the numbers of patients in the two arms within a centre may differ by chance while maintaining a 1:1 allocation ratio; and (c) unbalanced studies where the numbers of patients enrolled vary among centres, and block randomization of size 2 or 4 is used to reduce chance imbalance. For designs (a) and (b), we considered three combinations of centre size and number of centres: J = 45 centres, 4 patients per centre; J = 18 centres, 10 patients per centre; and J = 6 centres, 30 patients per centre. Design (c) mimicked a more realistic scenario for multicentre RCTs. For the first setup of design (c), we grouped 180 patients into 17 centres. It was constructed so that the centre composition and degree of allocation imbalance were analogous to the COMPETE II trial but at a smaller sample size: the number of patients per centre varied from 1 to 28; quartiles = 5, 10, 15; mean = 11; SD = 8; the percentage of unbalanced centres was between 47% and 70% depending on block size. (Table 1. Catalogue of simulation designs. ICC: intraclass (intracentre) correlation.) To compare results from the various models in analyzing the COMPETE II trial, and to assess the accuracy and precision of the effect estimates, we included an additional scenario in design (c) to imitate this motivating example more closely with respect to sample size and centre composition (scenario 9). We generated treatment allocation (X1) and outcome (Y) for 511 patients in 46 centres, where the number of patients per centre was set exactly the same as observed in the COMPETE II trial (Table 2). In particular, three centres recruiting only one patient each were simulated. Analogously to COMPETE II, a fixed block size of 6 was used to assign patients to treatments. The same simulation model was employed as in the previous scenarios, but a separate set of parameters based on the results of the COMPETE II trial was used (Table 3): β0 = 1.34, β1 = 1.26, σb2 = 1, σe2 = 7, ICC = 0.125. (Table 2. Centre composition of the COMPETE II trial.) (Table 3. Estimates of intervention effects in the COMPETE II trial. SE: standard error; CI: confidence interval; σe2: within-centre variance; σb2: between-centre variance; ICC: intraclass (intracentre) correlation.) We generated 1000 simulations for each of the 13 ICC values under each of the first eight scenarios and 1000 simulations for the specified ICC value for the ninth scenario. Separate sets of centre effects were simulated for each scenario and each simulation 1-1000. We chose to simulate 1000 replicates so that the simulation standard deviation for the empirical power at a nominal level of 90% in the absence of clustering was controlled at 1%. This also ensured that the standard deviations of the coverage of the confidence interval and the empirical power did not exceed 1.6%.
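For the unbalanced designs, allocation within a centre can be generated with permuted blocks; the helper below is a sketch under the assumptions described above (block sizes 2, 4 or 6), not the randomisation code actually used in COMPETE II:

    block_allocation <- function(n, block_size = 4) {
      n_blocks <- ceiling(n / block_size)
      alloc <- unlist(lapply(seq_len(n_blocks),
                             function(b) sample(rep(0:1, block_size / 2))))
      alloc[seq_len(n)]   # an incomplete final block leaves room for chance imbalance within the centre
    }
    ## e.g. a centre recruiting 7 patients, randomized in blocks of 4
    block_allocation(7, block_size = 4)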
[SUBTITLE] Comparison of analytic models [SUBSECTION] We applied the six statistical models to each simulated dataset. For each model, we calculated the bias, the simulation standard deviation (SD), the average of the estimated standard errors (SE) and the mean squared error (MSE) of the point estimator of the treatment effect (i.e. β1), the empirical coverage of the 95% confidence interval around β1, and the empirical statistical power. We constructed confidence intervals based on the t-distribution for Models A - C, and Wald intervals based on the normal approximation for Models D and E. We estimated bias as the difference between the average estimate of β1 over the 1000 simulated datasets and the true effect. The simulation (or empirical) SD was calculated as the standard deviation of the estimated β1s across simulations, indicating the precision of the estimator. We also obtained the average of the estimated SEs across the 1000 simulations to assess the accuracy of the variance estimator from each simulated dataset. The overall error rate of the point estimator was captured by the estimated MSE, computed as the average squared difference between the estimated β1 and the true value across the 1000 datasets. Furthermore, we reported the performance of the interval estimators in each model. The empirical coverage was estimated as the proportion of 95% confidence intervals that covered the true β1, and the empirical power as the proportion of confidence intervals that rejected the false null hypothesis, i.e. for which zero lay outside the CI. All datasets were simulated and analyzed in R version 2.4.1 [29].
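These performance measures can be computed from the simulation output roughly as follows; est and se denote the vectors of point estimates and model-based standard errors over the 1000 replicates, and the critical value would be taken from the t distribution for Models A - C and from the normal distribution for Models D and E (a sketch of an assumed workflow, not the authors' code):

    performance <- function(est, se, beta1 = 0.5, crit = qnorm(0.975)) {
      lower <- est - crit * se
      upper <- est + crit * se
      c(bias     = mean(est) - beta1,                     # average estimate minus true effect
        emp_sd   = sd(est),                               # simulation (empirical) SD of the estimator
        avg_se   = mean(se),                              # average model-based SE
        mse      = mean((est - beta1)^2),                 # mean squared error
        coverage = mean(lower <= beta1 & beta1 <= upper), # empirical coverage of the 95% CI
        power    = mean(lower > 0 | upper < 0))           # CIs excluding zero reject H0: beta1 = 0
    }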
null
null
null
null
[ "Background", "Approaches assessing treatment effects", "Simple linear regression (Model A)", "Fixed-effects regression (Model B)", "Mixed-effects regression (Model C)", "Generalized estimating equations (Model D)", "Centre-level fixed-effects model (Model E - 1)", "Centre-level random-effects model (Model E - 2)", "Study data simulation", "Comparison of analytic models", "Results", "Analysis of COMPETE II trial data", "Balanced design with equal centre size", "Properties of point estimates", "Properties of interval estimates", "Design with equal centre size and chance imbalance", "Properties of point estimates", "Properties of interval estimates", "Design with unequal centre sizes and chance imbalance", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "A multicentre randomized control trial (RCT) is an experimental study \"conducted according to a single protocol but at more than one site and, therefore, carried out by more than one investigator\"[1]. Multicentre RCTs are usually carried out for two main reasons. First, they provide a feasible way to accrue sufficient participants to achieve reasonable statistical power to detect the effect of an experimental treatment compared with some control treatment. Second, by enrolling participants of more diverse demographics from a broader spectrum of geographical locations and various clinical settings, multicentre RCTs increase generalizability of the experimental treatment for future use [1].\nRandomization is the most important feature of RCTs, for on average it balances known and unknown baseline prognostic factors between treatment groups, in addition to minimizing selection bias. Nevertheless, randomization does not guarantee complete balance of participant characteristics especially when the sample size is moderate or small. Stratification is a useful technique to guard against potential bias introduced by imbalance in key prognostic factors. In multicentre RCTs, investigators often use a stratified randomization design to achieve balance over key differences in study population (e.g. environmental, socio-economic or demographical factors) and management team (e.g. patient administration and management) at centre level to improve precision of statistical analysis [2]. Regulatory agencies recommend that stratification variables in design should usually be accounted for in analysis, unless the potential value of adjustment is questionable (e.g. very few subjects per centre) [1].\nThe current study was motivated by the COMPETE II trial which was designed to determine if an integrated computerized decision support system shared by primary care providers and patients could improve management of diabetes [3]. A total number of 511 patients were recruited from 46 family physician practices. Individual patients were randomized to one of the two intervention groups stratified by physician practice using permuted blocks of size 6.The number of patients treated by one physician varied from 1 to 26 (interquartiles = 7.25, 11, 15; mean = 11; standard deviation [SD] = 6). The primary outcome was a continuous variable representing the change of a 10-point process composite score based on eight diabetes-related component variables from baseline to a mean of 5.9 months' follow-up. A positive change indicated a favourable result. During the study, the possibility of clustering within physician practice and its consequence on statistical analysis was a concern to the investigators. The phenomenon of clustering emerges when outcomes observed from patients managed by the same centre, practice or physician are more similar than outcomes from different centres, practices or physicians. Clustering often arises in situations where patients are selective about which centre they belong to, patients in a centre or practice are managed according to the same clinical care paths, or patients influence each other in the same cluster [4]. Intraclass (or intracentre) correlation (ICC) is often used to quantify the average correlation between any two outcomes within the same cluster [5]. It is a number between zero and one. A large value indicates that within-cluster observations are similar relative to observations from other clusters and each observation within cluster contains less unique information. 
This implies that the independence assumption on which many standard statistical models are based is violated. An ICC of zero indicates that individual observations within the same cluster are uncorrelated and that different clusters on average have similar observations.\nThrough a literature review, we identified six statistical methods that are sometimes employed to analyze continuous outcomes in multicentre RCTs: A. simple linear regression (two-sample t-test), B. fixed-effects regression, C. mixed-effects regression, D. generalized estimating equations (GEE), E-1. fixed-effects centre-level analysis, and E-2. random-effects centre-level analysis. The first four methods use the patient as the unit of analysis, yet address centre effects differently [6-8]. Simple linear regression completely ignores centre effects, which are likely to arise from two sources: (1) possible differences in environmental, socio-economic or treatment factors between centres, and (2) potential correlation among patients within centres. Although stratified randomization attempts to minimize the impact of centre on the standard error of the treatment effect by ensuring that the treated and control groups are largely balanced with respect to centre, failure to control for stratification in the analysis will likely inflate the variance of the effect estimate. The fixed-effects model treats each participating centre as a fixed intercept to control for possible population or environmental differences among centres. This model assumes that study subjects from the same centre have independent outcomes, i.e. the intraclass correlation is fixed at zero. The mixed-effects model incorporates the dependence of outcomes within a centre and treats centres as random intercepts. Proposed by Liang and Zeger [9], the generalized estimating equation (GEE) model extends generalized linear regression with continuous, categorical or count outcomes to correlated observations within clusters. Under a commonly used and perhaps oversimplified assumption, that the degree of similarity between any two outcomes from a centre is equal, an exchangeable correlation structure can be used to assess the treatment effect in Models C and D, though the within- and between-centre variances (σe2 and σb2) are estimated differently in these two models. Methods E-1 and E-2 are routinely employed to combine information from different studies in meta-analysis [10]. One can also apply them to aggregate treatment effects over multiple centres [11-13]. The overall effect is obtained by averaging the within-centre effect differences over centres, using inverse-variance weighting.\nTo date, only a few studies have been carried out to compare the performance of statistical models in analyzing multicentre RCTs using Monte Carlo simulation [6,7,14], whereas many studies have assessed the impact of the ICC in cluster randomization trials. Moerbeek et al. [6] compared the simple linear regression model, fixed-effects regression and fixed-effects centre-level analysis with equal centre sizes. Pickering et al. [7] examined the bias, precision and power of three methods (simple regression, fixed-effects and mixed-effects regression), assuming block randomization of size 2 or 4, on a continuous outcome. In the presence of imbalance and non-orthogonality, they found that ignoring centres or incorporating them as random effects led to greater precision and a smaller type II error compared with treating centres as fixed effects. The performance of the GEE approach and the centre-level methods was not investigated in that work. 
Jones et al. [14] compared the fixed-effects and random-effects regression models to a two-step frequentist procedure as well as a Bayesian model, in the presence of treatment by centre interaction, and recommended the fixed-effects weighted method for future analysis of multicentre trials. The investigation was further expanded to assessing correlated survival outcomes from large multicentre cancer trials. A series of random-effects approaches were proposed to account for centre or treatment by centre heterogeneity in proportional hazards models [15,16].\nA lack of definitive evidence on which models perform best in various situations led to this comprehensive simulation study examining the performance of all six commonly used models with continuous outcomes. The objective was to assess their comparative performance in terms of bias, precision (simulation standard deviation (SD) and average estimated SE), and mean squared error (MSE) of the point estimator of the treatment effect, empirical coverage of the 95% confidence interval (CI) and the empirical statistical power, over a wide spectrum of ICC values and centre sizes. We did not consider treatment by centre interaction in this study, partly because clinicians and trialists have been making efforts to standardize the conduct and management of multicentre trials via, for instance, uniform patient selection criteria, staff training, and trial monitoring and auditing to reduce heterogeneity of treatment effects among centres. Furthermore, it is uncommon to find clinical trials designed with sufficient power to detect treatment by covariate interactions.\nIn this paper, we survey six methods to investigate the effect of a treatment in multicentre RCTs in detail. We outline the design and analysis of an extensive simulation study, and report how model performance varies with the ICC, centre size and the number of centres. We also present the estimated effect of the computer-aided decision support system on the management of diabetes using the different methods.",
Fixed-effects regression (Model B)

This model (Equation 2) allows a separate intercept for each centre (β_0j) as a fixed effect, restricting the scope of statistical inference to the sample of centres participating in the RCT. The interpretation of β_1 remains the same as in Model A. Models A and B were fitted using the linear model procedure 'lm( )' in R:

$$Y_{ij} = \beta_{0j} + \beta_1 X_{ij} + e_{ij} \qquad (2)$$

Mixed-effects regression (Model C)

Similar to Model B, the mixed-effects regression model allows a separate intercept per centre, but assumes that the intercept β_0j = β_0 + b_0j follows a normal distribution N(β_0, σ_b^2) and is thus a random effect. In Equation 3, b_0j is the random deviation from the mean intercept β_0, specific to each centre:

$$Y_{ij} = \beta_0 + b_{0j} + \beta_1 X_{ij} + e_{ij} \qquad (3)$$

As in the previous models, the within-centre variability is reflected by σ_e^2; the between-centre variability of the outcome is captured by σ_b^2 in Model C. The variance and covariance of outcomes in the same or different centres can be expressed as

$$\operatorname{Var}(Y_{ij}) = \sigma_b^2 + \sigma_e^2, \qquad \operatorname{Cov}(Y_{ij}, Y_{i'j}) = \sigma_b^2, \qquad \operatorname{Cov}(Y_{ij}, Y_{i'j'}) = 0.$$

The intraclass correlation, which measures the correlation among outcomes within a centre, is given by σ_b^2 / (σ_e^2 + σ_b^2) and is assumed equal across all centres. We fitted Model C in R via the linear mixed-effects procedure 'lme( )' using the restricted maximum likelihood (REML) method [17,18].
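Under the same hypothetical data frame, Models B and C differ only in whether centre enters as a fixed factor or as a random intercept. A sketch using base R and the nlme package (which supplies the 'lme( )' procedure mentioned above; accessor names may vary slightly between package versions):

```r
library(nlme)  # provides lme()

## Model B: fixed-effects regression with one intercept per centre
fit_b <- lm(y ~ treat + factor(centre), data = dat)
summary(fit_b)$coefficients["treat", ]   # beta_1 estimate, SE and t-test

## Model C: mixed-effects regression with a random intercept per centre (REML)
fit_c <- lme(fixed = y ~ treat, random = ~ 1 | centre,
             data = dat, method = "REML")
summary(fit_c)$tTable["treat", ]         # beta_1 estimate, SE and t-test

## Variance components and the implied intraclass correlation for Model C
vc       <- VarCorr(fit_c)                        # rows: "(Intercept)", "Residual"
sigma_b2 <- as.numeric(vc["(Intercept)", "Variance"])
sigma_e2 <- as.numeric(vc["Residual", "Variance"])
sigma_b2 / (sigma_b2 + sigma_e2)                  # ICC = sigma_b^2 / (sigma_e^2 + sigma_b^2)
```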
Generalized estimating equations (Model D)

The GEE method has gained increasing popularity among health science researchers because of its availability in most statistical software. As opposed to the mixed-effects method, which estimates the treatment difference between arms together with individual centre effects, the GEE approach models the marginal, population-averaged treatment effect in two steps: 1) it fits a naïve linear regression assuming independence between observations within and across centres, and 2) it estimates the parameters of the working correlation matrix from the residuals of the naïve model and refits the regression model, adjusting the standard error and confidence interval for within-centre dependence [19]. As a result, the estimated impact of treatment on the outcome in the GEE model reflects the "combined" within- and between-centre relationship. GEE employs quasi-likelihood to estimate the regression coefficients iteratively, and a working correlation structure needs to be supplied to approximate the within-centre correlation. When the working correlation is mis-specified, the sandwich-based covariance estimator leads to a robust yet less efficient estimate of the treatment effect in the GEE model [9]. It has also been found that the variance of the estimated treatment effect can be underestimated when the number of centres is small [20]. We therefore assessed the efficiency of the GEE model using the 'gee( )' procedure in the gee library in R. As in the mixed-effects model, an exchangeable correlation structure was assumed in fitting Model D.
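A corresponding sketch for Model D with the gee package (again using the hypothetical `dat`); 'gee( )' expects the rows of each cluster to be contiguous, so the data are sorted by centre first, and the robust (sandwich) standard error is used for the Wald interval:

```r
library(gee)  # provides gee()

## Model D: GEE with an exchangeable working correlation
dat <- dat[order(dat$centre), ]               # clusters must occupy adjacent rows
fit_d <- gee(y ~ treat, id = centre, data = dat,
             family = gaussian, corstr = "exchangeable")
coefs <- summary(fit_d)$coefficients          # naive and robust SEs
est <- coefs["treat", "Estimate"]
se  <- coefs["treat", "Robust S.E."]
est + c(-1, 1) * qnorm(0.975) * se            # Wald-type 95% CI for beta_1
```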
Centre-level fixed-effects model (Model E-1)

The centre-level model is a stratified analysis performed on the mean difference in outcome between the treatment and control arms within each centre. The overall treatment effect is estimated by a weighted average of the individual mean differences across all centres, usually using the principle of inverse-variance weighting (Figure 1). This model is essentially a centre-level, inverse-variance weighted paired t-test (i.e. the treatment arm is paired with the control arm in the same centre) that accounts for within-centre correlation [10]. In the absence of intraclass correlation, and under the assumption of equal sampling variation at the patient level, the inverse-variance weight reduces to n_tj n_cj / (n_tj + n_cj) for the j-th centre, which is proportional to the centre size n_j = n_tj + n_cj when the two arms contain equal numbers of patients. Here n_tj and n_cj represent the numbers of patients in the treatment and control group, respectively, in the j-th centre. This form of weighted analysis (without adjustment for covariates) has been discussed extensively [21-23]. We implemented Model E-1 using the fixed-effects method for meta-analysis provided by the 'metacont( )' procedure in R.

Figure 1. A schematic of the fixed- and random-effects centre-level models.

Centre-level random-effects model (Model E-2)

A random-effects approach is used to aggregate mean effect differences not only across all participating centres but also across the population of centres represented by the sample. This model factors heterogeneity of the treatment effect among centres (i.e. random treatment by centre interaction) into its weighting scheme and captures both within- and between-centre variation of the outcome. This method should not be confused with the mixed-effects model using patient-level data (Model C). In Model E-2, the underlying true treatment effects are not a single fixed value for all centres; rather, they are considered random effects, normally distributed around a mean treatment effect with between-centre variation. Model C, on the other hand, treats centres as random intercepts and postulates the same treatment effect across all centres. Model E-2 is therefore not a fair comparator to the alternatives listed here, which assume no treatment by centre interaction; however, preliminary investigation suggested that E-2 could outperform E-1 in some situations, so we included it in the study to advance understanding of these models. Details of the centre-level models are provided in Figure 1.
Model E-2 was carried out using the DerSimonian-Laird random-effects method [24] via the 'metacont( )' procedure in R. The confidence interval for Model E-2 was constructed from the within- and between-centre variances.
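For the two centre-level analyses, arm-specific summaries are first computed per centre and then pooled with 'metacont( )' from the meta package; the common (fixed-effect) pooled mean difference corresponds to Model E-1 and the DerSimonian-Laird random-effects pooled mean difference to Model E-2. A rough sketch under the same hypothetical data frame (argument and field names may differ between versions of the meta package, and centres with fewer than two patients per arm would have to be dropped because their within-arm SD is undefined):

```r
library(meta)  # provides metacont()

## Per-centre summaries of each arm (assumes every centre has patients in both arms)
trt <- dat[dat$treat == 1, ]
ctl <- dat[dat$treat == 0, ]
smry <- data.frame(
  centre = sort(unique(dat$centre)),
  n.e    = as.vector(tapply(trt$y, trt$centre, length)),
  mean.e = as.vector(tapply(trt$y, trt$centre, mean)),
  sd.e   = as.vector(tapply(trt$y, trt$centre, sd)),
  n.c    = as.vector(tapply(ctl$y, ctl$centre, length)),
  mean.c = as.vector(tapply(ctl$y, ctl$centre, mean)),
  sd.c   = as.vector(tapply(ctl$y, ctl$centre, sd)))

## Pool the within-centre mean differences across centres
m <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c,
              studlab = centre, data = smry, sm = "MD",
              method.tau = "DL")   # DerSimonian-Laird for the random-effects pool
summary(m)  # fixed-effect pool ~ Model E-1; random-effects pool ~ Model E-2
```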
We used Monte Carlo simulation to assess the performance of the statistical models in analyzing parallel-group multicentre RCTs with a continuous outcome. We simulated the outcome, Y, using the mixed-effects linear regression model (Model C), Y_ij = β_0 + b_0j + β_1 X_ij + e_ij, for the i-th patient in the j-th centre, where X_ij (= 0, 1) is the dummy variable for treatment allocation (i = 1, ..., m_j; j = 1, ..., J). We generated the random error, e, from N(0, σ_e^2 = 1). We set the true treatment effect (β_1) to 0.5 residual standard deviations (σ_e), an effect size suggested by the COMPETE II trial; this corresponds to a medium effect size according to Cohen's criterion [25]. To simulate centre effects, we employed the relationship between the ICC and σ_b^2, ICC = σ_b^2 / (σ_e^2 + σ_b^2). To study the behaviour of the candidate models across the full range of ICC values, we considered, for completeness, ICC = 0.00, 0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50 and 0.75; the corresponding σ_b^2 values are 0, 1/99, 1/19, 1/9, 3/17, 1/4, 1/3, 3/7, 7/13, 2/3, 9/11, 1 and 3. However, we focused the interpretation of the results on the lower ICC values, as these are more likely to occur in practice [26-28].

The original sample size was determined to be 84 per arm using a two-sided two-sample t-test (Model A) to ensure 90% power to detect a standardized effect size of 0.5 at a 5% type I error rate. We increased the final sample size to 90 per arm (power increases to 91.8%) to accommodate more combinations of the number and size of participating centres. We assumed that patients were randomly allocated to the two groups in a 1:1 ratio, the most common and efficient choice. We generated data under nine scenarios (Table 1) to assess model performance in three designs: (a) balanced studies, where equal numbers of patients are enrolled from the study centres and the numbers of patients in the two arms are the same (fixed by design); (b) unbalanced studies, where equal numbers of patients are enrolled from the study centres but the numbers of patients in the two arms within a centre may differ by chance while maintaining a 1:1 allocation ratio; and (c) unbalanced studies, where the numbers of patients enrolled vary among centres and block randomization with block size 2 or 4 is used to reduce chance imbalance. For designs (a) and (b), we considered three combinations of centre size and number of centres: J = 45 centres with 4 patients per centre; J = 18 centres with 10 patients per centre; and J = 6 centres with 30 patients per centre. Design (c) mimicked a more realistic scenario for multicentre RCTs.
For the first setup of design (c), we grouped 180 patients into 17 centres. It was constructed so that the centre composition and the degree of allocation imbalance were analogous to the COMPETE II trial, but at a smaller sample size: the number of patients per centre varied from 1 to 28 (quartiles = 5, 10, 15; mean = 11; SD = 8), and the percentage of unbalanced centres was between 47% and 70%, depending on block size.

Table 1. Catalogue of simulation designs. ICC: intraclass (intracentre) correlation.

To compare the results from the various models in analyzing the COMPETE II trial, and to assess the accuracy and precision of the effect estimates, we included an additional scenario in design (c) that imitated this motivating example more closely with respect to sample size and centre composition (scenario 9). We generated treatment allocation (X_1) and outcome (Y) for 511 patients in 46 centres, where the number of patients per centre was set exactly as observed in the COMPETE II trial (Table 2); in particular, three centres recruiting only one patient each were simulated. Analogously to COMPETE II, a fixed block size of 6 was used to assign patients to treatments. The same simulation model was employed as in the previous scenarios, but with a separate set of parameters based on the results of the COMPETE II trial (Table 3): β_0 = 1.34, β_1 = 1.26, σ_b^2 = 1, σ_e^2 = 7, ICC = 0.125.

Table 2. Centre composition of the COMPETE II trial.

Table 3. Estimates of intervention effects in the COMPETE II trial. SE: standard error; CI: confidence interval; σ_e^2: within-centre variance; σ_b^2: between-centre variance; ICC: intraclass (intracentre) correlation.

We generated 1000 simulated datasets for each of the 13 ICC values under each of the first eight scenarios, and 1000 simulated datasets at the specified ICC value for the ninth scenario. Separate sets of centre effects were simulated for each scenario and each of the 1000 simulations. We chose 1000 replicates so that the simulation standard deviation of the empirical power, at a nominal level of 90% in the absence of clustering, was controlled at 1%; this also ensured that the standard deviations of the coverage of the confidence interval and of the empirical power did not exceed 1.6%.
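A data-generating sketch for the simplest balanced setting (a scenario-1-type design) is given below; the function `simulate_trial` and its arguments are our own illustrative names rather than the original simulation code, and the block randomization used in design (c) is omitted:

```r
## One balanced multicentre dataset under Model C:
##   Y_ij = beta0 + b_0j + beta1 * X_ij + e_ij,  e_ij ~ N(0, 1),  b_0j ~ N(0, sigma_b^2),
## with sigma_b^2 derived from the target ICC via ICC = sigma_b^2 / (1 + sigma_b^2).
simulate_trial <- function(J = 45, m = 4, beta0 = 0, beta1 = 0.5, icc = 0.05) {
  sigma_b2 <- icc / (1 - icc)                   # because sigma_e^2 = 1
  b0     <- rnorm(J, 0, sqrt(sigma_b2))         # random centre intercepts
  centre <- rep(seq_len(J), each = m)
  treat  <- unlist(lapply(seq_len(J), function(j) sample(rep(0:1, m / 2))))  # 1:1 within centre
  y <- beta0 + b0[centre] + beta1 * treat + rnorm(J * m, 0, 1)
  data.frame(centre = factor(centre), treat = treat, y = y)
}
dat <- simulate_trial(J = 45, m = 4, icc = 0.05)   # scenario 1 at ICC = 0.05

## Check of the stated design power: 90 patients per arm, delta = 0.5 SD, alpha = 0.05
power.t.test(n = 90, delta = 0.5, sd = 1, sig.level = 0.05)$power   # approximately 0.918
```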
We applied the six statistical models to each simulated dataset. For each model, we calculated the bias, the simulation standard deviation (SD), the average estimated standard error (SE) and the mean squared error (MSE) of the point estimator of the treatment effect (i.e. β_1), the empirical coverage of the 95% confidence interval around β_1, and the empirical statistical power. We constructed confidence intervals based on the t-distribution for Models A - C, and Wald intervals based on the normal approximation for Models D and E. We estimated bias as the difference between the average estimate of β_1 over the 1000 simulated datasets and the true effect. The simulation (empirical) SD was calculated as the standard deviation of the estimated β_1 values across simulations, indicating the precision of the estimator. We also obtained the average of the estimated SEs over the 1000 simulations to assess the accuracy of the variance estimator from each simulated dataset. The overall error rate of the point estimator was captured by the estimated MSE, computed as the average squared difference between the estimated β_1 and the true value across the 1000 datasets. Furthermore, we reported the performance of the interval estimators for each model: the empirical coverage was estimated as the proportion of 95% confidence intervals that covered the true β_1, and the empirical power as the proportion of confidence intervals that rejected the false null hypothesis, i.e. for which zero lay outside the CI. All datasets were simulated and analyzed in R version 2.4.1 [29].
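These performance measures can be computed in a few lines from the per-replicate output; in this sketch, `est`, `se`, `lo` and `hi` are assumed to be vectors of length 1000 holding, for one model at one ICC value, the estimated treatment effects, their estimated SEs and the lower and upper 95% confidence limits (illustrative names, not the original code):

```r
true_beta1 <- 0.5

bias     <- mean(est) - true_beta1                      # bias of the point estimator
emp_sd   <- sd(est)                                     # simulation (empirical) SD
ave_se   <- mean(se)                                    # average estimated SE
mse      <- mean((est - true_beta1)^2)                  # mean squared error
coverage <- mean(lo <= true_beta1 & true_beta1 <= hi)   # empirical coverage of the 95% CI
power    <- mean(lo > 0 | hi < 0)                       # empirical power: CI excludes zero
```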
Analysis of COMPETE II trial data

We applied all six models to the COMPETE II data; the results are reported in Table 3. Approximately equal numbers of patients were randomized to the intervention and control groups within each family doctor, giving 253 and 258 patients in the intervention and control group, respectively. Among the 46 family physicians, 11 (24%) treated equal numbers of patients in the two arms, 24 (52%) treated one more patient in the intervention or control arm, 10 (22%) managed two more patients in either arm, and one (2%) managed three more patients in one arm than in the other. All baseline characteristics were roughly balanced between arms [3]. The analyses using patient-level data produced similar estimates of β_1, and the effect size was around 0.5 times the corresponding residual standard deviation. The standard error of the estimated β_1 decreased from 0.25 (Model A) to 0.23 (Models B, C) and then to 0.19 (Model D) when centre effects were adjusted for, leading to narrower CIs around the estimated β_1 in Models B - D. The intraclass correlation was estimated as 0.138 in Model C and 0.124 in Model D. The two centre-level analyses returned slightly larger estimates of β_1 than the individual patient-level models. In fact, the minimal variance between physicians indicated no noticeable heterogeneity between physicians (τ² = 0, I² = 0), so Models E-1 and E-2 gave identical estimates. Zero was not contained in any of the 95% confidence intervals; therefore, all models led to the conclusion that the experimental intervention significantly improved patient management over usual care, based on the change in the composite process score.

Balanced design with equal centre size

Properties of point estimates

Table 4 summarizes descriptive statistics for the point estimator of the treatment effect in Models A - E for three values in the lower range of the ICC spectrum, in the balanced design. The point estimates of β_1 were unbiased in all six models for all ICC values. Somewhat surprisingly, the point estimates in Model A, which ignores stratification and clustering, were invariant to the ICC, and the same estimates were returned by the four patient-level models for each simulation. In fact, when treatments are allocated in the same proportion in all centres, centre has no association with treatment allocation, so adjusting for centre effects or not has little impact on the point estimate of the treatment - response relationship for a continuous outcome. For this reason, the different ways of incorporating between-centre information (Models B - D) led to the same estimate of the treatment contrast in the balanced design. Identical point estimates in turn led to the same empirical SD and overall error rate (measured by the MSE) for Models A - D, regardless of the ICC. Across the different ICC values and scenarios 1 - 3, Models B and C yielded accurate estimates of the standard error of β̂_1 that approximated the empirical SD and the true standard deviation of 0.149, calculated from the best linear unbiased estimator under the simulation model, i.e. Model C [18]. From Table 4 we found that the estimated standard error of β̂_1 in Model A increased with the ICC in each scenario, deviating from the corresponding empirical SD. The standard error could be slightly underestimated in Model D when the number of centres was small (Table 4, scenarios 2 and 3, comparing the empirical SD with the average SE); this agrees with previous work on the small-sample properties of the GEE model [20].

Table 4. Properties of point estimates of the treatment effect from Models A - E in scenarios 1 to 3. SD: empirical standard deviation; Ave. SE: average estimated SE; MSE: mean squared error; ICC: intraclass (intracentre) correlation.

The centre-level analyses produced a larger empirical SD and MSE for β̂_1 than the patient-level analyses when centre sizes were small or moderate (Table 4), and the difference decreased as centre size increased. When only a few patients were enrolled per centre, the fixed-effects centre-level point estimator in Model E-1 had a large sampling variation that was severely underestimated at all ICC values. The random-effects model (E-2), based on the DerSimonian-Laird method, on the other hand appeared to yield a valid SE for β̂_1 that was on average greater than the SEs from the patient-level models. Across the different combinations of centre size and number of centres, the average estimated SE of β̂_1 over all simulations was largest for Model E-2, exceeding the SE estimates in Models B and C, which in turn exceeded the SE estimated in Model E-1. In this study, although the datasets were generated so that the treatment effects were homogeneous among centres (i.e. no treatment by centre interaction), the random-effects analysis using centre-level data outperformed the fixed-effects analysis when the centre size was small, because Model E-2 took into account the observed "heterogeneity" arising from imprecise estimation of the centre mean differences and their associated standard errors.
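As a quick check of the reported true standard deviation of 0.149: in the fully balanced design the two arms are represented equally in every centre, so the random centre intercepts cancel from the difference in arm means, and with σ_e^2 = 1 and 90 patients per arm,

$$\operatorname{Var}(\hat{\beta}_1) = \sigma_e^2\left(\frac{1}{n_t} + \frac{1}{n_c}\right) = \frac{1}{90} + \frac{1}{90} = \frac{1}{45} \approx 0.0222, \qquad \operatorname{SD}(\hat{\beta}_1) = \sqrt{1/45} \approx 0.149 .$$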
Properties of interval estimates

The empirical coverage of the confidence intervals (CIs) and the statistical power in the balanced design are displayed in Table 5. Models B and C produced similar coverage, close to the nominal value of 95%, across the different ICC values and centre compositions. Model A gave conservatively high coverage that increased with the ICC, illustrating that for moderate to large ICC values the CIs from Model A were unduly wide because of the overestimated SE for β̂_1. The empirical coverage of the CIs from Models D and E-1 was, on average, further below 95% than that of Models B and C; this is likely caused by underestimation of the standard error in Models D and E-1, and is associated with an apparent increase in power in the first three scenarios. For Model D, the coverage dropped below 90% when the number of centres was reduced to six in scenario 3. The coverage of Model E-1 was too low to be useful when studies were conducted at many smaller centres (scenario 1).
However, coverage increased gradually with centre size and approached 95% when there were 30 patients per centre (scenario 3). Model E-2 showed a coverage pattern similar to that of E-1, although its coverage was closer to 95%. Models B and C largely maintained the nominal power of 91.8% regardless of the ICC value. The power of Model A decreased dramatically as the ICC departed from 0, indicating that this model failed to adjust for between-centre variation, or equivalently within-centre correlation, in the outcome measure. The nominal type II error rate (8%) was maintained by Models D and E-1 in scenarios 1 - 3. Model E-2 generally had lower power to detect the true treatment effect because of a larger standard error that reflects both the within-centre variability and treatment by centre interaction; interestingly, this power rose as the number of centres decreased, approaching 88% in scenario 3.

Table 5. Coverage of the 95% interval estimate of the treatment effect and statistical power of Models A - E in scenarios 1 to 3. Cover. of CI: coverage proportion of the 95% confidence interval; ICC: intraclass (intracentre) correlation.

Overall, Models B and C showed very similar performance and outperformed the other models in the balanced design. Models C and D converged to a solution in all simulations.
Design with equal centre size and chance imbalance

Properties of point estimates

The performance of the different models in multicentre studies with equal centre sizes, a 1:1 allocation ratio and chance imbalance is displayed in Tables 6 and 7. The results were similar to those for the balanced design, though a few differences emerged. The unbalanced allocation of patients to the treatment arms, arising purely from within-centre variation, introduced chance imbalance (in both directions) into the treatment - response relationship; hence completely ignoring centre effects (as in Model A) led to unbiased yet less efficient estimates for large ICC values. Model B could be less precise than Model A for small to moderate ICC values, a phenomenon previously reported by Pickering and Weatherall [7]. As in the balanced design, the fixed- and random-effects models performed comparably across ICC values, largely because the fixed and random intercepts for the study centres cancel out when estimating the treatment contrast in Models B and C and thus have little impact on the estimation of the fixed effect contrast across centres. However, the fixed-effects model produced a larger empirical standard deviation and average standard error in scenario 4, a study composed of many centres each managing only a few patients.
Adjusting for between-centre variation through random effects in Model C, or using the population-averaged analysis in Model D, allowed information to be borrowed across centres and resulted in greater precision.

Table 6. Properties of point estimates of the treatment effect from Models A - E in scenarios 4 to 6. SD: empirical standard deviation; Ave. SE: average estimated SE; MSE: mean squared error; ICC: intraclass (intracentre) correlation.

Table 7. Coverage of the 95% interval estimate of the treatment effect and statistical power of Models A - E in scenarios 4 to 6. Cover. of CI: coverage proportion of the 95% confidence interval; ICC: intraclass (intracentre) correlation.

Properties of interval estimates

The results were similar to those for the balanced design. The patient-level models A - C maintained nominal coverage of the confidence intervals at the different ICC values, whereas the other models were liable to produce lower coverage under certain conditions. Among all models, Models C and D achieved the empirical power closest to the nominal value of 91.8% across the different centre sizes. When the centre size was small and the number of centres large (scenario 4), the power of Models C and D also decreased with the ICC, a pattern that was less obvious in scenarios 5 and 6. Models C and D achieved convergence in analyzing all simulated datasets.
Design with unequal centre sizes and chance imbalance

The properties of the point and interval estimates in scenarios 7 and 8 (unequal centre sizes with chance imbalance) were close to the results in the previous two designs. In particular, the comparative performance of the six models lay in the middle ground between scenarios 2 and 5, because the imbalance between the two treatments within a centre could be no more than half of the block size. As similar results were observed for block sizes 2 and 4, summary statistics for block size 4 are plotted in Figures 2, 3, 4 and 5. The results suggested that unequal centre sizes had little impact on model performance, although they were associated with a slight enlargement of the empirical variance of β̂1 in Model E - 1. To summarize, although all six models produced unbiased point estimates, the fixed- and mixed-effects models using patient-level data provided the most accurate estimates of the standard error of β̂1 for large ICC values, and should therefore be used in the analysis of multicentre trials when the ICC is nontrivial or unknown, in order to control the type I and type II error rates. For studies consisting of a large number of centres with only a few patients per centre, adjusting for centre as a random effect produced the most precise point estimate of the treatment effect and is therefore preferable. The information sandwich method appeared to slightly underestimate the actual variance when patients were recruited from 17 centres, as in scenarios 7 and 8. Because of the varying centre sizes, Model D did not converge for all simulated datasets (the number of failures varied between 1 and 93 out of 1000 simulations) after 2000 iterations when the ICC was less than or equal to 0.1 or greater than 0.4, for block sizes of 2 and 4. Such datasets were excluded for all models and additional data were simulated to attain a total of 1000 simulations for each ICC value. In most cases, the non-convergence of the GEE model was due to a non-positive definite working correlation matrix.

Figure 2: Empirical standard deviation (SD) across 1000 simulations by ICC for scenario 8 (block size = 4).
Figure 3: Average estimated standard error (SE) across 1000 simulations by ICC for scenario 8 (block size = 4).
Figure 4: Coverage of the 95% CI by ICC for scenario 8 (block size = 4).
Figure 5: Empirical power by ICC for scenario 8 (block size = 4).

In scenario 9, which mimicked the particular centre composition of the COMPETE II trial, on average three centres out of 46 contained no patients in one of the treatment groups per simulation. These centres were removed from the fixed-effects model (Model B), as no comparison patients in the same centre were available. About six centres out of 46 recruited fewer than two patients per treatment arm in each simulation; these centres were dropped from the centre-level analyses, because the standard error of the treatment difference within such a centre could not be obtained as an input to 'metacont( )'. The performance of the six models in scenario 9 was similar to that in scenarios 7 and 8, although the point estimates from all models appeared to be marginally biased toward the null (Table 8). The standard error estimates from the patient-level models were more precise and closer to 0.230, the best linear unbiased estimate of the standard error based on the simulation model. Once again, the standard error was slightly biased upward in Model A and marginally biased downward in Model D, resulting in wider, conservative interval estimates from Model A and slightly narrower intervals from Model D. Models B and C performed comparably, probably because on average only three centres, each containing one patient, were dropped from Model B, which had little effect on the variance estimation. Models C and D converged for all 1000 simulations in this scenario.

Table 8: Properties of point and 95% interval estimates calculated from Models A - E based on 1000 simulated datasets in scenario 9 (unbalanced, 46 centres, same centre composition as the COMPETE II trial). SD: empirical standard deviation; Ave. SE: average estimated SE; MSE: mean squared error; Cover. of CI: coverage proportion of 95% confidence interval; ICC: intraclass (intracentre) correlation.
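For readers who wish to reproduce this kind of comparison, the sketch below shows one way the six approaches could be fitted in R to a patient-level dataset with columns y, treat and centre (for example the illustrative dataset d simulated earlier). The paper itself names only 'metacont( )'; the use of the lme4, geepack and meta packages here, and the exact call syntax, are assumptions made for illustration rather than the authors' actual code.

```r
library(lme4)      # Model C: mixed-effects (random-intercept) regression
library(geepack)   # Model D: GEE with an exchangeable working correlation
library(meta)      # Models E-1 / E-2: centre-level (meta-analytic) pooling
library(dplyr)
library(tidyr)

# Model A: simple linear regression, equivalent to a two-sample t-test
mA <- lm(y ~ treat, data = d)

# Model B: fixed centre effects
mB <- lm(y ~ treat + centre, data = d)

# Model C: random centre intercepts
mC <- lmer(y ~ treat + (1 | centre), data = d)

# Model D: population-averaged model with sandwich (robust) standard errors
mD <- geeglm(y ~ treat, id = centre, corstr = "exchangeable", data = d)

# Models E-1 / E-2: per-centre summaries pooled by fixed- and random-effects meta-analysis
by_centre <- d %>%
  group_by(centre, treat) %>%
  summarise(n = n(), m = mean(y), s = sd(y), .groups = "drop") %>%
  pivot_wider(names_from = treat, values_from = c(n, m, s))

mE <- metacont(n_1, m_1, s_1, n_0, m_0, s_0, data = by_centre, sm = "MD")
# Centres with fewer than two patients per arm have no within-arm SD and are
# dropped from the centre-level analysis, as noted for scenario 9 above.
```

The summaries of mA to mD give the treatment contrast with model-based or sandwich standard errors, and mE reports both the fixed-effects (E-1) and random-effects (E-2) pooled estimates.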
Discussion

In this paper, we investigated six modelling strategies in a Frequentist framework to study the effect of an experimental treatment compared with a control treatment in multicentre RCTs with a continuous outcome. We focused on three designs with equal or varying centre sizes and a 1:1 treatment allocation ratio, in the absence of treatment by centre interaction. The results of this simulation study showed that, when the proportion of patients allocated to the experimental treatment was the same in each centre or was subject only to chance imbalance, models using patient-level and centre-level data yielded unbiased point estimates of the treatment effect across a wide spectrum of ICC values. Ignoring stratification by centre or within-centre correlation did not bias the estimated treatment effects even when the ICC was large.
In fact, Parzen et al showed mathematically that the usual two-sample t-test, which naively assumes independent observations of the response within centre, is asymptotically unbiased in this context [30].

The simulation study also indicated that these models produced different standard errors of β̂1, and that the properties of the interval estimates were affected by several factors: whether and how centre effects were incorporated in the analysis, the combination of centre size and number of participating centres, and the degree of non-orthogonality of the observed data. Treating centre as a random intercept resulted in the most precise estimate, and nominal coverage and power were attained in all circumstances. The fixed-effects model performed very similarly to the mixed-effects model in the balanced design, but was slightly less efficient when the number of centres was large (J > 20) in an unbalanced design. Pickering and Weatherall observed the same pattern in their simulation study comparing three patient-level models at small ICC values [7]. The GEE model with the information sandwich covariance estimator tended to underestimate the standard error when the sample of centres was small, a property noted by other researchers [20,31]. This resulted in higher apparent statistical power: with a smaller standard error the treatment effect estimate was more likely to be significant, but at the cost of lower coverage of the confidence interval. Murray et al suggested that at least 40 centres should be used to ensure a reliable estimate of the standard error in the context of cluster randomized trials [32]; our simulation results suggest that this cut-off is also applicable to multicentre RCTs. Failure to control for centre effects in any form resulted in inflation of the standard error, falsely high interval coverage and a sizable drop in power as the ICC increased. Parzen et al quantified the impact of within-centre correlation on the variance of β̂1 in Model A as 1/(1 - ICC) [30]. Alternatively, one may consider a variant of robust variance estimation, or a GEE model with an independent working correlation, to control for the impact of the ICC on variance estimation when using the t-test. The centre-level models generally produced larger standard errors and lower coverage or power than the patient-level models. The centre-level random-effects model incorporated variability of the treatment effect over centres and was therefore not a fair comparator to the other models; interestingly, it seemed to fare better than the centre-level fixed-effects model in terms of precision and coverage even though the simulated datasets contained no treatment by centre interaction. Although the random-effects centre-level model may be a reasonable alternative to the patient-level models when the number of patients per centre is large (≥30), centre-level models cannot adjust for patient-level covariates, a potentially fatal drawback in the presence of patient prognostic imbalance.

Statisticians hold different viewpoints on treating centre effects and treatment by centre interaction as fixed or random effects when analyzing multicentre RCTs [12,13,21,33]. Our simulation results demonstrated the advantage of treating centres as random intercepts in the absence of treatment by centre interaction.
When many centres each enrol a few patients and allocation is unbalanced, the random-intercept models can give more precise estimates of the treatment effect than the fixed-intercept models, because they recover inter-centre information in unbalanced situations. For instance, in a multicentre RCT consisting of 45 centres each recruiting 4 patients, the empirical variance of the treatment effect estimator from the fixed-effects model was 24.8% and 26.0% greater than that from the random-effects model when the ICC was 0.01 and 0.05, respectively; that is, an empirical variance of 0.162² compared with 0.145² for ICC = 0.01, and 0.174² compared with 0.155² for ICC = 0.05 (Table 6, scenario 4). We therefore take the same position as Grizzle [33] and Agresti and Hartzel [12] that, "Although the clinics are not randomly chosen, the assumption of random clinic effect will result in tests and confidence intervals that better capture the variability inherent in the system more realistically than clinical effects are considered fixed".

Our results have some implications for the design of multicentre RCTs in the absence of treatment by centre interaction. First, regardless of the pre-determined allocation ratio, permuted block randomization (with relatively small block sizes) should be used to maintain approximate balance or orthogonality (i.e. the same treatment allocation proportion across centres [7]) between treatments and centres, so that their individual effects can be evaluated independently. Variable block sizes can be used to strengthen allocation concealment. Second, for a given sample size, the number of patients randomized in the majority of centres should be sufficiently large to ensure a reliable estimate of the within-centre variation. Third, it is essential for investigators to obtain a rough estimate of the ICC for within-centre responses, through a literature review or a pilot study. To reach a nominal power of 80% or 90% (calculated in the absence of clustering), centre effects should be taken into consideration in the sample size assessment. When centre effects are included without treatment by centre interaction, the analysis is more powerful than a two-sample t-test. One method of assessing the sample size is to start with a two-sample t-test calculation for continuous outcomes (ignoring centre effects) and then multiply the original estimated error variance by a variance inflation factor of 1/(1 - ICC); this factor increases the required sample size, so ignoring centre effects yields a larger, conservative sample size in the absence of interaction. A sample size determined using the information sandwich covariance of the GEE model could lead to a slight loss of power when the number of centres is small (fewer than 40) and no proper adjustment is made. Lastly, there is no particular reason to require equal numbers of patients in all participating centres, and this is seldom the case in practice. Throughout the simulations, we observed similar results for studies of equal and varying centre sizes; the study included three scenarios with unequal centre sizes, one of which reproduced the particular centre composition of the COMPETE II trial.
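As a rough illustration of the sample-size reasoning above (not a prescriptive procedure), the sketch below estimates the ICC from the variance components of the random-intercept fit and then inflates a two-sample t-test sample size by the variance inflation factor 1/(1 - ICC). It continues the earlier sketches (lme4 loaded, dataset d and the Model C fit mC available); the effect size of 0.5 SD, 90% power and the example ICC of 0.10 are assumed values.

```r
# ICC from the variance components of the random-intercept model (Model C):
# ICC = sigma_b^2 / (sigma_b^2 + sigma_e^2)
vc       <- as.data.frame(VarCorr(mC))
sigma_b2 <- vc$vcov[vc$grp == "centre"]
sigma_e2 <- vc$vcov[vc$grp == "Residual"]
icc_hat  <- sigma_b2 / (sigma_b2 + sigma_e2)

# Per-arm sample size from a two-sample t-test calculation that ignores centres...
n_naive <- power.t.test(delta = 0.5, sd = 1, power = 0.90)$n

# ...then inflate by 1/(1 - ICC): n is proportional to the error variance, so
# multiplying the variance by the VIF multiplies the required n by the same factor.
vif        <- 1 / (1 - 0.10)
n_adjusted <- ceiling(n_naive * vif)   # conservative per-arm sample size
```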
For discussion of the potential impact of enrolment patterns on the point and interval estimates of the treatment effect, readers can refer to publications on random versus determined enrolment and on the relative efficiency of equal versus unequal cluster sizes [34,35].

The current ICH E9 guideline recommends that researchers investigate the treatment effect using a model that allows for centre differences, in the absence of treatment by centre interaction [1]. However, it is implausible or impractical to include centre effects in the statistical model, or to stratify randomization by centre, when it is anticipated from the start that a trial may have very few subjects per centre. As acknowledged in the guideline, these recommendations are based on fixed-effects models; mixed-effects models may also be used to explore centre and treatment by centre interaction effects, especially when the number of centres is large [1]. Our simulation results indicated that when a considerable number of centres contain only a few patients, adjusting for centre as a fixed effect may reduce precision (depending on the distribution of patients between arms) compared with the naïve unadjusted analysis. Our work complements the ICH E9 guideline by studying the impact of intraclass correlation on the assessment of treatment effects - a challenge that is seldom discussed, although routinely faced by investigators. Our investigation suggests that (1) ignoring centre effects completely may cause substantial overestimation of the standard error, falsely high coverage of the confidence interval and a reduction in power; and (2) mixed-effects models and GEE models, if employed appropriately, can produce accurate and precise effect estimates regardless of the degree of clustering. We recommend considering these methods in the development of future guidelines.

When the number of patients per centre is very small, it is not practical to include centre as a fixed effect in the analysis of patient-level data, because the centre effects cannot be reliably estimated and the precision of the treatment effect is compromised. In fact, in extremely small centres all patients may be allocated to the same treatment group, and such centres are ignored by the fixed-effects model [36-39]. The alternatives include collapsing all centres and performing a two-sample t-test, collapsing smaller centres into an artificial centre treated as a fixed effect, and using the other models discussed above. The mixed-effects model uses small centres more efficiently by "borrowing" information from larger centres. The GEE approach models the average treatment difference across all centres and adjusts for centre effects through a uniform (exchangeable) correlation structure. This is an intuitively more efficient model, but it does not always converge when the number of patients per centre is highly variable (simulation scenarios 7 and 8). In the current study, non-convergence was more likely to arise for very small or very large ICC values (less than 0.1 or greater than 0.4 for block sizes 2 or 4) because of non-positive definite working correlation matrices, and its frequency could be as high as 10% after 2000 iterations. Conversely, convergence problems did not occur for the mixed-effects model in any scenario. Our results also show that analysing trials consisting of very small centres (i.e. those containing fewer than 2 patients per arm) with centre-level models may not be an optimal strategy, because the within-centre standard deviation of the treatment difference cannot be estimated for such centres, and consequently these very small centres are excluded from the analysis.

Results of two large empirical studies and one systematic review of cluster RCTs in primary care clinics suggested that most ICC values for physical, functional and social measures are less than 0.10 [26-28]. The estimated ICC in the COMPETE II trial, on the other hand, was 0.124 using the GEE model and 0.138 using the linear mixed-effects model. We chose to include rare yet possible large ICC values (0-0.75) in this simulation to examine the overall trend of model performance by ICC, and for completeness and generalizability. Readers should anticipate the ICC values likely to emerge from their own studies when interpreting these results. Throughout this work, we quantified the correlation among subjects within a centre using the ICC, the most commonly used measure of clustering in the biomedical literature. As indicated in previous sections, the ICC reflects the interplay of two variance components in multicentre data, the between-centre variance and the within-centre variance: ICC = σb²/(σb² + σe²). These variance components are relatively easy to interpret in the analysis of continuous outcomes using linear models. For the analysis of binary or time-to-event data from multicentre trials using generalized mixed and frailty models, the interpretation of centre heterogeneity can present challenges because the random effects are linked to the outcome via nonlinear functions [40]. Reparameterization of the probability density function may be used to assess the impact of the within- and between-centre variance; interested readers can refer to Duchateau and Janssen [40] for more details.

A major limitation of the study is that it did not address model performance when treatment by centre interaction exists. Such interactions may arise from different patient populations or variable standards of care. Interested readers may consult Moerbeek et al [6] for formulas for the variance of β̂1 under the different models and Jones et al [14] for simulation results. Future studies addressing interaction effects in multicentre RCTs are needed. The datasets in the current paper were generated with a moderate treatment effect, reflected by the standardized mean difference between the treatment and control groups; larger or smaller treatment effects are also likely to occur in clinical studies, and similar findings would be expected. The current study investigated continuous outcomes in two groups from a Frequentist perspective. The models discussed above can be naturally extended to compare three or more treatments. Agresti and Hartzel [12] surveyed methods for evaluating treatments with binary outcomes in multicentre RCTs. Non-parametric approaches and Bayesian methods are also available to estimate the treatment contrast; interested readers can refer to Aitkin [41], Gould [11], Smith et al [42], Legrand et al [16], and Louis [43], among others.

Conclusions

We used simulations to investigate the performance of six statistical approaches that have been advocated for analyzing continuous outcomes in multicentre RCTs. Our simulation study showed that all six models produced unbiased estimates of the treatment effect in multicentre trials with individual patient randomization.
Adjusting for centre as a random effect resulted in more efficient effect estimates in all scenarios, over a wide spectrum of ICC values and various centre compositions. The fixed-effects model performed comparably to the mixed-effects model under most circumstances but lost efficiency when many centres contained a relatively small number of patients. The GEE model underestimated the standard error of the effect estimates when a small number of centres was involved, and did not always converge when centre size was variable and the ICC was very large or very small. The two-sample t-test severely overestimated the standard error for moderate to large ICC values. The relative efficiency of the statistical models for the treatment contrast was also affected by the ICC, the distribution of patient enrolment, centre size and the number of centres.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

RC participated in the design of the study, simulation, analysis and interpretation of data, and drafting and revision of the manuscript. LT contributed to the conception and design of the study, interpretation of data and revision of the manuscript. JM contributed to the design of the study and revision of the manuscript. AH contributed to acquisition of data and critical revision of the manuscript. EP and PJD advised on critical revision of the manuscript for important intellectual content. All authors have read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2288/11/21/prepub
[ "A multicentre randomized control trial (RCT) is an experimental study \"conducted according to a single protocol but at more than one site and, therefore, carried out by more than one investigator\"[1]. Multicentre RCTs are usually carried out for two main reasons. First, they provide a feasible way to accrue sufficient participants to achieve reasonable statistical power to detect the effect of an experimental treatment compared with some control treatment. Second, by enrolling participants of more diverse demographics from a broader spectrum of geographical locations and various clinical settings, multicentre RCTs increase generalizability of the experimental treatment for future use [1].\nRandomization is the most important feature of RCTs, for on average it balances known and unknown baseline prognostic factors between treatment groups, in addition to minimizing selection bias. Nevertheless, randomization does not guarantee complete balance of participant characteristics especially when the sample size is moderate or small. Stratification is a useful technique to guard against potential bias introduced by imbalance in key prognostic factors. In multicentre RCTs, investigators often use a stratified randomization design to achieve balance over key differences in study population (e.g. environmental, socio-economic or demographical factors) and management team (e.g. patient administration and management) at centre level to improve precision of statistical analysis [2]. Regulatory agencies recommend that stratification variables in design should usually be accounted for in analysis, unless the potential value of adjustment is questionable (e.g. very few subjects per centre) [1].\nThe current study was motivated by the COMPETE II trial which was designed to determine if an integrated computerized decision support system shared by primary care providers and patients could improve management of diabetes [3]. A total number of 511 patients were recruited from 46 family physician practices. Individual patients were randomized to one of the two intervention groups stratified by physician practice using permuted blocks of size 6.The number of patients treated by one physician varied from 1 to 26 (interquartiles = 7.25, 11, 15; mean = 11; standard deviation [SD] = 6). The primary outcome was a continuous variable representing the change of a 10-point process composite score based on eight diabetes-related component variables from baseline to a mean of 5.9 months' follow-up. A positive change indicated a favourable result. During the study, the possibility of clustering within physician practice and its consequence on statistical analysis was a concern to the investigators. The phenomenon of clustering emerges when outcomes observed from patients managed by the same centre, practice or physician are more similar than outcomes from different centres, practices or physicians. Clustering often arises in situations where patients are selective about which centre they belong to, patients in a centre or practice are managed according to the same clinical care paths, or patients influence each other in the same cluster [4]. Intraclass (or intracentre) correlation (ICC) is often used to quantify the average correlation between any two outcomes within the same cluster [5]. It is a number between zero and one. A large value indicates that within-cluster observations are similar relative to observations from other clusters and each observation within cluster contains less unique information. 
This implies that the independence assumption on which many standard statistical models are based is violated. An ICC of zero indicates that individual observations within the same cluster are uncorrelated and that different clusters on average have similar observations.\nThrough a literature review, we identified six statistical methods that are sometimes employed to analyze continuous outcomes in multicentre RCTs: A. simple linear regression (two-sample t-test), B. fixed-effects regression, C. mixed-effects regression, D. generalized estimating equations (GEE), E-1. fixed-effects centre-level analysis, and E-2. random-effects centre-level analysis. The first four methods use the patient as the unit of analysis, yet address centre effects differently [6-8]. Simple linear regression completely ignores centre effects that are likely to arise from two sources: (1) possible differences in environmental, socio-economic or treatment factors between centres, and (2) potential correlation among patients within centres. Although stratified randomization attempts to minimize the impact of centre on the standard error of the treatment effect by ensuring that the treated and control groups are largely balanced with respect to centre, failure to control for stratification in the analysis will likely inflate the variance of the effect estimate. The fixed-effects model treats each participating centre as a fixed intercept to control for possible population or environmental differences among centres. This model assumes that study subjects from the same centre have independent outcomes, i.e. the intraclass correlation is fixed at zero. The mixed-effects model incorporates dependence of outcomes within a centre and treats centres as random intercepts. Proposed by Liang and Zeger [9], the generalized estimating equation (GEE) model extends generalized linear regression with continuous, categorical or count outcomes to correlated observations within clusters. Under a commonly used and perhaps oversimplified assumption, that the degree of similarity between any two outcomes from a centre is equal, an exchangeable correlation structure can be used to assess the treatment effect in Models C and D, though the within- and between-centre variances (σe² and σb²) are estimated differently in these two models. Methods E-1 and E-2 are routinely employed to combine information from different studies in meta-analysis [10]. One can also apply them to aggregate treatment effects over multiple centres [11-13]. The overall effect is obtained by averaging the within-centre effect differences over centres, using inverse-variance weighting.\nTo date, only a few studies have been carried out to compare the performance of statistical models in analyzing multicentre RCTs using Monte Carlo simulation [6,7,14], whereas many studies have assessed the impact of the ICC in cluster randomization trials. Moerbeek et al [6] compared the simple linear regression model, fixed-effects regression and fixed-effects centre-level analysis with equal centre size. Pickering et al [7] examined the bias, precision and power of three methods: simple regression, fixed-effects and mixed-effects regression, assuming block randomization of size 2 or 4 on a continuous outcome. In the presence of imbalance and non-orthogonality, they found that ignoring centres or incorporating them as random effects led to greater precision and smaller type II error compared with treating centres as fixed effects. The performance of the GEE approach and of centre-level methods was not investigated in that work.
Jones et al [14] compared the fixed-effects and random-effects regression models to a two-step Frequentist procedure as well as a Bayesian model, in the presence of treatment by centre interaction, and recommended the fixed-effects weighted method for future analysis of multicentre trials. The investigation was further expanded to assessing correlated survival outcomes from large multicentre cancer trials. A series of random-effects approaches were proposed to account for centre or treatment by centre heterogeneity in proportional hazards models [15,16].\nA lack of definitive evidence on which models perform best in various situations led to this comprehensive simulation study examining the performance of all six commonly used models with continuous outcomes. The objective was to assess their comparative performance in terms of bias, precision (simulation standard deviation (SD) and average estimated SE), and mean squared error (MSE) of the point estimator of the treatment effect, empirical coverage of the 95% confidence interval (CI) and empirical statistical power, over a wide spectrum of ICC values and centre sizes. We did not consider treatment by centre interaction in this study, partly because clinicians and trialists have been making efforts to standardize the conduct and management of multicentre trials via, for instance, uniform patient selection criteria, staff training, and trial monitoring and auditing to reduce heterogeneity of treatment effects among centres. Furthermore, it is uncommon to find clinical trials designed with sufficient power to detect treatment by covariate interactions.\nIn this paper, we survey six methods to investigate the effect of a treatment in multicentre RCTs in detail. We outline the design and analysis of an extensive simulation study, and report how model performance varies with the ICC, centre size and the number of centres. We also present the estimated effect of the computer-aided decision support system on the management of diabetes using the different methods.", "[SUBTITLE] Approaches assessing treatment effects [SUBSECTION] We investigated six statistical approaches to evaluating the effect of an experimental treatment on a continuous outcome, compared with a control, in multicentre RCTs. Assuming baseline prognostic characteristics are approximately balanced between the treatment and control groups via randomization, we do not consider covariates other than centre effects in the models. The first four approaches use the individual patient as the unit of analysis, while the centre is the unit of analysis in the last two approaches.\n[SUBTITLE] Simple linear regression (Model A) [SUBSECTION] This approach models the impact of treatment (X) on outcome (Y) via the regression in Equation 1; in the context of a two-arm trial, it is equivalent to a two-sample t-test [6]:\n(1) Yij = β0 + β1 Xij + eij,\nwhere Yij is the outcome of the i-th patient in the j-th centre, Xij stands for the treatment assignment (Xij = 1 for treatment, Xij = 0 for control), and eij is the random error, assumed to follow a normal distribution with mean 0 and variance σe². The intercept β0 represents the mean outcome for the control group in all participating centres, and the slope β1 represents the effect of the treatment on the mean outcome.
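A minimal R sketch of Model A is given below; the simulated data frame, its column names (y, x, centre) and the parameter values are illustrative assumptions rather than values from the trial.

## Illustrative multicentre data: 18 centres of 10 patients, 1:1 allocation within centre
set.seed(123)
J <- 18; m <- 10
dat <- data.frame(centre = factor(rep(1:J, each = m)),
                  x      = rep(c(0, 1), times = J * m / 2))   # treatment indicator
b0j   <- rnorm(J, mean = 0, sd = sqrt(1/9))                   # centre effects giving ICC = 0.10
dat$y <- b0j[dat$centre] + 0.5 * dat$x + rnorm(nrow(dat), sd = 1)

## Model A: simple linear regression, equivalent to a two-sample t-test
fit_a <- lm(y ~ x, data = dat)
summary(fit_a)$coefficients["x", ]                            # estimate, SE, t value, p value
t.test(y ~ x, data = dat, var.equal = TRUE)                   # the same comparison as a t-test

The same toy data set is reused in the sketches that accompany the other models below.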
[SUBTITLE] Fixed-effects regression (Model B) [SUBSECTION] This model (Equation 2) allows a separate intercept for each centre (β0j) as a fixed effect, restricting the scope of statistical inference to the sample of centres participating in the RCT. The interpretation of β1 remains the same as in Model A. Models A and B were fitted using the linear model procedure 'lm( )' in R:\n(2) Yij = β0j + β1 Xij + eij\n[SUBTITLE] Mixed-effects regression (Model C) [SUBSECTION] Like Model B, the mixed-effects regression model allows a separate intercept for each centre, but assumes that the intercept β0j = β0 + b0j follows a normal distribution N(β0, σb²) and is thus a random effect. In Equation 3, b0j is the random deviation from the mean intercept β0, specific to each centre:\n(3) Yij = β0 + b0j + β1 Xij + eij\nAs in the previous models, the within-centre variability is reflected by σe². The between-centre variability of the outcome is captured by σb² in Model C. The variance and covariance of outcomes in the same or different centres can be expressed as Var(Yij) = σb² + σe², Cov(Yij, Yi'j) = σb², and Cov(Yij, Yi'j') = 0. The intraclass correlation, which measures the correlation among outcomes within a centre, is given by σb²/(σe² + σb²) and is assumed equal across all centres. We fitted Model C in R via the linear mixed-effects procedure 'lme( )' using the restricted maximum likelihood (REML) method [17,18].
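Continuing with the toy data from the previous sketch (the column names y, x and centre are assumptions), Models B and C can be fitted as outlined below; the nlme package is one way to obtain the REML fit mentioned in the text.

library(nlme)                                    # provides lme() for the random-intercept model

## Toy multicentre data as in the earlier sketch
set.seed(123); J <- 18; m <- 10
dat <- data.frame(centre = factor(rep(1:J, each = m)), x = rep(0:1, times = J * m / 2))
dat$y <- rnorm(J, sd = 1/3)[dat$centre] + 0.5 * dat$x + rnorm(J * m)

## Model B: centre as fixed intercepts
fit_b <- lm(y ~ x + centre, data = dat)
summary(fit_b)$coefficients["x", ]

## Model C: centre as random intercepts, fitted by REML (the default method)
fit_c <- lme(y ~ x, random = ~ 1 | centre, data = dat, method = "REML")
summary(fit_c)$tTable["x", ]
vc <- VarCorr(fit_c)                             # between-centre and residual variances
icc <- as.numeric(vc["(Intercept)", "Variance"]) /
       (as.numeric(vc["(Intercept)", "Variance"]) + as.numeric(vc["Residual", "Variance"]))
icc                                              # estimated intraclass correlation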
[SUBTITLE] Generalized estimating equations (Model D) [SUBSECTION] The GEE method has gained increasing popularity among health science researchers because of its availability in most statistical software. As opposed to the mixed-effects method, which estimates the treatment difference between arms together with individual centre effects, the GEE approach models the marginal, population-average treatment effect in two steps: 1) it fits a naïve linear regression assuming independence between observations within and across centres, and 2) it estimates the parameters of the working correlation matrix using the residuals from the naïve model and refits the regression model to adjust the standard error and confidence interval for within-centre dependence [19]. As a result, the estimated impact of treatment on the outcome in the GEE model reflects the "combined" within- and between-centre relationship. GEE employs quasi-likelihood to estimate the regression coefficients iteratively, and a working correlation structure needs to be supplied to approximate the within-centre correlation. When the working correlation is mis-specified, the sandwich-based covariance estimator still yields a robust, yet less efficient, estimate of the treatment effect [9]. Recently, statisticians found that the variance of the estimated treatment effect can be underestimated when the number of centres is small [20]. We therefore assessed the efficiency of GEE models using the procedure 'gee( )' in library(gee) in R. As in the mixed-effects model, an exchangeable correlation structure was assumed in fitting Model D.
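A sketch of Model D on the same toy data follows; the gee package is one implementation of the 'gee( )' procedure named in the text, and the object names used here are assumptions.

library(gee)                                     # gee() reports naive and robust (sandwich) SEs

## Toy multicentre data as in the earlier sketches
set.seed(123); J <- 18; m <- 10
dat <- data.frame(centre = factor(rep(1:J, each = m)), x = rep(0:1, times = J * m / 2))
dat$y <- rnorm(J, sd = 1/3)[dat$centre] + 0.5 * dat$x + rnorm(J * m)
dat <- dat[order(dat$centre), ]                  # gee() expects cluster members to be contiguous

## Model D: population-average treatment effect with an exchangeable working correlation
fit_d <- gee(y ~ x, id = centre, data = dat, family = gaussian, corstr = "exchangeable")
summary(fit_d)$coefficients["x", ]               # estimate with naive and robust SEs
fit_d$working.correlation[1:2, 1:2]              # estimated within-centre correlation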
[SUBTITLE] Centre-level fixed-effects model (Model E - 1) [SUBSECTION] The centre-level model is a stratified analysis performed on the mean difference in outcome between the treatment and control arms within each centre. The overall treatment effect is estimated by a weighted average of the individual mean differences across all centres, most often using inverse-variance weighting (Figure 1). This model is essentially a centre-level inverse-variance weighted paired t-test (i.e. the treatment arm is paired with the control arm in the same centre) that accounts for within-centre correlation [10]. In the absence of intraclass correlation and under the assumption of equal sampling variation at the patient level, the inverse-variance weight reduces to ntj·ncj/(ntj + ncj) for the j-th centre, which, given equal numbers of patients in the two arms, is proportional to the centre size nj = ntj + ncj. Here ntj and ncj represent the numbers of patients in the treatment and control groups, respectively, in the j-th centre. This form of the weighted analysis (without adjustment for covariates) has been discussed extensively by many researchers [21-23]. We implemented Model E - 1 using the fixed-effects method for meta-analysis provided by the 'metacont( )' procedure in R.\nFigure 1. A schematic of fixed- and random-effects centre-level models.
[SUBTITLE] Centre-level random-effects model (Model E - 2) [SUBSECTION] A random-effects approach is used to aggregate mean effect differences not only across all participating centres but also across the population of centres represented by the sample. This model factors heterogeneity of the treatment effect among centres (i.e. random treatment by centre interaction) into its weighting scheme and captures both within- and between-centre variation of the outcome. This method should not be confused with the mixed-effects model applied to patient-level data (Model C). In Model E - 2, the underlying true treatment effects are not a single fixed value for all centres; rather, they are considered random effects, normally distributed around a mean treatment effect with between-centre variation. Model C, on the other hand, treats centres as random intercepts and postulates the same treatment effect across all centres. Model E - 2 is therefore not a fair comparator to the alternatives listed here, which assume no treatment by centre interaction; however, preliminary investigation suggested that E - 2 could outperform E - 1 in some situations, so we included it in the study to advance understanding of these models. Details of the centre-level models are provided in Figure 1. Model E - 2 was carried out using the DerSimonian-Laird random-effects method [24] in the 'metacont( )' procedure in R. The confidence interval for Model E - 2 was constructed based on the within- and between-centre variances.
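The centre-level models can be sketched in R as below. The per-centre aggregation and the explicit inverse-variance weights mirror the fixed-effects pooling of Figure 1; the paper used the 'metacont( )' procedure for both models, and the final comment indicates one plausible call, so the helper names and column labels here are illustrative assumptions.

## Centre-level analysis: pool within-centre mean differences across centres.
## Toy multicentre data as in the earlier sketches.
set.seed(123); J <- 18; m <- 10
dat <- data.frame(centre = factor(rep(1:J, each = m)), x = rep(0:1, times = J * m / 2))
dat$y <- rnorm(J, sd = 1/3)[dat$centre] + 0.5 * dat$x + rnorm(J * m)

## Per-centre arm summaries and the within-centre mean difference
agg <- do.call(rbind, lapply(split(dat, dat$centre), function(d) {
  trt <- d$y[d$x == 1]; ctl <- d$y[d$x == 0]
  data.frame(n.t = length(trt), mean.t = mean(trt), sd.t = sd(trt),
             n.c = length(ctl), mean.c = mean(ctl), sd.c = sd(ctl),
             diff = mean(trt) - mean(ctl),
             var  = var(trt) / length(trt) + var(ctl) / length(ctl))
}))

## Model E - 1: fixed-effects (inverse-variance) pooling of per-centre differences
w  <- 1 / agg$var
te <- sum(w * agg$diff) / sum(w)
se <- sqrt(1 / sum(w))
c(estimate = te, lower = te - 1.96 * se, upper = te + 1.96 * se)

## Model E - 2: the same per-centre summaries can be pooled with DerSimonian-Laird
## weights, e.g. meta::metacont(n.t, mean.t, sd.t, n.c, mean.c, sd.c, data = agg)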
[SUBTITLE] Study data simulation [SUBSECTION] We used Monte Carlo simulation to assess the performance of statistical models for analyzing parallel-group multicentre RCTs with a continuous outcome. We simulated the outcome, Y, using the mixed-effects linear regression model (Model C), Yij = β0 + b0j + β1 Xij + eij, for the i-th patient in the j-th centre, where Xij (= 0, 1) is the dummy variable for treatment allocation (i = 1,...,mj; j = 1,...,J). We generated the random error, e, from N(0, σe² = 1). We set the true treatment effect (β1) to 0.5 residual standard deviations (σe), an effect size suggested by the COMPETE II trial; this corresponds to a medium effect size according to Cohen's criterion [25]. To simulate centre effects, we employed the relationship between the ICC and σb², ICC = σb²/(σe² + σb²). To fully study the behaviour of the candidate models at various ICC levels, we considered the following ICC values for completeness: 0.00, 0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50 and 0.75. This in turn set the corresponding σb² values to 0, 1/99, 1/19, 1/9, 3/17, 1/4, 1/3, 3/7, 7/13, 2/3, 9/11, 1 and 3. However, we focused interpretation of the results on the lower ICC values, as they are more likely to occur in practice [26-28].\nThe original sample size was determined to be 84 per arm using a two-sided two-sample t-test (Model A) to ensure 90% power to detect a standardized effect size of 0.5 at a 5% type I error rate. We increased the final sample size to 90 per arm (power increases to 91.8%) to accommodate more combinations of the number and size of participating centres. We assumed patients were randomly allocated to the two groups in a 1:1 ratio, the most common and efficient choice. We generated data under nine scenarios (Table 1) to assess model performance in three designs: (a) balanced studies, where equal numbers of patients are enrolled from the study centres and the numbers of patients in the two arms are the same (fixed by design); (b) unbalanced studies, where equal numbers of patients are enrolled from the study centres but the numbers of patients in the two arms within a centre may differ by chance while maintaining a 1:1 allocation ratio; and (c) unbalanced studies, where the numbers of patients enrolled vary among centres and block randomization with blocks of size 2 or 4 is used to reduce chance imbalance. For designs (a) and (b), we considered three combinations of centre size and number of centres: J = 45 centres with 4 patients per centre, J = 18 centres with 10 patients per centre, and J = 6 centres with 30 patients per centre. Design (c) mimicked a more realistic scenario for multicentre RCTs. For the first setup of design (c), we grouped 180 patients into 17 centres, constructed so that the centre composition and degree of allocation imbalance were analogous to the COMPETE II trial but at a smaller sample size: the number of patients per centre varied from 1 to 28 (quartiles = 5, 10, 15; mean = 11; SD = 8), and the percentage of unbalanced centres was between 47% and 70% depending on block size.\nTable 1. Catalogue of simulation designs (ICC: intraclass (intracentre) correlation).\nTo compare results from the various models in analyzing the COMPETE II trial, and to assess the accuracy and precision of the effect estimates, we included an additional scenario in design (c) to imitate this motivating example more closely with respect to sample size and centre composition (scenario 9). We generated treatment allocation (X1) and outcome (Y) for 511 patients in 46 centres, where the number of patients per centre was set exactly as observed in the COMPETE II trial (Table 2). In particular, three centres recruiting only one patient each were simulated. Analogously to COMPETE II, a fixed block size of 6 was used to assign patients to treatments. The same simulation model was employed as in the previous scenarios, but with a separate set of parameters based on the results of the COMPETE II trial (Table 3): β0 = 1.34, β1 = 1.26, σb² = 1, σe² = 7, ICC = 0.125.\nTable 2. Centre composition of the COMPETE II trial.\nTable 3. Estimates of intervention effects in the COMPETE II trial (SE: standard error; CI: confidence interval; σe²: within-centre variance; σb²: between-centre variance; ICC: intraclass (intracentre) correlation).\nWe generated 1000 simulated datasets for each of the 13 ICC values under each of the first eight scenarios, and 1000 datasets for the specified ICC value in the ninth scenario. Separate sets of centre effects were simulated for each scenario and each replicate. We chose 1000 replicates so that the simulation standard deviation of the empirical power at a nominal level of 90% in the absence of clustering was controlled at 1%; this also ensured that the standard deviations of the coverage of the confidence interval and of the empirical power did not exceed 1.6%.
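A sketch of one such simulated trial follows; the generator mirrors the data-generating model described above, but the function name and the particular scenario shown (18 centres of 10 patients, ICC = 0.10) are illustrative choices, not a reproduction of the study's code.

## Generate one trial under the mixed-effects data-generating model (Model C)
set.seed(2011)
sim_trial <- function(J, m, beta1, icc, beta0 = 0, sigma_e = 1) {
  sigma_b <- sqrt(icc / (1 - icc)) * sigma_e                    # between-centre SD implied by the ICC
  centre  <- factor(rep(1:J, each = m))
  x       <- as.vector(replicate(J, sample(rep(0:1, m / 2))))   # 1:1 allocation within each centre
  b0j     <- rnorm(J, 0, sigma_b)                               # random centre effects
  y       <- beta0 + b0j[centre] + beta1 * x + rnorm(J * m, 0, sigma_e)
  data.frame(centre, x, y)
}
dat <- sim_trial(J = 18, m = 10, beta1 = 0.5, icc = 0.10)
head(dat)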
[SUBTITLE] Comparison of analytic models [SUBSECTION] We applied the six statistical models to each simulated dataset. For each model, we calculated the bias, simulation standard deviation (SD), average estimated standard error (SE) and mean squared error (MSE) of the point estimator of the treatment effect (i.e. β1), the empirical coverage of the 95% confidence interval around β1, and the empirical statistical power. We constructed confidence intervals based on the t-distribution for Models A - C, and Wald intervals based on the normal approximation for Models D and E. We estimated bias as the difference between the average estimate of β1 over the 1000 simulated datasets and the true effect. The simulation (empirical) SD was calculated as the standard deviation of the estimated β1 values across simulations, indicating the precision of the estimator. We also obtained the average of the estimated SEs across the 1000 simulations to assess the accuracy of the variance estimator obtained from each simulated dataset. The overall error rate of the point estimator was captured by the estimated MSE, computed as the average squared difference between the estimated β1 and the true value across the 1000 datasets. Furthermore, we report the performance of the interval estimators for each model: the empirical coverage was estimated as the proportion of 95% confidence intervals that covered the true β1, and the empirical power as the proportion of confidence intervals that rejected the false null hypothesis, i.e. that excluded zero. All datasets were simulated and analyzed in R version 2.4.1 [29].
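The sketch below illustrates how these summaries can be computed for one of the models (Model A); the number of replicates is deliberately small and the scenario values are illustrative, so the numbers it produces are not those reported in the Results.

## Performance summaries over repeated simulated trials, applied here to Model A
set.seed(2011)
beta1 <- 0.5
reps <- replicate(200, {
  centre <- factor(rep(1:18, each = 10))
  x      <- as.vector(replicate(18, sample(rep(0:1, 5))))            # 1:1 allocation within centre
  y      <- rnorm(18, 0, sqrt(1/9))[centre] + beta1 * x + rnorm(180) # ICC = 0.10
  fit <- lm(y ~ x)
  est <- unname(coef(fit)["x"])
  se  <- summary(fit)$coefficients["x", "Std. Error"]
  ci  <- confint(fit)["x", ]
  c(est = est, se = se,
    cover = as.numeric(ci[1] <= beta1 && beta1 <= ci[2]),            # CI covers the true effect
    power = as.numeric(ci[1] > 0 || ci[2] < 0))                      # CI excludes zero
})
c(bias     = mean(reps["est", ]) - beta1,
  sim_sd   = sd(reps["est", ]),                                      # empirical SD of the estimates
  avg_se   = mean(reps["se", ]),                                     # average model-based SE
  mse      = mean((reps["est", ] - beta1)^2),
  coverage = mean(reps["cover", ]),
  power    = mean(reps["power", ]))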
The overall error rate of the point estimator was captured by the estimated MSE, enumerated by the average squared difference between the estimated β1 and true value across the 1000 datasets. Furthermore, we reported performance of the interval estimators in each model. The empirical coverage was estimated as the proportion of 95% confidence intervals that covered the true β1, and the empirical power was the proportion of confidence intervals that rejected a false null hypothesis, i.e. zero lies outside CI. All datasets were simulated and analyzed in R version 2.4.1[29].", "We investigated six statistical approaches to evaluating effect of an experimental treatment on a continuous outcome compared with the control, for multicentre RCTs. Assuming baseline prognostic characteristics are approximately balanced between the treatment and control groups via randomization, we do not consider covariates other than centre effects in the models. The first four approaches use individual patient as unit of analysis, while centre is the unit of analysis in the last two approaches.\n[SUBTITLE] Simple linear regression (Model A) [SUBSECTION] This approach models the impact of treatment (X) on outcome (Y) via regression technique (Equation 1). In the context of a two-arm trial, this approach is the same as a two-sample t-test [6].\n\n\n(1)\n\n\n\nY\n\ni\nj\n\n\n=\n\nβ\n0\n\n+\n\nβ\n1\n\n\nX\n\ni\nj\n\n\n+\n\ne\n\ni\nj\n\n\n,\n\n\n\n\nwhere Yij is the outcome of the i-th patient in the j-th centre, Xij stands for the treatment assignment (Xij = 1 for the treatment, Xij = 0 for the control), and eij is the random error assumed to follow a normal distribution with mean 0 and variance σe2. The intercept, β0 , represents the mean outcome for the control group in all participating centres, and the slope β1 represents effect of the treatment on the mean outcome.\nThis approach models the impact of treatment (X) on outcome (Y) via regression technique (Equation 1). In the context of a two-arm trial, this approach is the same as a two-sample t-test [6].\n\n\n(1)\n\n\n\nY\n\ni\nj\n\n\n=\n\nβ\n0\n\n+\n\nβ\n1\n\n\nX\n\ni\nj\n\n\n+\n\ne\n\ni\nj\n\n\n,\n\n\n\n\nwhere Yij is the outcome of the i-th patient in the j-th centre, Xij stands for the treatment assignment (Xij = 1 for the treatment, Xij = 0 for the control), and eij is the random error assumed to follow a normal distribution with mean 0 and variance σe2. The intercept, β0 , represents the mean outcome for the control group in all participating centres, and the slope β1 represents effect of the treatment on the mean outcome.\n[SUBTITLE] Fixed-effects regression (Model B) [SUBSECTION] This model (Equation 2) allows a separate intercept for each centre (β0j) as a fixed effect by restricting the scope of statistical inference to the sample of participating centres in a RCT. Interpretation for β1 remains the same as in Model A. Model A and B were fitted using the linear model procedure 'lm( )' in R.\n\n\n(2)\n\n\n\nY\n\ni\nj\n\n\n=\n\nβ\n\n0\nj\n\n\n+\n\nβ\n1\n\n\nX\n\ni\nj\n\n\n+\n\ne\n\ni\nj\n\n\n\n\n\n\nThis model (Equation 2) allows a separate intercept for each centre (β0j) as a fixed effect by restricting the scope of statistical inference to the sample of participating centres in a RCT. Interpretation for β1 remains the same as in Model A. 
Model A and B were fitted using the linear model procedure 'lm( )' in R.\n\n\n(2)\n\n\n\nY\n\ni\nj\n\n\n=\n\nβ\n\n0\nj\n\n\n+\n\nβ\n1\n\n\nX\n\ni\nj\n\n\n+\n\ne\n\ni\nj\n\n\n\n\n\n\n[SUBTITLE] Mixed-effects regression (Model C) [SUBSECTION] Similar to Model B, the mixed-effects regression model assumes that the intercept β0j = β0 + b0j follows a normal distribution N(β0, σb2), and is thus random effect. In Equation 3, b0j is the random deviation from the mean intercept β0, specific for each centre.\n\n\n(3)\n\n\n\nY\n\ni\nj\n\n\n=\n\nβ\n0\n\n+\n\nb\n\n0\nj\n\n\n+\n\nβ\n1\n\n\nX\n\ni\nj\n\n\n+\n\ne\n\ni\nj\n\n\n\n\n\n\nSimilar to the previous models, the within-centre variability is reflected by σe2. The variability of outcome between-centre is captured by σb2 in Model C. The variance and covariance of outcomes in the same or different centres can be expressed as: Var(Yij)=σb2+σe2, Cov(Yij,Yi'j)=σb2, Cov(Yij,Yi'j')=0. The intraclass correlation that measures the correlation among outcomes within centre is given by σb2σe2+σb2, assumed equal across all centres. We fitted Model C in R via linear mixed-effects procedure 'lme( )' using the restricted maximum likelihood (REML) method [17,18].\nSimilar to Model B, the mixed-effects regression model assumes that the intercept β0j = β0 + b0j follows a normal distribution N(β0, σb2), and is thus random effect. In Equation 3, b0j is the random deviation from the mean intercept β0, specific for each centre.\n\n\n(3)\n\n\n\nY\n\ni\nj\n\n\n=\n\nβ\n0\n\n+\n\nb\n\n0\nj\n\n\n+\n\nβ\n1\n\n\nX\n\ni\nj\n\n\n+\n\ne\n\ni\nj\n\n\n\n\n\n\nSimilar to the previous models, the within-centre variability is reflected by σe2. The variability of outcome between-centre is captured by σb2 in Model C. The variance and covariance of outcomes in the same or different centres can be expressed as: Var(Yij)=σb2+σe2, Cov(Yij,Yi'j)=σb2, Cov(Yij,Yi'j')=0. The intraclass correlation that measures the correlation among outcomes within centre is given by σb2σe2+σb2, assumed equal across all centres. We fitted Model C in R via linear mixed-effects procedure 'lme( )' using the restricted maximum likelihood (REML) method [17,18].\n[SUBTITLE] Generalized estimating equations (Model D) [SUBSECTION] The GEE method has gained increasing popularity among health science researchers for its availability in most statistical software. As opposed to the mixed-effects method that estimates treatment difference between arms and individual centre effects, the GEE approach models the marginal population-average treatment effects in two steps: 1) it fits a naïve linear regression assuming independence between observations within and across centres, and 2) it estimates parameters of the working correlation matrix using residuals in the naïve model and refit regression model to adjust standard error and confidence interval for within-centre dependence [19]. As a result, the estimated impact of treatment on the outcome in GEE model reflects the \"combined\" within- and between-centre relationship. GEE employs quasi-likelihood to estimate regression coefficients iteratively, and a working correlation needs to be supplied to approximate the within centre correlation. When the working correlation is mis-specified, the sandwich-based covariance estimator will lead to a robust yet less efficient estimate of treatment effect in GEE model [9]. Recently, statisticians found that variance of the estimated treatment effect could be underestimated when the number of centres was small [20]. 
We therefore assessed the efficiency of GEE models using procedure 'gee( )' in library(gee) in R. As in the mixed-effects model, an exchangeable correlation structure was assumed in fitting Model D.\nThe GEE method has gained increasing popularity among health science researchers for its availability in most statistical software. As opposed to the mixed-effects method that estimates treatment difference between arms and individual centre effects, the GEE approach models the marginal population-average treatment effects in two steps: 1) it fits a naïve linear regression assuming independence between observations within and across centres, and 2) it estimates parameters of the working correlation matrix using residuals in the naïve model and refit regression model to adjust standard error and confidence interval for within-centre dependence [19]. As a result, the estimated impact of treatment on the outcome in GEE model reflects the \"combined\" within- and between-centre relationship. GEE employs quasi-likelihood to estimate regression coefficients iteratively, and a working correlation needs to be supplied to approximate the within centre correlation. When the working correlation is mis-specified, the sandwich-based covariance estimator will lead to a robust yet less efficient estimate of treatment effect in GEE model [9]. Recently, statisticians found that variance of the estimated treatment effect could be underestimated when the number of centres was small [20]. We therefore assessed the efficiency of GEE models using procedure 'gee( )' in library(gee) in R. As in the mixed-effects model, an exchangeable correlation structure was assumed in fitting Model D.\n[SUBTITLE] Centre-level fixed-effects model (Model E - 1) [SUBSECTION] The centre level model is a stratified analysis performed on the mean difference in outcome between the treatment and control arms within centre. The overall treatment effect is estimated by a weighted average of individual mean differences across all centres. The principle of inverse-variance weighting is often used (Figure 1). This model is essentially a centre-level inverse-variance weighted paired t-test (i.e. the treatment arm is paired to the control arm in the same centre) to account for within centre correlation [10]. In the absence of intraclass correlation and under the assumption of equal sampling variation at patient level, the inverse-variance weight reduces to ntjncjntj+ncj for the j-th centre, which can be further simplified as the size of centre nj = ntj + ncj, given equal numbers of patients in two arms. Here ntj and ncj represent the number of patients in the treatment and control group, respectively, in the j-th centre. This form of the weighted analysis (without adjustment for covariates) was discussed extensively by many researchers [21-23]. We implemented Models E - 1 using the fixed-effects method for meta-analysis provided by the 'metacont( )' procedure in R.\nA schematic of fixed- and random-effects centre-level models.\nThe centre level model is a stratified analysis performed on the mean difference in outcome between the treatment and control arms within centre. The overall treatment effect is estimated by a weighted average of individual mean differences across all centres. The principle of inverse-variance weighting is often used (Figure 1). This model is essentially a centre-level inverse-variance weighted paired t-test (i.e. 
[SUBTITLE] Centre-level fixed-effects model (Model E - 1) [SUBSECTION]
The centre-level model is a stratified analysis performed on the mean difference in outcome between the treatment and control arms within each centre. The overall treatment effect is estimated by a weighted average of the individual mean differences across all centres, most commonly using inverse-variance weights (Figure 1). This model is essentially a centre-level, inverse-variance weighted paired t-test (the treatment arm is paired with the control arm in the same centre), which accounts for within-centre correlation [10]. In the absence of intraclass correlation, and under the assumption of equal sampling variation at the patient level, the inverse-variance weight reduces to $n_{tj} n_{cj} / (n_{tj} + n_{cj})$ for the j-th centre, which simplifies further to the centre size $n_j = n_{tj} + n_{cj}$ when the two arms contain equal numbers of patients. Here ntj and ncj denote the numbers of patients in the treatment and control groups, respectively, in the j-th centre. This form of weighted analysis (without adjustment for covariates) has been discussed extensively by many researchers [21-23]. We implemented Model E - 1 using the fixed-effects method for meta-analysis provided by the 'metacont( )' procedure in R.

Figure 1. A schematic of fixed- and random-effects centre-level models.
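To make the mechanics concrete, the following sketch computes the within-centre summaries and the inverse-variance weighted (fixed-effects) pooled estimate by hand; the data frame dat and all object names are hypothetical, and centres with fewer than two patients per arm would need to be dropped because their standard deviations cannot be estimated.

```r
# Per-centre summaries from the hypothetical data frame 'dat' (columns y, treat, centre)
centre_stats <- do.call(rbind, lapply(split(dat, dat$centre), function(d) {
  data.frame(centre = d$centre[1],
             n.t = sum(d$treat == 1), mean.t = mean(d$y[d$treat == 1]), sd.t = sd(d$y[d$treat == 1]),
             n.c = sum(d$treat == 0), mean.c = mean(d$y[d$treat == 0]), sd.c = sd(d$y[d$treat == 0]))
}))

# Within-centre mean difference, its sampling variance and inverse-variance weight
centre_stats$diff   <- centre_stats$mean.t - centre_stats$mean.c
centre_stats$var    <- centre_stats$sd.t^2 / centre_stats$n.t + centre_stats$sd.c^2 / centre_stats$n.c
centre_stats$weight <- 1 / centre_stats$var

# Fixed-effects (Model E - 1) pooled estimate and standard error
beta1_E1 <- with(centre_stats, sum(weight * diff) / sum(weight))
se_E1    <- sqrt(1 / sum(centre_stats$weight))
```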
[SUBTITLE] Centre-level random-effects model (Model E - 2) [SUBSECTION]
A random-effects approach is used to aggregate the mean differences not only across all participating centres but across a population of centres represented by the sample. This model factors heterogeneity of the treatment effect among centres (i.e. a random treatment-by-centre interaction) into its weighting scheme and captures both within- and between-centre variation of the outcome. This method should not be confused with the mixed-effects model using patient-level data (Model C). In Model E - 2, the underlying true treatment effects are not a single fixed value shared by all centres; rather, they are treated as random effects, normally distributed around a mean treatment effect with between-centre variation. Model C, in contrast, treats centres as random intercepts and postulates the same treatment effect across all centres. Model E - 2 therefore does not serve as an entirely fair comparator to the alternatives listed here, which assume no treatment-by-centre interaction; however, preliminary investigation suggested that E - 2 could outperform E - 1 in some situations, so we included it to advance understanding of these models. Details of the centre-level models are provided in Figure 1. Model E - 2 was carried out using the DerSimonian-Laird random-effects method [24] via the 'metacont( )' procedure in R. The confidence interval for Model E - 2 was constructed from the within- and between-centre variances.
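Both centre-level models can be obtained in a single call to metacont( ) from the meta package, as sketched below with the hypothetical centre_stats summaries from the previous example; the exact argument and component names (e.g. TE.fixed versus TE.common) vary between package versions.

```r
library(meta)

fit_E <- metacont(n.e = n.t, mean.e = mean.t, sd.e = sd.t,
                  n.c = n.c, mean.c = mean.c, sd.c = sd.c,
                  data = centre_stats, sm = "MD")

# Fixed-effects (E - 1) and DerSimonian-Laird random-effects (E - 2) results
c(fit_E$TE.fixed,  fit_E$seTE.fixed)
c(fit_E$TE.random, fit_E$seTE.random)
```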
We used Monte Carlo simulation to assess the performance of statistical models for analyzing parallel-group multicentre RCTs with a continuous outcome. We simulated the outcome, Y, using the mixed-effects linear regression model (Model C), $Y_{ij} = \beta_0 + b_{0j} + \beta_1 X_{ij} + e_{ij}$, for the i-th patient in the j-th centre, where Xij (= 0, 1) is the dummy variable for treatment allocation (i = 1, ..., mj; j = 1, ..., J). We generated the random error, e, from N(0, σe2 = 1).
We set the true treatment effect (β1) to 0.5 residual standard deviations (σe), an effect size suggested by the COMPETE II trial; this corresponds to a medium effect size according to Cohen's criterion [25]. To simulate centre effects, we used the relationship between the ICC and σb2, $ICC = \sigma_b^2/(\sigma_e^2 + \sigma_b^2)$. To study the behaviour of the candidate models fully, we considered ICC values of 0.00, 0.01, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50 and 0.75, which set the corresponding σb2 values to 0, 1/99, 1/19, 1/9, 3/17, 1/4, 1/3, 3/7, 7/13, 2/3, 9/11, 1 and 3. However, we focused the interpretation of the results on the lower ICC values, as these are more likely to occur in practice [26-28].

The original sample size was determined to be 84 per arm using a two-sided two-sample t-test (Model A) to ensure 90% power to detect a standardized effect size of 0.5 at a 5% type I error rate. We increased the final sample size to 90 per arm (raising power to 91.8%) to accommodate more combinations of the number and size of participating centres. We assumed patients were randomly allocated to the two groups in a 1:1 ratio, the most common and efficient choice. We generated data under nine scenarios (Table 1) to assess model performance in three designs: (a) balanced studies, where equal numbers of patients are enrolled from each centre and the numbers of patients in the two arms are the same (fixed by design); (b) unbalanced studies, where equal numbers of patients are enrolled from each centre but the numbers of patients in the two arms within a centre may differ by chance while keeping a 1:1 allocation ratio; and (c) unbalanced studies, where the numbers of patients enrolled vary among centres and block randomization with a block size of 2 or 4 is used to reduce chance imbalance. For designs (a) and (b), we considered three combinations of centre size and number of centres: J = 45 centres with 4 patients per centre; J = 18 centres with 10 patients per centre; and J = 6 centres with 30 patients per centre. Design (c) mimicked a more realistic scenario for multicentre RCTs. For the first setup of design (c), we grouped 180 patients into 17 centres, constructed so that the centre composition and degree of allocation imbalance were analogous to the COMPETE II trial but with a smaller sample size: the number of patients per centre varied from 1 to 28 (quartiles 5, 10, 15; mean 11; SD 8), and the percentage of unbalanced centres was between 47% and 70%, depending on block size.

Table 1. Catalogue of simulation designs. ICC: intraclass (intracentre) correlation.

To compare results from the various models in analyzing the COMPETE II trial, and to assess the accuracy and precision of the effect estimates, we included an additional scenario in design (c) that imitated this motivating example more closely with respect to sample size and centre composition (scenario 9). We generated treatment allocation (X1) and outcome (Y) for 511 patients in 46 centres, where the number of patients per centre was set exactly as observed in the COMPETE II trial (Table 2); in particular, three centres recruiting only one patient each were simulated. Analogously to COMPETE II, a fixed block size of 6 was used to assign patients to treatments. The same simulation model was employed as in the previous scenarios, but with a separate set of parameters based on the results of the COMPETE II trial (Table 3): β0 = 1.34, β1 = 1.26, σb2 = 1, σe2 = 7, ICC = 0.125.

Table 2. Centre composition of the COMPETE II trial.

Table 3. Estimates of intervention effects in the COMPETE II trial. SE: standard error; CI: confidence interval; σe2: within-centre variance; σb2: between-centre variance; ICC: intraclass (intracentre) correlation.

We generated 1000 simulations for each of the 13 ICC values under each of the first eight scenarios, and 1000 simulations at the specified ICC value for the ninth scenario. A separate set of centre effects was simulated for each scenario and each simulation. We chose 1000 replicates so that the simulation standard deviation for the empirical power at a nominal level of 90% in the absence of clustering was controlled at 1%. This also ensured that the standard deviations of the coverage of the confidence interval and of the empirical power did not exceed 1.6%.
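The data-generating step for a single balanced dataset can be sketched as follows; the function and argument names are illustrative, while the parameter values (treatment effect 0.5, σe = 1, and the ICC-to-σb2 conversion) follow the settings described above.

```r
simulate_trial <- function(J, m, beta0 = 0, beta1 = 0.5, icc = 0.1, sigma_e = 1) {
  sigma_b2 <- icc / (1 - icc) * sigma_e^2        # from ICC = sigma_b2 / (sigma_b2 + sigma_e2)
  centre   <- rep(seq_len(J), each = m)
  b0j      <- rep(rnorm(J, 0, sqrt(sigma_b2)), each = m)
  treat    <- unlist(lapply(seq_len(J), function(j) sample(rep(0:1, m / 2))))  # 1:1 within centre
  y        <- beta0 + b0j + beta1 * treat + rnorm(J * m, 0, sigma_e)
  data.frame(centre = factor(centre), treat = treat, y = y)
}

set.seed(123)
dat <- simulate_trial(J = 45, m = 4)   # e.g. scenario 1: 45 centres, 4 patients per centre
```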
We applied the six statistical models to each simulated dataset. For each model, we calculated the bias, simulation standard deviation (SD), average estimated standard error (SE) and mean squared error (MSE) of the point estimator of the treatment effect (i.e. β1), as well as the empirical coverage of the 95% confidence interval around β1 and the empirical statistical power. We constructed confidence intervals based on the t-distribution for Models A - C and Wald intervals based on the normal approximation for Models D and E. Bias was estimated as the difference between the average estimate of β1 over the 1000 simulated datasets and the true effect. The simulation (empirical) SD was calculated as the standard deviation of the estimated β1 values across simulations, indicating the precision of the estimator. We also obtained the average of the estimated SEs over the 1000 simulations to assess the accuracy of each model's variance estimator. The overall error rate of the point estimator was captured by the estimated MSE, computed as the average squared difference between the estimated β1 and the true value across the 1000 datasets. Furthermore, we report the performance of the interval estimators for each model: the empirical coverage was estimated as the proportion of 95% confidence intervals that covered the true β1, and the empirical power as the proportion of confidence intervals that rejected the false null hypothesis, i.e. intervals excluding zero. All datasets were simulated and analyzed in R version 2.4.1 [29].
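These performance measures can be summarized with a small helper such as the one below, assuming vectors est, se, lower and upper collect the estimated treatment effect, its standard error and the confidence limits over the simulation replicates; the names are ours.

```r
performance <- function(est, se, lower, upper, beta1 = 0.5) {
  c(bias     = mean(est) - beta1,
    emp.SD   = sd(est),
    ave.SE   = mean(se),
    MSE      = mean((est - beta1)^2),
    coverage = mean(lower <= beta1 & beta1 <= upper),
    power    = mean(lower > 0 | upper < 0))   # CIs excluding zero reject the null
}
```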
[SUBTITLE] Analysis of COMPETE II trial data [SUBSECTION]
We applied all six models to the COMPETE II data; the results are reported in Table 3. Approximately equal numbers of patients were randomized to the intervention and control groups within each family doctor, giving 253 patients in the intervention group and 258 in the control group. Among the 46 family physicians, 11 (24%) treated equal numbers of patients in the two arms, 24 (52%) treated one more patient in the intervention or control arm, 10 (22%) managed two more patients in either arm, and one (2%) managed three more patients in one arm than in the other.

All baseline characteristics were roughly balanced between arms [3]. The analyses using patient-level data produced similar estimates of β1, and the effect size was around 0.5 times the corresponding residual standard deviation. The standard error of the estimated β1 decreased from 0.25 (Model A) to 0.23 (Models B and C) and 0.19 (Model D) when centre effects were adjusted for, leading to narrower CIs around the estimated β1 in Models B - D. The intraclass correlation was estimated as 0.138 in Model C and 0.124 in Model D. The two centre-level analyses returned slightly larger estimates of β1 than the patient-level models. The minimal variance between physicians indicated no noticeable heterogeneity (τ2 = 0, I2 = 0), so Models E - 1 and E - 2 gave identical estimates. Zero was not contained in any of the 95% confidence intervals, so all models led to the conclusion that the experimental intervention significantly improved patient management over usual care, based on the change in composite process score.
[SUBTITLE] Balanced design with equal centre size [SUBSECTION]
[SUBTITLE] Properties of point estimates [SUBSECTION]
Table 4 summarizes descriptive statistics of the point estimator of the treatment effect in Models A - E for three values in the lower range of the ICC spectrum, in the balanced design. The point estimates of β1 were unbiased in all six models for all ICC values. It may seem surprising that the point estimates in Model A, which ignores stratification and clustering, were invariant to the ICC, and that the same estimates were returned by all four patient-level models for each simulation. In fact, when treatments are allocated in the same proportion in all centres, centre has no association with treatment allocation, so adjusting for centre effects has little impact on the point estimate of the treatment-response relationship for a continuous response variable. For this reason, the different ways of incorporating between-centre information (Models B - D) led to the same estimates of the treatment contrast in the balanced design. Identical point estimates in turn led to identical empirical SDs and overall error rates (measured by the MSE) in Models A - D, regardless of ICC. Across the different ICC values and scenarios 1 - 3, Models B and C yielded accurate estimates of the standard error of β̂1 that approximated the empirical SD and the true standard deviation of 0.149 calculated from the best linear unbiased estimator of the simulation model, i.e. Model C [18]. Table 4 also shows that the standard error of β̂1 in Model A increased with the ICC in each scenario, deviating from the corresponding empirical SD. The standard error could be slightly underestimated in Model D when the number of centres was small (compare the empirical SD and average SE for scenarios 2 and 3 in Table 4), in agreement with previous work on the small-sample properties of the GEE model [20].

Table 4. Properties of point estimates of the treatment effect from Models A - E in scenarios 1 to 3. SD: empirical standard deviation; Ave. SE: average estimated SE; MSE: mean squared error; ICC: intraclass (intracentre) correlation.

The centre-level analyses produced larger empirical SEs and MSEs for β̂1 than the patient-level analyses when centre sizes were small or moderate (Table 4); the difference diminished as centre size increased. When only a few patients were enrolled per centre, the fixed-effects centre-level point estimator in Model E - 1 had large sampling variation that was severely underestimated at all ICC values. The random-effects model (E - 2) based on the DerSimonian-Laird method, on the other hand, yielded valid SEs for β̂1 that were on average greater than the SEs from the patient-level models. The average estimated SE for β̂1 over all simulations in Model E - 2 was always larger than the SE estimates in Models B and C, followed by the SE estimated in Model E - 1, across the different combinations of centre size and number of centres. Although the datasets were generated so that treatment effects were homogeneous among centres (i.e. no treatment-by-centre interaction), the random-effects analysis using centre-level data outperformed the fixed-effects analysis when centre size was small, because Model E - 2 took into account the apparent "heterogeneity" arising from imprecise estimation of the centre mean differences and their standard errors.
[SUBTITLE] Properties of interval estimates [SUBSECTION]
The empirical coverage of the confidence intervals (CIs) and the statistical power in balanced studies are displayed in Table 5. Models B and C produced similar coverage, close to the nominal value of 95%, across the different ICC values and centre compositions. Model A provided conservatively high coverage that increased with the ICC, illustrating that for moderate to large ICC values the CIs in Model A were abnormally wide owing to the overestimated SE of β̂1. The empirical coverage of the CIs from Models D and E - 1 was on average further below 95% than that of Models B and C. This is likely caused by underestimation of the standard error in Models D and E - 1 and is associated with an apparent increase in power in the first three scenarios. For Model D, the coverage dropped below 90% when the number of centres was reduced to six in scenario 3. The coverage of Model E - 1 was too low to be useful when studies were conducted at many small centres (scenario 1); however, it increased gradually with centre size and approached 95% when there were 30 patients per centre (scenario 3). Model E - 2 showed a coverage pattern similar to E - 1, although its coverage was closer to 95%. Models B and C largely maintained the nominal power of 91.8% regardless of the ICC value. The power of Model A decreased dramatically as the ICC departed from 0, indicating that the model failed to adjust for between-centre variation or within-centre correlation in the outcome measure. The nominal type II error rate (8%) was maintained in Models D and E - 1 in scenarios 1 - 3. Model E - 2 generally had lower power to detect the true treatment effect because of a larger standard error that reflects both within-centre variability and treatment-by-centre interaction; interestingly, this power rose as the number of centres decreased and approached 88% in scenario 3.

Table 5. Coverage of the 95% interval estimate of the treatment effect and statistical power of Models A - E in scenarios 1 to 3. Cover. of CI: coverage proportion of the 95% confidence interval; ICC: intraclass (intracentre) correlation.

Overall, Models B and C performed very similarly and outperformed the other models in the balanced design. Models C and D converged to a solution in all simulations.
[SUBTITLE] Design with equal centre size and chance imbalance [SUBSECTION]
[SUBTITLE] Properties of point estimates [SUBSECTION]
The performance of the different models in multicentre studies with equal centre sizes, a 1:1 allocation ratio and chance imbalance is displayed in Tables 6 and 7. Results were similar to those in the balanced design, though a few differences emerged. The unbalanced allocation of patients to treatment arms due to pure within-centre variation introduced chance imbalance (in both directions) into the treatment-response relationship; hence ignoring centre effects completely (as in Model A) led to unbiased yet less efficient estimates for large ICC values. Model B could be less precise than Model A for small to moderate ICC values, a phenomenon previously reported by Pickering and Weatherall [7]. As in the balanced design, the fixed- and random-effects models performed comparably across ICC values, largely because the fixed and random intercepts for study centres cancelled out when estimating the treatment contrast in Models B and C and had little impact on the estimation of the fixed effect contrast across centres. However, the fixed-effects model produced a larger empirical standard deviation and average standard error in scenario 4, a study composed of many centres each managing only a few patients. Adjusting for between-centre variation as random effects in Model C, or using the population-averaged analysis of Model D, allowed information to be borrowed across centres and resulted in greater precision.

Table 6. Properties of point estimates of the treatment effect from Models A - E in scenarios 4 to 6. SD: empirical standard deviation; Ave. SE: average estimated SE; MSE: mean squared error; ICC: intraclass (intracentre) correlation.

Table 7. Coverage of the 95% interval estimate of the treatment effect and statistical power of Models A - E in scenarios 4 to 6. Cover. of CI: coverage proportion of the 95% confidence interval; ICC: intraclass (intracentre) correlation.

[SUBTITLE] Properties of interval estimates [SUBSECTION]
Results were similar to those in the balanced design. The patient-level models A - C guaranteed nominal coverage of the confidence intervals at different ICC values, whereas the other models were liable to produce lower coverage under certain conditions. Among all models, Models C and D achieved the empirical power closest to the nominal value of 91.8% across the different centre sizes. When centre size was small and the number of centres large (scenario 4), the power of Models C and D also decreased with the ICC, a pattern that was less obvious in scenarios 5 and 6. Models C and D achieved convergence in analyzing all simulated datasets.
[SUBTITLE] Design with unequal centre sizes and chance imbalance [SUBSECTION]
The properties of the point and interval estimates in scenarios 7 and 8 (with unequal centre sizes and chance imbalance) were close to the results of the previous two designs. In particular, the comparative performance of the six models fell between that in scenarios 2 and 5, as the level of imbalance between the two treatments was no more than half of the block size within centres. As similar results were observed for block sizes 2 and 4, summary statistics based on block size 4 are plotted in Figures 2, 3, 4 and 5. The results suggest that unequal centre sizes had little impact on model performance, apart from a slight enlargement of the empirical variance of β̂1 in Model E - 1. To summarize, although all six models produced unbiased point estimates, the fixed- and mixed-effects models using patient-level data provided the most accurate estimates of the standard error of β̂1 for large ICC values, and should therefore be used in the analysis of multicentre trials when the ICC is nontrivial or unknown, in order to control the type I and type II error rates. For studies consisting of a large number of centres with only a few patients per centre, adjusting for centre as a random effect produced the most precise point estimate of the treatment effect and is therefore preferable. The information sandwich method appeared to slightly underestimate the actual variance when patients were recruited from 17 centres in scenarios 7 and 8. Because of the varying centre sizes, Model D did not converge for all simulated datasets (between 1 and 93 of 1000 simulations) after 2000 iterations when the ICC was less than or equal to 0.1 or greater than 0.4, for block sizes of 2 and 4. Such datasets were excluded for all models, and additional data were simulated to attain a total of 1000 simulations for each ICC value. In most cases, the non-convergence of GEE was due to a non-positive definite working correlation matrix.

Figure 2. Empirical standard deviation (SD) across 1000 simulations by ICC for scenario 8 (block size = 4).

Figure 3. Average standard error (SE) across 1000 simulations by ICC for scenario 8 (block size = 4).

Figure 4. Coverage of the 95% CI by ICC for scenario 8 (block size = 4).

Figure 5. Empirical power by ICC for scenario 8 (block size = 4).

In scenario 9, as a result of mimicking the particular centre composition of the COMPETE II trial, an average of three of the 46 centres contained no patients in one of the treatment groups per simulation. These centres were removed from the fixed-effects model (Model B), as no comparison patients in the same centre were available. About six of the 46 centres recruited fewer than two patients per treatment arm in each simulation; these centres were dropped from the centre-level analyses, as the standard error of the treatment difference per centre could not be obtained as an input for 'metacont( )'. The performance of the six models in scenario 9 was similar to that in scenarios 7 and 8, although the point estimates from all models appeared to be marginally biased toward the null (Table 8). Estimates from the patient-level models were more precise, with standard errors closer to 0.230, the best linear unbiased estimate of the standard error under the simulation model. Once again, the standard error was slightly biased upward in Model A and marginally biased downward in Model D, resulting in wider, conservative interval estimates from Model A and slightly narrower intervals from Model D. Models B and C performed comparably, probably because on average only three centres, each containing one patient, were dropped from Model B, which had little effect on the variance estimation. Models C and D achieved convergence in all 1000 simulations in this scenario.

Table 8. Properties of point and 95% interval estimates calculated from Models A - E based on 1000 simulated datasets in scenario 9 (unbalanced, 46 centres, same centre composition as the COMPETE II trial). SD: empirical standard deviation; Ave. SE: average estimated SE; MSE: mean squared error; Cover. of CI: coverage proportion of the 95% confidence interval; ICC: intraclass (intracentre) correlation.
We applied all six models to the COMPETE II data and report the results in Table 3. Approximately equal numbers of patients were randomized to the intervention and control groups within each family doctor, leading to 253 and 258 patients in the intervention and control groups, respectively. Among the 46 family physicians, 11 (24%) treated equal numbers of patients in the two arms, 24 (52%) treated one more patient in the intervention or control arm, 10 (22%) managed 2 more patients in either arm, and one physician (2%) managed 3 more patients in one arm than in the other.

All baseline characteristics were roughly balanced between arms [3]. The analyses using patient-level data produced similar estimates for β1, and the effect size was around 0.5 times the corresponding residual standard deviation. The standard error of the estimated β1 reduced from 0.25 (Model A) to 0.23 (Models B and C) and then 0.19 (Model D) when centre effects were adjusted for, leading to narrower CIs around the estimated β1 in Models B - D. The intraclass correlation was estimated at 0.138 in Model C and 0.124 in Model D. The two centre-level analyses returned slightly larger estimates of β1 than the individual patient-level models. In fact, the minimal variance between physicians indicated no noticeable heterogeneity between physicians (τ² = 0, I² = 0), resulting in identical estimates from Models E-1 and E-2. Zero was not contained in the 95% confidence intervals, so all models led to the conclusion that the experimental intervention significantly improved patient management over usual care, based on the change in composite process score.
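For readers who want to see what the six modelling strategies (A-D plus the two centre-level analyses E-1 and E-2) look like in code, the sketch below fits them in R to a patient-level data frame d with columns y (continuous outcome), treat (0/1) and centre, such as the simulated dataset above. The column names, and the use of lme4 and geepack, are illustrative assumptions rather than a description of the software actually used; only 'metacont()' is named in the text itself.

# Sketch: the six modelling strategies on patient-level data 'd'
# (assumed columns: y, treat, centre; rows ordered by centre for geeglm).
library(lme4)     # mixed-effects (random-intercept) model
library(geepack)  # GEE with an exchangeable working correlation
library(meta)     # centre-level meta-analysis via metacont()

mA <- lm(y ~ treat, data = d)                   # Model A: ignore centres (two-sample t-test)
mB <- lm(y ~ treat + factor(centre), data = d)  # Model B: centre as fixed effects
mC <- lmer(y ~ treat + (1 | centre), data = d)  # Model C: centre as a random intercept
mD <- geeglm(y ~ treat, id = centre, data = d,  # Model D: GEE, exchangeable correlation
             family = gaussian, corstr = "exchangeable")

# Models E-1 / E-2: per-centre mean differences pooled by fixed- and
# random-effects meta-analysis; centres with < 2 patients per arm drop out.
agg <- do.call(rbind, lapply(split(d, d$centre), function(s) {
  data.frame(centre = s$centre[1],
             n.e = sum(s$treat == 1), mean.e = mean(s$y[s$treat == 1]), sd.e = sd(s$y[s$treat == 1]),
             n.c = sum(s$treat == 0), mean.c = mean(s$y[s$treat == 0]), sd.c = sd(s$y[s$treat == 0]))
}))
mE <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c, studlab = centre,
               data = subset(agg, n.e >= 2 & n.c >= 2), sm = "MD")

The treatment coefficient and its standard error can then be read off each fit (for example with summary(mC) or coef(summary(mD))); these are the quantities compared across models throughout the Results.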
Balanced design

Properties of point estimates

Table 4 summarizes descriptive statistics of the point estimator of the treatment effect in Models A - E for three values in the lower range of the spectrum of ICC, in the balanced design. The point estimates of β1 were unbiased in all six models for all ICC values. It may at first seem surprising that the point estimates in Model A, which ignores stratification and clustering, were invariant to ICC, and that the same estimates were returned by the four patient-level models for each simulation. In fact, when treatments are allocated in the same proportion in all centres, centre has no association with the treatment allocation, so adjusting for centre effects or not has little impact on the point estimate of the treatment-response relationship for a continuous response variable. For this reason, the different ways of incorporating between-centre information (Models B - D) led to the same estimates of the treatment contrast in a balanced design. Identical point estimates in turn led to the same empirical SD and overall error rate (measured by MSE) of the estimator in Models A - D regardless of ICC. Across the different ICC values and scenarios 1 - 3, Models B and C yielded accurate estimates of the standard error of β̂1 that approximated the empirical SD and the true standard deviation, 0.149, calculated using the best linear unbiased estimator of the simulation model, i.e. Model C [18]. From Table 4 we found that the standard error of β̂1 in Model A increased with ICC in each scenario, deviating from the corresponding empirical SD. The standard error could be slightly underestimated in Model D when the number of centres was small (Table 4, scenarios 2 and 3, comparing the empirical SD and average SE). This agrees with previous work concerning the small-sample properties of the GEE model [20].

Table 4. Properties of point estimates of the treatment effect from Models A - E in scenarios 1 to 3. SD: empirical standard deviation; Ave. SE: average estimated SE; MSE: mean squared error; ICC: intraclass (intracentre) correlation.

The centre-level analyses produced larger empirical SE and MSE for β̂1 than the patient-level analyses given small or moderate centre sizes (Table 4). The difference reduced as centre size increased. When only a few patients were enrolled per centre, the fixed-effects centre-level point estimator in Model E-1 had large sampling variation that was severely underestimated at all ICC values. The random-effects model (E-2), based on the DerSimonian-Laird method, on the other hand seemed to yield a valid SE for β̂1 that was on average greater than the SEs from the patient-level models. The average estimate of the SE for β̂1 over all simulations in Model E-2 was always larger than the SE estimates in Models B and C, followed by the SE estimated in Model E-1, across different combinations of centre size and number of centres. In this study, although the datasets were generated so that the treatment effects were homogeneous among centres (i.e. no treatment by centre interaction), the random-effects analysis using centre-level data outperformed the fixed-effects analysis when the centre size was small, because Model E-2 took into account the observed "heterogeneity" due to imprecise estimation of the centre mean difference and the associated standard error.
Properties of interval estimates

The empirical coverage of the confidence intervals (CIs) and the statistical power in the balanced studies are displayed in Table 5. Models B and C produced similar coverage, close to the nominal value of 95%, over the different ICC values and centre compositions. Model A provided conservatively high coverage that increased with ICC, illustrating that for moderate to large ICC values the CIs in Model A were abnormally wide due to the overestimated SE for β̂1. The empirical coverage of the CIs from Models D and E-1 was on average farther below 95% compared with Models B and C. This is likely caused by underestimation of the standard error in Models D and E-1, and is associated with an apparent increase of power in the first three scenarios. For Model D, the coverage dropped to below 90% when the number of centres was reduced to six in scenario 3. The coverage of Model E-1 was too low to be useful when studies were conducted at many smaller centres (scenario 1). However, coverage increased gradually with centre size and approached 95% when there were 30 patients per centre (scenario 3). Model E-2 presented a similar coverage pattern to E-1, although its coverage was closer to 95%. Models B and C largely maintained the nominal power of 91.8% regardless of the ICC value. Power of Model A decreased dramatically as ICC departed from 0, indicating that the model failed to adjust for between-centre variation or within-centre correlation in the outcome measure. The nominal type II error rate (8%) was maintained in Models D and E-1 in scenarios 1 - 3. Model E-2 generally had lower power to detect the true treatment effect due to a larger standard error that reflects both the within-centre variability and treatment by centre interaction. Interestingly, this power rose as the number of centres was reduced and approached 88% in scenario 3.

Table 5. Coverage of the 95% interval estimate of the treatment effect and statistical power of Models A - E in scenarios 1 to 3. Cover. of CI: coverage proportion of 95% confidence interval; ICC: intraclass (intracentre) correlation.

Overall, Models B and C had very similar performance and outperformed the other models in the balanced design. Models C and D converged to a solution in all simulations.
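To make the meaning of 'empirical coverage' and 'empirical power' concrete, here is a minimal sketch of how such summaries can be computed from repeated simulation and fitting of the random-intercept model. It reuses the simulate_trial() helper sketched earlier, and the true effect, ICC and number of replications are illustrative assumptions.

# Sketch: empirical bias, SD, coverage and power for Model C over nsim replications.
library(lme4)
nsim  <- 1000
beta1 <- 0.3                      # illustrative true effect used to generate the data
est <- se <- numeric(nsim)
for (i in seq_len(nsim)) {
  d   <- simulate_trial(icc = 0.25, beta1 = beta1)
  fit <- lmer(y ~ treat + (1 | centre), data = d)
  est[i] <- fixef(fit)["treat"]
  se[i]  <- sqrt(vcov(fit)["treat", "treat"])
}
lower <- est - 1.96 * se
upper <- est + 1.96 * se
c(bias     = mean(est) - beta1,                     # bias of the point estimator
  emp_sd   = sd(est),                               # empirical SD across simulations
  ave_se   = mean(se),                              # average model-based SE
  coverage = mean(lower <= beta1 & beta1 <= upper), # empirical coverage of the 95% CI
  power    = mean(abs(est / se) > 1.96))            # empirical power against beta1 = 0

These are the same kinds of quantities tabulated as SD, Ave. SE, coverage and power above; repeating the loop for each model and ICC value reproduces the comparisons reported in the Results.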
Design with equal centre sizes and chance imbalance

Properties of point estimates

Performance of the different models in multicentre studies with equal centre sizes, a 1-to-1 allocation ratio and chance imbalance is displayed in Tables 6 and 7. Similar results were observed as in the balanced design, though a few differences emerged. The unbalanced allocation of patients to the treatment arms due to pure within-centre variation introduced chance imbalance (in both directions) into the treatment-response relationship, hence ignoring centre effects completely (as in Model A) led to unbiased yet less efficient estimates for large ICC values. Model B could be less precise than Model A given small to moderate ICC values, a phenomenon previously reported by Pickering and Weatherall [7]. As in the balanced design, the fixed- and random-effects models performed comparably for the various ICC values, largely because the fixed and random intercepts for study centres cancelled out in estimating the effect contrast when we fitted Models B and C, and had little impact on the estimation of the fixed effect contrast across centres. However, the fixed-effects model produced a larger empirical standard deviation and average standard error in scenario 4, a study composed of many centres each managing a few patients. Adjusting for between-centre variation as random effects in Model C, or using the population-averaged analysis in Model D, allowed information to be borrowed across centres and resulted in greater precision.

Table 6. Properties of point estimates of the treatment effect from Models A - E in scenarios 4 to 6. SD: empirical standard deviation; Ave. SE: average estimated SE; MSE: mean squared error; ICC: intraclass (intracentre) correlation.

Table 7. Coverage of the 95% interval estimate of the treatment effect and statistical power of Models A - E in scenarios 4 to 6. Cover. of CI: coverage proportion of 95% confidence interval; ICC: intraclass (intracentre) correlation.

Properties of interval estimates

Similar results were observed relative to the balanced design. The patient-level models A - C guaranteed nominal coverage of the confidence intervals at different ICC values, whereas the other models were likely to produce lower coverage under certain conditions. Among all models, Models C and D achieved the best empirical power, closest to the nominal value of 91.8%, across the different centre sizes. When the centre size was small and the number of centres was large (scenario 4), power for Models C and D also decreased with ICC, a pattern that was less obvious in scenarios 5 and 6. Models C and D achieved convergence in analyzing all simulated datasets.
Discussion

In this paper, we investigated six modelling strategies in a Frequentist framework to study the effect of an experimental treatment compared with a control treatment in the context of multicentre RCTs with a continuous outcome. We focused on three designs with equal or varying centre sizes and a treatment allocation ratio of 1:1, in the absence of treatment by centre interaction. The results of this simulation study showed that, when the proportion of patients allocated to the experimental treatment was the same in each centre or subject to chance imbalance only, models using patient-level and centre-level data yielded unbiased point estimates of the treatment effect across a wide spectrum of ICC values. Ignoring stratification by centre or within-centre correlation did not bias the estimated treatment effects even when the ICC was large. In fact, Parzen et al showed that mathematically the usual two-sample t-test, naively assuming independent observations of the response within centre, is asymptotically unbiased in this context [30].

The simulation study also indicated that these models produced different standard errors of β̂1, and that the properties of the interval estimates were affected by several factors: whether and how centre effects were incorporated in the analysis, the combination of centre size and number of participating centres, and the level of non-orthogonality of the observed data. Treating centre as a random intercept resulted in the most precise estimate, and nominal values of coverage and power were attained in all circumstances. The fixed-effects model had extremely similar performance to the mixed-effects model in the balanced design, but was slightly less efficient when the number of centres was large (J > 20) in an unbalanced design. Pickering and Weatherall observed the same pattern in their simulation study comparing three patient-level models with small ICC values [7]. The GEE model using the information sandwich covariance method tended to underestimate the standard error when the sample of centres was small, a property noted previously [20,31]. This resulted in higher statistical power; that is, the treatment effect estimate was more likely to be significant with a smaller standard error, but this was associated with lower coverage of the confidence interval. Murray et al suggested that at least 40 centres should be used to ensure a reliable estimate of the standard error in the context of cluster randomized trials [32]. Our simulation results suggest that such a cut-off value is also applicable to multicentre RCTs. Failure to control for centre effects in any form resulted in inflation of the standard error, falsely high interval coverage and a sizable drop in power as the ICC increased. Parzen et al quantified the impact of correlation among observations within centre on the variance of β̂1 in Model A as 1/(1-ICC) [30]. Alternatively, one may consider a variant of robust variance estimation, or a GEE model with an independent working correlation, to control for the impact of ICC on variance estimation when using a t-test. Centre-level models generally produced larger standard errors, and lower coverage or power, than the patient-level models. The centre-level random-effects model incorporated variability of the treatment effect over centres, and was not a fair comparator to the other models. Interestingly, this model seemed to fare better than the centre-level fixed-effects model in terms of precision and coverage even though the simulated datasets contained no treatment by centre interaction. Although the random-effects centre-level model may be a reasonable alternative to the patient-level models when the number of patients per centre is large (≥30), centre-level models cannot adjust for patient-level covariates, a potentially fatal drawback in the presence of patient prognostic imbalance.

Statisticians have different viewpoints on treating centre effects and treatment by centre interaction as fixed or random effects when analyzing multicentre RCTs [12,13,21,33]. Our simulation results demonstrated the advantage of treating centres as random intercepts in the absence of treatment by centre interaction. When many centres enrol a few patients and allocation is unbalanced, the random-intercept models can give more precise estimates of the treatment effect than the fixed-intercept models, because they recover inter-centre information in unbalanced situations. For instance, in a multicentre RCT consisting of 45 centres each recruiting 4 patients, the empirical variance of the estimator of the treatment effect resulting from the fixed-effects model was 24.8% and 26.0% greater than that from the random-effects model when the ICC was 0.01 and 0.05, respectively; that is, comparing the empirical variance of 0.162² with 0.145² for ICC = 0.01, and 0.174² with 0.155² for ICC = 0.05 (Table 6, scenario 4). We therefore take the same position as Grizzle [33] and Agresti and Hartzel [12] that, "Although the clinics are not randomly chosen, the assumption of random clinic effect will result in tests and confidence intervals that better capture the variability inherent in the system more realistically than clinical effects are considered fixed".

Our results have some implications for the design of multicentre RCTs in the absence of treatment by centre interaction. First, regardless of the pre-determined allocation ratio, permuted block randomization (with relatively small block sizes) should be used to maintain approximate balance or orthogonality (i.e. the same treatment allocation proportion across centres [7]) between treatments and centres, so that their individual effects can be evaluated independently. Variable block sizes can be used to strengthen allocation concealment. Second, for a given sample size, the number of patients randomized in the majority of centres should be sufficiently large to ensure a reliable estimate of the within-centre variation. Third, it is essential for investigators to obtain a rough estimate of the ICC for within-centre responses, through literature review or a pilot study. To reach a nominal power of 80% or 90% (in the absence of clustering), centre effects should be taken into consideration in the sample size assessment. When centre effects are included without treatment by centre interaction, the analysis becomes more powerful than a two-sample t-test.
One method to assess the sample size is to start with a two-sample t-test for continuous outcomes (ignoring centre effects) and then multiply the original estimated error variance by a variance inflation factor of 1/(1-ICC). This factor has the effect of increasing the required sample size: ignoring centre effects results in the larger sample size in the absence of interaction. Sample size determined using the information sandwich covariance of the GEE model could lead to a slight loss of power when the number of centres is small (<40) and no proper adjustment is made.
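As a worked illustration of that approach (the effect size, SD, power and ICC below are illustrative assumptions, not values from the paper), base R's power.t.test() can be combined with the 1/(1 - ICC) inflation:

# Sketch: naive per-arm sample size from a two-sample t-test, then inflate
# the error variance by 1/(1 - ICC) for an analysis that ignores centre effects.
delta <- 0.5    # anticipated difference in means
sd0   <- 1.0    # anticipated within-centre SD
icc   <- 0.05
n_naive    <- power.t.test(delta = delta, sd = sd0,
                           sig.level = 0.05, power = 0.90)$n
n_inflated <- power.t.test(delta = delta, sd = sd0 * sqrt(1 / (1 - icc)),
                           sig.level = 0.05, power = 0.90)$n
ceiling(c(per_arm_naive = n_naive, per_arm_inflated = n_inflated))

With ICC = 0.05 the variance inflation is 1/(1 - 0.05) ≈ 1.05, i.e. roughly 5% more patients per arm; an analysis that adjusts for centre effects would not require this inflation.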
Lastly, there is no particular reason to require equal numbers of patients to be enrolled in all participating centres, and this is seldom the case in practice. Throughout the simulations, we observed similar results for studies with equal and varying centre sizes. In the study, we considered three scenarios representing the particular centre composition of the COMPETE II trial. For a discussion of the potential impact of enrolment patterns on the point and interval estimates of the treatment effect, readers can refer to the publications on random versus determined enrolment, and on the relative efficiency of equal versus unequal cluster sizes, in the reference list [34,35].

The current ICH E9 guideline recommends that researchers investigate the treatment effect using a model that allows for centre differences in the absence of treatment by centre interaction [1]. However, it is implausible or impractical to include centre effects in statistical modelling, or to stratify randomization by centre, when it is anticipated from the start that trials may have very few subjects per centre. As is acknowledged in the document, these recommendations are based on fixed-effects models. Mixed-effects models, on the other hand, may also be used to explore the centre and treatment by centre interaction effects, especially when the number of centres is large [1]. Our simulation results indicated that when a considerable number of centres contains only a few patients, adjusting for centre as a fixed effect may lead to reduced precision (depending on the distribution of patients between arms) compared with the naive unadjusted analysis. Our work complements the ICH E9 guideline by studying the impact of intraclass correlation on the assessment of treatment effects - a challenge that is seldom discussed, although routinely faced by investigators in practice. Our investigation suggests that (1) ignoring centre effects completely may cause substantial overestimation of the standard error, a spurious increase in the coverage of the confidence interval and a reduction in power; and (2) mixed-effects models and GEE models, if employed appropriately, can produce accurate and precise effect estimates regardless of the degree of clustering. We recommend considering these methods in the development of future guidelines.

When the number of patients per centre is very small, it is not practical to include centre as a fixed effect in the analysis of patient-level data, as the centre effects cannot be reliably estimated and the precision of the treatment effect will be compromised. In fact, for extremely small centres, all patients may be allocated to the same treatment group, and such centres will be ignored by the fixed-effects model [36-39]. The alternatives include collapsing all centres and performing a two-sample t-test, collapsing smaller centres to create an artificial centre and treating it as a fixed effect, or exploring the other models discussed above. The mixed-effects model utilizes small centres more efficiently by "borrowing" information from larger centres. The GEE approach models the average treatment difference across all centres and adjusts for centre effects through a uniform correlation structure. This is an intuitively more efficient model which, unfortunately, does not always converge when the number of patients per centre is highly variable (simulation scenarios 7 and 8). In the current study, non-convergence problems were more likely to arise for very small or large ICC values (less than 0.1 or greater than 0.4 for block sizes 2 or 4) because of non-positive definite working correlation matrices, and the frequency could be as high as 10% after 2000 iterations. In contrast, convergence problems did not occur for the mixed-effects models in any scenario. Our results show that the analysis of trials consisting of very small centres (i.e. those containing fewer than 2 patients per arm) using centre-level models may not be an optimal strategy, because the within-centre standard deviation of the treatment difference cannot be estimated for such centres, and consequently these very small centres are excluded from the analysis.

Results of two large empirical studies and one systematic review of cluster RCTs in primary care clinics suggested that most ICC values for physical, functional and social measures are less than 0.10 [26-28]. The estimated ICC in the COMPETE II trial using the GEE and linear mixed-effects models, on the other hand, was 0.124 and 0.138, respectively. We chose to include rare yet possible large ICC values (0-0.75) in this simulation to examine the overall trend of model performance by ICC, and for completeness and generalizability. Readers should anticipate the ICC values likely to emerge from their own studies when interpreting these results. Throughout this work, we quantified the correlation among subjects within a centre using the ICC, the most commonly used measure of clustering in the biomedical literature. As indicated in previous sections, the ICC reflects the interplay of two variance components in multicentre data: the between-centre variance and the within-centre variance. These variance components are relatively easy to interpret in the analysis of continuous outcomes using linear models. For the analysis of binary or time-to-event data from multicentre trials using generalized mixed and frailty models, interpretation of centre heterogeneity can present challenges because the random effects are linked to the outcome via nonlinear functions [40]. Reparameterization of the probability density function may be used to assess the impact of the within- and between-centre variance. Interested readers can refer to Duchateau and Janssen [40] for more details.
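As a companion to that definition, the ICC can be recovered directly from the variance components of a random-intercept fit. A minimal sketch, reusing the mC fit from the earlier sketch (the object and grouping-factor names are assumptions):

# Sketch: ICC = between-centre variance / (between-centre + residual variance).
library(lme4)
vc       <- as.data.frame(VarCorr(mC))      # variance components from the lme4 fit
sigma2_b <- vc$vcov[vc$grp == "centre"]     # between-centre variance
sigma2_e <- vc$vcov[vc$grp == "Residual"]   # within-centre (residual) variance
icc_hat  <- sigma2_b / (sigma2_b + sigma2_e)
icc_hat

This is the quantity reported as 0.138 for Model C in the COMPETE II analysis above.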
A major limitation of this study is that it did not address model performance when a treatment by centre interaction exists. Such interactions may be due to different patient populations or variable standards of care. Interested readers may consult Moerbeek et al [6] for formulas for the variance of β̂1 in different models and Jones et al [14] for simulation results. Future studies addressing interaction effects in multicentre RCTs are needed. The datasets in the current paper were generated based on a moderate treatment effect, reflected by the standardized mean difference between the treatment and control groups. More or less prominent treatment effects are also likely to occur in clinical studies, and similar findings would be expected. The current study investigated continuous outcomes in two groups from a Frequentist perspective. The models discussed above can be naturally extended to compare three or more treatments. Agresti and Hartzel [12] surveyed different methods for evaluating treatments for binary outcomes in multicentre RCTs. Non-parametric approaches and Bayesian methods are also available to obtain treatment contrasts. Interested readers can refer to Aitkin [41], Gould [11], Smith et al [42], Legrand et al [16], and Louis [43], to name a few.

Conclusions

We used simulations to investigate the performance of six statistical approaches that have been advocated for analyzing continuous outcomes in multicentre RCTs. Our simulation study showed that all six models produced unbiased estimates of the treatment effect in individually randomized multicentre trials. Adjusting for centre as a random effect resulted in more efficient effect estimates in all scenarios, over a wide spectrum of ICC values and various centre compositions. The fixed-effects model performed comparably to the mixed-effects model under most circumstances but lost efficiency when many centres contained a relatively small number of patients. The GEE model underestimated the standard error of the effect estimates when a small number of centres was involved, and did not always converge when the centre size was variable and the ICC was very large or very small. The two-sample t-test severely overestimated the standard error given moderate to large ICC values. The relative efficiency of the statistical modelling of treatment contrasts was also affected by the ICC, the distribution of patient enrolment, the centre size and the number of centres.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

RC participated in the design of the study, simulation, analysis and interpretation of data, and drafting and revision of the manuscript. LT contributed to the conception and design of the study, interpretation of data and revision of the manuscript. JM contributed to the design of the study and revision of the manuscript. AH contributed to acquisition of data and critical revision of the manuscript. EP and PJD advised on critical revision of the manuscript for important intellectual content. All authors have read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2288/11/21/prepub
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
Beyond 50. Challenges at work for older nurses and allied health workers in rural Australia: a thematic analysis of focus group discussions.
21338525
The health workforce in Australia is ageing, particularly in rural areas, where this change will have the most immediate implications for health care delivery and workforce needs. In rural areas, the sustainability of health services will be dependent upon nurses and allied health workers being willing to work beyond middle age, yet the particular challenges for older health workers in rural Australia are not well known. The purpose of this research was to identify aspects of work that have become more difficult for rural health workers as they have become older; and the age-related changes and exacerbating factors that contribute to these difficulties. Findings will support efforts to make workplaces more 'user-friendly' for older health workers.
BACKGROUND
Nurses and allied health workers aged 50 years and over were invited to attend one of six local workshops held in the Hunter New England region of NSW, Australia. This qualitative action research project used a focus group methodology and thematic content analysis to identify and interpret issues arising from workshop discussions.
METHODS
Eighty older health workers from a range of disciplines attended the workshops. Tasks and aspects of work that have become more difficult for older health workers in hospital settings, include reading labels and administering medications; hearing patients and colleagues; manual handling; particular movements and postures; shift work; delivery of babies; patient exercises and suturing. In community settings, difficulties relate to vehicle use and home visiting. Significant issues across settings include ongoing education, work with computers and general fatigue. Wider personal challenges include coping with change, balancing work-life commitments, dealing with attachments and meeting goals and expectations. Work and age-related factors that exacerbate difficulties include vision and hearing deficits, increasing tiredness, more complex professional roles and a sense of not being valued in the context of greater perceived workload.
RESULTS
Older health workers are managing a range of issues, on top of the general challenges of rural practice. Personal health, wellbeing and other realms of life appear to take on increasing importance for older health workers when faced with increasing difficulties at work. Solutions need to address difficulties at personal, workplace and system wide levels.
CONCLUSIONS
[ "Adult", "Allied Health Personnel", "Education", "Female", "Focus Groups", "Health Services Research", "Humans", "Job Satisfaction", "Male", "Middle Aged", "New South Wales", "Nurses", "Personal Satisfaction", "Population Dynamics", "Rural Health Services", "Workload" ]
3060112
Methods
This qualitative research project used a focus group methodology and thematic analysis to identify and interpret issues arising from workshop discussions. Participants were also encouraged to reflect on issues, provide further input and act locally on ideas whilst the wider research was still in progress, consistent with qualitative action research principles.

A Project Reference Group (PRG) was established prior to commencement of the workshops, which consisted of the two researchers and eight health service personnel. These included health service managers, rural nurses and allied health workers currently working within HNE Health. PRG members were nominated by the HNE Health Executive Team on the basis of particular areas of practice, location, responsibility or expertise relevant to the Project. The purpose of the PRG was to provide practical advice and assistance on the conduct and direction of the project; provide feedback after the initial workshop; and to review and provide practical comment on preliminary results and recommendations.

Communities were selected to host a focus group or 'workshop' on the basis of size and geographic spread across the Hunter New England region of rural NSW. Each workshop was held in a town with a population between 5,000 and 30,000 people. Communities were located in six of the seven rural management clusters of HNE Health (Figure 1).

Figure 1. Location of communities where older health workers project workshops were held within Hunter New England Area Health Service, NSW, Australia.

Local health service managers or their representatives were contacted to assist in arranging the workshop and to make sure all health service staff knew about it. Notification of each local workshop, with an invitation to participate directed at staff '50 years and over', was circulated 2-3 weeks in advance, through regular workplace communication channels. This included staff email networks, staff meetings, staff notice boards and word-of-mouth. Health service managers in communities within an hour's drive of the hosting health service were also asked to inform their staff about the workshop.

A convenience sample of 80 older health workers attended the six workshops between August and November 2008. Four were men and an average of 13 participants attended each workshop. Demographic information on the age and characteristics of individual participants was not collected, but each group was asked to indicate their general work experience and fields of practice. Consistent with an 'over 50' cohort, a number of participants had worked within the health service for 30 years or more. Participants were mainly drawn from the nursing and allied health professional groups, who were the focus of recruitment efforts. Allied health fields of practice included radiography, occupational therapy, diversional therapy, physiotherapy, social work and psychology. Nursing sector participants included registered nurses and enrolled nurses from community health, aged care, general hospital wards, operating theatres, sterilising departments, midwifery, emergency departments and health service management. A small number were from other professional groupings (less than 10%), encompassing 'other professional, para-professional and clinical support staff' (Aboriginal health), 'corporate services' (clerical administration) and 'hotel services' (catering). The majority of participants worked regular office hours, although others had worked shift work in the recent past.
Participation was completely voluntary and informed consent was obtained from participants prior to their involvement. At the outset of each workshop, the purpose of the research was explained to participants by the lead researcher, who acted as the group facilitator. A 'participant information sheet' was provided outlining further details, and after an opportunity for questions, participants were asked to sign a formal consent form prior to commencement of discussions.

Within each workshop, participants were asked to identify and describe:
• work tasks and aspects of work that have become more difficult for rural health workers as they have become older;
• age-related changes and other exacerbating factors contributing to these difficulties.

Responses focusing on these questions were recorded by participants during small break-out groups of 3-5 people. In particular, participants were asked to list difficulties in a table, and to record ideas on the age-related changes and exacerbating factors that contributed to each of these difficulties in an adjacent column. One or two items from each small group were selected for further discussion by the larger group, enabling other group members to contribute and/or debate ideas. One researcher had the designated task of taking detailed notes during the course of these larger group discussions, which supplemented the data recorded by participants on the small group worksheets.

Thematic content analysis was then used to identify issues arising from workshop discussions. Notes and worksheets were reviewed after each workshop and information sorted into categories that focused on defining difficulties associated with specific tasks, variation within work contexts and relationships between categories. Categories were progressively reviewed and used to help guide discussion in subsequent workshops until no new information on the subject arose. Themes relating to difficulties were tabulated moving from (1) particular tasks and specific workplace settings, to (2) more general aspects of work and broader settings. Participants were invited to reflect on ideas shared in their 'workshop' and act locally on issues whilst the wider research project was still in progress. Preliminary findings from all workshops were sent to participants, who were given the opportunity to provide further feedback and ideas prior to final recommendations being made.

The research protocol was approved by the Hunter New England Human Research Ethics Committee (Reference 08/05/21/4.06). This was consistent with local health service requirements and the research ethics principles of the World Medical Association Declaration of Helsinki [23].
[ "Background", "Results", "Challenges in particular work settings", "Challenges spanning work tasks and settings", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Funding", "Pre-publication history" ]
[ "The rural health workforce in Australia is ageing and is older than the urban health workforce [1-4]. In 2005, the average age of nurses in outer regional areas of Australia was 46.0 years, compared to 44.6 years in metropolitan areas [2]. Although younger than the nursing profession, rural allied health workers also tend to be older than their city counterparts [3]. Both the nursing and allied health workforces are predominantly female, at around 75% [4].\nFor the Hunter New England Area Health Service (HNE Health), in northern New South Wales, around one quarter of allied health workers and a half of the rural nursing workforce were aged over 50 years in October 2009 (HNE Health Workforce Informatics). This a pattern likely to be shared by other rural health services across Australia. Workforce sustainability, at least in the short term, is likely to depend on nurses and allied health workers being willing, able and happy to work beyond middle age.\nAgeing is associated with recognised physical and mental changes including reductions in aerobic power, muscular strength and endurance, reaction speed, acuity of special senses, difficulties with thermoregulation, sleep disturbances and concerns about increased risk of chronic illness [5-8]. Back pain, other musculo-skeletal disorders and stress-related mood disorders are common health and injury problems with older health workers [9,10]. However, in some studies, older nurses and other older workers have been shown to have better than average physical and mental health - although this may be because work helps keep older workers healthy, or less healthy older workers leave the workforce [9,11].\nGenerally speaking, in the absence of major illness, little relationship has been found between ageing and work performance per se [5,12-14]. Personal health and fitness however, can be highly variable amongst older adults and is more likely to affect physical strength and likelihood of injury than ageing itself [5,12]. Poor health is strongly associated with labour force exit [15-17]. Whilst older workers have considerable capacity to manage job demands and difficulties, it has been suggested that at some point, older workers do become \"overwhelmed\" by the increased risk of health consequences, injury and disability [18].\nRecruitment and retention of health professionals of all ages has proved an ongoing difficulty in rural areas. This is related to an interplay of personal, environmental and wider work-related factors. Professional isolation, lack of recognition limited access to professional education, high workloads and stress related to lack of replacement staff, long work hours, limited resources and sometimes unsupportive management practices are commonly described workplace challenges [3,19-22].\nGiven the proportion of rural health workers aged over 50 years, the question to be addressed is whether there are specific difficulties for older workers on a day to day basis, in addition to the challenges faced by all health workers? Specifically, this study aimed to identify work tasks and broader challenges that have become more difficult for rural health workers as they have become older and the factors that contribute to these difficulties. 
Findings are informing the development of solutions at personal, workplace and system-wide levels, in an ongoing action research project.", "[SUBTITLE] Challenges in particular work settings [SUBSECTION] The following tables indicate specific tasks and challenges in a range of work settings that older rural health workers reported as becoming more difficult with age. Work and age-related factors that contributed to these challenges, are also presented.\nTasks and challenges in hospital ward settings that have become more difficult with age\nWork in the hospital setting that has become more difficult for older workers includes reading medication labels. This is exacerbated by poor lighting, small print size and print colours, such as orange-red print on ampoules used for injections. Hearing difficulties are exacerbated by high levels of workplace noise and for some, by the accents of staff from non-English-speaking backgrounds.\nManual handling and manoeuvring of patients was particularly difficult where squatting, bending or maintaining postures for long periods was required. It was also considered more physically challenging, because patients were getting older, more debilitated and overweight.\n\"Patients are older and heavier ... and so are we.\"\nShift work was met with increased tiredness with an expressed need for longer recovery times between shifts. Disturbed sleep and anxiety about lack of sleep was commonly reported, with problems of tiredness sometimes exacerbated by a lack of replacement staff.\n\"It takes 2 days to get over a double shift.\"\n\"You can't go off sick - there's no one to replace you.\"\nSpecific clinical procedures, such as physiotherapy exercises, are also more difficult because they often involve long periods of leaning, bending and being on one's feet. Midwifery had become more of a challenge for older midwives, partly attributed due to the increased birthing options and positions for women.\n\"I think (these are) reasons why some older nurses no longer practice midwifery.\"\nChallenges facing older community health workers are described in Table 2:\nTasks and challenges in community settings that have become more difficult with age\nCommunity health workers described greater driver discomfort and fatigue now they are older. Getting in and out of vehicles was more difficult - especially with the purchase of smaller, lower cars. Carrying equipment to and from vehicles and moving around in cluttered homes, has made home visiting more difficult.\n\"The (smaller car brands) are too low - you have to 'roll out of them.'\"\nWork with computers (Table 3) also poses a particular challenge for older health workers. The problems relate to physical discomfort associated with sitting for long periods, keyboard dexterity and seeing small font on screens. Additionally, there are mental challenges associated with the rapid adoption of new programs, the need to remember passwords and interpretation of computerised results. Little previous exposure and a general lack of confidence with computers was a commonly reported experience. It was also felt group training sessions were mostly levelled at a younger, more computer literate generation.\nChallenges associated with computer work that have become more difficult with age\n\"I.T trainers assume a skill level that is not necessarily so\"\n\"Some have even left because of the computers... or some can't work in emergency department (ED) ... 
because they can't use the computer.\"\nParticipation in meetings, conferences and education programs often requires long distance travel and night driving (Table 4). This has become harder now that participants are older, due to driver fatigue, musculoskeletal problems and difficulties with night vision. These issues are exacerbated by a lack of adequate back support in vehicles, glare of lights and bright road signs. Alternatives such as teleconferencing also pose problems for the hearing impaired.\nChallenges associated with meetings and ongoing education programs that have become more difficult with age\n\" Once upon a time I could get in a car and drive to a meeting at the end of the day. Now I can't do that.\"\nThe following tables indicate specific tasks and challenges in a range of work settings that older rural health workers reported as becoming more difficult with age. Work and age-related factors that contributed to these challenges, are also presented.\nTasks and challenges in hospital ward settings that have become more difficult with age\nWork in the hospital setting that has become more difficult for older workers includes reading medication labels. This is exacerbated by poor lighting, small print size and print colours, such as orange-red print on ampoules used for injections. Hearing difficulties are exacerbated by high levels of workplace noise and for some, by the accents of staff from non-English-speaking backgrounds.\nManual handling and manoeuvring of patients was particularly difficult where squatting, bending or maintaining postures for long periods was required. It was also considered more physically challenging, because patients were getting older, more debilitated and overweight.\n\"Patients are older and heavier ... and so are we.\"\nShift work was met with increased tiredness with an expressed need for longer recovery times between shifts. Disturbed sleep and anxiety about lack of sleep was commonly reported, with problems of tiredness sometimes exacerbated by a lack of replacement staff.\n\"It takes 2 days to get over a double shift.\"\n\"You can't go off sick - there's no one to replace you.\"\nSpecific clinical procedures, such as physiotherapy exercises, are also more difficult because they often involve long periods of leaning, bending and being on one's feet. Midwifery had become more of a challenge for older midwives, partly attributed due to the increased birthing options and positions for women.\n\"I think (these are) reasons why some older nurses no longer practice midwifery.\"\nChallenges facing older community health workers are described in Table 2:\nTasks and challenges in community settings that have become more difficult with age\nCommunity health workers described greater driver discomfort and fatigue now they are older. Getting in and out of vehicles was more difficult - especially with the purchase of smaller, lower cars. Carrying equipment to and from vehicles and moving around in cluttered homes, has made home visiting more difficult.\n\"The (smaller car brands) are too low - you have to 'roll out of them.'\"\nWork with computers (Table 3) also poses a particular challenge for older health workers. The problems relate to physical discomfort associated with sitting for long periods, keyboard dexterity and seeing small font on screens. Additionally, there are mental challenges associated with the rapid adoption of new programs, the need to remember passwords and interpretation of computerised results. 
Little previous exposure and a general lack of confidence with computers was a commonly reported experience. It was also felt group training sessions were mostly levelled at a younger, more computer literate generation.\nChallenges associated with computer work that have become more difficult with age\n\"I.T trainers assume a skill level that is not necessarily so\"\n\"Some have even left because of the computers... or some can't work in emergency department (ED) ... because they can't use the computer.\"\nParticipation in meetings, conferences and education programs often requires long distance travel and night driving (Table 4). This has become harder now that participants are older, due to driver fatigue, musculoskeletal problems and difficulties with night vision. These issues are exacerbated by a lack of adequate back support in vehicles, glare of lights and bright road signs. Alternatives such as teleconferencing also pose problems for the hearing impaired.\nChallenges associated with meetings and ongoing education programs that have become more difficult with age\n\" Once upon a time I could get in a car and drive to a meeting at the end of the day. Now I can't do that.\"\n[SUBTITLE] Challenges spanning work tasks and settings [SUBSECTION] Table 5 indicates the more general challenges associated with age and work within rural health services. These difficulties span across work settings and particular tasks.\nChallenges reported that span work settings and particular tasks\nParticipants felt that meeting overall workplace demands across settings had generally become more difficult, as workers have become older. Mental fatigue is occurring from a sense of 'information overload' and an expectation to have 'stuff in your head'. Increased paperwork requirements and having multiple roles within the health service, are contributing to a build up of mental stress, making it difficult to concentrate amidst distractions and leading to a sense of 'always feeling stretched'.\n\"It's just harder when things mount up at end of the day.\"\nPersonal challenges reported by older workers include coping with 'change,' such as increased role complexity, 'more hats to wear' and greater administrative requirements.\n\"A new system every year, you learn it, then it changes again.\"\nThis is reportedly worse when older workers 'organisational wisdom' about change and what has worked in the past, has not been sought or valued. 'New ideas' were sometimes perceived as 'reinventions' that led to more work, more stress and little perceived benefit.\n\"What goes around comes around, a revolving door of ideas, ... you've done it before and nothing ever came of it.\"\nEmotional stress following acute care episodes or death of people, has also become much harder to cope with, with workers often knowing people personally, after years of caring for them. Participants also reported a greater sense of empathy, attachment, loss and an increased awareness of their own ageing and mortality.\n\"The death of older people affects us more as we age ourselves - they're closer to our own age now.\"\n\"Now you know them, they're not just a patient.\"\nBalancing work with family life commitments is another challenge. Many described having an extended carer role within families - for ageing parents, partners, grown up children and grand-children. This could involve travelling long distances for health and family matters. 
Participants expressed a desire to work less or more regular hours, or have time off to attend to these responsibilities. However, they felt management often did not understand this need, or that there were not enough workers available in small rural health services to enable this.\n\"When finished work, (we are) still carrying the bricks - caring for our own sick elderly.\"\n\"At our age, we have children growing up, with their children, and elderly parents, (we are) the main carer.\"\nParticipants were keen to make a positive contribution to good health care in their community - and believe they were doing so. However, at the same time, many felt they fall short of achieving personal standards related to care - or in fulfilling the expectations of patients (related to care), the organisation (relating to workload) and peers (related to support). This was attributed to high workloads, reduced clinical time, increased administrative tasks and a sense they could not get done in a day what they wanted or were expected to do. Participants felt this was worse for older workers, because they were slower than they used to be, yet had a high work ethic and pride in delivering quality care.\n\"It's not the work you do, it's the work you cannot do that is frustrating and emotionally distressing and results in less job satisfaction.\"\nGiven this context, some expressed a lowered tolerance for uncooperative, rude or aggressive patients, or those who 'won't help themselves'. Conversely, some felt that life experience gave them a more 'empathetic' view than when they were younger.\nParticipants in each workshop were offered the opportunity to suggest potential solutions to the problems that were identified. These solutions fell into three categories - recommendations for personal behavioural changes, for work-supportive solutions (by local and area health services) and system-wide solutions such as purchasing of vials with more legible print. These ideas are being incorporated into an ongoing action-research project by the health service.\nTable 5 indicates the more general challenges associated with age and work within rural health services. These difficulties span across work settings and particular tasks.\nChallenges reported that span work settings and particular tasks\nParticipants felt that meeting overall workplace demands across settings had generally become more difficult, as workers have become older. Mental fatigue is occurring from a sense of 'information overload' and an expectation to have 'stuff in your head'. Increased paperwork requirements and having multiple roles within the health service, are contributing to a build up of mental stress, making it difficult to concentrate amidst distractions and leading to a sense of 'always feeling stretched'.\n\"It's just harder when things mount up at end of the day.\"\nPersonal challenges reported by older workers include coping with 'change,' such as increased role complexity, 'more hats to wear' and greater administrative requirements.\n\"A new system every year, you learn it, then it changes again.\"\nThis is reportedly worse when older workers 'organisational wisdom' about change and what has worked in the past, has not been sought or valued. 'New ideas' were sometimes perceived as 'reinventions' that led to more work, more stress and little perceived benefit.\n\"What goes around comes around, a revolving door of ideas, ... 
you've done it before and nothing ever came of it.\"\nEmotional stress following acute care episodes or death of people, has also become much harder to cope with, with workers often knowing people personally, after years of caring for them. Participants also reported a greater sense of empathy, attachment, loss and an increased awareness of their own ageing and mortality.\n\"The death of older people affects us more as we age ourselves - they're closer to our own age now.\"\n\"Now you know them, they're not just a patient.\"\nBalancing work with family life commitments is another challenge. Many described having an extended carer role within families - for ageing parents, partners, grown up children and grand-children. This could involve travelling long distances for health and family matters. Participants expressed a desire to work less or more regular hours, or have time off to attend to these responsibilities. However, they felt management often did not understand this need, or that there were not enough workers available in small rural health services to enable this.\n\"When finished work, (we are) still carrying the bricks - caring for our own sick elderly.\"\n\"At our age, we have children growing up, with their children, and elderly parents, (we are) the main carer.\"\nParticipants were keen to make a positive contribution to good health care in their community - and believe they were doing so. However, at the same time, many felt they fall short of achieving personal standards related to care - or in fulfilling the expectations of patients (related to care), the organisation (relating to workload) and peers (related to support). This was attributed to high workloads, reduced clinical time, increased administrative tasks and a sense they could not get done in a day what they wanted or were expected to do. Participants felt this was worse for older workers, because they were slower than they used to be, yet had a high work ethic and pride in delivering quality care.\n\"It's not the work you do, it's the work you cannot do that is frustrating and emotionally distressing and results in less job satisfaction.\"\nGiven this context, some expressed a lowered tolerance for uncooperative, rude or aggressive patients, or those who 'won't help themselves'. Conversely, some felt that life experience gave them a more 'empathetic' view than when they were younger.\nParticipants in each workshop were offered the opportunity to suggest potential solutions to the problems that were identified. These solutions fell into three categories - recommendations for personal behavioural changes, for work-supportive solutions (by local and area health services) and system-wide solutions such as purchasing of vials with more legible print. These ideas are being incorporated into an ongoing action-research project by the health service.", "The following tables indicate specific tasks and challenges in a range of work settings that older rural health workers reported as becoming more difficult with age. Work and age-related factors that contributed to these challenges, are also presented.\nTasks and challenges in hospital ward settings that have become more difficult with age\nWork in the hospital setting that has become more difficult for older workers includes reading medication labels. This is exacerbated by poor lighting, small print size and print colours, such as orange-red print on ampoules used for injections. 
Hearing difficulties are exacerbated by high levels of workplace noise and for some, by the accents of staff from non-English-speaking backgrounds.\nManual handling and manoeuvring of patients was particularly difficult where squatting, bending or maintaining postures for long periods was required. It was also considered more physically challenging, because patients were getting older, more debilitated and overweight.\n\"Patients are older and heavier ... and so are we.\"\nShift work was met with increased tiredness with an expressed need for longer recovery times between shifts. Disturbed sleep and anxiety about lack of sleep was commonly reported, with problems of tiredness sometimes exacerbated by a lack of replacement staff.\n\"It takes 2 days to get over a double shift.\"\n\"You can't go off sick - there's no one to replace you.\"\nSpecific clinical procedures, such as physiotherapy exercises, are also more difficult because they often involve long periods of leaning, bending and being on one's feet. Midwifery had become more of a challenge for older midwives, partly attributed due to the increased birthing options and positions for women.\n\"I think (these are) reasons why some older nurses no longer practice midwifery.\"\nChallenges facing older community health workers are described in Table 2:\nTasks and challenges in community settings that have become more difficult with age\nCommunity health workers described greater driver discomfort and fatigue now they are older. Getting in and out of vehicles was more difficult - especially with the purchase of smaller, lower cars. Carrying equipment to and from vehicles and moving around in cluttered homes, has made home visiting more difficult.\n\"The (smaller car brands) are too low - you have to 'roll out of them.'\"\nWork with computers (Table 3) also poses a particular challenge for older health workers. The problems relate to physical discomfort associated with sitting for long periods, keyboard dexterity and seeing small font on screens. Additionally, there are mental challenges associated with the rapid adoption of new programs, the need to remember passwords and interpretation of computerised results. Little previous exposure and a general lack of confidence with computers was a commonly reported experience. It was also felt group training sessions were mostly levelled at a younger, more computer literate generation.\nChallenges associated with computer work that have become more difficult with age\n\"I.T trainers assume a skill level that is not necessarily so\"\n\"Some have even left because of the computers... or some can't work in emergency department (ED) ... because they can't use the computer.\"\nParticipation in meetings, conferences and education programs often requires long distance travel and night driving (Table 4). This has become harder now that participants are older, due to driver fatigue, musculoskeletal problems and difficulties with night vision. These issues are exacerbated by a lack of adequate back support in vehicles, glare of lights and bright road signs. Alternatives such as teleconferencing also pose problems for the hearing impaired.\nChallenges associated with meetings and ongoing education programs that have become more difficult with age\n\" Once upon a time I could get in a car and drive to a meeting at the end of the day. Now I can't do that.\"", "Table 5 indicates the more general challenges associated with age and work within rural health services. 
These difficulties span across work settings and particular tasks.\nChallenges reported that span work settings and particular tasks\nParticipants felt that meeting overall workplace demands across settings had generally become more difficult, as workers have become older. Mental fatigue is occurring from a sense of 'information overload' and an expectation to have 'stuff in your head'. Increased paperwork requirements and having multiple roles within the health service, are contributing to a build up of mental stress, making it difficult to concentrate amidst distractions and leading to a sense of 'always feeling stretched'.\n\"It's just harder when things mount up at end of the day.\"\nPersonal challenges reported by older workers include coping with 'change,' such as increased role complexity, 'more hats to wear' and greater administrative requirements.\n\"A new system every year, you learn it, then it changes again.\"\nThis is reportedly worse when older workers 'organisational wisdom' about change and what has worked in the past, has not been sought or valued. 'New ideas' were sometimes perceived as 'reinventions' that led to more work, more stress and little perceived benefit.\n\"What goes around comes around, a revolving door of ideas, ... you've done it before and nothing ever came of it.\"\nEmotional stress following acute care episodes or death of people, has also become much harder to cope with, with workers often knowing people personally, after years of caring for them. Participants also reported a greater sense of empathy, attachment, loss and an increased awareness of their own ageing and mortality.\n\"The death of older people affects us more as we age ourselves - they're closer to our own age now.\"\n\"Now you know them, they're not just a patient.\"\nBalancing work with family life commitments is another challenge. Many described having an extended carer role within families - for ageing parents, partners, grown up children and grand-children. This could involve travelling long distances for health and family matters. Participants expressed a desire to work less or more regular hours, or have time off to attend to these responsibilities. However, they felt management often did not understand this need, or that there were not enough workers available in small rural health services to enable this.\n\"When finished work, (we are) still carrying the bricks - caring for our own sick elderly.\"\n\"At our age, we have children growing up, with their children, and elderly parents, (we are) the main carer.\"\nParticipants were keen to make a positive contribution to good health care in their community - and believe they were doing so. However, at the same time, many felt they fall short of achieving personal standards related to care - or in fulfilling the expectations of patients (related to care), the organisation (relating to workload) and peers (related to support). This was attributed to high workloads, reduced clinical time, increased administrative tasks and a sense they could not get done in a day what they wanted or were expected to do. 
Participants felt this was worse for older workers, because they were slower than they used to be, yet had a high work ethic and pride in delivering quality care.\n\"It's not the work you do, it's the work you cannot do that is frustrating and emotionally distressing and results in less job satisfaction.\"\nGiven this context, some expressed a lowered tolerance for uncooperative, rude or aggressive patients, or those who 'won't help themselves'. Conversely, some felt that life experience gave them a more 'empathetic' view than when they were younger.\nParticipants in each workshop were offered the opportunity to suggest potential solutions to the problems that were identified. These solutions fell into three categories - recommendations for personal behavioural changes, for work-supportive solutions (by local and area health services) and system-wide solutions such as purchasing of vials with more legible print. These ideas are being incorporated into an ongoing action-research project by the health service.", "This study attempted to capture a range of perspectives on this issue from older health workers in rural health services across the Hunter New England region, although it could not be claimed that the views of participants were 'representative' of all older health workers in the area. Recruitment of individuals was dependent upon participants receiving information about the workshops; the appeal of attending a 'workshop' on this issue; and available time on the part of managers to encourage local participation and ensure information was received by all health workers. Whilst provisions were made for work release to attend workshops, fewer 'current' shift workers attended workshops than community health staff and others who worked more regular hours. Also, whilst groups were facilitated, it is possible some individuals did not feel free to express personal opinions in a group forum; and that some individual opinions were not captured by group data collection processes.\nDespite these study limitations, recurrent themes arose in the workshops relating to broad areas of difficulty, with no new themes on the issue emerging by the last workshop. Ageing changes described by older health workers that contributed to difficulties at work, are generally consistent with ageing impacts reported elsewhere [5-8]. In particular, musculoskeletal problems, deteriorating vision, fatigue and reduced fine motor dexterity were common concerns.\nParticipants were concerned that these difficulties were affecting their sense of personal health and well-being. A desire for reduced work hours and to spend more time with family, suggested an awareness of the impacts of 'too much work', supporting findings elsewhere that the benefits of a 'healthy worker effect' can be compromised by 'work-family' conflict and excessive work hours [9,11,24] Physical demands, rotating shifts, inflexible scheduling and full time positions, are also known to be associated with chronic fatigue and personal health concerns of older nurses elsewhere [6,9,25]. However, alternatives such as lateral movement of older workers to less physically demanding positions cited elsewhere [10,26,27], are a less likely option for many small rural health services, already short of staff.\nDifficulties working with computers were commonly reported, supporting findings elsewhere [28,29], including the sense that computerised systems did not necessarily reduce documentation time [30]. 
Participants were keen to participate in computer education courses, given attention to more appropriate, adult learning styles - a finding also supported by other studies [5,12]. However, barriers to education access for older workers not reported elsewhere, include difficulties with night and long distance driving and difficulties hearing teleconferencing facilities.\nWhilst a sense of community and appreciation of rural lifestyle are amongst the most important reasons given by health workers, for continuing to work in rural areas [19,31,32], there is also a greater likelihood that patients in rural areas are known by health workers, with emotional implications when well-known patients suffer or die. The grief and emotional impact described by participants of 'losing someone you know' may be a form of 'compassion fatigue', also described as the 'cost of caring for others in emotional pain' [33,34]. Given participants' life stage and the rural context, 'compassion fatigue' may be part of the fatigue experience of older rural health workers.\nReviewing life-work balance has been reported as a response to working conditions, existence of chronic pain, ageing and personal issues [10]. Prominent among 'personal' issues in this study and elsewhere [6,35], are the demands and complex responsibilities associated with an extended primary carer role outside of work - contributing to a desire to work less hours.\nSimilar to other studies [10,36], participants felt that the perspectives of older workers were often not valued by the organisation, even though worker participants were generally supportive of the organisation and happy to have been consulted in this study. A sense of powerlessness about not having one's opinion heard, has been associated with burnout and ill-health in health workers [10]. More generally, work stress amongst rural nurses has been attributed to a climate of change, restructuring and financial constraints that lead to increased expectations or role changes for which nurses are not prepared or consulted [37]. This experience may be worse for older workers, also dealing with ageing and generational factors that make adapting to change harder.\nAs a generation, older health workers have been described as committed, hard-working, intuitive and skilled decision makers, possessing a deep-rooted understanding of patient needs [6,26,38]. However, such values may increase the emotional exhaustion and guilt felt when extra stressors related to role change and patient load, takes time away from providing the quality care that helps define self-worth [6,39,40].\nIntolerance for perceived rudeness, abuse and unwillingness in some patients to 'help themselves', may arise because such behaviours conflict with generational values; and because older health workers feel they are already providing quality care under stressful circumstances. For example, obese patients were regarded as more physically demanding, which, if combined with a perception that a patient was not trying to help themselves, could contribute to or explain some of the negative attitudes toward such patients, that have been reported elsewhere [41]. However, in contrast, other participants felt they were more empathetic or tolerant of 'demanding' patients than younger workers - a finding also reported elsewhere [41,42].\nGenerally speaking, older health workers displayed a great sense of humour during the workshops and believed their maturity and wisdom gave them a valuable perspective on work and life. 
Like older workers elsewhere, participants 'continue to care', despite difficulties, intergenerational conflicts and less respect from patients [10,39].\nCollegial relationships within the health care team have been found to reduce job stress and foster decisions to stay in the workplace [6], whilst poor working conditions, lack of supportive workplace relationships, combined with ageing concerns, have influenced older nurses to make changes for the sake of their own health [10]. Finding ways to address concerns and avoid excessive demands being placed on older health workers, should contribute to a happier, healthier and more sustainable older health workforce.", "This study describes some of the practical difficulties and issues confronting older workers in the Hunter New England health region of NSW, which are likely to be shared in other regions. With a view to developing practical solutions, the following recommendations have been made to and accepted by the Health Service Executive Team:\n- Older health workers should be involved in development of \"a resource booklet\" on ageing and other factors that impact upon work with practical suggestions for addressing these at personal and local level\n- The Health Service should establish a health service \"Task Force\", comprising managers, older rural health workers and an occupational therapist, to examine the study findings and implement area-wide policy and practice solutions, as well as recommendations for state-wide policy development.\nOlder health workers in rural areas are a committed and productive section of the workforce who are meeting health service delivery needs at considerable personal cost. This study supports the view that there comes a point where physical and emotional 'costs' exceed the benefits of work, particularly as other realms of life take on more importance - but that this is likely to occur more rapidly, where stressors and difficulties that are amenable to solution or modification are not addressed. Actions that health services can take to consult and value the opinions of older workers in addressing these difficulties, will benefit not only older workers, but will be in the interests of the health service and better health care delivery.", "The authors declare that they have no competing interests.", "Both authors have (1) made substantial contributions to conception and design, acquisition of data, analysis and interpretation of data; 2) were involved in drafting the manuscript; and 3) have given final approval of the final manuscript.", "The Older Health Workers Project was an internally funded, co-joint activity of the Australian Centre for Agricultural Health and Safety and the Hunter New England Area Health Service. No special-purpose or external funds were received or allocated from any other funding bodies. Research time and labour costs were met under existing funding arrangements by the Australian Centre for Agricultural Health and Safety.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/11/42/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "Challenges in particular work settings", "Challenges spanning work tasks and settings", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Funding", "Pre-publication history" ]
[ "The rural health workforce in Australia is ageing and is older than the urban health workforce [1-4]. In 2005, the average age of nurses in outer regional areas of Australia was 46.0 years, compared to 44.6 years in metropolitan areas [2]. Although younger than the nursing profession, rural allied health workers also tend to be older than their city counterparts [3]. Both the nursing and allied health workforces are predominantly female, at around 75% [4].\nFor the Hunter New England Area Health Service (HNE Health), in northern New South Wales, around one quarter of allied health workers and a half of the rural nursing workforce were aged over 50 years in October 2009 (HNE Health Workforce Informatics). This a pattern likely to be shared by other rural health services across Australia. Workforce sustainability, at least in the short term, is likely to depend on nurses and allied health workers being willing, able and happy to work beyond middle age.\nAgeing is associated with recognised physical and mental changes including reductions in aerobic power, muscular strength and endurance, reaction speed, acuity of special senses, difficulties with thermoregulation, sleep disturbances and concerns about increased risk of chronic illness [5-8]. Back pain, other musculo-skeletal disorders and stress-related mood disorders are common health and injury problems with older health workers [9,10]. However, in some studies, older nurses and other older workers have been shown to have better than average physical and mental health - although this may be because work helps keep older workers healthy, or less healthy older workers leave the workforce [9,11].\nGenerally speaking, in the absence of major illness, little relationship has been found between ageing and work performance per se [5,12-14]. Personal health and fitness however, can be highly variable amongst older adults and is more likely to affect physical strength and likelihood of injury than ageing itself [5,12]. Poor health is strongly associated with labour force exit [15-17]. Whilst older workers have considerable capacity to manage job demands and difficulties, it has been suggested that at some point, older workers do become \"overwhelmed\" by the increased risk of health consequences, injury and disability [18].\nRecruitment and retention of health professionals of all ages has proved an ongoing difficulty in rural areas. This is related to an interplay of personal, environmental and wider work-related factors. Professional isolation, lack of recognition limited access to professional education, high workloads and stress related to lack of replacement staff, long work hours, limited resources and sometimes unsupportive management practices are commonly described workplace challenges [3,19-22].\nGiven the proportion of rural health workers aged over 50 years, the question to be addressed is whether there are specific difficulties for older workers on a day to day basis, in addition to the challenges faced by all health workers? Specifically, this study aimed to identify work tasks and broader challenges that have become more difficult for rural health workers as they have become older and the factors that contribute to these difficulties. Findings are informing the development of solutions at personal, workplace and system-wide levels, in an ongoing action research project.", "This qualitative research project used a focus group methodology and thematic analysis to identify and interpret issues arising from workshop discussions. 
Participants were also encouraged to reflect on issues, provide further input and act locally on ideas whilst the wider research was still in progress, consistent with qualitative action research principles.\nA Project Reference Group (PRG) was established prior to commencement of the workshops, which consisted of the two researchers and eight health service personnel. These included health service managers, rural nurses and allied health workers currently working within HNE Health. PRG members were nominated by the HNE Health Executive Team on the basis of particular areas of practice, location, responsibility or expertise relevant to the Project. The purpose of the PRG was to provide practical advice and assistance on the conduct and direction of the project; provide feedback after the initial workshop; and to review and provide practical comment on preliminary results and recommendations.\nCommunities were selected to host a focus group or 'workshop' on the basis of size and geographic spread across the Hunter New England region of rural NSW. Each workshop was held in a town with a population between 5,000 and 30,000 people. Communities were located in six of the seven rural management clusters of HNE Health (Figure 1).\nLocation of communities where older health workers project workshops were held within Hunter New England Area Health Service, NSW, Australia. \nLocal health service managers or their representatives were contacted to assist in arranging the workshop and to make sure all health service staff knew about it. Notification of each local workshop, with an invitation to participate directed at staff '50 years and over', was circulated 2-3 weeks in advance, through regular workplace communication channels. This included staff email networks, staff meetings, staff notice boards and word-of-mouth. Health service managers in communities within an hour's drive of the hosting health service were also asked to inform their staff about the workshop.\nA convenience sample of 80 older health workers attended the six workshops between August and November 2008. Four were men and an average of 13 participants attended each workshop. Demographic information on the age and characteristics of individual participants was not collected, but each group was asked to indicate their general work experience and fields of practice. Consistent with an 'over 50' cohort, a number of participants had worked within the health service for 30 years or more.\nParticipants were mainly drawn from the nursing and allied health professional groups, who were the focus of recruitment efforts. Allied health fields of practice included radiography, occupational therapy, diversional therapy, physiotherapy, social work and psychology. Nursing sector participants included registered nurses and enrolled nurses from community health, aged care, general hospital wards, operating theatres, sterilising departments, midwifery, emergency departments and health service management. A small number were from other professional groupings (less than 10%), encompassing 'other professional, para-professional and clinical support staff' (Aboriginal health), 'corporate services' (clerical administration) and 'hotel services' (catering). The majority of participants worked regular office hours, although others had worked shift work in the recent past.\nParticipation was completely voluntary and informed consent was obtained from participants prior to their involvement. 
At the outset of each workshop, the purpose of the research was explained to participants by the lead researcher, who acted as the group facilitator. A 'participant information sheet' was provided outlining further details; and after opportunity for questions, participants were asked to sign a formal consent form prior to commencement of discussions.\nWithin each workshop, participants were asked to identify and describe:\n• work tasks and aspects of work that have become more difficult for rural health workers as they have become older;\n• age-related changes and other exacerbating factors contributing to these difficulties\nResponses focusing on these questions were recorded by participants during small break-out groups of 3-5 people. In particular, participants were asked to list difficulties in a table; and to record ideas on the age-related changes and exacerbating factors that contributed to each of these difficulties in an adjacent column. One or two items from each small group were selected for further discussion by the larger group, enabling other group members to contribute and/or debate ideas. One researcher had the designated task of taking detailed notes during the course of these larger group discussions, which supplemented the data recorded by participants on the small group worksheets.\nThematic content analysis was then used to identify issues arising from workshop discussions. Notes and worksheets were reviewed after each workshop and information sorted into categories that focused on defining difficulties associated with specific tasks, variation within work contexts and relationships between categories. Categories were progressively reviewed and used to help guide discussion in subsequent workshops until no new information on the subject arose. Themes relating to difficulties were tabulated moving from (1) particular tasks and specific workplace settings, to (2) more general aspects of work and broader settings.\nParticipants were invited to reflect on ideas shared in their 'workshop' and act locally on issues whilst the wider research project was still in progress. Preliminary findings from all workshops were sent to participants, who were given opportunity to provide further feedback and ideas prior to final recommendations being made.\nThe research protocol was approved by the Hunter New England Human Research Ethics Committee (Reference 08/05/21/4.06). This was consistent with local health service requirements and research ethics principles of the World Medical Association Declaration Of Helsinki [23]", "[SUBTITLE] Challenges in particular work settings [SUBSECTION] The following tables indicate specific tasks and challenges in a range of work settings that older rural health workers reported as becoming more difficult with age. Work and age-related factors that contributed to these challenges, are also presented.\nTasks and challenges in hospital ward settings that have become more difficult with age\nWork in the hospital setting that has become more difficult for older workers includes reading medication labels. This is exacerbated by poor lighting, small print size and print colours, such as orange-red print on ampoules used for injections. Hearing difficulties are exacerbated by high levels of workplace noise and for some, by the accents of staff from non-English-speaking backgrounds.\nManual handling and manoeuvring of patients was particularly difficult where squatting, bending or maintaining postures for long periods was required. 
It was also considered more physically challenging, because patients were getting older, more debilitated and overweight.\n\"Patients are older and heavier ... and so are we.\"\nShift work was met with increased tiredness with an expressed need for longer recovery times between shifts. Disturbed sleep and anxiety about lack of sleep was commonly reported, with problems of tiredness sometimes exacerbated by a lack of replacement staff.\n\"It takes 2 days to get over a double shift.\"\n\"You can't go off sick - there's no one to replace you.\"\nSpecific clinical procedures, such as physiotherapy exercises, are also more difficult because they often involve long periods of leaning, bending and being on one's feet. Midwifery had become more of a challenge for older midwives, partly attributed due to the increased birthing options and positions for women.\n\"I think (these are) reasons why some older nurses no longer practice midwifery.\"\nChallenges facing older community health workers are described in Table 2:\nTasks and challenges in community settings that have become more difficult with age\nCommunity health workers described greater driver discomfort and fatigue now they are older. Getting in and out of vehicles was more difficult - especially with the purchase of smaller, lower cars. Carrying equipment to and from vehicles and moving around in cluttered homes, has made home visiting more difficult.\n\"The (smaller car brands) are too low - you have to 'roll out of them.'\"\nWork with computers (Table 3) also poses a particular challenge for older health workers. The problems relate to physical discomfort associated with sitting for long periods, keyboard dexterity and seeing small font on screens. Additionally, there are mental challenges associated with the rapid adoption of new programs, the need to remember passwords and interpretation of computerised results. Little previous exposure and a general lack of confidence with computers was a commonly reported experience. It was also felt group training sessions were mostly levelled at a younger, more computer literate generation.\nChallenges associated with computer work that have become more difficult with age\n\"I.T trainers assume a skill level that is not necessarily so\"\n\"Some have even left because of the computers... or some can't work in emergency department (ED) ... because they can't use the computer.\"\nParticipation in meetings, conferences and education programs often requires long distance travel and night driving (Table 4). This has become harder now that participants are older, due to driver fatigue, musculoskeletal problems and difficulties with night vision. These issues are exacerbated by a lack of adequate back support in vehicles, glare of lights and bright road signs. Alternatives such as teleconferencing also pose problems for the hearing impaired.\nChallenges associated with meetings and ongoing education programs that have become more difficult with age\n\" Once upon a time I could get in a car and drive to a meeting at the end of the day. Now I can't do that.\"\nThe following tables indicate specific tasks and challenges in a range of work settings that older rural health workers reported as becoming more difficult with age. Work and age-related factors that contributed to these challenges, are also presented.\nTasks and challenges in hospital ward settings that have become more difficult with age\nWork in the hospital setting that has become more difficult for older workers includes reading medication labels. 
This is exacerbated by poor lighting, small print size and print colours, such as orange-red print on ampoules used for injections. Hearing difficulties are exacerbated by high levels of workplace noise and for some, by the accents of staff from non-English-speaking backgrounds.\nManual handling and manoeuvring of patients was particularly difficult where squatting, bending or maintaining postures for long periods was required. It was also considered more physically challenging, because patients were getting older, more debilitated and overweight.\n\"Patients are older and heavier ... and so are we.\"\nShift work was met with increased tiredness with an expressed need for longer recovery times between shifts. Disturbed sleep and anxiety about lack of sleep was commonly reported, with problems of tiredness sometimes exacerbated by a lack of replacement staff.\n\"It takes 2 days to get over a double shift.\"\n\"You can't go off sick - there's no one to replace you.\"\nSpecific clinical procedures, such as physiotherapy exercises, are also more difficult because they often involve long periods of leaning, bending and being on one's feet. Midwifery had become more of a challenge for older midwives, partly attributed due to the increased birthing options and positions for women.\n\"I think (these are) reasons why some older nurses no longer practice midwifery.\"\nChallenges facing older community health workers are described in Table 2:\nTasks and challenges in community settings that have become more difficult with age\nCommunity health workers described greater driver discomfort and fatigue now they are older. Getting in and out of vehicles was more difficult - especially with the purchase of smaller, lower cars. Carrying equipment to and from vehicles and moving around in cluttered homes, has made home visiting more difficult.\n\"The (smaller car brands) are too low - you have to 'roll out of them.'\"\nWork with computers (Table 3) also poses a particular challenge for older health workers. The problems relate to physical discomfort associated with sitting for long periods, keyboard dexterity and seeing small font on screens. Additionally, there are mental challenges associated with the rapid adoption of new programs, the need to remember passwords and interpretation of computerised results. Little previous exposure and a general lack of confidence with computers was a commonly reported experience. It was also felt group training sessions were mostly levelled at a younger, more computer literate generation.\nChallenges associated with computer work that have become more difficult with age\n\"I.T trainers assume a skill level that is not necessarily so\"\n\"Some have even left because of the computers... or some can't work in emergency department (ED) ... because they can't use the computer.\"\nParticipation in meetings, conferences and education programs often requires long distance travel and night driving (Table 4). This has become harder now that participants are older, due to driver fatigue, musculoskeletal problems and difficulties with night vision. These issues are exacerbated by a lack of adequate back support in vehicles, glare of lights and bright road signs. Alternatives such as teleconferencing also pose problems for the hearing impaired.\nChallenges associated with meetings and ongoing education programs that have become more difficult with age\n\" Once upon a time I could get in a car and drive to a meeting at the end of the day. 
Now I can't do that.\"\n[SUBTITLE] Challenges spanning work tasks and settings [SUBSECTION] Table 5 indicates the more general challenges associated with age and work within rural health services. These difficulties span across work settings and particular tasks.\nChallenges reported that span work settings and particular tasks\nParticipants felt that meeting overall workplace demands across settings had generally become more difficult, as workers have become older. Mental fatigue is occurring from a sense of 'information overload' and an expectation to have 'stuff in your head'. Increased paperwork requirements and having multiple roles within the health service, are contributing to a build up of mental stress, making it difficult to concentrate amidst distractions and leading to a sense of 'always feeling stretched'.\n\"It's just harder when things mount up at end of the day.\"\nPersonal challenges reported by older workers include coping with 'change,' such as increased role complexity, 'more hats to wear' and greater administrative requirements.\n\"A new system every year, you learn it, then it changes again.\"\nThis is reportedly worse when older workers 'organisational wisdom' about change and what has worked in the past, has not been sought or valued. 'New ideas' were sometimes perceived as 'reinventions' that led to more work, more stress and little perceived benefit.\n\"What goes around comes around, a revolving door of ideas, ... you've done it before and nothing ever came of it.\"\nEmotional stress following acute care episodes or death of people, has also become much harder to cope with, with workers often knowing people personally, after years of caring for them. Participants also reported a greater sense of empathy, attachment, loss and an increased awareness of their own ageing and mortality.\n\"The death of older people affects us more as we age ourselves - they're closer to our own age now.\"\n\"Now you know them, they're not just a patient.\"\nBalancing work with family life commitments is another challenge. Many described having an extended carer role within families - for ageing parents, partners, grown up children and grand-children. This could involve travelling long distances for health and family matters. Participants expressed a desire to work less or more regular hours, or have time off to attend to these responsibilities. However, they felt management often did not understand this need, or that there were not enough workers available in small rural health services to enable this.\n\"When finished work, (we are) still carrying the bricks - caring for our own sick elderly.\"\n\"At our age, we have children growing up, with their children, and elderly parents, (we are) the main carer.\"\nParticipants were keen to make a positive contribution to good health care in their community - and believe they were doing so. However, at the same time, many felt they fall short of achieving personal standards related to care - or in fulfilling the expectations of patients (related to care), the organisation (relating to workload) and peers (related to support). This was attributed to high workloads, reduced clinical time, increased administrative tasks and a sense they could not get done in a day what they wanted or were expected to do. 
Participants felt this was worse for older workers, because they were slower than they used to be, yet had a high work ethic and pride in delivering quality care.\n\"It's not the work you do, it's the work you cannot do that is frustrating and emotionally distressing and results in less job satisfaction.\"\nGiven this context, some expressed a lowered tolerance for uncooperative, rude or aggressive patients, or those who 'won't help themselves'. Conversely, some felt that life experience gave them a more 'empathetic' view than when they were younger.\nParticipants in each workshop were offered the opportunity to suggest potential solutions to the problems that were identified. These solutions fell into three categories - recommendations for personal behavioural changes, for work-supportive solutions (by local and area health services) and system-wide solutions such as purchasing of vials with more legible print. These ideas are being incorporated into an ongoing action-research project by the health service.", "The following tables indicate specific tasks and challenges in a range of work settings that older rural health workers reported as becoming more difficult with age. Work and age-related factors that contributed to these challenges, are also presented.\nTasks and challenges in hospital ward settings that have become more difficult with age\nWork in the hospital setting that has become more difficult for older workers includes reading medication labels. This is exacerbated by poor lighting, small print size and print colours, such as orange-red print on ampoules used for injections. Hearing difficulties are exacerbated by high levels of workplace noise and for some, by the accents of staff from non-English-speaking backgrounds.\nManual handling and manoeuvring of patients was particularly difficult where squatting, bending or maintaining postures for long periods was required. It was also considered more physically challenging, because patients were getting older, more debilitated and overweight.\n\"Patients are older and heavier ... and so are we.\"\nShift work was met with increased tiredness with an expressed need for longer recovery times between shifts. Disturbed sleep and anxiety about lack of sleep was commonly reported, with problems of tiredness sometimes exacerbated by a lack of replacement staff.\n\"It takes 2 days to get over a double shift.\"\n\"You can't go off sick - there's no one to replace you.\"\nSpecific clinical procedures, such as physiotherapy exercises, are also more difficult because they often involve long periods of leaning, bending and being on one's feet. 
Midwifery had become more of a challenge for older midwives, partly attributed due to the increased birthing options and positions for women.\n\"I think (these are) reasons why some older nurses no longer practice midwifery.\"\nChallenges facing older community health workers are described in Table 2:\nTasks and challenges in community settings that have become more difficult with age\nCommunity health workers described greater driver discomfort and fatigue now they are older. Getting in and out of vehicles was more difficult - especially with the purchase of smaller, lower cars. Carrying equipment to and from vehicles and moving around in cluttered homes, has made home visiting more difficult.\n\"The (smaller car brands) are too low - you have to 'roll out of them.'\"\nWork with computers (Table 3) also poses a particular challenge for older health workers. The problems relate to physical discomfort associated with sitting for long periods, keyboard dexterity and seeing small font on screens. Additionally, there are mental challenges associated with the rapid adoption of new programs, the need to remember passwords and interpretation of computerised results. Little previous exposure and a general lack of confidence with computers was a commonly reported experience. It was also felt group training sessions were mostly levelled at a younger, more computer literate generation.\nChallenges associated with computer work that have become more difficult with age\n\"I.T trainers assume a skill level that is not necessarily so\"\n\"Some have even left because of the computers... or some can't work in emergency department (ED) ... because they can't use the computer.\"\nParticipation in meetings, conferences and education programs often requires long distance travel and night driving (Table 4). This has become harder now that participants are older, due to driver fatigue, musculoskeletal problems and difficulties with night vision. These issues are exacerbated by a lack of adequate back support in vehicles, glare of lights and bright road signs. Alternatives such as teleconferencing also pose problems for the hearing impaired.\nChallenges associated with meetings and ongoing education programs that have become more difficult with age\n\" Once upon a time I could get in a car and drive to a meeting at the end of the day. Now I can't do that.\"", "This study attempted to capture a range of perspectives on this issue from older health workers in rural health services across the Hunter New England region, although it could not be claimed that the views of participants were 'representative' of all older health workers in the area. Recruitment of individuals was dependent upon participants receiving information about the workshops; the appeal of attending a 'workshop' on this issue; and available time on the part of managers to encourage local participation and ensure information was received by all health workers. Whilst provisions were made for work release to attend workshops, fewer 'current' shift workers attended workshops than community health staff and others who worked more regular hours. Also, whilst groups were facilitated, it is possible some individuals did not feel free to express personal opinions in a group forum; and that some individual opinions were not captured by group data collection processes.\nDespite these study limitations, recurrent themes arose in the workshops relating to broad areas of difficulty, with no new themes on the issue emerging by the last workshop. Ageing changes described by older health workers that contributed to difficulties at work, are generally consistent with ageing impacts reported elsewhere [5-8]. In particular, musculoskeletal problems, deteriorating vision, fatigue and reduced fine motor dexterity were common concerns.\nParticipants were concerned that these difficulties were affecting their sense of personal health and well-being. A desire for reduced work hours and to spend more time with family, suggested an awareness of the impacts of 'too much work', supporting findings elsewhere that the benefits of a 'healthy worker effect' can be compromised by 'work-family' conflict and excessive work hours [9,11,24] Physical demands, rotating shifts, inflexible scheduling and full time positions, are also known to be associated with chronic fatigue and personal health concerns of older nurses elsewhere [6,9,25]. However, alternatives such as lateral movement of older workers to less physically demanding positions cited elsewhere [10,26,27], are a less likely option for many small rural health services, already short of staff.\nDifficulties working with computers were commonly reported, supporting findings elsewhere [28,29], including the sense that computerised systems did not necessarily reduce documentation time [30]. Participants were keen to participate in computer education courses, given attention to more appropriate, adult learning styles - a finding also supported by other studies [5,12]. However, barriers to education access for older workers not reported elsewhere, include difficulties with night and long distance driving and difficulties hearing teleconferencing facilities.\nWhilst a sense of community and appreciation of rural lifestyle are amongst the most important reasons given by health workers, for continuing to work in rural areas [19,31,32], there is also a greater likelihood that patients in rural areas are known by health workers, with emotional implications when well-known patients suffer or die. 
The grief and emotional impact described by participants of 'losing someone you know' may be a form of 'compassion fatigue', also described as the 'cost of caring for others in emotional pain' [33,34]. Given participants' life stage and the rural context, 'compassion fatigue' may be part of the fatigue experience of older rural health workers.\nReviewing life-work balance has been reported as a response to working conditions, existence of chronic pain, ageing and personal issues [10]. Prominent among 'personal' issues in this study and elsewhere [6,35], are the demands and complex responsibilities associated with an extended primary carer role outside of work - contributing to a desire to work less hours.\nSimilar to other studies [10,36], participants felt that the perspectives of older workers were often not valued by the organisation, even though worker participants were generally supportive of the organisation and happy to have been consulted in this study. A sense of powerlessness about not having one's opinion heard, has been associated with burnout and ill-health in health workers [10]. More generally, work stress amongst rural nurses has been attributed to a climate of change, restructuring and financial constraints that lead to increased expectations or role changes for which nurses are not prepared or consulted [37]. This experience may be worse for older workers, also dealing with ageing and generational factors that make adapting to change harder.\nAs a generation, older health workers have been described as committed, hard-working, intuitive and skilled decision makers, possessing a deep-rooted understanding of patient needs [6,26,38]. However, such values may increase the emotional exhaustion and guilt felt when extra stressors related to role change and patient load, takes time away from providing the quality care that helps define self-worth [6,39,40].\nIntolerance for perceived rudeness, abuse and unwillingness in some patients to 'help themselves', may arise because such behaviours conflict with generational values; and because older health workers feel they are already providing quality care under stressful circumstances. For example, obese patients were regarded as more physically demanding, which, if combined with a perception that a patient was not trying to help themselves, could contribute to or explain some of the negative attitudes toward such patients, that have been reported elsewhere [41]. However, in contrast, other participants felt they were more empathetic or tolerant of 'demanding' patients than younger workers - a finding also reported elsewhere [41,42].\nGenerally speaking, older health workers displayed a great sense of humour during the workshops and believed their maturity and wisdom gave them a valuable perspective on work and life. Like older workers elsewhere, participants 'continue to care', despite difficulties, intergenerational conflicts and less respect from patients [10,39].\nCollegial relationships within the health care team have been found to reduce job stress and foster decisions to stay in the workplace [6], whilst poor working conditions, lack of supportive workplace relationships, combined with ageing concerns, have influenced older nurses to make changes for the sake of their own health [10]. 
Finding ways to address concerns and avoid excessive demands being placed on older health workers, should contribute to a happier, healthier and more sustainable older health workforce.", "This study describes some of the practical difficulties and issues confronting older workers in the Hunter New England health region of NSW, which are likely to be shared in other regions. With a view to developing practical solutions, the following recommendations have been made to and accepted by the Health Service Executive Team:\n- Older health workers should be involved in development of \"a resource booklet\" on ageing and other factors that impact upon work with practical suggestions for addressing these at personal and local level\n- The Health Service should establish a health service \"Task Force\", comprising managers, older rural health workers and an occupational therapist, to examine the study findings and implement area-wide policy and practice solutions, as well as recommendations for state-wide policy development.\nOlder health workers in rural areas are a committed and productive section of the workforce who are meeting health service delivery needs at considerable personal cost. This study supports the view that there comes a point where physical and emotional 'costs' exceed the benefits of work, particularly as other realms of life take on more importance - but that this is likely to occur more rapidly, where stressors and difficulties that are amenable to solution or modification are not addressed. Actions that health services can take to consult and value the opinions of older workers in addressing these difficulties, will benefit not only older workers, but will be in the interests of the health service and better health care delivery.", "The authors declare that they have no competing interests.", "Both authors have (1) made substantial contributions to conception and design, acquisition of data, analysis and interpretation of data; 2) were involved in drafting the manuscript; and 3) have given final approval of the final manuscript.", "The Older Health Workers Project was an internally funded, co-joint activity of the Australian Centre for Agricultural Health and Safety and the Hunter New England Area Health Service. No special-purpose or external funds were received or allocated from any other funding bodies. Research time and labour costs were met under existing funding arrangements by the Australian Centre for Agricultural Health and Safety.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/11/42/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[]
Spontaneous hyaline cartilage regeneration can be induced in an osteochondral defect created in the femoral condyle using a novel double-network hydrogel.
21338528
Functional repair of articular osteochondral defects remains a major challenge not only in the field of knee surgery but also in tissue regeneration medicine. The purpose is to clarify whether the spontaneous hyaline cartilage regeneration can be induced in a large osteochondral defect created in the femoral condyle by means of implanting a novel double-network (DN) gel at the bottom of the defect.
BACKGROUND
Twenty-five mature rabbits were used in this study. In the bilateral knees of each animal, we created an osteochondral defect having a diameter of 2.4-mm in the medial condyle. Then, in 21 rabbits, we implanted a DN gel plug into a right knee defect so that a vacant space of 1.5-mm depth (in Group I), 2.5-mm depth (in Group II), or 3.5-mm depth (in Group III) was left. In the left knee, we did not apply any treatment to the defect to obtain the control data. All the rabbits were sacrificed at 4 weeks, and the gross and histological evaluations were performed. The remaining 4 rabbits underwent the same treatment as used in Group II, and real-time PCR analysis was performed at 4 weeks.
METHODS
The defect in Group II was filled with a sufficient volume of hyaline cartilage tissue rich in proteoglycan and type-2 collagen. The Wayne gross appearance and histology scores showed that Group II was significantly greater than Groups I, III, and Control (p < 0.012). The relative expression levels of type-2 collagen, aggrecan, and SOX9 mRNAs were significantly greater in Group II than in the control group (p < 0.023).
RESULTS
This study demonstrated that spontaneous hyaline cartilage regeneration can be induced in vivo in an osteochondral defect created in the femoral condyle by means of implanting the DN gel plug at the bottom of the defect so that an approximately 2-mm deep vacant space was intentionally left in the defect. This fact has prompted us to propose an innovative strategy without cell culture to repair osteochondral lesions in the femoral condyle.
CONCLUSIONS
[ "Aggrecans", "Animals", "Chondrogenesis", "Collagen Type II", "Female", "Femur", "Hyaline Cartilage", "Hydrogels", "Models, Animal", "Proteoglycans", "Rabbits", "Regeneration", "SOX9 Transcription Factor" ]
3050780
null
null
Methods
[SUBTITLE] 1) Materials [SUBSECTION] The PAMPS/PDMAAm DN hydrogel is a kind of interpenetrating network gel, but with an asymmetric structure: The first PAMPS network, which is rigid and brittle, is composed of densely cross-linked polyelectrolyte, and the second PDMAAm network, which is soft and ductile, consists of loosely or even non-crosslinked neutral polymers. The PAMPS/PDMAAm DN gel is strong enough to create an implantable plug, because the compressive fracture strength and the elastic modulus of the DN gel are 3.1 MPa and 0.2 MPa, respectively [21,22]. The material properties do not deteriorate during implantation into the subcutaneous tissue for 6 weeks [21]. The PAMPS network in this DN gel is negatively charged and has sulphonic acid bases, being similar to proteoglycans in normal cartilage. Our previous implantation test has shown that this DN gel is so bioactive that it induces cell infiltration in the muscle tissue at 1 week without any toxic effects for 6 weeks [23]. In addition, the PAMPS/PDMAAm DN gel surface can enhance differentiation of chondrogenic ATDC5 cells into chondrocytes in the in vitro condition [20,24]. The DN gel was synthesized by coauthors (T.K. and J.P.G) in the Department of Biological Sciences, Hokkaido University Graduate School of Science, using the previously reported two-step sequential polymerization method [19]. After polymerization, the DN gel was immersed in pure water for 1 week and the water was changed 2 times every day to remove any un-reacted materials. From the DN gel, we created cylindrical plugs having a 2.7-mm diameter and an 8-mm length. [SUBTITLE] 2) Study design [SUBSECTION] A total of 25 mature female New Zealand White rabbits, weighing 3.6 ± 0.4 kg, were used in this study. Animal experiments were carried out in the Institute of Animal Experimentation, Hokkaido University School of Medicine under the Rules and Regulation of the Animal Care and Use Committee, Hokkaido University School of Medicine. This experimental report was composed of 2 studies (Figure 1). In the first study, we divided 21 out of the 25 rabbits into 3 groups (Groups I, II, and III) of 7 animals each in order to clarify the effect of plug position (depth from the articular surface) in the defect on quality of the regenerated cartilage. An operation for each animal was performed under intravenous anesthesia (pentobarbital, 25 mg/kg) and sterile conditions. In the bilateral knees of each animal, we created a cylindrical osteochondral defect having a diameter of 2.4-mm at the center of the medial condyle of the FT joint, using a drill (Figure 2A). This diameter value was chosen as the maximal defect diameter that we could create on the cartilage surface without intra- or post-operative fracture of the medial condyle, because the width of the medial condyle around the defect was approximately 4.5 mm. Then, we implanted the PAMPS/PDMAAm DN gel plug into a defect in the right knee so that a vacant space of 1.5-mm depth (in Group I), 2.5-mm depth (in Group II), or 3.5-mm depth (in Group III) was left (Figure 2B). The actual depth was precisely measured in the histological sections after sacrifice. According to the measured results, "the mean ± the standard deviation" of the real depth were 1.49 ± 0.28 mm in Group I, 2.44 ± 0.27 mm in Group II, and 3.46 ± 0.31 mm in Group III. The cartilage thickness was approximately 0.5 mm around the defect. In the left knee, we created the defect having the same depth as the right knee, and we did not apply any treatment to obtain the non-treated control (Control group). The incised joint capsule and the skin wound were closed in layers with 3-0 nylon sutures, and an antiseptic spray dressing was applied. Postoperatively, each animal was allowed unrestricted activity in a cage (310 × 550 × 320 mm) without any joint immobilization. In each group, all rabbits were sacrificed by pentobarbital injection at the 4-week period, and we performed the gross and histological evaluations using the grading scale reported by Wayne et al [14] as well as immunohistochemical observations for the bilateral knees. Flowchart to explain the study design. How to induce cartilage regeneration. A: We created a cylindrical osteochondral defect having a diameter of 2.4-mm in the medial condyle of the FT joint. Then, we implanted a double network (DN) gel plug into the bottom of the defect. B: A schematic cross-section of the osteochondral defect into which the plug was implanted. Note that a defect having a few millimeter depth from the cartilage surface remained after surgery. The second study was conducted using 4 rabbits, based on the results of the first study. The aim of the second study using real-time PCR analysis was to confirm gene expression of type-2 collagen, aggrecan, and SOX9 in the tissue regenerated in the defect of Group II in comparison with the non-treated control knees, because the degree of spontaneous cartilage regeneration was the greatest in Group II among the tested groups in the first study. The same surgical treatments as performed in Group II of the first study were carried out in the bilateral knees, respectively. Immediately after sacrifice at 4 weeks, total RNA was extracted from the tissues regenerated in the defect created in the bilateral knees. [SUBTITLE] 3) Statistical Analysis [SUBSECTION] The scores for each specimen were assessed for statistical differences using one-way analysis of variance with Fisher's protected least significant difference test for post hoc multiple comparisons. A commercially available software program (StatView, SAS Institute, NC) was used for statistical calculation. The significance level was set at p = 0.05.
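As an illustration only: the analysis above (one-way ANOVA followed by Fisher's protected least significant difference comparisons at p = 0.05) was run in StatView, but the same style of analysis can be sketched in Python. The group labels and score values below are hypothetical placeholders, not data from this study, and the pairwise step is a simplified LSD-style comparison (a full Fisher's LSD would reuse the pooled within-group variance from the ANOVA).

    from itertools import combinations
    from scipy import stats

    # Hypothetical gross-appearance scores (max 16) for the four groups; placeholders only.
    scores = {
        "Group I": [8, 9, 7, 10, 9, 8, 9],
        "Group II": [14, 15, 13, 16, 15, 14, 15],
        "Group III": [9, 8, 10, 9, 8, 9, 10],
        "Control": [7, 8, 6, 7, 8, 7, 6],
    }

    # Omnibus one-way ANOVA across all four groups.
    f_stat, p_anova = stats.f_oneway(*scores.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

    # "Protected" step: examine pairwise differences only if the omnibus test is significant.
    if p_anova < 0.05:
        for (name_a, a), (name_b, b) in combinations(scores.items(), 2):
            _, p_pair = stats.ttest_ind(a, b)
            print(f"{name_a} vs {name_b}: p = {p_pair:.4f}")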
[SUBTITLE] 4) Examination methods [SUBSECTION] [SUBTITLE] Gross observation for in vivo regenerated tissues [SUBSECTION] Immediately after sacrifice, the tissue regenerated in the osteochondral defect was quantitatively evaluated with the grading scale reported by Wayne et al [14]. Gross appearance of each defect on the femoral condyle was graded for coverage (4 points), tissue color (4 points), defect margins (4 points), and surface (4 points). Thus, the maximum total score was 16 points. [SUBTITLE] Histological and immunohistochemical examinations [SUBSECTION] A distal portion of the resected femur was fixed in a 10% neutral buffered formalin solution for 3 days, decalcified with 50 mM EDTA for a period of 3-4 weeks, and then cast in a paraffin block. The femur was sectioned perpendicular to the longitudinal axis, and stained with hematoxylin-eosin and Safranin-O. For immunohistochemical evaluations, a monoclonal antibody (anti-hCL(II), purified IgG, Fuji Chemical Industries Ltd, Toyama, Japan) was used as the primary antibody. Immunostaining was carried out according to the manufacturer's instructions using the Envision immunostaining system (DAKO Japan, Kyoto, Japan). Finally, the sections were counterstained with hematoxylin. Histology was evaluated with the scoring system reported by Wayne et al [14], which was composed of matrix points (4 points), cell distribution points (3 points), smoothness points of the surface (4 points), safranin O stain points (4 points), and safranin O-stained area points (4 points). Thus, the maximum total score was 19 points.
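For orientation, both Wayne et al grading scales reduce to a simple sum of sub-scores with fixed maxima (16 points for gross appearance, 19 points for histology). The following is a minimal bookkeeping sketch of that scheme; the sub-score names mirror the items listed above, and the example values are hypothetical.

    # Bookkeeping sketch for the two grading scales; example values are hypothetical placeholders.
    GROSS_MAX = {"coverage": 4, "tissue_color": 4, "defect_margins": 4, "surface": 4}        # total 16
    HISTOLOGY_MAX = {"matrix": 4, "cell_distribution": 3, "surface_smoothness": 4,
                     "safranin_o_stain": 4, "safranin_o_area": 4}                            # total 19

    def total_score(subscores, maxima):
        """Sum the sub-scores after checking that each stays within its allowed range."""
        for item, value in subscores.items():
            if not 0 <= value <= maxima[item]:
                raise ValueError(f"{item} score {value} is outside 0..{maxima[item]}")
        return sum(subscores.values())

    example_gross = {"coverage": 4, "tissue_color": 3, "defect_margins": 4, "surface": 3}
    print(total_score(example_gross, GROSS_MAX))  # 14 of a possible 16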
[SUBTITLE] Real time polymerase chain reaction (PCR) analysis [SUBSECTION] Total RNA was extracted from the tissues regenerated in the defect, using the RNeasy mini kit (Qiagen Inc., Valencia, CA). RNA quality from each sample was assured by the A260/280 absorbance ratio. The RNA (100 ng) was reverse-transcribed into single strand cDNA using the PrimeScript® RT reagent Kit (TakaraBio, Ohtsu, Japan). The RT reaction was carried out for 15 minutes at 37 degrees Celsius and then for 5 seconds at 85 degrees Celsius. All oligonucleotide primer sets were designed based upon the published mRNA sequences. The expected amplicon lengths ranged from 93 to 189 bp. The sequences of primers used in real time PCR analyses for rabbit regenerative tissues were as follows: type-2 collagen forward GACCATCAATGGCGGCTTC; reverse CACGCTGTTCTTGCAGTGGTAG. Aggrecan forward GCTACGACGCCATCTGCTAC; reverse GTCTGGACCGTGATGTCCTC. SOX9 forward AACGCCGAGCTCAGCAAGA; reverse TGGTACTTGTAGTCCGGGTGGTC. GAPDH forward CCCTCAATGACCACTTTGTGAA; reverse AGGCCATGTGGACCATGAG. The real time PCR was performed in a Thermal Cycler Dice® TP800 (TakaraBio, Ohtsu, Japan) by using SYBR® Premix Ex TaqTM (TakaraBio, Ohtsu, Japan). cDNA template (5 ng) was used for real time PCR in a final volume of 25 microliters. cDNA was amplified according to the following conditions: 95 degrees Celsius for 5 sec and 60 degrees Celsius for 30 sec for 40 amplification cycles. Fluorescence changes were monitored with SYBR Green after every cycle. A dissociation curve analysis was performed (0.5 degrees Celsius/sec increase from 60 to 95 degrees Celsius with continuous fluorescence readings) at the end of the cycles to ensure that single PCR products were obtained. The amplicon size and reaction specificity were confirmed by 2.5% agarose gel electrophoresis. The results were evaluated using the Thermal Cycler Dice® Real Time System software program (TakaraBio, Ohtsu, Japan). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) primers were used to normalize samples.
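The PCR read-out above is a GAPDH-normalized relative expression level. The paper does not spell out the quantification model, but a common convention for SYBR Green data of this kind is the 2^(-delta-delta-Ct) calculation sketched below; the Ct values are hypothetical placeholders, not measurements from this study.

    def relative_expression(ct_target, ct_gapdh, ct_target_control, ct_gapdh_control):
        """Fold change of a target gene in treated tissue versus control, normalized to GAPDH."""
        delta_treated = ct_target - ct_gapdh                  # delta-Ct in the DN-gel-treated defect
        delta_control = ct_target_control - ct_gapdh_control  # delta-Ct in the untreated control defect
        return 2.0 ** -(delta_treated - delta_control)

    # Hypothetical mean Ct values for one target gene; a lower Ct in the treated tissue
    # corresponds to higher expression relative to the untreated control defect.
    fold = relative_expression(ct_target=24.1, ct_gapdh=18.3,
                               ct_target_control=27.6, ct_gapdh_control=18.5)
    print(f"Relative expression (treated / control): {fold:.1f}-fold")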
null
null
null
null
[ "Background", "1) Materials", "2) Study design", "3) Statistical Analysis", "4) Examination methods", "Gross observation for in vivo regenerated tissues", "Histological and immunohistochemical examinations", "Real time polymerase chain reaction (PCR) analysis", "Results", "Gross observation of the joint surface repair", "Histological and immunohistological evaluations", "Quantitative evaluations of gross appearance and histology", "Real time PCR analysis", "Discussion", "Conclusions", "Financial competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Articular cartilage defects are a significant and increasing health care concern. It has been a commonly belief that hyaline cartilage tissue cannot spontaneously regenerate in vivo [1,2]. Therefore, the most progressive strategy to repair the articular cartilage defect is to fill an osteochondral defect with a tissue-engineered cartilage-like tissue or a cell-seeded scaffold material [3-6]. However, the cell culture procedures with the mammalian-derived materials/molecules include a possible risk of zoonosis transmission. In addition, it has been pointed out that this strategy has various realistic problems, including two-stage surgeries, a long period until weight bearing, an enormous amount of cost to establish a tissue-engineering industry system, possibly high medical fee for patients [7-10]. Under the similar strategy, some investigators have recently tried to fill up an osteochondral defect with acellular polymer scaffolds to induce cartilage cell regeneration inside it [11-14]. However, the results of these experimental trials are not favorable and are not indicated for clinical use. Thus, functional repair of articular osteochondral defects remains a major challenge not only in the field of knee surgery but also in tissue regeneration medicine.\nWe paid attention to the fact that sufficient fibrocartilage tissue can be regenerated in an osteochondral defect by creating many thin holes that penetrate the subchondral bone at the base of the defect in order to create bleeding from the bone marrow and subsequent clot formation (\"Microfracture\" technique). These induced mesenchymal stem cells have a high potential for cartilage regeneration [15]. In addition, recent studies have showed that, in autologous chondrocyte transplantation, quality of the tissue located just beneath the transplanted cells significantly affects quality of the regenerated cartilage [16,17]. In an ex vivo study, Engler et al [18] reported that elasticity of the material on which cultured cells attach directs stem cell differentiation: e.g., elastic materials induce differentiation to the cartilage tissue, and stiff materials induce differentiation to the bone tissue. Therefore, we hypothesize that a bioactive elastic material implanted in a chondral defect can stimulate and support hyaline cartilage regeneration.\nWe focused our research on an originally developed PAMPS/PDMAAm double-network (DN) hydrogel composed of poly-(2-Acrylamido-2-methylpropanesulfonic acid) (PAMPS) and poly-(N,N'-Dimetyl acrylamide) (PDMAAm) [19]. In our previous study validating the implant and its use in a large osteochondral defect created in the patellofemoral (PF) joint of the rabbit knee [20], we found that spontaneous hyaline cartilage regeneration occurred in vivo in the defect within 4 weeks after surgery when a PAMPS/PDMAAm DN gel plug was implanted at the bottom of the defect so that a 1.5 to 3.5-mm deep vacant space was intentionally left in the defect. In the clinical field, however, the joint that the most frequently requires a cartilage regeneration therapy is not the PF joint but the femorotibial (FT) joint. The PF and FT joints are anatomically, morphologically, and biomechanically different. Therefore, it is needed to clarify whether the spontaneous hyaline cartilage regeneration occurs in the FT joint. 
The purpose of this study is to clarify whether the spontaneous hyaline cartilage regeneration can be induced in vivo in a large osteochondral defect created in the medial femoral condyle of the FT joint by means of implanting a PAMPS/PDMAAm DN gel plug at the bottom of the defect.", "The PAMPS/PDMAAm DN hydrogel is a kind of interpenetrating network gel, but with an asymmetric structure: The first PAMPS network, which is rigid and brittle, is composed of densely cross-linked polyelectrolyte, and the second PDMAAm network, which is soft and ductile, consists of loosely or even non-crosslinked neutral polymers. The PAMPS/PDMAAm DN gel is strong enough to create an implantable plug, because the compressive fracture strength and the elastic modulus of the DN gel are 3.1 MPa and 0.2 MPa, respectively [21,22]. The material properties do not deteriorate in implantation into the subcutaneous tissue for 6 weeks [21]. The PAMPS network in this DN gel is negatively charged and has sulphonic acid bases, being similar to proteoglycans in normal cartilage. Our previous implantation test has shown that this DN gel is so bioactive that it induces cell infiltration in the muscle tissue at 1 week without any toxic effects for 6 weeks [23]. In addition, the PAMPS/PDMAAm DN gel surface can enhance differentiation of chondrogenic ATDC5 cells into chondrocytes in the in vitro condition [20,24].\nThe DN gel was synthesized by coauthors (T.K. and J.P.G) in Department of Biological Sciences, Hokkaido University Graduate School of Science, using the previously reported two-step sequential polymerization method [19]. After polymerization, the DN gel was immersed in pure water for 1 week and the water was changed 2 times every day to remove any un-reacted materials. From the DN gel, we created cylindrical plugs having a 2.7-mm diameter and an 8-mm length.", "A total of 25 mature female New Zealand White rabbits, weighing 3.6 ± 0.4 kg, were used in this study. Animal experiments were carried out in the Institute of Animal Experimentation, Hokkaido University School of Medicine under the Rules and Regulation of the Animal Care and Use Committee, Hokkaido University School of Medicine.\nThis experimental report was composed of 2 studies (Figure 1). In the first study, we divided 21 out of the 25 rabbits into 3 groups (Groups I, II, and III) of 7 animals each in order to clarify the effect of plug position (depth from the articular surface) in the defect on quality of the regenerated cartilage. An operation for each animal was performed under intravenous anesthesia (pentobarbital, 25 mg/kg) and sterile conditions. In the bilateral knees of each animal, we created a cylindrical osteochondral defect having a diameter of 2.4-mm at the center of the medial condyle of the FT joint, using a drill (Figure 2A). This diameter value was chosen as the maximal defect diameter that we could create on the cartilage surface without intra- or post-operative fracture of the medial condyle, because the width of the medial condyle around the defect was approximately 4.5 mm. Then, we implanted the PAMPS/PDMAAm DN gel plug into a defect in the right knee so that a vacant space of 1.5-mm depth (in Group I), 2.5-mm depth (in Group II), or 3.5-mm depth (in Group III) was left (Figure 2B). The actual depth was precisely measured in the histological sections after sacrifice. 
According to the measured results, \"the mean ± the standard deviation\" of the real depth were 1.49 ± 0.28 mm in Group I, 2.44 ± 0.27 mm in Group II, and 3.46 ± 0.31 mm in Group III. The cartilage thickness was approximately 0.5 mm around the defect. In the left knee, we created the defect having the same depth as the right knee, and we did not apply any treatment to obtain the non-treated control (Control group). The incised joint capsule and the skin wound were closed in layers with 3-0 nylon sutures, and an antiseptic spray dressing was applied. Postoperatively, each animal was allowed unrestricted activity in a cage (310 × 550 × 320 mm) without any joint immobilization. In each group, all rabbits were sacrificed by pentobarbital injection at the 4-week period, and we performed the gross and histological evaluations using the grading scale reported by Wayne et al [14] as well as immunohistochemical observations for the bilateral knees.\nFlowchart to explain the study design.\nHow to induce cartilage regeneration. A: We created a cylindrical osteochondral defect having a diameter of 2.4-mm in the medial condyle of the FT joint. Then, we implanted a double network (DN) gel plug into a bottom of the defect. B: A schematic cross-section of the osteochondral defect into which the plug was implanted. Note that a defect having a few millimeter depth from the cartilage surface remained after surgery.\nThe second study was conducted using 4 rabbits, based on the results of the first study. The aim of the second study using real-time PCR analysis was to confirm gene expression of type-2 collagen, aggrecan, and SOX9 in the tissue regenerated in the defect of Group II in comparison with the non-treated control knees, because the degree of spontaneous cartilage regeneration was the greatest in Group II among the tested groups in the first study. The same surgical treatments as performed in Group II of the first study were carried out in the bilateral knees, respectively. Immediately after sacrifice at 4 weeks, total RNA was extracted from the tissues regenerated in the defect created in the bilateral knees.", "The scores for each specimen were assessed for statistical differences using one-way analysis of variance with the Fisher's protected least significance difference for post hoc multiple comparisons. A commercially available software program (StatView, SAS Institute, NC) was used for statistical calculation. The significance level was set at p = 0.05.", "[SUBTITLE] Gross observation for in vivo regenerated tissues [SUBSECTION] Immediately after sacrifice, the tissue regenerated in the osteochondral defect was quantitatively evaluated with the grading scale reported by Wayne et al [14]. Gross appearance of each defect on the femoral condyle was graded for coverage (4 points), tissue color (4 points), defect margins (4 points), and surface (4 points). Thus, the maximum total score was 16 points.\nImmediately after sacrifice, the tissue regenerated in the osteochondral defect was quantitatively evaluated with the grading scale reported by Wayne et al [14]. Gross appearance of each defect on the femoral condyle was graded for coverage (4 points), tissue color (4 points), defect margins (4 points), and surface (4 points). 
4) Examination methods

Gross observation for in vivo regenerated tissues

Immediately after sacrifice, the tissue regenerated in the osteochondral defect was quantitatively evaluated with the grading scale reported by Wayne et al [14]. The gross appearance of each defect on the femoral condyle was graded for coverage (4 points), tissue color (4 points), defect margins (4 points), and surface (4 points). Thus, the maximum total score was 16 points.

Histological and immunohistochemical examinations

The distal portion of the resected femur was fixed in a 10% neutral buffered formalin solution for 3 days, decalcified with 50 mM EDTA for 3-4 weeks, and then embedded in a paraffin block. The femur was sectioned perpendicular to the longitudinal axis and stained with hematoxylin-eosin and Safranin-O. For immunohistochemical evaluations, a monoclonal antibody (anti-hCL(II), purified IgG, Fuji Chemical Industries Ltd, Toyama, Japan) was used as the primary antibody. Immunostaining was carried out according to the manufacturer's instructions using the Envision immunostaining system (DAKO Japan, Kyoto, Japan). Finally, the sections were counterstained with hematoxylin. Histology was evaluated with the scoring system reported by Wayne et al [14], which comprises matrix (4 points), cell distribution (3 points), smoothness of the surface (4 points), safranin O staining intensity (4 points), and safranin O-stained area (4 points). Thus, the maximum total score was 19 points.
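Both Wayne et al. [14] scales above are simple sums of sub-scores. The sketch below shows how such totals might be tallied and range-checked; the field names and the example sub-score values are our own hypothetical illustrations, not part of the original scoring sheets or data.

```python
# Sketch: tallying the Wayne et al. sub-scores into totals
# (gross appearance: max 16 points; histology: max 19 points).
GROSS_MAX = {"coverage": 4, "tissue_color": 4, "defect_margins": 4, "surface": 4}
HISTOLOGY_MAX = {"matrix": 4, "cell_distribution": 3, "surface_smoothness": 4,
                 "safranin_o_stain": 4, "safranin_o_area": 4}

def total_score(subscores: dict, maxima: dict) -> int:
    """Sum the sub-scores after checking each one against its maximum."""
    for item, value in subscores.items():
        if not 0 <= value <= maxima[item]:
            raise ValueError(f"{item} score {value} outside 0..{maxima[item]}")
    return sum(subscores.values())

# Hypothetical sub-scores for one specimen
gross = {"coverage": 4, "tissue_color": 3, "defect_margins": 4, "surface": 3}
histo = {"matrix": 3, "cell_distribution": 3, "surface_smoothness": 3,
         "safranin_o_stain": 4, "safranin_o_area": 3}

print(f"Gross appearance: {total_score(gross, GROSS_MAX)} / {sum(GROSS_MAX.values())}")
print(f"Histology:        {total_score(histo, HISTOLOGY_MAX)} / {sum(HISTOLOGY_MAX.values())}")
```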
Real time polymerase chain reaction (PCR) analysis

Total RNA was extracted from the tissues regenerated in the defect using the RNeasy mini kit (Qiagen Inc., Valencia, CA). RNA quality of each sample was verified by the A260/280 absorbance ratio. The RNA (100 ng) was reverse-transcribed into single-strand cDNA using the PrimeScript® RT reagent Kit (TakaraBio, Ohtsu, Japan). The RT reaction was carried out for 15 minutes at 37 degrees Celsius and then for 5 seconds at 85 degrees Celsius. All oligonucleotide primer sets were designed based upon the published mRNA sequences, with expected amplicon lengths ranging from 93 to 189 bp. The primer sequences used in the real-time PCR analyses of rabbit regenerative tissues were as follows: type-2 collagen forward GACCATCAATGGCGGCTTC, reverse CACGCTGTTCTTGCAGTGGTAG; aggrecan forward GCTACGACGCCATCTGCTAC, reverse GTCTGGACCGTGATGTCCTC; SOX9 forward AACGCCGAGCTCAGCAAGA, reverse TGGTACTTGTAGTCCGGGTGGTC; GAPDH forward CCCTCAATGACCACTTTGTGAA, reverse AGGCCATGTGGACCATGAG. Real-time PCR was performed in a Thermal Cycler Dice® TP800 (TakaraBio, Ohtsu, Japan) using SYBR® Premix Ex TaqTM (TakaraBio, Ohtsu, Japan). cDNA template (5 ng) was used for real-time PCR in a final volume of 25 microliters. The cDNA was amplified under the following conditions: 95 degrees Celsius for 5 seconds and 60 degrees Celsius for 30 seconds, for 40 amplification cycles. Fluorescence changes were monitored with SYBR Green after every cycle. A dissociation curve analysis was performed (0.5 degrees Celsius/second increase from 60 to 95 degrees Celsius with continuous fluorescence readings) at the end of the cycles to ensure that single PCR products were obtained. The amplicon size and reaction specificity were confirmed by 2.5% agarose gel electrophoresis. The results were evaluated using the Thermal Cycler Dice® Real Time System software program (TakaraBio, Ohtsu, Japan). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) primers were used to normalize samples.
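As an illustration of how GAPDH-normalized expression might be computed, the sketch below uses the common 2^(-ΔΔCt) convention. The original report states only that GAPDH primers were used for normalization, so the ΔΔCt formulation, the gene labels, and all Ct values here are assumptions made for illustration rather than the authors' actual analysis or data.

```python
# Sketch: GAPDH-normalized relative quantification via the 2^(-ΔΔCt) convention.
# The paper specifies only GAPDH normalization; this formulation and every Ct
# value below are illustrative assumptions, not the authors' analysis or data.
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    """Fold change of a target gene in a sample versus a reference (control)."""
    d_ct_sample = ct_target - ct_gapdh        # normalize sample Ct to GAPDH
    d_ct_ref = ct_target_ref - ct_gapdh_ref   # normalize control Ct to GAPDH
    dd_ct = d_ct_sample - d_ct_ref
    return 2.0 ** (-dd_ct)

# Hypothetical mean Ct values: one Group II defect vs its contralateral control
targets = {"type-2 collagen": (22.1, 27.9), "aggrecan": (23.4, 28.6), "SOX9": (24.8, 27.5)}
ct_gapdh_sample, ct_gapdh_control = 17.8, 18.0

for gene, (ct_sample, ct_control) in targets.items():
    fold = relative_expression(ct_sample, ct_gapdh_sample, ct_control, ct_gapdh_control)
    print(f"{gene}: {fold:.1f}-fold vs control")
```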
Results

Gross observation of the joint surface repair

On gross observation, the knee joints did not show any inflammation or other pathological changes.
The defect was filled with a white opaque tissue in Group II, whereas the defects in Groups I and III were insufficiently filled with white or reddish, opaque, patchy tissues (Figure 3A-C). The untreated defects in the Control group showed white or reddish, opaque, patchy, stiff tissues, independent of the depth (Figure 3D).

Figure 3. Gross observations of the joint surface (A-D), histological findings with Safranin-O staining (E-H), and immunohistological findings with type-2 collagen staining (I-L). Black scale bars indicate 500 micrometers. In Group II, the defect was filled with hyaline cartilage tissue rich in proteoglycan and type-2 collagen. In Groups I and III, cartilage tissue was observed in the defect, although it was not homogeneous in Safranin-O or type-2 collagen staining.

Histological and immunohistological evaluations

Low-magnification histology (Figure 3E-L) showed that the untreated (control) defect was filled with fibrous and bone tissues, with a small amount of proteoglycan-rich tissue occasionally and irregularly seen within them (Figure 3H). Type-2 collagen expression was not found in the tissue regenerated in the untreated defect, except for a limited amount in the peripheral portion (Figure 3L). In contrast, the defect in Group II was filled with a sufficient volume of proteoglycan-rich tissue together with regenerated subchondral bone tissue (Figure 3F). The detailed surface histomorphological changes of each specimen are described in Table 1. Immunohistochemical observation showed that type-2 collagen was abundantly expressed in the proteoglycan-rich tissue (Figure 3J). These findings showed that hyaline cartilage tissue had regenerated in the defect of Group II. Regarding the defects of Groups I and III, cartilage tissue was observed in the defect, although it was not homogeneous in Safranin-O or type-2 collagen staining.

Table 1. Surface evaluations on the histological sections of DN gel-implanted specimens in each group. The range of the maximum depth or overgrowth in each specimen is noted in parentheses.

In high-magnification histology of Group II, fairly large round cells rich in cytoplasm were scattered singly or as isogenous groups in a proteoglycan-rich matrix (Figure 4A and 4B). Type-2 collagen was richly expressed in these cells (Figure 4C). In the superficial layer of this tissue, cells were relatively small and sparse, and some were aligned in cell columns parallel to the surface (Figure 4D). In addition, the most superficial layer was devoid of cells, resembling the lamina splendens of normal articular cartilage (Figure 4D).

Figure 4. High-magnification histology of Group II. Black scale bars indicate 20.0 micrometers. Large round cells rich in cytoplasm were scattered singly or as isogenous groups in a proteoglycan-rich matrix (A and B). Type-2 collagen was richly expressed in these cells (C). At the superficial layer, cells were relatively small and aligned parallel to the surface (D). The most superficial layer was devoid of cells, resembling the lamina splendens of normal articular cartilage (D).
Quantitative evaluations of gross appearance and histology

Concerning the gross appearance score, Group II was significantly greater than Groups I, III, and Control (p = 0.0119, p = 0.0006, and p < 0.0001, respectively) (Figure 5A). Regarding the histology score, Group II was significantly greater than the other groups (p = 0.0004, p < 0.0001, and p < 0.0001, respectively) (Figure 5B). Consequently, the total score of Group II was significantly greater than that of the other groups (p = 0.0007, p < 0.0001, and p < 0.0001, respectively) (Figure 5C).
For each score, there were no significant differences among Groups I, III, and Control.

Figure 5. Quantitative evaluations of gross appearance and histology. For each score, Group II was significantly greater than Groups I, III, and Control, whereas there were no significant differences among Groups I, III, and Control.

Real time PCR analysis

In the real-time PCR analysis performed at 4 weeks, the expression of type-2 collagen, aggrecan, and SOX9 mRNAs was significantly greater in the regenerated tissue of Group II than in that of the Control group (p = 0.0228, p = 0.0165, and p = 0.0172, respectively) (Figure 6). These transcripts were seldom detected in the tissues regenerated in the untreated defects.

Figure 6. Real time PCR analysis. The expression of type-2 collagen, aggrecan, and SOX9 mRNAs was significantly greater in the regenerated tissue of Group II than in that of the Control group.
Discussion

The present study demonstrated, first, that spontaneous hyaline cartilage regeneration can be induced in vivo in a large osteochondral defect created even in the FT joint by implanting a cylindrical PAMPS/PDMAAm DN gel plug at the bottom of the defect in the rabbit so that an approximately 2-mm deep vacant space is intentionally left in the defect. This finding suggests that spontaneous hyaline cartilage regeneration induced by DN gel implantation is not a phenomenon specific to the PF joint but a common one in diarthrodial joints. Secondly, this study showed that the regeneration effect was affected by the position (depth) of the implanted gel plug in the defect.
We have reported that the mechanical environment generated by repetitive compression forces during weight bearing, together with the elastic properties of the DN gel located at the bottom of the defect, is a significant factor in differentiating stem cells present in the defect into chondrocytes [25]. Therefore, the position (depth) of the implanted gel plug in the defect is considered one of the significant factors affecting the mechanical environment in the defect and, consequently, the outcome of hyaline cartilage regeneration. In addition, the second study showed that the gene expression measured in Group II corresponded to the better results in spontaneous cartilage regeneration. This result suggests that treatment with a DN gel induced spontaneous hyaline cartilage regeneration by exerting as yet unknown effects at the gene level on stem cells infiltrating the defect.

We have speculated on the reasons why the regeneration effect was affected by the position (depth) of the implanted gel plug in the defect. First, our previous study demonstrated that cartilage regeneration occurs in the blood clot, which contains mesenchymal stem cells and various cytokines [20]. We believe that a sufficient amount of blood clot must form in the defect immediately after surgery for spontaneous cartilage regeneration to occur. Therefore, the reason why cartilage regeneration was not induced in Group I of the present study (a vacant space of 1.5-mm depth) is thought to be that the space was so narrow that a blood clot could not form sufficiently within it. Secondly, we consider that the reason why cartilage regeneration was not induced in Group III (3.5-mm depth) but was induced in Group II (2.5-mm depth) can be explained by differences in biomechanical conditions. Kelly et al. reported that mechanical signals play an important role in the differentiation of bone marrow-derived stem cells [26]. Engler et al. reported that matrix elasticity influences the differentiation of mesenchymal stem cells [18]. In addition, appropriate repetitive compressive stress significantly enhances chondrocyte proliferation as well as aggrecan and collagen synthesis in chondrocytes [27-30]. Recently, we found that joint motion is needed to induce spontaneous hyaline cartilage regeneration in the osteochondral defect using the DN gel [25]. Joint motion generates repetitive compression forces on the tissue regenerated in the defect. These repetitive forces create a mechanical environment in the regenerated tissue that is affected by the location (depth) and the stiffness of the gel plug beneath the tissue. Thus, we speculate that the mechanical microenvironment in the tissue of Group II was appropriate for cartilage regeneration, whereas that in the defect of Group III was inappropriate.

The results of the present study have prompted us to propose an innovative strategy for clinically repairing various osteochondral lesions in the femoral condyle by DN gel implantation and induction of spontaneous cartilage regeneration. We should note that this therapeutic strategy is new and completely different in concept from the current progressive strategies that completely fill the defect space with tissue-engineered cartilage tissue, cell-seeded scaffold materials, or acellular polymer scaffolds with signaling molecules [11-13].
Numerous problems exist with current treatment strategies for chondral and osteochondral defects, including but not limited to donor site morbidity, the need for multiple surgeries, prolonged limitations in activity, and significant financial costs [7-10]. We believe that the spontaneous regeneration strategy has the potential to solve almost all of these problems of the current progressive strategies. Therefore, the spontaneous regeneration strategy should be studied as a realistic research focus in greater detail in the near future. For example, we report here that a depth of 2.5 mm is optimal for lesions in the rabbit medial femoral condyle. However, we did not analyze the relationship among the depth of the plug, the depth of the whole defect space, and the height of the plug. In addition, we did not analyze the influence of the ratio of cartilage thickness to lesion depth on cartilage regeneration. For possible clinical use of this treatment strategy, further studies should be conducted to clarify these issues in the near future.

Regarding the safety of the PAMPS/PDMAAm DN gel as a biomaterial, we conducted a pellet implantation test into the paravertebral muscle [23], according to the guideline for biological evaluation of the safety of biomaterials published by the Ministry of Health, Labour and Welfare, Japan. Although DN gel implantation induced mild cell infiltration at 1 week, the degree of inflammation decreased significantly to the level of the negative control at 4 and 6 weeks. We also cultured ATDC5 cells on the PAMPS/PDMAAm DN gel [20,24], and no harmful effects attributable to the DN gel surface were detected. We therefore believe that the PAMPS/PDMAAm DN gel is a safe biomaterial. However, the clinical safety of this DN gel as an implant has not yet been fully established, and further studies are needed to establish it in the near future.

There are some limitations to this study. The first is that the number of animals in the second study was insufficient, with a statistical power of 0.55, although the power in the first study was sufficient (0.9). However, the second study showed a statistically significant difference despite the low power; therefore, the use of 4 rabbits was acceptable for an experimental study. The second limitation is that we did not perform long-term observation of the regenerated cartilage above the DN gel or of the border between the regenerated cartilage and the original tissue. A long-term evaluation study needs to follow this work. The third limitation is that we have not completely clarified the mechanism of the spontaneous cartilage tissue regeneration induced by DN gel implantation. Further studies are needed in the near future to clarify the comprehensive in vivo mechanism of the spontaneous hyaline cartilage regeneration induced in a large osteochondral defect by implanting a cylindrical PAMPS/PDMAAm DN gel plug at the bottom of the defect.

Conclusions

This study demonstrated that spontaneous hyaline cartilage regeneration can be induced in vivo in an osteochondral defect created in the femoral condyle by implanting a DN gel plug at the bottom of the defect so that an approximately 2-mm deep vacant space is intentionally left in the defect.
This finding has prompted us to propose an innovative strategy, without cell culture, for repairing osteochondral lesions in the femoral condyle.

Financial competing interests

The authors declare that they have no competing interests.

Authors' contributions

MY performed the animal experiments. KY designed the study, participated in the study, and drafted the manuscript. NK and KA participated in designing the study and supervised the animal experiments. SO performed the PCR analysis. TK and JPG created the DN gel material. All authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2474/12/49/prepub
[ "Articular cartilage defects are a significant and increasing health care concern. It has been a commonly belief that hyaline cartilage tissue cannot spontaneously regenerate in vivo [1,2]. Therefore, the most progressive strategy to repair the articular cartilage defect is to fill an osteochondral defect with a tissue-engineered cartilage-like tissue or a cell-seeded scaffold material [3-6]. However, the cell culture procedures with the mammalian-derived materials/molecules include a possible risk of zoonosis transmission. In addition, it has been pointed out that this strategy has various realistic problems, including two-stage surgeries, a long period until weight bearing, an enormous amount of cost to establish a tissue-engineering industry system, possibly high medical fee for patients [7-10]. Under the similar strategy, some investigators have recently tried to fill up an osteochondral defect with acellular polymer scaffolds to induce cartilage cell regeneration inside it [11-14]. However, the results of these experimental trials are not favorable and are not indicated for clinical use. Thus, functional repair of articular osteochondral defects remains a major challenge not only in the field of knee surgery but also in tissue regeneration medicine.\nWe paid attention to the fact that sufficient fibrocartilage tissue can be regenerated in an osteochondral defect by creating many thin holes that penetrate the subchondral bone at the base of the defect in order to create bleeding from the bone marrow and subsequent clot formation (\"Microfracture\" technique). These induced mesenchymal stem cells have a high potential for cartilage regeneration [15]. In addition, recent studies have showed that, in autologous chondrocyte transplantation, quality of the tissue located just beneath the transplanted cells significantly affects quality of the regenerated cartilage [16,17]. In an ex vivo study, Engler et al [18] reported that elasticity of the material on which cultured cells attach directs stem cell differentiation: e.g., elastic materials induce differentiation to the cartilage tissue, and stiff materials induce differentiation to the bone tissue. Therefore, we hypothesize that a bioactive elastic material implanted in a chondral defect can stimulate and support hyaline cartilage regeneration.\nWe focused our research on an originally developed PAMPS/PDMAAm double-network (DN) hydrogel composed of poly-(2-Acrylamido-2-methylpropanesulfonic acid) (PAMPS) and poly-(N,N'-Dimetyl acrylamide) (PDMAAm) [19]. In our previous study validating the implant and its use in a large osteochondral defect created in the patellofemoral (PF) joint of the rabbit knee [20], we found that spontaneous hyaline cartilage regeneration occurred in vivo in the defect within 4 weeks after surgery when a PAMPS/PDMAAm DN gel plug was implanted at the bottom of the defect so that a 1.5 to 3.5-mm deep vacant space was intentionally left in the defect. In the clinical field, however, the joint that the most frequently requires a cartilage regeneration therapy is not the PF joint but the femorotibial (FT) joint. The PF and FT joints are anatomically, morphologically, and biomechanically different. Therefore, it is needed to clarify whether the spontaneous hyaline cartilage regeneration occurs in the FT joint. 
The purpose of this study is to clarify whether the spontaneous hyaline cartilage regeneration can be induced in vivo in a large osteochondral defect created in the medial femoral condyle of the FT joint by means of implanting a PAMPS/PDMAAm DN gel plug at the bottom of the defect.", "[SUBTITLE] 1) Materials [SUBSECTION] The PAMPS/PDMAAm DN hydrogel is a kind of interpenetrating network gel, but with an asymmetric structure: The first PAMPS network, which is rigid and brittle, is composed of densely cross-linked polyelectrolyte, and the second PDMAAm network, which is soft and ductile, consists of loosely or even non-crosslinked neutral polymers. The PAMPS/PDMAAm DN gel is strong enough to create an implantable plug, because the compressive fracture strength and the elastic modulus of the DN gel are 3.1 MPa and 0.2 MPa, respectively [21,22]. The material properties do not deteriorate in implantation into the subcutaneous tissue for 6 weeks [21]. The PAMPS network in this DN gel is negatively charged and has sulphonic acid bases, being similar to proteoglycans in normal cartilage. Our previous implantation test has shown that this DN gel is so bioactive that it induces cell infiltration in the muscle tissue at 1 week without any toxic effects for 6 weeks [23]. In addition, the PAMPS/PDMAAm DN gel surface can enhance differentiation of chondrogenic ATDC5 cells into chondrocytes in the in vitro condition [20,24].\nThe DN gel was synthesized by coauthors (T.K. and J.P.G) in Department of Biological Sciences, Hokkaido University Graduate School of Science, using the previously reported two-step sequential polymerization method [19]. After polymerization, the DN gel was immersed in pure water for 1 week and the water was changed 2 times every day to remove any un-reacted materials. From the DN gel, we created cylindrical plugs having a 2.7-mm diameter and an 8-mm length.\nThe PAMPS/PDMAAm DN hydrogel is a kind of interpenetrating network gel, but with an asymmetric structure: The first PAMPS network, which is rigid and brittle, is composed of densely cross-linked polyelectrolyte, and the second PDMAAm network, which is soft and ductile, consists of loosely or even non-crosslinked neutral polymers. The PAMPS/PDMAAm DN gel is strong enough to create an implantable plug, because the compressive fracture strength and the elastic modulus of the DN gel are 3.1 MPa and 0.2 MPa, respectively [21,22]. The material properties do not deteriorate in implantation into the subcutaneous tissue for 6 weeks [21]. The PAMPS network in this DN gel is negatively charged and has sulphonic acid bases, being similar to proteoglycans in normal cartilage. Our previous implantation test has shown that this DN gel is so bioactive that it induces cell infiltration in the muscle tissue at 1 week without any toxic effects for 6 weeks [23]. In addition, the PAMPS/PDMAAm DN gel surface can enhance differentiation of chondrogenic ATDC5 cells into chondrocytes in the in vitro condition [20,24].\nThe DN gel was synthesized by coauthors (T.K. and J.P.G) in Department of Biological Sciences, Hokkaido University Graduate School of Science, using the previously reported two-step sequential polymerization method [19]. After polymerization, the DN gel was immersed in pure water for 1 week and the water was changed 2 times every day to remove any un-reacted materials. 
From the DN gel, we created cylindrical plugs having a 2.7-mm diameter and an 8-mm length.\n[SUBTITLE] 2) Study design [SUBSECTION] A total of 25 mature female New Zealand White rabbits, weighing 3.6 ± 0.4 kg, were used in this study. Animal experiments were carried out in the Institute of Animal Experimentation, Hokkaido University School of Medicine under the Rules and Regulation of the Animal Care and Use Committee, Hokkaido University School of Medicine.\nThis experimental report was composed of 2 studies (Figure 1). In the first study, we divided 21 out of the 25 rabbits into 3 groups (Groups I, II, and III) of 7 animals each in order to clarify the effect of plug position (depth from the articular surface) in the defect on quality of the regenerated cartilage. An operation for each animal was performed under intravenous anesthesia (pentobarbital, 25 mg/kg) and sterile conditions. In the bilateral knees of each animal, we created a cylindrical osteochondral defect having a diameter of 2.4-mm at the center of the medial condyle of the FT joint, using a drill (Figure 2A). This diameter value was chosen as the maximal defect diameter that we could create on the cartilage surface without intra- or post-operative fracture of the medial condyle, because the width of the medial condyle around the defect was approximately 4.5 mm. Then, we implanted the PAMPS/PDMAAm DN gel plug into a defect in the right knee so that a vacant space of 1.5-mm depth (in Group I), 2.5-mm depth (in Group II), or 3.5-mm depth (in Group III) was left (Figure 2B). The actual depth was precisely measured in the histological sections after sacrifice. According to the measured results, \"the mean ± the standard deviation\" of the real depth were 1.49 ± 0.28 mm in Group I, 2.44 ± 0.27 mm in Group II, and 3.46 ± 0.31 mm in Group III. The cartilage thickness was approximately 0.5 mm around the defect. In the left knee, we created the defect having the same depth as the right knee, and we did not apply any treatment to obtain the non-treated control (Control group). The incised joint capsule and the skin wound were closed in layers with 3-0 nylon sutures, and an antiseptic spray dressing was applied. Postoperatively, each animal was allowed unrestricted activity in a cage (310 × 550 × 320 mm) without any joint immobilization. In each group, all rabbits were sacrificed by pentobarbital injection at the 4-week period, and we performed the gross and histological evaluations using the grading scale reported by Wayne et al [14] as well as immunohistochemical observations for the bilateral knees.\nFlowchart to explain the study design.\nHow to induce cartilage regeneration. A: We created a cylindrical osteochondral defect having a diameter of 2.4-mm in the medial condyle of the FT joint. Then, we implanted a double network (DN) gel plug into a bottom of the defect. B: A schematic cross-section of the osteochondral defect into which the plug was implanted. Note that a defect having a few millimeter depth from the cartilage surface remained after surgery.\nThe second study was conducted using 4 rabbits, based on the results of the first study. The aim of the second study using real-time PCR analysis was to confirm gene expression of type-2 collagen, aggrecan, and SOX9 in the tissue regenerated in the defect of Group II in comparison with the non-treated control knees, because the degree of spontaneous cartilage regeneration was the greatest in Group II among the tested groups in the first study. 
The same surgical treatments as performed in Group II of the first study were carried out in the bilateral knees, respectively. Immediately after sacrifice at 4 weeks, total RNA was extracted from the tissues regenerated in the defect created in the bilateral knees.\nA total of 25 mature female New Zealand White rabbits, weighing 3.6 ± 0.4 kg, were used in this study. Animal experiments were carried out in the Institute of Animal Experimentation, Hokkaido University School of Medicine under the Rules and Regulation of the Animal Care and Use Committee, Hokkaido University School of Medicine.\nThis experimental report was composed of 2 studies (Figure 1). In the first study, we divided 21 out of the 25 rabbits into 3 groups (Groups I, II, and III) of 7 animals each in order to clarify the effect of plug position (depth from the articular surface) in the defect on quality of the regenerated cartilage. An operation for each animal was performed under intravenous anesthesia (pentobarbital, 25 mg/kg) and sterile conditions. In the bilateral knees of each animal, we created a cylindrical osteochondral defect having a diameter of 2.4-mm at the center of the medial condyle of the FT joint, using a drill (Figure 2A). This diameter value was chosen as the maximal defect diameter that we could create on the cartilage surface without intra- or post-operative fracture of the medial condyle, because the width of the medial condyle around the defect was approximately 4.5 mm. Then, we implanted the PAMPS/PDMAAm DN gel plug into a defect in the right knee so that a vacant space of 1.5-mm depth (in Group I), 2.5-mm depth (in Group II), or 3.5-mm depth (in Group III) was left (Figure 2B). The actual depth was precisely measured in the histological sections after sacrifice. According to the measured results, \"the mean ± the standard deviation\" of the real depth were 1.49 ± 0.28 mm in Group I, 2.44 ± 0.27 mm in Group II, and 3.46 ± 0.31 mm in Group III. The cartilage thickness was approximately 0.5 mm around the defect. In the left knee, we created the defect having the same depth as the right knee, and we did not apply any treatment to obtain the non-treated control (Control group). The incised joint capsule and the skin wound were closed in layers with 3-0 nylon sutures, and an antiseptic spray dressing was applied. Postoperatively, each animal was allowed unrestricted activity in a cage (310 × 550 × 320 mm) without any joint immobilization. In each group, all rabbits were sacrificed by pentobarbital injection at the 4-week period, and we performed the gross and histological evaluations using the grading scale reported by Wayne et al [14] as well as immunohistochemical observations for the bilateral knees.\nFlowchart to explain the study design.\nHow to induce cartilage regeneration. A: We created a cylindrical osteochondral defect having a diameter of 2.4-mm in the medial condyle of the FT joint. Then, we implanted a double network (DN) gel plug into a bottom of the defect. B: A schematic cross-section of the osteochondral defect into which the plug was implanted. Note that a defect having a few millimeter depth from the cartilage surface remained after surgery.\nThe second study was conducted using 4 rabbits, based on the results of the first study. 
The aim of the second study using real-time PCR analysis was to confirm gene expression of type-2 collagen, aggrecan, and SOX9 in the tissue regenerated in the defect of Group II in comparison with the non-treated control knees, because the degree of spontaneous cartilage regeneration was the greatest in Group II among the tested groups in the first study. The same surgical treatments as performed in Group II of the first study were carried out in the bilateral knees, respectively. Immediately after sacrifice at 4 weeks, total RNA was extracted from the tissues regenerated in the defect created in the bilateral knees.\n[SUBTITLE] 3) Statistical Analysis [SUBSECTION] The scores for each specimen were assessed for statistical differences using one-way analysis of variance with the Fisher's protected least significance difference for post hoc multiple comparisons. A commercially available software program (StatView, SAS Institute, NC) was used for statistical calculation. The significance level was set at p = 0.05.\nThe scores for each specimen were assessed for statistical differences using one-way analysis of variance with the Fisher's protected least significance difference for post hoc multiple comparisons. A commercially available software program (StatView, SAS Institute, NC) was used for statistical calculation. The significance level was set at p = 0.05.\n[SUBTITLE] 4) Examination methods [SUBSECTION] [SUBTITLE] Gross observation for in vivo regenerated tissues [SUBSECTION] Immediately after sacrifice, the tissue regenerated in the osteochondral defect was quantitatively evaluated with the grading scale reported by Wayne et al [14]. Gross appearance of each defect on the femoral condyle was graded for coverage (4 points), tissue color (4 points), defect margins (4 points), and surface (4 points). Thus, the maximum total score was 16 points.\nImmediately after sacrifice, the tissue regenerated in the osteochondral defect was quantitatively evaluated with the grading scale reported by Wayne et al [14]. Gross appearance of each defect on the femoral condyle was graded for coverage (4 points), tissue color (4 points), defect margins (4 points), and surface (4 points). Thus, the maximum total score was 16 points.\n[SUBTITLE] Histological and immunohistochemical examinations [SUBSECTION] A distal portion of the resected femur was fixed in a 10% neutral buffered formalin solution for 3 days, decalcified with 50 mM EDTA for a period of 3-4 weeks, and then cast in a paraffin block. The femur was sectioned perpendicular to the longitudinal axis, and stained with hematoxylin-eosin and Safranin-O. For immunohistochemical evaluations, monoclonal antibody (anti-hCL(II), purified IgG, Fuji Chemical Industries Ltd, Toyama, Japan) was used as primary antibodies. Immunostaining was carried out according to the manufacturer's instructions using the Envision immunostaining system (DAKO Japan, Kyoto, Japan). Finally, the sections were counterstained with hematoxylin. Histology was evaluated with the scoring system reported by Wayne et al [14], which was composed of matrix points (4 points), cell distribution points (3 points), smoothness points of the surface (4 points), safranin O stain points (4 points), safranin O-stained area points (4 points). Thus, the maximum total score was 19 points.\nA distal portion of the resected femur was fixed in a 10% neutral buffered formalin solution for 3 days, decalcified with 50 mM EDTA for a period of 3-4 weeks, and then cast in a paraffin block. 
The femur was sectioned perpendicular to the longitudinal axis, and stained with hematoxylin-eosin and Safranin-O. For immunohistochemical evaluations, monoclonal antibody (anti-hCL(II), purified IgG, Fuji Chemical Industries Ltd, Toyama, Japan) was used as primary antibodies. Immunostaining was carried out according to the manufacturer's instructions using the Envision immunostaining system (DAKO Japan, Kyoto, Japan). Finally, the sections were counterstained with hematoxylin. Histology was evaluated with the scoring system reported by Wayne et al [14], which was composed of matrix points (4 points), cell distribution points (3 points), smoothness points of the surface (4 points), safranin O stain points (4 points), safranin O-stained area points (4 points). Thus, the maximum total score was 19 points.\n[SUBTITLE] Real time polymerase chain reaction (PCR) analysis [SUBSECTION] Total RNA was extracted from the tissues regenerated in the defect, using the RNeasy mini kit (Qiagen Inc., Valencia, CA). RNA quality from each sample was assured by the A260/280 absorbance ratio. The RNA (100 ng) was reverse-transcribed into single strand cDNA using PrimeScript® RT reagent Kit (TakaraBio, Ohtsu, Japan). The RT reaction was carried out for 15 minutes at 37 degrees Celsius and then for 5 seconds at 85 degrees Celsius. All oligonucleotide primer sets were designed based upon the published mRNA sequence. The expected amplicon lengths ranged from 93 to 189 bp. The sequences of primers used in real time PCR analyses for rabbit regenerative tissues were as follows: type-2 collagen forward GACCATCAATGGCGGCTTC; reverse CACGCTGTTCTTGCAGTGGTAG. Aggrecan forward GCTACGACGCCATCTGCTAC; reverse GTCTGGACCGTGATGTCCTC. SOX9 forward AACGCCGAGCTCAGCAAGA; reverse TGGTACTTGTAGTCCGGGTGGTC. GAPDH forward CCCTCAATGACCACTTTGTGAA; reverse AGGCCATGTGGACCATGAG. The real time PCR was performed in Thermal Cycler Dice® TP800 (TakaraBio, Ohtsu, Japan) by using SYBR® Premix Ex TaqTM (TakaraBio, Ohtsu, Japan). cDNA template (5 ng) was used for real time PCR in a final volume of 25 microlitter. cDNA was amplified according to the following condition: 95 degrees Celsius for 5 sec and 60 degrees Celsius for 30 sec at 40 amplification cycles. Fluorescence changes were monitored with SYBR Green after every cycle. A dissociation curve analysis was performed (0.5 degrees Celsius/sec increase from 60 to 95 degrees Celsius with continuous fluorescence readings) at the end of cycles to ensure that single PCR products were obtained. The amplicon size and reaction specificity were confirmed by 2.5% agarose gel electrophoresis. The results were evaluated using the Thermal Cycler Dice® Real Time System software program (TakaraBio, Ohtsu, Japan). Glyceroaldehyde-3-phosphate dehydrogenase (GAPDH) primers were used to normalize samples.\nTotal RNA was extracted from the tissues regenerated in the defect, using the RNeasy mini kit (Qiagen Inc., Valencia, CA). RNA quality from each sample was assured by the A260/280 absorbance ratio. The RNA (100 ng) was reverse-transcribed into single strand cDNA using PrimeScript® RT reagent Kit (TakaraBio, Ohtsu, Japan). The RT reaction was carried out for 15 minutes at 37 degrees Celsius and then for 5 seconds at 85 degrees Celsius. All oligonucleotide primer sets were designed based upon the published mRNA sequence. The expected amplicon lengths ranged from 93 to 189 bp. 
The sequences of primers used in real time PCR analyses for rabbit regenerative tissues were as follows: type-2 collagen forward GACCATCAATGGCGGCTTC; reverse CACGCTGTTCTTGCAGTGGTAG. Aggrecan forward GCTACGACGCCATCTGCTAC; reverse GTCTGGACCGTGATGTCCTC. SOX9 forward AACGCCGAGCTCAGCAAGA; reverse TGGTACTTGTAGTCCGGGTGGTC. GAPDH forward CCCTCAATGACCACTTTGTGAA; reverse AGGCCATGTGGACCATGAG. The real time PCR was performed in the Thermal Cycler Dice® TP800 (TakaraBio, Ohtsu, Japan) using SYBR® Premix Ex TaqTM (TakaraBio, Ohtsu, Japan). cDNA template (5 ng) was used for real time PCR in a final volume of 25 microliters. cDNA was amplified according to the following conditions: 95 degrees Celsius for 5 sec and 60 degrees Celsius for 30 sec, for 40 amplification cycles. Fluorescence changes were monitored with SYBR Green after every cycle. A dissociation curve analysis was performed (0.5 degrees Celsius/sec increase from 60 to 95 degrees Celsius with continuous fluorescence readings) at the end of the cycles to ensure that single PCR products were obtained. The amplicon size and reaction specificity were confirmed by 2.5% agarose gel electrophoresis. The results were evaluated using the Thermal Cycler Dice® Real Time System software program (TakaraBio, Ohtsu, Japan). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) primers were used to normalize samples.\n[SUBTITLE] Gross observation for in vivo regenerated tissues [SUBSECTION] Immediately after sacrifice, the tissue regenerated in the osteochondral defect was quantitatively evaluated with the grading scale reported by Wayne et al [14]. Gross appearance of each defect on the femoral condyle was graded for coverage (4 points), tissue color (4 points), defect margins (4 points), and surface (4 points). Thus, the maximum total score was 16 points.\n[SUBTITLE] Histological and immunohistochemical examinations [SUBSECTION] A distal portion of the resected femur was fixed in a 10% neutral buffered formalin solution for 3 days, decalcified with 50 mM EDTA for a period of 3-4 weeks, and then cast in a paraffin block. The femur was sectioned perpendicular to the longitudinal axis, and stained with hematoxylin-eosin and Safranin-O. For immunohistochemical evaluations, a monoclonal antibody (anti-hCL(II), purified IgG, Fuji Chemical Industries Ltd, Toyama, Japan) was used as the primary antibody. Immunostaining was carried out according to the manufacturer's instructions using the Envision immunostaining system (DAKO Japan, Kyoto, Japan). Finally, the sections were counterstained with hematoxylin. Histology was evaluated with the scoring system reported by Wayne et al [14], which was composed of matrix points (4 points), cell distribution points (3 points), smoothness points of the surface (4 points), safranin O stain points (4 points), and safranin O-stained area points (4 points). Thus, the maximum total score was 19 points.\n[SUBTITLE] Real time polymerase chain reaction (PCR) analysis [SUBSECTION] Total RNA was extracted from the tissues regenerated in the defect, using the RNeasy mini kit (Qiagen Inc., Valencia, CA). RNA quality from each sample was assured by the A260/280 absorbance ratio. The RNA (100 ng) was reverse-transcribed into single strand cDNA using the PrimeScript® RT reagent Kit (TakaraBio, Ohtsu, Japan). The RT reaction was carried out for 15 minutes at 37 degrees Celsius and then for 5 seconds at 85 degrees Celsius. All oligonucleotide primer sets were designed based upon the published mRNA sequences. The expected amplicon lengths ranged from 93 to 189 bp. The sequences of primers used in real time PCR analyses for rabbit regenerative tissues were as follows: type-2 collagen forward GACCATCAATGGCGGCTTC; reverse CACGCTGTTCTTGCAGTGGTAG. Aggrecan forward GCTACGACGCCATCTGCTAC; reverse GTCTGGACCGTGATGTCCTC. SOX9 forward AACGCCGAGCTCAGCAAGA; reverse TGGTACTTGTAGTCCGGGTGGTC. GAPDH forward CCCTCAATGACCACTTTGTGAA; reverse AGGCCATGTGGACCATGAG. The real time PCR was performed in the Thermal Cycler Dice® TP800 (TakaraBio, Ohtsu, Japan) using SYBR® Premix Ex TaqTM (TakaraBio, Ohtsu, Japan). cDNA template (5 ng) was used for real time PCR in a final volume of 25 microliters. cDNA was amplified according to the following conditions: 95 degrees Celsius for 5 sec and 60 degrees Celsius for 30 sec, for 40 amplification cycles. Fluorescence changes were monitored with SYBR Green after every cycle. A dissociation curve analysis was performed (0.5 degrees Celsius/sec increase from 60 to 95 degrees Celsius with continuous fluorescence readings) at the end of the cycles to ensure that single PCR products were obtained. The amplicon size and reaction specificity were confirmed by 2.5% agarose gel electrophoresis. The results were evaluated using the Thermal Cycler Dice® Real Time System software program (TakaraBio, Ohtsu, Japan). Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) primers were used to normalize samples.", "The PAMPS/PDMAAm DN hydrogel is a kind of interpenetrating network gel, but with an asymmetric structure: The first PAMPS network, which is rigid and brittle, is composed of densely cross-linked polyelectrolyte, and the second PDMAAm network, which is soft and ductile, consists of loosely or even non-crosslinked neutral polymers. The PAMPS/PDMAAm DN gel is strong enough to create an implantable plug, because the compressive fracture strength and the elastic modulus of the DN gel are 3.1 MPa and 0.2 MPa, respectively [21,22]. The material properties do not deteriorate in implantation into the subcutaneous tissue for 6 weeks [21]. The PAMPS network in this DN gel is negatively charged and has sulphonic acid bases, being similar to proteoglycans in normal cartilage. Our previous implantation test has shown that this DN gel is so bioactive that it induces cell infiltration in the muscle tissue at 1 week without any toxic effects for 6 weeks [23]. In addition, the PAMPS/PDMAAm DN gel surface can enhance differentiation of chondrogenic ATDC5 cells into chondrocytes in the in vitro condition [20,24].\nThe DN gel was synthesized by coauthors (T.K. and J.P.G.) in the Department of Biological Sciences, Hokkaido University Graduate School of Science, using the previously reported two-step sequential polymerization method [19]. After polymerization, the DN gel was immersed in pure water for 1 week and the water was changed 2 times every day to remove any un-reacted materials. From the DN gel, we created cylindrical plugs having a 2.7-mm diameter and an 8-mm length.", "A total of 25 mature female New Zealand White rabbits, weighing 3.6 ± 0.4 kg, were used in this study. Animal experiments were carried out in the Institute of Animal Experimentation, Hokkaido University School of Medicine under the Rules and Regulations of the Animal Care and Use Committee, Hokkaido University School of Medicine.\nThis experimental report was composed of 2 studies (Figure 1). 
In the first study, we divided 21 out of the 25 rabbits into 3 groups (Groups I, II, and III) of 7 animals each in order to clarify the effect of plug position (depth from the articular surface) in the defect on the quality of the regenerated cartilage. An operation on each animal was performed under intravenous anesthesia (pentobarbital, 25 mg/kg) and sterile conditions. In the bilateral knees of each animal, we created a cylindrical osteochondral defect having a diameter of 2.4 mm at the center of the medial condyle of the FT joint, using a drill (Figure 2A). This diameter value was chosen as the maximal defect diameter that we could create on the cartilage surface without intra- or post-operative fracture of the medial condyle, because the width of the medial condyle around the defect was approximately 4.5 mm. Then, we implanted the PAMPS/PDMAAm DN gel plug into the defect in the right knee so that a vacant space of 1.5-mm depth (in Group I), 2.5-mm depth (in Group II), or 3.5-mm depth (in Group III) was left (Figure 2B). The actual depth was precisely measured in the histological sections after sacrifice. According to the measured results, the mean ± standard deviation of the actual depth was 1.49 ± 0.28 mm in Group I, 2.44 ± 0.27 mm in Group II, and 3.46 ± 0.31 mm in Group III. The cartilage thickness was approximately 0.5 mm around the defect. In the left knee, we created a defect having the same depth as in the right knee, and we did not apply any treatment, to obtain the non-treated control (Control group). The incised joint capsule and the skin wound were closed in layers with 3-0 nylon sutures, and an antiseptic spray dressing was applied. Postoperatively, each animal was allowed unrestricted activity in a cage (310 × 550 × 320 mm) without any joint immobilization. In each group, all rabbits were sacrificed by pentobarbital injection at the 4-week period, and we performed the gross and histological evaluations using the grading scale reported by Wayne et al [14] as well as immunohistochemical observations for the bilateral knees.\nFlowchart to explain the study design.\nHow to induce cartilage regeneration. A: We created a cylindrical osteochondral defect having a diameter of 2.4 mm in the medial condyle of the FT joint. Then, we implanted a double network (DN) gel plug into the bottom of the defect. B: A schematic cross-section of the osteochondral defect into which the plug was implanted. Note that a defect having a few millimeters of depth from the cartilage surface remained after surgery.\nThe second study was conducted using 4 rabbits, based on the results of the first study. The aim of the second study, using real-time PCR analysis, was to confirm gene expression of type-2 collagen, aggrecan, and SOX9 in the tissue regenerated in the defect of Group II in comparison with the non-treated control knees, because the degree of spontaneous cartilage regeneration was the greatest in Group II among the tested groups in the first study. The same surgical treatments as performed in Group II of the first study were carried out in the bilateral knees, respectively. Immediately after sacrifice at 4 weeks, total RNA was extracted from the tissues regenerated in the defects created in the bilateral knees.", "The scores for each specimen were assessed for statistical differences using one-way analysis of variance with Fisher's protected least significant difference for post hoc multiple comparisons. 
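For readers who want to reproduce this kind of score comparison outside StatView, the sketch below shows the arithmetic of a one-way ANOVA followed by Fisher's protected least significant difference (LSD) pairwise comparisons in Python. The per-animal score values are invented for illustration only; what the sketch mirrors is the procedure (an omnibus F-test first, then pairwise t-tests based on the pooled within-group variance only if the omnibus test is significant).

```python
import numpy as np
from scipy import stats
from itertools import combinations

# Hypothetical gross-appearance scores (maximum 16 points) for the four groups;
# the actual per-animal scores are not reported in the text.
groups = {
    "Group I":   [9, 10, 8, 11, 9, 10, 9],
    "Group II":  [14, 15, 13, 14, 15, 13, 14],
    "Group III": [9, 8, 10, 9, 10, 8, 9],
    "Control":   [8, 9, 8, 7, 9, 8, 8],
}

# Omnibus one-way ANOVA across all groups.
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Pooled within-group variance (mean square error) used by Fisher's LSD.
n_total = sum(len(v) for v in groups.values())
df_error = n_total - len(groups)
mse = sum((len(v) - 1) * np.var(v, ddof=1) for v in groups.values()) / df_error

# "Protected" LSD: pairwise t-tests are performed only when the omnibus test is significant.
if p_omnibus < 0.05:
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        diff = np.mean(a) - np.mean(b)
        se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
        t = diff / se
        p = 2 * stats.t.sf(abs(t), df_error)
        print(f"{name_a} vs {name_b}: difference = {diff:+.2f}, p = {p:.4f}")
```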
A commercially available software program (StatView, SAS Institute, NC) was used for statistical calculation. The significance level was set at p = 0.05.", "[SUBTITLE] Gross observation for in vivo regenerated tissues [SUBSECTION] Immediately after sacrifice, the tissue regenerated in the osteochondral defect was quantitatively evaluated with the grading scale reported by Wayne et al [14]. Gross appearance of each defect on the femoral condyle was graded for coverage (4 points), tissue color (4 points), defect margins (4 points), and surface (4 points). Thus, the maximum total score was 16 points.\nImmediately after sacrifice, the tissue regenerated in the osteochondral defect was quantitatively evaluated with the grading scale reported by Wayne et al [14]. Gross appearance of each defect on the femoral condyle was graded for coverage (4 points), tissue color (4 points), defect margins (4 points), and surface (4 points). Thus, the maximum total score was 16 points.\n[SUBTITLE] Histological and immunohistochemical examinations [SUBSECTION] A distal portion of the resected femur was fixed in a 10% neutral buffered formalin solution for 3 days, decalcified with 50 mM EDTA for a period of 3-4 weeks, and then cast in a paraffin block. The femur was sectioned perpendicular to the longitudinal axis, and stained with hematoxylin-eosin and Safranin-O. For immunohistochemical evaluations, monoclonal antibody (anti-hCL(II), purified IgG, Fuji Chemical Industries Ltd, Toyama, Japan) was used as primary antibodies. Immunostaining was carried out according to the manufacturer's instructions using the Envision immunostaining system (DAKO Japan, Kyoto, Japan). Finally, the sections were counterstained with hematoxylin. Histology was evaluated with the scoring system reported by Wayne et al [14], which was composed of matrix points (4 points), cell distribution points (3 points), smoothness points of the surface (4 points), safranin O stain points (4 points), safranin O-stained area points (4 points). Thus, the maximum total score was 19 points.\nA distal portion of the resected femur was fixed in a 10% neutral buffered formalin solution for 3 days, decalcified with 50 mM EDTA for a period of 3-4 weeks, and then cast in a paraffin block. The femur was sectioned perpendicular to the longitudinal axis, and stained with hematoxylin-eosin and Safranin-O. For immunohistochemical evaluations, monoclonal antibody (anti-hCL(II), purified IgG, Fuji Chemical Industries Ltd, Toyama, Japan) was used as primary antibodies. Immunostaining was carried out according to the manufacturer's instructions using the Envision immunostaining system (DAKO Japan, Kyoto, Japan). Finally, the sections were counterstained with hematoxylin. Histology was evaluated with the scoring system reported by Wayne et al [14], which was composed of matrix points (4 points), cell distribution points (3 points), smoothness points of the surface (4 points), safranin O stain points (4 points), safranin O-stained area points (4 points). Thus, the maximum total score was 19 points.\n[SUBTITLE] Real time polymerase chain reaction (PCR) analysis [SUBSECTION] Total RNA was extracted from the tissues regenerated in the defect, using the RNeasy mini kit (Qiagen Inc., Valencia, CA). RNA quality from each sample was assured by the A260/280 absorbance ratio. The RNA (100 ng) was reverse-transcribed into single strand cDNA using PrimeScript® RT reagent Kit (TakaraBio, Ohtsu, Japan). 
The RT reaction was carried out for 15 minutes at 37 degrees Celsius and then for 5 seconds at 85 degrees Celsius. All oligonucleotide primer sets were designed based upon the published mRNA sequence. The expected amplicon lengths ranged from 93 to 189 bp. The sequences of primers used in real time PCR analyses for rabbit regenerative tissues were as follows: type-2 collagen forward GACCATCAATGGCGGCTTC; reverse CACGCTGTTCTTGCAGTGGTAG. Aggrecan forward GCTACGACGCCATCTGCTAC; reverse GTCTGGACCGTGATGTCCTC. SOX9 forward AACGCCGAGCTCAGCAAGA; reverse TGGTACTTGTAGTCCGGGTGGTC. GAPDH forward CCCTCAATGACCACTTTGTGAA; reverse AGGCCATGTGGACCATGAG. The real time PCR was performed in Thermal Cycler Dice® TP800 (TakaraBio, Ohtsu, Japan) by using SYBR® Premix Ex TaqTM (TakaraBio, Ohtsu, Japan). cDNA template (5 ng) was used for real time PCR in a final volume of 25 microlitter. cDNA was amplified according to the following condition: 95 degrees Celsius for 5 sec and 60 degrees Celsius for 30 sec at 40 amplification cycles. Fluorescence changes were monitored with SYBR Green after every cycle. A dissociation curve analysis was performed (0.5 degrees Celsius/sec increase from 60 to 95 degrees Celsius with continuous fluorescence readings) at the end of cycles to ensure that single PCR products were obtained. The amplicon size and reaction specificity were confirmed by 2.5% agarose gel electrophoresis. The results were evaluated using the Thermal Cycler Dice® Real Time System software program (TakaraBio, Ohtsu, Japan). Glyceroaldehyde-3-phosphate dehydrogenase (GAPDH) primers were used to normalize samples.\nTotal RNA was extracted from the tissues regenerated in the defect, using the RNeasy mini kit (Qiagen Inc., Valencia, CA). RNA quality from each sample was assured by the A260/280 absorbance ratio. The RNA (100 ng) was reverse-transcribed into single strand cDNA using PrimeScript® RT reagent Kit (TakaraBio, Ohtsu, Japan). The RT reaction was carried out for 15 minutes at 37 degrees Celsius and then for 5 seconds at 85 degrees Celsius. All oligonucleotide primer sets were designed based upon the published mRNA sequence. The expected amplicon lengths ranged from 93 to 189 bp. The sequences of primers used in real time PCR analyses for rabbit regenerative tissues were as follows: type-2 collagen forward GACCATCAATGGCGGCTTC; reverse CACGCTGTTCTTGCAGTGGTAG. Aggrecan forward GCTACGACGCCATCTGCTAC; reverse GTCTGGACCGTGATGTCCTC. SOX9 forward AACGCCGAGCTCAGCAAGA; reverse TGGTACTTGTAGTCCGGGTGGTC. GAPDH forward CCCTCAATGACCACTTTGTGAA; reverse AGGCCATGTGGACCATGAG. The real time PCR was performed in Thermal Cycler Dice® TP800 (TakaraBio, Ohtsu, Japan) by using SYBR® Premix Ex TaqTM (TakaraBio, Ohtsu, Japan). cDNA template (5 ng) was used for real time PCR in a final volume of 25 microlitter. cDNA was amplified according to the following condition: 95 degrees Celsius for 5 sec and 60 degrees Celsius for 30 sec at 40 amplification cycles. Fluorescence changes were monitored with SYBR Green after every cycle. A dissociation curve analysis was performed (0.5 degrees Celsius/sec increase from 60 to 95 degrees Celsius with continuous fluorescence readings) at the end of cycles to ensure that single PCR products were obtained. The amplicon size and reaction specificity were confirmed by 2.5% agarose gel electrophoresis. The results were evaluated using the Thermal Cycler Dice® Real Time System software program (TakaraBio, Ohtsu, Japan). 
Glyceroaldehyde-3-phosphate dehydrogenase (GAPDH) primers were used to normalize samples.", "Immediately after sacrifice, the tissue regenerated in the osteochondral defect was quantitatively evaluated with the grading scale reported by Wayne et al [14]. Gross appearance of each defect on the femoral condyle was graded for coverage (4 points), tissue color (4 points), defect margins (4 points), and surface (4 points). Thus, the maximum total score was 16 points.", "A distal portion of the resected femur was fixed in a 10% neutral buffered formalin solution for 3 days, decalcified with 50 mM EDTA for a period of 3-4 weeks, and then cast in a paraffin block. The femur was sectioned perpendicular to the longitudinal axis, and stained with hematoxylin-eosin and Safranin-O. For immunohistochemical evaluations, monoclonal antibody (anti-hCL(II), purified IgG, Fuji Chemical Industries Ltd, Toyama, Japan) was used as primary antibodies. Immunostaining was carried out according to the manufacturer's instructions using the Envision immunostaining system (DAKO Japan, Kyoto, Japan). Finally, the sections were counterstained with hematoxylin. Histology was evaluated with the scoring system reported by Wayne et al [14], which was composed of matrix points (4 points), cell distribution points (3 points), smoothness points of the surface (4 points), safranin O stain points (4 points), safranin O-stained area points (4 points). Thus, the maximum total score was 19 points.", "Total RNA was extracted from the tissues regenerated in the defect, using the RNeasy mini kit (Qiagen Inc., Valencia, CA). RNA quality from each sample was assured by the A260/280 absorbance ratio. The RNA (100 ng) was reverse-transcribed into single strand cDNA using PrimeScript® RT reagent Kit (TakaraBio, Ohtsu, Japan). The RT reaction was carried out for 15 minutes at 37 degrees Celsius and then for 5 seconds at 85 degrees Celsius. All oligonucleotide primer sets were designed based upon the published mRNA sequence. The expected amplicon lengths ranged from 93 to 189 bp. The sequences of primers used in real time PCR analyses for rabbit regenerative tissues were as follows: type-2 collagen forward GACCATCAATGGCGGCTTC; reverse CACGCTGTTCTTGCAGTGGTAG. Aggrecan forward GCTACGACGCCATCTGCTAC; reverse GTCTGGACCGTGATGTCCTC. SOX9 forward AACGCCGAGCTCAGCAAGA; reverse TGGTACTTGTAGTCCGGGTGGTC. GAPDH forward CCCTCAATGACCACTTTGTGAA; reverse AGGCCATGTGGACCATGAG. The real time PCR was performed in Thermal Cycler Dice® TP800 (TakaraBio, Ohtsu, Japan) by using SYBR® Premix Ex TaqTM (TakaraBio, Ohtsu, Japan). cDNA template (5 ng) was used for real time PCR in a final volume of 25 microlitter. cDNA was amplified according to the following condition: 95 degrees Celsius for 5 sec and 60 degrees Celsius for 30 sec at 40 amplification cycles. Fluorescence changes were monitored with SYBR Green after every cycle. A dissociation curve analysis was performed (0.5 degrees Celsius/sec increase from 60 to 95 degrees Celsius with continuous fluorescence readings) at the end of cycles to ensure that single PCR products were obtained. The amplicon size and reaction specificity were confirmed by 2.5% agarose gel electrophoresis. The results were evaluated using the Thermal Cycler Dice® Real Time System software program (TakaraBio, Ohtsu, Japan). 
Glyceraldehyde-3-phosphate dehydrogenase (GAPDH) primers were used to normalize samples.", "[SUBTITLE] Gross observation of the joint surface repair [SUBSECTION] In gross observations, the knee joint did not show any inflammation or any pathological changes. The defect was filled with a white opaque tissue in Group II, while the defect was insufficiently filled with white or reddish, opaque, patchy tissues in Groups I and III (Figure 3A-C). The untreated defect in the Control group showed white or reddish, opaque, patchy, stiff tissues, independent of the depth (Figure 3D).\nGross observations of the joint surface (A-D), histological findings with Safranin-O staining (E-H), and immunohistological findings with type-2 collagen staining (I-L). Black scale bars show a length of 500 micrometers. In Group II, the defect was filled with the hyaline cartilage tissue rich in proteoglycan and type-2 collagen. In Groups I and III, we observed the cartilage tissue in the defect although it was not homogenous in Safranin-O staining or type-2 collagen staining.\n[SUBTITLE] Histological and immunohistological evaluations [SUBSECTION] Low magnification histology (Figure 3E-L) showed that the untreated (control) defect was filled with the fibrous and bone tissues, while a small amount of the proteoglycan-rich tissue was occasionally and irregularly seen in these tissues (Figure 3H). The type-2 collagen expression was not found in the tissue regenerated in the untreated defect, except for a limited amount in the peripheral portion (Figure 3L). On the other hand, the defect of Group II was filled by a sufficient volume of the proteoglycan-rich tissue with regenerated subchondral bone tissue (Figure 3F). The detailed surface histomorphological change of each specimen is described in Table 1. The immunohistochemical observation showed that the type-2 collagen was abundantly expressed in the proteoglycan-rich tissue (Figure 3J). These findings showed that the hyaline cartilage tissue was regenerated in the defect of Group II. Regarding the defect of Groups I and III, we observed the cartilage tissue in the defect although the tissue was not homogenous in Safranin-O staining or type-2 collagen staining.\nSurface evaluations on the histological sections of DN gel-implanted specimens in each group.\nThe range of the maximum depth or overgrowth in each specimen was noted in the parentheses.\nIn high magnification histology of Group II, fairly large round cells rich in cytoplasm were scattered singly or as an isogenous group in a proteoglycan-rich matrix (Figure 4A and 4B). 
In these cells, type-2 collagen was richly expressed (Figure 4C). At the superficial layer in this tissue, cells were relatively small and sparse, while some cells were aligned as cell columns parallel to the surface (Figure 4D). In addition, the most superficial layer was devoid of cells, resembling the lamina splendens in the normal articular cartilage (Figure 4D).\nHigh magnification histology of Group II. Black scale bars show a length of 20.0 micrometers. Large round cells rich in cytoplasm were scattered singly or as an isogenous group in a proteoglycan-rich matrix (A and B). In these cells, type 2 collagen was richly expressed (C). At the superficial layer, cells were relatively small and aligned parallel to the surface (D). The most superficial layer was devoid of cells, resembling the lamina splendens in the normal articular cartilage (D).\n[SUBTITLE] Quantitative evaluations of gross appearance and histology [SUBSECTION] Concerning the gross appearance score, Group II was significantly greater than Groups I, III, and Control (p = 0.0119, p = 0.0006, and p < 0.0001, respectively) (Figure 5A). 
Regarding the histology score, Group II was significantly greater than the other groups (p = 0.0004, p < 0.0001, and p < 0.0001, respectively) (Figure 5B). Thus, the total score showed that Group II was significantly greater than the other groups (p = 0.0007, p < 0.0001, and p < 0.0001, respectively) (Figure 5C). In each score, there were no significant differences among Groups I, III, and Control.\nQuantitative evaluations of gross appearance and histology. Concerning each score, Group II was significantly greater than Groups I, III, and Control. There were no significant differences among Groups I, III, and Control.\n[SUBTITLE] Real time PCR analysis [SUBSECTION] In the real time PCR analysis performed at 4 weeks, the degree of expression of type-2 collagen, aggrecan, and SOX9 mRNAs was significantly greater in the regenerated tissue of Group II than that of the Control Group (p = 0.0228, p = 0.0165, and p = 0.0172, respectively) (Figure 6). The expressions were seldom seen in the tissues regenerated in the untreated defect.\nReal time PCR analysis. The degree of expression of type-2 collagen, aggrecan, and SOX9 mRNAs was significantly greater in the regenerated tissue of Group II than that of the Control Group.", "In gross observations, the knee joint did not show any inflammation or any pathological changes. The defect was filled with a white opaque tissue in Group II, while the defect was insufficiently filled with white or reddish, opaque, patchy tissues in Groups I and III (Figure 3A-C). The untreated defect in the Control group showed white or reddish, opaque, patchy, stiff tissues, independent of the depth (Figure 3D).\nGross observations of the joint surface (A-D), histological findings with Safranin-O staining (E-H), and immunohistological findings with type-2 collagen staining (I-L). Black scale bars show a length of 500 micrometers. In Group II, the defect was filled with the hyaline cartilage tissue rich in proteoglycan and type-2 collagen. 
In Groups I and III, we observed the cartilage tissue in the defect although it was not homogenous in Safranin-O staining or type-2 collagen staining.", "Low magnification histology (Figure 3E-L) showed that the untreated (control) defect was filled with the fibrous and bone tissues, while a small amount of the proteoglycan-rich tissue was occasionally and irregularly seen in these tissues (Figure 3H). The type-2 collagen expression was not found in the tissue regenerated in the untreated defect, except for a limited amount in the peripheral portion (Figure 3L). On the other hand, the defect of Group II was filled by a sufficient volume of the proteoglycan-rich tissue with regenerated subchondral bone tissue (Figure 3F). The detailed surface histomorphological change of each specimen was described in the Table 1. The immunohistochemical observation showed that the type-2 collagen was abundantly expressed in the proteoglycan-rich tissue (Figure 3J). These findings showed that the hyaline cartilage tissue was regenerated in the defect of Group II. Regarding the defect of Groups I and III, we observed the cartilage tissue in the defect although the tissue was not homogenous in Safranin-O staining or type-2 collagen staining.\nSurface evaluations on the histological sections of DN gel-implanted specimens in each group.\nThe range of the maximum depth or overgrowth in each specimen was noted in the parentheses.\nIn high magnification histology of Group II, fairly large round cells rich in cytoplasm were scattered singly or as an isogenous group in a proteoglycan-rich matrix (Figure 4A and 4B). In these cells, type-2 collagen was richly expressed (Figure 4C). At the superficial layer in this tissue, cells were relatively small and sparse, while some cells were aligned as cell columns parallel to the surface (Figure 4D). In addition, the most superficial layer was devoid of cells, resembling the lamina splendens in the normal articular cartilage (Figure 4D).\nHigh magnification histology of Group II. Black scale bars show a length of 20.0 micrometers. Large round cells rich in cytoplasm were scattered singly or as an isogenous group in a proteoglycan-rich matrix (A and B). In these cells, type 2 collagen was richly expressed (C). At the superficial layer, cells were relatively small and aligned parallel to the surface (D). The most superficial layer was devoid of cells, resembling the lamina splendens in the normal articular cartilage (D).", "Concerning the gross appearance score, Group II was significantly greater than Groups I, III, and Control (p = 0.0119, p = 0.0006, and p < 0.0001, respectively) (Figure 5A). Regarding the histology score, Group II was significantly greater than the other groups (p = 0.0004, p < 0.0001, and p < 0.0001, respectively) (Figure 5B). Thus, the total score showed that Group II was significantly greater than the other groups (p = 0.0007, p < 0.0001, and p < 0.0001, respectively) (Figure 5C). In each score, there were no significant differences among Groups I, III, and Control.\nQuantitative evaluations of gross appearance and histology. Concerning each score, Group II was significantly greater than Groups I, III, and Control. 
While there were no significant differences among Groups I, III, and Control.", "In the real time PCR analysis performed at 4 weeks, the degree of expression of type-2 collagen, aggrecan, and SOX9 mRNAs was significantly greater in the regenerated tissue of Group II than that of Control Group (p = 0.0228, p = 0.0165, and p = 0.0172, respectively) (Figure 6). The expressions were seldom seen in the tissues regenerated in the untreated defect.\nReal time PCR analysis. The degree of expression of type-2 collagen, Aggrican, and SOX9 mRNAs was significantly greater in the regenerated tissue of Group II than that of Control Group.", "The present study demonstrated, first, that spontaneous hyaline cartilage regeneration can be induced in vivo in a large osteochondral defect created even in the FT joint by means of implanting a cylindrical PAMPS/PDMAAm DN gel plug at the bottom of the defect in the rabbit so that an approximately 2-mm deep vacant space was intentionally left in the defect. This fact suggested that the spontaneous hyaline cartilage regeneration using the DN gel implantation is not a specific phenomenon in the PF joint but common one in the diarthrodial joints. Secondly, this study showed that the regeneration effect was affected by the position (depth) of the implanted gel plug in the defect. We have reported that mechanical environment generated by repetitive compression forces during weight bearing and the elastic properties of the DN gel located at the bottom of the defect is a significant factor that differentiates stem cells that exist in the defect to chondrocytes [25]. Therefore, it is considered that the position (depth) of the implanted gel plug in the defect is one of significant factors that affect the mechanical environment in the defect, resulting in the outcome of the hyaline cartilage regeneration. In addition, the second study showed that the gene expression measured in Group II correlated to the better results in spontaneous cartilage regeneration. This result suggested that the treatment with a DN gel induced spontaneous hyaline cartilage regeneration by applying unknown effects in the gene level on stem cells infiltrating in the defect.\nWe have speculated the reasons why the regeneration effect was affected by the position (depth) of the implanted gel plug in the defect. First, our previous study has demonstrated that the cartilage regeneration occurs in the blood clot, which is containing mesenchymal stem cells and various cytokines [20]. We believe that a sufficient amount of blood clot must be formed in the defect immediately after surgery for spontaneous cartilage regeneration. Therefore, the reason why the cartilage regeneration was not induced in Group I of the present study (a vacant space of 1.5-mm depth) is considered that the space was so narrow that blood clot could not be sufficiently formed in the vacant space. Secondly, we consider that the reason why the cartilage regeneration was not induced in Group III (3.5-mm depth) of the present study but in Group II (2.5-mm depth) can be explained by the difference of biomechanical conditions. Kelly et al. reported that mechanical signals play an important role in differentiation of bone-marrow derived stem cells [26]. Engler et al. reported that matrix elasticity influences the differentiation of mesenchymal stem cells [18]. In addition, appropriate repetitive compressive stress significantly enhances chondrocyte proliferation as well as aggrecan and collagen synthesis in chondrocytes [27-30]. 
Recently, we have found that joint motion is needed to induce the spontaneous hyaline cartilage regeneration in the osteochondral defect using the DN gel [25]. The joint motion generates repetitive compression forces to the tissue regenerated in the defect. The repetitive forces create mechanical environment in the regenerated tissue, being affected by the location (the depth) and the stiffness of the gel plug that located beneath the tissue. Thus, we speculate that the mechanical microenvironment in the tissue of Group II was appropriate for cartilage regeneration, but that the mechanical microenvironment in the defect of Group III was inappropriate.\nThe results of the present study have prompted us to propose an innovative strategy to clinically repair various osteochondral lesions in the femoral condyle with the DN gel implantation and induction of the spontaneous cartilage regeneration. We should note that this therapeutic strategy is new and completely different in the concept from the current progressive strategies that completely fill the defected space with the tissue-engineered cartilage tissue, cell-seeded scaffold material implantation, or acellular polymer scaffolds with signaling molecules [11-13]. Numerous problems exist for current treatment strategies for chondral and osteochondral defects including but not limited to donor site morbidity, multiple surgeries required, prolonged limitations in activity, and significant financial costs [7-10]. We believe that the spontaneous regeneration strategy has potential to solve almost all of the above-described problems of the current progressive strategies. Therefore, the spontaneous regeneration strategy should be studied as a realistic research focus in greater detail in the near future. For example, we reported that the depth of 2.5 mm is optimal for lesions in the rabbit medial femoral condyle in this study. However, we did not analyze the relationship among the depth of the plug, the depth of the whole defect space, and the height of the plug. In addition, we did not analyze on the influence of the ratio of cartilage thickness to depth of lesion on cartilage regeneration. For the possible clinical use of this treatment strategy, further studies should be conducted to clarify these issues in the near future.\nRegarding the safety of the PAMPS/PDMAAm DN gel as a biomaterial, we conducted a pellet implantation test into the para-vertebral muscle [23], according to the guideline for biological evaluation of the safety of biomaterials, which had been published by the Ministry of Health, Labour and Welfare, Japan. Although this DN gel implantation induced a mild cell infiltration at 1 week, the degree of the inflammation significantly decreased into the same degree as that of the negative control at 4 and 6 weeks. We also cultured ATDC5 cells on the PAMPS/PDMAAm DN gel [20,24]. No harmful effects due to the DN gel surface were detected. We believe that the PAMPS/PDMAAm DN gel is a safe biomaterial. However, we have not completed to establish the clinical safety of this DN gel as an implant. Further studies are needed to establish the clinical safety of this gel in the near future.\nThere are some limitations in this study. The first limitation is that the number of animals was insufficient in the second study because the statistical power was 0.55, although the power in the first study was sufficient (0.9). However, we should note that the second study showed the statistical significant difference beyond the low power. 
Therefore, we considered a sample of 4 rabbits acceptable for this experimental study. The second limitation is that we did not perform long-term observation of the regenerated cartilage above the DN gel or of the border between the cartilage and the original tissue. A long-term evaluation study needs to be performed as a follow-up to this study. The third limitation is that we have not completely clarified the mechanism of the spontaneous cartilage tissue regeneration by the DN gel implantation. Further studies are needed in the near future to clarify the comprehensive in vivo mechanism of the spontaneous hyaline cartilage regeneration induced in a large osteochondral defect by means of implanting a cylindrical PAMPS/PDMAAm DN gel plug at the bottom of the defect.", "This study demonstrated that spontaneous hyaline cartilage regeneration can be induced in vivo in an osteochondral defect created in the femoral condyle by means of implanting the DN gel plug at the bottom of the defect so that an approximately 2-mm deep vacant space was intentionally left in the defect. This fact has prompted us to propose an innovative strategy without cell culture to repair osteochondral lesions in the femoral condyle.", "The authors declare that they have no competing interests.", "MY performed animal experiments. KY designed the study, participated in the study, and drafted the manuscript. NK and KA participated in designing the study and instructed animal experiments. SO performed the PCR analysis. TK and JPG created the DN-gel material. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2474/12/49/prepub\n" ]
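The relative quantitation behind the rabbit real time PCR comparison (Group II versus the untreated control, normalized to GAPDH) is the standard 2^-(delta-delta Ct) calculation. The snippet below is a minimal illustration of that arithmetic with invented Ct values; it is not the TakaraBio Thermal Cycler Dice software's implementation, and the gene and group names are used only as placeholders.

```python
# Minimal 2^-(delta-delta Ct) relative quantification, as used for the
# GAPDH-normalized comparison of regenerated tissue versus the untreated control.
# All Ct values below are invented for illustration only.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Return the fold change of the target gene in the sample versus the control."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # delta Ct (sample)
    d_ct_control = ct_target_control - ct_ref_control   # delta Ct (control)
    dd_ct = d_ct_sample - d_ct_control                  # delta-delta Ct
    return 2.0 ** (-dd_ct)

# Hypothetical mean Ct values (target = type-2 collagen, reference = GAPDH).
fold = relative_expression(ct_target_sample=24.1, ct_ref_sample=17.8,
                           ct_target_control=29.6, ct_ref_control=18.0)
print(f"Type-2 collagen, Group II relative to control: {fold:.1f}-fold")
```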
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Identification and validation of genes involved in cervical tumourigenesis.
21338529
Cervical cancer is the most common cancer among Indian women. This cancer has well defined pre-cancerous stages and evolves over 10-15 years or more. This study was undertaken to identify differentially expressed genes between normal, dysplastic and invasive cervical cancer.
BACKGROUND
A total of 28 invasive cervical cancers, 4 CIN3/CIS, 4 CIN1/CIN2 and 5 Normal cervix samples were studied. We have used microarray technique followed by validation of the significant genes by relative quantitation using Taqman Low Density Array Real Time PCR. Immunohistochemistry was used to study the protein expression of MMP3, UBE2C and p16 in normal, dysplasia and cancers of the cervix. The effect of a dominant negative UBE2C on the growth of the SiHa cells was assessed using a MTT assay.
MATERIALS AND METHODS
Our study, for the first time, has identified 20 genes to be up-regulated and 14 down-regulated in cervical cancers and 5 up-regulated in CIN3. In addition, 26 genes identified by other studies as playing a role in cervical cancer were also confirmed in our study. UBE2C, CCNB1, CCNB2, PLOD2, NUP210, MELK, CDC20 genes were overexpressed in tumours and in CIN3/CIS relative to both Normal and CIN1/CIN2, suggesting that they could have a role to play in the early phase of tumorigenesis. IL8, INDO, ISG15, ISG20, AGRN, DTXL, MMP1, MMP3, CCL18, TOP2A and STAT1 were found to be upregulated in tumours. Using immunohistochemistry, we showed over-expression of MMP3, UBE2C and p16 in cancers compared to normal cervical epithelium and varying grades of dysplasia. A dominant negative UBE2C was found to produce growth inhibition in SiHa cells, which over-express UBE2C 4-fold more than HEK293 cells.
RESULTS
Several novel genes were found to be differentially expressed in cervical cancer. MMP3, UBE2C and p16 protein overexpression in cervical cancers was confirmed by immunohistochemistry. These will need to be validated further in a larger series of samples. UBE2C could be evaluated further to assess its potential as a therapeutic target in cervical cancer.
CONCLUSIONS
[ "Alphapapillomavirus", "Cell Transformation, Neoplastic", "Cells, Cultured", "DNA, Viral", "Female", "Gene Expression Profiling", "Gene Expression Regulation, Neoplastic", "Genes, Neoplasm", "Genes, Viral", "HeLa Cells", "Humans", "Microarray Analysis", "Uterine Cervical Neoplasms", "Uterine Cervical Dysplasia" ]
3050856
null
null
Methods
Archival total RNA extracted from punch biopsy samples from patients with cervical cancer, collected in RNA later (Ambion, Austin, USA; Cat no: AM7021) and stored in the tumour bank after informed consent, was used, after obtaining the Institutional Ethical Committee's approval for the study. The RNA had been extracted from the biopsy samples using the RNeasy RNA extraction kit (Qiagen, Gmbh, Hilden; Cat no: 74106) as per the manufacturer's instructions. Twenty eight cervical cancer patients' samples were included in the study. The criteria for inclusion in the study were as follows: 1. good quality RNA as assessed by Bio-analyser (RIN 6 or above); 2. paired paraffin block having at least 70% tumour cells; 3. sufficient quantity of RNA available; 4. patient should have completed prescribed radiotherapy, with follow-up information till death/last disease-free status available. In addition, 5 normal cervix tissues from women who underwent hysterectomy for non-malignant conditions or for non-cervical cancer were included. Four CIN1/CIN2 and 4 CIN3/CIS (one CIN3/CIS was included for RQ-RT-PCR analysis directly) were also included after informed consent. The Normal and CIN samples underwent frozen section to confirm their histopathologic status and the samples were immediately snap frozen in liquid nitrogen. RNA was extracted from the samples using the RNeasy RNA extraction kit, as described above. [SUBTITLE] HPV Testing [SUBSECTION] The quality of the DNA was assessed by amplifying for β globin and only then HPV testing was done using GP5+ and GP6+ primers [9]. HPV16 and 18 typing was done using the Nested Multiplex Polymerase Chain Reaction (NMPCR) technique [10]. SiHa DNA for HPV16 and HeLa DNA for HPV18 (positive controls) and C33A DNA (negative control) were included in all runs. [SUBTITLE] Microarray experiment [SUBSECTION] 1 μg of total RNA from the tumour/CIN/Normal sample and universal RNA (Stratagene; Cat no: 740000-41) were reverse transcribed using Arrayscript at 42°C for 2 hrs to obtain cDNA using the Amino Allyl MessageAmp II aRNA amplification kit (Ambion, Austin, USA; Cat no: AM1797). The cDNA was amplified by in-vitro transcription in the presence of T7 RNA polymerase; aRNA thus obtained was purified and quantitated in NanoDrop (NanoDrop Technologies, Wilmington, DE, USA). 20 μg of tumour/CIN/Normal aRNA was labelled using NHS ester of Cy5 dye and the control universal aRNA was labelled using NHS ester of Cy3 dye. The Cy3 and Cy5 labelled aRNA was used for hybridization onto the microarray chips from Stanford Functional Genomics Facility (SFGF, Stanford, CA) containing 44,544 spots, for 16 hrs in a Lucidea SlidePro hybridization chamber (GE Health Care, Uppsala, Sweden) at 42°C. After hybridization, slides were washed in 0.1× SSC, 1× SSC followed by 0.1× SSC and dried. The slides were scanned in ProScanArray (PerkinElmer, Shelton, CT, USA). Gridding was done using the ScanArray Express software package (version 4). The integrated or mean intensity of signal within the spot was calculated. The files were saved as GPR files. All the raw data files have been submitted to GEO with an assigned GEO accession number - GSE14404. 
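Downstream of scanning, the quantity actually analysed for each spot is the log2 ratio of the Cy5 (sample) intensity to the Cy3 (universal reference) intensity, median-centred per array. The sketch below shows only that arithmetic on a toy intensity table; it is not the BRB-ArrayTools import code, the GPR parsing step is omitted, and all intensity values are invented.

```python
import numpy as np

# Toy foreground median intensities for a handful of spots on one array
# (values invented; real data would come from the GPR files).
cy5_sample = np.array([1200.0, 350.0, 8000.0, 560.0, 75.0])    # tumour/CIN/Normal aRNA channel
cy3_reference = np.array([900.0, 400.0, 2000.0, 640.0, 80.0])  # universal reference aRNA channel

# Per-spot log2 ratios (no background correction, as described in the text).
log_ratios = np.log2(cy5_sample / cy3_reference)

# Global normalization: median-centre the log ratios on each array to
# adjust for the overall Cy5/Cy3 labelling-intensity difference.
normalized = log_ratios - np.median(log_ratios)

print("raw log2 ratios:       ", np.round(log_ratios, 2))
print("median-centred ratios: ", np.round(normalized, 2))
```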
[SUBTITLE] Microarray data analysis [SUBSECTION] The Foreground Median intensity for Cy3 and Cy5, Background Median intensity for Cy3 and Cy5, and spot size data were imported into BRB-ArrayTools software [11] using the Import wizard function. Background correction was not done. Global normalization was used to median centre the log-ratios on each array in order to adjust for differences in labelling intensities of the Cy3 and Cy5 dyes. The data was analysed using the Class comparison and Class prediction modules in the BRB-Array Tools software. In addition, Lowess normalization was also done separately and the data analysed using the modules mentioned above. The normalized Log ratios were also imported into Significance Analysis of Microarray (SAM) [12] software and analysed. [SUBTITLE] Class Comparison in BRB-Array Tools [SUBSECTION] We identified genes that were differentially expressed among the four classes (Normal, CIN1/2, CIN3/CIS, Cancer) using a random-variance t-test. The random-variance t-test is an improvement over the standard separate t-test as it permits sharing information among genes about within-class variation without assuming that all genes have the same variance [13]. Genes were considered statistically significant if their p value was < 0.01. In addition, a two fold difference was required between the Cancer and Normal, CIN3/CIS and Normal, and CIN1/2 and Normal. The same was repeated with the Lowess normalized data using the same criteria. 
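The random-variance t-test itself is specific to BRB-ArrayTools, but the shape of the class-comparison step, a per-gene test at p < 0.01 combined with a two-fold difference between classes, can be illustrated with an ordinary two-sample t-test, as in the sketch below on simulated log2 ratios. The gene names and values are invented and the substitution of a standard t-test for the random-variance test is an explicit simplification.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated median-centred log2 ratios: 100 genes x (10 cancers + 5 normals).
genes = [f"gene_{i}" for i in range(100)]
cancer = rng.normal(0.0, 0.5, size=(100, 10))
normal = rng.normal(0.0, 0.5, size=(100, 5))
cancer[:5] += 1.5          # spike in a few genuinely up-regulated genes

selected = []
for i, name in enumerate(genes):
    # Ordinary two-sample t-test; the paper used a random-variance t-test,
    # which additionally shares variance information across genes.
    t, p = stats.ttest_ind(cancer[i], normal[i])
    log2_fc = cancer[i].mean() - normal[i].mean()   # difference of mean log2 ratios
    if p < 0.01 and abs(log2_fc) >= 1.0:            # at least a 2-fold difference
        selected.append((name, log2_fc, p))

for name, fc, p in selected:
    print(f"{name}: log2 fold change = {fc:+.2f}, p = {p:.3g}")
```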
[SUBTITLE] Class prediction in BRB-Array Tools [SUBSECTION] We developed models for utilizing gene expression profile to predict the class of future samples based on the Diagonal Linear Discriminant Analysis and Nearest Neighbour Classification [11]. The models incorporated genes that were differentially expressed among genes at the 0.01 significance level as assessed by the random variance t-test [13]. We estimated the prediction error of each model using leave-one-out cross-validation (LOOCV) as described [14]. Leave-one-out cross-validation method was used to compute mis-classification rate. From the list, genes were sorted further based on 2 fold difference between Cancer versus CIN1/2 & Normal, CIN3/CIS versus CIN1/2 & Normal, and CIN1/2 versus Normal. The same was repeated with the Lowess normalized data using a significance value of 0.01. [SUBTITLE] SAM Analysis [SUBSECTION] The normalized log ratios of all the samples were imported into SAM software and analysed. A Multi-class analysis with 100 permutations was done. A delta value of 0.96 and a fold difference of 2 was used to identify the genes differentially expressed. [SUBTITLE] Quantitative Real time PCR [SUBSECTION] High Capacity Reverse Transcription kit (Applied Biosystems, Foster City, CA; Cat no: 4368814) was used to reverse transcribe 2 μg of total RNA from the 38 samples in a 20 μl reaction volume. In 3 samples, due to the limiting amount of RNA, 0.75 μg was used for the cDNA synthesis. These cDNA samples were used for real time PCR amplification assays using TaqMan® arrays formerly TaqMan® Low density arrays (TLDA) (Applied Biosystems, Foster City, CA; Cat no: 4342261). 
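Before moving on to the TaqMan assays, the leave-one-out cross-validation scheme described in the Class prediction subsection above can be illustrated with the generic sketch below, using a one-nearest-neighbour classifier on simulated expression profiles. This is not the BRB-ArrayTools implementation, and the sample sizes, labels and values are invented.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Simulated profiles of the selected genes: 15 samples x 20 genes,
# labelled as cancer (1) or normal (0).  Values are invented.
X = np.vstack([rng.normal(1.0, 0.5, size=(10, 20)),   # cancers
               rng.normal(0.0, 0.5, size=(5, 20))])   # normals
y = np.array([1] * 10 + [0] * 5)

errors = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    # Refit the classifier on all samples except the held-out one.
    clf = KNeighborsClassifier(n_neighbors=1)
    clf.fit(X[train_idx], y[train_idx])
    errors += int(clf.predict(X[test_idx])[0] != y[test_idx][0])

print(f"LOOCV misclassification rate: {errors / len(y):.2f}")
```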
The fluorogenic, FAM labelled probes and the sequence specific primers for the list of genes with endogenous control 18S rRNA were obtained as inventoried assays and incorporated into the TaqMan® array format. Quadruplicate (n = 38) and duplicate (n = 3; with limiting amount of RNA for cDNA synthesis) cDNA template samples were amplified and analysed on the ABI Prism 7900HT sequence detection system (Applied Biosystems, Foster City, CA). The reaction set up, briefly, consisted of 1.44 μg of cDNA template made up to 400 μl with deionised water and equal amounts of TaqMan® Universal PCR Master Mix (Applied Biosystems, Foster City, CA; Cat no: 4304437). 100 μl was loaded into each of the 8 ports of the array (2 ports comprise of one sample replicate on the array). Thus, the samples run as duplicates were only loaded into 4 ports of the array. Thermal cycling conditions included a 50°C step for 2 minutes, denaturation for 10 min at 94°C followed by 40 cycles consisting of 2 steps: 97°C for 30 seconds and 59.7°C for 1 minute for annealing and extension. The raw data from the Prism 7900HT sequence detection system was imported into the Real-Time StatMiner™ software for statistical analysis of the data. Among the endogenous reference genes included on the array (18S ribosomal gene; UBC, β2 microglobulin), UBC and β2 microglobulin were chosen after visualizing the global Ct value distribution, for normalizing the data (Supplementary figure 1). The TLDA assays were run at LabIndia Instruments Pvt Ltd laboratories at Gurgaon, New Delhi. [SUBTITLE] Immunohistochemistry (IHC) [SUBSECTION] IHC was done for MMP3 protein expression in 5 Normal cervical tissue, 30 dysplasias of varying grade (CIN1 - 11; CIN2 - 8; CIN3/CIS - 11) and 27 invasive cervical cancers. A 3 layered ABC technique was used as described previously [15]. MMP3, monoclonal antibody (Sigma Aldrich, India; cat no: M6552) was used at a dilution of 1:75 and with wet antigen retrieval method. Positive control (section from a pancreatic cancer) and negative control (omission of primary antibody) were included in each run. The slides were scored by SS and TR independently and where discordant, jointly. The scoring was based on percentage of tumour cells immunoreactive (negative - 0; <25% = 1; 25-50% - 2; 51 - 75% - 3; >75% - 4), intensity of immunoreactivity (negative - 0; + - 1; ++ - 2; +++ - 3) and the compartment stained (cytoplasmic, nuclear or stromal). The scores obtained were added and the threshold was set at above the scores seen in the Normal cervical tissue (maximum score seen in Normal cervical tissue was 8). Hence tissues with a score of 9 or above were considered to overexpress MMP3. p16 IHC was done as described previously [16] on 5 normal cervical tissue, 31 dysplasias of varying grades (CIN1 - 12; CIN2 - 8; CIN3/CIS - 11) and 29 tumours. Slides were scored as reported previously [16]. UBE2C IHC was done as above using wet autoclaving with a hold time of 5 minutes. Rabbit UBE2C polyclonal antibody (Millipore, USA - catalogue no: AB3861) was used at 1 in 100 dilution. The scoring was done similar to the scoring of MMP3 staining, with the maximum score seen in normal cervical tissue being 6. Hence a score of 7 or above was considered to be overexpression. IHC was done for MMP3 protein expression in 5 Normal cervical tissue, 30 dysplasias of varying grade (CIN1 - 11; CIN2 - 8; CIN3/CIS - 11) and 27 invasive cervical cancers. A 3 layered ABC technique was used as described previously [15]. 
The MMP3 monoclonal antibody (Sigma Aldrich, India; cat no: M6552) was used at a dilution of 1:75 and with a wet antigen retrieval method. A positive control (section from a pancreatic cancer) and a negative control (omission of primary antibody) were included in each run. The slides were scored by SS and TR independently and, where discordant, jointly. The scoring was based on the percentage of tumour cells immunoreactive (negative - 0; <25% - 1; 25-50% - 2; 51-75% - 3; >75% - 4), the intensity of immunoreactivity (negative - 0; + - 1; ++ - 2; +++ - 3) and the compartment stained (cytoplasmic, nuclear or stromal). The scores obtained were added and the threshold was set above the scores seen in the Normal cervical tissue (maximum score seen in Normal cervical tissue was 8). Hence tissues with a score of 9 or above were considered to overexpress MMP3. p16 IHC was done as described previously [16] on 5 normal cervical tissues, 31 dysplasias of varying grades (CIN1 - 12; CIN2 - 8; CIN3/CIS - 11) and 29 tumours. Slides were scored as reported previously [16]. UBE2C IHC was done as above using wet autoclaving with a hold time of 5 minutes. Rabbit UBE2C polyclonal antibody (Millipore, USA - catalogue no: AB3861) was used at 1 in 100 dilution. The scoring was done similarly to the scoring of MMP3 staining, with the maximum score seen in normal cervical tissue being 6. Hence a score of 7 or above was considered to be overexpression. [SUBTITLE] UBE2C in cervical cancer cell lines [SUBSECTION] Taqman Real time PCR was done for UBE2C levels in SiHa, C33A, HeLa, ME180, BU25K and HEK293 (Human embryonic kidney cells) cell lines. GAPDH was used to normalize the data. Dominant negative UBE2C, in which Cysteine 114 is replaced by Serine, leading to loss of catalytic activity [17], was introduced into SiHa cells, using Fugene 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's instructions using a 3:2 Fugene/DNA ratio. The effect on growth was assessed using the MTS assay (Promega) in the SiHa wild type (WT), in SiHa with pcDNA vector alone (SiHa pcDNA) and in SiHa with dominant negative UBE2C (SiHa DN-UBE2C). [SUBTITLE] Statistical analysis [SUBSECTION] Comparison between group means was assessed using a one-way ANOVA and multiple-comparison correction by the Holm-Sidak method using Sigmaplot version 11.0. Fisher's exact test (2 tailed) was used to assess the significance of IHC immuno-reactivity between cancers and dysplasias. 
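The IHC scoring rule above reduces to summing the component points and calling anything above the maximum score seen in normal cervix (8 for MMP3) over-expressed; differences in positivity between groups were then tested with a two-tailed Fisher's exact test. The sketch below illustrates both steps in Python. How the compartment component is weighted is not fully specified in the text, so the function simply sums whatever component points are supplied, and the 2 x 2 counts are invented, not the study's results.

```python
from scipy.stats import fisher_exact

def composite_ihc_score(component_points, normal_max=8):
    """Sum the component points (percentage immunoreactive, intensity and,
    as described above, the compartment stained); scores above the maximum
    seen in normal cervical tissue are called over-expression."""
    total = sum(component_points)
    return total, total > normal_max

score, positive = composite_ihc_score([4, 3, 2])   # hypothetical component points
print(f"composite score = {score}, over-expressed = {positive}")

# Two-tailed Fisher's exact test on over-expression counts
# (rows: cancers, dysplasias; columns: over-expressed, not over-expressed).
table = [[20, 7],
         [8, 22]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher's exact test: OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```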
[SUBTITLE] UBE2C in cervical cancer cell lines [SUBSECTION] TaqMan real-time PCR was done for UBE2C levels in the SiHa, C33A, HeLa, ME180, BU25K and HEK293 (human embryonic kidney) cell lines. GAPDH was used to normalize the data. Dominant negative UBE2C, in which cysteine 114 is replaced by serine, leading to loss of catalytic activity [17], was introduced into SiHa cells using Fugene 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's instructions, with a 3:2 Fugene/DNA ratio. The effect on growth was assessed using the MTS assay (Promega) in SiHa wild type (WT), SiHa with pcDNA vector alone (SiHa pcDNA) and SiHa with dominant negative UBE2C (SiHa DN-UBE2C).
[SUBTITLE] Statistical analysis [SUBSECTION] Comparison between group means was assessed using one-way ANOVA with multiple-comparison correction by the Holm-Sidak method, using SigmaPlot version 11.0. Fisher's exact test (two-tailed) was used to assess the significance of differences in IHC immunoreactivity between cancers and dysplasias.
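For the group-mean comparisons, the SigmaPlot one-way ANOVA with Holm-Sidak post-hoc correction can be approximated in Python as below; the absorbance values for the three SiHa groups are invented for illustration, and statsmodels' Holm-Sidak adjustment of pairwise t-test p values is used as a stand-in for SigmaPlot's procedure.

```python
from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical MTS absorbance readings at one time point for the three SiHa groups.
groups = {
    "SiHa WT":       [1.10, 1.15, 1.08, 1.12],
    "SiHa pcDNA":    [1.05, 1.09, 1.07, 1.11],
    "SiHa DN-UBE2C": [0.72, 0.78, 0.70, 0.75],
}

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")

# Pairwise comparisons with Holm-Sidak correction (approximating the post-hoc test).
pairs = list(combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
reject, adj_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm-sidak")
for (a, b), p_adj, sig in zip(pairs, adj_p, reject):
    print(f"{a} vs {b}: adjusted p = {p_adj:.4g}, significant = {sig}")
```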
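The Fisher's exact comparisons of IHC positivity reduce to 2 × 2 tables of overexpressing versus non-overexpressing cases. As a worked example using the counts reported later for MMP3 in the radiotherapy-only subgroup (6 of 9 treatment failures versus 2 of 12 disease-free patients overexpressing), a two-tailed test should give a p value of roughly 0.03, in line with the reported figure:

```python
from scipy.stats import fisher_exact

# Rows: treatment failure vs disease-free at 3 years; columns: MMP3 overexpressed yes/no.
table = [[6, 3],
         [2, 10]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-tailed p = {p_value:.3f}")  # p is about 0.03
```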
[ "Background", "HPV Testing", "Microarray experiment", "Microarray dasta analysis", "Class Comparison in BRB-Array Tools", "Class prediction in BRB-Array Tools", "SAM Analysis", "Quantitative Real time PCR", "Immunohistochemistry (IHC)", "UBE2C in cervical cancer cell lines", "Statistical analysis", "Results", "Discussion", "Conclusion", "Conflict of interests", "Authors' contributions", "Pre-publication history" ]
[ "Cervical cancer is the second most common cancer among women worldwide and the most common cancer in Indian women [1]. In most developing countries there are no organized screening programmes, as a result most patients report to tertiary centres in locally advanced stages.\nHuman papilloma viruses (HPV) have been shown to play a major role in the pathogenesis of cervical cancer, but it alone is not sufficient [2]. Additional events, activation of proto-oncogenes and inactivation of tumour suppressor genes, are required in the induction of cervical cancer.\nCervical cancer goes through a series of pre-malignant stages - Cervical Intraepithelial Neoplasia (CIN) 1, 2 and 3. In general it takes upto about 10 - 15 years for the normal cervical epithelial cell to become a malignant one. However, some CIN2 lesions may develop soon after HPV infection, suggesting that there could be alternate pathways involved. CIN1 and 2 have a higher rate of spontaneous reversion compared to CIN3 [3]. The CIN3 then progresses to invasive carcinoma, which can then metastasize to regional lymph nodes and distant organs (e.g. lung).\nThe advent of microarray based technology has helped study the expression patterns of more than 40,000 genes at a time [4]. Several groups have used microarray based technology to look for differentially expressed genes in the different stages of cervical tumorigenesis [5,6]. Few studies have followed up and validated the microarray data in a large number of genes [7,8]. The objective of our study was to identify genes differentially expressed between normal cervix, CIN1/CIN2, CIN3/CIS and invasive cervical cancer, using oligo-microarray technique, validate the genes so identified using Relative quantitation Real Time Polymerase Chain Reaction (RQ-RT-PCR) and detect potential biomarkers for early diagnosis and therapeutic targets.", "The quality of the DNA was assessed by amplifying for β globin and only then HPV testing was done using GP5+ and GP6+ primers [9]. HPV16 and 18 typing was done using Nested Multiplex Polymerase Chain Reaction (NMPCR) technique [10]. SiHa DNA for HPV16 and HeLa DNA for HPV18 (positive controls) and C33A DNA (negative control) were included in all runs.", "1 μg of total RNA from the tumour/CIN/Normal sample and universal RNA (Stratagene; Cat no: 740000-41) were reverse transcribed using Arrayscript at 42°C for 2 hrs to obtain cDNA using the Amino Allyl MessageAmp II aRNA amplification kit (Ambion, Austin, USA; Cat no: AM1797). The cDNA was amplified by in-vitro transcription in the presence of T7 RNA polymerase; aRNA thus obtained was purified and quantitated in NanoDrop (NanoDrop Technologies, Wilmington, DE, USA). 20 μg of tumour/CIN/Normal aRNA was labelled using NHS ester of Cy5 dye and the control universal aRNA was labelled using NHS ester of Cy3 dye. The Cy3 and Cy5 labelled aRNA was used for hybridization onto the microarray chips from Stanford Functional Genomics Facility (SFGF, Stanford, CA) containing 44,544 spots, for 16 hrs in Lucidea SlidePro hybridization chamber (GE Health Care, Uppsala, Sweden) at 42°C. After hybridization, slides were washed in 0.1× SSC, 1× SSC followed by 0.1× SSC and dried.\nThe slides were scanned in ProScanArray (PerkinElmer, Shelton, CT, USA). Griding was done using Scan array Express software package (version -4). The integrated or mean intensity of signal within the spot was calculated. 
The files were saved as GPR files.\nAll the raw data files have been submitted to GEO with an assigned GEO accession number - GSE14404.", "The Foreground Median intensity for Cy3 and Cy5, Background Median intensity for Cy3 and Cy5, spot size data were imported into BRB-ArrayTools software [11] using the Import wizard function. Background correction was not done. Global normalization was used to median centre the log-ratios on each array in order to adjust for differences in labelling intensities of the Cy3 and Cy5 dyes. The data was analysed using the Class comparison and Class prediction modules in the BRB-Array Tools software. In addition, Lowess normalization was also done separately and the data analysed using the modules mentioned above. The normalized Log ratios were also imported into Significance Analysis of Microarray (SAM) [12] software and analysed.", "We identified genes that were differentially expressed among the four classes (Normal, CIN1/2, CIN3/CIS, Cancer) using a random-variance t-test. The random-variance t-test is an improvement over the standard separate t-test as it permits sharing information among genes about within-class variation without assuming that all genes have the same variance [13]. Genes were considered statistically significant if their p value was < 0.01. In addition a two fold difference was required between the Cancer and Normal, CIN3/CIS and Normal, CIN1/2 and Normal. The same was repeated with the Lowess normalized data using the same criteria.", "We developed models for utilizing gene expression profile to predict the class of future samples based on the Diagonal Linear Discriminant Analysis and Nearest Neighbour Classification [11]. The models incorporated genes that were differentially expressed among genes at the 0.01 significance level as assessed by the random variance t-test [13]. We estimated the prediction error of each model using leave-one-out cross-validation (LOOCV) as described [14]. Leave-one-out cross-validation method was used to compute mis-classification rate. From the list, genes were sorted further based on 2 fold difference between Cancer versus CIN1/2 & Normal, CIN3/CIS versus CIN1/2 & Normal, and CIN1/2 versus Normal. The same was repeated with the Lowess normalized data using a significance value of 0.01.", "The normalized log ratios of all the samples were imported into SAM software and analysed. A Multi-class analysis with 100 permutations was done. A delta value of 0.96 and a fold difference of 2 was used to identify the genes differentially expressed.", "High Capacity Reverse Transcription kit (Applied Biosystems, Foster City, CA; Cat no: 4368814) was used to reverse transcribe 2 μg of total RNA from the 38 samples in a 20 μl reaction volume. In 3 samples, due to the limiting amount of RNA, 0.75 μg was used for the cDNA synthesis.\nThese cDNA samples were used for real time PCR amplification assays using TaqMan® arrays formerly TaqMan® Low density arrays (TLDA) (Applied Biosystems, Foster City, CA; Cat no: 4342261). The fluorogenic, FAM labelled probes and the sequence specific primers for the list of genes with endogenous control 18S rRNA were obtained as inventoried assays and incorporated into the TaqMan® array format. 
Quadruplicate (n = 38) and duplicate (n = 3; with a limiting amount of RNA for cDNA synthesis) cDNA template samples were amplified and analysed on the ABI Prism 7900HT sequence detection system (Applied Biosystems, Foster City, CA).\nBriefly, the reaction consisted of 1.44 μg of cDNA template made up to 400 μl with deionised water and an equal volume of TaqMan® Universal PCR Master Mix (Applied Biosystems, Foster City, CA; Cat no: 4304437). 100 μl was loaded into each of the 8 ports of the array (two ports comprise one sample replicate on the array); the samples run as duplicates were therefore loaded into only 4 ports. Thermal cycling conditions included a 50°C step for 2 minutes and denaturation for 10 min at 94°C, followed by 40 cycles of two steps: 97°C for 30 seconds, then 59.7°C for 1 minute for annealing and extension.\nThe raw data from the Prism 7900HT sequence detection system were imported into the Real-Time StatMiner™ software for statistical analysis. Among the endogenous reference genes included on the array (18S rRNA, UBC, β2 microglobulin), UBC and β2 microglobulin were chosen for normalizing the data after visualizing the global Ct value distribution (Supplementary figure 1). The TLDA assays were run at LabIndia Instruments Pvt Ltd laboratories at Gurgaon, New Delhi.", "IHC was done for MMP3 protein expression in 5 normal cervical tissues, 30 dysplasias of varying grade (CIN1 - 11; CIN2 - 8; CIN3/CIS - 11) and 27 invasive cervical cancers. A 3-layered ABC technique was used as described previously [15]. The MMP3 monoclonal antibody (Sigma Aldrich, India; cat no: M6552) was used at a dilution of 1:75 with a wet antigen retrieval method. A positive control (section from a pancreatic cancer) and a negative control (omission of primary antibody) were included in each run. The slides were scored by SS and TR independently and, where discordant, jointly. The scoring was based on the percentage of tumour cells immunoreactive (negative - 0; <25% - 1; 25-50% - 2; 51-75% - 3; >75% - 4), the intensity of immunoreactivity (negative - 0; + - 1; ++ - 2; +++ - 3) and the compartment stained (cytoplasmic, nuclear or stromal). The scores obtained were added, and the threshold was set above the scores seen in normal cervical tissue (the maximum score seen in normal cervical tissue was 8); hence tissues with a score of 9 or above were considered to overexpress MMP3.\np16 IHC was done as described previously [16] on 5 normal cervical tissues, 31 dysplasias of varying grades (CIN1 - 12; CIN2 - 8; CIN3/CIS - 11) and 29 tumours. Slides were scored as reported previously [16].\nUBE2C IHC was done as above using wet autoclaving with a hold time of 5 minutes. Rabbit UBE2C polyclonal antibody (Millipore, USA; catalogue no: AB3861) was used at a 1 in 100 dilution. The scoring was similar to that of the MMP3 staining, with the maximum score seen in normal cervical tissue being 6; hence a score of 7 or above was considered to be overexpression.", "TaqMan real-time PCR was done for UBE2C levels in the SiHa, C33A, HeLa, ME180, BU25K and HEK293 (human embryonic kidney) cell lines. GAPDH was used to normalize the data.\nDominant negative UBE2C, in which cysteine 114 is replaced by serine, leading to loss of catalytic activity [17], was introduced into SiHa cells using Fugene 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's instructions, with a 3:2 Fugene/DNA ratio. The effect on growth was assessed using the MTS assay (Promega) in SiHa wild type (WT), SiHa with pcDNA vector alone (SiHa pcDNA) and SiHa with dominant negative UBE2C (SiHa DN-UBE2C).", "Comparison between group means was assessed using one-way ANOVA with multiple-comparison correction by the Holm-Sidak method, using SigmaPlot version 11.0. Fisher's exact test (two-tailed) was used to assess the significance of differences in IHC immunoreactivity between cancers and dysplasias.", "The stage distribution of the invasive cancer cases was as follows: IB - 2, IIA - 4, IIB - 18 and IIIB - 4. Twenty-seven of the tumours were squamous cell carcinomas (18 large cell non-keratinizing, 5 large cell keratinizing and 4 unspecified) and one was a poorly differentiated carcinoma. Eighteen were HPV16 positive, 6 were HPV18 positive and 4 were HPV16 and 18 subtype negative (but HPV positive). All the Normals were HPV negative, while one CIN1/2 and all the CIN3/CIS were HPV16 positive.\nUsing the different methods described above, genes that were differentially expressed between the four classes (Normal, CIN1/2, CIN3/CIS and Cancer) were identified. We did not use a training set and a test set for the class prediction model but used LOOCV for cross-validation to obtain the mis-classification error. The list of genes significant by the different methods of microarray analysis is given in Additional File 1 (AF1).\nSixty-nine genes were selected for further validation by RQ-PCR using the Taqman Low Density Array card (TLDA) format (Additional File 2). These 69 genes formed part of the 95 genes selected for analysis using the TLDA format. 
The additional genes were those which had been found to be differentially expressed between the responders and non-responders to radiotherapy only treatment. Apart from the mandatory endogenous 18S rRNA included in the TLDA cards, based on the microarray data, UBC and β2 microglobulin, were included as additional endogenous reference genes.\nTwo of the samples CXL19-hov160 and CXM024-hov210 which had worked in microarray did not amplify satisfactorily in the RQ-TLDA assay and had to be removed from further analysis. In addition, RPS3A gene did not amplify in any of the samples.\nThe RQ values after calibrating with the Normal samples (Mean) for all the 94 genes showed 8 additional genes to be overexpressed; 4 (ASB16, CCL18, FST, THOC6) in Cancers, 1 (KLK9) in CIN3/CIS and 3 (RASSF6, TMEM123 and GLB1L3) in CIN1/2 samples. These 8 genes had initially been chosen for validation of the differentially expressed genes between responders and non-responders to radiotherapy. After excluding the genes which did not amplify, we now had 76 genes for further analysis.\nOf the 31 genes which had been selected based on a greater than 2 fold difference between cancer versus CIN1/2 & Normal, 28 were concordant between the microarray data and the RQ-RT-PCR (Concordant rate of 90%). Three of four genes selected based on higher level of expression in Normals compared with all other classes showed concordance between the different methods of analysis. In the case of CIN1/2, concordance was seen in 6/7 genes (86%). However, with CIN3, this dropped to 41% (11/27). In four additional genes, there was a two fold greater difference between CIN3/CIS and Normal but not with CIN1/2. The overall concordance rate between the microarray data and the RQ-RT-PCR was 70% (48/69).\nThe list of genes validated and found to have a greater than 2 fold difference compared to the Normal, in the 3 different classes (Cancer, CIN3/CIS and CIN1/2) is given in Table 1. Figure 1 provides the fold change relative to Normal for these genes.\nRq Values For The Genes Relative To Normal\nGene symbols in bold italics indicate those which were not concordant between microarray and RQ-RT-PCR analysis\nRelative quantitation levels of significant genes.\nThe genes were grouped on the basis of whether or not they were known to be involved in cervical tumorigenesis (Tables 2 and 3). Gene Ontology mapping was done using Babelomics software [18], which showed an over-representation of genes involved in cell cycle, cell division, catabolic process and multi-cellular organismal metabolic process. The genes identified to be differentially expressed were then analysed for specific pathways of relevance by manual curetting of data from published literature and online databases. The genes were grouped under the following categories: 1. Cell cycle regulatory genes (n = 13); 2. Interferon induced genes (n = 5); 3. Ubiquitin pathway (n = 5); 4. Myc Pathway [19] (n = 12); 5. HPV-E6/E7 related genes [20] (n = 14); 6. RNA targeting genes (n = 3) (details are given in Additional File 3). In addition, 40 genes in our list were found to be potentially regulated by p53 family of genes [21] (Additional File 4). Using GeneGo's Metacore software (Trial version) (url: http://www.genego.com), the relationship of our validated genes with known Transcription factors was analyzed. 
Based on this and from the manually curetted information, we then attempted to construct relationship chart (Figure 2) providing information on the gene interactions.\nGenes Identified as Up or Down-Regulated In Cervical Cancers For The First Time\nGenes Known To Be Up or Down-Regulated In Cervical Cancers Found Also In Our Study\nInter-relationship of our validated genes with known Transcription factors and E6 & E7 protein. Bold arrows indicate stimulatory effect; dotted arrows indicate inhibitory effect. Dot-Dash arrow refers to unknown effect.\nUsing IHC, we studied the protein expression for MMP3 in 5 normal cervical tissues, 30 dysplasias of varying grades and 27 invasive cancers. Using a semi-quantitative scoring system and a cut-off threshold set based on the normal cervical tissue staining, 6/30 dysplasias and 11/27 invasive cancers were found to overexpress MMP3 protein (Figure 3A). Among the patients whose tumours had been treated only with radical radiotherapy and had been followed up for a minimum period of 3 years, over-expression was seen in a greater number of tumours that failed treatment (6/9) compared to those free of disease at 3 years (2/12) (p = 0.03). p16 was found to be overexpressed in 19 of 31 dysplasias of varying grade and in 27/29 cancers (p = 0.005) (Figure 3B).\nImmunohistochemical staining for MMP3 (3A), p16 (3B) and UBE2C (3C) in invasive cancers (Magnification × 200).\nUsing IHC, we found UBE2C to be overexpressed in 28/32 cancers, 2/11 CIN3/CIS and none of the CIN1 or 2 (Fisher's exact test p = 2.2 e-11) (Figure 3C). Using RQ RT-PCR, UBE2C was found to be overexpressed by more than 2 fold in SiHa, HeLa, C33A and ME180 relative to the HEK293 cells (Figure 4A). The growth of SiHa cells transfected with dominant negative UBE2C was significantly reduced at 48 and 72 hours compared to SiHa WT and SiHa transfected with pcDNA vector alone (p < 0.001) (Figure 4B).\nUBE2C experiment data. 4A: RQ of UBE2C in cervical cancer cell lines. Fold change relative to HEK293 cells. 4B: Growth curve for SiHa WT cells, SiHa cells transfected either with pcDNA alone or with Dominant negative UBE2C. ★ Denotes a statistically significant change (p < 0.001).", "There was good overall concordance between the microarray and the RQ-RT-PCR data. The lower concordance rate seen with the CIN3/CIS may be due to the additional CIN3 sample processed directly using RQ-RT-PCR. The relative quantitation values with and without the additional sample is given as Additional File 5. The concordance rate between microarray and semi-quantitative RT-PCR in the study by Gius et al [8] was less than 50%, using the standard microarray data analysis package.\nThere were several instances, wherein, a small difference in Microarray (above the 2 fold mandatory criteria) sometimes translated to large differences with RQ-RT-PCR (e.g. p16, MMP1, MMP3) and vice versa (e.g. CD36). This reinforces the point about the limitation of the microarray technique and it does emphasize the need for further validation, using assays like RQ-RT-PCR.\nHPV16 was the predominant subtype seen in the invasive cancers and CIN3/CIS. However, we did not look for all the high risk subtypes and hence cannot exclude multiple subtype infection. Four of the cancers were HPV positive but HPV16 and 18 negative, suggesting that other high risk subtypes could be involved. 
None of the normal cervical tissues were HPV positive.\nThe genes that were for the first time, found to be over-expressed in cervical cancers compared to Normal cervix, is given along with information in which other cancers they have been reported to be overexpressed (Table 2A). Our study, for the first time, has identified 20 genes to be up-regulated in cervical cancers and 5 in CIN3; 14 genes were found to be down-regulated. In addition, 26 genes identified by other studies, as to playing a role in cervical cancer, were also confirmed in our study. UBE2C, CCNB1, CCNB2, PLOD2, NUP210, MELK, CDC20 were overexpressed in tumours and in CIN3/CIS relative to both Normal and CIN1/CIN2, suggesting that they could have an important role to play in the early phase of tumorigenesis. Among the genes which were up-regulated in cancers compared to that of Normal, CIN1/2 or CIN3/CIS, IL8, INDO, ISG15, ISG20, AGRN, DTXL, MMP1, MMP3, CCL18, TOP2A AND STAT1 are likely to play an important role in the progression of the disease.\nSTAT1 gene has a bi-phasic level, a rise in CIN1/2, drop in CIN3/CIS and a significant rise in invasive cancers. STAT1 has been considered generally to be a tumour suppressor, while STAT3 and STAT5 are known to be proto-oncogenes. However, recent studies have shown STAT3 to have both oncogenic and tumour suppressor function [22]. It could be that in cervical cancer, STAT1 may be protective in the early phase of HPV infection but could function as a proto-oncogene in the invasive stages of the disease. Highly invasive melanoma cell lines had high levels of STAT1 and c-myc [23].\nThe study by Lessnick et al., [24] showed that introduction of the potentially oncogenic EWS-FLI transcript into the fibroblasts, resulted in growth arrest rather than transformation. Knocking out p53 using HPV E6 helped overcome the growth arrest but was not sufficient to induce malignant transformation. The study used microarray to identify genes differentially expressed between the EWS-FLI transfected and the mock transfected cell line and found several genes related to growth promotion down-regulated. Our study had several genes [19] overlapping with theirs. Thirteen genes from our study were found to be HPV E6/E7 related genes[20] and 40 of the genes in our list were found to be potential p53 Family Target genes[21] (Additional File 3). In addition, there were 12 myc regulated genes, (MYC Cancer database at http://www.myc-cancer-gene.org/) of which CSTB which has been reported to be down-regulated by myc, was down-regulated in CIN3/CIS and in Cancer [19].\np16 gene, a tumour suppressor has been reported to be over-expressed in dysplasias and invasive cancer of the cervix. Several studies have tried to use this as a marker in the PAP smears for more reliable interpretation of the smear. von Knebel's group from Germany [25], had developed an ELISA to detect p16 in the cervical cell lysates, and reported a 96% sensitivity to pick up high grade dysplasias. Subsequently, the p16 ELISA assay was compared with Hybrid Capture 2 and was found to have comparable sensitivity and a slightly better specificity (46.9% versus 35.4%) [26]. Our RQ-RT-PCR data shows a gross over-expression of p16 in the CIN3 and invasive cancers (>250 fold). 
In our series of dysplasias and cancers, p16 protein was found to be overexpressed in invasive cancers compared to the dysplasias.\nFigure 2 shows the inter-relationship of our genes with E6 and E7 protein and other known Transcription factors including p53, E2F, c-myc, B-MYB and c-Jun. The important genes in our list MELK, ISG15, STAT1, IL8, MMP1 and MMP3, could be playing critical roles in the tumorigenic pathway and could be potential targets for newer therapies.\nUBE2C is an E2 enzyme involved in the process of ubiquitination. Townsley et al. [17] had developed a dominant negative UBE2C which lacks the catalytic activity. When the dominant negative UBE2C was expressed in SiHa cells, which have nearly 4 fold greater levels of UBE2C compared to HEK293 cells, it produced a significant growth inhibition (Figure 4B), indicating that the dominant negative UBE2C is competing with the wild type UBE2C, and can interfere with cell proliferation. Additional studies will be required to understand the mechanism by which this effect occurs.", "Our study has helped identify newer genes which could play a role in the cervical tumorigenesis and could offer the potential of developing newer diagnostic markers and therapeutic targets. We have confirmed over-expression of MMP3, UBE2C and p16 in tumours, by IHC. This will need to be validated further in a larger series of tumours and dysplasias. UBE2C will need to be studied further to assess its potential as a target for the treatment of cervical cancer.", "The authors declare that they have no competing interests.", "TR conceived the study; acquired, analysed & interpreted the data and drafted and revised the article. KS was involved in the acquisition and analysis of the microarray data. NV standardized and together with MB performed the microarray experiments and the immunohistochemistry. SS carried out all the pathological studies and assessment of samples for the microarray studies. GG standardized the UBE2C transfection into the SiHa cells and studied the effect on the growth of the cells. GS was involved in the clinical management and data analysis and follow-up of the patients. All the authors read and approved the final version of the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/80/prepub\n" ]
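The class-prediction entries above describe diagonal linear discriminant analysis and nearest-neighbour classifiers evaluated by leave-one-out cross-validation in BRB-ArrayTools. A rough, stand-alone analogue of that evaluation loop is sketched below with scikit-learn; the expression matrix and labels are placeholders, a k-nearest-neighbour classifier stands in for the BRB-ArrayTools models, and no within-loop gene selection is shown.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))                                  # placeholder: 40 samples x 200 gene log-ratios
y = np.repeat(["Normal", "CIN1/2", "CIN3/CIS", "Cancer"], 10)   # placeholder class labels

errors = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = KNeighborsClassifier(n_neighbors=3)
    clf.fit(X[train_idx], y[train_idx])
    errors += clf.predict(X[test_idx])[0] != y[test_idx][0]

print(f"LOOCV mis-classification rate: {errors / len(y):.2%}")
```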
[ "Background", "Methods", "HPV Testing", "Microarray experiment", "Microarray dasta analysis", "Class Comparison in BRB-Array Tools", "Class prediction in BRB-Array Tools", "SAM Analysis", "Quantitative Real time PCR", "Immunohistochemistry (IHC)", "UBE2C in cervical cancer cell lines", "Statistical analysis", "Results", "Discussion", "Conclusion", "Conflict of interests", "Authors' contributions", "Pre-publication history", "Supplementary Material" ]
[ "Cervical cancer is the second most common cancer among women worldwide and the most common cancer in Indian women [1]. In most developing countries there are no organized screening programmes, as a result most patients report to tertiary centres in locally advanced stages.\nHuman papilloma viruses (HPV) have been shown to play a major role in the pathogenesis of cervical cancer, but it alone is not sufficient [2]. Additional events, activation of proto-oncogenes and inactivation of tumour suppressor genes, are required in the induction of cervical cancer.\nCervical cancer goes through a series of pre-malignant stages - Cervical Intraepithelial Neoplasia (CIN) 1, 2 and 3. In general it takes upto about 10 - 15 years for the normal cervical epithelial cell to become a malignant one. However, some CIN2 lesions may develop soon after HPV infection, suggesting that there could be alternate pathways involved. CIN1 and 2 have a higher rate of spontaneous reversion compared to CIN3 [3]. The CIN3 then progresses to invasive carcinoma, which can then metastasize to regional lymph nodes and distant organs (e.g. lung).\nThe advent of microarray based technology has helped study the expression patterns of more than 40,000 genes at a time [4]. Several groups have used microarray based technology to look for differentially expressed genes in the different stages of cervical tumorigenesis [5,6]. Few studies have followed up and validated the microarray data in a large number of genes [7,8]. The objective of our study was to identify genes differentially expressed between normal cervix, CIN1/CIN2, CIN3/CIS and invasive cervical cancer, using oligo-microarray technique, validate the genes so identified using Relative quantitation Real Time Polymerase Chain Reaction (RQ-RT-PCR) and detect potential biomarkers for early diagnosis and therapeutic targets.", "Archival total RNA extracted from punch biopsy samples from patients with cervical cancer, collected in RNA later (Ambion, Austin, USA; Cat no: AM7021) and stored in the tumour bank after an informed consent were used, after obtaining the Institutional Ethical committee's approval for the study. The RNA had been extracted from the biopsy samples using the RNeasy RNA extraction kit (Qiagen, Gmbh, Hilden; Cat no: 74106) as per the manufacturer's instructions.\nTwenty eight cervical cancer patients' samples were included in the study. The criteria for inclusion in the study were as follows: 1. good quality RNA as assessed by Bio-analyser (RIN 6 or above); 2. paired paraffin block having at least 70% tumour cells; 3. sufficient quantity of RNA be available; 4. patient should have completed prescribed radiotherapy and follow-up information till death/last disease free status be available.\nIn addition, 5 normal cervix tissues from women who underwent hysterectomy for non-malignant conditions or for non-cervical cancer were included. Four CIN1/CIN2 and 4 CIN3/CIS (one CIN3/CIS was included for RQ-RT-PCR analysis directly) were also included after informed consent. The Normal and CIN samples underwent frozen section to confirm their histopathologic status and the samples were immediately snap frozen in liquid nitrogen. RNA was extracted from the samples using the RNeasy RNA extraction kit, as described above.\n[SUBTITLE] HPV Testing [SUBSECTION] The quality of the DNA was assessed by amplifying for β globin and only then HPV testing was done using GP5+ and GP6+ primers [9]. 
HPV16 and 18 typing was done using Nested Multiplex Polymerase Chain Reaction (NMPCR) technique [10]. SiHa DNA for HPV16 and HeLa DNA for HPV18 (positive controls) and C33A DNA (negative control) were included in all runs.\nThe quality of the DNA was assessed by amplifying for β globin and only then HPV testing was done using GP5+ and GP6+ primers [9]. HPV16 and 18 typing was done using Nested Multiplex Polymerase Chain Reaction (NMPCR) technique [10]. SiHa DNA for HPV16 and HeLa DNA for HPV18 (positive controls) and C33A DNA (negative control) were included in all runs.\n[SUBTITLE] Microarray experiment [SUBSECTION] 1 μg of total RNA from the tumour/CIN/Normal sample and universal RNA (Stratagene; Cat no: 740000-41) were reverse transcribed using Arrayscript at 42°C for 2 hrs to obtain cDNA using the Amino Allyl MessageAmp II aRNA amplification kit (Ambion, Austin, USA; Cat no: AM1797). The cDNA was amplified by in-vitro transcription in the presence of T7 RNA polymerase; aRNA thus obtained was purified and quantitated in NanoDrop (NanoDrop Technologies, Wilmington, DE, USA). 20 μg of tumour/CIN/Normal aRNA was labelled using NHS ester of Cy5 dye and the control universal aRNA was labelled using NHS ester of Cy3 dye. The Cy3 and Cy5 labelled aRNA was used for hybridization onto the microarray chips from Stanford Functional Genomics Facility (SFGF, Stanford, CA) containing 44,544 spots, for 16 hrs in Lucidea SlidePro hybridization chamber (GE Health Care, Uppsala, Sweden) at 42°C. After hybridization, slides were washed in 0.1× SSC, 1× SSC followed by 0.1× SSC and dried.\nThe slides were scanned in ProScanArray (PerkinElmer, Shelton, CT, USA). Griding was done using Scan array Express software package (version -4). The integrated or mean intensity of signal within the spot was calculated. The files were saved as GPR files.\nAll the raw data files have been submitted to GEO with an assigned GEO accession number - GSE14404.\n1 μg of total RNA from the tumour/CIN/Normal sample and universal RNA (Stratagene; Cat no: 740000-41) were reverse transcribed using Arrayscript at 42°C for 2 hrs to obtain cDNA using the Amino Allyl MessageAmp II aRNA amplification kit (Ambion, Austin, USA; Cat no: AM1797). The cDNA was amplified by in-vitro transcription in the presence of T7 RNA polymerase; aRNA thus obtained was purified and quantitated in NanoDrop (NanoDrop Technologies, Wilmington, DE, USA). 20 μg of tumour/CIN/Normal aRNA was labelled using NHS ester of Cy5 dye and the control universal aRNA was labelled using NHS ester of Cy3 dye. The Cy3 and Cy5 labelled aRNA was used for hybridization onto the microarray chips from Stanford Functional Genomics Facility (SFGF, Stanford, CA) containing 44,544 spots, for 16 hrs in Lucidea SlidePro hybridization chamber (GE Health Care, Uppsala, Sweden) at 42°C. After hybridization, slides were washed in 0.1× SSC, 1× SSC followed by 0.1× SSC and dried.\nThe slides were scanned in ProScanArray (PerkinElmer, Shelton, CT, USA). Griding was done using Scan array Express software package (version -4). The integrated or mean intensity of signal within the spot was calculated. The files were saved as GPR files.\nAll the raw data files have been submitted to GEO with an assigned GEO accession number - GSE14404.\n[SUBTITLE] Microarray dasta analysis [SUBSECTION] The Foreground Median intensity for Cy3 and Cy5, Background Median intensity for Cy3 and Cy5, spot size data were imported into BRB-ArrayTools software [11] using the Import wizard function. 
Background correction was not done. Global normalization was used to median centre the log-ratios on each array in order to adjust for differences in labelling intensities of the Cy3 and Cy5 dyes. The data was analysed using the Class comparison and Class prediction modules in the BRB-Array Tools software. In addition, Lowess normalization was also done separately and the data analysed using the modules mentioned above. The normalized Log ratios were also imported into Significance Analysis of Microarray (SAM) [12] software and analysed.\nThe Foreground Median intensity for Cy3 and Cy5, Background Median intensity for Cy3 and Cy5, spot size data were imported into BRB-ArrayTools software [11] using the Import wizard function. Background correction was not done. Global normalization was used to median centre the log-ratios on each array in order to adjust for differences in labelling intensities of the Cy3 and Cy5 dyes. The data was analysed using the Class comparison and Class prediction modules in the BRB-Array Tools software. In addition, Lowess normalization was also done separately and the data analysed using the modules mentioned above. The normalized Log ratios were also imported into Significance Analysis of Microarray (SAM) [12] software and analysed.\n[SUBTITLE] Class Comparison in BRB-Array Tools [SUBSECTION] We identified genes that were differentially expressed among the four classes (Normal, CIN1/2, CIN3/CIS, Cancer) using a random-variance t-test. The random-variance t-test is an improvement over the standard separate t-test as it permits sharing information among genes about within-class variation without assuming that all genes have the same variance [13]. Genes were considered statistically significant if their p value was < 0.01. In addition a two fold difference was required between the Cancer and Normal, CIN3/CIS and Normal, CIN1/2 and Normal. The same was repeated with the Lowess normalized data using the same criteria.\nWe identified genes that were differentially expressed among the four classes (Normal, CIN1/2, CIN3/CIS, Cancer) using a random-variance t-test. The random-variance t-test is an improvement over the standard separate t-test as it permits sharing information among genes about within-class variation without assuming that all genes have the same variance [13]. Genes were considered statistically significant if their p value was < 0.01. In addition a two fold difference was required between the Cancer and Normal, CIN3/CIS and Normal, CIN1/2 and Normal. The same was repeated with the Lowess normalized data using the same criteria.\n[SUBTITLE] Class prediction in BRB-Array Tools [SUBSECTION] We developed models for utilizing gene expression profile to predict the class of future samples based on the Diagonal Linear Discriminant Analysis and Nearest Neighbour Classification [11]. The models incorporated genes that were differentially expressed among genes at the 0.01 significance level as assessed by the random variance t-test [13]. We estimated the prediction error of each model using leave-one-out cross-validation (LOOCV) as described [14]. Leave-one-out cross-validation method was used to compute mis-classification rate. From the list, genes were sorted further based on 2 fold difference between Cancer versus CIN1/2 & Normal, CIN3/CIS versus CIN1/2 & Normal, and CIN1/2 versus Normal. 
The same was repeated with the Lowess normalized data using a significance value of 0.01.\nWe developed models for utilizing gene expression profile to predict the class of future samples based on the Diagonal Linear Discriminant Analysis and Nearest Neighbour Classification [11]. The models incorporated genes that were differentially expressed among genes at the 0.01 significance level as assessed by the random variance t-test [13]. We estimated the prediction error of each model using leave-one-out cross-validation (LOOCV) as described [14]. Leave-one-out cross-validation method was used to compute mis-classification rate. From the list, genes were sorted further based on 2 fold difference between Cancer versus CIN1/2 & Normal, CIN3/CIS versus CIN1/2 & Normal, and CIN1/2 versus Normal. The same was repeated with the Lowess normalized data using a significance value of 0.01.\n[SUBTITLE] SAM Analysis [SUBSECTION] The normalized log ratios of all the samples were imported into SAM software and analysed. A Multi-class analysis with 100 permutations was done. A delta value of 0.96 and a fold difference of 2 was used to identify the genes differentially expressed.\nThe normalized log ratios of all the samples were imported into SAM software and analysed. A Multi-class analysis with 100 permutations was done. A delta value of 0.96 and a fold difference of 2 was used to identify the genes differentially expressed.\n[SUBTITLE] Quantitative Real time PCR [SUBSECTION] High Capacity Reverse Transcription kit (Applied Biosystems, Foster City, CA; Cat no: 4368814) was used to reverse transcribe 2 μg of total RNA from the 38 samples in a 20 μl reaction volume. In 3 samples, due to the limiting amount of RNA, 0.75 μg was used for the cDNA synthesis.\nThese cDNA samples were used for real time PCR amplification assays using TaqMan® arrays formerly TaqMan® Low density arrays (TLDA) (Applied Biosystems, Foster City, CA; Cat no: 4342261). The fluorogenic, FAM labelled probes and the sequence specific primers for the list of genes with endogenous control 18S rRNA were obtained as inventoried assays and incorporated into the TaqMan® array format. Quadruplicate (n = 38) and duplicate (n = 3; with limiting amount of RNA for cDNA synthesis) cDNA template samples were amplified and analysed on the ABI Prism 7900HT sequence detection system (Applied Biosystems, Foster City, CA).\nThe reaction set up, briefly, consisted of 1.44 μg of cDNA template made up to 400 μl with deionised water and equal amounts of TaqMan® Universal PCR Master Mix (Applied Biosystems, Foster City, CA; Cat no: 4304437). 100 μl was loaded into each of the 8 ports of the array (2 ports comprise of one sample replicate on the array). Thus, the samples run as duplicates were only loaded into 4 ports of the array. Thermal cycling conditions included a 50°C step for 2 minutes, denaturation for 10 min at 94°C followed by 40 cycles consisting of 2 steps: 97°C for 30 seconds and 59.7°C for 1 minute for annealing and extension.\nThe raw data from the Prism 7900HT sequence detection system was imported into the Real-Time StatMiner™ software for statistical analysis of the data. Among the endogenous reference genes included on the array (18S ribosomal gene; UBC, β2 microglobulin), UBC and β2 microglobulin were chosen after visualizing the global Ct value distribution, for normalizing the data (Supplementary figure 1). 
The TLDA assays were run at LabIndia Instruments Pvt Ltd laboratories at Gurgaon, New Delhi.\n[SUBTITLE] Immunohistochemistry (IHC) [SUBSECTION] IHC was done for MMP3 protein expression in 5 Normal cervical tissue, 30 dysplasias of varying grade (CIN1 - 11; CIN2 - 8; CIN3/CIS - 11) and 27 invasive cervical cancers. A 3 layered ABC technique was used as described previously [15]. MMP3, monoclonal antibody (Sigma Aldrich, India; cat no: M6552) was used at a dilution of 1:75 and with wet antigen retrieval method. Positive control (section from a pancreatic cancer) and negative control (omission of primary antibody) were included in each run. The slides were scored by SS and TR independently and where discordant, jointly. The scoring was based on percentage of tumour cells immunoreactive (negative - 0; <25% = 1; 25-50% - 2; 51 - 75% - 3; >75% - 4), intensity of immunoreactivity (negative - 0; + - 1; ++ - 2; +++ - 3) and the compartment stained (cytoplasmic, nuclear or stromal). The scores obtained were added and the threshold was set at above the scores seen in the Normal cervical tissue (maximum score seen in Normal cervical tissue was 8). Hence tissues with a score of 9 or above were considered to overexpress MMP3.\np16 IHC was done as described previously [16] on 5 normal cervical tissue, 31 dysplasias of varying grades (CIN1 - 12; CIN2 - 8; CIN3/CIS - 11) and 29 tumours. Slides were scored as reported previously [16].\nUBE2C IHC was done as above using wet autoclaving with a hold time of 5 minutes. Rabbit UBE2C polyclonal antibody (Millipore, USA - catalogue no: AB3861) was used at 1 in 100 dilution. The scoring was done similar to the scoring of MMP3 staining, with the maximum score seen in normal cervical tissue being 6. Hence a score of 7 or above was considered to be overexpression.\nIHC was done for MMP3 protein expression in 5 Normal cervical tissue, 30 dysplasias of varying grade (CIN1 - 11; CIN2 - 8; CIN3/CIS - 11) and 27 invasive cervical cancers. A 3 layered ABC technique was used as described previously [15]. MMP3, monoclonal antibody (Sigma Aldrich, India; cat no: M6552) was used at a dilution of 1:75 and with wet antigen retrieval method. Positive control (section from a pancreatic cancer) and negative control (omission of primary antibody) were included in each run. The slides were scored by SS and TR independently and where discordant, jointly. The scoring was based on percentage of tumour cells immunoreactive (negative - 0; <25% = 1; 25-50% - 2; 51 - 75% - 3; >75% - 4), intensity of immunoreactivity (negative - 0; + - 1; ++ - 2; +++ - 3) and the compartment stained (cytoplasmic, nuclear or stromal). The scores obtained were added and the threshold was set at above the scores seen in the Normal cervical tissue (maximum score seen in Normal cervical tissue was 8). Hence tissues with a score of 9 or above were considered to overexpress MMP3.\np16 IHC was done as described previously [16] on 5 normal cervical tissue, 31 dysplasias of varying grades (CIN1 - 12; CIN2 - 8; CIN3/CIS - 11) and 29 tumours. Slides were scored as reported previously [16].\nUBE2C IHC was done as above using wet autoclaving with a hold time of 5 minutes. Rabbit UBE2C polyclonal antibody (Millipore, USA - catalogue no: AB3861) was used at 1 in 100 dilution. The scoring was done similar to the scoring of MMP3 staining, with the maximum score seen in normal cervical tissue being 6. 
Hence a score of 7 or above was considered to be overexpression.\n[SUBTITLE] UBE2C in cervical cancer cell lines [SUBSECTION] Taqman Real time PCR was done for UBE2C levels in SiHa, C33A, HeLa, ME180, BU25K and HEK293 (Human embryonic kidney cells) cell lines. GAPDH was used to normalize the data.\nDominant negative UBE2C, in which Cysteine 114 is replaced by Serine, leading to loss of catalytic activity [17] was introduced into SiHa cells, using Fugene 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's instructions using a 3:2 Fugene/DNA ratio. The effect on growth was assessed using the MTS assay (Promega) in the SiHa wild type (WT), in SiHa with pcDNA vector alone (SiHa pcDNA) and in SiHa with dominant negative UBE2C (SiHa DN-UBE2C).\nTaqman Real time PCR was done for UBE2C levels in SiHa, C33A, HeLa, ME180, BU25K and HEK293 (Human embryonic kidney cells) cell lines. GAPDH was used to normalize the data.\nDominant negative UBE2C, in which Cysteine 114 is replaced by Serine, leading to loss of catalytic activity [17] was introduced into SiHa cells, using Fugene 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's instructions using a 3:2 Fugene/DNA ratio. The effect on growth was assessed using the MTS assay (Promega) in the SiHa wild type (WT), in SiHa with pcDNA vector alone (SiHa pcDNA) and in SiHa with dominant negative UBE2C (SiHa DN-UBE2C).\n[SUBTITLE] Statistical analysis [SUBSECTION] Comparison between group means was assessed using a one-way ANOVA and multiple-comparison correction by Holm-Sidak method using Sigmaplot version 11.0. Fisher's exact test (2 tailed) was used to assess significance of IHC immuno-reactivity between cancer and dysplasias.\nComparison between group means was assessed using a one-way ANOVA and multiple-comparison correction by Holm-Sidak method using Sigmaplot version 11.0. Fisher's exact test (2 tailed) was used to assess significance of IHC immuno-reactivity between cancer and dysplasias.\nHigh Capacity Reverse Transcription kit (Applied Biosystems, Foster City, CA; Cat no: 4368814) was used to reverse transcribe 2 μg of total RNA from the 38 samples in a 20 μl reaction volume. In 3 samples, due to the limiting amount of RNA, 0.75 μg was used for the cDNA synthesis.\nThese cDNA samples were used for real time PCR amplification assays using TaqMan® arrays formerly TaqMan® Low density arrays (TLDA) (Applied Biosystems, Foster City, CA; Cat no: 4342261). The fluorogenic, FAM labelled probes and the sequence specific primers for the list of genes with endogenous control 18S rRNA were obtained as inventoried assays and incorporated into the TaqMan® array format. Quadruplicate (n = 38) and duplicate (n = 3; with limiting amount of RNA for cDNA synthesis) cDNA template samples were amplified and analysed on the ABI Prism 7900HT sequence detection system (Applied Biosystems, Foster City, CA).\nThe reaction set up, briefly, consisted of 1.44 μg of cDNA template made up to 400 μl with deionised water and equal amounts of TaqMan® Universal PCR Master Mix (Applied Biosystems, Foster City, CA; Cat no: 4304437). 100 μl was loaded into each of the 8 ports of the array (2 ports comprise of one sample replicate on the array). Thus, the samples run as duplicates were only loaded into 4 ports of the array. 
Thermal cycling conditions included a 50°C step for 2 minutes, denaturation for 10 min at 94°C followed by 40 cycles consisting of 2 steps: 97°C for 30 seconds and 59.7°C for 1 minute for annealing and extension.\nThe raw data from the Prism 7900HT sequence detection system was imported into the Real-Time StatMiner™ software for statistical analysis of the data. Among the endogenous reference genes included on the array (18S ribosomal gene; UBC, β2 microglobulin), UBC and β2 microglobulin were chosen after visualizing the global Ct value distribution, for normalizing the data (Supplementary figure 1). The TLDA assays were run at LabIndia Instruments Pvt Ltd laboratories at Gurgaon, New Delhi.\n[SUBTITLE] Immunohistochemistry (IHC) [SUBSECTION] IHC was done for MMP3 protein expression in 5 Normal cervical tissue, 30 dysplasias of varying grade (CIN1 - 11; CIN2 - 8; CIN3/CIS - 11) and 27 invasive cervical cancers. A 3 layered ABC technique was used as described previously [15]. MMP3, monoclonal antibody (Sigma Aldrich, India; cat no: M6552) was used at a dilution of 1:75 and with wet antigen retrieval method. Positive control (section from a pancreatic cancer) and negative control (omission of primary antibody) were included in each run. The slides were scored by SS and TR independently and where discordant, jointly. The scoring was based on percentage of tumour cells immunoreactive (negative - 0; <25% = 1; 25-50% - 2; 51 - 75% - 3; >75% - 4), intensity of immunoreactivity (negative - 0; + - 1; ++ - 2; +++ - 3) and the compartment stained (cytoplasmic, nuclear or stromal). The scores obtained were added and the threshold was set at above the scores seen in the Normal cervical tissue (maximum score seen in Normal cervical tissue was 8). Hence tissues with a score of 9 or above were considered to overexpress MMP3.\np16 IHC was done as described previously [16] on 5 normal cervical tissue, 31 dysplasias of varying grades (CIN1 - 12; CIN2 - 8; CIN3/CIS - 11) and 29 tumours. Slides were scored as reported previously [16].\nUBE2C IHC was done as above using wet autoclaving with a hold time of 5 minutes. Rabbit UBE2C polyclonal antibody (Millipore, USA - catalogue no: AB3861) was used at 1 in 100 dilution. The scoring was done similar to the scoring of MMP3 staining, with the maximum score seen in normal cervical tissue being 6. Hence a score of 7 or above was considered to be overexpression.\nIHC was done for MMP3 protein expression in 5 Normal cervical tissue, 30 dysplasias of varying grade (CIN1 - 11; CIN2 - 8; CIN3/CIS - 11) and 27 invasive cervical cancers. A 3 layered ABC technique was used as described previously [15]. MMP3, monoclonal antibody (Sigma Aldrich, India; cat no: M6552) was used at a dilution of 1:75 and with wet antigen retrieval method. Positive control (section from a pancreatic cancer) and negative control (omission of primary antibody) were included in each run. The slides were scored by SS and TR independently and where discordant, jointly. The scoring was based on percentage of tumour cells immunoreactive (negative - 0; <25% = 1; 25-50% - 2; 51 - 75% - 3; >75% - 4), intensity of immunoreactivity (negative - 0; + - 1; ++ - 2; +++ - 3) and the compartment stained (cytoplasmic, nuclear or stromal). The scores obtained were added and the threshold was set at above the scores seen in the Normal cervical tissue (maximum score seen in Normal cervical tissue was 8). 
Hence tissues with a score of 9 or above were considered to overexpress MMP3.\np16 IHC was done as described previously [16] on 5 normal cervical tissue, 31 dysplasias of varying grades (CIN1 - 12; CIN2 - 8; CIN3/CIS - 11) and 29 tumours. Slides were scored as reported previously [16].\nUBE2C IHC was done as above using wet autoclaving with a hold time of 5 minutes. Rabbit UBE2C polyclonal antibody (Millipore, USA - catalogue no: AB3861) was used at 1 in 100 dilution. The scoring was done similar to the scoring of MMP3 staining, with the maximum score seen in normal cervical tissue being 6. Hence a score of 7 or above was considered to be overexpression.\n[SUBTITLE] UBE2C in cervical cancer cell lines [SUBSECTION] Taqman Real time PCR was done for UBE2C levels in SiHa, C33A, HeLa, ME180, BU25K and HEK293 (Human embryonic kidney cells) cell lines. GAPDH was used to normalize the data.\nDominant negative UBE2C, in which Cysteine 114 is replaced by Serine, leading to loss of catalytic activity [17] was introduced into SiHa cells, using Fugene 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's instructions using a 3:2 Fugene/DNA ratio. The effect on growth was assessed using the MTS assay (Promega) in the SiHa wild type (WT), in SiHa with pcDNA vector alone (SiHa pcDNA) and in SiHa with dominant negative UBE2C (SiHa DN-UBE2C).\nTaqman Real time PCR was done for UBE2C levels in SiHa, C33A, HeLa, ME180, BU25K and HEK293 (Human embryonic kidney cells) cell lines. GAPDH was used to normalize the data.\nDominant negative UBE2C, in which Cysteine 114 is replaced by Serine, leading to loss of catalytic activity [17] was introduced into SiHa cells, using Fugene 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's instructions using a 3:2 Fugene/DNA ratio. The effect on growth was assessed using the MTS assay (Promega) in the SiHa wild type (WT), in SiHa with pcDNA vector alone (SiHa pcDNA) and in SiHa with dominant negative UBE2C (SiHa DN-UBE2C).\n[SUBTITLE] Statistical analysis [SUBSECTION] Comparison between group means was assessed using a one-way ANOVA and multiple-comparison correction by Holm-Sidak method using Sigmaplot version 11.0. Fisher's exact test (2 tailed) was used to assess significance of IHC immuno-reactivity between cancer and dysplasias.\nComparison between group means was assessed using a one-way ANOVA and multiple-comparison correction by Holm-Sidak method using Sigmaplot version 11.0. Fisher's exact test (2 tailed) was used to assess significance of IHC immuno-reactivity between cancer and dysplasias.", "The quality of the DNA was assessed by amplifying for β globin and only then HPV testing was done using GP5+ and GP6+ primers [9]. HPV16 and 18 typing was done using Nested Multiplex Polymerase Chain Reaction (NMPCR) technique [10]. SiHa DNA for HPV16 and HeLa DNA for HPV18 (positive controls) and C33A DNA (negative control) were included in all runs.", "1 μg of total RNA from the tumour/CIN/Normal sample and universal RNA (Stratagene; Cat no: 740000-41) were reverse transcribed using Arrayscript at 42°C for 2 hrs to obtain cDNA using the Amino Allyl MessageAmp II aRNA amplification kit (Ambion, Austin, USA; Cat no: AM1797). The cDNA was amplified by in-vitro transcription in the presence of T7 RNA polymerase; aRNA thus obtained was purified and quantitated in NanoDrop (NanoDrop Technologies, Wilmington, DE, USA). 
20 μg of tumour/CIN/Normal aRNA was labelled using NHS ester of Cy5 dye and the control universal aRNA was labelled using NHS ester of Cy3 dye. The Cy3 and Cy5 labelled aRNA was used for hybridization onto the microarray chips from Stanford Functional Genomics Facility (SFGF, Stanford, CA) containing 44,544 spots, for 16 hrs in Lucidea SlidePro hybridization chamber (GE Health Care, Uppsala, Sweden) at 42°C. After hybridization, slides were washed in 0.1× SSC, 1× SSC followed by 0.1× SSC and dried.\nThe slides were scanned in ProScanArray (PerkinElmer, Shelton, CT, USA). Griding was done using Scan array Express software package (version -4). The integrated or mean intensity of signal within the spot was calculated. The files were saved as GPR files.\nAll the raw data files have been submitted to GEO with an assigned GEO accession number - GSE14404.", "The Foreground Median intensity for Cy3 and Cy5, Background Median intensity for Cy3 and Cy5, spot size data were imported into BRB-ArrayTools software [11] using the Import wizard function. Background correction was not done. Global normalization was used to median centre the log-ratios on each array in order to adjust for differences in labelling intensities of the Cy3 and Cy5 dyes. The data was analysed using the Class comparison and Class prediction modules in the BRB-Array Tools software. In addition, Lowess normalization was also done separately and the data analysed using the modules mentioned above. The normalized Log ratios were also imported into Significance Analysis of Microarray (SAM) [12] software and analysed.", "We identified genes that were differentially expressed among the four classes (Normal, CIN1/2, CIN3/CIS, Cancer) using a random-variance t-test. The random-variance t-test is an improvement over the standard separate t-test as it permits sharing information among genes about within-class variation without assuming that all genes have the same variance [13]. Genes were considered statistically significant if their p value was < 0.01. In addition a two fold difference was required between the Cancer and Normal, CIN3/CIS and Normal, CIN1/2 and Normal. The same was repeated with the Lowess normalized data using the same criteria.", "We developed models for utilizing gene expression profile to predict the class of future samples based on the Diagonal Linear Discriminant Analysis and Nearest Neighbour Classification [11]. The models incorporated genes that were differentially expressed among genes at the 0.01 significance level as assessed by the random variance t-test [13]. We estimated the prediction error of each model using leave-one-out cross-validation (LOOCV) as described [14]. Leave-one-out cross-validation method was used to compute mis-classification rate. From the list, genes were sorted further based on 2 fold difference between Cancer versus CIN1/2 & Normal, CIN3/CIS versus CIN1/2 & Normal, and CIN1/2 versus Normal. The same was repeated with the Lowess normalized data using a significance value of 0.01.", "The normalized log ratios of all the samples were imported into SAM software and analysed. A Multi-class analysis with 100 permutations was done. A delta value of 0.96 and a fold difference of 2 was used to identify the genes differentially expressed.", "High Capacity Reverse Transcription kit (Applied Biosystems, Foster City, CA; Cat no: 4368814) was used to reverse transcribe 2 μg of total RNA from the 38 samples in a 20 μl reaction volume. 
In 3 samples, due to the limiting amount of RNA, 0.75 μg was used for the cDNA synthesis.\nThese cDNA samples were used for real time PCR amplification assays using TaqMan® arrays formerly TaqMan® Low density arrays (TLDA) (Applied Biosystems, Foster City, CA; Cat no: 4342261). The fluorogenic, FAM labelled probes and the sequence specific primers for the list of genes with endogenous control 18S rRNA were obtained as inventoried assays and incorporated into the TaqMan® array format. Quadruplicate (n = 38) and duplicate (n = 3; with limiting amount of RNA for cDNA synthesis) cDNA template samples were amplified and analysed on the ABI Prism 7900HT sequence detection system (Applied Biosystems, Foster City, CA).\nThe reaction set up, briefly, consisted of 1.44 μg of cDNA template made up to 400 μl with deionised water and equal amounts of TaqMan® Universal PCR Master Mix (Applied Biosystems, Foster City, CA; Cat no: 4304437). 100 μl was loaded into each of the 8 ports of the array (2 ports comprise of one sample replicate on the array). Thus, the samples run as duplicates were only loaded into 4 ports of the array. Thermal cycling conditions included a 50°C step for 2 minutes, denaturation for 10 min at 94°C followed by 40 cycles consisting of 2 steps: 97°C for 30 seconds and 59.7°C for 1 minute for annealing and extension.\nThe raw data from the Prism 7900HT sequence detection system was imported into the Real-Time StatMiner™ software for statistical analysis of the data. Among the endogenous reference genes included on the array (18S ribosomal gene; UBC, β2 microglobulin), UBC and β2 microglobulin were chosen after visualizing the global Ct value distribution, for normalizing the data (Supplementary figure 1). The TLDA assays were run at LabIndia Instruments Pvt Ltd laboratories at Gurgaon, New Delhi.\n[SUBTITLE] Immunohistochemistry (IHC) [SUBSECTION] IHC was done for MMP3 protein expression in 5 Normal cervical tissue, 30 dysplasias of varying grade (CIN1 - 11; CIN2 - 8; CIN3/CIS - 11) and 27 invasive cervical cancers. A 3 layered ABC technique was used as described previously [15]. MMP3, monoclonal antibody (Sigma Aldrich, India; cat no: M6552) was used at a dilution of 1:75 and with wet antigen retrieval method. Positive control (section from a pancreatic cancer) and negative control (omission of primary antibody) were included in each run. The slides were scored by SS and TR independently and where discordant, jointly. The scoring was based on percentage of tumour cells immunoreactive (negative - 0; <25% = 1; 25-50% - 2; 51 - 75% - 3; >75% - 4), intensity of immunoreactivity (negative - 0; + - 1; ++ - 2; +++ - 3) and the compartment stained (cytoplasmic, nuclear or stromal). The scores obtained were added and the threshold was set at above the scores seen in the Normal cervical tissue (maximum score seen in Normal cervical tissue was 8). Hence tissues with a score of 9 or above were considered to overexpress MMP3.\np16 IHC was done as described previously [16] on 5 normal cervical tissue, 31 dysplasias of varying grades (CIN1 - 12; CIN2 - 8; CIN3/CIS - 11) and 29 tumours. Slides were scored as reported previously [16].\nUBE2C IHC was done as above using wet autoclaving with a hold time of 5 minutes. Rabbit UBE2C polyclonal antibody (Millipore, USA - catalogue no: AB3861) was used at 1 in 100 dilution. The scoring was done similar to the scoring of MMP3 staining, with the maximum score seen in normal cervical tissue being 6. 
Hence a score of 7 or above was considered to be overexpression.\nIHC was done for MMP3 protein expression in 5 Normal cervical tissue, 30 dysplasias of varying grade (CIN1 - 11; CIN2 - 8; CIN3/CIS - 11) and 27 invasive cervical cancers. A 3 layered ABC technique was used as described previously [15]. MMP3, monoclonal antibody (Sigma Aldrich, India; cat no: M6552) was used at a dilution of 1:75 and with wet antigen retrieval method. Positive control (section from a pancreatic cancer) and negative control (omission of primary antibody) were included in each run. The slides were scored by SS and TR independently and where discordant, jointly. The scoring was based on percentage of tumour cells immunoreactive (negative - 0; <25% = 1; 25-50% - 2; 51 - 75% - 3; >75% - 4), intensity of immunoreactivity (negative - 0; + - 1; ++ - 2; +++ - 3) and the compartment stained (cytoplasmic, nuclear or stromal). The scores obtained were added and the threshold was set at above the scores seen in the Normal cervical tissue (maximum score seen in Normal cervical tissue was 8). Hence tissues with a score of 9 or above were considered to overexpress MMP3.\np16 IHC was done as described previously [16] on 5 normal cervical tissue, 31 dysplasias of varying grades (CIN1 - 12; CIN2 - 8; CIN3/CIS - 11) and 29 tumours. Slides were scored as reported previously [16].\nUBE2C IHC was done as above using wet autoclaving with a hold time of 5 minutes. Rabbit UBE2C polyclonal antibody (Millipore, USA - catalogue no: AB3861) was used at 1 in 100 dilution. The scoring was done similar to the scoring of MMP3 staining, with the maximum score seen in normal cervical tissue being 6. Hence a score of 7 or above was considered to be overexpression.\n[SUBTITLE] UBE2C in cervical cancer cell lines [SUBSECTION] Taqman Real time PCR was done for UBE2C levels in SiHa, C33A, HeLa, ME180, BU25K and HEK293 (Human embryonic kidney cells) cell lines. GAPDH was used to normalize the data.\nDominant negative UBE2C, in which Cysteine 114 is replaced by Serine, leading to loss of catalytic activity [17] was introduced into SiHa cells, using Fugene 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's instructions using a 3:2 Fugene/DNA ratio. The effect on growth was assessed using the MTS assay (Promega) in the SiHa wild type (WT), in SiHa with pcDNA vector alone (SiHa pcDNA) and in SiHa with dominant negative UBE2C (SiHa DN-UBE2C).\nTaqman Real time PCR was done for UBE2C levels in SiHa, C33A, HeLa, ME180, BU25K and HEK293 (Human embryonic kidney cells) cell lines. GAPDH was used to normalize the data.\nDominant negative UBE2C, in which Cysteine 114 is replaced by Serine, leading to loss of catalytic activity [17] was introduced into SiHa cells, using Fugene 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's instructions using a 3:2 Fugene/DNA ratio. The effect on growth was assessed using the MTS assay (Promega) in the SiHa wild type (WT), in SiHa with pcDNA vector alone (SiHa pcDNA) and in SiHa with dominant negative UBE2C (SiHa DN-UBE2C).\n[SUBTITLE] Statistical analysis [SUBSECTION] Comparison between group means was assessed using a one-way ANOVA and multiple-comparison correction by Holm-Sidak method using Sigmaplot version 11.0. 
Fisher's exact test (2 tailed) was used to assess significance of IHC immuno-reactivity between cancer and dysplasias.\nComparison between group means was assessed using a one-way ANOVA and multiple-comparison correction by Holm-Sidak method using Sigmaplot version 11.0. Fisher's exact test (2 tailed) was used to assess significance of IHC immuno-reactivity between cancer and dysplasias.", "IHC was done for MMP3 protein expression in 5 Normal cervical tissue, 30 dysplasias of varying grade (CIN1 - 11; CIN2 - 8; CIN3/CIS - 11) and 27 invasive cervical cancers. A 3 layered ABC technique was used as described previously [15]. MMP3, monoclonal antibody (Sigma Aldrich, India; cat no: M6552) was used at a dilution of 1:75 and with wet antigen retrieval method. Positive control (section from a pancreatic cancer) and negative control (omission of primary antibody) were included in each run. The slides were scored by SS and TR independently and where discordant, jointly. The scoring was based on percentage of tumour cells immunoreactive (negative - 0; <25% = 1; 25-50% - 2; 51 - 75% - 3; >75% - 4), intensity of immunoreactivity (negative - 0; + - 1; ++ - 2; +++ - 3) and the compartment stained (cytoplasmic, nuclear or stromal). The scores obtained were added and the threshold was set at above the scores seen in the Normal cervical tissue (maximum score seen in Normal cervical tissue was 8). Hence tissues with a score of 9 or above were considered to overexpress MMP3.\np16 IHC was done as described previously [16] on 5 normal cervical tissue, 31 dysplasias of varying grades (CIN1 - 12; CIN2 - 8; CIN3/CIS - 11) and 29 tumours. Slides were scored as reported previously [16].\nUBE2C IHC was done as above using wet autoclaving with a hold time of 5 minutes. Rabbit UBE2C polyclonal antibody (Millipore, USA - catalogue no: AB3861) was used at 1 in 100 dilution. The scoring was done similar to the scoring of MMP3 staining, with the maximum score seen in normal cervical tissue being 6. Hence a score of 7 or above was considered to be overexpression.", "Taqman Real time PCR was done for UBE2C levels in SiHa, C33A, HeLa, ME180, BU25K and HEK293 (Human embryonic kidney cells) cell lines. GAPDH was used to normalize the data.\nDominant negative UBE2C, in which Cysteine 114 is replaced by Serine, leading to loss of catalytic activity [17] was introduced into SiHa cells, using Fugene 6 Transfection Reagent (Roche Applied Science) according to the manufacturer's instructions using a 3:2 Fugene/DNA ratio. The effect on growth was assessed using the MTS assay (Promega) in the SiHa wild type (WT), in SiHa with pcDNA vector alone (SiHa pcDNA) and in SiHa with dominant negative UBE2C (SiHa DN-UBE2C).", "Comparison between group means was assessed using a one-way ANOVA and multiple-comparison correction by Holm-Sidak method using Sigmaplot version 11.0. Fisher's exact test (2 tailed) was used to assess significance of IHC immuno-reactivity between cancer and dysplasias.", "The stage distribution of the invasive cancer cases was as follows: IB - 2, IIA - 4, IIB - 18 and IIIB - 4. Twenty seven of the tumours were Squamous cell carcinomas (18 Large cell non-keratinizing, 5 large cell keratinizing and 4 unspecified) and one was a poorly differentiated carcinoma. Eighteen were HPV16 positive, 6 were HPV18 positive and 4 were HPV16 and 18 subtype negative (but HPV positive). 
All the Normals were HPV negative while one CIN1/2 and all the CIN3/CIS were HPV16 positive.\nUsing different methods, as described above, genes that were found to be differentially expressed between the four classes (Normal, CIN1/2, CIN3/CIS and Cancer) were identified. We did not use a Training set and a Test set for the Class Prediction model but used LOOCV for cross-validation and obtain the mis-classification error. The list of genes significant by different methods of microarray analysis is given in the Additional File 1 (AF1).\nSixty nine genes were selected for further validation by RQ-PCR using the Taqman Low Density Array card (TLDA) format (Additional File 2). These 69 genes formed part of the 95 genes selected for analysis using the TLDA format. The additional genes were those which had been found to be differentially expressed between the responders and non-responders to radiotherapy only treatment. Apart from the mandatory endogenous 18S rRNA included in the TLDA cards, based on the microarray data, UBC and β2 microglobulin, were included as additional endogenous reference genes.\nTwo of the samples CXL19-hov160 and CXM024-hov210 which had worked in microarray did not amplify satisfactorily in the RQ-TLDA assay and had to be removed from further analysis. In addition, RPS3A gene did not amplify in any of the samples.\nThe RQ values after calibrating with the Normal samples (Mean) for all the 94 genes showed 8 additional genes to be overexpressed; 4 (ASB16, CCL18, FST, THOC6) in Cancers, 1 (KLK9) in CIN3/CIS and 3 (RASSF6, TMEM123 and GLB1L3) in CIN1/2 samples. These 8 genes had initially been chosen for validation of the differentially expressed genes between responders and non-responders to radiotherapy. After excluding the genes which did not amplify, we now had 76 genes for further analysis.\nOf the 31 genes which had been selected based on a greater than 2 fold difference between cancer versus CIN1/2 & Normal, 28 were concordant between the microarray data and the RQ-RT-PCR (Concordant rate of 90%). Three of four genes selected based on higher level of expression in Normals compared with all other classes showed concordance between the different methods of analysis. In the case of CIN1/2, concordance was seen in 6/7 genes (86%). However, with CIN3, this dropped to 41% (11/27). In four additional genes, there was a two fold greater difference between CIN3/CIS and Normal but not with CIN1/2. The overall concordance rate between the microarray data and the RQ-RT-PCR was 70% (48/69).\nThe list of genes validated and found to have a greater than 2 fold difference compared to the Normal, in the 3 different classes (Cancer, CIN3/CIS and CIN1/2) is given in Table 1. Figure 1 provides the fold change relative to Normal for these genes.\nRq Values For The Genes Relative To Normal\nGene symbols in bold italics indicate those which were not concordant between microarray and RQ-RT-PCR analysis\nRelative quantitation levels of significant genes.\nThe genes were grouped on the basis of whether or not they were known to be involved in cervical tumorigenesis (Tables 2 and 3). Gene Ontology mapping was done using Babelomics software [18], which showed an over-representation of genes involved in cell cycle, cell division, catabolic process and multi-cellular organismal metabolic process. The genes identified to be differentially expressed were then analysed for specific pathways of relevance by manual curetting of data from published literature and online databases. 
The genes were grouped under the following categories: 1. Cell cycle regulatory genes (n = 13); 2. Interferon induced genes (n = 5); 3. Ubiquitin pathway (n = 5); 4. Myc Pathway [19] (n = 12); 5. HPV-E6/E7 related genes [20] (n = 14); 6. RNA targeting genes (n = 3) (details are given in Additional File 3). In addition, 40 genes in our list were found to be potentially regulated by p53 family of genes [21] (Additional File 4). Using GeneGo's Metacore software (Trial version) (url: http://www.genego.com), the relationship of our validated genes with known Transcription factors was analyzed. Based on this and from the manually curetted information, we then attempted to construct relationship chart (Figure 2) providing information on the gene interactions.\nGenes Identified as Up or Down-Regulated In Cervical Cancers For The First Time\nGenes Known To Be Up or Down-Regulated In Cervical Cancers Found Also In Our Study\nInter-relationship of our validated genes with known Transcription factors and E6 & E7 protein. Bold arrows indicate stimulatory effect; dotted arrows indicate inhibitory effect. Dot-Dash arrow refers to unknown effect.\nUsing IHC, we studied the protein expression for MMP3 in 5 normal cervical tissues, 30 dysplasias of varying grades and 27 invasive cancers. Using a semi-quantitative scoring system and a cut-off threshold set based on the normal cervical tissue staining, 6/30 dysplasias and 11/27 invasive cancers were found to overexpress MMP3 protein (Figure 3A). Among the patients whose tumours had been treated only with radical radiotherapy and had been followed up for a minimum period of 3 years, over-expression was seen in a greater number of tumours that failed treatment (6/9) compared to those free of disease at 3 years (2/12) (p = 0.03). p16 was found to be overexpressed in 19 of 31 dysplasias of varying grade and in 27/29 cancers (p = 0.005) (Figure 3B).\nImmunohistochemical staining for MMP3 (3A), p16 (3B) and UBE2C (3C) in invasive cancers (Magnification × 200).\nUsing IHC, we found UBE2C to be overexpressed in 28/32 cancers, 2/11 CIN3/CIS and none of the CIN1 or 2 (Fisher's exact test p = 2.2 e-11) (Figure 3C). Using RQ RT-PCR, UBE2C was found to be overexpressed by more than 2 fold in SiHa, HeLa, C33A and ME180 relative to the HEK293 cells (Figure 4A). The growth of SiHa cells transfected with dominant negative UBE2C was significantly reduced at 48 and 72 hours compared to SiHa WT and SiHa transfected with pcDNA vector alone (p < 0.001) (Figure 4B).\nUBE2C experiment data. 4A: RQ of UBE2C in cervical cancer cell lines. Fold change relative to HEK293 cells. 4B: Growth curve for SiHa WT cells, SiHa cells transfected either with pcDNA alone or with Dominant negative UBE2C. ★ Denotes a statistically significant change (p < 0.001).", "There was good overall concordance between the microarray and the RQ-RT-PCR data. The lower concordance rate seen with the CIN3/CIS may be due to the additional CIN3 sample processed directly using RQ-RT-PCR. The relative quantitation values with and without the additional sample is given as Additional File 5. The concordance rate between microarray and semi-quantitative RT-PCR in the study by Gius et al [8] was less than 50%, using the standard microarray data analysis package.\nThere were several instances, wherein, a small difference in Microarray (above the 2 fold mandatory criteria) sometimes translated to large differences with RQ-RT-PCR (e.g. p16, MMP1, MMP3) and vice versa (e.g. CD36). 
This reinforces the limitations of the microarray technique and emphasizes the need for further validation using assays such as RQ-RT-PCR.\nHPV16 was the predominant subtype seen in the invasive cancers and CIN3/CIS. However, we did not look for all the high-risk subtypes and hence cannot exclude multiple-subtype infection. Four of the cancers were HPV positive but HPV16 and 18 negative, suggesting that other high-risk subtypes could be involved. None of the normal cervical tissues were HPV positive.\nThe genes found for the first time to be over-expressed in cervical cancers compared to normal cervix are listed, along with the other cancers in which they have been reported to be overexpressed (Table 2A). Our study has, for the first time, identified 20 genes to be up-regulated in cervical cancers and 5 in CIN3; 14 genes were found to be down-regulated. In addition, 26 genes identified by other studies as playing a role in cervical cancer were also confirmed in our study. UBE2C, CCNB1, CCNB2, PLOD2, NUP210, MELK and CDC20 were overexpressed in tumours and in CIN3/CIS relative to both Normal and CIN1/CIN2, suggesting that they could have an important role to play in the early phase of tumorigenesis. Among the genes up-regulated in cancers compared with Normal, CIN1/2 or CIN3/CIS, IL8, INDO, ISG15, ISG20, AGRN, DTXL, MMP1, MMP3, CCL18, TOP2A and STAT1 are likely to play an important role in the progression of the disease.\nThe STAT1 gene shows a bi-phasic pattern: a rise in CIN1/2, a drop in CIN3/CIS and a significant rise in invasive cancers. STAT1 has generally been considered a tumour suppressor, while STAT3 and STAT5 are known to be proto-oncogenes. However, recent studies have shown STAT3 to have both oncogenic and tumour suppressor functions [22]. It could be that in cervical cancer STAT1 is protective in the early phase of HPV infection but functions as a proto-oncogene in the invasive stages of the disease. Highly invasive melanoma cell lines had high levels of STAT1 and c-myc [23].\nThe study by Lessnick et al. [24] showed that introduction of the potentially oncogenic EWS-FLI transcript into fibroblasts resulted in growth arrest rather than transformation. Knocking out p53 using HPV E6 helped overcome the growth arrest but was not sufficient to induce malignant transformation. That study used microarrays to identify genes differentially expressed between the EWS-FLI-transfected and the mock-transfected cell line and found several growth-promoting genes down-regulated. Our study had several genes [19] overlapping with theirs. Thirteen genes from our study were found to be HPV E6/E7 related genes [20], and 40 of the genes in our list were found to be potential p53 family target genes [21] (Additional File 3). In addition, there were 12 myc-regulated genes (MYC Cancer database at http://www.myc-cancer-gene.org/), of which CSTB, which has been reported to be down-regulated by myc, was down-regulated in CIN3/CIS and in Cancer [19].\nThe p16 gene, a tumour suppressor, has been reported to be over-expressed in dysplasias and invasive cancer of the cervix. Several studies have tried to use this as a marker in PAP smears for more reliable interpretation of the smear. von Knebel's group from Germany [25] had developed an ELISA to detect p16 in cervical cell lysates and reported a 96% sensitivity for picking up high-grade dysplasias.
Subsequently, the p16 ELISA assay was compared with Hybrid Capture 2 and was found to have comparable sensitivity and a slightly better specificity (46.9% versus 35.4%) [26]. Our RQ-RT-PCR data shows a gross over-expression of p16 in the CIN3 and invasive cancers (>250 fold). In our series of dysplasias and cancers, p16 protein was found to be overexpressed in invasive cancers compared to the dysplasias.\nFigure 2 shows the inter-relationship of our genes with E6 and E7 protein and other known Transcription factors including p53, E2F, c-myc, B-MYB and c-Jun. The important genes in our list MELK, ISG15, STAT1, IL8, MMP1 and MMP3, could be playing critical roles in the tumorigenic pathway and could be potential targets for newer therapies.\nUBE2C is an E2 enzyme involved in the process of ubiquitination. Townsley et al. [17] had developed a dominant negative UBE2C which lacks the catalytic activity. When the dominant negative UBE2C was expressed in SiHa cells, which have nearly 4 fold greater levels of UBE2C compared to HEK293 cells, it produced a significant growth inhibition (Figure 4B), indicating that the dominant negative UBE2C is competing with the wild type UBE2C, and can interfere with cell proliferation. Additional studies will be required to understand the mechanism by which this effect occurs.", "Our study has helped identify newer genes which could play a role in the cervical tumorigenesis and could offer the potential of developing newer diagnostic markers and therapeutic targets. We have confirmed over-expression of MMP3, UBE2C and p16 in tumours, by IHC. This will need to be validated further in a larger series of tumours and dysplasias. UBE2C will need to be studied further to assess its potential as a target for the treatment of cervical cancer.", "The authors declare that they have no competing interests.", "TR conceived the study; acquired, analysed & interpreted the data and drafted and revised the article. KS was involved in the acquisition and analysis of the microarray data. NV standardized and together with MB performed the microarray experiments and the immunohistochemistry. SS carried out all the pathological studies and assessment of samples for the microarray studies. GG standardized the UBE2C transfection into the SiHa cells and studied the effect on the growth of the cells. GS was involved in the clinical management and data analysis and follow-up of the patients. All the authors read and approved the final version of the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/80/prepub\n", "List of genes differentially expressed identified by microarray analysis.\nClick here for file\nList of genes taken up for validation.\nClick here for file\nIdentified genes linked to specific pathways.\nClick here for file\np53 family regulated genes.\nClick here for file\nRelative Quantitation with and without CXM180.\nClick here for file" ]
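As a worked illustration of the two-tailed Fisher's exact comparisons described in the statistical analysis and IHC results above, the Python sketch below rebuilds 2×2 tables from the counts quoted in this record. Pooling all dysplasias (CIN1-3) into a single column is an assumption made for the example, and scipy's fisher_exact stands in for whatever software was actually used, so the printed values are illustrative rather than a reproduction of the reported p values (e.g. p = 2.2e-11 for UBE2C).

```python
# Illustrative two-sided Fisher's exact tests on the IHC counts quoted above.
# Rows: overexpressed / not overexpressed; columns: cancers / dysplasias.
from scipy.stats import fisher_exact

ube2c = [[28, 2], [4, 28]]   # 28/32 cancers vs 2/30 dysplasias (CIN1-3 pooled; assumed grouping)
mmp3 = [[11, 6], [16, 24]]   # 11/27 cancers vs 6/30 dysplasias

for name, table in [("UBE2C", ube2c), ("MMP3", mmp3)]:
    odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
    print(f"{name}: odds ratio = {odds_ratio:.1f}, two-sided p = {p_value:.2e}")
```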
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
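The relative quantitation (RQ) values reported in the record above were produced by Real-Time StatMiner, normalized to UBC and β2 microglobulin and calibrated against the mean of the Normal samples. The exact StatMiner computation is not reproduced here; the sketch below shows only the generic 2^-ΔΔCt calculation that such an analysis rests on, with invented Ct values and an assumed two-reference-gene normalization.

```python
# Generic 2^-ddCt relative quantitation sketch (illustrative numbers only).
import numpy as np

def delta_ct(ct_target, ct_refs):
    # dCt = Ct(target gene) - mean Ct of the endogenous reference genes
    return ct_target - float(np.mean(ct_refs))

def relative_quantity(ct_target, ct_refs, calibrator_delta_ct):
    """RQ = 2^-(dCt_sample - dCt_calibrator)."""
    return 2.0 ** -(delta_ct(ct_target, ct_refs) - calibrator_delta_ct)

# calibrator: mean dCt of the Normal samples for the same gene (invented values)
normal_dcts = [delta_ct(30.1, [22.0, 23.5]), delta_ct(29.8, [21.8, 23.2])]
calibrator = float(np.mean(normal_dcts))

# a tumour sample with a lower target Ct (more transcript) yields RQ > 1
print(round(relative_quantity(26.5, [22.1, 23.4], calibrator), 2))
```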
Estimating the global public health implications of electricity and coal consumption.
21339091
The growing health risks associated with greenhouse gas emissions highlight the need for new energy policies that emphasize efficiency and low-carbon energy intensity.
BACKGROUND
Using time-series data sets from 41 countries with varying development trajectories between 1965 and 2005, we developed an autoregressive model of life expectancy (LE) and infant mortality (IM) based on electricity consumption, coal consumption, and previous year's LE or IM. Prediction of health impacts from the Greenhouse Gas and Air Pollution Interactions and Synergies (GAINS) integrated air pollution emissions health impact model for coal-fired power plants was compared with the time-series model results.
METHODS
The time-series model predicted that increased electricity consumption was associated with reduced IM for countries that started with relatively high IM (> 100/1,000 live births) and low LE (< 57 years) in 1965, whereas LE was not significantly associated with electricity consumption regardless of IM and LE in 1965. Increasing coal consumption was associated with increased IM and reduced LE after accounting for electricity consumption. These results are consistent with results based on the GAINS model and previously published estimates of disease burdens attributable to energy-related environmental factors, including indoor and outdoor air pollution and water and sanitation.
RESULTS
Increased electricity consumption in countries with IM < 100/1,000 live births does not lead to greater health benefits, whereas coal consumption has significant detrimental health impacts.
CONCLUSIONS
[ "Air Pollution", "Coal", "Electric Power Supplies", "Greenhouse Effect", "Humans", "Infant", "Infant Mortality", "Life Expectancy", "Models, Biological", "Public Health", "Risk Assessment" ]
3114817
null
null
null
null
Results
AR models for low-, mid-, and high-IM/LE countries. Figure 1 plots composite AR models against IM and LE for the highest population country in each of the three categories (India, China, and the United States). Comparisons of the raw data with the fitted models suggest a good fit to these data (R2 = 0.66–0.92). Figure 2 presents time-series model results for nine countries, highlighting the differences between countries starting with high IM/low LE in 1965 and countries starting with mid-IM/LE and low IM/high LE. Table 1 presents model parameter values for each group. As expected, the rates of decrease in IM and increase in LE are much lower for countries that began with the lowest IM and the highest LE. For each of these models, the previous year’s coefficients for IM and LE, which can be interpreted as surrogates for overall improvements expected with time (e.g., overall development trajectory that would include education, vaccination rates, health care access, and spending), are important factors in predicting current IM or LE, respectively (Table 1). AR model results for IM per 1,000 live births (A) and LE at birth (years; B) across time, based on rates in 1965 (blue; with 95% confidence intervals), versus observed data (red), for three countries classified in each of the three categories: high IM/low LE (Brazil, India, and Indonesia), mid-IM/LE (China, Chile, and Mexico), and low IM/high LE (United States, Japan, and Germany). The model predicted a significant inverse relationship between electricity consumption and IM for countries with high IM/low LE in 1965. Interestingly, the model estimated a significant positive relationship between electricity consumption and IM for countries with mid-IM/LE and low IM/high LE in 1965. Electricity consumption was not significantly predictive of LE in high-IM/low-LE or low-IM/high-LE countries, although LE was inversely associated with increasing coal consumption in the mid-IM/LE countries. Finally, we found a significant positive association between coal consumption and IM estimated for the low-IM/high-LE countries (Table 1). These results corroborate previous research (Modi et al. 2005; Wilkinson et al. 2007) that suggested electricity consumption is important for improving overall public health metrics such as IM in countries with high IM, but there appeared to be an adverse impact on IM in countries with mid-IM and low IM. Increased outdoor air pollution, or lifestyle factors associated with higher levels of electricity use (and increased gross domestic product), such as increased chronic disease rates, may explain the significant positive relationship between IM and electricity use in countries with mid-IM and low IM. Our findings suggest that, controlling for electricity supply, coal consumption negatively affects health. This corroborates a multitude of research (Oak Ridge National Laboratory 1995; Rabl and Spadaro 2006; Spath et al. 1999) on specific health impacts from occupational and environmental exposures related to coal consumption, using broad population-level health metrics over 40 years across 41 different countries. However, this methodology has several limitations, particularly because data sets for potential confounders are unavailable across such a wide geographical space and time period [see Supplemental Material, “Limitations of AR Models” and Table 1 (doi:10.1289/ehp.1002241)]. 
Therefore, we further explored the relation between energy consumption and health using bottom-up methodologies that apply exposure–response relationships identified for specific health end points associated with energy production (e.g., PM exposure and mortality). Comparison with environmental burden of disease reports. Next, we assessed specific health impacts that may be driving the significant relationships between electricity use, coal consumption, and the broad health metrics of LE and IM noted above. We applied two independent methods for modeling health impacts from environmental exposures related to energy consumption and production. First, we used the WHO environmental burden of disease (EBoD) disability estimates (Ezzati et al. 2004), which are based on a standardized approach for evaluation of health impacts from environmental burdens. For example, the EBoD estimates of the global health impacts of ambient air pollution in 2002 used exposure scenarios that covered major metropolitan areas around the world and a dose–response function from a large, peer-reviewed epidemiological study (Pope et al. 1995). EBoD expresses health impacts in total mortality attributed to the exposure, as well as disability-adjusted life years (DALYs), which incorporates disease states (e.g., asthma attributed to ambient air pollution) in addition to mortality. In this analysis, we expected that if a relationship exists between electricity consumption and health, then the electricity and coal consumption in 2002 would correlate with the EBoD estimates of DALYs lost because of deficient water and sanitation, indoor air pollution, and outdoor air pollution. Electricity consumption per capita is negatively correlated with estimated DALYs lost because of the three environmental determinants of disease, both combined and individually (water and sanitation, indoor air pollution, and outdoor air pollution), for all countries (Table 2). This result indicates electricity use is associated with better health. When results are stratified by IM/LE classification, negative correlations with DALYs lost because of water and sanitation and indoor air pollution are higher for high-IM/low-LE and mid-IM/LE countries than for low-IM/high-LE countries. These trends are consistent with the hypothesis that access to electricity contributes to reducing the disease burden of diarrheal and acute lower respiratory infections (end points measured in the water and sanitation and air pollution EBoD studies, respectively). This could be explained by increased access to clean water associated with centralized power and reduced indoor air pollution related to reduced reliance on biomass or coal burning for cooking and heating. Correlation coefficients (r) and p-values for electricity or coal consumption (per capita) and the EBoD DALYs associated with water and sanitation (water), indoor air pollution (indoor), and outdoor air pollution (outdoor) in 2002 across 41 countries.a Comparison with the GAINS model of health impacts from coal-fired power stations. To assess the impact of coal-fired power generation on mortality more closely, we applied the GAINS model (Amann et al. 2008) to estimate air pollutant emissions from coal-fired power plants, consequent human exposure to PM, and the potential life-shortening effect of this exposure. Table 3 shows estimated effects of total emissions of particulate matter with aerodynamic diameter ≤ 10 μm (PM10) from coal-fired power stations on the average YLL in the European Union, India, and China. 
Relationships between PM emissions and YLL based on the GAINS model were similar across the regions. The GAINS model prediction was similar to the AR model prediction of YLL according to PM10 emissions for the European Union but was higher than the AR-based estimate for India and lower than that for China. However, for all three predictions, the confidence intervals of the AR model encompassed the GAINS predicted point estimate. GAINS- and AR-based estimates may also differ because the GAINS model estimates YLL among persons > 30 years of age only, whereas the AR time-series analysis estimates changes in LE from birth and therefore incorporates impacts on mortality at all ages. Estimated impact, by region, of coal-fired power stations on PM emissions and YLL over the lifetime of a cohort of adults > 30 years of age: GAINS model versus AR model.
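A minimal sketch of how the autoregressive model behind these results can be fitted by ordinary least squares: current-year LE (or IM) is regressed on per-capita coal consumption, per-capita electricity consumption and the previous year's value, as in Equation 1 of the Methods. The arrays below are synthetic placeholders, and the paper's composite fitting (equal weighting of each country in a group) is not reproduced.

```python
# Minimal least-squares fit of y[t] = a0 + a1*coal[t] + b1*elec[t] + d*y[t-1] + e[t]
import numpy as np

def fit_ar_model(y, coal, elec):
    """Return least-squares estimates of (a0, a1, b1, d)."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([
        np.ones(len(y) - 1),    # intercept a0
        np.asarray(coal)[1:],   # coal consumption per capita (a1)
        np.asarray(elec)[1:],   # electricity consumption per capita (b1)
        y[:-1],                 # previous year's LE or IM (d)
    ])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return dict(zip(["a0", "a1", "b1", "d"], coef))

# toy example: 41 years of synthetic data (not the study data)
rng = np.random.default_rng(0)
years = 41
coal = rng.uniform(500, 1500, years)    # kWh per person per year
elec = rng.uniform(1000, 4000, years)
le = 60 + np.cumsum(rng.normal(0.2, 0.1, years))
print(fit_ar_model(le, coal, elec))
```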
null
null
[ "Supplemental Material" ]
[ "Click here for additional data file." ]
[ null ]
[ "Materials and Methods", "Results", "Discussion", "Supplemental Material" ]
[ "Data. LE, IM, electricity use, coal consumption, and population data\nbetween the years of 1965 and 2005 were obtained from the Gapminder database (Rosling 2009). The data sets were derived from\nseveral sources: UNICEF statistics (Hill et al.\n2006) for IM, defined as the number of deaths of infants < 1 year of\nage per 1,000 births; the Human Mortality Database (Wilmoth and Shkolnikov (2009); and World Population Prospects (United Nations Population Division 2009) for LE\nat birth, World Development Indicators Online (World dataBank 2009) for electric\npower consumption per capita, and Statistical Review of World Energy (British Petroleum 2009) for coal consumption per\ncapita. Of the 200 countries represented in the IM and LE data sets, 41 had adequate\nelectricity and coal consumption data for the 1965–2005 time span. The LE,\nIM, electricity, and coal consumption data sets for the 41 countries are described\nin greater detail in the Supplemental Material, “Methods–Data\nDescription,” and plotted in Supplemental Material, Figures 1–4 (doi:10.1289/ehp.1002241).\nTime-series AR model results for LE at birth (A) and IM\n(B) in the United States, China, and India,\nrepresenting the highest population countries in the low-IM/high-LE,\nmid-IM/LE, and high-IM/low-LE groups, respectively. LE at birth (years) and\nIM (per 1,000 live births) are plotted in red; results of the model\ndescribed by Equation 1, including 95% confidence intervals, are plotted in\nblue. Adjusted R2 values are 0.92 (India), 0.74\n(China), and 0.66 (United States) for LE models and 0.79 (India), 0.87\n(China), and 0.92 (United States) for IM models.\nAutoregressive models for low-, mid-, and high-IM countries.\nAutoregressive (AR) time-series models are commonly used to model LE and IM,\nparticularly when there are insufficient data for all potential explanatory factors\n(Antunes and Waldman 2002; El-Zein et al. 2004; Kale et al. 2004; Kovats et al.\n2004; Levine et al. 2001).\nWe modeled LE or IM using the following AR equation for each country:\ny(t) = a0 +\na1u1(t)\n+ b1u2(t)\n
 + dy(t–1) +\ne(t), [1]\nwhere y(t) is the average LE or IM at time\nt (years or mortality per 1,000 births),\nu1(t) is the average coal\nconsumption per capita at time t (kilowatt hour per person per\nyear), u2(t) is the average electricity\nconsumption per capita at time t (kilowatt hour per person per\nyear), y is the previous year time point (t\n– 1) and d is the coefficient of this parameter,\ne(t) is the zero mean normally distributed\nnoise, and a1 and b1 are the\ncoefficients being estimated. Equation 1 can be expanded to separate the\ndependencies of LE or IM solely due to patterns of coal and electricity consumption\n[see Supplemental Material, “Methods–AR Model Description”\n[doi:10.1289/ehp.1002241)]. The model was applied to individual country data sets.\nIM and LE data between the years of 1965 to 2005 were plotted against model results\nincorporating electricity use per capita and coal consumption per capita for each\ncountry (see Supplemental Material, Figure 5 (doi:10.1289/ehp.1002241)].\nThe individual countries were grouped into three categories, based on tertiles of the\nempirical joint probability distributions of IM and LE of all countries in the data\nset for the year 1965: countries with IM between 105 and 156 per 1,000 live births\nand LE between 44 and 57 years of age in 1965 (high IM/low LE), countries with IM\nbetween 44 and 98 per 1,000 births and LE between 56 and 70 years of age in 1965\n(mid-IM/LE), and countries with IM between 14 and 39 per 1,000 births and LE between\n69 and 71 years of age in 1965 (low IM/high LE). For each of the three groups, a\ncomposite model was developed where the individual country contribution to parameter\nfits of the composite model was given equal weight. To find the model that best\nfitted the group of countries across all the time points, parameter estimates were\ngenerated using the least squares approach on the model given by Equation 1.\nAnalysis of cross-sectional WHO environmental burden of disease\nreports. The EBoD series estimates the attributable fraction of disease\ndue to a particular environmental risk factor using the general framework for global\nassessment described in the The World Health Report 2002—Reducing\nRisks, Promoting Healthy Life (WHO 2002). Individual reports on a\nspecific environmental risk factor first outline the evidence linking the risk\nfactor to health and then describe a method for estimating the health impact of that\nrisk factor on the population. Only relationships between exposure and disease that\nwere sufficiently well described to permit quantitative estimates of the disease\nburden are considered in these reports. Risk factors with long latency periods or\nnonspecific outcomes, factors with exposures that are difficult to assess at the\npopulation level, and factors that are distal to the outcomes are particularly\ndifficult to quantify (Prüss-Üstün et al. 2003). To date, WHO has assessed\n16 environmental risk factors worldwide. Results from the reports for outdoor air\npollution (Cohen et al. 2005), indoor air\npollution (Desai et al. 2004), and water and\nsanitation (Fewtrell et al. 2007). These\nreports estimated the total burden of disease attributable to each of the\nenvironmental factors in 2002. 
We then compared the attributable disease burden for each of these three environmental factors individually, as well as for combinations of the factors, in each country against per capita electricity and coal consumption in 2002, based on the WHO reports. Linear correlation between these two data sets was then tested using the corr function in Matlab (MathWorks, Natick, Massachusetts, USA).\nAnalysis using the GAINS model. We used the GAINS model (Amann et al. 2008; Markandya et al. 2009), an integrated model estimating air pollutant emissions from coal-fired power plants, consequent human exposure to PM, and the potential life-shortening effect of this exposure, for three regions: the European Union, India, and China. The GAINS model is described in more detail in the Supplemental Material, “Methods–GAINS Model Description” (doi:10.1289/ehp.1002241).\nTo compare the results from the GAINS model with results from the AR model described above, results from the AR model were translated into comparable units. The GAINS model results are expressed in years of life lost (YLL) over the lifetime of a cohort of adults > 30 years of age, using dose–response estimates of premature mortality identified in adults (Pope et al. 1995). Results from the AR model coefficients are expressed in terms of change in LE or IM per 1,000 kWh per capita. Therefore, the coal consumption coefficients (a), as described in Table 1, were multiplied by the average coal consumption per capita in 2005 (the year in which the GAINS model is applied) for the European Union (low-IM/high-LE model), China (mid-IM/LE model), and India (high-IM/low-LE model), respectively. To match the units expressed in the GAINS model results, the time-series AR results were multiplied by the average LE in 2005 in the European Union, India, and China. An alpha level of 0.05 defined statistical significance.\nModel parameter estimates (mean and 95% confidence limit) for LE and IM predicted for the three groups of countries in 1965.",
"The International Energy Agency projects a 50% increase in global energy demand in the next 20 years, driven largely by the fast-growing economies of China and India [International Energy Agency (IEA) 2007]. Increased power generation accounts for approximately half of this increase, and transport for a further one-fifth.\nCurrently, coal is the dominant fuel used for power generation (> 40%), and in the absence of policy changes, its share will rise, given trends in recent years, particularly in China and India (IEA 2007).\nThis analysis attempts to clarify the independent effects of electricity and coal consumption on global health. We have examined historical time-series trends and compared the results with two health-impact modeling approaches, demonstrating consistency in relationships identified across these independent methods. Several factors are important to consider when comparing the “bottom-up” GAINS model to the “top-down” time-series analysis. The “bottom-up” GAINS methodology uses complex models to estimate PM10 emissions from coal-fired power plants, population-level PM10 exposures resulting from these emissions, and the impact of these exposures on LE (YLL) among those > 30 years of age. In contrast, our “top-down” AR time-series analysis incorporated historical data on LE, IM, electricity use, and coal consumption over a 40-year period to estimate the impact of coal consumption (vs. PM10 emissions due to coal consumption) on LE from birth and IM across 41 countries that differ in geography, economy, and culture. Direct comparisons between the two approaches are complicated by differences in their data sources, assumptions, and estimated outcomes and exposures. Nonetheless, results based on these two distinctly different approaches both support the hypothesis that coal consumption results in quantifiable health impacts.\nUnder the assumption that historical trends hold relevance today, the results of these health-impact models can inform climate change mitigation strategies. For example, time-series modeling suggests that electricity consumption is significantly associated with improved health only in countries with IM > 100/1,000 live births, whereas in countries with IM < 100/1,000 live births in 1965 the analysis suggests that electricity consumption is associated with increased IM. At present, national IM rates are < 100/1,000 live births in all 41 countries. However, as a recent climate change mitigation strategy highlights (Chakravarty et al. 2009), it is critical to take into account the distribution of electricity use and health status within countries to further define subpopulations that may benefit from increased access to electricity.\nElectricity coefficients are significant for models of IM but not for LE. 
We\nhypothesize this may be due to the greater vulnerability of infants in impoverished\ncircumstances to environmental threats (e.g., contaminated water and poor\nsanitation), which tend to be mitigated with access to a reliable electricity source\nin high-IM/low-LE circumstances and greater susceptibility to mortality due to acute\nlower respiratory infections associated with air pollution in the mid-IM/LE and\nlow-IM/high-LE case. Impacts on IM are more immediate than are impacts on LE;\ntherefore, they are more easily captured by the regression model, and differences in\nstatistical power due to the smaller magnitude of the LE estimates may also play a\nrole in this result. Future analysis of specific causes of death in countries where\ndata are available across a sufficient time period would be a good starting point to\nbegin teasing apart these relationships.\nOur findings from the analysis of historical trends suggest that, controlling for\nelectricity supply, coal consumption negatively affects health (Table 1), and integrated modeling approaches\nsuch as GAINS are consistent with this result. Therefore, the projected increase in\nuse of coal for power generation is a great concern (Holdren and Smith 2000; Markandya and\nWilkinson 2007; Markandya et al.\n2009). Even with controls to reduce sulfur oxides and PM emissions,\ncoal-burning power plants produce relatively large amounts of air pollution. Also,\npower generation from coal using current technology is more carbon intensive than is\nany other energy system.\nResults from the present top-down time-series analysis of broad health indicators\nacross 40 years in 41 countries support the conclusions of external costs\nresearch—large, unaccounted for health costs are associated with coal\nconsumption. We acknowledge there are limitations in the work reported here, because\nAR models may not accurately account for unmeasured confounders by using the\nprevious year’s IM (LE) to capture the effect of unspecified variables that\nvary linearly with time. The present time-series analysis would have been greatly\nimproved if comprehensive data sets were available on several potential explanatory\nvariables, including education level, vaccination rates, and health care access and\nexpenditures.\nApplication of a standardized method for evaluation of global health impacts related\nto energy systems will be critical as climate change mitigation strategies are\nnegotiated internationally. The WHO methodology establishes a standardized framework\nfor the quantification of global health impacts that is not based on estimating a\nmonetary value of health impacts (Ezzati et al.\n2004). This is critical when using results for international policy\ndevelopment because methods used for the monetization of health impacts pose\nsignificant concerns among global health researchers, because it is particularly\ndifficult to determine a monetary value for death or disability that is applicable\nacross nations with vastly different cultures and values (Patz et al. 2007; Smith and\nHaigler 2008).\nIn summary, we assess the relationship between electricity use and coal consumption\nand health through analysis of historical data sets and comparison with exposure\nresponse models. 
Previous large-scale economic analyses have suggested that health\ncosts related to air pollution and climate change are the dominant external costs\nassociated with power generation systems, and our analysis points to ways in which\nhealth impacts can be integrated into climate change mitigation and energy policy\nresearch. We report consistent results using three different approaches to\nunderstanding relations between electricity, coal consumption, and health. Overall,\nit appears that increased electricity consumption in countries with IM <\n100/1,000 births (and LE > 57 years) does not lead to greater health benefits and\nthat coal consumption has significant detrimental health impacts.", "Click here for additional data file." ]
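The "top-down" analysis described above is a first-order autoregressive regression in which the current year's IM (or LE) is modelled from the previous year's value plus per-capita electricity and coal consumption, fitted within groups of countries defined by their 1965 IM/LE status. The following is a minimal, hypothetical sketch of that kind of model, not the authors' code; the column names and the single pooled fit are assumptions for illustration only.

# Illustrative AR(1)-style regression for a country-year panel (hypothetical columns:
# 'country', 'year', 'im', 'elec_per_capita', 'coal_per_capita'). In the article the
# model is fitted separately for each 1965 IM/LE country group.
import pandas as pd
import statsmodels.formula.api as smf

def fit_ar_model(panel: pd.DataFrame):
    panel = panel.sort_values(["country", "year"]).copy()
    # Lag the outcome within each country to obtain the autoregressive term.
    panel["im_lag1"] = panel.groupby("country")["im"].shift(1)
    panel = panel.dropna(subset=["im_lag1"])
    # The previous year's IM acts as a surrogate for unmeasured, slowly varying
    # development factors (education, vaccination rates, health care access, ...).
    model = smf.ols("im ~ im_lag1 + elec_per_capita + coal_per_capita", data=panel)
    return model.fit()

# Usage with a real panel data set:
# result = fit_ar_model(panel_df)
# print(result.summary())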
[ "materials|methods", "results", "discussion", null ]
[ "air pollution", "climate change", "coal", "electricity", "energy policy", "global health", "health impact modeling", "infant mortality", "life expectancy", "time series" ]
Factors that influence the childbearing intentions of Canadian men.
21339195
The role of men in the childbearing decision process and the factors that influence men's childbearing intentions have been relatively unexplored in the literature. This study aimed to describe the factors that strongly influence the childbearing intentions of men and to describe differences in these factors according to men's age group.
BACKGROUND
A telephone survey (response rate 84%) was conducted with 495 men between the ages of 20 and 45 living in an urban setting who, at the time of contact, did not have biological children. Men were asked about what factors strongly influence their intention to have children. Univariable and multivariable logistic regressions were conducted to determine if these factors were significantly associated with age.
METHODS
Of those sampled, 86% of men reported that at some point in the future they planned to become a parent. The factors that men considered to be most influential in their childbearing intentions were: the need to be financially secure, their partner's interest/desire to have children, their partner's suitability to be a parent and their personal interest/desire to have children. Men who were 35-45 years old had lower odds of stating that financial security (crude OR: 0.32, 95% CI: 0.18-0.54) and partner's interest in having children (crude OR: 0.57, 95% CI: 0.33-0.99) were very influential, but had higher odds of stating that their biological clock (crude OR: 4.37, 95% CI: 1.78-10.76) was very influential in their childbearing intentions than men in the 20-24 year age group.
RESULTS
The factors that influence men's intentions about when to become a parent may change with age. Understanding what influences men to have children, and what they understand about reproductive health is important for education, program and policy development.
CONCLUSIONS
[ "Adult", "Age Factors", "Biological Clocks", "Canada", "Humans", "Intention", "Male", "Paternal Age", "Reproductive Behavior", "Sex Factors" ]
3079468
Introduction
The role of men in the childbearing decision process and the influences of paternal age on birth outcomes have not been explored within the literature to the same extent as maternal factors (Chalmers and Meyer, 1996; Dudgeon and Inhorn, 2004). It is well known that the average age of childbearing among women has increased steadily over the past 20 years in developed countries, yet a similar trend seems to be occurring among men who are becoming fathers (Bray et al., 2006; Tough et al., 2007). For instance, statistics from England and Wales report that in 1993 fathers aged 35 years or over accounted for 25% of live births within marriage, which increased to 40% in 2003 (Bray et al., 2006).
The association between advanced maternal age and adverse birth outcomes has long been recognized, which has led to some concern regarding the trend towards having children later in life. Paternal age, on the other hand, has received less attention although some research has found that men older than 35 years are twice as likely to be infertile as men younger than 25 (Ford et al., 2000). Some studies have also found associations between advanced paternal age and the risk of autism spectrum disorder (Reichenberg et al., 2006), schizophrenia (Malaspina et al., 2001), Down syndrome and other chromosomal anomalies (Fisch et al., 2003), autosomal dominant mutations (Friedman, 1981), congenital anomalies (Yang et al., 2006), preterm birth and low-birthweight (Zhu et al., 2005; Astolfi et al., 2006; Reichman and Teitler, 2006), and miscarriage and fetal death (de la Rochebrochard and Thonneau, 2002). However, the associations reported between advanced paternal age and adverse birth outcomes have been somewhat inconsistent within the literature (Chen et al., 2008; Sartorius and Nieschlag, 2010). This could be due to a limited understanding of the factors that influence male fertility as well as inadequate control of confounding factors (Chen et al., 2008; Sartorius and Nieschlag, 2010).
With evidence indicating that advanced parental age impacts birth outcomes, it is important to understand how the delay in childbearing comes about. Previous studies have shown that the male partner's intentions and desires can affect the timing of first pregnancy as well as women's desire for becoming pregnant (Chalmers and Meyer, 1996; Lazarus, 1997). One study found that women's desire to conceive is closely related to their evaluation of their particular relationship (Zabin et al., 2000) and other studies found that men play an important role in influencing the reproductive health behaviors of women both directly and indirectly (Thomson, 1997; Dudgeon and Inhorn, 2004). A longitudinal study conducted by Thomson (1997) concluded that husbands' and wives' desires to have a child were equally influential when examining a couple's births. This study found that when only one partner (male or female) wants to have a child, the birth rate is approximately half of that observed when both partners want to have a child (Thomson, 1997).
With regard to the timing of childbearing, much of the literature has focused on factors that influence women's intentions of when to have children. Recently, some studies have emerged that are beginning to shed light on men's perspectives, although a number of these studies have been drawn from specific populations (e.g. university students, those on low incomes seeking reproductive health care) as opposed to a broader community population (Lampic et al., 2006; Virtala et al., 2006; Foster et al., 2008). Understanding the perspectives of men from a broader community population with regard to the timing of childbearing will provide a more comprehensive picture of the factors contributing to the growing number of people who are having children after age 35. This study was undertaken among a broad sample of men to address the following objectives: (i) to describe the factors that strongly influence the childbearing intentions of men and (ii) to describe differences in these factors according to men's age group.
null
null
Results
[SUBTITLE] Participants [SUBSECTION] The questionnaire was completed by a total of 495 men with a mean age of 30 years. Over 50% of these men had completed post-secondary education (trade, college or university level; Table I). The majority of respondents were Caucasian, non-smokers, single/never married, working for profit and having a total household income between $30 000 and $59 999 (Table I). Men in the 20–24 age groups were more likely to have completed less education, have a lower family income, and were more likely to be renting a home or living with their parents (Table I). Almost all participants (95.8%) were raised by their biological parents. Furthermore, a large proportion of the participants indicated that their parents were not divorced or separated by the time the participants were 16 years of age (Table I). Men aged 20–24 were more likely than the others to have had their parents divorced/separated (Table I). Most of the participants did not live within a blended family (i.e. a family consisting of a combination of step parent and step siblings) at any point in their lives (Table I). About a third had a partner (31.6%), and only 6.3% of the entire sample was currently trying to become pregnant with their partner. Eight men reported that they and their partner had sought out fertility treatments to assist them in conceiving and 13 men had step-children.
Table I. Characteristics and upbringing of participants, by men's age group.
Characteristic | Overall (n = 495), n (%) | 20–24 years (n = 135), n (%) | 25–29 years (n = 116), n (%) | 30–34 years (n = 122), n (%) | 35–45 years (n = 122), n (%) | P-value
Married or common law | 161 (32.3) | 23 (17.0) | 40 (34.5) | 50 (40.0) | 48 (39.0) | <0.001
Ethnicity | | | | | | 0.668
  Caucasian | 411 (83.0) | 111 (83.5) | 92 (79.3) | 104 (84.6) | 104 (84.6) |
  Other | 84 (17.0) | 22 (16.5) | 24 (20.7) | 19 (15.4) | 19 (15.4) |
Education completed | | | | | | <0.001
  Did not complete post-secondary education | 204 (41.0) | 104 (77.6) | 35 (30.4) | 25 (20.0) | 40 (32.3) |
  Completed post-secondary education | 294 (59.0) | 30 (22.4) | 80 (69.6) | 100 (80.0) | 84 (67.7) |
Annual household income | | | | | | <0.001
  <$29 999 | 79 (18.8) | 39 (37.1) | 17 (17.3) | 10 (9.5) | 13 (11.6) |
  $30 000–$59 999 | 143 (34.0) | 26 (24.8) | 33 (33.7) | 84 (45.7) | 36 (32.1) |
  $60 000–$89 999 | 78 (18.6) | 15 (14.3) | 20 (20.4) | 18 (17.1) | 25 (22.3) |
  $90 000 or more | 120 (28.6) | 25 (23.8) | 28 (28.6) | 29 (27.6) | 38 (33.9) |
Own home, condo or duplex | 185 (37.2) | 11 (8.1) | 35 (30.2) | 63 (50.8) | 76 (62.3) | <0.001
Main activity is working for profit | 374 (75.1) | 67 (49.6) | 85 (73.3) | 114 (91.2) | 108 (88.5) | <0.001
Smoking status | | | | | | 0.782
  Current smoker | 122 (24.4) | 36 (26.7) | 29 (25.0) | 24 (19.2) | 33 (26.8) |
  Ex-smoker | 87 (17.4) | 21 (15.6) | 19 (16.4) | 24 (19.2) | 23 (18.7) |
  Never smoked in lifetime | 290 (58.1) | 78 (57.8) | 68 (58.6) | 77 (61.6) | 67 (54.5) |
Consumed alcohol in past year | 448 (89.6) | 121 (89.6) | 109 (94.0) | 115 (92.0) | 103 (83.1) | 0.032
Parents separated before participant was 16 | 101 (20.4) | 46 (34.1) | 15 (13.0) | 23 (18.4) | 17 (14.0) | <0.001
Lived in a blended family at any time | 72 (14.4) | 26 (19.3) | 10 (8.6) | 20 (16.0) | 16 (13.0) | 0.104
Note that the denominators within the tables may vary due to: participants who may have responded 'don't know', participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.
[SUBTITLE] Plans for childbearing [SUBSECTION] Eighty-six percent of men indicated that they planned to have children; of those who did not plan to ever have children, 5% indicated that they had considered children in the past. The proportion of men who never wanted, nor planned to have children, was 9.2%. Men who did not want to become fathers were significantly more likely to be married or in a common law relationship (P = 0.02), be Caucasian (P = 0.02), be current or former smokers (P = 0.004), own their homes (P = 0.02) and be working (P = 0.04) than those who planned to have children or had considered having children in the past. More than half of men felt that the ideal age to begin parenting was before 30 (Table II) and specifically 47.8% felt it was ideal to begin parenting between the ages of 25 and 29. Men who were 30 years of age or older were more likely to indicate it was ideal to begin parenting at age 30 or older or that age was not important (Table II). Only 2% of all men believed it was ideal to begin parenting after the age of 35.
Table II. Ideal age to begin parenting, by men's age group.
Ideal age to begin parenting | Overall (n = 448), n (%) | 20–24 years (n = 127), n (%) | 25–29 years (n = 110), n (%) | 30–34 years (n = 109), n (%) | 35–45 years (n = 102), n (%) | P-value
Before 30 years of age | 233 (52.0) | 86 (67.7) | 63 (57.3) | 47 (43.1) | 37 (36.3) | <0.001
30 years of age or over | 130 (29.0) | 27 (21.3) | 29 (26.4) | 36 (33.0) | 38 (37.3) |
Age not important | 85 (19.0) | 14 (11.0) | 18 (16.4) | 26 (23.9) | 27 (26.5) |
Note that the denominators within the tables may vary due to: participants who may have responded 'don't know', participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.
[SUBTITLE] Factors influencing timing of childbearing [SUBSECTION] The top four factors that were stated to be very influential in determining the timing of parenting were similar among men of all four age groups. These were financial security (53.5%), partner's interest/desire for having children (50.7%), partner suitability to parent (48.1%) and one's own interest/desire for having children (39.4%; Table III). Men who were currently married or in a common law relationship were significantly less likely to report concerns about losing their job while on parental leave (P = 0.04), and significantly more likely to report feelings of a biological clock (P = 0.04), as very influential factors in determining the timing of parenting.
Table III. Factors influencing the timing of childbearing, by men's age group.
The following factors were very influential in men's desire about when to parent(a) | Overall (n = 495), n (%) | 20–24 years (n = 135), OR (95% CI) | 25–29 years (n = 116), OR (95% CI) | 30–34 years (n = 122), OR (95% CI) | 35–45 years (n = 122), OR (95% CI)
The need to be financially secure | 243 (53.5) | Ref | 0.63 (0.38–1.07) | 0.60 (0.38–1.02) | 0.32 (0.18–0.54)
Partner's interest/desire to have children | 213 (50.7) | Ref | 0.79 (0.47–1.33) | 0.71 (0.42–1.21) | 0.57 (0.33–0.99)
Partner's suitability to be a parent | 202 (48.1) | Ref | 0.63 (0.37–1.06) | 1.08 (0.63–1.84) | 0.74 (0.43–1.29)
Personal interest/desire to have children | 178 (39.4) | Ref | 0.73 (0.43–1.23) | 1.16 (0.69–1.95) | 0.73 (0.43–1.25)
Health status | 150 (33.0) | Ref | 0.58 (0.33–1.00) | 0.75 (0.43–1.28) | 0.94 (0.55–1.61)
The need for a permanent position in employment | 138 (30.7) | Ref | 0.85 (0.49–1.47) | 0.92 (0.53–1.60) | 0.65 (0.36–1.17)
The amount of time devoted to education and training | 117 (26.0) | Ref | 0.66 (0.38–1.14) | 0.40 (0.22–0.73) | 0.34 (0.18–0.64)
The amount of time devoted to career | 117 (25.8) | Ref | 0.67 (0.38–1.17) | 0.54 (0.30–0.97) | 0.62 (0.34–1.11)
The need to own a home | 98 (21.6) | Ref | 0.66 (0.37–1.20) | 0.58 (0.32–1.07) | 0.42 (0.22–0.82)
Proximity to family for social support | 65 (14.5) | Ref | 0.93 (0.47–1.81) | 0.56 (0.26–1.19) | 0.54 (0.25–1.18)
Desire to travel | 58 (12.8) | Ref | 0.52 (0.24–1.13) | 0.54 (0.25–1.16) | 0.64 (0.30–1.36)
Concerns of losing job while taking parental leave | 54 (12.0) | Ref | 0.21 (0.09–0.54) | 0.26 (0.11–0.62) | 0.55 (0.27–1.13)
Culture or faith | 53 (11.8) | Ref | 0.42 (0.19–0.93) | 0.58 (0.28–1.21) | 0.27 (0.11–0.70)
Feeling of the 'biological clock' ticking | 47 (10.4) | Ref | 1.32 (0.46–3.77) | 1.89 (0.71–5.07) | 4.37 (1.78–10.76)
Concerns of not advancing in employment while taking parental leave | 29 (6.6) | Ref | 0.44 (0.15–1.30) | 0.46 (0.16–1.36) | 0.61 (0.22–1.69)
Bold values indicate statistically significant (P < 0.05). (a)Question: 'How much of the following factors would influence your decision(s) about when to parent?' Response choices included: very much, somewhat, neutral, not very much and not at all.
In the univariable (Table III) and multivariable (Table IV) analyses, financial security significantly differed by men's age group, with men aged 35–45 being less likely to rate this as a very important factor than men in the 20–24 age group (crude OR: 0.32, 95% CI: 0.18–0.54). The greatest number of significant differences were noted between the 20–24 year age group and the 35–45 year age group, with older men being less likely to rate partner's interest/desire to have children (crude OR: 0.57, 95% CI: 0.33–0.99), amount of time devoted to education and training (crude OR: 0.34, 95% CI: 0.18–0.64), the need to own a home (crude OR: 0.42, 95% CI: 0.22–0.82) and culture/faith (crude OR: 0.27, 95% CI: 0.11–0.70) as being very important in influencing their intentions regarding when to become a parent (Table III). Men in the oldest age group were significantly more likely to report that the feeling of a biological clock ticking (crude OR: 4.37, 95% CI: 1.78–10.76) was a very important factor in their childbearing intentions when compared with men in the youngest age group (Table III). As seen in the multivariable analysis (Table IV), very few demographic factors were significant predictors of factors that men deemed very influential in their childbearing intentions.
Table IV. Demographic predictors of factors that strongly influence childbearing intentions. Values are OR (95% CI).
Predictor | Financial security | Partner's interest | Partner's suitability | Personal interest | Ideal age to begin parenting ≥30
Age group: 20–24 | Ref | Ref | Ref | Ref | Ref
Age group: 25–29 | 0.68 (0.36–1.28) | 1.07 (0.56–2.03) | 0.67 (0.35–1.28) | 0.71 (0.37–1.36) | 0.97 (0.44–2.15)
Age group: 30–34 | 0.65 (0.33–1.28) | 1.21 (0.60–2.44) | 1.29 (0.64–2.60) | 1.12 (0.57–2.20) | 2.58 (1.14–5.85)
Age group: 35–45 | 0.32 (0.17–0.62) | 0.85 (0.44–1.67) | 0.76 (0.39–1.49) | 0.75 (0.39–1.45) | 3.45 (1.58–7.55)
Annual household income: <$29 999 | Ref | Ref | Ref | Ref | Ref
Annual household income: $30 000–$59 999 | 0.94 (0.49–1.80) | 1.01 (0.52–1.98) | 0.94 (0.48–1.83) | 0.64 (0.33–1.22) | 0.44 (0.20–0.99)
Annual household income: $60 000–$89 999 | 0.93 (0.45–1.90) | 1.37 (0.70–2.85) | 1.12 (0.54–2.33) | 0.73 (0.35–1.49) | 0.50 (0.21–1.22)
Annual household income: ≥$90 000 | 0.73 (0.38–1.42) | 1.31 (0.70–2.58) | 0.89 (0.45–1.77) | 1.11 (0.58–2.15) | 0.69 (0.31–1.55)
Education: did not complete post-secondary education | Ref | Ref | Ref | Ref | Ref
Education: completed post-secondary education | 0.81 (0.50–1.30) | 0.81 (0.49–1.33) | 1.31 (0.80–2.16) | 1.06 (0.65–1.73) | 1.85 (1.02–3.38)
Ethnicity: White | Ref | Ref | Ref | Ref | Ref
Ethnicity: Other | 0.92 (0.52–1.64) | 0.73 (0.40–1.34) | 0.53 (0.29–0.97) | 1.01 (0.56–1.82) | 0.55 (0.26–1.16)
Marital status: Single | Ref | Ref | Ref | Ref | Ref
Marital status: Married/common law | 1.08 (0.68–1.72) | 0.64 (0.40–1.03) | 1.04 (0.65–1.67) | 0.72 (0.45–1.16) | 0.78 (0.44–1.38)
Bold values indicate statistically significant (P < 0.05).
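The crude odds ratios and 95% confidence intervals in Table III follow the standard 2x2-table calculation: OR = (a*d)/(b*c), with a Wald interval constructed on the log scale. A small worked sketch is given below; the counts used are hypothetical placeholders, not data from this study.

# Worked illustration of a crude OR and Wald 95% CI from a 2x2 table of
# age group by whether a factor was rated "very much" influential.
import math

def crude_or_with_ci(a, b, c, d):
    """a, b: comparison group (rated very influential / not); c, d: reference group."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    return odds_ratio, lower, upper

# Hypothetical example: 40 of 122 older men vs. 85 of 135 younger men rate a factor
# as very influential.
print(crude_or_with_ci(40, 82, 85, 50))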
null
null
[ "Participants and setting", "Questionnaire", "Primary measures", "Statistical analysis", "Ethical approval", "Participants", "Plans for childbearing", "Factors influencing timing of childbearing", "Authors’ roles", "Funding" ]
[ "English-speaking men and women between the ages of 20 and 45 years, residing in Calgary and Edmonton, Alberta, Canada, without biological children at the time of contact were involved in this population-based study. Participants were recruited through a random-digit dialing technique. An urban setting was chosen as delayed childbearing was found to be more prevalent in these areas when compared with rural settings (Tough et al., 2007). Individuals without children were chosen to understand what women and men deem important prior to pregnancy, and to minimize the confounding knowledge of previous pregnancies. An overall response rate of 84% (1506/1791 conversed with and eligibility established) was obtained for this survey and only male respondents are included in this analysis.", "The questionnaire was developed by collecting information from focus groups with women, and 10 convenience interviews undertaken with men. Seventeen questions about factors that would influence the participant's desires about when to have children were answered on a five-point Likert scale, the choices being: Not At All, Not Very Much, Neutral, Somewhat and Very Much. Questions about demographics and history of family structure and function were also included. Data were collected between October 2003 and February 2004 using a computer-assisted telephone interview (Ci3 WINCATI, Sawtooth Software).", "Responses to questions about factors that influenced childbearing were collapsed into two categories: (i) Very Much and (ii) Somewhat, Neutral, Not Very Much and Not At All. This grouping allowed for the identification of only highly influential factors in the timing of childbearing.", "This analysis was completed using SPSS (Statistical Package for the Social Sciences: PC version 15.0) and significance was set at P< 0.05. Men were grouped into four age groups: 20–24 years; 25–29 years; 30–34 years; and 35–45 years. Descriptive statistics were used to describe the characteristics of study participants stratified by age group. Categorical variables were expressed as frequencies and percentages with 95% CIs. χ2 tests were used to assess for differences in participant demographics. Univariable logistic regression was used to determine whether age was significantly associated with any of the factors deemed to be very important in influencing men's childbearing intentions. Multivariable logistic regression was undertaken to determine what demographic variables (age group, annual household income, highest level of education completed, ethnicity and marital status) were associated with the four most commonly reported factors that influenced childbearing decisions and the ideal age to begin parenting.", "This study was approved by the Conjoint Health Research Ethics Board at the University of Calgary.", "The questionnaire was completed by a total of 495 men with a mean age of 30 years. Over 50% of these men had completed post-secondary education (trade, college or university level; Table I). The majority of respondents were Caucasian, non-smokers, single/never married, working for profit and having a total household income between $30 000 and $59 999 (Table I). Men in the 20–24 age groups were more likely to have completed less education, have a lower family income, and were more likely to be renting a home or living with their parents (Table I). Almost all participants (95.8%) were raised by their biological parents. 
Furthermore, a large proportion of the participants indicated that their parents were not divorced or separated by the time the participants were 16 years of age (Table I). Men aged 20–24 were more likely than the others to have had their parents divorced/separated (Table I). Most of the participants did not live within a blended family (i.e. a family consisting of a combination of step parent and step siblings) at any point in their lives (Table I). About a third had a partner (31.6%), and only 6.3% of the entire sample was currently trying to become pregnant with their partner. Eight men reported that they and their partner had sought out fertility treatments to assist them in conceiving and 13 men had step-children.
Table I. Characteristics and upbringing of participants, by men's age group.
Characteristic | Overall (n = 495), n (%) | 20–24 years (n = 135), n (%) | 25–29 years (n = 116), n (%) | 30–34 years (n = 122), n (%) | 35–45 years (n = 122), n (%) | P-value
Married or common law | 161 (32.3) | 23 (17.0) | 40 (34.5) | 50 (40.0) | 48 (39.0) | <0.001
Ethnicity | | | | | | 0.668
  Caucasian | 411 (83.0) | 111 (83.5) | 92 (79.3) | 104 (84.6) | 104 (84.6) |
  Other | 84 (17.0) | 22 (16.5) | 24 (20.7) | 19 (15.4) | 19 (15.4) |
Education completed | | | | | | <0.001
  Did not complete post-secondary education | 204 (41.0) | 104 (77.6) | 35 (30.4) | 25 (20.0) | 40 (32.3) |
  Completed post-secondary education | 294 (59.0) | 30 (22.4) | 80 (69.6) | 100 (80.0) | 84 (67.7) |
Annual household income | | | | | | <0.001
  <$29 999 | 79 (18.8) | 39 (37.1) | 17 (17.3) | 10 (9.5) | 13 (11.6) |
  $30 000–$59 999 | 143 (34.0) | 26 (24.8) | 33 (33.7) | 84 (45.7) | 36 (32.1) |
  $60 000–$89 999 | 78 (18.6) | 15 (14.3) | 20 (20.4) | 18 (17.1) | 25 (22.3) |
  $90 000 or more | 120 (28.6) | 25 (23.8) | 28 (28.6) | 29 (27.6) | 38 (33.9) |
Own home, condo or duplex | 185 (37.2) | 11 (8.1) | 35 (30.2) | 63 (50.8) | 76 (62.3) | <0.001
Main activity is working for profit | 374 (75.1) | 67 (49.6) | 85 (73.3) | 114 (91.2) | 108 (88.5) | <0.001
Smoking status | | | | | | 0.782
  Current smoker | 122 (24.4) | 36 (26.7) | 29 (25.0) | 24 (19.2) | 33 (26.8) |
  Ex-smoker | 87 (17.4) | 21 (15.6) | 19 (16.4) | 24 (19.2) | 23 (18.7) |
  Never smoked in lifetime | 290 (58.1) | 78 (57.8) | 68 (58.6) | 77 (61.6) | 67 (54.5) |
Consumed alcohol in past year | 448 (89.6) | 121 (89.6) | 109 (94.0) | 115 (92.0) | 103 (83.1) | 0.032
Parents separated before participant was 16 | 101 (20.4) | 46 (34.1) | 15 (13.0) | 23 (18.4) | 17 (14.0) | <0.001
Lived in a blended family at any time | 72 (14.4) | 26 (19.3) | 10 (8.6) | 20 (16.0) | 16 (13.0) | 0.104
Note that the denominators within the tables may vary due to: participants who may have responded 'don't know', participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.",
"Eighty-six percent of men indicated that they planned to have children; of those who did not plan to ever have children, 5% indicated that they had considered children in the past. The proportion of men who never wanted, nor planned to have children, was 9.2%. Men who did not want to become fathers were significantly more likely to be married or in a common law relationship (P = 0.02), be Caucasian (P = 0.02), be current or former smokers (P = 0.004), own their homes (P = 0.02) and be working (P = 0.04) than those who planned to have children or had considered having children in the past. More than half of men felt that the ideal age to begin parenting was before 30 (Table II) and specifically 47.8% felt it was ideal to begin parenting between the ages of 25 and 29. Men who were 30 years of age or older were more likely to indicate it was ideal to begin parenting at age 30 or older or that age was not important (Table II). Only 2% of all men believed it was ideal to begin parenting after the age of 35.
Table II. Ideal age to begin parenting, by men's age group.
Ideal age to begin parenting | Overall (n = 448), n (%) | 20–24 years (n = 127), n (%) | 25–29 years (n = 110), n (%) | 30–34 years (n = 109), n (%) | 35–45 years (n = 102), n (%) | P-value
Before 30 years of age | 233 (52.0) | 86 (67.7) | 63 (57.3) | 47 (43.1) | 37 (36.3) | <0.001
30 years of age or over | 130 (29.0) | 27 (21.3) | 29 (26.4) | 36 (33.0) | 38 (37.3) |
Age not important | 85 (19.0) | 14 (11.0) | 18 (16.4) | 26 (23.9) | 27 (26.5) |
Note that the denominators within the tables may vary due to: participants who may have responded 'don't know', participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.",
"The top four factors that were stated to be very influential in determining the timing of parenting were similar among men of all four age groups. These were financial security (53.5%), partner's interest/desire for having children (50.7%), partner suitability to parent (48.1%) and one's own interest/desire for having children (39.4%; Table III). Men who were currently married or in a common law relationship were significantly less likely to report concerns about losing their job while on parental leave (P = 0.04), and significantly more likely to report feelings of a biological clock (P = 0.04), as very influential factors in determining the timing of parenting.
Table III. Factors influencing the timing of childbearing, by men's age group.
The following factors were very influential in men's desire about when to parent(a) | Overall (n = 495), n (%) | 20–24 years (n = 135), OR (95% CI) | 25–29 years (n = 116), OR (95% CI) | 30–34 years (n = 122), OR (95% CI) | 35–45 years (n = 122), OR (95% CI)
The need to be financially secure | 243 (53.5) | Ref | 0.63 (0.38–1.07) | 0.60 (0.38–1.02) | 0.32 (0.18–0.54)
Partner's interest/desire to have children | 213 (50.7) | Ref | 0.79 (0.47–1.33) | 0.71 (0.42–1.21) | 0.57 (0.33–0.99)
Partner's suitability to be a parent | 202 (48.1) | Ref | 0.63 (0.37–1.06) | 1.08 (0.63–1.84) | 0.74 (0.43–1.29)
Personal interest/desire to have children | 178 (39.4) | Ref | 0.73 (0.43–1.23) | 1.16 (0.69–1.95) | 0.73 (0.43–1.25)
Health status | 150 (33.0) | Ref | 0.58 (0.33–1.00) | 0.75 (0.43–1.28) | 0.94 (0.55–1.61)
The need for a permanent position in employment | 138 (30.7) | Ref | 0.85 (0.49–1.47) | 0.92 (0.53–1.60) | 0.65 (0.36–1.17)
The amount of time devoted to education and training | 117 (26.0) | Ref | 0.66 (0.38–1.14) | 0.40 (0.22–0.73) | 0.34 (0.18–0.64)
The amount of time devoted to career | 117 (25.8) | Ref | 0.67 (0.38–1.17) | 0.54 (0.30–0.97) | 0.62 (0.34–1.11)
The need to own a home | 98 (21.6) | Ref | 0.66 (0.37–1.20) | 0.58 (0.32–1.07) | 0.42 (0.22–0.82)
Proximity to family for social support | 65 (14.5) | Ref | 0.93 (0.47–1.81) | 0.56 (0.26–1.19) | 0.54 (0.25–1.18)
Desire to travel | 58 (12.8) | Ref | 0.52 (0.24–1.13) | 0.54 (0.25–1.16) | 0.64 (0.30–1.36)
Concerns of losing job while taking parental leave | 54 (12.0) | Ref | 0.21 (0.09–0.54) | 0.26 (0.11–0.62) | 0.55 (0.27–1.13)
Culture or faith | 53 (11.8) | Ref | 0.42 (0.19–0.93) | 0.58 (0.28–1.21) | 0.27 (0.11–0.70)
Feeling of the 'biological clock' ticking | 47 (10.4) | Ref | 1.32 (0.46–3.77) | 1.89 (0.71–5.07) | 4.37 (1.78–10.76)
Concerns of not advancing in employment while taking parental leave | 29 (6.6) | Ref | 0.44 (0.15–1.30) | 0.46 (0.16–1.36) | 0.61 (0.22–1.69)
Bold values indicate statistically significant (P < 0.05). (a)Question: 'How much of the following factors would influence your decision(s) about when to parent?' Response choices included: very much, somewhat, neutral, not very much and not at all.
In the univariable (Table III) and multivariable (Table IV) analyses, financial security significantly differed by men's age group, with men aged 35–45 being less likely to rate this as a very important factor than men in the 20–24 age group (crude OR: 0.32, 95% CI: 0.18–0.54). The greatest number of significant differences were noted between the 20–24 year age group and the 35–45 year age group, with older men being less likely to rate partner's interest/desire to have children (crude OR: 0.57, 95% CI: 0.33–0.99), amount of time devoted to education and training (crude OR: 0.34, 95% CI: 0.18–0.64), the need to own a home (crude OR: 0.42, 95% CI: 0.22–0.82) and culture/faith (crude OR: 0.27, 95% CI: 0.11–0.70) as being very important in influencing their intentions regarding when to become a parent (Table III). Men in the oldest age group were significantly more likely to report that the feeling of a biological clock ticking (crude OR: 4.37, 95% CI: 1.78–10.76) was a very important factor in their childbearing intentions when compared with men in the youngest age group (Table III). As seen in the multivariable analysis (Table IV), very few demographic factors were significant predictors of factors that men deemed very influential in their childbearing intentions.
Table IV. Demographic predictors of factors that strongly influence childbearing intentions. Values are OR (95% CI).
Predictor | Financial security | Partner's interest | Partner's suitability | Personal interest | Ideal age to begin parenting ≥30
Age group: 20–24 | Ref | Ref | Ref | Ref | Ref
Age group: 25–29 | 0.68 (0.36–1.28) | 1.07 (0.56–2.03) | 0.67 (0.35–1.28) | 0.71 (0.37–1.36) | 0.97 (0.44–2.15)
Age group: 30–34 | 0.65 (0.33–1.28) | 1.21 (0.60–2.44) | 1.29 (0.64–2.60) | 1.12 (0.57–2.20) | 2.58 (1.14–5.85)
Age group: 35–45 | 0.32 (0.17–0.62) | 0.85 (0.44–1.67) | 0.76 (0.39–1.49) | 0.75 (0.39–1.45) | 3.45 (1.58–7.55)
Annual household income: <$29 999 | Ref | Ref | Ref | Ref | Ref
Annual household income: $30 000–$59 999 | 0.94 (0.49–1.80) | 1.01 (0.52–1.98) | 0.94 (0.48–1.83) | 0.64 (0.33–1.22) | 0.44 (0.20–0.99)
Annual household income: $60 000–$89 999 | 0.93 (0.45–1.90) | 1.37 (0.70–2.85) | 1.12 (0.54–2.33) | 0.73 (0.35–1.49) | 0.50 (0.21–1.22)
Annual household income: ≥$90 000 | 0.73 (0.38–1.42) | 1.31 (0.70–2.58) | 0.89 (0.45–1.77) | 1.11 (0.58–2.15) | 0.69 (0.31–1.55)
Education: did not complete post-secondary education | Ref | Ref | Ref | Ref | Ref
Education: completed post-secondary education | 0.81 (0.50–1.30) | 0.81 (0.49–1.33) | 1.31 (0.80–2.16) | 1.06 (0.65–1.73) | 1.85 (1.02–3.38)
Ethnicity: White | Ref | Ref | Ref | Ref | Ref
Ethnicity: Other | 0.92 (0.52–1.64) | 0.73 (0.40–1.34) | 0.53 (0.29–0.97) | 1.01 (0.56–1.82) | 0.55 (0.26–1.16)
Marital status: Single | Ref | Ref | Ref | Ref | Ref
Marital status: Married/common law | 1.08 (0.68–1.72) | 0.64 (0.40–1.03) | 1.04 (0.65–1.67) | 0.72 (0.45–1.16) | 0.78 (0.44–1.38)
Bold values indicate statistically significant (P < 0.05).",
"S.C.T. conceived of and secured funding for this study. Data analysis was conducted by E.R., A.M. and M.J.; E.R. drafted the manuscript. All authors participated in the interpretation of data, revised the manuscript and approved the final version of the manuscript that is now being submitted for publication.",
"Funding for this study and salary support for S.C.T. was provided by Alberta Innovates Health Solutions (formerly the Alberta Heritage Foundation for Medical Research). E.R. received O'Brien Centre Summer Studentship Funding (summer 2009). A.M. is supported by a Canadian Institutes of Health Research (CIHR) Doctoral Award in Genetics (Ethics, Law and Society) and a CIHR Strategic Training Grant in Genes, Development and Child Health. Funding to pay the Open Access publication charges for this article was provided by the Open Access Authors Fund, Libraries and Cultural Resources, University of Calgary." ]
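The analysis described under "Statistical analysis" above was run in SPSS. The sketch below is an illustrative Python equivalent, not the authors' code, with hypothetical column names: it collapses the five-point Likert responses to a binary outcome ("Very Much" vs. all other responses) and fits a univariable logistic regression of that outcome on age group to obtain crude ORs with 95% CIs, as in Table III.

# Illustrative dichotomization of a Likert item and univariable logistic regression
# yielding crude odds ratios by age group (reference: 20-24 years).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def crude_ors_by_age(df: pd.DataFrame, factor_col: str) -> pd.DataFrame:
    """df has a five-point Likert column `factor_col` and an 'age_group' column
    with categories '20-24', '25-29', '30-34', '35-45' (hypothetical names)."""
    data = df.copy()
    # Collapse the Likert scale: 'Very Much' vs. all other responses.
    data["very_influential"] = (data[factor_col] == "Very Much").astype(int)
    model = smf.logit(
        "very_influential ~ C(age_group, Treatment(reference='20-24'))", data=data
    ).fit(disp=False)
    # Exponentiate coefficients and confidence limits to obtain ORs with 95% CIs.
    ors = pd.DataFrame({
        "OR": np.exp(model.params),
        "CI_low": np.exp(model.conf_int()[0]),
        "CI_high": np.exp(model.conf_int()[1]),
    })
    return ors.drop(index="Intercept")

# Usage: crude_ors_by_age(survey_df, "financial_security")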
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and Methods", "Participants and setting", "Questionnaire", "Primary measures", "Statistical analysis", "Ethical approval", "Results", "Participants", "Plans for childbearing", "Factors influencing timing of childbearing", "Discussion", "Authors’ roles", "Funding" ]
[ "The role of men in the childbearing decision process, as well as the influences of paternal age on birth outcomes, have not been explored within the literature to the same extent as maternal factors (Chalmers and Meyer, 1996; Dudgeon and Inhorn, 2004). It is well known that the average age of childbearing among women has increased steadily over the past 20 years in developed countries, yet a similar trend seems to be occurring among men who are becoming fathers (Bray et al., 2006; Tough et al., 2007). For instance, statistics from England and Wales report that in 1993 fathers aged 35 years or over accounted for 25% of live births within marriage, which increased to 40% in 2003 (Bray et al., 2006).\nThe association between advanced maternal age and adverse birth outcomes has long been recognized, which has led to some concern regarding the trend towards having children later in life. Paternal age, on the other hand, has received less attention although some research has found that men older than 35 years are twice as likely to be infertile as men younger than 25 (Ford et al., 2000). Some studies have also found associations between advanced paternal age and the risk of autism spectrum disorder (Reichenberg et al., 2006), schizophrenia (Malaspina et al., 2001), Down syndrome and other chromosomal anomalies (Fisch et al., 2003), autosomal dominant mutations (Friedman, 1981), congenital anomalies (Yang et al., 2006), preterm birth and low-birthweight (Zhu et al., 2005; Astolfi et al., 2006; Reichman and Teitler, 2006), miscarriage and fetal death (de la Rochebrochard and Thonneau, 2002). However, the associations reported between advanced paternal age and adverse birth outcomes have been somewhat inconsistent within the literature (Chen et al., 2008; Sartorius and Nieschlag, 2010). This could be due to a limited understanding of the factors that influence male fertility as well as inadequate control of confounding factors (Chen et al., 2008; Sartorius and Nieschlag, 2010).\nWith evidence indicating that advanced parental age impacts birth outcomes, it is important to understand how the delay in childbearing comes about. Previous studies have shown that the male partner's intentions and desires can affect the timing of first pregnancy as well as women's desire for becoming pregnant (Chalmers and Meyer, 1996; Lazarus, 1997). One study found that women's desire to conceive is closely related to their evaluation of their particular relationship (Zabin et al., 2000) and other studies found that men play an important role in influencing the reproductive health behaviors of women both directly and indirectly (Thomson, 1997; Dudgeon and Inhorn, 2004). A longitudinal study conducted by Thomson (1997) concluded that husbands and wives desires to have a child were equally influential when examining a couple's births (Thomson, 1997). This study found when only one partner (male or female) wants to have a child, the birth rate is approximately half of that observed when both partners want to have a child (Thomson, 1997).\nWith regards to the timing of childbearing, much of the literature has focused on factors that influence women's intentions of when to have children. Recently, some studies have emerged that are beginning to shed light on men's perspectives, although a number of these studies have been drawn from specific populations (e.g. 
university students, those on low-income seeking reproductive health care) as opposed to a broader community population (Lampic et al., 2006; Virtala et al., 2006; Foster et al., 2008). Understanding the perspectives of men from a broader community population with regards to the timing of childbearing will provide a more comprehensive picture of the factors contributing to the growing number of people who are having children after age 35. This study was undertaken among a broad sample of men to address the following objectives: (i) to describe the factors that strongly influence the childbearing intentions of men and (ii) to describe differences in these factors according to men's age group.", "[SUBTITLE] Participants and setting [SUBSECTION] English-speaking men and women between the ages of 20 and 45 years, residing in Calgary and Edmonton, Alberta, Canada, without biological children at the time of contact were involved in this population-based study. Participants were recruited through a random-digit dialing technique. An urban setting was chosen as delayed childbearing was found to be more prevalent in these areas when compared with rural settings (Tough et al., 2007). Individuals without children were chosen to understand what women and men deem important prior to pregnancy, and to minimize the confounding knowledge of previous pregnancies. An overall response rate of 84% (1506/1791 conversed with and eligibility established) was obtained for this survey and only male respondents are included in this analysis.\nEnglish-speaking men and women between the ages of 20 and 45 years, residing in Calgary and Edmonton, Alberta, Canada, without biological children at the time of contact were involved in this population-based study. Participants were recruited through a random-digit dialing technique. An urban setting was chosen as delayed childbearing was found to be more prevalent in these areas when compared with rural settings (Tough et al., 2007). Individuals without children were chosen to understand what women and men deem important prior to pregnancy, and to minimize the confounding knowledge of previous pregnancies. An overall response rate of 84% (1506/1791 conversed with and eligibility established) was obtained for this survey and only male respondents are included in this analysis.\n[SUBTITLE] Questionnaire [SUBSECTION] The questionnaire was developed by collecting information from focus groups with women, and 10 convenience interviews undertaken with men. Seventeen questions about factors that would influence the participant's desires about when to have children were answered on a five-point Likert scale, the choices being: Not At All, Not Very Much, Neutral, Somewhat and Very Much. Questions about demographics and history of family structure and function were also included. Data were collected between October 2003 and February 2004 using a computer-assisted telephone interview (Ci3 WINCATI, Sawtooth Software).\nThe questionnaire was developed by collecting information from focus groups with women, and 10 convenience interviews undertaken with men. Seventeen questions about factors that would influence the participant's desires about when to have children were answered on a five-point Likert scale, the choices being: Not At All, Not Very Much, Neutral, Somewhat and Very Much. Questions about demographics and history of family structure and function were also included. 
Data were collected between October 2003 and February 2004 using a computer-assisted telephone interview (Ci3 WINCATI, Sawtooth Software).\n[SUBTITLE] Primary measures [SUBSECTION] Responses to questions about factors that influenced childbearing were collapsed into two categories: (i) Very Much and (ii) Somewhat, Neutral, Not Very Much and Not At All. This grouping allowed for the identification of only highly influential factors in the timing of childbearing.\nResponses to questions about factors that influenced childbearing were collapsed into two categories: (i) Very Much and (ii) Somewhat, Neutral, Not Very Much and Not At All. This grouping allowed for the identification of only highly influential factors in the timing of childbearing.\n[SUBTITLE] Statistical analysis [SUBSECTION] This analysis was completed using SPSS (Statistical Package for the Social Sciences: PC version 15.0) and significance was set at P< 0.05. Men were grouped into four age groups: 20–24 years; 25–29 years; 30–34 years; and 35–45 years. Descriptive statistics were used to describe the characteristics of study participants stratified by age group. Categorical variables were expressed as frequencies and percentages with 95% CIs. χ2 tests were used to assess for differences in participant demographics. Univariable logistic regression was used to determine whether age was significantly associated with any of the factors deemed to be very important in influencing men's childbearing intentions. Multivariable logistic regression was undertaken to determine what demographic variables (age group, annual household income, highest level of education completed, ethnicity and marital status) were associated with the four most commonly reported factors that influenced childbearing decisions and the ideal age to begin parenting.\nThis analysis was completed using SPSS (Statistical Package for the Social Sciences: PC version 15.0) and significance was set at P< 0.05. Men were grouped into four age groups: 20–24 years; 25–29 years; 30–34 years; and 35–45 years. Descriptive statistics were used to describe the characteristics of study participants stratified by age group. Categorical variables were expressed as frequencies and percentages with 95% CIs. χ2 tests were used to assess for differences in participant demographics. Univariable logistic regression was used to determine whether age was significantly associated with any of the factors deemed to be very important in influencing men's childbearing intentions. Multivariable logistic regression was undertaken to determine what demographic variables (age group, annual household income, highest level of education completed, ethnicity and marital status) were associated with the four most commonly reported factors that influenced childbearing decisions and the ideal age to begin parenting.\n[SUBTITLE] Ethical approval [SUBSECTION] This study was approved by the Conjoint Health Research Ethics Board at the University of Calgary.\nThis study was approved by the Conjoint Health Research Ethics Board at the University of Calgary.", "English-speaking men and women between the ages of 20 and 45 years, residing in Calgary and Edmonton, Alberta, Canada, without biological children at the time of contact were involved in this population-based study. Participants were recruited through a random-digit dialing technique. An urban setting was chosen as delayed childbearing was found to be more prevalent in these areas when compared with rural settings (Tough et al., 2007). 
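As an aside to the statistical analysis described above, the sketch below shows how the Likert collapsing and the univariable/multivariable logistic regressions could be reproduced in Python with pandas and statsmodels. The DataFrame df, the placeholder file name and the column names (financial_security, age_group, income, education, ethnicity, marital_status) are hypothetical illustrations, not objects from the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data: one row per respondent.
df = pd.read_csv("survey_men.csv")  # placeholder file name

# Collapse the five-point Likert scale into a binary indicator:
# 1 = 'Very Much', 0 = all other responses.
df["fin_sec_very_much"] = (df["financial_security"] == "Very Much").astype(int)

# Univariable logistic regression: age group only, 20-24 years as the reference.
uni = smf.logit(
    "fin_sec_very_much ~ C(age_group, Treatment(reference='20-24'))", data=df
).fit()

# Multivariable model adding the other demographic covariates.
multi = smf.logit(
    "fin_sec_very_much ~ C(age_group, Treatment(reference='20-24')) "
    "+ C(income) + C(education) + C(ethnicity) + C(marital_status)",
    data=df,
).fit()

# Convert coefficients to odds ratios with 95% confidence intervals.
ors = np.exp(multi.params)
ci = np.exp(multi.conf_int())
print(pd.concat([ors.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```

The chi-square comparisons of participant demographics could be run analogously on the relevant cross-tabulations, for example with scipy.stats.chi2_contingency.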
Individuals without children were chosen to understand what women and men deem important prior to pregnancy, and to minimize the confounding knowledge of previous pregnancies. An overall response rate of 84% (1506/1791 conversed with and eligibility established) was obtained for this survey and only male respondents are included in this analysis.", "The questionnaire was developed by collecting information from focus groups with women, and 10 convenience interviews undertaken with men. Seventeen questions about factors that would influence the participant's desires about when to have children were answered on a five-point Likert scale, the choices being: Not At All, Not Very Much, Neutral, Somewhat and Very Much. Questions about demographics and history of family structure and function were also included. Data were collected between October 2003 and February 2004 using a computer-assisted telephone interview (Ci3 WINCATI, Sawtooth Software).", "Responses to questions about factors that influenced childbearing were collapsed into two categories: (i) Very Much and (ii) Somewhat, Neutral, Not Very Much and Not At All. This grouping allowed for the identification of only highly influential factors in the timing of childbearing.", "This analysis was completed using SPSS (Statistical Package for the Social Sciences: PC version 15.0) and significance was set at P< 0.05. Men were grouped into four age groups: 20–24 years; 25–29 years; 30–34 years; and 35–45 years. Descriptive statistics were used to describe the characteristics of study participants stratified by age group. Categorical variables were expressed as frequencies and percentages with 95% CIs. χ2 tests were used to assess for differences in participant demographics. Univariable logistic regression was used to determine whether age was significantly associated with any of the factors deemed to be very important in influencing men's childbearing intentions. Multivariable logistic regression was undertaken to determine what demographic variables (age group, annual household income, highest level of education completed, ethnicity and marital status) were associated with the four most commonly reported factors that influenced childbearing decisions and the ideal age to begin parenting.", "This study was approved by the Conjoint Health Research Ethics Board at the University of Calgary.", "[SUBTITLE] Participants [SUBSECTION] The questionnaire was completed by a total of 495 men with a mean age of 30 years. Over 50% of these men had completed post-secondary education (trade, college or university level; Table I). The majority of respondents were Caucasian, non-smokers, single/never married, working for profit and having a total household income between $30 000 and $59 999 (Table I). Men in the 20–24 age groups were more likely to have completed less education, have a lower family income, and were more likely to be renting a home or living with their parents (Table I). Almost all participants (95.8%) were raised by their biological parents. Furthermore, a large proportion of the participants indicated that their parents were not divorced or separated by the time the participants were 16 years of age (Table I). Men aged 20–24 were more likely than the others to have had their parents divorced/separated (Table I). Most of the participants did not live within a blended family (i.e. a family consisting of a combination of step parent and step siblings) at any point in their lives (Table I). 
About a third had a partner (31.6%), and only 6.3% of the entire sample was currently trying to become pregnant with their partner. Eight men reported that they and their partner had sought out fertility treatments to assist them in conceiving and 13 men had step-children.\nTable ICharacteristics and upbringing of participants, by men's age group.CharacteristicOverall (n = 495), n (%)20–24 Years (n = 135), n (%)25–29 Years (n = 116), n (%)30–34 Years (n = 122), n (%)35–45 Years (n = 122), n (%)P-valueMarried or common law161 (32.3)23 (17.0)40 (34.5)50 (40.0)48 (39.0)<0.001Ethnicity0.668 Caucasian411 (83.0)111 (83.5)92 (79.3)104 (84.6)104 (84.6) Other84 (17.0)22 (16.5)24 (20.7)19 (15.4)19 (15.4)Education completed<0.001 Did not complete post-secondary education204 (41.0)104 (77.6)35 (30.4)25 (20.0)40 (32.3) Completed post-secondary education294 (59.0)30 (22.4)80 (69.6)100 (80.0)84 (67.7)Annual household income<0.001 <$29 99979 (18.8)39 (37.1)17 (17.3)10 (9.5)13 (11.6) $30 000–$59 999143 (34.0)26 (24.8)33 (33.7)84 (45.7)36 (32.1) $60 000–$89 99978 (18.6)15 (14.3)20 (20.4)18 (17.1)25 (22.3) $90000 or more120 (28.6)25 (23.8)28 (28.6)29 (27.6)38 (33.9)Own home, condo or duplex185 (37.2)11 (8.1)35 (30.2)63 (50.8)76 (62.3)<0.001Main activity is working for profit374 (75.1)67 (49.6)85 (73.3)114 (91.2)108 (88.5)<0.001Smoking status0.782 Current smoker122 (24.4)36 (26.7)29 (25.0)24 (19.2)33 (26.8) Ex-smoker87 (17.4)21 (15.6)19 (16.4)24 (19.2)23 (18.7) Never smoked in lifetime290 (58.1)78 (57.8)68 (58.6)77 (61.6)67 (54.5)Consumed alcohol in past year448 (89.6)121 (89.6)109 (94.0)115 (92.0)103 (83.1)0.032Parents separated before participant was 16101 (20.4)46 (34.1)15 (13.0)23 (18.4)17 (14.0)<0.001Lived in a blended family at any time72 (14.4)26 (19.3)10 (8.6)20 (16.0)16 (13.0)0.104Note that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.\nCharacteristics and upbringing of participants, by men's age group.\nNote that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.\nThe questionnaire was completed by a total of 495 men with a mean age of 30 years. Over 50% of these men had completed post-secondary education (trade, college or university level; Table I). The majority of respondents were Caucasian, non-smokers, single/never married, working for profit and having a total household income between $30 000 and $59 999 (Table I). Men in the 20–24 age groups were more likely to have completed less education, have a lower family income, and were more likely to be renting a home or living with their parents (Table I). Almost all participants (95.8%) were raised by their biological parents. Furthermore, a large proportion of the participants indicated that their parents were not divorced or separated by the time the participants were 16 years of age (Table I). Men aged 20–24 were more likely than the others to have had their parents divorced/separated (Table I). Most of the participants did not live within a blended family (i.e. 
a family consisting of a combination of step parent and step siblings) at any point in their lives (Table I). About a third had a partner (31.6%), and only 6.3% of the entire sample was currently trying to become pregnant with their partner. Eight men reported that they and their partner had sought out fertility treatments to assist them in conceiving and 13 men had step-children.\nTable ICharacteristics and upbringing of participants, by men's age group.CharacteristicOverall (n = 495), n (%)20–24 Years (n = 135), n (%)25–29 Years (n = 116), n (%)30–34 Years (n = 122), n (%)35–45 Years (n = 122), n (%)P-valueMarried or common law161 (32.3)23 (17.0)40 (34.5)50 (40.0)48 (39.0)<0.001Ethnicity0.668 Caucasian411 (83.0)111 (83.5)92 (79.3)104 (84.6)104 (84.6) Other84 (17.0)22 (16.5)24 (20.7)19 (15.4)19 (15.4)Education completed<0.001 Did not complete post-secondary education204 (41.0)104 (77.6)35 (30.4)25 (20.0)40 (32.3) Completed post-secondary education294 (59.0)30 (22.4)80 (69.6)100 (80.0)84 (67.7)Annual household income<0.001 <$29 99979 (18.8)39 (37.1)17 (17.3)10 (9.5)13 (11.6) $30 000–$59 999143 (34.0)26 (24.8)33 (33.7)84 (45.7)36 (32.1) $60 000–$89 99978 (18.6)15 (14.3)20 (20.4)18 (17.1)25 (22.3) $90000 or more120 (28.6)25 (23.8)28 (28.6)29 (27.6)38 (33.9)Own home, condo or duplex185 (37.2)11 (8.1)35 (30.2)63 (50.8)76 (62.3)<0.001Main activity is working for profit374 (75.1)67 (49.6)85 (73.3)114 (91.2)108 (88.5)<0.001Smoking status0.782 Current smoker122 (24.4)36 (26.7)29 (25.0)24 (19.2)33 (26.8) Ex-smoker87 (17.4)21 (15.6)19 (16.4)24 (19.2)23 (18.7) Never smoked in lifetime290 (58.1)78 (57.8)68 (58.6)77 (61.6)67 (54.5)Consumed alcohol in past year448 (89.6)121 (89.6)109 (94.0)115 (92.0)103 (83.1)0.032Parents separated before participant was 16101 (20.4)46 (34.1)15 (13.0)23 (18.4)17 (14.0)<0.001Lived in a blended family at any time72 (14.4)26 (19.3)10 (8.6)20 (16.0)16 (13.0)0.104Note that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.\nCharacteristics and upbringing of participants, by men's age group.\nNote that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.\n[SUBTITLE] Plans for childbearing [SUBSECTION] Eighty-six percent of men indicated that they planned to have children; of those who did not plan to ever have children, 5% indicated that they had considered children in the past. The proportion of men who never wanted, nor planned to have children, was 9.2%. Men who did not want to become fathers were significantly more likely to be married or in a common law relationship (P= 0.02), be Caucasian (P= 0.02), be current or former smokers (P= 0.004), own their homes (P= 0.02) and be working (P= 0.04) than those who planned to have children or had considered having children in the past.\nMore than half of men felt that the ideal age to begin parenting was before 30 (Table II) and specifically 47.8% felt it was ideal to begin parenting between the ages of 25 and 29. 
Men who were 30 years of age or older were more likely to indicate it was ideal to begin parenting at age 30 or older or that age was not important (Table II). Only 2% of all men believed it was ideal to begin parenting after the age of 35.\nTable IIIdeal age to begin parenting, by men's age group.Overall (n = 448), n (%)20–24 Years (n = 127), n (%)25–29 Years (n = 110), n (%)30–34 Years (n = 109), n (%)35–45 Years (n = 102), n (%)P-valueIdeal Age to begin parenting<0.001Before 30 years of age233 (52.0)86 (67.7)63 (57.3)47 (43.1)37 (36.3)30 years of age or over130 (29.0)27 (21.3)29 (26.4)36 (33.0)38 (37.3)Age not important85 (19.0)14 (11.0)18 (16.4)26 (23.9)27 (26.5)Note that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.\nIdeal age to begin parenting, by men's age group.\nNote that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.\nEighty-six percent of men indicated that they planned to have children; of those who did not plan to ever have children, 5% indicated that they had considered children in the past. The proportion of men who never wanted, nor planned to have children, was 9.2%. Men who did not want to become fathers were significantly more likely to be married or in a common law relationship (P= 0.02), be Caucasian (P= 0.02), be current or former smokers (P= 0.004), own their homes (P= 0.02) and be working (P= 0.04) than those who planned to have children or had considered having children in the past.\nMore than half of men felt that the ideal age to begin parenting was before 30 (Table II) and specifically 47.8% felt it was ideal to begin parenting between the ages of 25 and 29. Men who were 30 years of age or older were more likely to indicate it was ideal to begin parenting at age 30 or older or that age was not important (Table II). Only 2% of all men believed it was ideal to begin parenting after the age of 35.\nTable IIIdeal age to begin parenting, by men's age group.Overall (n = 448), n (%)20–24 Years (n = 127), n (%)25–29 Years (n = 110), n (%)30–34 Years (n = 109), n (%)35–45 Years (n = 102), n (%)P-valueIdeal Age to begin parenting<0.001Before 30 years of age233 (52.0)86 (67.7)63 (57.3)47 (43.1)37 (36.3)30 years of age or over130 (29.0)27 (21.3)29 (26.4)36 (33.0)38 (37.3)Age not important85 (19.0)14 (11.0)18 (16.4)26 (23.9)27 (26.5)Note that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.\nIdeal age to begin parenting, by men's age group.\nNote that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. 
Thus, the percentages given in brackets are the valid percentages.\n[SUBTITLE] Factors influencing timing of childbearing [SUBSECTION] The top four factors that were stated to be very influential in determining the timing of parenting were similar among men of all four age groups. These were financial security (53.5%), partner's interest/desire for having children (50.7%), partner suitability to parent (48.1%) and one's own interest/desire for having children (39.4%; Table III). Men who were currently married or in a common law relationship were significantly less likely to report that concerns about losing their job while on parental leave (P= 0.04) and were significantly more likely to report that feelings of a biological clock (P= 0.04) were very influential factors in determining the timing of parenting.\nTable IIIFactors influencing the timing of childbearing, by men's age group.The following factors were very influential in men's desire about when to parentaOverall (n = 495), n (%)20–24 Years (n = 135) OR (95% CI)25–29 Years (n = 116), OR (95% CI)30–34 Years (n = 122) OR (95% CI)35–45 Years (n = 122) OR (95% CI)The need to be financially secure243 (53.5)Ref0.63 (0.38–1.07)0.60 (0.38–1.02)0.32 (0.18–0.54)Partner's interest/desire to have children213 (50.7)Ref0.79 (0.47–1.33)0.71 (0.42–1.21)0.57 (0.33–0.99)Partner's suitability to be a parent202 (48.1)Ref0.63 (0.37–1.06)1.08 (0.63–1.84)0.74 (0.43–1.29)Personal interest/desire to have children178 (39.4)Ref0.73 (0.43–1.23)1.16 (0.69–1.95)0.73 (0.43–1.25)Health status150 (33.0)Ref0.58 (0.33–1.00)0.75 (0.43–1.28)0.94 (0.55–1.61)The need for a permanent position in employment138 (30.7)Ref0.85 (0.49–1.47)0.92 (0.53–1.60)0.65 (0.36–1.17)The amount of time devoted to education and training117 (26.0)Ref0.66 (0.38–1.14)0.40 (0.22–0.73)0.34 (0.18–0.64)The amount of time devoted to career117 (25.8)Ref0.67 (0.38–1.17)0.54 (0.30–0.97)0.62 (0.34–1.11)The need to own a home98 (21.6)Ref0.66 (0.37–1.20)0.58 (0.32–1.07)0.42 (0.22–0.82)Proximity to family for social support65 (14.5)Ref0.93 (0.47–1.81)0.56 (0.26–1.19)0.54 (0.25–1.18)Desire to travel58 (12.8)Ref0.52 (0.24–1.13)0.54 (0.25–1.16)0.64 (0.30–1.36)Concerns of losing job while taking parental leave54 (12.0)Ref0.21 (0.09–0.54)0.26 (0.11–0.62)0.55 (0.27–1.13)Culture or faith53 (11.8)Ref0.42 (0.19–0.93)0.58 (0.28–1.21)0.27 (0.11–0.70)Feeling of the ‘biological clock’ ticking47 (10.4)Ref1.32 (0.46–3.77)1.89 (0.71–5.07)4.37 (1.78–10.76)Concerns of not advancing in employment while taking parental leave29 (6.6)Ref0.44 (0.15–1.30)0.46 (0.16–1.36)0.61 (0.22–1.69)Bold values indicate statistically significant (P < 0.05).Question: ‘How much of the following factors would influence your decision(s) about when to parent?’ Response choices included: very much, somewhat, neutral, not very much and not at all.\nFactors influencing the timing of childbearing, by men's age group.\nBold values indicate statistically significant (P < 0.05).\nQuestion: ‘How much of the following factors would influence your decision(s) about when to parent?’ Response choices included: very much, somewhat, neutral, not very much and not at all.\nIn the univariable (Table III) and multivariable (Table IV) analyses, financial security significantly differed by men's age group, with men aged 35–45 being less likely to rate this as a very important factor than men in the 20–24 age group (crude OR: 0.32, 95% CI: 0.18–0.54). 
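For readers who want to check how a crude OR and its Wald 95% CI follow from a 2x2 table of counts, a minimal sketch is given below; the cell counts in the example call are illustrative placeholders rather than the study's actual counts.

```python
import math

def crude_or(a, b, c, d):
    """Crude odds ratio with Wald 95% CI from a 2x2 table.

    a = exposed respondents answering 'Very Much', b = exposed others,
    c = reference-group respondents answering 'Very Much', d = reference-group others.
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

# Illustrative counts only: men aged 35-45 versus the 20-24 reference group.
print(crude_or(a=40, b=82, c=80, d=55))
```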
The greatest number of significant differences were noted between the 20–24 year age group and the 35–45 year age group, with older men being less likely to rate partner's interest/desire to have children (crude OR: 0.57, 95% CI: 0.33–0.99), amount of time devoted to education and training (crude OR: 0.34, 95% CI: 0.18–0.64), the need to own a home (crude OR: 0.42, 95% CI: 0.22–0.82) and culture/faith (crude OR: 0.27, 95% CI: 0.11–0.70) as being very important in influencing their intentions regarding when to become a parent (Table III). Men in the oldest age group were significantly more likely to report that the feeling of a biological clock ticking (crude OR: 4.37, 95% CI: 1.78–10.76) was a very important factor in their childbearing intentions when compared with men in the youngest age group (Table III). As seen in the multivariable analysis (Table IV), very few demographic factors were significant predictors of factors that men deemed very influential in their childbearing intentions.\nTable IVDemographic predictors of factors that strongly influence childbearing intentions.Financial securityPartner's interestPartner's suitabilityPersonal interestIdeal age to begin parenting ≥30Age group 20–24RefRefRefRefRef 25–290.68 (0.36–1.28)1.07 (0.56–2.03)0.67 (0.35–1.28)0.71 (0.37–1.36)0.97 (0.44–2.15) 30–340.65 (0.33–1.28)1.21 (0.60–2.44)1.29 (0.64–2.60)1.12 (0.57–2.20)2.58 (1.14–5.85) 35–450.32 (0.17–0.62)0.85 (0.44–1.67)0.76 (0.39–1.49)0.75 (0.39–1.45)3.45 (1.58–7.55)Annual household income <$29 999RefRefRefRefRef $30 000–$59 9990.94 (0.49–1.80)1.01 (0.52–1.98)0.94 (0.48–1.83)0.64 (0.33–1.22)0.44 (0.20–0.99) $60 000–$89 9990.93 (0.45–1.90)1.37 (0.70–2.85)1.12 (0.54–2.33)0.73 (0.35–1.49)0.50 (0.21–1.22) ≥$900000.73 (0.38–1.42)1.31 (0.70–2.58)0.89 (0.45–1.77)1.11 (0.58–2.15)0.69 (0.31–1.55)Education Did not complete post-secondary educationRefRefRefRefRef Completed post-secondary education0.81 (0.50–1.30)0.81 (0.49–1.33)1.31 (0.80–2.16)1.06 (0.65–1.73)1.85 (1.02–3.38)Ethnicity WhiteRefRefRefRefRef Other0.92 (0.52–1.64)0.73 (0.40–1.34)0.53 (0.29–0.97)1.01 (0.56–1.82)0.55 (0.26–1.16)Marital status SingleRefRefRefRefRef Married/common law1.08 (0.68–1.72)0.64 (0.40–1.03)1.04 (0.65–1.67)0.72 (0.45–1.16)0.78 (0.44–1.38)Bold values indicate statistically significant (P< 0.05).\nDemographic predictors of factors that strongly influence childbearing intentions.\nBold values indicate statistically significant (P< 0.05).\nThe top four factors that were stated to be very influential in determining the timing of parenting were similar among men of all four age groups. These were financial security (53.5%), partner's interest/desire for having children (50.7%), partner suitability to parent (48.1%) and one's own interest/desire for having children (39.4%; Table III). 
Men who were currently married or in a common law relationship were significantly less likely to report that concerns about losing their job while on parental leave (P= 0.04) and were significantly more likely to report that feelings of a biological clock (P= 0.04) were very influential factors in determining the timing of parenting.\nTable IIIFactors influencing the timing of childbearing, by men's age group.The following factors were very influential in men's desire about when to parentaOverall (n = 495), n (%)20–24 Years (n = 135) OR (95% CI)25–29 Years (n = 116), OR (95% CI)30–34 Years (n = 122) OR (95% CI)35–45 Years (n = 122) OR (95% CI)The need to be financially secure243 (53.5)Ref0.63 (0.38–1.07)0.60 (0.38–1.02)0.32 (0.18–0.54)Partner's interest/desire to have children213 (50.7)Ref0.79 (0.47–1.33)0.71 (0.42–1.21)0.57 (0.33–0.99)Partner's suitability to be a parent202 (48.1)Ref0.63 (0.37–1.06)1.08 (0.63–1.84)0.74 (0.43–1.29)Personal interest/desire to have children178 (39.4)Ref0.73 (0.43–1.23)1.16 (0.69–1.95)0.73 (0.43–1.25)Health status150 (33.0)Ref0.58 (0.33–1.00)0.75 (0.43–1.28)0.94 (0.55–1.61)The need for a permanent position in employment138 (30.7)Ref0.85 (0.49–1.47)0.92 (0.53–1.60)0.65 (0.36–1.17)The amount of time devoted to education and training117 (26.0)Ref0.66 (0.38–1.14)0.40 (0.22–0.73)0.34 (0.18–0.64)The amount of time devoted to career117 (25.8)Ref0.67 (0.38–1.17)0.54 (0.30–0.97)0.62 (0.34–1.11)The need to own a home98 (21.6)Ref0.66 (0.37–1.20)0.58 (0.32–1.07)0.42 (0.22–0.82)Proximity to family for social support65 (14.5)Ref0.93 (0.47–1.81)0.56 (0.26–1.19)0.54 (0.25–1.18)Desire to travel58 (12.8)Ref0.52 (0.24–1.13)0.54 (0.25–1.16)0.64 (0.30–1.36)Concerns of losing job while taking parental leave54 (12.0)Ref0.21 (0.09–0.54)0.26 (0.11–0.62)0.55 (0.27–1.13)Culture or faith53 (11.8)Ref0.42 (0.19–0.93)0.58 (0.28–1.21)0.27 (0.11–0.70)Feeling of the ‘biological clock’ ticking47 (10.4)Ref1.32 (0.46–3.77)1.89 (0.71–5.07)4.37 (1.78–10.76)Concerns of not advancing in employment while taking parental leave29 (6.6)Ref0.44 (0.15–1.30)0.46 (0.16–1.36)0.61 (0.22–1.69)Bold values indicate statistically significant (P < 0.05).Question: ‘How much of the following factors would influence your decision(s) about when to parent?’ Response choices included: very much, somewhat, neutral, not very much and not at all.\nFactors influencing the timing of childbearing, by men's age group.\nBold values indicate statistically significant (P < 0.05).\nQuestion: ‘How much of the following factors would influence your decision(s) about when to parent?’ Response choices included: very much, somewhat, neutral, not very much and not at all.\nIn the univariable (Table III) and multivariable (Table IV) analyses, financial security significantly differed by men's age group, with men aged 35–45 being less likely to rate this as a very important factor than men in the 20–24 age group (crude OR: 0.32, 95% CI: 0.18–0.54). The greatest number of significant differences were noted between the 20–24 year age group and the 35–45 year age group, with older men being less likely to rate partner's interest/desire to have children (crude OR: 0.57, 95% CI: 0.33–0.99), amount of time devoted to education and training (crude OR: 0.34, 95% CI: 0.18–0.64), the need to own a home (crude OR: 0.42, 95% CI: 0.22–0.82) and culture/faith (crude OR: 0.27, 95% CI: 0.11–0.70) as being very important in influencing their intentions regarding when to become a parent (Table III). 
Men in the oldest age group were significantly more likely to report that the feeling of a biological clock ticking (crude OR: 4.37, 95% CI: 1.78–10.76) was a very important factor in their childbearing intentions when compared with men in the youngest age group (Table III). As seen in the multivariable analysis (Table IV), very few demographic factors were significant predictors of factors that men deemed very influential in their childbearing intentions.\nTable IVDemographic predictors of factors that strongly influence childbearing intentions.Financial securityPartner's interestPartner's suitabilityPersonal interestIdeal age to begin parenting ≥30Age group 20–24RefRefRefRefRef 25–290.68 (0.36–1.28)1.07 (0.56–2.03)0.67 (0.35–1.28)0.71 (0.37–1.36)0.97 (0.44–2.15) 30–340.65 (0.33–1.28)1.21 (0.60–2.44)1.29 (0.64–2.60)1.12 (0.57–2.20)2.58 (1.14–5.85) 35–450.32 (0.17–0.62)0.85 (0.44–1.67)0.76 (0.39–1.49)0.75 (0.39–1.45)3.45 (1.58–7.55)Annual household income <$29 999RefRefRefRefRef $30 000–$59 9990.94 (0.49–1.80)1.01 (0.52–1.98)0.94 (0.48–1.83)0.64 (0.33–1.22)0.44 (0.20–0.99) $60 000–$89 9990.93 (0.45–1.90)1.37 (0.70–2.85)1.12 (0.54–2.33)0.73 (0.35–1.49)0.50 (0.21–1.22) ≥$900000.73 (0.38–1.42)1.31 (0.70–2.58)0.89 (0.45–1.77)1.11 (0.58–2.15)0.69 (0.31–1.55)Education Did not complete post-secondary educationRefRefRefRefRef Completed post-secondary education0.81 (0.50–1.30)0.81 (0.49–1.33)1.31 (0.80–2.16)1.06 (0.65–1.73)1.85 (1.02–3.38)Ethnicity WhiteRefRefRefRefRef Other0.92 (0.52–1.64)0.73 (0.40–1.34)0.53 (0.29–0.97)1.01 (0.56–1.82)0.55 (0.26–1.16)Marital status SingleRefRefRefRefRef Married/common law1.08 (0.68–1.72)0.64 (0.40–1.03)1.04 (0.65–1.67)0.72 (0.45–1.16)0.78 (0.44–1.38)Bold values indicate statistically significant (P< 0.05).\nDemographic predictors of factors that strongly influence childbearing intentions.\nBold values indicate statistically significant (P< 0.05).", "The questionnaire was completed by a total of 495 men with a mean age of 30 years. Over 50% of these men had completed post-secondary education (trade, college or university level; Table I). The majority of respondents were Caucasian, non-smokers, single/never married, working for profit and having a total household income between $30 000 and $59 999 (Table I). Men in the 20–24 age groups were more likely to have completed less education, have a lower family income, and were more likely to be renting a home or living with their parents (Table I). Almost all participants (95.8%) were raised by their biological parents. Furthermore, a large proportion of the participants indicated that their parents were not divorced or separated by the time the participants were 16 years of age (Table I). Men aged 20–24 were more likely than the others to have had their parents divorced/separated (Table I). Most of the participants did not live within a blended family (i.e. a family consisting of a combination of step parent and step siblings) at any point in their lives (Table I). About a third had a partner (31.6%), and only 6.3% of the entire sample was currently trying to become pregnant with their partner. 
Eight men reported that they and their partner had sought out fertility treatments to assist them in conceiving and 13 men had step-children.\nTable ICharacteristics and upbringing of participants, by men's age group.CharacteristicOverall (n = 495), n (%)20–24 Years (n = 135), n (%)25–29 Years (n = 116), n (%)30–34 Years (n = 122), n (%)35–45 Years (n = 122), n (%)P-valueMarried or common law161 (32.3)23 (17.0)40 (34.5)50 (40.0)48 (39.0)<0.001Ethnicity0.668 Caucasian411 (83.0)111 (83.5)92 (79.3)104 (84.6)104 (84.6) Other84 (17.0)22 (16.5)24 (20.7)19 (15.4)19 (15.4)Education completed<0.001 Did not complete post-secondary education204 (41.0)104 (77.6)35 (30.4)25 (20.0)40 (32.3) Completed post-secondary education294 (59.0)30 (22.4)80 (69.6)100 (80.0)84 (67.7)Annual household income<0.001 <$29 99979 (18.8)39 (37.1)17 (17.3)10 (9.5)13 (11.6) $30 000–$59 999143 (34.0)26 (24.8)33 (33.7)84 (45.7)36 (32.1) $60 000–$89 99978 (18.6)15 (14.3)20 (20.4)18 (17.1)25 (22.3) $90000 or more120 (28.6)25 (23.8)28 (28.6)29 (27.6)38 (33.9)Own home, condo or duplex185 (37.2)11 (8.1)35 (30.2)63 (50.8)76 (62.3)<0.001Main activity is working for profit374 (75.1)67 (49.6)85 (73.3)114 (91.2)108 (88.5)<0.001Smoking status0.782 Current smoker122 (24.4)36 (26.7)29 (25.0)24 (19.2)33 (26.8) Ex-smoker87 (17.4)21 (15.6)19 (16.4)24 (19.2)23 (18.7) Never smoked in lifetime290 (58.1)78 (57.8)68 (58.6)77 (61.6)67 (54.5)Consumed alcohol in past year448 (89.6)121 (89.6)109 (94.0)115 (92.0)103 (83.1)0.032Parents separated before participant was 16101 (20.4)46 (34.1)15 (13.0)23 (18.4)17 (14.0)<0.001Lived in a blended family at any time72 (14.4)26 (19.3)10 (8.6)20 (16.0)16 (13.0)0.104Note that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.\nCharacteristics and upbringing of participants, by men's age group.\nNote that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.", "Eighty-six percent of men indicated that they planned to have children; of those who did not plan to ever have children, 5% indicated that they had considered children in the past. The proportion of men who never wanted, nor planned to have children, was 9.2%. Men who did not want to become fathers were significantly more likely to be married or in a common law relationship (P= 0.02), be Caucasian (P= 0.02), be current or former smokers (P= 0.004), own their homes (P= 0.02) and be working (P= 0.04) than those who planned to have children or had considered having children in the past.\nMore than half of men felt that the ideal age to begin parenting was before 30 (Table II) and specifically 47.8% felt it was ideal to begin parenting between the ages of 25 and 29. Men who were 30 years of age or older were more likely to indicate it was ideal to begin parenting at age 30 or older or that age was not important (Table II). 
Only 2% of all men believed it was ideal to begin parenting after the age of 35.\nTable IIIdeal age to begin parenting, by men's age group.Overall (n = 448), n (%)20–24 Years (n = 127), n (%)25–29 Years (n = 110), n (%)30–34 Years (n = 109), n (%)35–45 Years (n = 102), n (%)P-valueIdeal Age to begin parenting<0.001Before 30 years of age233 (52.0)86 (67.7)63 (57.3)47 (43.1)37 (36.3)30 years of age or over130 (29.0)27 (21.3)29 (26.4)36 (33.0)38 (37.3)Age not important85 (19.0)14 (11.0)18 (16.4)26 (23.9)27 (26.5)Note that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.\nIdeal age to begin parenting, by men's age group.\nNote that the denominators within the tables may vary due to: participants who may have responded ‘don't know’, participants who were ineligible to answer the question for various reasons, as well as participants who may not have given a response to the question. Thus, the percentages given in brackets are the valid percentages.", "The top four factors that were stated to be very influential in determining the timing of parenting were similar among men of all four age groups. These were financial security (53.5%), partner's interest/desire for having children (50.7%), partner suitability to parent (48.1%) and one's own interest/desire for having children (39.4%; Table III). Men who were currently married or in a common law relationship were significantly less likely to report that concerns about losing their job while on parental leave (P= 0.04) and were significantly more likely to report that feelings of a biological clock (P= 0.04) were very influential factors in determining the timing of parenting.\nTable IIIFactors influencing the timing of childbearing, by men's age group.The following factors were very influential in men's desire about when to parentaOverall (n = 495), n (%)20–24 Years (n = 135) OR (95% CI)25–29 Years (n = 116), OR (95% CI)30–34 Years (n = 122) OR (95% CI)35–45 Years (n = 122) OR (95% CI)The need to be financially secure243 (53.5)Ref0.63 (0.38–1.07)0.60 (0.38–1.02)0.32 (0.18–0.54)Partner's interest/desire to have children213 (50.7)Ref0.79 (0.47–1.33)0.71 (0.42–1.21)0.57 (0.33–0.99)Partner's suitability to be a parent202 (48.1)Ref0.63 (0.37–1.06)1.08 (0.63–1.84)0.74 (0.43–1.29)Personal interest/desire to have children178 (39.4)Ref0.73 (0.43–1.23)1.16 (0.69–1.95)0.73 (0.43–1.25)Health status150 (33.0)Ref0.58 (0.33–1.00)0.75 (0.43–1.28)0.94 (0.55–1.61)The need for a permanent position in employment138 (30.7)Ref0.85 (0.49–1.47)0.92 (0.53–1.60)0.65 (0.36–1.17)The amount of time devoted to education and training117 (26.0)Ref0.66 (0.38–1.14)0.40 (0.22–0.73)0.34 (0.18–0.64)The amount of time devoted to career117 (25.8)Ref0.67 (0.38–1.17)0.54 (0.30–0.97)0.62 (0.34–1.11)The need to own a home98 (21.6)Ref0.66 (0.37–1.20)0.58 (0.32–1.07)0.42 (0.22–0.82)Proximity to family for social support65 (14.5)Ref0.93 (0.47–1.81)0.56 (0.26–1.19)0.54 (0.25–1.18)Desire to travel58 (12.8)Ref0.52 (0.24–1.13)0.54 (0.25–1.16)0.64 (0.30–1.36)Concerns of losing job while taking parental leave54 (12.0)Ref0.21 (0.09–0.54)0.26 (0.11–0.62)0.55 (0.27–1.13)Culture or faith53 (11.8)Ref0.42 (0.19–0.93)0.58 (0.28–1.21)0.27 (0.11–0.70)Feeling of the ‘biological clock’ ticking47 (10.4)Ref1.32 (0.46–3.77)1.89 (0.71–5.07)4.37 
(1.78–10.76)Concerns of not advancing in employment while taking parental leave29 (6.6)Ref0.44 (0.15–1.30)0.46 (0.16–1.36)0.61 (0.22–1.69)Bold values indicate statistically significant (P < 0.05).Question: ‘How much of the following factors would influence your decision(s) about when to parent?’ Response choices included: very much, somewhat, neutral, not very much and not at all.\nFactors influencing the timing of childbearing, by men's age group.\nBold values indicate statistically significant (P < 0.05).\nQuestion: ‘How much of the following factors would influence your decision(s) about when to parent?’ Response choices included: very much, somewhat, neutral, not very much and not at all.\nIn the univariable (Table III) and multivariable (Table IV) analyses, financial security significantly differed by men's age group, with men aged 35–45 being less likely to rate this as a very important factor than men in the 20–24 age group (crude OR: 0.32, 95% CI: 0.18–0.54). The greatest number of significant differences were noted between the 20–24 year age group and the 35–45 year age group, with older men being less likely to rate partner's interest/desire to have children (crude OR: 0.57, 95% CI: 0.33–0.99), amount of time devoted to education and training (crude OR: 0.34, 95% CI: 0.18–0.64), the need to own a home (crude OR: 0.42, 95% CI: 0.22–0.82) and culture/faith (crude OR: 0.27, 95% CI: 0.11–0.70) as being very important in influencing their intentions regarding when to become a parent (Table III). Men in the oldest age group were significantly more likely to report that the feeling of a biological clock ticking (crude OR: 4.37, 95% CI: 1.78–10.76) was a very important factor in their childbearing intentions when compared with men in the youngest age group (Table III). 
As seen in the multivariable analysis (Table IV), very few demographic factors were significant predictors of factors that men deemed very influential in their childbearing intentions.\nTable IVDemographic predictors of factors that strongly influence childbearing intentions.Financial securityPartner's interestPartner's suitabilityPersonal interestIdeal age to begin parenting ≥30Age group 20–24RefRefRefRefRef 25–290.68 (0.36–1.28)1.07 (0.56–2.03)0.67 (0.35–1.28)0.71 (0.37–1.36)0.97 (0.44–2.15) 30–340.65 (0.33–1.28)1.21 (0.60–2.44)1.29 (0.64–2.60)1.12 (0.57–2.20)2.58 (1.14–5.85) 35–450.32 (0.17–0.62)0.85 (0.44–1.67)0.76 (0.39–1.49)0.75 (0.39–1.45)3.45 (1.58–7.55)Annual household income <$29 999RefRefRefRefRef $30 000–$59 9990.94 (0.49–1.80)1.01 (0.52–1.98)0.94 (0.48–1.83)0.64 (0.33–1.22)0.44 (0.20–0.99) $60 000–$89 9990.93 (0.45–1.90)1.37 (0.70–2.85)1.12 (0.54–2.33)0.73 (0.35–1.49)0.50 (0.21–1.22) ≥$900000.73 (0.38–1.42)1.31 (0.70–2.58)0.89 (0.45–1.77)1.11 (0.58–2.15)0.69 (0.31–1.55)Education Did not complete post-secondary educationRefRefRefRefRef Completed post-secondary education0.81 (0.50–1.30)0.81 (0.49–1.33)1.31 (0.80–2.16)1.06 (0.65–1.73)1.85 (1.02–3.38)Ethnicity WhiteRefRefRefRefRef Other0.92 (0.52–1.64)0.73 (0.40–1.34)0.53 (0.29–0.97)1.01 (0.56–1.82)0.55 (0.26–1.16)Marital status SingleRefRefRefRefRef Married/common law1.08 (0.68–1.72)0.64 (0.40–1.03)1.04 (0.65–1.67)0.72 (0.45–1.16)0.78 (0.44–1.38)Bold values indicate statistically significant (P< 0.05).\nDemographic predictors of factors that strongly influence childbearing intentions.\nBold values indicate statistically significant (P< 0.05).", "In this analysis, we demonstrate that in an urban setting in Canada almost 90% of men between the ages of 20 and 45 who do not currently have children, plan to become a parent at some point in their lives. These figures are similar to those reported in studies of male university students in the Nordic countries. Lampic et al. (2006) found that 97% of male university students in Sweden wanted to be fathers at some point in time (Lampic et al., 2006), while Virtala et al. (2006) showed that 87.4% of male university students in Finland wanted to become parents (Virtala et al., 2006). In the face of declining fertility rates in developing countries, it is encouraging that so many men hope to become fathers. However, a study examining male parenting desires and birth rates in eight European countries found that parenting desires outweigh actual births, with anywhere from 0.12 to 0.75 additional children per couple expected based on reported male desires (Puur et al., 2008).\nWhile most participants in the current study believed that the ideal age to begin parenting was between the ages of 25 and 29, as men became older, they had higher odds of reporting that the ideal age to begin parenting was over 30. As men approach their preconceived ideal age to parent, they may adjust their perceptions, so that they can meet other preconditions to having children (e.g. financial security, finding a suitable partner). Over half of all participants felt that financial security was a very influential factor in determining when to parent, followed by partner interest/desire to have children, and partner suitability to parent. These findings support those from previous studies conducted with women in the USA and Australia (Zabin et al., 2000; Hammarberg and Clarke, 2005; Foster et al., 2008), and studies with men in Sweden and the USA (Lampic et al., 2006; Foster et al., 2008). 
In the current study, financial security was more often reported among men aged 20–24 and less often among men aged 35–45 as being a very influential factor. However, men aged 35–45 had a significantly higher income than those aged 20–24. It may be that these older men had already obtained financial security, and thus, it was no longer seen as a prerequisite to having children. These findings could also potentially reflect career and professional stability; older men tend to have completed their education and be more established in their careers, whereas younger men tend to be in the midst of career development or completing their education.\nFrom this study, it appears that many men would like to mitigate the financial risks of having children by obtaining a certain level of financial security before starting a family. However, there are other risks to childbearing that the majority of men are unaware of. Less than 40% of men recognize the link between advanced maternal age and the increased risk for adverse birth outcomes such as low-birthweight/preterm delivery or multiple births (Tough et al., 2007). With men being highly involved in a woman's decision-making process about when to become pregnant (Chalmers and Meyer, 1996; Lazarus, 1997), there is a need for strategies to inform these individuals about the risks of waiting to have children, allowing them to make more informed decisions that weigh the benefits and risks of delaying childbearing. Although those aged 35–45 had the highest odds of reporting their ‘biological clock’ as being very influential in their childbearing intentions, only 1 in 10 participants indicated that this was an influential factor in determining when to parent. Langdridge et al. (2005) also found that biological drive was predictive of men's childbearing intentions, but this factor was a stronger predictor for women than it was for men (Langdridge et al., 2005). While we acknowledge that outside of a relationship where childbearing is possible, men cannot influence the timing of childbearing; however, by informing men and women throughout the lifespan about the risks of delayed childbearing, it is hoped that they may be more informed when making childbearing decisions within the context of a partnership.\nThe main limitation within this analysis is the potential lack of generalizability of the findings to all populations. Participants consisted mainly of urban residents, wherein the issue of delayed childbearing is most acute; therefore, these results may not reflect the feelings of men residing in rural settings. Additionally, no information is available on those who chose not to participate in the study; as such, it is possible that the findings of this study are not representative of all urban men in Alberta. It is unknown in which direction this possible selection bias may have affected the overall results. Economic and ethnic factors were fairly uniform among the participants involved in the study. The way in which this study was designed, however, allowed for random population sampling and thus the creation of a more accurate representation of an urban population. The population of men sampled is similar with regards to proportion of visible minorities, employment rates, income, educational attainment and smoking status that is seen in the 2001 Canadian Census community profiles for the cities of Calgary and Edmonton (Statistics Canada, 2003) and the 2003 Canadian Community Health Survey (Brennan et al., 2010). 
Statistical correction for multiple testing was not undertaken; hence it is possible that spurious associations may have been found by chance alone. However, as this is a descriptive study that does not aim to assess the validity of a specific hypothesis, this is less of a concern.\nAs seen in this analysis, the ideals deemed important before parenting did not differ greatly between men of different ages, yet more individuals are delaying childbearing (Bray et al., 2006; Tough et al., 2007). While achieving financial and relationship security can have many positive long-term effects on family stability and child health and development, these benefits need to be weighed against the potential costs of delayed childbearing. Ultimately, most men hope to someday become parents, but they lack information on the negative impacts delayed childbearing can have on fertility and birth outcomes (Lampic et al., 2006). To ensure that men are able to make informed decisions related to family planning goals, educational activities regarding the health and social consequences of delayed childbearing aimed at the population need to be undertaken. Additionally, workplace policies that support men and women in parenting when their reproductive health is optimal should be explored.", "S.C.T. conceived of and secured funding for this study. Data analysis was conducted by E.R., A.M. and M.J.; E.R. drafted the manuscript. All authors participated in the interpretation of data, revised the manuscript and approved the final version of the manuscript that is now being submitted for publication.", "Funding for this study and salary support for S.C.T. was provided by Alberta Innovates Health Solutions (formerly the Alberta Heritage Foundation for Medical Research). E.R. received O'Brien Centre Summer Studentship Funding (summer 2009). A.M. is supported by a Canadian Institutes of Health Research (CIHR) Doctoral Award in Genetics (Ethics, Law and Society) and a CIHR Strategic Training Grant in Genes, Development and Child Health. Funding to pay the Open Access publication charges for this article was provided by the Open Access Authors Fund, Libraries and Cultural Resources, University of Calgary." ]
[ "intro", "materials|methods", null, null, null, null, null, "results", null, null, null, "discussion", null, null ]
[ "reproduction", "men", "childbearing", "paternal age" ]
Depressive symptoms, physical inactivity and risk of cardiovascular mortality in older adults: the Cardiovascular Health Study.
21339320
Depressed older individuals have a higher mortality than older persons without depression. Depression is associated with physical inactivity, and low levels of physical activity have been shown in some cohorts to be a partial mediator of the relationship between depression and cardiovascular events and mortality.
BACKGROUND
A cohort of 5888 individuals (mean 72.8 ± 5.6 years, 58% female, 16% African-American) from four US communities was followed for an average of 10.3 years. Self-reported depressive symptoms (10-item Center for Epidemiological Studies Depression Scale) were assessed annually and self-reported physical activity was assessed at baseline and at 3 and 7 years. To estimate how much of the increased risk of cardiovascular mortality associated with depressive symptoms was due to physical inactivity, Cox regression with time-varying covariates was used to determine the percentage change in the log HR of depressive symptoms for cardiovascular mortality after adding physical activity variables.
METHODS
At baseline, 20% of participants scored above the cut-off for depressive symptoms. There were 2915 deaths (49.8%), of which 1176 (20.1%) were from cardiovascular causes. Depressive symptoms and physical inactivity each independently increased the risk of cardiovascular mortality and were strongly associated with each other (all p < 0.001). Individuals with both depressive symptoms and physical inactivity had greater cardiovascular mortality than those with either individually (p < 0.001, log rank test). Physical inactivity reduced the log HR of depressive symptoms for cardiovascular mortality by 26% after adjustment. This was similar for persons with (25%) and without (23%) established coronary heart disease.
RESULTS
Physical inactivity accounted for a significant proportion of the risk of cardiovascular mortality due to depressive symptoms in older adults, regardless of coronary heart disease status.
CONCLUSIONS
[ "Aged", "Aged, 80 and over", "Cardiovascular Diseases", "Depression", "Epidemiologic Methods", "Female", "Humans", "Male", "Motor Activity", "Psychiatric Status Rating Scales", "United States" ]
3044493
Introduction
Depression has been associated with increased cardiovascular mortality in older individuals in most,1–4 but not all,5 6 studies. Multiple potential behavioural and biological mechanisms have been proposed to explain this association,7 and it is likely that several mediating factors play a role. One important plausible mediating factor to consider is physical inactivity since many patients with depression are physically inactive.8 9 In addition, a strong inverse relationship exists between self-reported activity levels,10–16 as well as objective measures of daily energy expenditure,16 and mortality risk. The adverse health consequences of physical inactivity are greater in adults aged ≥65 years than in younger individuals.10–13 Physical inactivity has been shown partially to mediate the relationship between depression and mortality among persons with established cardiovascular disease,7 17 18 but the possibility that physical inactivity mediates the relationship between depression and mortality in individuals without known heart disease is not well documented using longitudinal assessments. In the Finland, Italy and the Netherlands Elderly (FINE) study,19 physical inactivity explained only a small percentage (9%) of the increased mortality risk due to depressive symptoms in men aged 70–90 years who were free of cardiovascular disease at baseline. The Cardiovascular Health Study (CHS) is a prospective observational study of community-dwelling individuals aged ≥65 years from whom extensive repeated evaluations of health information have been collected by systematic interviews and clinical examination. Previous studies have shown that both physical inactivity12 and depressive symptoms4 independently predict increased mortality in this cohort of individuals. The present study examined whether physical inactivity is a mediator of the increased risk of cardiovascular mortality associated with depressive symptoms among community dwelling older adults, adjusting for demographic variables, comorbid medical conditions and health behaviours.
Methods
[SUBTITLE] Study population [SUBSECTION] The CHS is a multicentre prospective cohort study of cardiovascular risk factors in ambulatory non-institutionalised men and women aged ≥65 years. Participants were randomly selected using Medicare eligibility lists of the Health Care Financing Administration from four communities in the USA: Washington County, Maryland; Forsyth County, North Carolina; Allegheny County, Pennsylvania; and Sacramento County, California. Initial recruitment of 5201 subjects occurred in 1989 with an additional 687 African-Americans recruited 3 years later, bringing the total to 5888 participants. The institutional review board at each centre approved the study and participants gave informed consent. Additional details of the study design and recruitment process have been published previously.20 21 The baseline visit included a standardised physical examination and questionnaire, laboratory testing and diagnostic evaluation. Participants returned annually for nine additional clinic visits and were also contacted by telephone every 6 months. Individual follow-up was available up to 14 years, the date of last known contact or death, whichever occurred first. Information was gathered on smoking status, alcohol consumption, medications and medical conditions including coronary heart disease (defined as angina, previous myocardial infarction or coronary revascularisation), heart failure, stroke, hypertension and diabetes. Examination included measurement of body mass index, waist-to-hip ratio and seated blood pressure using a random zero sphygmomanometer. Participants also had a standard resting 12-lead ECG, echocardiogram and laboratory tests including serum cholesterol, glucose, creatinine and C reactive protein.
[SUBTITLE] Assessment of physical activity [SUBSECTION] Usual physical activity was assessed by a self-report questionnaire at baseline and again after 3 and 7 years. Four aspects of physical activity were evaluated. (i) Leisure time activity was assessed by a modified Minnesota Leisure-Time Activities questionnaire which asked participants about 15 different activities during the previous 2 weeks.22 The questionnaire has been validated previously in other cohorts.23 Responses were used to estimate energy expenditure in kilocalories per week and then categorised into quintiles. (ii) Exercise was categorised as no exercise, low, moderate and high intensity as previously described.24 Briefly, participants who engaged in at least one of swimming, hiking, aerobics, tennis, jogging or racquetball or who walked for exercise at a brisk (>4 mph) pace were categorised as having engaged in high-intensity activity; those who engaged in at least one of gardening, mowing, raking, golf, bowling, biking, dancing, calisthenics or exercise cycle or who walked for exercise at an average (>2–3 mph) or fairly brisk pace (>3–4 mph) were categorised as having engaged in moderate intensity activity; and participants who did not report participating in any of the 15 leisure time activities or who walked for exercise at a casual or strolling pace (<2 mph) were categorised as having engaged in low intensity activity. (iii) Distance walked in blocks was categorised into quintiles. (iv) Pace of walking was categorised as no walking, <2, 2–3, 3–4 and >4 mph. To obtain an overall assessment of physical activity we created a composite Physical Activity Score (PAS) from these four components. A score of 0 was assigned to the lowest category in each area with 1 point for each increase in each domain of physical activity. This provided a range of 0 (least active) to 15 (most active). This score was then categorised into four groups: PAS 0–3, 4–7, 8–11 and 12–15. The internal consistency of this scale was good (Cronbach's α=0.78).
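A minimal sketch of how such a composite score could be assembled is shown below, assuming a pandas DataFrame with one row per participant and hypothetical column names for the four components (leisure_kcal_week, exercise_intensity, blocks_walked, walking_pace); it illustrates the scoring logic just described and is not the CHS analysis code.

```python
import pandas as pd

def physical_activity_score(df):
    """Composite Physical Activity Score (0-15) from four self-report components."""
    out = df.copy()

    # Leisure-time energy expenditure (kcal/week) and blocks walked:
    # quintiles scored 0-4 (real data with many ties may need extra handling).
    out["leisure_pts"] = pd.qcut(out["leisure_kcal_week"], 5, labels=False)
    out["blocks_pts"] = pd.qcut(out["blocks_walked"], 5, labels=False)

    # Exercise intensity: none / low / moderate / high scored 0-3.
    intensity_map = {"none": 0, "low": 1, "moderate": 2, "high": 3}
    out["exercise_pts"] = out["exercise_intensity"].map(intensity_map)

    # Walking pace: no walking, <2, 2-3, 3-4, >4 mph scored 0-4.
    pace_map = {"none": 0, "<2 mph": 1, "2-3 mph": 2, "3-4 mph": 3, ">4 mph": 4}
    out["pace_pts"] = out["walking_pace"].map(pace_map)

    out["PAS"] = out[["leisure_pts", "blocks_pts", "exercise_pts", "pace_pts"]].sum(axis=1)

    # Four analysis groups: 0-3, 4-7, 8-11 and 12-15.
    out["PAS_group"] = pd.cut(
        out["PAS"], bins=[-1, 3, 7, 11, 15], labels=["0-3", "4-7", "8-11", "12-15"]
    )
    return out
```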
[SUBTITLE] Assessment of depressive symptoms [SUBSECTION] The short (10-item) version25 of the Center for Epidemiological Studies Depression (CES-D) scale26 was used annually to assess self-reported depressive symptoms experienced in the past week.
This version of the CES-D has shown good validity versus the 20-item CES-D, particularly in epidemiological studies and older populations.25 The scale consists of 10 items, each scored 0–3, for a maximum of 30 points. Higher scores indicate greater frequency of depressive symptoms and correlate with an increased risk of clinical depression.
Depression scores were dichotomised at each visit with a cut-off of ≥8 as in previous studies in the CHS,4 27 creating low (CES-D <8) and high (CES-D ≥8) groups for analysis. The cut-off of 8 on the short version of the CES-D corresponds to a cut-off of ≥16 on the 20-item version of the CES-D.
[SUBTITLE] Assessment of events and cardiovascular mortality [SUBSECTION] All events occurring after the baseline visit were classified as incident events and were adjudicated by a centralised committee.22 The cause of death was determined from medical records, death certificates, ICD codes, obituaries and interviews with relatives and contacts. Cardiovascular deaths were those due to atherosclerotic coronary disease, cerebrovascular disease (stroke), other atherosclerotic disease (such as aortic aneurysm) and other vascular disease (such as valvular heart disease or pulmonary embolism).28 The CHS has nearly 100% ascertainment of mortality status.
[SUBTITLE] Statistical analysis [SUBSECTION] Missing data for any visit the participant was known to have attended were replaced with data from other visits. Missing baseline data were filled with values from the next available visit. When data were missing at later visits, the last observation was carried forward. This was done to maintain a more consistent sample size, as it may vary substantially when using time-varying covariates in analyses. Approximately 5% of data was missing at any visit for any variable. Missing data on depressive symptoms ranged from 1.5% at baseline to 10% at later visits. Participants who had never had an assessment of physical activity or depressive symptoms at any time during the study were excluded (n=36). Data from 5852 participants remained for analysis.
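As a schematic of the carry-forward rule just described, the following sketch fills a missing baseline value from the next available visit, carries the last observation forward at later visits, and then applies the ≥8 cut-off. The long-format layout and the column names (id, visit, cesd) are assumptions for illustration; this is not the CHS data-management code.

import pandas as pd

# Hypothetical long-format data: one row per participant-visit.
df = pd.DataFrame({
    "id":    [1, 1, 1, 1, 2, 2, 2],
    "visit": [0, 1, 2, 3, 0, 1, 2],
    "cesd":  [None, 5, None, 9, 3, None, None],
})
df = df.sort_values(["id", "visit"])

# Forward fill within participant so gaps at later visits carry the last observation
# forward; any value still missing can then only be at baseline, and the backward
# fill replaces it with the next available visit's value.
df["cesd_filled"] = df.groupby("id")["cesd"].transform(lambda s: s.ffill().bfill())

# Dichotomise at each visit with the cut-off used in the paper.
df["high_depression"] = df["cesd_filled"] >= 8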
Baseline characteristics were compared between high and low depressive symptoms groups using t tests with unequal variance for continuous variables and χ2 tests for dichotomous variables. The characteristics of the physical activity groups were compared using univariate linear (continuous variables) and logistic (dichotomous variables) regression evaluating for trend. The association between physical activity and depressive symptoms at baseline was evaluated using multivariate linear and logistic regression models.
Cox proportional hazards regression was used to estimate the risk of cardiovascular mortality associated with depressive symptoms, physical inactivity and other covariates. In the models, participants were included up to the date of death or last known visit. The group of 687 African-Americans recruited 3 years after study initiation was treated as a late entry cohort and immortal person-time before recruitment was removed. Variables were chosen for the final model based on the significance of their univariate associations as well as clinical interest. The proportional hazards assumption was checked with a log-log plot of the survival function and was met for both depression and physical inactivity.
All variables were treated as time-varying. For example, the CES-D was administered at baseline and annually for 9 additional years, so individuals may have up to 10 measurements. After dichotomisation they may fall into both high and low depression score groups at different points during follow-up. The Cox models used all available measurements and allocated the person-time to the appropriate risk group. Using repeated measurements reduces misclassification bias and provides more accurate estimates of risk throughout follow-up and at the time of events. For incident events occurring between visits, that period was updated if it occurred in the first half. For example, if a participant suffered a stroke 4 months after his fifth annual visit, the period between the fifth and sixth visits and subsequent periods were reclassified as prevalent stroke. If it occurred at 8 months, only subsequent periods were reclassified.
To estimate how much of the increased risk of cardiovascular mortality due to depressive symptoms may be accounted for by physical inactivity, we determined the percentage change in the coefficient for depression after physical activity variables were individually added to Cox models. This was calculated as: (log(HRdepression [model without physical activity]) − log(HRdepression [model with physical activity])) / log(HRdepression [model without physical activity]). We considered this percentage change in the logHR of depression to be a measure of confounding or mediation. This was calculated for the PAS and for each of its components.
Subgroup analyses were performed in a similar fashion, stratifying by baseline coronary heart disease status, race and gender. We also performed sensitivity analyses by using only baseline data, no time-varying covariates and without filling in missing values. A two sided p value <0.05 was considered statistically significant. All analyses were performed in STATA Version 10.1 (StataCorp LP).
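The start–stop ("counting process") layout implied by this time-varying approach can be sketched as follows: each visit opens a risk interval that closes at the next visit or at the end of follow-up, covariates are held at their most recent measurement, and the event indicator is set only on the interval in which death occurred. The column names, the single hypothetical participant and the exact times are illustrative assumptions; the actual analyses were run in Stata.

import pandas as pd

# Hypothetical annual measurements for one participant (times in years from baseline).
visits = pd.DataFrame({
    "id": 1,
    "visit_time": [0, 1, 2, 3],
    "high_depression": [False, False, True, True],
    "pas_group": [2, 2, 1, 1],
})
followup_end, died = 3.6, True  # cardiovascular death 0.6 years after the last visit

rows = []
boundaries = list(visits["visit_time"]) + [followup_end]
for i, (_, v) in enumerate(visits.iterrows()):
    rows.append({
        "id": v["id"],
        "start": boundaries[i],        # interval opens at this visit
        "stop": boundaries[i + 1],     # and closes at the next visit / end of follow-up
        "high_depression": v["high_depression"],
        "pas_group": v["pas_group"],
        "cv_death": died and i == len(visits) - 1,  # event only in the final interval
    })
long_format = pd.DataFrame(rows)
# long_format can then be fed to a time-varying Cox model, e.g. with multiple records
# per subject in Stata (stset/stcox) or with lifelines' CoxTimeVaryingFitter in Python.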
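Written out, the percentage-change calculation amounts to a few lines of arithmetic. The sketch below uses placeholder hazard ratios rather than study estimates.

from math import log

def pct_of_depression_loghr_explained(hr_without_pa: float, hr_with_pa: float) -> float:
    """Percentage reduction in the depression log hazard ratio after adding physical
    activity: (log HR_without - log HR_with) / log HR_without, expressed as a percentage."""
    return (log(hr_without_pa) - log(hr_with_pa)) / log(hr_without_pa) * 100

# Placeholder example: a depression HR of 1.40 that falls to 1.28 once the PAS is added
# corresponds to physical activity accounting for roughly 27% of the log hazard.
print(round(pct_of_depression_loghr_explained(1.40, 1.28), 1))  # 26.6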
Results
At study entry the mean±SD age of the participants was 72.8±5.6 years (range 65–100); 58% were female and 16% were non-white, of whom 96% were African-Americans. Differences in characteristics of participants at study entry by depression score and physical activity group are shown in table 1. Participants with a CES-D score above the cut-off were more often female, non-white, current smokers, less educated, consumed less alcohol and had a greater prevalence of comorbidities. About 8% of those with high depression scores were taking antidepressants compared with 3% of the participants with low depression scores. Those who were more physically inactive were older, more often female, non-white, current smokers, less educated and consumed less alcohol than physically active participants. They had higher body mass index and blood pressure, more comorbidities and less favourable lipid and chemistry profiles than more physically active participants.
[Table 1. Baseline characteristics according to depression score and physical activity group. Data expressed as mean±SD for continuous variables or percentage for dichotomous variables. Depressive symptoms based on the Center for Epidemiological Studies-Depression Scale (10-item version): low (score 0–7) and high (score ≥8). p values for depression groups from two-sample t tests (continuous) and χ2 tests (dichotomous); p values for physical activity groups from linear (continuous) and logistic (dichotomous) regression. BP, blood pressure; BMI, body mass index; CHD, coronary heart disease; CHF, congestive heart failure; CRP, C reactive protein; HS, high school; HDL-C, high-density lipoprotein cholesterol; LDL-C, low-density lipoprotein cholesterol; TCA, tricyclic antidepressant use.]
The percentage of participants with a CES-D score above the cut-off increased over time, ranging from 20% at baseline to 30% after 10 years. During this time the study population also became less physically active with an overall shift towards lower activity groups. The percentage of persons falling into the lowest activity group increased from 10% at baseline to 19% at 10 years, while the percentage in the highest activity group fell from 14% to 9%.
Depression scores and physical activity levels were strongly associated. After adjustment for age, race and sex, persons in the lowest physical activity group were more likely to have a high depression score than those in the most active group (OR 3.4; 95% CI 2.6 to 4.5; p<0.001). The results were similar when variables were treated as continuous and for each component of the PAS examined individually (not shown).
The mean follow-up duration was 10.3 years (maximum 14), which provided 60 652 person-years of observation. Mean follow-up duration was shorter for non-white participants (9.1 years), in part because most (71%) entered the study 3 years after the original cohort because of the recruitment of 687 African-Americans at that time. Overall, there were 2915 deaths (49.8%), including 1176 cardiovascular deaths (20.1%).
The risk for cardiovascular mortality stratified by level of depressive symptoms and by physical activity scores is shown in table 2. Persons with high depression scores had a 27% (multivariable adjusted) to 67% (unadjusted) increased risk of cardiovascular death compared with those with low depression scores. For physical activity, a stepwise increase in the risk of cardiovascular death was observed among progressively less active groups.
Compared with those in the most active group, persons in the lowest activity group had a 217% (multivariable adjusted) to 425% (unadjusted) increased risk of cardiovascular mortality.
[Table 2. Risk of cardiovascular mortality by depression score and physical activity. All p values <0.001 except PAS group III: p=0.008 (unadjusted), p=0.031 (demographic) and p=0.143 (multivariable). Demographic estimates adjusted for age, race and gender; the number of participants in each group is taken from baseline. Multivariable estimates adjusted for age, race, gender, clinic location (four sites), education (less than high school, high school, beyond high school), body mass index (underweight, normal, overweight, obese), smoking (never, former, current), alcohol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, diabetes, hypertension, coronary heart disease, congestive heart failure, stroke and antidepressant medication use. Depressive symptoms based on the Center for Epidemiological Studies-Depression Scale (10-item version): low (score 0–7) and high (score ≥8).]
Table 3 shows the percentage reduction in the logHR of the depression score for cardiovascular mortality after adding physical activity to the multivariable adjusted models. In the full cohort the addition of PAS resulted in a 26% reduction in the logHR while its individual components accounted for reductions of 10–19%. This effect was similar when the 4671 (79.8%) participants without known coronary heart disease were compared with the 1181 (20.2%) participants with known coronary heart disease.
[Table 3. Percentage reduction in the log hazard ratio of depressive symptoms for cardiovascular mortality after adding physical activity variables (entered as categorical or, where indicated, continuous variables). Models were multivariable adjusted for age, race, gender, clinic location, education, body mass index, smoking, alcohol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, diabetes, hypertension, coronary heart disease (CHD), congestive heart failure, stroke and antidepressant medication use. All p<0.05 for the logHR of depression except for results in the non-white column (p range 0.16–0.24). The population size with coronary heart disease is at study entry and changes over time as it was modelled as time-varying.]
Physical inactivity accounted for a greater percentage of the risk due to depression in men than in women. The addition of PAS reduced the logHR of depression by 31% in men and by 22% in women, with similar results in the individual components. PAS also tended to account for a greater percentage of the risk among white (27%) compared with non-white (11%) participants. However, the logHR of depression among non-whites in multivariable models did not achieve statistical significance (p range 0.16–0.24).
In a Kaplan–Meier analysis (figure 1), physical inactivity (median PAS score ≤7) combined with high depressive symptoms (CES-D ≥8) was associated with a significantly greater risk than having either condition alone (both p<0.001, log rank test). At these cut-off values the risk due to physical inactivity was greater than the risk due to high depressive symptoms (p=0.004, log rank test). The magnitude of the increase in risk between inactive and active participants was the same in both depression groups (p for interaction=0.251, demographic adjusted), suggesting that the risks of depression and inactivity for cardiovascular death are additive.
Overall, the results were similar when using only baseline data (without time-varying covariates) and without substitution of missing data.
[Figure 1. Cumulative incidence of cardiovascular death according to depression score and physical activity status. Number of cardiovascular deaths 1176/5852 (20.1%). High depression score, Center for Epidemiological Studies Depression Score (CES-D) ≥8; physical inactivity, Physical Activity Score (PAS) ≤7.]
[ "Depression has been associated with increased cardiovascular mortality in older individuals in most,1–4 but not all,5 6 studies. Multiple potential behavioural and biological mechanisms have been proposed to explain this association,7 and it is likely that several mediating factors play a role. One important plausible mediating factor to consider is physical inactivity since many patients with depression are physically inactive.8 9 In addition, a strong inverse relationship exists between self-reported activity levels,10–16 as well as objective measures of daily energy expenditure,16 and mortality risk. The adverse health consequences of physical inactivity are greater in adults aged ≥65 years than in younger individuals.10–13\nPhysical inactivity has been shown partially to mediate the relationship between depression and mortality among persons with established cardiovascular disease,7 17 18 but the possibility that physical inactivity mediates the relationship between depression and mortality in individuals without known heart disease is not well documented using longitudinal assessments. In the Finland, Italy and the Netherlands Elderly (FINE) study,19 physical inactivity explained only a small percentage (9%) of the increased mortality risk due to depressive symptoms in men aged 70–90 years who were free of cardiovascular disease at baseline.\nThe Cardiovascular Health Study (CHS) is a prospective observational study of community-dwelling individuals aged ≥65 years from whom extensive repeated evaluations of health information have been collected by systematic interviews and clinical examination. Previous studies have shown that both physical inactivity12 and depressive symptoms4 independently predict increased mortality in this cohort of individuals. The present study examined whether physical inactivity is a mediator of the increased risk of cardiovascular mortality associated with depressive symptoms among community dwelling older adults, adjusting for demographic variables, comorbid medical conditions and health behaviours.", "[SUBTITLE] Study population [SUBSECTION] The CHS is a multicentre prospective cohort study of cardiovascular risk factors in ambulatory non-institutionalised men and women aged ≥65 years. Participants were randomly selected using Medicare eligibility lists of the Health Care Financing Administration from four communities in the USA: Washington County, Maryland; Forsyth County, North Carolina; Allegheny County, Pennsylvania; and Sacramento County, California. Initial recruitment of 5201 subjects occurred in 1989 with an additional 687 African-Americans recruited 3 years later, bringing the total to 5888 participants. The institutional review board at each centre approved the study and participants gave informed consent. Additional details of the study design and recruitment process have been published previously.20 21\nThe baseline visit included a standardised physical examination and questionnaire, laboratory testing and diagnostic evaluation. Participants returned annually for nine additional clinic visits and were also contacted by telephone every 6 months. Individual follow-up was available up to 14 years, the date of last known contact or death, whichever occurred first.\nInformation was gathered on smoking status, alcohol consumption, medications and medical conditions including coronary heart disease (defined as angina, previous myocardial infarction or coronary revascularisation), heart failure, stroke, hypertension and diabetes. 
Examination included measurement of body mass index, waist-to-hip ratio and seated blood pressure using a random zero sphygmomanometer. Participants also had a standard resting 12-lead ECG, echocardiogram and laboratory tests including serum cholesterol, glucose, creatinine and C reactive protein.\nThe CHS is a multicentre prospective cohort study of cardiovascular risk factors in ambulatory non-institutionalised men and women aged ≥65 years. Participants were randomly selected using Medicare eligibility lists of the Health Care Financing Administration from four communities in the USA: Washington County, Maryland; Forsyth County, North Carolina; Allegheny County, Pennsylvania; and Sacramento County, California. Initial recruitment of 5201 subjects occurred in 1989 with an additional 687 African-Americans recruited 3 years later, bringing the total to 5888 participants. The institutional review board at each centre approved the study and participants gave informed consent. Additional details of the study design and recruitment process have been published previously.20 21\nThe baseline visit included a standardised physical examination and questionnaire, laboratory testing and diagnostic evaluation. Participants returned annually for nine additional clinic visits and were also contacted by telephone every 6 months. Individual follow-up was available up to 14 years, the date of last known contact or death, whichever occurred first.\nInformation was gathered on smoking status, alcohol consumption, medications and medical conditions including coronary heart disease (defined as angina, previous myocardial infarction or coronary revascularisation), heart failure, stroke, hypertension and diabetes. Examination included measurement of body mass index, waist-to-hip ratio and seated blood pressure using a random zero sphygmomanometer. Participants also had a standard resting 12-lead ECG, echocardiogram and laboratory tests including serum cholesterol, glucose, creatinine and C reactive protein.\n[SUBTITLE] Assessment of physical activity [SUBSECTION] Usual physical activity was assessed by a self-report questionnaire at baseline and again after 3 and 7 years. 
Four aspects of physical activity were evaluated:Leisure time activity was assessed by a modified Minnesota Leisure-Time Activities questionnaire which asked participants about 15 different activities during the previous 2 weeks.22 The questionnaire has been validated previously in other cohorts.23 Responses were used to estimate energy expenditure in kilocalories per week and then categorised into quintiles.Exercise was categorised as no exercise, low, moderate and high intensity as previously described.24 Briefly, participants who engaged in at least one of swimming, hiking, aerobics, tennis, jogging or racquetball or who walked for exercise at a brisk (>4 mph) pace were categorised as having engaged in high-intensity activity; those who engaged in at least one of gardening, mowing, raking, golf, bowling, biking, dancing, calisthenics or exercise cycle or who walked for exercise at an average (>2–3 mph) or fairly brisk pace (>3–4 mph) were categorised as having engaged in moderate intensity activity; and participants who did not report participating in any of the 15 leisure time activities or who walked for exercise at a casual or strolling pace (<2 mph) were categorised as having engaged in low intensity activity.Distance walked in blocks was categorised into quintiles.Pace of walking was categorised as no walking, <2, 2–3, 3–4 and >4 mph.\nLeisure time activity was assessed by a modified Minnesota Leisure-Time Activities questionnaire which asked participants about 15 different activities during the previous 2 weeks.22 The questionnaire has been validated previously in other cohorts.23 Responses were used to estimate energy expenditure in kilocalories per week and then categorised into quintiles.\nExercise was categorised as no exercise, low, moderate and high intensity as previously described.24 Briefly, participants who engaged in at least one of swimming, hiking, aerobics, tennis, jogging or racquetball or who walked for exercise at a brisk (>4 mph) pace were categorised as having engaged in high-intensity activity; those who engaged in at least one of gardening, mowing, raking, golf, bowling, biking, dancing, calisthenics or exercise cycle or who walked for exercise at an average (>2–3 mph) or fairly brisk pace (>3–4 mph) were categorised as having engaged in moderate intensity activity; and participants who did not report participating in any of the 15 leisure time activities or who walked for exercise at a casual or strolling pace (<2 mph) were categorised as having engaged in low intensity activity.\nDistance walked in blocks was categorised into quintiles.\nPace of walking was categorised as no walking, <2, 2–3, 3–4 and >4 mph.\nTo obtain an overall assessment of physical activity we created a composite Physical Activity Score (PAS) from these four components. A score of 0 was assigned to the lowest category in each area with 1 point for each increase in each domain of physical activity. This provided a range of 0 (least active) to 15 (most active). This score was then categorised into four groups: PAS 0–3, 4–7, 8–11 and 12–15. The internal consistency of this scale was good (Cronbach's α=0.78).\nUsual physical activity was assessed by a self-report questionnaire at baseline and again after 3 and 7 years. 
Four aspects of physical activity were evaluated:Leisure time activity was assessed by a modified Minnesota Leisure-Time Activities questionnaire which asked participants about 15 different activities during the previous 2 weeks.22 The questionnaire has been validated previously in other cohorts.23 Responses were used to estimate energy expenditure in kilocalories per week and then categorised into quintiles.Exercise was categorised as no exercise, low, moderate and high intensity as previously described.24 Briefly, participants who engaged in at least one of swimming, hiking, aerobics, tennis, jogging or racquetball or who walked for exercise at a brisk (>4 mph) pace were categorised as having engaged in high-intensity activity; those who engaged in at least one of gardening, mowing, raking, golf, bowling, biking, dancing, calisthenics or exercise cycle or who walked for exercise at an average (>2–3 mph) or fairly brisk pace (>3–4 mph) were categorised as having engaged in moderate intensity activity; and participants who did not report participating in any of the 15 leisure time activities or who walked for exercise at a casual or strolling pace (<2 mph) were categorised as having engaged in low intensity activity.Distance walked in blocks was categorised into quintiles.Pace of walking was categorised as no walking, <2, 2–3, 3–4 and >4 mph.\nLeisure time activity was assessed by a modified Minnesota Leisure-Time Activities questionnaire which asked participants about 15 different activities during the previous 2 weeks.22 The questionnaire has been validated previously in other cohorts.23 Responses were used to estimate energy expenditure in kilocalories per week and then categorised into quintiles.\nExercise was categorised as no exercise, low, moderate and high intensity as previously described.24 Briefly, participants who engaged in at least one of swimming, hiking, aerobics, tennis, jogging or racquetball or who walked for exercise at a brisk (>4 mph) pace were categorised as having engaged in high-intensity activity; those who engaged in at least one of gardening, mowing, raking, golf, bowling, biking, dancing, calisthenics or exercise cycle or who walked for exercise at an average (>2–3 mph) or fairly brisk pace (>3–4 mph) were categorised as having engaged in moderate intensity activity; and participants who did not report participating in any of the 15 leisure time activities or who walked for exercise at a casual or strolling pace (<2 mph) were categorised as having engaged in low intensity activity.\nDistance walked in blocks was categorised into quintiles.\nPace of walking was categorised as no walking, <2, 2–3, 3–4 and >4 mph.\nTo obtain an overall assessment of physical activity we created a composite Physical Activity Score (PAS) from these four components. A score of 0 was assigned to the lowest category in each area with 1 point for each increase in each domain of physical activity. This provided a range of 0 (least active) to 15 (most active). This score was then categorised into four groups: PAS 0–3, 4–7, 8–11 and 12–15. The internal consistency of this scale was good (Cronbach's α=0.78).\n[SUBTITLE] Assessment of depressive symptoms [SUBSECTION] The short (10-item) version25 of the Center for Epidemiological Studies Depression (CES-D) scale26 was used annually to assess self-reported depressive symptoms experienced in the past week. 
This version of the CES-D has shown good validity versus the 20-item CES-D, particularly in epidemiological studies and older populations.25 The scale consists of 10 items, each scored 0–3, for a maximum of 30 points. Higher scores indicate greater frequency of depressive symptoms and correlate with an increased risk of clinical depression.\nDepression scores were dichotomised at each visit with a cut-off of ≥8 as in previous studies in the CHS,4 27 creating low (CES-D <8) and high (CES-D ≥8) groups for analysis. The cut-off of 8 on the short version of the CES-D corresponds to a cut-off of ≥16 on the 20-item version of the CES-D.\nThe short (10-item) version25 of the Center for Epidemiological Studies Depression (CES-D) scale26 was used annually to assess self-reported depressive symptoms experienced in the past week. This version of the CES-D has shown good validity versus the 20-item CES-D, particularly in epidemiological studies and older populations.25 The scale consists of 10 items, each scored 0–3, for a maximum of 30 points. Higher scores indicate greater frequency of depressive symptoms and correlate with an increased risk of clinical depression.\nDepression scores were dichotomised at each visit with a cut-off of ≥8 as in previous studies in the CHS,4 27 creating low (CES-D <8) and high (CES-D ≥8) groups for analysis. The cut-off of 8 on the short version of the CES-D corresponds to a cut-off of ≥16 on the 20-item version of the CES-D.\n[SUBTITLE] Assessment of events and cardiovascular mortality [SUBSECTION] All events occurring after the baseline visit were classified as incident events and were adjudicated by a centralised committee.22 The cause of death was determined from medical records, death certificates, ICD codes, obituaries and interviews with relatives and contacts. Cardiovascular deaths were those due to atherosclerotic coronary disease, cerebrovascular disease (stroke), other atherosclerotic disease (such as aortic aneurysm) and other vascular disease (such as valvular heart disease or pulmonary embolism).28 The CHS has nearly 100% ascertainment of mortality status.\nAll events occurring after the baseline visit were classified as incident events and were adjudicated by a centralised committee.22 The cause of death was determined from medical records, death certificates, ICD codes, obituaries and interviews with relatives and contacts. Cardiovascular deaths were those due to atherosclerotic coronary disease, cerebrovascular disease (stroke), other atherosclerotic disease (such as aortic aneurysm) and other vascular disease (such as valvular heart disease or pulmonary embolism).28 The CHS has nearly 100% ascertainment of mortality status.\n[SUBTITLE] Statistical analysis [SUBSECTION] Missing data for any visit the participant was known to have attended were replaced with data from other visits. Missing baseline data were filled with values from the next available visit. When data were missing at later visits, the last observation was carried forward. This was done to maintain a more consistent sample size, as it may vary substantially when using time-varying covariates in analyses. Approximately 5% of data was missing at any visit for any variable. Missing data on depressive symptoms ranged from 1.5% at baseline to 10% at later visits. Participants who had never had an assessment of physical activity or depressive symptoms at any time during the study were excluded (n=36). 
Data from 5852 participants remained for analysis.\nBaseline characteristics were compared between high and low depressive symptoms groups using t tests with unequal variance for continuous variables and χ2 tests for dichotomous variables. The characteristics of the physical activity groups were compared using univariate linear (continuous variables) and logistic (dichotomous variables) regression evaluating for trend. The association between physical activity and depressive symptoms at baseline was evaluated using multivariate linear and logistic regression models.\nCox proportional hazards regression was used to estimate the risk of cardiovascular mortality associated with depressive symptoms, physical inactivity and other covariates. In the models, participants were included up to the date of death or last known visit. The group of 687 African-Americans recruited 3 years after study initiation was treated as a late entry cohort and immortal person-time before recruitment was removed. Variables were chosen for the final model based on the significance of their univariate associations as well as clinical interest. The proportional hazards assumption was checked with a log-log plot of the survival function and was met for both depression and physical inactivity.\nAll variables were treated as time-varying. For example, the CES-D was administered at baseline and annually for 9 additional years, so individuals may have up to 10 measurements. After dichotomisation they may fall into both high and low depression score groups at different points during follow-up. The Cox models used all available measurements and alternated the person-time to the appropriate risk group. Using repeated measurements reduces misclassification bias and provides more accurate estimates of risk throughout follow-up and at the time of events. For incident events occurring between visits, that period was updated if it occurred in the first half. For example, if a participant suffered a stroke 4 months after his fifth annual visit, the period between the fifth and sixth visits and subsequent periods were reclassified as prevalent stroke. If it occurred at 8 months, only subsequent periods were reclassified.\nTo estimate how much of the increased risk of cardiovascular mortality due to depressive symptoms may be accounted for by physical inactivity, we determined the percentage change in the coefficient for depression after physical activity variables were individually added to Cox models. This was calculated as:\nlog(HRdepression [model without physical activity]) – log(HRdepression [model with physical activity])/log(HRdepression model without physical activity).\nWe considered this percentage change in the logHR of depression to be a measure of confounding or mediation. This was calculated for the PAS and for each of its components.\nSubgroup analyses were performed in a similar fashion, stratifying by baseline coronary heart disease status, race and gender. We also performed sensitivity analyses by using only baseline data, no time-varying covariates and without filling in missing values. A two sided p value <0.05 was considered statistically significant. All analyses were performed in STATA Version 10.1 (StataCorp LP).\nMissing data for any visit the participant was known to have attended were replaced with data from other visits. Missing baseline data were filled with values from the next available visit. When data were missing at later visits, the last observation was carried forward. 
This was done to maintain a more consistent sample size, as it may vary substantially when using time-varying covariates in analyses. Approximately 5% of data was missing at any visit for any variable. Missing data on depressive symptoms ranged from 1.5% at baseline to 10% at later visits. Participants who had never had an assessment of physical activity or depressive symptoms at any time during the study were excluded (n=36). Data from 5852 participants remained for analysis.\nBaseline characteristics were compared between high and low depressive symptoms groups using t tests with unequal variance for continuous variables and χ2 tests for dichotomous variables. The characteristics of the physical activity groups were compared using univariate linear (continuous variables) and logistic (dichotomous variables) regression evaluating for trend. The association between physical activity and depressive symptoms at baseline was evaluated using multivariate linear and logistic regression models.\nCox proportional hazards regression was used to estimate the risk of cardiovascular mortality associated with depressive symptoms, physical inactivity and other covariates. In the models, participants were included up to the date of death or last known visit. The group of 687 African-Americans recruited 3 years after study initiation was treated as a late entry cohort and immortal person-time before recruitment was removed. Variables were chosen for the final model based on the significance of their univariate associations as well as clinical interest. The proportional hazards assumption was checked with a log-log plot of the survival function and was met for both depression and physical inactivity.\nAll variables were treated as time-varying. For example, the CES-D was administered at baseline and annually for 9 additional years, so individuals may have up to 10 measurements. After dichotomisation they may fall into both high and low depression score groups at different points during follow-up. The Cox models used all available measurements and alternated the person-time to the appropriate risk group. Using repeated measurements reduces misclassification bias and provides more accurate estimates of risk throughout follow-up and at the time of events. For incident events occurring between visits, that period was updated if it occurred in the first half. For example, if a participant suffered a stroke 4 months after his fifth annual visit, the period between the fifth and sixth visits and subsequent periods were reclassified as prevalent stroke. If it occurred at 8 months, only subsequent periods were reclassified.\nTo estimate how much of the increased risk of cardiovascular mortality due to depressive symptoms may be accounted for by physical inactivity, we determined the percentage change in the coefficient for depression after physical activity variables were individually added to Cox models. This was calculated as:\nlog(HRdepression [model without physical activity]) – log(HRdepression [model with physical activity])/log(HRdepression model without physical activity).\nWe considered this percentage change in the logHR of depression to be a measure of confounding or mediation. This was calculated for the PAS and for each of its components.\nSubgroup analyses were performed in a similar fashion, stratifying by baseline coronary heart disease status, race and gender. We also performed sensitivity analyses by using only baseline data, no time-varying covariates and without filling in missing values. 
A two sided p value <0.05 was considered statistically significant. All analyses were performed in STATA Version 10.1 (StataCorp LP).", "The CHS is a multicentre prospective cohort study of cardiovascular risk factors in ambulatory non-institutionalised men and women aged ≥65 years. Participants were randomly selected using Medicare eligibility lists of the Health Care Financing Administration from four communities in the USA: Washington County, Maryland; Forsyth County, North Carolina; Allegheny County, Pennsylvania; and Sacramento County, California. Initial recruitment of 5201 subjects occurred in 1989 with an additional 687 African-Americans recruited 3 years later, bringing the total to 5888 participants. The institutional review board at each centre approved the study and participants gave informed consent. Additional details of the study design and recruitment process have been published previously.20 21\nThe baseline visit included a standardised physical examination and questionnaire, laboratory testing and diagnostic evaluation. Participants returned annually for nine additional clinic visits and were also contacted by telephone every 6 months. Individual follow-up was available up to 14 years, the date of last known contact or death, whichever occurred first.\nInformation was gathered on smoking status, alcohol consumption, medications and medical conditions including coronary heart disease (defined as angina, previous myocardial infarction or coronary revascularisation), heart failure, stroke, hypertension and diabetes. Examination included measurement of body mass index, waist-to-hip ratio and seated blood pressure using a random zero sphygmomanometer. Participants also had a standard resting 12-lead ECG, echocardiogram and laboratory tests including serum cholesterol, glucose, creatinine and C reactive protein.", "Usual physical activity was assessed by a self-report questionnaire at baseline and again after 3 and 7 years. 
Four aspects of physical activity were evaluated:\nLeisure time activity was assessed by a modified Minnesota Leisure-Time Activities questionnaire which asked participants about 15 different activities during the previous 2 weeks.22 The questionnaire has been validated previously in other cohorts.23 Responses were used to estimate energy expenditure in kilocalories per week and then categorised into quintiles.\nExercise was categorised as no exercise, low, moderate and high intensity as previously described.24 Briefly, participants who engaged in at least one of swimming, hiking, aerobics, tennis, jogging or racquetball or who walked for exercise at a brisk (>4 mph) pace were categorised as having engaged in high-intensity activity; those who engaged in at least one of gardening, mowing, raking, golf, bowling, biking, dancing, calisthenics or exercise cycle or who walked for exercise at an average (>2–3 mph) or fairly brisk pace (>3–4 mph) were categorised as having engaged in moderate intensity activity; and participants who did not report participating in any of the 15 leisure time activities or who walked for exercise at a casual or strolling pace (<2 mph) were categorised as having engaged in low intensity activity.\nDistance walked in blocks was categorised into quintiles.\nPace of walking was categorised as no walking, <2, 2–3, 3–4 and >4 mph.\nTo obtain an overall assessment of physical activity we created a composite Physical Activity Score (PAS) from these four components. A score of 0 was assigned to the lowest category in each area with 1 point for each increase in each domain of physical activity. This provided a range of 0 (least active) to 15 (most active). This score was then categorised into four groups: PAS 0–3, 4–7, 8–11 and 12–15. The internal consistency of this scale was good (Cronbach's α=0.78).", "The short (10-item) version25 of the Center for Epidemiological Studies Depression (CES-D) scale26 was used annually to assess self-reported depressive symptoms experienced in the past week. 
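A minimal sketch of how the composite PAS described above could be assembled follows; the variable names, and the assumption that each component has already been categorised (0-4 for the two quintile-based components, 0-3 for exercise intensity, 0-4 for walking pace, giving the stated maximum of 15), are ours and do not come from the CHS analysis code.

```python
def physical_activity_score(leisure_quintile: int,    # 0-4: kcal/week quintile
                            exercise_intensity: int,  # 0 none, 1 low, 2 moderate, 3 high
                            blocks_quintile: int,     # 0-4: blocks walked quintile
                            walking_pace: int) -> int:  # 0 none, 1 <2, 2 2-3, 3 3-4, 4 >4 mph
    """Composite PAS: 0 for the lowest category of each component,
    plus 1 point for each step up within a component (range 0-15)."""
    return leisure_quintile + exercise_intensity + blocks_quintile + walking_pace

def pas_group(score: int) -> str:
    """Four analysis groups used in the paper."""
    if score <= 3:
        return "PAS 0-3 (least active)"
    elif score <= 7:
        return "PAS 4-7"
    elif score <= 11:
        return "PAS 8-11"
    return "PAS 12-15 (most active)"

print(pas_group(physical_activity_score(2, 1, 3, 2)))  # -> "PAS 8-11"
```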
This version of the CES-D has shown good validity versus the 20-item CES-D, particularly in epidemiological studies and older populations.25 The scale consists of 10 items, each scored 0–3, for a maximum of 30 points. Higher scores indicate greater frequency of depressive symptoms and correlate with an increased risk of clinical depression.\nDepression scores were dichotomised at each visit with a cut-off of ≥8 as in previous studies in the CHS,4 27 creating low (CES-D <8) and high (CES-D ≥8) groups for analysis. The cut-off of 8 on the short version of the CES-D corresponds to a cut-off of ≥16 on the 20-item version of the CES-D.", "All events occurring after the baseline visit were classified as incident events and were adjudicated by a centralised committee.22 The cause of death was determined from medical records, death certificates, ICD codes, obituaries and interviews with relatives and contacts. Cardiovascular deaths were those due to atherosclerotic coronary disease, cerebrovascular disease (stroke), other atherosclerotic disease (such as aortic aneurysm) and other vascular disease (such as valvular heart disease or pulmonary embolism).28 The CHS has nearly 100% ascertainment of mortality status.", "Missing data for any visit the participant was known to have attended were replaced with data from other visits. Missing baseline data were filled with values from the next available visit. When data were missing at later visits, the last observation was carried forward. This was done to maintain a more consistent sample size, as it may vary substantially when using time-varying covariates in analyses. Approximately 5% of data was missing at any visit for any variable. Missing data on depressive symptoms ranged from 1.5% at baseline to 10% at later visits. Participants who had never had an assessment of physical activity or depressive symptoms at any time during the study were excluded (n=36). Data from 5852 participants remained for analysis.\nBaseline characteristics were compared between high and low depressive symptoms groups using t tests with unequal variance for continuous variables and χ2 tests for dichotomous variables. The characteristics of the physical activity groups were compared using univariate linear (continuous variables) and logistic (dichotomous variables) regression evaluating for trend. The association between physical activity and depressive symptoms at baseline was evaluated using multivariate linear and logistic regression models.\nCox proportional hazards regression was used to estimate the risk of cardiovascular mortality associated with depressive symptoms, physical inactivity and other covariates. In the models, participants were included up to the date of death or last known visit. The group of 687 African-Americans recruited 3 years after study initiation was treated as a late entry cohort and immortal person-time before recruitment was removed. Variables were chosen for the final model based on the significance of their univariate associations as well as clinical interest. The proportional hazards assumption was checked with a log-log plot of the survival function and was met for both depression and physical inactivity.\nAll variables were treated as time-varying. For example, the CES-D was administered at baseline and annually for 9 additional years, so individuals may have up to 10 measurements. After dichotomisation they may fall into both high and low depression score groups at different points during follow-up. 
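As an illustration of how such repeated measurements might be arranged for a time-varying Cox analysis, the sketch below splits follow-up into start-stop intervals labelled with the current depression group (CES-D cut-off of 8); the layout, column names and example scores are hypothetical, and the published analyses were performed in Stata rather than Python.

```python
CESD_CUTOFF = 8  # score >=8 on the 10-item CES-D defines the high symptom group

def to_intervals(visit_years, cesd_scores, event_year=None):
    """Build counting-process style rows (start, stop, high_depression, event),
    one per inter-visit period, so the same person can contribute person-time
    to both the high and the low depression group over follow-up."""
    rows = []
    for i, start in enumerate(visit_years):
        stop = visit_years[i + 1] if i + 1 < len(visit_years) else start + 1
        rows.append({
            "start": start,
            "stop": stop,
            "high_depression": cesd_scores[i] >= CESD_CUTOFF,
            "event": event_year is not None and start < event_year <= stop,
        })
    return rows

# One hypothetical participant whose CES-D rises above the cut-off at year 2
for row in to_intervals(visit_years=[0, 1, 2, 3], cesd_scores=[4, 6, 9, 11]):
    print(row)
# A table of this shape could then be passed to a time-varying Cox fitter
# (for example lifelines' CoxTimeVaryingFitter, or stcox after stsplit in Stata).
```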
The Cox models used all available measurements and allocated the person-time to the appropriate risk group. Using repeated measurements reduces misclassification bias and provides more accurate estimates of risk throughout follow-up and at the time of events. For incident events occurring between visits, the covariate status for that between-visit period was updated only if the event occurred in the first half of the interval. For example, if a participant suffered a stroke 4 months after his fifth annual visit, the period between the fifth and sixth visits and subsequent periods were reclassified as prevalent stroke. If it occurred at 8 months, only subsequent periods were reclassified.\nTo estimate how much of the increased risk of cardiovascular mortality due to depressive symptoms may be accounted for by physical inactivity, we determined the percentage change in the coefficient for depression after physical activity variables were individually added to Cox models. This was calculated as:\n{log(HRdepression [model without physical activity]) – log(HRdepression [model with physical activity])} / log(HRdepression [model without physical activity]).\nWe considered this percentage change in the logHR of depression to be a measure of confounding or mediation. This was calculated for the PAS and for each of its components.\nSubgroup analyses were performed in a similar fashion, stratifying by baseline coronary heart disease status, race and gender. We also performed sensitivity analyses by using only baseline data, no time-varying covariates and without filling in missing values. A two sided p value <0.05 was considered statistically significant. All analyses were performed in STATA Version 10.1 (StataCorp LP).", "At study entry the mean±SD age of the participants was 72.8±5.6 years (range 65–100); 58% were female and 16% were non-white, of whom 96% were African-Americans. Differences in characteristics of participants at study entry by depression score and physical activity group are shown in table 1. Participants with a CES-D score above the cut-off were more often female, non-white, current smokers, less educated, consumed less alcohol and had a greater prevalence of comorbidities. About 8% of those with high depression scores were taking antidepressants compared with 3% of the participants with low depression scores. Those who were more physically inactive were older, more often female, non-white, current smokers, less educated and consumed less alcohol than physically active participants. They had higher body mass index and blood pressure, more comorbidities and less favourable lipid and chemistry profiles than more physically active participants.\nBaseline characteristics according to depression score and physical activity group\nData expressed as mean±SD for continuous variables or percentage for dichotomous variables.\nDepressive symptoms based on the Center for Epidemiological Studies-Depression Scale (10-item version): low (score 0–7) and high (score ≥8).\np Values from two-sample t tests for continuous and χ2 tests for dichotomous variables.\np Values from linear regression for continuous and logistic regression for dichotomous variables.\nBP, blood pressure; BMI, body mass index; CHD, coronary heart disease; CHF, congestive heart failure; CRP, C reactive protein; HS, high school; HDL-C, high-density lipoprotein cholesterol; LDL-C, low-density lipoprotein cholesterol; TCA, tricyclic antidepressant use.\nThe percentage of participants with a CES-D score above the cut-off increased over time, ranging from 20% at baseline to 30% after 10 years. 
During this time the study population also became less physically active with an overall shift towards lower activity groups. The percentage of persons falling into the lowest activity group increased from 10% at baseline to 19% at 10 years, while the percentage in the highest activity group fell from 14% to 9%. Depression scores and physical activity levels were strongly associated. After adjustment for age, race and sex, persons in the lowest physical activity group were more likely to have a high depression score than those in the most active group (OR 3.4; 95% CI 2.6 to 4.5; p<0.001). The results were similar when variables were treated as continuous and for each component of the PAS examined individually (not shown).\nThe mean follow-up duration was 10.3 years (maximum 14), which provided 60 652 person-years of observation. Mean follow-up duration was shorter for non-white participants (9.1 years), in part because most (71%) entered the study 3 years after the original cohort because of the recruitment of 687 African-Americans at that time. Overall, there were 2915 deaths (49.8%), including 1176 cardiovascular deaths (20.1%).\nThe risk for cardiovascular mortality stratified by level of depressive symptoms and by physical activity scores is shown in table 2. Persons with high depression scores had a 27% (multivariable adjusted) to 67% (unadjusted) increased risk of cardiovascular death compared with those with low depression scores. For physical activity, a stepwise increase in the risk of cardiovascular death was observed among progressively less active groups. Compared with those in the most active group, persons in the lowest activity group had a 217% (multivariable adjusted) to 425% (unadjusted) increased risk of cardiovascular mortality.\nRisk of cardiovascular mortality by depression score and physical activity\nAll p values<0.001 except PAS group III: p=0.008 (unadjusted), p=0.031 (demographic) and p=0.143 (multivariable).\nAdjusted for age, race and gender.\nThe number of participants in each group is taken from baseline.\nAdjusted for age, race, gender, clinic location (four sites), education (less than high school, high school, beyond high school), body mass index (underweight, normal, overweight, obese), smoking (never, former, current), alcohol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, diabetes, hypertension, coronary heart disease, congestive heart failure, stroke and antidepressant medication use.\nDepressive symptoms based on the Center for Epidemiological Studies-Depression Scale (10-item version): low (score 0–7) and high (score ≥8).\nTable 3 shows the percentage reduction in the logHR of the depression score for cardiovascular mortality after adding physical activity to the multivariable adjusted models. In the full cohort the addition of PAS resulted in a 26% reduction in the logHR while its individual components accounted for reductions of 10–19%. 
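As a rough illustration of what a 26% reduction in the logHR implies on the hazard ratio scale, the snippet below back-calculates an attenuated hazard ratio from the multivariable-adjusted estimate of 1.27 quoted earlier; the resulting value of about 1.19 is derived here for illustration and is not a figure reported by the study.

```python
from math import exp, log

hr_without_pa = 1.27  # multivariable-adjusted HR for high depressive symptoms
reduction = 0.26      # reported reduction in the logHR after adding the PAS

hr_with_pa = exp((1 - reduction) * log(hr_without_pa))
print(f"Implied HR after adding PAS: {hr_with_pa:.2f}")  # ~1.19
```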
This effect was similar when the 4671 (79.8%) participants without known coronary heart disease were compared with the 1181 (20.2%) participants with known coronary heart disease.\nPercentage reduction in the log hazard ratio of depressive symptoms for cardiovascular mortality after adding physical activity variables\nModels were multivariable adjusted for age, race, gender, clinic location, education, body mass index, smoking, alcohol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, diabetes, hypertension, coronary heart disease (CHD), congestive heart failure, stroke and antidepressant medication use.\nAll p<0.05 for log HR of depression except for results in non-white column (p range 0.16–0.24).\nPopulation size with coronary heart disease shown is at study entry and changes over time as it was modelled as time-varying.\nCategorical variables.\nContinuous variable.\nPhysical inactivity accounted for a greater percentage of the risk due to depression in men than in women. The addition of PAS reduced the logHR of depression by 31% in men and by 22% in women, with similar results in the individual components. PAS also tended to account for a greater percentage of the risk among white (27%) compared with non-white (11%) participants. However, the logHR of depression among non-whites in multivariable models did not achieve statistical significance (p range 0.16–0.24).\nIn a Kaplan–Meier analysis (figure 1), physical inactivity (median PAS score ≤7) combined with high depressive symptoms (CES-D ≥8) were associated with a significantly greater risk than having either condition alone (both p<0.001, log rank test). At these cut-off values the risk due to physical inactivity was greater than the risk due to high depressive symptoms (p=0.004, log rank test). The magnitude of the increase in risk between inactive and active participants was the same in both depression groups (p for interaction=0.251, demographic adjusted), suggesting that the risks of depression and inactivity for cardiovascular death are additive. Overall, the results were similar when using only baseline data (without time-varying covariates) and without substitution of missing data.\nCumulative incidence of cardiovascular death according to depression score and physical activity status. Number of cardiovascular deaths 1176/5852 (20.1%). High depression score, Center for Epidemiological Studies Depression Score (CES-D) ≥8; physical inactivity, Physical Activity Score (PAS) ≤7.", "This study shows that community-dwelling older adults in the CHS with high depression scores were at an increased risk of cardiovascular mortality during a follow-up period of approximately 10 years. Individuals with CES-D scores above the cut-off had a 67% greater risk of cardiovascular mortality than individuals with low depression scores, a risk that remained significant after adjustment for other important predictors of cardiovascular mortality. This finding alone is not surprising since previous reports from the CHS have shown that depression is associated with greater all-cause mortality4 and cardiovascular mortality27 in this cohort. 
This study addressed the contribution of physical inactivity to the increased mortality risk related to depression, since physical inactivity has been associated with substantially higher mortality in the CHS12 and since depression is known to be associated with physical inactivity.8 9 The major new finding of the present study is that physical inactivity accounts for approximately 25% of the increased risk of cardiovascular mortality due to depression in community-dwelling older adults.\nOther studies have reported that physical inactivity may be a partial mediator of the relationship between depression and cardiovascular events7 or mortality.18 In a group of initially medically stable patients with coronary artery disease, Brummett et al18 found that depressive symptoms were associated with increased mortality and with physical inactivity, as in the present study. These authors found that physical inactivity partially mediated the relationship between depression and mortality in patients with established coronary artery disease. A 13% reduction in the parameter estimate for Zung depression scores was reported after adding exercise to the Cox regression model used to identify factors predictive of survival in that population. Whooley et al7 reported that physical inactivity accounted for almost half of the association between depressive symptoms and cardiovascular events in a group of outpatients with stable coronary heart disease. These findings suggest that physical inactivity may play a somewhat more important role in a clinic-based population of individuals with established coronary heart disease than in community-dwelling older adults, although direct comparisons between studies is challenging because of differences in how physical activity is measured, in the methodology used to assess the contribution of physical inactivity to the increased risk of depression, and in the predictors of cardiovascular mortality accounted for in the analyses. There are also important differences in patient populations between studies which make direct comparisons difficult. Participants in the CHS were on average more than 20 years older than in the study by Brummett et al18 and 58% were women in the present study compared with only 18% in the studies by Brummett et al18 and by Whooley et al.7 Both studied patients with established cardiovascular disease, whereas only about 20% of participants in the CHS had established cardiovascular disease. Our findings should therefore be more generalisable to the older population and suggest that depressed individuals—regardless of whether they have established cardiovascular disease—may be able to reduce some of the risk due to depression by being more physically active. Of note, when the analysis was restricted only to CHS participants with established cardiovascular disease, our findings did not change substantially.\nIn the FINE study,19 as in the present study, depressive symptoms were associated with lower levels of self-reported physical activity, and both depressive symptoms and physical inactivity were associated with increased cardiovascular mortality. However, physical inactivity was felt to account for a smaller percentage of the increased mortality risk related to depression (ie, only 9%), and the authors concluded that it was not likely to be a mediator of the relationship between depressive symptoms and increased mortality.29 There were important differences in the FINE study and the present report. 
The FINE population was smaller (n=909 vs 5888 in our study); the participants were from Finland, the Netherlands and Italy versus the USA; the participants were all men (vs 58% women in our study); only baseline measurements were used; a different measure of physical activity was used from that in the CHS; and a different measure of depression was used (the Zung depression scale vs the CES-D in the CHS).\nNot surprisingly, high depression scores and low physical activity were strongly associated in the present study. Individuals in the lowest physical activity group were >3 times as likely to have a high depression score as those in the most active group. Previous work in this area shows that the relationship between depression and physical inactivity is complex and bidirectional. Regular physical activity decreases the risk of depression29 whereas cessation of exercise can lead to the development of depressive symptoms.30 A recent systematic review provided evidence for the other direction of this relationship—namely, that baseline depression might lead to the development of a sedentary lifestyle or to a lower level of physical activity.9\nThe present study has several important strengths including the large number of participants, the inclusion of individuals with and without known coronary heart disease at baseline, and the relatively long period of follow-up (up to 14 years). The CHS also provides nearly 100% ascertainment of mortality and obtained many time-varying covariates with multiple repeated measures over time, reducing misclassification. In addition, the participants in the CHS were randomly selected from four geographically distinct communities and are likely to be more representative of the US population as a whole. The only previous study that examined whether physical inactivity mediates the relationship between depression and mortality among community-dwelling older adults without known heart disease included only men,19 whereas the present study included both men and women.\nOne limitation of our study is that the percentage reduction in the logHR of depression could represent not only mediation by physical inactivity but might also be due to confounding if physical inactivity preceded depressive symptoms. However, since depression has previously been shown to lead to inactivity,8 much of this effect may be due to mediation. There are also limitations of the measures of both physical activity and depression. In the CHS, physical activity was determined by self-report rather than by objective measures and self-reported physical activity may be influenced by social desirability and social approval.31 Of note, however, studies have shown that self-report is highly correlated to more objective measures of physical activity and fitness, including treadmill performance32 and maximal oxygen consumption.33 Since physical activity has many dimensions, a strength of our study is that participants were asked about a variety of different domains of physical activit, including multiple common leisure time activities. In addition, information was obtained about a variety of different attributes of these activities including frequency, duration, pace and intensity.\nAnother limitation of our study is that depressive symptoms were assessed using the CES-D rather than by a structured interview. 
The CES-D is a valid and reliable instrument for assessing depressive symptoms in large community samples including community-dwelling older adults.34 The CES-D does not result in a clinical diagnosis of a major depressive disorder (MDD) but its construct and predictive validity have been established in older adults in general25 and in the CHS cohort in particular.4 The prevalence of significant depressive symptoms of 20–30% in this cohort (ie, participants with CES-D scores above the cut-off) is higher than the approximately 5% prevalence of MDD in community-dwelling older adults,35 which indicates that most of the participants classified as ‘depressed’ based on the CES-D had sub-threshold depressive symptoms. Additional studies are needed to examine the role of physical inactivity as a mediator of the adverse health risks among clinically depressed individuals.\nIn summary, the present study shows that physical inactivity accounts for a significant proportion (approximately 25%) of the increased cardiovascular mortality risk due to depressive symptoms in adults aged ≥65 years. These data suggest that preventive health and wellness programmes in older adults, particularly those with depression, should focus on encouraging enrolment and continued participation in exercise programmes. Future research might examine whether incentives could be used to change health behaviour in certain individuals, as has been done in other settings.36 37 Positive financial incentives, health insurance rebates, transportation vouchers or health club memberships might enhance participation of older adults with depression in these programmes and thereby reduce healthcare utilisation and the risk of cardiovascular events." ]
[ "intro", "methods", null, null, null, null, null, "results", "discussion" ]
[ "Physical inactivity", "depression", "depressive symptoms", "mortality", "exercise" ]
Secondary attack rate of tuberculosis in urban households in Kampala, Uganda.
21339819
Tuberculosis is an ancient disease that continues to threaten individual and public health today, especially in sub-Saharan Africa. Current surveillance systems describe general risk of tuberculosis in a population but do not characterize the risk to an individual following exposure to an infectious case.
BACKGROUND
In a study of household contacts of infectious tuberculosis cases (n = 1918) and a community survey of tuberculosis infection (N = 1179) in Kampala, Uganda, we estimated the secondary attack rate for tuberculosis disease and tuberculosis infection. The ratio of these rates is the likelihood of progressive primary disease after recent household infection.
METHODS
The secondary attack rate for tuberculosis disease was 3.0% (95% confidence interval: 2.2, 3.8). The overall secondary attack rate for tuberculosis infection was 47.4 (95% confidence interval: 44.3, 50.6) and did not vary widely with age, HIV status or BCG vaccination. The risk for progressive primary disease was highest among the young or HIV infected and was reduced by BCG vaccination.
RESULTS
Early case detection and treatment may limit household transmission of M. tuberculosis. Household members at high risk for disease should be protected through vaccination or treatment of latent tuberculosis infection.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Child", "Child, Preschool", "Family Characteristics", "Female", "Humans", "Incidence", "Male", "Middle Aged", "Recurrence", "Risk Factors", "Tuberculosis", "Uganda", "Urban Population", "Young Adult" ]
3038854
null
null
Methods
This study was approved by the Ugandan Council for Science and Technology and the Institutional Review Board at the University Hospitals of Cleveland. Informed consent was obtained from adults, assent from adolescents with permission from parents or guardians, and consent from parents or guardians for children. All consent was obtained in writing. To study the dynamics of M. tuberculosis transmission and active tuberculosis in African households, we performed a longitudinal study of tuberculosis (sputum smear-positive for acid fast bacilli) in 497 index cases and their household contacts (n = 1918, Figure 1). Tuberculosis cases were identified at the Tuberculosis Treatment Center of Mulago Hospital in Kampala, Uganda [6]. Household contacts were identified through household contact tracing performed within 4 weeks of the initial diagnosis of tuberculosis in the index case. Contacts were followed for two years from the time of diagnosis in the index case and were evaluated at 6 month intervals for tuberculosis disease. These evaluations included history and physical examination; contacts identified as tuberculosis suspects were further evaluated with sputum microscopy and culture, chest radiography, and HIV serostatus. A similar approach was used for sick visit evaluations. Tuberculin skin testing was repeated three months after household evaluation to include recent skin test converters. Of 442 contacts with a tuberculin skin test (TST)<5 mm at baseline, 380 contacts (86%) were available for repeat evaluation. To measure the prevalence of tuberculosis infection in households without active cases, we performed a cross-sectional study of 200 neighborhood control households without cases of active tuberculosis and enrolled 1179 people residing in the same or adjacent neighborhoods. Neighborhood control households were identified by selecting a neighboring village to the index household within the same or adjacent parish, and then by randomly selecting households for the study either from a pre-assembled list of households in the village, if available, or by recruiting consecutive households along a road or path. Households were eligible to be controls if no case of tuberculosis was present in the household for at least one year, at least one member in the household was within 5 years of age as the index case, and the household contained two or more members. By choosing adjacent or neighboring parishes to the index households, community controls were matched to the index households for socioeconomic status and underlying level of community transmission. In each index case and neighborhood household, we evaluated all members for latent tuberculosis infection and active tuberculosis using standard clinical methods [7] within four weeks of household evaluation and estimated the age-specific prevalence of latent tuberculosis infection and active disease. Co-prevalent tuberculosis was defined as a tuberculosis case occurring within three months of the initial diagnosis in the index case; incident tuberculosis was defined as a case of disease occurring after three months [6]. Latent tuberculosis infection was measured using purified protein derivative (Tubersol) and the Mantoux method. A criterion for a positive test of 10 mm was used to minimize misclassification from previous BCG vaccination [8]. Contacts who converted the TST to positive with 3 months were considered to be infected at baseline [9]. 
The presence of a BCG scar was assessed by a trained health care provider and verified with medical records where possible. Tuberculosis suspects were evaluated with medical history, physical examination, sputum microscopy and culture, and chest x-ray [6]. To characterize the strains of M. tuberculosis in households, sputum samples were obtained from the 76 household contacts with culture-confirmed tuberculosis and their index cases. Isolates of M. tuberculosis from 61 pairs (80%) were analyzed using restriction fragment length polymorphisms [10] (RFLP) to determine strain type. In 15 pairs, an isolate from either the index case or contact was not available because of contamination or failure to grow. Isolates of M. tuberculosis were considered to be matched if they had: (1) more than five copies of IS6110 and the fragments showed 100 percent match at a band deviation of 2.5 percent or less; (2) less than six copies of IS6110 and the fragments were 100 percent matched and the isolates showed identical PGRS patterns [11]. A secondary case of tuberculosis was defined as a contact case who had disease with the same strain of M. tuberculosis as the index case as determined by the RFLP pattern of both isolates. For the purposes of this analysis, we assume that infection in the index and contact cases did not occur through a common source case outside of the household. To apply the concepts of the SAR to tuberculosis, we decomposed the attack rate into two parts that reflect the natural history of the disease [12] and then derived methods to estimate the SAR for tuberculosis disease and infection separately. In the natural history of tuberculosis, infection with M. tuberculosis must first occur in a susceptible individual after one or more exposures to an infectious index case. Once infection is established, active disease may ensue depending on host immune response and virulence properties of the pathogen. The SAR for tuberculosis disease (SARD) may be thought of as the product of the SAR for infection with M. tuberculosis from the index case (SARI) and the probability of developing disease within a specified time interval following infection (pD|I):\nSARD = SARI × pD|I\nThe SAR for tuberculosis disease was estimated directly through contact investigations by determining the proportion of household contacts that had or developed tuberculosis within 24 months of the diagnosis in the index case and shared the same strain of M. tuberculosis as the index case using RFLP analysis. For comparison, the SAR for disease was calculated separately using all contact cases regardless of strain type. Since we were not able to obtain RFLP results on 15 culture-confirmed contact cases, we estimated the total number of matched strains as the sum of observed and expected matches. Expected matches were estimated for index-case isolate pairs without RFLP results according to the proportions observed in pairs with RFLP patterns. The SAR for tuberculosis infection in household contacts is the probability of infection by the same strain of M. tuberculosis as the infectious index case during the exposure period. Since it is not possible to know the strain producing a latent tuberculosis infection, we estimated the SAR for infection as the difference in age-specific prevalence of latent infection between the household contacts and community controls (Appendix S1). The prevalence difference estimates the additional risk for latent infection associated with living in a house of an infectious index case. 
With the SAR for tuberculosis disease and infection estimated, the probability of progressive primary tuberculosis given recent household infection (pD) is the ratio of the SAR for disease to the SAR for infection: pD = SARD / SARI.
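The arithmetic linking these quantities can be illustrated with the figures reported in this study (46 matched pairs of 61 typed, 76 culture-confirmed contact cases, 1918 contacts, and an overall prevalence difference of 47.4%); the short Python sketch below is for illustration only and is not part of the original analysis.

```python
# Secondary attack rate for disease: observed + expected RFLP-matched cases
observed_matches = 46      # matched pairs among the 61 with RFLP results
typed_pairs = 61
culture_confirmed = 76     # culture-confirmed contact cases overall
contacts = 1918

expected_total_matches = round(observed_matches / typed_pairs * culture_confirmed)  # ~57
sar_disease = expected_total_matches / contacts                                     # ~0.030

# Secondary attack rate for infection: overall prevalence difference reported as 47.4%
sar_infection = 0.474

# Probability of progressive primary disease after recent household infection
p_disease_given_infection = sar_disease / sar_infection

print(f"SAR_D  = {sar_disease:.1%}")                 # ~3.0%
print(f"SAR_I  = {sar_infection:.1%}")               # 47.4%
print(f"p(D|I) = {p_disease_given_infection:.1%}")   # ~6.3%
```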
null
null
null
null
[ "Introduction", "Results", "Discussion" ]
[ "Tuberculosis is a disease that is both curable and preventable, yet still poses a threat to personal and public health today, especially in developing countries. In most countries, the burden of tuberculosis is monitored by rates of disease obtained through surveillance systems that rely on passive case finding and centralized reporting. This type of surveillance is subject to the ecologic fallacy because it describes the average risk of tuberculosis in a population but does not characterize the risk to an individual following exposure to an infectious case. For an individual living in an area endemic for tuberculosis, the latter risk may be of greater relevance.\nIn a setting endemic for tuberculosis, such as Sub-Saharan Africa, one cannot always determine whether heightened risk for tuberculosis results from increased frequency of exposure to infectious cases due to the high prevalence of disease, enhanced risk of acquiring infection once exposed, or increased risk of disease once infected. The secondary attack rate (SAR), which measures the probability of disease transmission to an individual in the context of a defined exposure [1], [2], may be used to tease apart these component risks among household contacts. Although the SAR is most often applied to infectious diseases with short incubation periods in well-defined social networks, such as households, schools, and hospitals [1], [3]–[5], its methods may be extended to include chronic infectious diseases, such as tuberculosis, with the use of modern molecular techniques to identify and track strains.\nIn this report, we adapt classic concepts of SAR to tuberculosis and derive new ways to determine the SAR for both tuberculosis infection and disease, and to estimate the risk of developing tuberculosis after household exposure.", "Household contacts (n = 1918) and community members (n = 1179) were similar as regards age, gender, vaccination with BCG, level of crowding in the household, type and location of residence. Among the 1918 household contacts, 114 cases of tuberculosis were identified, of which 76 cases (67%) were confirmed by culture. Culture-confirmed disease was present in 28 of 55 (53%) children younger than 5 years, 7 of 10 (70%) children 5 to 15 years, and 40 of 49 (82%) contacts older than 15 years. Of the 76 culture-confirmed cases, 49 cases were co-prevalent cases, the remaining 27 were incident cases occurring during the 24 month follow-up period. RFLP analysis was performed on 61 of the 76 isolates (80%). Overall, the RFLP pattern of contact cases matched the pattern of index cases in 46 of 61 pairs (75%; Table 1). In the remaining 15 pairs of index and contact cases, the RFLP pattern did not match; these isolate pairs are distributed among children, HIV seropositive, and BCG vaccinated contacts (Table 1). HIV serostatus was not known for 262 contacts; 2 cases of tuberculosis with a matched isolate occurred among these contacts. BCG vaccination status was not known or was uncertain for 70 contacts; 1 case of tuberculosis with a matched isolate occurred among these contacts.\n**Co-prevalent cases with the same finger print pattern as the index case. Since 15 cases did not have RLFP results, this number is estimated using the observed proportion (see methods) of RLFP matches. 
46/61 observed matches; thus, 46/61*76 culture confirmed cases = 57.3 = 57.\nThe total number of cases with matched RFLP patterns is the number of isolates with observed matches plus expected number of matches from isolates grown in culture but not analyzed with RFLP. Expected number of matches was estimated as the product of the observed proportion of matches and the number of pairs without RFLP results plus observed matches.\n*HIV serostatus was not available in 262 (13.7%) of contacts. HIV serostatus was not measured in community control households; the general secondary attack rate for infection was therefore used to estimate risk of disease after household infection.\nVaccination status missing or uncertain in 70 household contacts and 4 community members.\nThe overall SAR for disease using case pairs with matched RFLP patterns was 3.0% (95% confidence interval: 2.2, 3.8; Table 1). Without accounting for the strain types, the SAR for disease was 3.9%, an overestimation of 25%. The SAR for disease was bimodal according to age with the highest risk among children 5 years old or younger (5.1%) and among contacts 26 to 45 years old (5.0%), and the lowest risk among contacts 6 to 15 years old (0.8%; Table 1). The high level of SAR for disease in the age category 26–45 was attributable to HIV infection; when analyzing only the HIV seronegative contacts by age, the SAR for disease dropped in the age category to 2.7 (95% CI: 0.3, 5.0), whereas the rate of disease remained similar in the other age groups. In HIV-infected contacts the SAR for disease was 8.8%, whereas in HIV seronegative contacts, the rate was 2.5%. For contacts with BCG vaccination, the SAR for disease was 2.7% for contacts compared with 3.5% for contacts without vaccination.\nOf the 1918 contacts, 1201 contacts (63%) without disease had TST≥10 mm, 119 contacts (6%) converted to a positive TST within three months of initial evaluation, and 49 had co-prevalent disease (2.6%), yielding a total of 1369 contacts (71%) with infection at the time of household investigation. The prevalence of infection was greater for household contacts compared to community controls for all age categories (Table 2). The overall difference in prevalence of infection was 47.4% (95% confidence interval: 44.3, 50.6). Among the household contacts, the prevalence of tuberculosis infection increased with age from 63% in children 5 years and younger to 87.5% among older adults (Table 2, Figure 2). Among community members, the prevalence of tuberculosis infection increased with age from 12.6% in children 5 years and younger to 34.6% among older adults (Table 3, Figure 2). The age-specific prevalence difference ranged from 45.5 to 53.9% across the age groups but did not differ among age groups (test for linear trend, P = 0.91).\nVaccination status missing or uncertain in 70 household contacts and 4 community members.\n*Defined as the sum of contacts with TS>10 mm within 3 months of household evaluation who do not have evidence of active tuberculosis.\nBecause BCG vaccination may confound the relation between household exposure to and infection with M. tuberculosis, we performed a stratified analysis based on BCG vaccination (Table 2). The prevalence of tuberculosis infection was greater among non-vaccinated contacts and controls compared with their vaccinated counterparts. Prevalence of infection was also greater in contacts than controls regardless of vaccination status. 
The prevalence difference in infection was similar regardless of BCG vaccination status.\nThe overall risk of progressive primary disease, that is the probability of developing disease after acquiring new infection with M. tuberculosis through household contact, was 6.3% (Table 3). Part of this elevated risk was carried by children 5 years old or younger who had a conditional risk of disease of 10.1% as compared with the risk of 4.6% in contacts older than 5 years. HIV infection in the household contact conferred highest absolute risk for progressive primary disease of 18.6%. The probability of disease was 20% lower in the vaccinated compared with the unvaccinated contacts.", "In this study from an urban setting in East Africa, we found that, overall, the SAR for disease was 3% but that it varied according to age and HIV serostatus, as expected. The SAR for infection with M. tuberculosis was high, 47%, but it was similar across age groups, HIV status, and BCG vaccination, indicating parity in the risk for tuberculosis infection among household contacts. Thus, the observed variation in the SAR for disease was attributable not to the likelihood of acquiring new infection in the household but to the differing risks for progressive primary disease among newly infected household contacts.\nThe SAR of an infectious disease quantifies the risk of disease transmission to an individual in the context of a defined exposure [1], [13]. Formally, the SAR is the conditional probability of transmission of infection, or disease, to a susceptible. This analysis extends the classic model of the SAR for infectious diseases [1], [14] to tuberculosis in a household contact setting. By representing the natural history of tuberculosis as a two-stage process of infection followed by disease [12], and by evaluating household contacts where the exposure to an infectious case is known by design, we separate the risk for infection from the risk for disease, and thereby obtain separate estimates for the SAR for infection and the SAR for disease. Moreover, the ratio of these attack rates provides the likelihood of progressive primary disease resulting from recent household infection and adjusts for previous tuberculosis infection in contacts.\nIn the household contact setting, the SAR is used as a measure of risk for disease in the household and is estimated as the proportion of household members exposed who also develop disease within a specified time period. The validity of the SAR, however, depends on the degree of concordance of strain types between index and contact cases. Because some disease in households results from transmission outside the household contact network, failure to account for these cases overestimates the SAR for disease. Recent population-based studies from industrialized countries have shown that the strain of M. tuberculosis may differ between the index and contact cases in up to 30% of pairs. In this study, we observed a similar proportion of discordant pairs. In fact, in this setting, the SAR for disease would have been overestimated by 25% without verifying the strain-specific chain of transmission by RFLP analysis.\nTuberculosis has a long and variable latent period, sometimes lasting decades. To convey meaning about risk for disease, the SAR for disease must specify a time frame for the development of disease. In this study, the SAR for disease captured risk for two years after the diagnosis of the index case. 
By design, then, we estimated the risk for progressive primary disease after household exposure to an index case. The SAR captures the risk of disease after exposure to an infectious case but does not accurately estimate the risk of disease after acquiring new infection. As seen in this study, and in other household contact studies [15]–[18], not all exposed household contacts become infected. Since we estimated the SAR for infection to be 47%, the actual risk of developing disease after acquiring new infection is about twice the SAR for disease [18].\nIn this analysis, we merged the concepts of the SAR with those of disease prevalence [19] and multi-causal models [20]–[22] to estimate the SAR for tuberculosis infection in households. This method estimates SAR for infection by calculating the age-specific difference in prevalence of latent tuberculosis infection between household contacts and community members. This prevalence difference best approximates the SAR for infection when the annual risk of infection in the community is low or when the infectious period for the index case is short (Appendix S1). In this study, the median duration of cough, a surrogate for infectiousness, was 90 days [6], so with an annual risk of infection is as high as 3% per year [23], the prevalence difference overestimates the SAR by less than 1%. If we restrict our interest to a specific strain of M. tuberculosis, that is, the strain producing disease in the index case, then the prevalence difference is likely to be an excellent estimate of the SAR because in endemic settings, there are typically hundreds of circulating strains during any period of time [24]–[26] so the annual risk of infection from a specific strain in the community will be small.\nThis estimate of the SAR for infection carries other limitations and assumptions. Although the TST is the standard method for assessing infection with M. tuberculosis, it lacks sensitivity in the setting of immunosuppression (e.g., HIV infection or malnutrition) and specificity where BCG vaccination is widely used [27], [28]. Although HIV infection is endemic in Uganda and may cause false-negative TST results that may lead us to underestimate the SAR for infection, the HIV seroprevalence of 12% among contacts did not affect the prevalence difference (data not shown). To minimize false-positive misclassification of the TST results due to BCG vaccination, we used 10 mm as our criterion for a positive TST [8]. Some of the limitations of the TST may be mitigated by the use of interferon-γ release assays which may improve upon the specificity of the TST in the diagnosis of latent tuberculosis infection. The methods presented here can be readily modified to use the new immune-based assays in estimating secondary attack rates. In this analysis, we also estimated the average risk of infection as the difference in average age-specific prevalence of latent infection (i.e., the prevalence in household contacts compared with community members). At the individual level, these estimates may not apply because a given contact may have been previously infected and experience risks that differ from the overall average of that age group.\nIn the household of an infectious index case, the interactions between the contacts and index case are complex. The duration and intensity of exposure to the index case may depend on the familial relationship, traditional roles of caring for ill relatives, ability of the index case to cough, ventilation in the house, to name a few. 
Each discrete exposure is associated with a real but unknown probability of becoming infected. Since it is not feasible to measure the risk of infection for any single exposure to the index case, we used age-specific prevalence as a measure of the cumulative risk over time of the discrete and multiple exposures. We assume a binomial model, discrete exposures occurring randomly in time, and homogeneous mixing of household members.\nIn conclusion, we have combined modern molecular techniques with traditional epidemiologic methods to introduce a new approach for estimating the risk of tuberculosis following recent infection with M. tuberculosis in African households. This method shows that contact cases of tuberculosis often, but not always, shared the same strain of M. tuberculosis as the index case, despite high level of tuberculosis transmission in the community. The risk for tuberculosis infection resulting from household transmission in an urban African home is high. Since the risk of infection did not vary widely with age or previous BCG vaccination, the observed variability in progressive primary disease depended on characteristics such as age and immune status of the household contact. These observations highlight the importance of careful exposure history, especially in the context of drug-resistant tuberculosis, and early case detection and treatment to limit household transmission of M. tuberculosis. Furthermore, household members at high risk for disease must be protected through treatment of latent tuberculosis infection." ]
[ null, null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Supporting Information" ]
[ "Tuberculosis is a disease that is both curable and preventable, yet still poses a threat to personal and public health today, especially in developing countries. In most countries, the burden of tuberculosis is monitored by rates of disease obtained through surveillance systems that rely on passive case finding and centralized reporting. This type of surveillance is subject to the ecologic fallacy because it describes the average risk of tuberculosis in a population but does not characterize the risk to an individual following exposure to an infectious case. For an individual living in an area endemic for tuberculosis, the latter risk may be of greater relevance.\nIn a setting endemic for tuberculosis, such as Sub-Saharan Africa, one cannot always determine whether heightened risk for tuberculosis results from increased frequency of exposure to infectious cases due to the high prevalence of disease, enhanced risk of acquiring infection once exposed, or increased risk of disease once infected. The secondary attack rate (SAR), which measures the probability of disease transmission to an individual in the context of a defined exposure [1], [2], may be used to tease apart these component risks among household contacts. Although the SAR is most often applied to infectious diseases with short incubation periods in well-defined social networks, such as households, schools, and hospitals [1], [3]–[5], its methods may be extended to include chronic infectious diseases, such as tuberculosis, with the use of modern molecular techniques to identify and track strains.\nIn this report, we adapt classic concepts of SAR to tuberculosis and derive new ways to determine the SAR for both tuberculosis infection and disease, and to estimate the risk of developing tuberculosis after household exposure.", "This study was approved by the Ugandan Council for Science and Technology and the Institutional Review Board at the University Hospitals of Cleveland. Informed consent was obtained from adults, assent from adolescents with permission from parents or guardians, and consent from parents or guardians for children. All consent was obtained in writing.\nTo study the dynamics of M. tuberculosis transmission and active tuberculosis in African households, we performed a longitudinal study of tuberculosis (sputum smear-positive for acid fast bacilli) in 497 index cases and their household contacts (n = 1918, Figure 1). Tuberculosis cases were identified at the Tuberculosis Treatment Center of Mulago Hospital in Kampala, Uganda [6]. Household contacts were identified through household contact tracing performed within 4 weeks of the initial diagnosis of tuberculosis in the index case. Contacts were followed for two years from the time of diagnosis in the index case and were evaluated at 6 month intervals for tuberculosis disease. These evaluations included history and physical examination; contacts identified as tuberculosis suspects were further evaluated with sputum microscopy and culture, chest radiography, and HIV serostatus. A similar approach was used for sick visit evaluations. Tuberculin skin testing was repeated three months after household evaluation to include recent skin test converters. 
Of 442 contacts with a tuberculin skin test (TST)<5 mm at baseline, 380 contacts (86%) were available for repeat evaluation.\nTo measure the prevalence of tuberculosis infection in households without active cases, we performed a cross-sectional study of 200 neighborhood control households without cases of active tuberculosis and enrolled 1179 people residing in the same or adjacent neighborhoods. Neighborhood control households were identified by selecting a neighboring village to the index household within the same or adjacent parish, and then by randomly selecting households for the study either from a pre-assembled list of households in the village, if available, or by recruiting consecutive households along a road or path. Households were eligible to be controls if no case of tuberculosis was present in the household for at least one year, at least one member in the household was within 5 years of age as the index case, and the household contained two or more members. By choosing adjacent or neighboring parishes to the index households, community controls were matched to the index households for socioeconomic status and underlying level of community transmission.\nIn each index case and neighborhood household, we evaluated all members for latent tuberculosis infection and active tuberculosis using standard clinical methods [7] within four weeks of household evaluation and estimated the age-specific prevalence of latent tuberculosis infection and active disease. Co-prevalent tuberculosis was defined as a tuberculosis case occurring within three months of the initial diagnosis in the index case; incident tuberculosis was defined as a case of disease occurring after three months [6]. Latent tuberculosis infection was measured using purified protein derivative (Tubersol) and the Mantoux method. A criterion for a positive test of 10 mm was used to minimize misclassification from previous BCG vaccination [8]. Contacts who converted the TST to positive with 3 months were considered to be infected at baseline [9]. The presence of a BCG scar was assessed by a trained health care provider and verified with medical records where possible. Tuberculosis suspects were evaluated with medical history, physical examination, sputum microscopy and culture, and chest x-ray [6].\nTo characterize the strains of M. tuberculosis in households, sputum samples were obtained from the 76 household contacts with culture-confirmed tuberculosis and their index cases. Isolates of M. tuberculosis from 61 pairs (80%) were analyzed using restriction fragment length polymorphisms [10] (RFLP) to determine strain type. In 15 pairs, an isolate from either the index case or contact was not available because of contamination or failure to grow. Isolates of M. tuberculosis were considered to be matched if they had: (1) more than five copies of IS6110 and the fragments showed 100 percent match at a band deviation of 2.5 percent or less; (2) less than six copies of IS6110 and the fragments were 100 percent matched and the isolates showed identical PGRS patterns [11]. A secondary case of tuberculosis was defined as a contact case who had disease with the same strain of M. tuberculosis as the index case as determined by the RFLP pattern of both isolates. 
For the purposes of this analysis, we assume that infection in the index and contact cases did not occur through a common source case outside of the household.\nTo apply the concepts of the SAR to tuberculosis, we decomposed the attack rate into two parts that reflect the natural history of the disease [12] and then derived methods to estimate the SAR for tuberculosis disease and infection separately. In the natural history of tuberculosis, infection with M. tuberculosis must first occur in a susceptible individual after one or more exposures to an infectious index case. Once infection is established, active disease may ensue depending on host immune response and virulence properties of the pathogen. The SAR for tuberculosis disease (SARD) may be thought of as the product of the SAR for infection with M. tuberculosis from the index case (SARI) and the probability of developing disease within a specified time interval following infection (pD|I):\nSARD = SARI × pD|I\nThe SAR for tuberculosis disease was estimated directly through contact investigations by determining the proportion of household contacts that had or developed tuberculosis within 24 months of the diagnosis in the index case and shared the same strain of M. tuberculosis as the index case using RFLP analysis. For comparison, the SAR for disease was calculated separately using all contact cases regardless of strain type. Since we were not able to obtain RFLP results on 15 culture-confirmed contact cases, we estimated the total number of matched strains as the sum of observed and expected matches. Expected matches were estimated for index-case isolate pairs without RFLP results according to the proportions observed in pairs with RFLP patterns.\nThe SAR for tuberculosis infection in household contacts is the probability of infection by the same strain of M. tuberculosis as the infectious index case during the exposure period. Since it is not possible to know the strain producing a latent tuberculosis infection, we estimated the SAR for infection as the difference in age-specific prevalence of latent infection between the household contacts and community controls (Appendix S1). The prevalence difference estimates the additional risk for latent infection associated with living in a house of an infectious index case. With the SAR for tuberculosis disease and infection estimated, the probability of progressive primary tuberculosis given recent household infection (pD) is the ratio of the SAR for disease to the SAR for infection: pD = SARD / SARI.", "Household contacts (n = 1918) and community members (n = 1179) were similar as regards age, gender, vaccination with BCG, level of crowding in the household, type and location of residence. Among the 1918 household contacts, 114 cases of tuberculosis were identified, of which 76 cases (67%) were confirmed by culture. Culture-confirmed disease was present in 28 of 55 (53%) children younger than 5 years, 7 of 10 (70%) children 5 to 15 years, and 40 of 49 (82%) contacts older than 15 years. Of the 76 culture-confirmed cases, 49 cases were co-prevalent cases, the remaining 27 were incident cases occurring during the 24 month follow-up period. RFLP analysis was performed on 61 of the 76 isolates (80%). Overall, the RFLP pattern of contact cases matched the pattern of index cases in 46 of 61 pairs (75%; Table 1). In the remaining 15 pairs of index and contact cases, the RFLP pattern did not match; these isolate pairs are distributed among children, HIV seropositive, and BCG vaccinated contacts (Table 1).
HIV serostatus was not known for 262 contacts; 2 cases of tuberculosis with a matched isolate occurred among these contacts. BCG vaccination status was not known or was uncertain for 70 contacts; 1 case of tuberculosis with a matched isolate occurred among these contacts.\n**Co-prevalent cases with the same fingerprint pattern as the index case. Since 15 cases did not have RFLP results, this number is estimated using the observed proportion (see methods) of RFLP matches: 46/61 observed matches; thus, 46/61 × 76 culture-confirmed cases = 57.3 ≈ 57.\nThe total number of cases with matched RFLP patterns is the number of isolates with observed matches plus the expected number of matches from isolates grown in culture but not analyzed with RFLP. The expected number of matches was estimated as the product of the observed proportion of matches and the number of pairs without RFLP results, plus the observed matches.\n*HIV serostatus was not available in 262 (13.7%) of contacts. HIV serostatus was not measured in community control households; the general secondary attack rate for infection was therefore used to estimate risk of disease after household infection.\nVaccination status missing or uncertain in 70 household contacts and 4 community members.\nThe overall SAR for disease using case pairs with matched RFLP patterns was 3.0% (95% confidence interval: 2.2, 3.8; Table 1). Without accounting for the strain types, the SAR for disease was 3.9%, an overestimation of 25%. The SAR for disease was bimodal according to age, with the highest risk among children 5 years old or younger (5.1%) and among contacts 26 to 45 years old (5.0%), and the lowest risk among contacts 6 to 15 years old (0.8%; Table 1). The high SAR for disease in the 26–45 year age category was attributable to HIV infection; when analyzing only the HIV seronegative contacts by age, the SAR for disease in this age category dropped to 2.7% (95% CI: 0.3, 5.0), whereas the rate of disease remained similar in the other age groups. In HIV-infected contacts the SAR for disease was 8.8%, whereas in HIV seronegative contacts the rate was 2.5%. For contacts with BCG vaccination, the SAR for disease was 2.7%, compared with 3.5% for contacts without vaccination.\nOf the 1918 contacts, 1201 contacts (63%) without disease had TST≥10 mm, 119 contacts (6%) converted to a positive TST within three months of initial evaluation, and 49 had co-prevalent disease (2.6%), yielding a total of 1369 contacts (71%) with infection at the time of household investigation. The prevalence of infection was greater for household contacts compared to community controls for all age categories (Table 2). The overall difference in prevalence of infection was 47.4% (95% confidence interval: 44.3, 50.6). Among the household contacts, the prevalence of tuberculosis infection increased with age from 63% in children 5 years and younger to 87.5% among older adults (Table 2, Figure 2). Among community members, the prevalence of tuberculosis infection increased with age from 12.6% in children 5 years and younger to 34.6% among older adults (Table 3, Figure 2).
The age-specific prevalence difference ranged from 45.5% to 53.9% across the age groups and did not differ significantly among them (test for linear trend, P = 0.91).\nVaccination status missing or uncertain in 70 household contacts and 4 community members.\n*Defined as the sum of contacts with TST≥10 mm within 3 months of household evaluation who do not have evidence of active tuberculosis.\nBecause BCG vaccination may confound the relation between household exposure to and infection with M. tuberculosis, we performed a stratified analysis based on BCG vaccination (Table 2). The prevalence of tuberculosis infection was greater among non-vaccinated contacts and controls compared with their vaccinated counterparts. Prevalence of infection was also greater in contacts than controls regardless of vaccination status. The prevalence difference in infection was similar regardless of BCG vaccination status.\nThe overall risk of progressive primary disease, that is, the probability of developing disease after acquiring new infection with M. tuberculosis through household contact, was 6.3% (Table 3). Part of this elevated risk was carried by children 5 years old or younger, who had a conditional risk of disease of 10.1% as compared with the risk of 4.6% in contacts older than 5 years. HIV infection in the household contact conferred the highest absolute risk for progressive primary disease, 18.6%. The probability of disease was 20% lower in the vaccinated compared with the unvaccinated contacts.", "In this study from an urban setting in East Africa, we found that, overall, the SAR for disease was 3% but that it varied according to age and HIV serostatus, as expected. The SAR for infection with M. tuberculosis was high, 47%, but it was similar across age groups, HIV status, and BCG vaccination, indicating parity in the risk for tuberculosis infection among household contacts. Thus, the observed variation in the SAR for disease was attributable not to the likelihood of acquiring new infection in the household but to the differing risks for progressive primary disease among newly infected household contacts.\nThe SAR of an infectious disease quantifies the risk of disease transmission to an individual in the context of a defined exposure [1], [13]. Formally, the SAR is the conditional probability of transmission of infection, or disease, to a susceptible individual. This analysis extends the classic model of the SAR for infectious diseases [1], [14] to tuberculosis in a household contact setting. By representing the natural history of tuberculosis as a two-stage process of infection followed by disease [12], and by evaluating household contacts where the exposure to an infectious case is known by design, we separate the risk for infection from the risk for disease, and thereby obtain separate estimates for the SAR for infection and the SAR for disease. Moreover, the ratio of these attack rates provides the likelihood of progressive primary disease resulting from recent household infection and adjusts for previous tuberculosis infection in contacts.\nIn the household contact setting, the SAR is used as a measure of risk for disease in the household and is estimated as the proportion of exposed household members who also develop disease within a specified time period. The validity of the SAR, however, depends on the degree of concordance of strain types between index and contact cases.
Because some disease in households results from transmission outside the household contact network, failure to account for these cases overestimates the SAR for disease. Recent population-based studies from industrialized countries have shown that the strain of M. tuberculosis may differ between the index and contact cases in up to 30% of pairs. In this study, we observed a similar proportion of discordant pairs. In fact, in this setting, the SAR for disease would have been overestimated by 25% without verifying the strain-specific chain of transmission by RFLP analysis.\nTuberculosis has a long and variable latent period, sometimes lasting decades. To convey meaning about risk for disease, the SAR for disease must specify a time frame for the development of disease. In this study, the SAR for disease captured risk for two years after the diagnosis of the index case. By design, then, we estimated the risk for progressive primary disease after household exposure to an index case. The SAR captures the risk of disease after exposure to an infectious case but does not accurately estimate the risk of disease after acquiring new infection. As seen in this study, and in other household contact studies [15]–[18], not all exposed household contacts become infected. Since we estimated the SAR for infection to be 47%, the actual risk of developing disease after acquiring new infection is about twice the SAR for disease [18].\nIn this analysis, we merged the concepts of the SAR with those of disease prevalence [19] and multi-causal models [20]–[22] to estimate the SAR for tuberculosis infection in households. This method estimates the SAR for infection by calculating the age-specific difference in prevalence of latent tuberculosis infection between household contacts and community members. This prevalence difference best approximates the SAR for infection when the annual risk of infection in the community is low or when the infectious period for the index case is short (Appendix S1). In this study, the median duration of cough, a surrogate for infectiousness, was 90 days [6], so even with an annual risk of infection as high as 3% per year [23], the prevalence difference overestimates the SAR by less than 1%. If we restrict our interest to a specific strain of M. tuberculosis, that is, the strain producing disease in the index case, then the prevalence difference is likely to be an excellent estimate of the SAR because, in endemic settings, there are typically hundreds of circulating strains during any period of time [24]–[26], so the annual risk of infection from a specific strain in the community will be small.\nThis estimate of the SAR for infection carries other limitations and assumptions. Although the TST is the standard method for assessing infection with M. tuberculosis, it lacks sensitivity in the setting of immunosuppression (e.g., HIV infection or malnutrition) and specificity where BCG vaccination is widely used [27], [28]. Although HIV infection is endemic in Uganda and may cause false-negative TST results that may lead us to underestimate the SAR for infection, the HIV seroprevalence of 12% among contacts did not affect the prevalence difference (data not shown). To minimize false-positive misclassification of the TST results due to BCG vaccination, we used 10 mm as our criterion for a positive TST [8]. Some of the limitations of the TST may be mitigated by the use of interferon-γ release assays, which may improve upon the specificity of the TST in the diagnosis of latent tuberculosis infection.
The methods presented here can be readily modified to use the new immune-based assays in estimating secondary attack rates. In this analysis, we also estimated the average risk of infection as the difference in average age-specific prevalence of latent infection (i.e., the prevalence in household contacts compared with community members). At the individual level, these estimates may not apply because a given contact may have been previously infected and experience risks that differ from the overall average of that age group.\nIn the household of an infectious index case, the interactions between the contacts and index case are complex. The duration and intensity of exposure to the index case may depend on the familial relationship, traditional roles of caring for ill relatives, ability of the index case to cough, ventilation in the house, to name a few. Each discrete exposure is associated with a real but unknown probability of becoming infected. Since it is not feasible to measure the risk of infection for any single exposure to the index case, we used age-specific prevalence as a measure of the cumulative risk over time of the discrete and multiple exposures. We assume a binomial model, discrete exposures occurring randomly in time, and homogeneous mixing of household members.\nIn conclusion, we have combined modern molecular techniques with traditional epidemiologic methods to introduce a new approach for estimating the risk of tuberculosis following recent infection with M. tuberculosis in African households. This method shows that contact cases of tuberculosis often, but not always, shared the same strain of M. tuberculosis as the index case, despite high level of tuberculosis transmission in the community. The risk for tuberculosis infection resulting from household transmission in an urban African home is high. Since the risk of infection did not vary widely with age or previous BCG vaccination, the observed variability in progressive primary disease depended on characteristics such as age and immune status of the household contact. These observations highlight the importance of careful exposure history, especially in the context of drug-resistant tuberculosis, and early case detection and treatment to limit household transmission of M. tuberculosis. Furthermore, household members at high risk for disease must be protected through treatment of latent tuberculosis infection.", "(DOC)\nClick here for additional data file." ]
[ null, "methods", null, null, "supplementary-material" ]
[]
Illicit methylphenidate use among Iranian medical students: prevalence and knowledge.
21340040
Methylphenidate, a medication prescribed for individuals suffering from attention-deficit/hyperactivity disorder, is increasingly being misused by students.
BACKGROUND
Anonymous, self-administered questionnaires were completed by all medical students entering the university between 2000 and 2007.
METHODS
Methylphenidate users' mean knowledge score was higher than that of nonusers (15.83 ± 3.14 vs 13.66 ± 3.10, P = 0.008). Age, gender, and school year were positively correlated with knowledge score (P < 0.05). Data analysis demonstrated that 27 participants (8.7%) had taken methylphenidate at least once in their lifetime. The respondents believed that the most common motive for methylphenidate use among youths was that it aided concentration and therefore ability to study.
RESULTS
This study indicates a relatively low level of knowledge about methylphenidate among Iranian medical students. More educational programs regarding the use of methylphenidate are required and should be focused on the student suppliers, clinicians, pharmacists, and medical students.
CONCLUSION
[ "Adolescent", "Adult", "Age Factors", "Central Nervous System Stimulants", "Female", "Health Knowledge, Attitudes, Practice", "Humans", "Iran", "Male", "Methylphenidate", "Prevalence", "Sex Factors", "Students, Medical", "Substance-Related Disorders", "Surveys and Questionnaires", "Young Adult" ]
3038997
null
null
Methods
The study population comprised all medical students entering the Faculty of Medicine, Tabriz University of Medical Sciences, Tabriz, Iran, between 2000 and 2007. A self-administered, anonymous questionnaire based on information obtained from a review of the literature on methylphenidate was utilized. The questionnaires were distributed among all the medical students from the first (2007 entrance) to seventh (2000 entrance) school year who were present at daily classes and/or hospital wards between January and March 2007. The reliability and validity of the questionnaire were qualified and examined by a pilot study and the supervision of expert professors. The questionnaire comprised four sections: 1) demographic information including age, gender, marital status, living place, and average grade of participants (five items); 2) methylphenidate-related questions covering general information, clinical symptoms, and legal aspects (11 items); 3) questions to determine the frequency of methylphenidate use (two items); and 4) a multiple-choice question regarding the respondents’ perceptions of the reasons for the increasing tendency toward methylphenidate use (one item). Concerning knowledge about methylphenidate, the section began with a question on whether individuals had heard of methylphenidate. This was followed by a question about the source of the respondents’ information (mass media, medical books, scientific journals, Internet, friends, or a physician’s prescription). In addition, there were 11 statements concerning students’ knowledge of methylphenidate, and possible responses for this section included “true”, “false”, and “I don’t know”. The knowledge score was calculated by allocating +2 for a correct answer, 0 for an incorrect answer, and +1 for “I don’t know” responses. A total of 22 points could be achieved if all questions were answered correctly. Higher scores were indicative of a greater level of knowledge. In order to determine the frequency of methylphenidate use among the students, they were asked whether they had used methylphenidate, excluding those who had been prescribed the medication. If respondents replied in the affirmative to this question they answered a follow-up question relating to how often they used methylphenidate. Possible responses for this question included “used before last year”, “used in the past 12 months”, “used in the past 6 months”, and “used in the past 30 days”. In addition, a question relating to the preferred route of administration was included in the questionnaire. Additional questions concerning the reasons for taking prescription stimulant medications were posed to all respondents. Students were given the option to choose any of the response categories including increasing wakefulness, peer pressure, curiosity, increasing self-confidence, weight loss, increasing concentration, and euphoric effects. Data were presented as the mean ± standard deviation, or percentage when appropriate. All statistical analyses were performed with SPSS (SPSS Inc, Chicago, IL, USA) for Windows, Version 16.0, using a Chi-square test, Fisher’s exact test, and independent samples t-test. Spearman’s correlation coefficient was calculated to determine the correlation between quantitative variables (demographic items and total knowledge score). A P value <0.05 was considered statistically significant.
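The scoring rule described above is simple enough to state as code. The following Python sketch is illustrative only (the function name and the example response list are hypothetical, not from the study); it applies the +2 / +1 / 0 weighting across the 11 knowledge items, giving a maximum score of 22.

# Minimal sketch of the knowledge scoring rule; names and example data are assumed.
def knowledge_score(responses):
    points = {"correct": 2, "dont_know": 1, "incorrect": 0}   # +2, +1, 0 per item
    return sum(points[r] for r in responses)

example = ["correct"] * 6 + ["dont_know"] * 3 + ["incorrect"] * 2   # 11 items
print(knowledge_score(example))   # 6*2 + 3*1 + 2*0 = 15 (maximum possible: 22)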
null
null
null
null
[ "Introduction", "Results", "Discussion" ]
[ "According to the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition, Text Revision (DSM-IV-TR), substance abuse is a mental disorder in which the use of a substance reaches a point th at induces significant dysfunction in the life of the abuser.1 Among such substances, prescription stimulants are assumed to be harmless ways of increasing levels of energy and concentration, enhancing school performance, or using for recreation, while lessening the desire for sleep.2 Youths show a high predilection for the use of prescription stimulants.3\nMethylphenidate (Ritalin) is a medication prescribed for individuals, in particular children, who have attention-deficit/hyperactivity disorder (ADHD).4 The attention-improving characteristic of methylphenidate has been attributed to the amplification of dopamine release in the central nervous system.5,6 There is an increasing trend toward the misuse of methylphenidate by youths. However, some research has concluded that methylphenidate use is exclusive to college students, because less use was found in a nonstudent population of the same age range.3 Methylphenidate is known for its high abuse potential; it has the same effects as cocaine or amphetamines5 and is the most widely researched of the prescription stimulant drugs.7,8 Cases of oral, intranasal, and intravenous abuse of the drug are well documented.9,10 The latter, a less common route of administration, can be precarious because fillers in the pills could block small blood vessels, causing injury to lungs and eyes.11 However, when methylphenidate is administered at high doses intranasally or orally, the risk of addiction increases and the physical side effects are augmented.5,9 No well-controlled studies have hitherto investigated methylphenidate tolerance, but clinicians and patients have reported situations where decreased efficacy and psychological dependence appear to occur.12\nIn recent years, researchers have investigated the illicit use of stimulants prescribed for the treatment of ADHD among American college students.13 Teter et al indicated that 3% of students surveyed at a large public university used methylphenidate for nonmedical purposes.14 Furthermore, Dupont et al reported that more than 5% of approximately 2000 college students used methylphenidate nonmedically at least once.15 Interestingly, Low and Gendaszek reported a prevalence of 35% among undergraduates in a psychology class for the misuse of methylphenidate or amphetamine.16 Prescription stimulant misuse ranges from 5% to 35% in college-aged individuals (for a review, see Wilens et al17). Demanding times throughout the academic year, including final examinations, can lead to an increased demand for prescription stimulants such as methylphenidate.9,17 Appetite suppression, wakefulness, increased attentiveness, and euphoria are known to be stimulant effects of methylphenidate.5\nThese data provide some understanding of the prevalence of methylphenidate abuse, particularly in the US and Canada. No corresponding data are available from other countries, and an in-depth description of the users’ conceptions and choices is lacking. A few recent studies have focused on various health topics among Iranian medical students,18–20 but there has been no research examining the prevalence and correlation of nonmedical use of methylphenidate among such students. 
Therefore, this study was carried out to determine the frequency of methylphenidate use among a group of Iranian medical students and to assess their knowledge of methylphenidate.", "Of 500 medical students, 310 completed the questionnaire, representing a response rate of 62%. The mean age of the respondents was 21.4 ± 2.07 years (range 18–28 years). Forty-three percent (n = 134) of the students were male and 56.7% (n = 176) were female. The students’ mean grade average was 15.4 ± 1.3 out of 20 (range 12.4–18.8). A list of the knowledge questions with the percentage of each response is provided in Table 1. Methylphenidate users’ mean knowledge score (total of correct and incorrect responses) was higher than that of nonusers (15.83 ± 3.14 vs 13.66 ± 3.10, Table 2, P = 0.008). Among the demographic items, age (r = 0.29; P = 0.001), gender (r = 0.22; P = 0.015), and school year (r = 0.28; P = 0.001) were positively correlated with the knowledge score. There was no significant correlation between other demographic items (grade average, marital status, and place of residence) and knowledge score (P > 0.05).\nData analysis demonstrated that 27 participants (8.7%) had taken methylphenidate at least once in their lifetime; two students were excluded from the study because they misused methylphenidate while under a physician’s order. Of these 27 individuals, 20 (74%) declared that they had taken methylphenidate within the last year and three (11%) within the last month. Ninety-seven participants (31.2%) knew someone among their classmates and friends who had misused methylphenidate. The frequency of methylphenidate use was significantly higher among men than among women (92% vs 8%, Table 2, P < 0.001). Furthermore, methylphenidate use was significantly higher among respondents with a grade average of 15 or less (74%) than among students with higher grade averages (Chi-square test, P < 0.001). Methylphenidate had previously been prescribed by physicians for three of the users (11%), and six students (22.2%) indicated that they were encouraged by their peers to take methylphenidate for its positive effects. The preferred route of administration was oral (88.8%), followed by nasal (3.7%) and injection (3.7%). Many of the respondents (21%) indicated that friends were the major source of information concerning methylphenidate, followed by scientific books and journals (17.6%), mass media (radio, television, and newspapers) (4.2%), and the Internet (3.4%). The data from respondents and methylphenidate users are provided in Table 2.\nRespondents believed that the most common motive for methylphenidate use was to increase concentration (41.7%). Furthermore, students declared that increasing wakefulness (17.3%), curiosity (16.7%), increasing energy levels (8.8%), increasing self-confidence (6.1%), peer pressure (5%), and weight loss (4.4%) were also common reasons for methylphenidate misuse.", "In the present study, the prevalence of methylphenidate misuse among a group of Iranian medical students was assessed along with their knowledge of methylphenidate. Data analysis demonstrated that less than 9% of medical students had used methylphenidate at least once in their lifetime. Of those who admitted to using the drug, approximately 75% had taken methylphenidate in the past year and 10% in the last 30 days. 
A wide range of methylphenidate use has been reported in the literature.13,17 Teter et al surveyed a large sample of public university students among whom 3% indicated nonmedical use of methylphenidate within the previous year.14 According to the Monitoring the Future (MTF) study, annual use of methylphenidate among college students was 4.2% and 3.9% in 2005 and 2006, respectively.3 In another study by Teter et al, approximately 25% of illicit users of prescription stimulants used methylphenidate.21 In contrast, Low and Gendaszek indicated that 35% of 150 undergraduates in a psychology class had used either methylphenidate or amphetamine at least once.16 In the present study, approximately 30% of respondents knew someone among their classmates and friends who had misused methylphenidate. The difference between the two calculated prevalences of methylphenidate use among the surveyed group of medical students (8.8% vs 31.2%) might suggest that students felt more comfortable about reporting methylphenidate use by their peers than by themselves, despite the anonymity of the questionnaires. However, this difference could also be due to overlap between the reports, ie, one particular methylphenidate user could be reported by himself/herself and his/her friends simultaneously. To the best of our knowledge, the present study is the first survey to determine the frequency of methylphenidate misuse among a group of medical students and to assess their knowledge of methylphenidate.\nThe overall findings from this study indicate a relatively low level of knowledge concerning methylphenidate, its physiological and psychological side effects, or legal consequences of illicit use among the medical students surveyed. However, the students’ knowledge score increased the longer they had been studying at the university. Assessing the knowledge of the university students with regard to methylphenidate has been neglected in most previous research, though DeSantis et al indicated that most users among undergraduates at a southeastern research university in the US possessed limited knowledge of prescription stimulants, including methylphenidate.13\nIn accordance with the MTF study, men reported higher past-year rates of methylphenidate misuse than women did.3 Hall et al22 and Simoni-Wastila23 found a comparable gender difference in the misuse of methylphenidate. This study presented a comparable finding, but the frequency of methylphenidate misuse among men was much higher than among women (92% vs 8%). In contrast, Teter et al reported that undergraduate men and women used illicit methylphenidate equally.14 In the present study, medical students with lower grade point averages were more likely to use methylphenidate illicitly. 
This finding is similar to those of the surveys carried out by McCabe et al.24,25 Furthermore, in the present study, methylphenidate use was reported most frequently among single medical students who resided on campus, but this was not statistically significant.\nIndividuals misuse methylphenidate and other stimulants to keep alert and improve concentration as they prepare for tests or complete term papers.26 Teter et al reported that the majority of methylphenidate users among surveyed students used methylphenidate to enhance their academic performance by increasing concentration and alertness.27 Two recent surveys of Iranian pharmacists and general practitioners demonstrated that the majority of the respondents believed that Ritalin was used to increase attention and concentration.28,29 Likewise, in this study, the majority of the respondents, including the methylphenidate users, highlighted that increased concentration was the most common motive for the use of methylphenidate. However, Barrett et al stated that 70% of methylphenidate users administered it for recreational reasons.10\nIn the present study, the most common routes of methylphenidate administration were oral (89%), intranasal (3.7%), and injection (3.7%). The predominantly oral methylphenidate misuse in this study is consistent with previous reports.10,15,21,30 Interestingly, Teter et al reported smoking as another route of methylphenidate administration, but this was not evident in the present research.21 In the present study, friends were the main source of information. This finding is in contrast to our previous reports concerning medical and female high school students’ knowledge of bird flu18 and HIV/AIDS31 in the same city, which revealed that the mass media were the main source of information.18,31 In previous reports, Iranian pharmacists and general practitioners stated that they obtained information on methylphenidate from the mass media and medical journals.28,29\nThis study has certain limitations. First, the validity of self-reported methylphenidate use among respondents depends on their willingness to reply truthfully on the questionnaire. Second, the sample in the present study was from a single university, thereby necessitating similar studies be conducted in other medical schools for comparison. Third, the present study did not explicitly address duration or frequency of methylphenidate use. Therefore, it is unknown whether nonprescription users took methylphenidate regularly or only occasionally. Fourth, the response rate in the present study was relatively low (62%). It is possible that the prevalence of methylphenidate misuse was underestimated if those who misused the drug chose not to complete the survey (eg, for fear of lack of anonymity). However, it should be highlighted that this study is the first to estimate the prevalence of methylphenidate use among medical students.\nAs methylphenidate is legally available to a small group of students (student suppliers) as ADHD medication, the authors believe that focusing on the small group of student suppliers may be an effective intervention approach to addressing the current problem of methylphenidate misuse among university students. Hence, the medical community, including clinicians and pharmacists, may consider reducing the monthly allocation of pills. In addition, clinicians should counsel students regarding the probable serious adverse effects to health if methylphenidate is misused, as well as the potential legal consequences of misuse. 
Furthermore, training health care professionals and medical students is of great importance in controlling the current trend. The findings of this study should be considered seriously by local health centers. Being aware of the scope and context of the problem could aid the development of prevention and monitoring programs for prescription drug misuse and diversion. The relatively low level of knowledge concerning methylphenidate among medical students in this study is primarily a reflection of insufficient academic courses in the medical school curriculum. It is recommended that knowledge of medical students about this topic be improved through access to textbooks, articles, seminars, and specific courses. Moreover, the addition of the topic of stimulant drugs, including methylphenidate, to pharmacology, toxicology, and psychiatry courses during medical education is advised." ]
[ null, null, null ]
[ "Introduction", "Methods", "Results", "Discussion" ]
[ "According to the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition, Text Revision (DSM-IV-TR), substance abuse is a mental disorder in which the use of a substance reaches a point th at induces significant dysfunction in the life of the abuser.1 Among such substances, prescription stimulants are assumed to be harmless ways of increasing levels of energy and concentration, enhancing school performance, or using for recreation, while lessening the desire for sleep.2 Youths show a high predilection for the use of prescription stimulants.3\nMethylphenidate (Ritalin) is a medication prescribed for individuals, in particular children, who have attention-deficit/hyperactivity disorder (ADHD).4 The attention-improving characteristic of methylphenidate has been attributed to the amplification of dopamine release in the central nervous system.5,6 There is an increasing trend toward the misuse of methylphenidate by youths. However, some research has concluded that methylphenidate use is exclusive to college students, because less use was found in a nonstudent population of the same age range.3 Methylphenidate is known for its high abuse potential; it has the same effects as cocaine or amphetamines5 and is the most widely researched of the prescription stimulant drugs.7,8 Cases of oral, intranasal, and intravenous abuse of the drug are well documented.9,10 The latter, a less common route of administration, can be precarious because fillers in the pills could block small blood vessels, causing injury to lungs and eyes.11 However, when methylphenidate is administered at high doses intranasally or orally, the risk of addiction increases and the physical side effects are augmented.5,9 No well-controlled studies have hitherto investigated methylphenidate tolerance, but clinicians and patients have reported situations where decreased efficacy and psychological dependence appear to occur.12\nIn recent years, researchers have investigated the illicit use of stimulants prescribed for the treatment of ADHD among American college students.13 Teter et al indicated that 3% of students surveyed at a large public university used methylphenidate for nonmedical purposes.14 Furthermore, Dupont et al reported that more than 5% of approximately 2000 college students used methylphenidate nonmedically at least once.15 Interestingly, Low and Gendaszek reported a prevalence of 35% among undergraduates in a psychology class for the misuse of methylphenidate or amphetamine.16 Prescription stimulant misuse ranges from 5% to 35% in college-aged individuals (for a review, see Wilens et al17). Demanding times throughout the academic year, including final examinations, can lead to an increased demand for prescription stimulants such as methylphenidate.9,17 Appetite suppression, wakefulness, increased attentiveness, and euphoria are known to be stimulant effects of methylphenidate.5\nThese data provide some understanding of the prevalence of methylphenidate abuse, particularly in the US and Canada. No corresponding data are available from other countries, and an in-depth description of the users’ conceptions and choices is lacking. A few recent studies have focused on various health topics among Iranian medical students,18–20 but there has been no research examining the prevalence and correlation of nonmedical use of methylphenidate among such students. 
Therefore, this study was carried out to determine the frequency of methylphenidate use among a group of Iranian medical students and to assess their knowledge of methylphenidate.", "The study population comprised all medical students entering the Faculty of Medicine, Tabriz University of Medical Sciences, Tabriz, Iran, between 2000 and 2007. A self-administered, anonymous questionnaire based on information obtained from a review of the literature on methylphenidate was utilized. The questionnaires were distributed among all the medical students from the first (2007 entrance) to seventh (2000 entrance) school year who were present at daily classes and/or hospital wards between January and March 2007. The reliability and validity of the questionnaire were qualified and examined by a pilot study and the supervision of expert professors. The questionnaire comprised four sections: 1) demographic information including age, gender, marital status, living place, and average grade of participants (five items); 2) methylphenidate-related questions covering general information, clinical symptoms, and legal aspects (11 items); 3) questions to determine the frequency of methylphenidate use (two items); and 4) a multiple-choice question regarding the respondents’ perceptions of the reasons for the increasing tendency toward methylphenidate use (one item).\nConcerning knowledge about methylphenidate, the section began with a question on whether individuals had heard of methylphenidate. This was followed by a question about the source of the respondents’ information (mass media, medical books, scientific journals, Internet, friends, or a physician’s prescription). In addition, there were 11 statements concerning students’ knowledge of methylphenidate, and possible responses for this section included “true”, “false”, and “I don’t know”. The knowledge score was calculated by allocating +2 for a correct answer, 0 for an incorrect answer, and +1 for “I don’t know” responses. A total of 22 points could be achieved if all questions were answered correctly. Higher scores were indicative of a greater level of knowledge.\nIn order to determine the frequency of methylphenidate use among the students, they were asked whether they had used methylphenidate, excluding those who had been prescribed the medication. If respondents replied in the affirmative to this question they answered a follow-up question relating to how often they used methylphenidate. Possible responses for this question included “used before last year”, “used in the past 12 months”, “used in the past 6 months”, and “used in the past 30 days”. In addition, a question relating to the preferred route of administration was included in the questionnaire.\nAdditional questions concerning the reasons for taking prescription stimulant medications were posed to all respondents. Students were given the option to choose any of the response categories including increasing wakefulness, peer pressure, curiosity, increasing self-confidence, weight loss, increasing concentration, and euphoric effects.\nData were presented as the mean ± standard deviation, or percentage when appropriate. All statistical analyses were performed with SPSS (SPSS Inc, Chicago, IL, USA) for Windows, Version 16.0, using a Chi-square test, Fisher’s exact test, and independent samples t-test. Spearman’s correlation coefficient was calculated to determine the correlation between quantitative variables (demographic items and total knowledge score). 
A P value <0.05 was considered statistically significant.", "Of 500 medical students, 310 completed the questionnaire, representing a response rate of 62%. The mean age of the respondents was 21.4 ± 2.07 years (range 18–28 years). Forty-three percent (n = 134) of the students were male and 56.7% (n = 176) were female. The students’ mean grade average was 15.4 ± 1.3 out of 20 (range 12.4–18.8). A list of the knowledge questions with the percentage of each response is provided in Table 1. Methylphenidate users’ mean knowledge score (total of correct and incorrect responses) was higher than that of nonusers (15.83 ± 3.14 vs 13.66 ± 3.10, Table 2, P = 0.008). Among the demographic items, age (r = 0.29; P = 0.001), gender (r = 0.22; P = 0.015), and school year (r = 0.28; P = 0.001) were positively correlated with the knowledge score. There was no significant correlation between other demographic items (grade average, marital status, and place of residence) and knowledge score (P > 0.05).\nData analysis demonstrated that 27 participants (8.7%) had taken methylphenidate at least once in their lifetime; two students were excluded from the study because they misused methylphenidate while under a physician’s order. Of these 27 individuals, 20 (74%) declared that they had taken methylphenidate within the last year and three (11%) within the last month. Ninety-seven participants (31.2%) knew someone among their classmates and friends who had misused methylphenidate. The frequency of methylphenidate use was significantly higher among men than among women (92% vs 8%, Table 2, P < 0.001). Furthermore, methylphenidate use was significantly higher among respondents with a grade average of 15 or less (74%) than among students with higher grade averages (Chi-square test, P < 0.001). Methylphenidate had previously been prescribed by physicians for three of the users (11%), and six students (22.2%) indicated that they were encouraged by their peers to take methylphenidate for its positive effects. The preferred route of administration was oral (88.8%), followed by nasal (3.7%) and injection (3.7%). Many of the respondents (21%) indicated that friends were the major source of information concerning methylphenidate, followed by scientific books and journals (17.6%), mass media (radio, television, and newspapers) (4.2%), and the Internet (3.4%). The data from respondents and methylphenidate users are provided in Table 2.\nRespondents believed that the most common motive for methylphenidate use was to increase concentration (41.7%). Furthermore, students declared that increasing wakefulness (17.3%), curiosity (16.7%), increasing energy levels (8.8%), increasing self-confidence (6.1%), peer pressure (5%), and weight loss (4.4%) were also common reasons for methylphenidate misuse.", "In the present study, the prevalence of methylphenidate misuse among a group of Iranian medical students was assessed along with their knowledge of methylphenidate. Data analysis demonstrated that less than 9% of medical students had used methylphenidate at least once in their lifetime. Of those who admitted to using the drug, approximately 75% had taken methylphenidate in the past year and 10% in the last 30 days. 
A wide range of methylphenidate use has been reported in the literature.13,17 Teter et al surveyed a large sample of public university students among whom 3% indicated nonmedical use of methylphenidate within the previous year.14 According to the Monitoring the Future (MTF) study, annual use of methylphenidate among college students was 4.2% and 3.9% in 2005 and 2006, respectively.3 In another study by Teter et al, approximately 25% of illicit users of prescription stimulants used methylphenidate.21 In contrast, Low and Gendaszek indicated that 35% of 150 undergraduates in a psychology class had used either methylphenidate or amphetamine at least once.16 In the present study, approximately 30% of respondents knew someone among their classmates and friends who had misused methylphenidate. The difference between the two calculated prevalences of methylphenidate use among the surveyed group of medical students (8.8% vs 31.2%) might suggest that students felt more comfortable about reporting methylphenidate use by their peers than by themselves, despite the anonymity of the questionnaires. However, this difference could also be due to overlap between the reports, ie, one particular methylphenidate user could be reported by himself/herself and his/her friends simultaneously. To the best of our knowledge, the present study is the first survey to determine the frequency of methylphenidate misuse among a group of medical students and to assess their knowledge of methylphenidate.\nThe overall findings from this study indicate a relatively low level of knowledge concerning methylphenidate, its physiological and psychological side effects, or legal consequences of illicit use among the medical students surveyed. However, the students’ knowledge score increased the longer they had been studying at the university. Assessing the knowledge of the university students with regard to methylphenidate has been neglected in most previous research, though DeSantis et al indicated that most users among undergraduates at a southeastern research university in the US possessed limited knowledge of prescription stimulants, including methylphenidate.13\nIn accordance with the MTF study, men reported higher past-year rates of methylphenidate misuse than women did.3 Hall et al22 and Simoni-Wastila23 found a comparable gender difference in the misuse of methylphenidate. This study presented a comparable finding, but the frequency of methylphenidate misuse among men was much higher than among women (92% vs 8%). In contrast, Teter et al reported that undergraduate men and women used illicit methylphenidate equally.14 In the present study, medical students with lower grade point averages were more likely to use methylphenidate illicitly. 
This finding is similar to those of the surveys carried out by McCabe et al.24,25 Furthermore, in the present study, methylphenidate use was reported most frequently among single medical students who resided on campus, but this was not statistically significant.\nIndividuals misuse methylphenidate and other stimulants to keep alert and improve concentration as they prepare for tests or complete term papers.26 Teter et al reported that the majority of methylphenidate users among surveyed students used methylphenidate to enhance their academic performance by increasing concentration and alertness.27 Two recent surveys of Iranian pharmacists and general practitioners demonstrated that the majority of the respondents believed that Ritalin was used to increase attention and concentration.28,29 Likewise, in this study, the majority of the respondents, including the methylphenidate users, highlighted that increased concentration was the most common motive for the use of methylphenidate. However, Barrett et al stated that 70% of methylphenidate users administered it for recreational reasons.10\nIn the present study, the most common routes of methylphenidate administration were oral (89%), intranasal (3.7%), and injection (3.7%). The predominantly oral methylphenidate misuse in this study is consistent with previous reports.10,15,21,30 Interestingly, Teter et al reported smoking as another route of methylphenidate administration, but this was not evident in the present research.21 In the present study, friends were the main source of information. This finding is in contrast to our previous reports concerning medical and female high school students’ knowledge of bird flu18 and HIV/AIDS31 in the same city, which revealed that the mass media were the main source of information.18,31 In previous reports, Iranian pharmacists and general practitioners stated that they obtained information on methylphenidate from the mass media and medical journals.28,29\nThis study has certain limitations. First, the validity of self-reported methylphenidate use among respondents depends on their willingness to reply truthfully on the questionnaire. Second, the sample in the present study was from a single university, thereby necessitating similar studies be conducted in other medical schools for comparison. Third, the present study did not explicitly address duration or frequency of methylphenidate use. Therefore, it is unknown whether nonprescription users took methylphenidate regularly or only occasionally. Fourth, the response rate in the present study was relatively low (62%). It is possible that the prevalence of methylphenidate misuse was underestimated if those who misused the drug chose not to complete the survey (eg, for fear of lack of anonymity). However, it should be highlighted that this study is the first to estimate the prevalence of methylphenidate use among medical students.\nAs methylphenidate is legally available to a small group of students (student suppliers) as ADHD medication, the authors believe that focusing on the small group of student suppliers may be an effective intervention approach to addressing the current problem of methylphenidate misuse among university students. Hence, the medical community, including clinicians and pharmacists, may consider reducing the monthly allocation of pills. In addition, clinicians should counsel students regarding the probable serious adverse effects to health if methylphenidate is misused, as well as the potential legal consequences of misuse. 
Furthermore, training health care professionals and medical students is of great importance in controlling the current trend. The findings of this study should be considered seriously by local health centers. Being aware of the scope and context of the problem could aid the development of prevention and monitoring programs for prescription drug misuse and diversion. The relatively low level of knowledge concerning methylphenidate among medical students in this study is primarily a reflection of insufficient academic courses in the medical school curriculum. It is recommended that knowledge of medical students about this topic be improved through access to textbooks, articles, seminars, and specific courses. Moreover, the addition of the topic of stimulant drugs, including methylphenidate, to pharmacology, toxicology, and psychiatry courses during medical education is advised." ]
[ null, "methods", null, null ]
[ "methylphenidate", "medical student", "prevalence", "Iran" ]
Pathological and ultrastructural analysis of surgical lung biopsies in patients with swine-origin influenza type A/H1N1 and acute respiratory failure.
21340209
Cases of H1N1 and other pulmonary infections evolve to acute respiratory failure and death when co-infections or lung injury predominate over the immune response, thus requiring early diagnosis to improve treatment.
BACKGROUND
Lung specimens underwent microbiologic analysis, and examination by optical and electron microscopy. Immunophenotyping was used to characterize macrophages, natural killer, T and B cells, and expression of cytokines and iNOS.
METHODS
The pathological features observed were necrotizing bronchiolitis, diffuse alveolar damage, alveolar hemorrhage and abnormal immune response. Ultrastructural analysis showed viral-like particles in all cases.
RESULTS
Viral-like particles can be successfully demonstrated in lung tissue by ultrastructural examination, without confirmation of the virus by RT-PCR on nasopharyngeal aspirates. Bronchioles and epithelium, rather than endothelium, are probably the primary target of infection, and diffuse alveolar damage the consequence of the effect of airways obliteration and dysfunction on innate immunity, suggesting that treatment should be focused on epithelial repair.
CONCLUSIONS
[ "Adult", "Aged, 80 and over", "Biopsy", "Bronchi", "Female", "Humans", "Influenza A Virus, H1N1 Subtype", "Influenza, Human", "Lung", "Male", "Middle Aged", "Respiratory Insufficiency", "Respiratory Mucosa" ]
3020331
INTRODUCTION
Recently, a novel swine‐origin influenza A (H1N1) virus with molecular features of North American and Eurasian swine, avian, and human influenza viruses1-4 has been associated with an outbreak of respiratory disease. According to the World Health Organization (WHO), between 25 April and 11 October 2009, 399,232 confirmed cases of H1N1 influenza virus and 4,735 deaths occurred throughout the world.5 Brazil reported 1,528 deaths up to 10 November 2009.6 Swine‐origin influenza A (H1N1) virus infection can cause severe acute respiratory failure (ARF), requiring admission to an intensive care unit (ICU) in 15–30% of previously healthy young to middle‐aged people.3,4,7,8 Death may occur when co‐infections or lung injury prevail over the immune response, resulting in a progressive worsening of lung function (low compliance and oxygenation). Early diagnosis and a complete understanding of the pathological features of the H1N1 virus are important to help to improve treatment and the prognosis of this lethal disease. Analysis of the lung tissue from an open lung biopsy (OLB) of these severe cases can help in understanding the pathogenesis of this severe and sometimes fatal development. Until now, no reports of OLB findings used to guide the treatment of patients with H1N1 pneumonitis have been published, although according to many authors OLB is safe and diagnostically useful in patients with ARF, enabling appropriate therapy.9-12 The pathogenesis of ARF associated with swine‐origin influenza virus (S‐OIV) infection in humans is unknown. The influenza virus triggers pulmonary inflammation owing to an infiltration of inflammatory cells and an immune response. Bronchial epithelial cells are the primary target and the principal host for the virus.13,14 Normally, influenza viruses are recognized and destroyed by innate immune mechanisms which involve macrophages, interferon (IFN) α, β and other cytokines, natural killer (NK) cells and complement. When influenza viruses escape from these early defense mechanisms, they are captured and eliminated by adaptive immune mechanisms, where T and B cells and their antigen‐specific effectors (cytotoxic T lymphocytes, cytokines such as IFNγ and antibodies) target the virus. Additionally, antigen‐specific memory cells (T and B cells) are involved in the prevention of the subsequent viral infection.14 Thus, pathological findings obtained by an OLB, coupled to ultrastructural and immunologic analysis, may have an impact on decisions about changes in treatment strategies employed for these critically ill patients, and also provide a greater understanding of the pathophysiology of S‐OIV infection. The objective of this study was to analyze pathologically and ultrastructurally S‐OIV lung infection and the pulmonary immune response in a series of five cases with OLB.
METHODS
[SUBTITLE] Patients and Collection of Specimens [SUBSECTION] We studied pathologically and ultrastructurally five patients suspected of having a pandemic S‐OIV virus who developed ARF requiring ventilatory support. Nasal swabs for RT‐PCR for H1N1 were collected from all patients. The OLBs indicated by the clinicians were carried out after receiving consent from the families. These patients had a severe evolution of the virus and more information about the physiopathology of the disease was required in order to provide adequate treatment. If no improvement of the respiratory status was seen in the patients with ARF after ≥5 days (defined as no decrease of the Lung Injury Score) an OLB was indicated.15 Lung tissue sections (4 µm thick), prepared from 10% formalin‐fixed, routinely processed, paraffin‐embedded blocks, were stained with hematoxylin‐eosin. The following methods of histochemical staining were carried out: Grocott's methenamine silver stain, Brown–Brenn, and Ziehl–Neelsen. The following pathological changes were analyzed: a) necrotizing bronchiolitis, b) alveolar collapse, c) dilatation of the airspaces, d) hyaline membrane, e) fibroplasia, f) squamous metaplasia, g) multinucleated cells, h) alveolar hemorrhage, i) acute inflammatory exudates, j) atypical pneumocytes. Pathological changes were graded, using two sections, according to a five‐point semiquantitative severity‐based scoring system as: 0 = normal lung parenchyma, 1 = changes in 1–25%, 2 = changes in 26–50%, 3 = changes in 51–75%, and 4 = changes in 76–100% of examined tissue. This semiquantitative analysis is currently routinely used in most studies of the department of pathology of the University of São Paulo Medical School.16,17 For immunohistochemistry, the avidin–biotin–peroxidase complex and streptavidin–biotin enzyme complex immunostaining methods were used with antibodies against: lymphocytes CD4 (clone: MO834, dilution 1:1000), CD8 (clone: M7103, dilution 1∶20), CD20 (clone: M755, dilution 1∶40), macrophages‐histiocytes CD68 (clone: M814, dilution 1∶30), mouse monoclonal antibodies from DAKO, Carpinteria, CA, USA; S100 (clone: Z311, dilution 1∶1000) rabbit polyclonal antibodies from DAKO; CD1a (clone: MCA1657, dilution 1: 200) mouse monoclonal antibodies from Serotec, Oxford, UK; natural killer, NK (clone: MS136P, dilution 1∶1000) mouse monoclonal antibodies from Neomarkers, Fremont, CA, USA; interleukin 4 (IL‐4) (dilution 1∶40), IL‐10 (dilution 1∶40) goat polyclonal antibodies from R&D Systems, Minneapolis, MN, USA; IFNγ (clone: MAB285, dilution 1∶30), mouse monoclonal antibodies from R&D Systems; tumor necrosis factor alpha (TNFα) (clone: AF210NA, dilution 1∶40) all mouse monoclonal antibodies from R&D Systems; inducible nitric oxide synthase (iNOS) (dilution 1∶500) polyclonal rabbit from Calbiochem, La Jolla, CA, USA. Immunohistochemical reactions were carried out in accordance with the manufacturer's instruction. Diaminobenzidine was used as the color substrate, and Meyer's hematoxylin was used for counterstaining. 
Cell immunophenotypes and immune expression of cells using the different methods of immunohistochemical staining were identified and graded according to a five‐point semiquantitative intensity‐based scoring system as: 0 = negative, 1 = positive in 1–25%, 2 = positive in 26–50%, 3 = positive in 51–75%, and 4 = positive in 76–100% of examined tissue.18 Small blocks (1 mm3) of lungs were fixed in 2% glutaraldehyde/2% paraformaldehyde in cacodylate buffer overnight, then fixed in 1% osmium tetroxide, dehydrated, and embedded in araldite. Ultrathin sections obtained from selected areas were double‐stained and examined in a Philips TECNAI 10 electron microscope at 80 kV. For each electron microscopy image (15/case), the following structural changes were analyzed: a) cytoplasmic swelling, b) degenerative changes, c) sloughing of necrotizing alveolar epithelial cell type I (AECI) and II (AECII), d) denudation of the epithelial basement membrane, e) hyaline membranes, f) alveolar septal collapse, g) viral particles such as tubuloreticular structures (TRS) and cylindrical confronting cisternae (CCC), h) multinucleated AECII. Ultrastructural findings were graded according to a five‐point semiquantitative severity‐based scoring system as: 0 = normal lung parenchyma, 1 = changes in 1–25%, 2 = changes in 26–50%, 3 = changes in 51–75%, and 4 = changes in 76–100% of examined tissue.16,17 We studied pathologically and ultrastructurally five patients suspected of having a pandemic S‐OIV virus who developed ARF requiring ventilatory support. Nasal swabs for RT‐PCR for H1N1 were collected from all patients. The OLBs indicated by the clinicians were carried out after receiving consent from the families. These patients had a severe evolution of the virus and more information about the physiopathology of the disease was required in order to provide adequate treatment. If no improvement of the respiratory status was seen in the patients with ARF after ≥5 days (defined as no decrease of the Lung Injury Score) an OLB was indicated.15 Lung tissue sections (4 µm thick), prepared from 10% formalin‐fixed, routinely processed, paraffin‐embedded blocks, were stained with hematoxylin‐eosin. The following methods of histochemical staining were carried out: Grocott's methenamine silver stain, Brown–Brenn, and Ziehl–Neelsen. The following pathological changes were analyzed: a) necrotizing bronchiolitis, b) alveolar collapse, c) dilatation of the airspaces, d) hyaline membrane, e) fibroplasia, f) squamous metaplasia, g) multinucleated cells, h) alveolar hemorrhage, i) acute inflammatory exudates, j) atypical pneumocytes. Pathological changes were graded, using two sections, according to a five‐point semiquantitative severity‐based scoring system as: 0 = normal lung parenchyma, 1 = changes in 1–25%, 2 = changes in 26–50%, 3 = changes in 51–75%, and 4 = changes in 76–100% of examined tissue. 
RESULTS
Patients

Five patients (two male, three female), mean age 48 years (range 35–81), were studied; only patient 4 had pre‐existing medical illnesses (Table 1) and a chest x‐ray abnormality at disease onset. All the patients presented with a 4–10 day (median 5 days) history of shortness of breath and flu‐like symptoms, followed by rapid clinical deterioration. They were transferred to the ICU for tracheal intubation and ventilation (range 8–25 days; median 17) and diagnosed as having ARF.15 All the patients received oseltamivir 75 mg twice a day by nasal enteral tube (range 4–14 days; median 10) and intravenous steroids (range 9–20 days; median 12). After these results were obtained, the dose was increased from 75 mg to 150 mg twice a day through the nasal enteral tube, in accordance with the Brazilian guidelines. The presence of the H1N1 virus was confirmed in all five patients (Table 1) by nasal swab or lung tissue RT‐PCR positivity, according to guidelines from the Centers for Disease Control and Prevention.19 Other microbiological investigations, including the isolation of other viruses, were negative. During the course of the disease in the ICU, Staphylococcus aureus was isolated from a blood culture (patients 2 and 3) and Klebsiella spp were identified in tracheal aspirate specimens (patient 1). Patients 1, 2 and 4 are alive; patients 3 and 5 died of respiratory failure, with concurrent congestive heart failure, hepatic encephalopathy, and acute renal failure.

Necrotizing Bronchiolitis, Collapsogenic Diffuse Alveolar Damage and Alveolar Hemorrhage

Figure 1 depicts the pathological findings in the surgical lung biopsy specimens. The main pathological features were necrotizing bronchiolitis, collapsogenic diffuse alveolar damage (DAD), and alveolar hemorrhage (Table 2). Pulmonary specimens from patients 3 and 5 presented more intense changes at optical microscopy.
The membranous and respiratory bronchioles were extensively compromised by epithelial necrosis, squamous metaplasia, and obliteration by fibroplasia (Figure 1A–F). The parenchyma was modified by extensive alveolar collapse, dilatation of the airspaces, alveolar hemorrhage, and sparse hyaline membrane formation (Figure 1G–I). There was interstitial thickening, with mild to moderate fibroplasia (Figure 1I), but a disproportionately sparse infiltrate of inflammatory cells, mainly histiocytes (including multinucleated forms), lymphocytes and megakaryocytes (Figure 1J–K). Atypical bronchiolar and alveolar epithelial cells (AECs) were seen in all five patients, although their distribution was focal (Figure 1J). These atypical forms included multinucleated giant cells with irregularly distributed nuclei (Figure 1K, L) or bronchiolar cells and AECs with large atypical nuclei, prominent eosinophilic nucleoli, and granular amphophilic cytoplasm (Figure 1M). However, distinct viral inclusions were not apparent.

Bronchial and Alveolar Epithelium Necrosis and Viral‐like Particles

The ultrastructural features were bronchial and alveolar epithelium necrosis, disruption of the alveolar epithelium/basement membrane unit, and the presence of viral‐like particles (Table 3). Patients 3 and 5 presented more prominent changes at the submicroscopic level. Cytoplasmic swelling, necrosis, and degenerative changes of the endoplasmic reticulum and other organelles were present in bronchial cells and AECs (Figure 2A–C). A large number of bronchiolar cells and AECs were detached from the basement membrane and showed apoptosis (Figure 2A, B). Lymphocytes also exhibited apoptosis. Sloughing of apoptotic bronchiolar cells and AECs, causing denudation of the epithelial basement membrane, was followed by deposition of hyaline membranes (Figure 2D). Ultrastructural evidence of alveolar collapse was also apparent as apposition of the alveolar septa (Figure 2E–G). The regenerating bronchiolar epithelium extended along the adjacent alveolar septa, showing cells with prominent surface microvilli, decreased or absent lamellar bodies, and considerable cytologic atypia (Figure 2H–L).
Increased myofibroblasts and collagen fibers were also present (Figure 2I). Multinucleated epithelial cells with prominent nucleoli were noted in most cases, although such cells were sparse (Figure 2K). Proliferating bronchiolar cells and AECs containing TRS and CCC, probably representing residual viral‐like particles, were distinguished in all cases (Figure 2M–R). TRS appeared as reticular aggregates of branching membranous tubules located within the cisternae of the endoplasmic reticulum (Figure 2M–O) or were compact (Figure 2Q, R). CCC were identified as elongated, slightly curved cylindrical structures (Figure 2P, Q), ring‐shaped structures (Figure 2R) or fused membranous lamellae, representing cisternae of the endoplasmic reticulum.

Deficient Innate and Adaptive Immune Response

Figure 3 depicts the immunological findings in the surgical lung biopsy specimens. The immunologic features were dominated by a decrease in the innate and adaptive immune response (Table 4). Patients 3 and 5 presented with immunologic impairment. In all patients, small aggregates of macrophages, CD4+ T‐helper cells, CD8+ T‐cytotoxic cells, CD20+ B‐cells, CD1a+ dendritic cells, S100+ dendritic cells and natural killer lymphocytes were present around vessels and bronchioles. Dendritic cells and TNFα were expressed sparsely in macrophages, AECs and endothelial cells, whereas IFNγ was expressed in small mononucleated cells in the lungs of patients with S‐OIV. There was very strong expression of IL‐4, IL‐10 and iNOS in small mononucleated cells.
ACKNOWLEDGEMENTS

This study was supported by the following Brazilian agencies: the National Council for Scientific and Technological Development [CNPq]; the Foundation for the Support of Research of the State of São Paulo [FAPESP]; and the Laboratories for Medical Research [LIMs], Hospital das Clinicas, University of São Paulo Medical School. We gratefully acknowledge the excellent technical assistance of Julia Maria L.L. Silvestre, Hélio Correa, Maria Cecília dos Santos Marcondes, Adão Caetano Sobrinho and Carla Pagliari. We are also grateful to Dr. Patrícia Rieken Macedo Rocco, Antônio Carlos Campos Pignatari and Andrea Sette for their help.
[ "Recently, a novel swine‐origin influenza A (H1N1) virus with molecular features of North American and Eurasian swine, avian, and human influenza viruses1-4 has been associated with an outbreak of respiratory disease. According to the World Health Organization (WHO), between 25 April and 11 October 2009, 399,232 confirmed cases of H1N1 influenza virus and 4,735 deaths occurred throughout the world.5 Brazil reported 1,528 deaths up to 10 November 2009.6 \nSwine‐origin influenza A (H1N1) virus infection can cause severe acute respiratory failure (ARF), requiring admission to an intensive care unit (ICU) in 15–30% of previously healthy young to middle‐aged people.3,4,7,8 Death may occur when co‐infections or lung injury prevail over the immune response, resulting in a progressive worsening of lung function (low compliance and oxygenation). Early diagnosis and a complete understanding of the pathological features of the H1N1 virus are important to help to improve treatment and the prognosis of this lethal disease. Analysis of the lung tissue from an open lung biopsy (OLB) of these severe cases can help in understanding the pathogenesis of this severe and sometimes fatal development. Until now, no reports of OLB findings used to guide the treatment of patients with H1N1 pneumonitis have been published, although according to many authors OLB is safe and diagnostically useful in patients with ARF, enabling appropriate therapy.9-12 \nThe pathogenesis of ARF associated with swine‐origin influenza virus (S‐OIV) infection in humans is unknown. The influenza virus triggers pulmonary inflammation owing to an infiltration of inflammatory cells and an immune response. Bronchial epithelial cells are the primary target and the principal host for the virus.13,14 Normally, influenza viruses are recognized and destroyed by innate immune mechanisms which involve macrophages, interferon (IFN) α, β and other cytokines, natural killer (NK) cells and complement. When influenza viruses escape from these early defense mechanisms, they are captured and eliminated by adaptive immune mechanisms, where T and B cells and their antigen‐specific effectors (cytotoxic T lymphocytes, cytokines such as IFNγ and antibodies) target the virus. Additionally, antigen‐specific memory cells (T and B cells) are involved in the prevention of the subsequent viral infection.14 \nThus, pathological findings obtained by an OLB, coupled to ultrastructural and immunologic analysis, may have an impact on decisions about changes in treatment strategies employed for these critically ill patients, and also provide a greater understanding of the pathophysiology of S‐OIV infection. The objective of this study was to analyze pathologically and ultrastructurally S‐OIV lung infection and the pulmonary immune response in a series of five cases with OLB.", "[SUBTITLE] Patients and Collection of Specimens [SUBSECTION] We studied pathologically and ultrastructurally five patients suspected of having a pandemic S‐OIV virus who developed ARF requiring ventilatory support. Nasal swabs for RT‐PCR for H1N1 were collected from all patients. The OLBs indicated by the clinicians were carried out after receiving consent from the families. These patients had a severe evolution of the virus and more information about the physiopathology of the disease was required in order to provide adequate treatment. 
If no improvement of the respiratory status was seen in the patients with ARF after ≥5 days (defined as no decrease of the Lung Injury Score) an OLB was indicated.15 \nLung tissue sections (4 µm thick), prepared from 10% formalin‐fixed, routinely processed, paraffin‐embedded blocks, were stained with hematoxylin‐eosin. The following methods of histochemical staining were carried out: Grocott's methenamine silver stain, Brown–Brenn, and Ziehl–Neelsen. The following pathological changes were analyzed: a) necrotizing bronchiolitis, b) alveolar collapse, c) dilatation of the airspaces, d) hyaline membrane, e) fibroplasia, f) squamous metaplasia, g) multinucleated cells, h) alveolar hemorrhage, i) acute inflammatory exudates, j) atypical pneumocytes. Pathological changes were graded, using two sections, according to a five‐point semiquantitative severity‐based scoring system as: 0 = normal lung parenchyma, 1 = changes in 1–25%, 2 = changes in 26–50%, 3 = changes in 51–75%, and 4 = changes in 76–100% of examined tissue. This semiquantitative analysis is currently routinely used in most studies of the department of pathology of the University of São Paulo Medical School.16,17 \nFor immunohistochemistry, the avidin–biotin–peroxidase complex and streptavidin–biotin enzyme complex immunostaining methods were used with antibodies against: lymphocytes CD4 (clone: MO834, dilution 1:1000), CD8 (clone: M7103, dilution 1∶20), CD20 (clone: M755, dilution 1∶40), macrophages‐histiocytes CD68 (clone: M814, dilution 1∶30), mouse monoclonal antibodies from DAKO, Carpinteria, CA, USA; S100 (clone: Z311, dilution 1∶1000) rabbit polyclonal antibodies from DAKO; CD1a (clone: MCA1657, dilution 1: 200) mouse monoclonal antibodies from Serotec, Oxford, UK; natural killer, NK (clone: MS136P, dilution 1∶1000) mouse monoclonal antibodies from Neomarkers, Fremont, CA, USA; interleukin 4 (IL‐4) (dilution 1∶40), IL‐10 (dilution 1∶40) goat polyclonal antibodies from R&D Systems, Minneapolis, MN, USA; IFNγ (clone: MAB285, dilution 1∶30), mouse monoclonal antibodies from R&D Systems; tumor necrosis factor alpha (TNFα) (clone: AF210NA, dilution 1∶40) all mouse monoclonal antibodies from R&D Systems; inducible nitric oxide synthase (iNOS) (dilution 1∶500) polyclonal rabbit from Calbiochem, La Jolla, CA, USA.\nImmunohistochemical reactions were carried out in accordance with the manufacturer's instruction. Diaminobenzidine was used as the color substrate, and Meyer's hematoxylin was used for counterstaining. Cell immunophenotypes and immune expression of cells using the different methods of immunohistochemical staining were identified and graded according to a five‐point semiquantitative intensity‐based scoring system as: 0 = negative, 1 = positive in 1–25%, 2 = positive in 26–50%, 3 = positive in 51–75%, and 4 = positive in 76–100% of examined tissue.18 \nSmall blocks (1 mm3) of lungs were fixed in 2% glutaraldehyde/2% paraformaldehyde in cacodylate buffer overnight, then fixed in 1% osmium tetroxide, dehydrated, and embedded in araldite. Ultrathin sections obtained from selected areas were double‐stained and examined in a Philips TECNAI 10 electron microscope at 80 kV. 
For each electron microscopy image (15/case), the following structural changes were analyzed: a) cytoplasmic swelling, b) degenerative changes, c) sloughing of necrotizing alveolar epithelial cell type I (AECI) and II (AECII), d) denudation of the epithelial basement membrane, e) hyaline membranes, f) alveolar septal collapse, g) viral particles such as tubuloreticular structures (TRS) and cylindrical confronting cisternae (CCC), h) multinucleated AECII. Ultrastructural findings were graded according to a five‐point semiquantitative severity‐based scoring system as: 0 = normal lung parenchyma, 1 = changes in 1–25%, 2 = changes in 26–50%, 3 = changes in 51–75%, and 4 = changes in 76–100% of examined tissue.16,17 \nWe studied pathologically and ultrastructurally five patients suspected of having a pandemic S‐OIV virus who developed ARF requiring ventilatory support. Nasal swabs for RT‐PCR for H1N1 were collected from all patients. The OLBs indicated by the clinicians were carried out after receiving consent from the families. These patients had a severe evolution of the virus and more information about the physiopathology of the disease was required in order to provide adequate treatment. If no improvement of the respiratory status was seen in the patients with ARF after ≥5 days (defined as no decrease of the Lung Injury Score) an OLB was indicated.15 \nLung tissue sections (4 µm thick), prepared from 10% formalin‐fixed, routinely processed, paraffin‐embedded blocks, were stained with hematoxylin‐eosin. The following methods of histochemical staining were carried out: Grocott's methenamine silver stain, Brown–Brenn, and Ziehl–Neelsen. The following pathological changes were analyzed: a) necrotizing bronchiolitis, b) alveolar collapse, c) dilatation of the airspaces, d) hyaline membrane, e) fibroplasia, f) squamous metaplasia, g) multinucleated cells, h) alveolar hemorrhage, i) acute inflammatory exudates, j) atypical pneumocytes. Pathological changes were graded, using two sections, according to a five‐point semiquantitative severity‐based scoring system as: 0 = normal lung parenchyma, 1 = changes in 1–25%, 2 = changes in 26–50%, 3 = changes in 51–75%, and 4 = changes in 76–100% of examined tissue. 
This semiquantitative analysis is currently routinely used in most studies of the department of pathology of the University of São Paulo Medical School.16,17 \nFor immunohistochemistry, the avidin–biotin–peroxidase complex and streptavidin–biotin enzyme complex immunostaining methods were used with antibodies against: lymphocytes CD4 (clone: MO834, dilution 1:1000), CD8 (clone: M7103, dilution 1∶20), CD20 (clone: M755, dilution 1∶40), macrophages‐histiocytes CD68 (clone: M814, dilution 1∶30), mouse monoclonal antibodies from DAKO, Carpinteria, CA, USA; S100 (clone: Z311, dilution 1∶1000) rabbit polyclonal antibodies from DAKO; CD1a (clone: MCA1657, dilution 1: 200) mouse monoclonal antibodies from Serotec, Oxford, UK; natural killer, NK (clone: MS136P, dilution 1∶1000) mouse monoclonal antibodies from Neomarkers, Fremont, CA, USA; interleukin 4 (IL‐4) (dilution 1∶40), IL‐10 (dilution 1∶40) goat polyclonal antibodies from R&D Systems, Minneapolis, MN, USA; IFNγ (clone: MAB285, dilution 1∶30), mouse monoclonal antibodies from R&D Systems; tumor necrosis factor alpha (TNFα) (clone: AF210NA, dilution 1∶40) all mouse monoclonal antibodies from R&D Systems; inducible nitric oxide synthase (iNOS) (dilution 1∶500) polyclonal rabbit from Calbiochem, La Jolla, CA, USA.\nImmunohistochemical reactions were carried out in accordance with the manufacturer's instruction. Diaminobenzidine was used as the color substrate, and Meyer's hematoxylin was used for counterstaining. Cell immunophenotypes and immune expression of cells using the different methods of immunohistochemical staining were identified and graded according to a five‐point semiquantitative intensity‐based scoring system as: 0 = negative, 1 = positive in 1–25%, 2 = positive in 26–50%, 3 = positive in 51–75%, and 4 = positive in 76–100% of examined tissue.18 \nSmall blocks (1 mm3) of lungs were fixed in 2% glutaraldehyde/2% paraformaldehyde in cacodylate buffer overnight, then fixed in 1% osmium tetroxide, dehydrated, and embedded in araldite. Ultrathin sections obtained from selected areas were double‐stained and examined in a Philips TECNAI 10 electron microscope at 80 kV. For each electron microscopy image (15/case), the following structural changes were analyzed: a) cytoplasmic swelling, b) degenerative changes, c) sloughing of necrotizing alveolar epithelial cell type I (AECI) and II (AECII), d) denudation of the epithelial basement membrane, e) hyaline membranes, f) alveolar septal collapse, g) viral particles such as tubuloreticular structures (TRS) and cylindrical confronting cisternae (CCC), h) multinucleated AECII. Ultrastructural findings were graded according to a five‐point semiquantitative severity‐based scoring system as: 0 = normal lung parenchyma, 1 = changes in 1–25%, 2 = changes in 26–50%, 3 = changes in 51–75%, and 4 = changes in 76–100% of examined tissue.16,17 ", "We studied pathologically and ultrastructurally five patients suspected of having a pandemic S‐OIV virus who developed ARF requiring ventilatory support. Nasal swabs for RT‐PCR for H1N1 were collected from all patients. The OLBs indicated by the clinicians were carried out after receiving consent from the families. These patients had a severe evolution of the virus and more information about the physiopathology of the disease was required in order to provide adequate treatment. 
If no improvement of the respiratory status was seen in the patients with ARF after ≥5 days (defined as no decrease of the Lung Injury Score) an OLB was indicated.15 \nLung tissue sections (4 µm thick), prepared from 10% formalin‐fixed, routinely processed, paraffin‐embedded blocks, were stained with hematoxylin‐eosin. The following methods of histochemical staining were carried out: Grocott's methenamine silver stain, Brown–Brenn, and Ziehl–Neelsen. The following pathological changes were analyzed: a) necrotizing bronchiolitis, b) alveolar collapse, c) dilatation of the airspaces, d) hyaline membrane, e) fibroplasia, f) squamous metaplasia, g) multinucleated cells, h) alveolar hemorrhage, i) acute inflammatory exudates, j) atypical pneumocytes. Pathological changes were graded, using two sections, according to a five‐point semiquantitative severity‐based scoring system as: 0 = normal lung parenchyma, 1 = changes in 1–25%, 2 = changes in 26–50%, 3 = changes in 51–75%, and 4 = changes in 76–100% of examined tissue. This semiquantitative analysis is currently routinely used in most studies of the department of pathology of the University of São Paulo Medical School.16,17 \nFor immunohistochemistry, the avidin–biotin–peroxidase complex and streptavidin–biotin enzyme complex immunostaining methods were used with antibodies against: lymphocytes CD4 (clone: MO834, dilution 1:1000), CD8 (clone: M7103, dilution 1∶20), CD20 (clone: M755, dilution 1∶40), macrophages‐histiocytes CD68 (clone: M814, dilution 1∶30), mouse monoclonal antibodies from DAKO, Carpinteria, CA, USA; S100 (clone: Z311, dilution 1∶1000) rabbit polyclonal antibodies from DAKO; CD1a (clone: MCA1657, dilution 1: 200) mouse monoclonal antibodies from Serotec, Oxford, UK; natural killer, NK (clone: MS136P, dilution 1∶1000) mouse monoclonal antibodies from Neomarkers, Fremont, CA, USA; interleukin 4 (IL‐4) (dilution 1∶40), IL‐10 (dilution 1∶40) goat polyclonal antibodies from R&D Systems, Minneapolis, MN, USA; IFNγ (clone: MAB285, dilution 1∶30), mouse monoclonal antibodies from R&D Systems; tumor necrosis factor alpha (TNFα) (clone: AF210NA, dilution 1∶40) all mouse monoclonal antibodies from R&D Systems; inducible nitric oxide synthase (iNOS) (dilution 1∶500) polyclonal rabbit from Calbiochem, La Jolla, CA, USA.\nImmunohistochemical reactions were carried out in accordance with the manufacturer's instruction. Diaminobenzidine was used as the color substrate, and Meyer's hematoxylin was used for counterstaining. Cell immunophenotypes and immune expression of cells using the different methods of immunohistochemical staining were identified and graded according to a five‐point semiquantitative intensity‐based scoring system as: 0 = negative, 1 = positive in 1–25%, 2 = positive in 26–50%, 3 = positive in 51–75%, and 4 = positive in 76–100% of examined tissue.18 \nSmall blocks (1 mm3) of lungs were fixed in 2% glutaraldehyde/2% paraformaldehyde in cacodylate buffer overnight, then fixed in 1% osmium tetroxide, dehydrated, and embedded in araldite. Ultrathin sections obtained from selected areas were double‐stained and examined in a Philips TECNAI 10 electron microscope at 80 kV. 
For each electron microscopy image (15/case), the following structural changes were analyzed: a) cytoplasmic swelling, b) degenerative changes, c) sloughing of necrotizing alveolar epithelial cell type I (AECI) and II (AECII), d) denudation of the epithelial basement membrane, e) hyaline membranes, f) alveolar septal collapse, g) viral particles such as tubuloreticular structures (TRS) and cylindrical confronting cisternae (CCC), h) multinucleated AECII. Ultrastructural findings were graded according to a five‐point semiquantitative severity‐based scoring system as: 0 = normal lung parenchyma, 1 = changes in 1–25%, 2 = changes in 26–50%, 3 = changes in 51–75%, and 4 = changes in 76–100% of examined tissue.16,17 ", "[SUBTITLE] Patients [SUBSECTION] Five patients (two male, three female) mean age 48 years (range 35–81) were studied; only patient No 4 had pre‐existing medical illnesses (Table 1) and chest x‐ray abnormality at disease onset. All the patients presented with a 4–10 days' (median 5 days) history of shortness of breath and flu‐like symptoms and rapid clinical deterioration. They were transferred to the ICU for tracheal intubation and ventilation (range 8–25 days; median 17) and diagnosed as having ARF.15 All the patients received 75 mg twice a day by nasal enteral tube of olsetamivir (range 4–14 days; median 10) and intravenous steroids (range 9–20 days; median 12). After obtaining these results the dose was changed from 75 mg twice a day to 150 mg twice a day through a nasal enteral tube, in accordance with the Brazilian guidelines. The presence of the H1N1 virus was confirmed in all five patients (Table 1) by nasal swab or lung tissue positivity of RT‐PCR according to guidelines from the Centers for Disease Control and Prevention.19 Other microbiological investigations, including the isolation of other viruses, were negative. During the evolution of disease in the patients in the ICU, Staphylococcus aureus was isolated from a blood culture (patients 2 and 3) and Klebsiella spp were identified in tracheal aspirate specimens (patient 1). Patients 1, 2 and 4 are alive, but patients 3 and 5 died of respiratory failure, with concurrent congestive heart failure, hepatic encephalopathy, and acute renal failure.\nFive patients (two male, three female) mean age 48 years (range 35–81) were studied; only patient No 4 had pre‐existing medical illnesses (Table 1) and chest x‐ray abnormality at disease onset. All the patients presented with a 4–10 days' (median 5 days) history of shortness of breath and flu‐like symptoms and rapid clinical deterioration. They were transferred to the ICU for tracheal intubation and ventilation (range 8–25 days; median 17) and diagnosed as having ARF.15 All the patients received 75 mg twice a day by nasal enteral tube of olsetamivir (range 4–14 days; median 10) and intravenous steroids (range 9–20 days; median 12). After obtaining these results the dose was changed from 75 mg twice a day to 150 mg twice a day through a nasal enteral tube, in accordance with the Brazilian guidelines. The presence of the H1N1 virus was confirmed in all five patients (Table 1) by nasal swab or lung tissue positivity of RT‐PCR according to guidelines from the Centers for Disease Control and Prevention.19 Other microbiological investigations, including the isolation of other viruses, were negative. 
During the evolution of disease in the patients in the ICU, Staphylococcus aureus was isolated from a blood culture (patients 2 and 3) and Klebsiella spp were identified in tracheal aspirate specimens (patient 1). Patients 1, 2 and 4 are alive, but patients 3 and 5 died of respiratory failure, with concurrent congestive heart failure, hepatic encephalopathy, and acute renal failure.\n[SUBTITLE] Necrotizing Bronchiolitis, Collapsogenic Diffuse Alveolar Damage and Alveolar Hemorrhage [SUBSECTION] Figure 1 depicts the pathological findings in the surgical lung biopsy specimens. The main pathological features were necrotizing bronchiolitis, clastogenic diffuse alveolar damage (DAD), and alveolar hemorrhage (Table 2). Pulmonary specimens from patients 3 and 5 presented more intense changes at optical microscopy. The membranous and respiratory bronchioles were extensively compromised by epithelial necrosis, squamous metaplasia, and obliteration by fibroplasia (Figure 1A–F). The parenchyma was modified by extensive alveolar collapse, dilatation of the airspaces, alveolar hemorrhage, and sparse hyaline membrane formation (Figure 1G–I). There was interstitial thickening, with mild to moderate fibroplasia (Figure 1I), but a disproportionately sparse infiltrate of inflammatory cells, mainly histiocytes, including multinucleated forms, lymphocytes and megakaryocytes (Figure 1J–K).\nAtypical bronchiolar and alveolar epithelial cells (AECs) were seen in all five patients, although the distribution was focal (Figure 1J). These atypical forms included multinucleated giant cells with irregularly distributed nuclei (Figure 1K, L) or bronchiolar and AECs with large atypical nuclei, prominent eosinophilic nucleoli, and granular amphophilic cytoplasm (Figure 1M). However, distinct viral inclusions were not apparent.\nFigure 1 depicts the pathological findings in the surgical lung biopsy specimens. The main pathological features were necrotizing bronchiolitis, clastogenic diffuse alveolar damage (DAD), and alveolar hemorrhage (Table 2). Pulmonary specimens from patients 3 and 5 presented more intense changes at optical microscopy. The membranous and respiratory bronchioles were extensively compromised by epithelial necrosis, squamous metaplasia, and obliteration by fibroplasia (Figure 1A–F). The parenchyma was modified by extensive alveolar collapse, dilatation of the airspaces, alveolar hemorrhage, and sparse hyaline membrane formation (Figure 1G–I). There was interstitial thickening, with mild to moderate fibroplasia (Figure 1I), but a disproportionately sparse infiltrate of inflammatory cells, mainly histiocytes, including multinucleated forms, lymphocytes and megakaryocytes (Figure 1J–K).\nAtypical bronchiolar and alveolar epithelial cells (AECs) were seen in all five patients, although the distribution was focal (Figure 1J). These atypical forms included multinucleated giant cells with irregularly distributed nuclei (Figure 1K, L) or bronchiolar and AECs with large atypical nuclei, prominent eosinophilic nucleoli, and granular amphophilic cytoplasm (Figure 1M). However, distinct viral inclusions were not apparent.\n[SUBTITLE] Bronchial and Alveolar Epithelium Necrosis and Viral‐like Particles [SUBSECTION] The ultrastructural features were represented by bronchial and alveolar epithelium necrosis, a destroyed alveolar epithelium/basement membrane unity and the presence of viral‐like particles (Table 3). Patients 3 and 5 presented more prominent changes at submicroscopic level. 
Cytoplasmic swelling, necrosis, and degenerative changes of the endoplasmic reticulum and other organelles were present in bronchial and AECs (Figure 2A–C). A large number of bronchiolar and AECs were detached from the basement membrane and were showing apoptosis (Figure 2A, B). Lymphocytes also exhibited apoptosis. Sloughing of apoptotic bronchiolar cells and AECs causing denudation of the epithelial basement membrane was followed by deposition of hyaline membranes (Figure 2D).\nUltrastructural evidence of alveolar collapse was also present by the apposition of the alveolar septa (Figure 2E–G). The regenerating bronchiolar epithelium extended along the adjacent alveolar septa showing features of cells with prominent surface microvilli with decreased or absent lamellar bodies and considerable cytologic atypia (Figure 2H–L). Increased myofibroblasts and collagen fibers were also present (Figure 2I). Multinucleated epithelial cells with prominent nucleoli were noted in most cases, although such cells were sparse (Figure 2K). The proliferating bronchiolar and AECs containing TRS and CCC, probably representing residual viral‐like particles, were distinguished in all cases (Figure 2M–R). TRS appeared as reticular aggregates of branching membranous tubules located within the cisternae of the endoplasmic reticulum (Figure 2M–O) or were compact (Figure 2Q, R). CCC were identified as elongated, slightly curved cylindrical structures (Figure 2P, Q), ring shaped (Figure 2R) or fused membranous lamellae, representing cisternae of endoplasmic reticulum.\nThe ultrastructural features were represented by bronchial and alveolar epithelium necrosis, a destroyed alveolar epithelium/basement membrane unity and the presence of viral‐like particles (Table 3). Patients 3 and 5 presented more prominent changes at submicroscopic level. Cytoplasmic swelling, necrosis, and degenerative changes of the endoplasmic reticulum and other organelles were present in bronchial and AECs (Figure 2A–C). A large number of bronchiolar and AECs were detached from the basement membrane and were showing apoptosis (Figure 2A, B). Lymphocytes also exhibited apoptosis. Sloughing of apoptotic bronchiolar cells and AECs causing denudation of the epithelial basement membrane was followed by deposition of hyaline membranes (Figure 2D).\nUltrastructural evidence of alveolar collapse was also present by the apposition of the alveolar septa (Figure 2E–G). The regenerating bronchiolar epithelium extended along the adjacent alveolar septa showing features of cells with prominent surface microvilli with decreased or absent lamellar bodies and considerable cytologic atypia (Figure 2H–L). Increased myofibroblasts and collagen fibers were also present (Figure 2I). Multinucleated epithelial cells with prominent nucleoli were noted in most cases, although such cells were sparse (Figure 2K). The proliferating bronchiolar and AECs containing TRS and CCC, probably representing residual viral‐like particles, were distinguished in all cases (Figure 2M–R). TRS appeared as reticular aggregates of branching membranous tubules located within the cisternae of the endoplasmic reticulum (Figure 2M–O) or were compact (Figure 2Q, R). 
CCC were identified as elongated, slightly curved cylindrical structures (Figure 2P, Q), ring shaped (Figure 2R) or fused membranous lamellae, representing cisternae of endoplasmic reticulum.\n[SUBTITLE] Deficient Innate and Adaptative Immune Response [SUBSECTION] Figure 3 depicts immunological findings in the surgical lung biopsy specimens. The immunologic features were dominated by a decrease in the innate and adaptive immune response (Table 4). Patients 3 and 5 presented with immunologic impairment.\nIn all patients small aggregates of macrophages, CD4+ T‐helper cells, CD8+ T‐cytotoxic cells, CD20+ B‐cells, CD1a+ dendritic cells, S100+ dendritic cells, natural killer lymphocytes were present around vessels and bronchioles. Dendritic cells and TNFα were expressed sparsely in macrophages, AECs and endothelial cells, whereas IFNγ was expressed in small mononucleated cells in lungs from patients with S‐OIV. There was a very strong expression of IL‐4, IL‐10 and iNOS in small mononucleated cells.\nFigure 3 depicts immunological findings in the surgical lung biopsy specimens. The immunologic features were dominated by a decrease in the innate and adaptive immune response (Table 4). Patients 3 and 5 presented with immunologic impairment.\nIn all patients small aggregates of macrophages, CD4+ T‐helper cells, CD8+ T‐cytotoxic cells, CD20+ B‐cells, CD1a+ dendritic cells, S100+ dendritic cells, natural killer lymphocytes were present around vessels and bronchioles. Dendritic cells and TNFα were expressed sparsely in macrophages, AECs and endothelial cells, whereas IFNγ was expressed in small mononucleated cells in lungs from patients with S‐OIV. There was a very strong expression of IL‐4, IL‐10 and iNOS in small mononucleated cells.", "Five patients (two male, three female) mean age 48 years (range 35–81) were studied; only patient No 4 had pre‐existing medical illnesses (Table 1) and chest x‐ray abnormality at disease onset. All the patients presented with a 4–10 days' (median 5 days) history of shortness of breath and flu‐like symptoms and rapid clinical deterioration. They were transferred to the ICU for tracheal intubation and ventilation (range 8–25 days; median 17) and diagnosed as having ARF.15 All the patients received 75 mg twice a day by nasal enteral tube of olsetamivir (range 4–14 days; median 10) and intravenous steroids (range 9–20 days; median 12). After obtaining these results the dose was changed from 75 mg twice a day to 150 mg twice a day through a nasal enteral tube, in accordance with the Brazilian guidelines. The presence of the H1N1 virus was confirmed in all five patients (Table 1) by nasal swab or lung tissue positivity of RT‐PCR according to guidelines from the Centers for Disease Control and Prevention.19 Other microbiological investigations, including the isolation of other viruses, were negative. During the evolution of disease in the patients in the ICU, Staphylococcus aureus was isolated from a blood culture (patients 2 and 3) and Klebsiella spp were identified in tracheal aspirate specimens (patient 1). Patients 1, 2 and 4 are alive, but patients 3 and 5 died of respiratory failure, with concurrent congestive heart failure, hepatic encephalopathy, and acute renal failure.", "Figure 1 depicts the pathological findings in the surgical lung biopsy specimens. The main pathological features were necrotizing bronchiolitis, clastogenic diffuse alveolar damage (DAD), and alveolar hemorrhage (Table 2). 
Pulmonary specimens from patients 3 and 5 presented more intense changes at optical microscopy. The membranous and respiratory bronchioles were extensively compromised by epithelial necrosis, squamous metaplasia, and obliteration by fibroplasia (Figure 1A–F). The parenchyma was modified by extensive alveolar collapse, dilatation of the airspaces, alveolar hemorrhage, and sparse hyaline membrane formation (Figure 1G–I). There was interstitial thickening, with mild to moderate fibroplasia (Figure 1I), but a disproportionately sparse infiltrate of inflammatory cells, mainly histiocytes, including multinucleated forms, lymphocytes and megakaryocytes (Figure 1J–K).\nAtypical bronchiolar and alveolar epithelial cells (AECs) were seen in all five patients, although the distribution was focal (Figure 1J). These atypical forms included multinucleated giant cells with irregularly distributed nuclei (Figure 1K, L) or bronchiolar and AECs with large atypical nuclei, prominent eosinophilic nucleoli, and granular amphophilic cytoplasm (Figure 1M). However, distinct viral inclusions were not apparent.", "The ultrastructural features were represented by bronchial and alveolar epithelium necrosis, a destroyed alveolar epithelium/basement membrane unity and the presence of viral‐like particles (Table 3). Patients 3 and 5 presented more prominent changes at submicroscopic level. Cytoplasmic swelling, necrosis, and degenerative changes of the endoplasmic reticulum and other organelles were present in bronchial and AECs (Figure 2A–C). A large number of bronchiolar and AECs were detached from the basement membrane and were showing apoptosis (Figure 2A, B). Lymphocytes also exhibited apoptosis. Sloughing of apoptotic bronchiolar cells and AECs causing denudation of the epithelial basement membrane was followed by deposition of hyaline membranes (Figure 2D).\nUltrastructural evidence of alveolar collapse was also present by the apposition of the alveolar septa (Figure 2E–G). The regenerating bronchiolar epithelium extended along the adjacent alveolar septa showing features of cells with prominent surface microvilli with decreased or absent lamellar bodies and considerable cytologic atypia (Figure 2H–L). Increased myofibroblasts and collagen fibers were also present (Figure 2I). Multinucleated epithelial cells with prominent nucleoli were noted in most cases, although such cells were sparse (Figure 2K). The proliferating bronchiolar and AECs containing TRS and CCC, probably representing residual viral‐like particles, were distinguished in all cases (Figure 2M–R). TRS appeared as reticular aggregates of branching membranous tubules located within the cisternae of the endoplasmic reticulum (Figure 2M–O) or were compact (Figure 2Q, R). CCC were identified as elongated, slightly curved cylindrical structures (Figure 2P, Q), ring shaped (Figure 2R) or fused membranous lamellae, representing cisternae of endoplasmic reticulum.", "Figure 3 depicts immunological findings in the surgical lung biopsy specimens. The immunologic features were dominated by a decrease in the innate and adaptive immune response (Table 4). Patients 3 and 5 presented with immunologic impairment.\nIn all patients small aggregates of macrophages, CD4+ T‐helper cells, CD8+ T‐cytotoxic cells, CD20+ B‐cells, CD1a+ dendritic cells, S100+ dendritic cells, natural killer lymphocytes were present around vessels and bronchioles. 
Dendritic cells and TNFα were expressed sparsely in macrophages, AECs and endothelial cells, whereas IFNγ was expressed in small mononucleated cells in lungs from patients with S‐OIV. There was a very strong expression of IL‐4, IL‐10 and iNOS in small mononucleated cells.", "This case series documents for the first time the pathological and ultrastructural findings of lung tissue from five patients admitted to the ICU with ARF and S‐OIV infection who were submitted to OLB.\nS‐OIV (H1N1) infection causes an acute respiratory illness that was first identified in Mexico; at present 399,232 cases and 4,735 deaths have been registered, affecting more than 179 countries.2,5 Our patients, most of them previously healthy, had an atypical influenza‐like illness that progressed during a period of 5–7 days.\nThe two patients who died showed more extensive pathological involvement of the disease at OLB. Most of our patients were young to middle‐aged and had previously been healthy. Increased risk for severe S‐OIV illness is found in young children, the 10–19 age group, patients older than 65 years, pregnant women, obese people and those with comorbidities.1,7,20 Fifteen to thirty per cent of patients with H1N1 infection required ICU admission. Mortality among the patients who required mechanical ventilation was around 58%.7 \nIn our case series the OLB findings showed that the lung damage was most likely due to infection by the influenza virus. The main pathological finding revealed necrotizing bronchiolitis and DAD, respiratory epithelial cells probably being the primary target of the infection. The extensive destruction of the respiratory epithelium and AECs, together with dysfunction of the innate and adaptive immune response, led to DAD. As previously reported, possible mechanisms of damage include direct injury to the respiratory and alveolar epithelium exposing the basement membrane and leading to alveolar collapse by loss of surfactant,13,14,21 with a secondary cytokine storm.22 This is followed by exudation of macromolecules from the circulation, which finally form hyaline membranes. Activation of the cytokines is part of the immune reaction aiming to eradicate the virus. In this study, the systemic IFNγ and TNFα cytokine activation probably resulted in reactive hemophagocytic syndrome in the bronchiole‐associated lymphoid tissue and possibly also mediated the epithelial necrosis.23 A mild inflammatory infiltration is most often seen in viral pneumonias; this has been explained by a cytokine‐mediated blockade of lymphocytopoiesis and also by blockade of release from the bone marrow.24 \nIn our cases, expression in the lung of IFNγ by small mononucleated cells and TNFα by macrophages and AECs was low. This finding may be supported by Kim and colleagues,25 who described, in S‐OIV infection, maximal TNFα expression in the lungs of infected pigs during the first days of infection followed by a gradual decrease. Conversely, we found a very strong expression of IL‐4, IL‐10 and iNOS by macrophages. The sparse inflammatory and immune reaction found in our samples, which involves targeting of the virus by NK cells, T and B lymphocytes, CD8+ cytotoxic T lymphocytes, as well as CD1a and S100 cells, may be due to a combination of lymphoid tissue necrosis and apoptosis and exhaustion of lymphoid proliferation in response to the cytokine overdrive. In addition, the high IL‐10 expression associated with its anti‐inflammatory action may explain the low degree of inflammation observed in our cases.
Taken together, our results suggest that in S‐OIV infection, altered innate and adaptive immune responses may lead to incomplete virus eradication in the primary target of the infection and, consequently, an imbalance between inflammation and repair, resulting in bronchiolar obliteration and DAD.\nDAD is likely to be a consequence of bronchiolar obstruction and consequent hypoxia rather than direct viral invasion. It is a severe pattern of lung injury and could be secondary to various pulmonary and extrapulmonary insults.26 In this series of cases we found DAD in which alveolar collapse was prominent, differing from classic DAD found in ARF or secondary to other pulmonary and extrapulmonary insults. This finding may have important implications for the ventilation strategy of the patients.27 In addition, the presence of intra‐alveolar hemorrhage may suggest virus‐associated hemophagocytic syndrome.23 \nIn our current series, pulmonary ultrastructural analysis was important to obtain an understanding of the pathophysiology of this new disease. First, we demonstrated apoptosis and necrosis in the bronchiolar epithelium together with viral‐like particles, thus suggesting the bronchiolar epithelium as the primary target of the virus infection. Second, we documented the submicroscopic pattern of a clastogenic DAD in S‐OIV infection. Third, we found indirect evidence of virus infection in alveolar and bronchiolar epithelial cells represented by the TRS and CCC. These submicroscopic structures were demonstrated ultrastructurally in the lung tissue of all the patients and their presence suggests an inactivation of the virus by oseltamivir treatment or an altered innate immune response of these patients. They appeared mainly in respiratory cells and AECs and have previously been described in a variety of cell types.28,29 Usually, they occur in endothelial cells and lymphocytes from patients with autoimmune diseases and viral infections.30 Patients with acquired immunodeficiency syndrome present TRS and CCC in these same cells.31 The mechanism of TRS and CCC production in vivo is not definitely established. Nevertheless, clinical and experimental studies have shown that the presence of both structures in these diseases is directly associated with an increase of IFNα and IFNβ but not of IFNγ.29 \nOne theory to explain the nature and pathogenesis of TRS and CCC suggests that these structures are incomplete viral particles.30 In our study, these viral‐like particles were noted mainly in the respiratory epithelial cells, but not in the other cell types within the lung. These observations reinforce the hypothesis that the primary target of S‐OIV infection is probably the bronchiolar epithelium. The atypical morphology of the bronchiolar and alveolar epithelial cells was probably related to viral cytopathic effects or reactive changes. In fact, the presence of multinucleated epithelial cells is not exclusive to S‐OIV, and is seen in pneumonia caused by the family of Paramyxoviridae, including parainfluenza viruses, measles, mumps, respiratory syncytial virus and, perhaps, metapneumovirus.31 Although multinucleated cells were seen in our cases, these probably reflect non‐specific secondary changes.\nWe describe a case series of five patients with influenza‐like illness with pneumonia and ensuing ARF who underwent OLB, with the diagnosis of S‐OIV infection subsequently confirmed by RT‐PCR testing. This report has some limitations.
First, this study may not validate the importance of OLB in this population; however, it did provide information about this new disease. Second, it is difficult to compare our findings with those of others because, to our knowledge, no studies reporting OLB in patients with S‐OIV have been published. Although there are already many autopsy series of patients with H1N1 that can be used for comparison, the pathological findings at autopsy are modified mainly by the presence of associated co‐infections and mechanical ventilation.32-39 \nIn summary, we have presented the pulmonary pathology in a confirmed and well‐defined series of cases of S‐OIV infection associated with ARF. The pathological features, in addition to necrotizing bronchiolitis and DAD, included the presence of multinucleated cells and intra‐alveolar fibrin exudates (organizing pneumonia‐like lesions). Although each of these features is non‐specific, their combined occurrence, together with positive serologic, microbiologic, and immunologic investigations and/or ultrastructural tissue examination, enables the diagnosis of S‐OIV infection to be confirmed, and is particularly useful in clinically suspicious cases that do not fulfill the WHO criteria or in clinically inapparent cases.\nWe have shown that viral‐like particles can be successfully demonstrated in lung tissue by ultrastructural examination, highlighting the importance of OLB, particularly in those patients without confirmation of the virus. We also showed that the bronchioles and epithelium, rather than the endothelium, are probably the primary target of infection, and that DAD is the consequence of airway obliteration and dysfunction of innate immunity, suggesting that treatment should be focused on epithelium repair.", "This study was supported by the following Brazilian agencies: the National Council for Scientific and Technological Development [CNPq]; Foundation for the Support of Research of the State of São Paulo [FAPESP]; and the Laboratories for Medical Research [LIMs], Hospital das Clinicas, University of São Paulo Medical School.\nWe gratefully acknowledge the excellent technical assistance of Julia Maria L.L. Silvestre, Hélio Correa, Maria Cecília dos Santos Marcondes, Adão Caetano Sobrinho and Carla Pagliari. We are also grateful to Dr. Patrícia Rieken Macedo Rocco, Antônio Carlos Campos Pignatari, and Andrea Sette for their help." ]
[ "intro", "methods", null, "results", null, null, null, null, "discussion", null ]
[ "Open lung biopsy", "Acute lung injury", "Diffuse alveolar damage", "Virus", "Electron microscopy", "Innate immunity" ]
Influence of preoperative serum N-terminal pro-brain type natriuretic peptide on the postoperative outcome and survival rates of coronary artery bypass patients.
21340210
The N-terminal fragment of pro-brain type natriuretic peptide (NT-proBNP) is an established biomarker for cardiac failure.
BACKGROUND
In 819 patients undergoing isolated CABG surgery, preoperative serum NT-proBNP levels were measured. NT-proBNP was correlated with various postoperative outcome parameters and with survival after a median follow-up time of 18 (0.5-44) months. Risk factors for mortality were identified using the χ² test, the Mann-Whitney test, and Cox regression.
METHODS
NT-proBNP levels > 430 ng/ml and > 502 ng/ml predicted hospital and overall mortality (p < 0.05), with an incidence of 1.6% and 4%, respectively. Kaplan-Meier analysis revealed decreased survival rates in patients with NT-proBNP > 502 ng/ml (p = 0.001). Age, preoperative serum creatinine, diabetes, chronic obstructive pulmonary disease, low left ventricular ejection fraction and NT-proBNP levels > 502 ng/ml were identified as risk factors for overall mortality. Multivariate Cox regression analysis, including the known factors influencing NT-proBNP levels, identified NT-proBNP as an independent risk factor for mortality (OR = 3.079, CI = 1.149-8.247, p = 0.025). Preoperative NT-proBNP levels > 502 ng/ml were associated with increased ventilation time (p = 0.005), longer intensive care unit stay (p = 0.001), higher incidence of postoperative hemofiltration (p = 0.001), use of intra-aortic balloon pump (p < 0.001), and postoperative atrial fibrillation (p = 0.031).
RESULTS
Preoperative NT-proBNP levels > 502 ng/ml predict mid-term mortality after isolated CABG and are associated with significantly higher hospital mortality and perioperative complications.
CONCLUSION
[ "Adult", "Aged", "Aged, 80 and over", "Biomarkers", "Coronary Artery Bypass", "Coronary Artery Disease", "Epidemiologic Methods", "Female", "Heart Failure", "Hospital Mortality", "Humans", "Male", "Middle Aged", "Natriuretic Peptide, Brain", "Peptide Fragments", "Postoperative Complications", "Postoperative Period", "Preoperative Care", "Risk Factors" ]
3020332
INTRODUCTION
Brain type natriuretic peptide (BNP) is primarily produced by cardiac myocytes. The physiological effects of BNP include peripheral vasodilatation and inhibition of renin–angiotensin production.1-3 The precursor peptide proBNP is split into the active hormone BNP and the N-terminal fragment of proBNP (NT-proBNP). Both BNP and NT-proBNP are established markers of cardiac failure. However, other pathologies such as exacerbated chronic obstructive pulmonary disease, acute coronary syndromes, atrial fibrillation, and myocarditis can cause elevated BNP levels. Additionally, higher NT-proBNP levels are associated with female gender, impaired renal function, and older age. Obesity has been shown to decrease NT-proBNP levels.2 Increased BNP levels are a prognostic marker associated with higher mortality in patients with myocardial infarction, cardiogenic shock, and pulmonary embolism.4-6 In patients with coronary artery disease, increased BNP levels are associated with an increased rate of myocardial infarction and cardiovascular death during mid-term follow-up.7 We aimed to investigate whether preoperative serum NT-proBNP levels are associated with hospital and mid-term mortality and with postoperative outcome variables in patients undergoing isolated coronary artery bypass grafting (CABG).
PATIENTS AND METHODS
Data used in the analysis were collected from the institutional database at the Department of Cardiac Surgery, Innsbruck Medical University, which contained data collected using the data collection form of the Society of Thoracic Surgeons (Adult Cardiac Surgery Database Data Collection form Version 2.25.1). Preoperative NT-proBNP was measured routinely in all patients on the day of admission before surgery. The data of 819 patients undergoing isolated CABG were analysed. Patients included in the study were consecutive cases with CABG seen at our department between 2001 and 2007. Patients with concomitant procedures (valve surgery, aortic surgery, etc) were excluded. Patients with missing preoperative NT-proBNP values were also excluded. Patients were followed up by request of the national death registry (Statistik Austria). [SUBTITLE] Definitions [SUBSECTION] Primary end points of the study included: Hospital mortality, defined as the incidence of death occurring during admission to hospital for surgery or up to 30 days after CABG. Overall mortality, defined as hospital deaths and all deaths during the follow-up time. Secondary end points of the study were: Carotid artery disease, defined as the presence of carotid artery stenosis of ≥50%, or carotid artery occlusion, or post-carotid endarterectomy. The EuroSCORE, used for perioperative risk stratification as described by Nashef and coworkers.8 Hypertension, diagnosed if current blood pressure was >140 mmHg systolic or >90 mmHg diastolic, or if the patient was currently receiving antihypertensive drugs, or if the patient had a history of hypertension. Stroke, defined as any neurologic impairment of motor, sensory or cognitive function that persisted for >24 h, or was associated with death, and that could not be explained by other neurologic etiologies (ie, postoperative delirium, dementia, head trauma). History of neurologic events, defined as a preoperative history of transient ischemic attack or stroke. Chronic obstructive pulmonary disease (COPD), defined by long-term use of bronchodilators or steroids for lung disease and forced expiratory volume in one second (FEV1) <80% of the predicted value in preoperative spirometry, which was performed routinely at our clinic. Peripheral vascular disease, defined as the presence of intermittent claudication, or previous or planned intervention on the abdominal aorta or limb arteries. Ventilation time, defined as the duration of ventilation needed postoperatively, including potential need for additional ventilation after re-intubation. Intensive care unit (ICU) stay, defined as the duration of ICU stay needed after the operation, including potential need for additional ICU stay after readmission. Perioperative myocardial infarction, defined as the development of new persistent ST-segment, T- or Q-wave changes in electrocardiogram (ECG), new left bundle branch block, and/or troponin T elevation above the 5×99th centile of the upper reference limit and/or creatine kinase (CK)-MB/CK ratio of >10%. Need for intra-aortic balloon pump (IABP), defined as any use of an IABP after establishing cardiopulmonary bypass or in the postoperative period. Postoperative atrial fibrillation, defined as any new episode of atrial fibrillation documented by online monitoring or on ECG during the postoperative period. Reoperation for bleeding, defined as any incidence of reoperation needed for bleeding complications. [SUBTITLE] Statistical Analysis [SUBSECTION] Continuous variables are shown as median and range, categorical variables as number and percentage. The 1-, 2-, and 3-year survival rates were calculated using a life table. We calculated the best cut-off values of NT-proBNP to predict hospital mortality or overall mortality by using receiver operating characteristic (ROC) curves. The comparison between categorical variables and hospital mortality, overall death, or NT-proBNP >502 ng/l was performed using χ2 analysis. The comparison between continuous variables and hospital mortality, overall death, or NT-proBNP >502 ng/l was performed using a Mann–Whitney test. Survival curves were generated using Kaplan–Meier estimates. Differences in the survival rate were calculated using the log rank test. For multivariate analysis we used a Cox regression model. As covariates the factors commonly known to influence NT-proBNP levels were entered: gender, age (we chose an age >67 years, which was the median of our cohort), body mass index (calculated as body weight [kg]/height2 [m2]; we chose a BMI >27 kg/m2, which was the median of our cohort), and serum creatinine levels above the upper reference limit of 1.2 mg/dl. A p value <0.05 was regarded as statistically significant. SPSS™ 15.0 for Windows (SPSS Inc, Chicago, USA) statistical software was used.
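A note on the cut-off and covariate handling described above: the thresholds were derived from ROC curves and the Cox covariates were dichotomised at cohort medians or reference limits. The sketch below illustrates how such an analysis could be reproduced in Python; it is a minimal illustration rather than the authors' SPSS workflow. The input file cabg_cohort.csv and the column names (nt_probnp, died_overall, age, weight_kg, height_m, creatinine) are hypothetical, and the Youden index is used as one common criterion for a "best" cut-off, since the paper does not state which optimality criterion was applied.

```python
# Hedged sketch: ROC-based cut-off selection and covariate construction.
# The file name and column names (cabg_cohort.csv, nt_probnp, died_overall,
# age, weight_kg, height_m, creatinine) are hypothetical; the original
# analysis was performed in SPSS.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve, roc_auc_score


def youden_cutoff(marker: pd.Series, event: pd.Series) -> float:
    """Return the marker threshold maximising sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(event, marker)
    return float(thresholds[np.argmax(tpr - fpr)])


def add_dichotomised_covariates(df: pd.DataFrame) -> pd.DataFrame:
    """Add the binary covariates described in the statistical analysis section."""
    out = df.copy()
    out["bmi"] = out["weight_kg"] / out["height_m"] ** 2         # BMI = weight [kg] / height^2 [m^2]
    out["age_gt_67"] = (out["age"] > 67).astype(int)              # cohort median age
    out["bmi_gt_27"] = (out["bmi"] > 27).astype(int)              # cohort median BMI
    out["creat_gt_1_2"] = (out["creatinine"] > 1.2).astype(int)   # upper reference limit, mg/dl
    return out


if __name__ == "__main__":
    df = pd.read_csv("cabg_cohort.csv")                           # hypothetical per-patient table
    cutoff = youden_cutoff(df["nt_probnp"], df["died_overall"])   # died_overall assumed 0/1
    auc = roc_auc_score(df["died_overall"], df["nt_probnp"])
    print(f"AUC = {auc:.3f}, Youden-optimal NT-proBNP cut-off = {cutoff:.0f} ng/l")
    df = add_dichotomised_covariates(df)
    df["ntprobnp_high"] = (df["nt_probnp"] > cutoff).astype(int)
```

Applying the same function to a hospital-mortality event column instead of overall mortality would presumably yield the separate 430 ng/l threshold reported below; the exact values depend entirely on the underlying data.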
RESULTS
After a median follow-up time of 18 (0.5–44) months, 33/819 (4%) patients died, including 13/819 (1.6%) hospital deaths. The 1-, 2-, and 3-year survival rates (including hospital mortality) were 97%, 93%, and 90%, respectively. Analysis of the ROC curves showed that a serum NT-proBNP cut-off level of 430 ng/l best predicted hospital mortality and a cut-off level of 502 ng/l best predicted overall mortality (Figure 1). Univariate analysis showed that age, preoperative serum creatinine, peripheral vascular disease, and NT-proBNP levels >430 ng/l were significantly associated with hospital mortality. Age, preoperative serum creatinine, diabetes mellitus, history of cerebrovascular accident, chronic obstructive pulmonary disease, left ventricular ejection fraction, and NT-proBNP levels >502 ng/l were significantly associated with overall mortality (Tables 1 and 2). The Kaplan–Meier analysis confirmed a significantly decreased survival rate of patients with NT-proBNP levels >502 ng/l (n = 306) compared with NT-proBNP levels ≤502 ng/l (n = 513) (p = 0.001, Figure 2). The 1-, 2-, and 3-year survival rates (including hospital mortality) were 98%, 96%, and 92%, respectively, if NT-proBNP levels were ≤502 ng/l vs. 94%, 87%, and 87%, respectively, if NT-proBNP levels were >502 ng/l. [SUBTITLE] Mortality According to Quartiles of Serum NT-proBNP levels [SUBSECTION] The incidence of death during follow-up was significantly higher in the highest (18/205) than in the lowest NT-proBNP quartile (1/203) (p<0.001, Table 3). Cox regression analysis of the different quartiles of NT-proBNP also showed significant differences (p<0.001, Figure 3). Multivariate Cox regression analysis (covariates: gender, age >67 years, BMI >27 kg/m2, serum creatinine >1.2 mg/dl) revealed NT-proBNP as an independent risk factor for mid-term survival (p = 0.025, OR = 3.079, CI = 1.149–8.247). [SUBTITLE] Association Between NT-proBNP Levels and Perioperative Outcome [SUBSECTION] Patients with NT-proBNP levels >502 ng/l had more comorbidities and consequently a higher EuroSCORE than those with NT-proBNP levels ≤502 ng/l. Postoperatively those patients had a significantly longer ventilation time (p = 0.005), longer ICU stay (p = 0.001), a higher rate of renal failure requiring hemofiltration (p = 0.001), a higher rate of IABP use (p<0.001), and a higher rate of postoperative atrial fibrillation (p = 0.031) than patients with NT-proBNP levels ≤502 ng/l (Table 4).
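The survival comparisons reported above (Kaplan–Meier curves split at 502 ng/l, the log-rank test, and the multivariate Cox model) can be sketched as follows. This is a hedged illustration rather than the original analysis: it relies on the lifelines package and on assumed column names (followup_months for follow-up in months, died_overall as a 0/1 event indicator, male as a 0/1 gender indicator) carried over from the previous sketch.

```python
# Hedged sketch: Kaplan-Meier curves, log-rank test and multivariate Cox
# regression mirroring the analyses reported above.  lifelines is one common
# Python choice; the original work used SPSS.  Column names are assumptions.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cabg_cohort.csv")                          # hypothetical per-patient table

# Dichotomised variables as described in the statistical analysis section
df["ntprobnp_high"] = (df["nt_probnp"] > 502).astype(int)    # overall-mortality cut-off
df["age_gt_67"] = (df["age"] > 67).astype(int)               # cohort median age
df["bmi_gt_27"] = (df["weight_kg"] / df["height_m"] ** 2 > 27).astype(int)
df["creat_gt_1_2"] = (df["creatinine"] > 1.2).astype(int)    # upper reference limit, mg/dl

high = df["ntprobnp_high"] == 1

# Kaplan-Meier survival at 12, 24 and 36 months for each NT-proBNP group
kmf = KaplanMeierFitter()
for label, mask in [("NT-proBNP <= 502 ng/l", ~high), ("NT-proBNP > 502 ng/l", high)]:
    kmf.fit(df.loc[mask, "followup_months"], df.loc[mask, "died_overall"], label=label)
    print(label, kmf.survival_function_at_times([12, 24, 36]).round(2).tolist())

# Log-rank test between the two groups
lr = logrank_test(
    df.loc[~high, "followup_months"], df.loc[high, "followup_months"],
    event_observed_A=df.loc[~high, "died_overall"],
    event_observed_B=df.loc[high, "died_overall"],
)
print("log-rank p =", lr.p_value)

# Multivariate Cox model; 'male' is an assumed 0/1 gender indicator
cph = CoxPHFitter()
cph.fit(
    df[["followup_months", "died_overall", "ntprobnp_high",
        "male", "age_gt_67", "bmi_gt_27", "creat_gt_1_2"]],
    duration_col="followup_months", event_col="died_overall",
)
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%", "p"]])
```

In such a model the exp(coef) estimate for ntprobnp_high plays the role of the 3.079 (95% CI 1.149–8.247) figure quoted above, although the exact value naturally depends on the underlying data.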
null
null
[ "Definitions", "Statistical Analysis", "Mortality According to Quartiles of Serum NT-proBNP levels", "Association Between NT-proBNP Levels and Perioperative Outcome", "NT-proBNP and Hospital Mortality", "NT-proBNP and Mid-term Survival Rates", "NT-proBNP and Postoperative Complications", "Clinical impact of the study" ]
[ "Primary end points of the study included:\nHospital mortality, defined as the incidence of death occurring during admission to hospital for surgery or up to 30 days after CABG\nOverall mortality, defined as hospital deaths and all deaths during the follow-up time.\nSecondary end points of the study were:\nCarotid artery disease, defined as the presence of carotid artery stenosis of ≥50%, or carotid artery occlusion, or post-carotid endarterectomy.\nThe EuroSCORE, used for perioperative risk stratification as described by Nashef and coworkers.8 \nHypertension, diagnosed if current blood pressure was >140 mmHg systolic or >90 mmHg diastolic, or if the patient was currently receiving antihypertensive drugs, or if the patient had a history of hypertension.\nStroke, defined as any neurologic impairment of motor, sensory or cognitive function that persisted for >24 h, or was associated with death, and that could not be explained by other neurologic etiologies (ie, postoperative delirium, dementia, head trauma).\nHistory of neurologic events, defined as a preoperative history of transient ischemic attack or stroke.\nChronic obstructive pulmonary disease (COPD), defined by long-term use of bronchodilators or steroids for lung disease and forced expiratory volume in one second (FEV1) <80% of the predicted value in preoperative spirometry, which was performed routinely at our clinic.\nPeripheral vascular disease, defined as the presence of intermittent claudication, or previous or planned intervention on the abdominal aorta or limb arteries.\nVentilation time, defined as the duration of ventilation needed postoperatively, including potential need for additional ventilation after re-intubation.\nIntensive care unit (ICU) stay, defined as the duration of ICU stay needed after the operation, including potential need for additional ICU stay after readmission.\nPerioperative myocardial infarction, defined as the development of new persistent ST-segment, T- or Q-wave changes in electrocardiogram (ECG), new left bundle branch block, and/or troponin T elevation above the 5×99th centile of the upper reference limit and/or creatine kinase (CK)-MB/CK ratio of >10%.\nNeed for intra-aortic balloon pump (IABP), defined as any use of an IABP after establishing cardiopulmonary bypass or in the postoperative period.\nPostoperative atrial fibrillation, defined as any new episode of atrial fibrillation documented by online monitoring or on ECG during the postoperative period.\nReoperation for bleeding, defined as any incidence of reoperation needed for bleeding complications.", "Continuous variables are shown as median and range, categorical variables as number and percentage. The 1-, 2-, and 3-year survival rates were calculated using a life table. We calculated the best cut-off values of NT-proBNP to predict hospital mortality or overall mortality by using receiver operating characteristic (ROC) curves. The comparison between categorical variables and hospital mortality, overall death, or NT-proBNP >502 ng/l was performed using χ2 analysis. The comparison between continuous variables and hospital mortality, overall death, or NT-proBNP >502 ng/l was performed using a Mann–Whitney test. Survival curves were generated using Kaplan–Meier estimates. Differences in the survival rate were calculated using the log rank test. For multivariate analysis we used a Cox regression model. 
As covariates the factors commonly known to influence NT-proBNP levels were entered: gender, age (we chose an age >67 years, which was the median of our cohort), body mass index (calculated as body weight [kg]/height2 [m2]; we chose a BMI >27 kg/m2, which was the median of our cohort), serum creatinine levels above the upper reference limit of 1.2 mg/dl. A p value <0.05 was regarded as statistically significant. SPSS™ 15.0 for Windows (SPSS Inc, Chicago, USA) statistical software was used.", "Incidence of death during follow-up was significantly higher in the highest (18/205) in comparison with the lowest NT-proBNP quartile (1/203), (p<0.001, Table 3). Cox regression analysis of the different quartiles of NT-proBNP also showed significant differences. (p<0.001, Figure 3)\nMultivariate Cox regression analysis (covariates: gender, age >67 years, BMI >27 kg/m2, serum creatinine >1.2 mg/dl) revealed NT-proBNP as an independent risk factor for mid-term survival (p = 0.025, OR = 3.079, CI  = 1.149–8.247).", "Patients with NT-proBNP levels >502 ng/l had more comorbidities and consecutively a higher EuroSCORE than those with NT-proBNP levels ≤502 ng/l. Postoperatively those patients had a significantly longer ventilation time (p = 0.005), longer ICU stay (p = 0.001), a higher rate of renal failure requiring hemofiltration (p = 0.001), a higher rate of IABPs (p<0.001), and a higher rate of postoperative atrial fibrillation (p = 0.031) than patients with NT-proBNP levels ≤502 ng/l (Table 4).", "In our series ROC curve analysis revealed an NT-proBNP cut-off level of 430 ng/l to best predict hospital mortality and a cut-off level of 502 ng/l for prediction of overall mortality. These levels are comparable to cut-off levels for predicting postoperative cardiac events in patients undergoing vascular surgery, which were reported to be between 280 and 533 pg/ml.1 Nozohoor et al. found that an increased BNP level on admission to the ICU was a risk factor for heart failure following aortic valve replacement.9 Nevertheless, one has to keep in mind that cut-off points vary between different study cohorts. Thus we focused on patients undergoing isolated CABG surgery in our study.\nWe found that age, preoperative serum creatinine, peripheral vascular disease, and NT-proBNP levels >430 ng/l were significantly associated with hospital mortality. Although the first three risk factors are also found to be predictive for hospital mortality in the EuroSCORE, NT-proBNP is not part of this preoperative risk score for cardiac surgery patients.8 In an interesting study, Grabowski et al. found a higher early mortality and a decreased PCI success rate (especially an increased number of no-reflow phenomenon) in patients with acute ST elevation myocardial infarction and with high levels of serum BNP.4 ", "In a series of 98 male patients undergoing different types of heart surgery Hutfless et al. found increased BNP levels in patients who died within 1 year after surgery.10 Kragelund et al. found a decreased long-term survival in 1039 patients with stable coronary artery disease and increased BNP levels.11 Other authors found that BNP and NT-proBNP levels were predictive for survival rates in patients with acute coronary syndromes and acute myocardial infarction.12,13 In our study, mid-term survival rates were significantly decreased in patients with elevated NT-proBNP levels. 
Furthermore, NT-proBNP remained as a significant risk factor of survival when the commonly known factors influencing BNP levels (ie, gender, age BMI, serum creatinine) were included in the multivariate analysis.", "We found an increased mechanical ventilation time and length of ICU stay in patients with elevated preoperative NT-proBNP levels. In general, those patients exhibited a higher rate of comorbidities, resulting in an increased risk score (EuroSCORE). In detail, a higher rate of postoperative renal failure requiring hemofiltration was found, which may be explained by the higher preoperative serum creatinine levels in patients with NT-proBNP levels >502 ng/ml. We also found a higher rate of IABPs associated with higher NT-proBNP levels. This is in good agreement with Hutfless et al., who reported higher BNP levels in patients requiring an IABP postoperatively compared with those who did not.10 In exacerbated COPD elevated BNP levels were associated with a prolonged stay in the ICU.14 \nAtrial fibrillation is a common complication after cardiac surgery. Although it is easily manageable, it causes a (transient) circulatory disturbance that may be critical for the intensive care patient. In our study we found a higher rate of postoperative atrial fibrillation in patients with elevated NT-proBNP levels. Wazni and coworkers found that patients with atrial fibrillation following cardiac surgery exhibited higher BNP levels than patients who remained in sinus rhythm throughout the postoperative course.15 ", "Preoperative measurement of NT-proBNP levels can be used, in addition to established risk scores, to determine CABG patients with an increased risk. Since the mid-term survival following coronary bypass surgery is significantly decreased, patients with increased NT-proBNP should be followed up closely. In accordance with this, Mayer et al. recently published results showing a decreased long-term survival rate of coronary patients without clinical manifestation of heart failure and NT-proBNP levels >862 pmol/l.16 \nWe conclude that elevated preoperative serum NT-proBNP levels are associated with a higher postoperative early and mid-term mortality, as well as morbidity, in patients undergoing isolated CABG." ]
[ null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "PATIENTS AND METHODS", "Definitions", "Statistical Analysis", "RESULTS", "Mortality According to Quartiles of Serum NT-proBNP levels", "Association Between NT-proBNP Levels and Perioperative Outcome", "DISCUSSION", "NT-proBNP and Hospital Mortality", "NT-proBNP and Mid-term Survival Rates", "NT-proBNP and Postoperative Complications", "Clinical impact of the study" ]
[ "Brain type natriuretic peptide (BNP) is primarily produced by cardiac myocytes. Physiological effects of BNP are a peripheral vasodilatation and inhibition of renin–angiotensin production.1-3 The precursor peptide proBNP is split into the active hormone BNP and the N-terminal fragment of proBNP (NT-proBNP). Both BNP and NT-proBNP are established markers for cardiac failure. However, other pathologies such as exacerbated chronic obstructive pulmonary disease, acute coronary syndromes, atrial fibrillation, and myocarditis can cause elevated BNP levels. Additionally, higher NT-proBNP levels are associated with: female gender, impaired renal function, and older age. Obesity has been shown to decrease NT-proBNP levels.2 Increased BNP levels are a prognostic marker associated with higher mortality in patients with myocardial infarction, cardiogenic shock, and pulmonary embolism.4-6 In patients with coronary artery disease increased BNP levels are associated with an increased rate of myocardial infarction and cardiovascular death during mid-term follow-up.7 \nWe aimed to investigate if preoperative serum NT-proBNP levels are associated with hospital and mid-term mortality and with postoperative outcome variables in patients undergoing isolated coronary artery bypass grafting (CABG)", "Data used in the analysis were collected from the institutional database at the Department of Cardiac Surgery, Innsbruck Medical University, which contained data collected using the data collection form of the Society of Thoracic Surgeons (Adult Cardiac Surgery Database Data Collection form Version 2.25.1). Preoperative NT-proBNP was measured routinely in all patients on the day of admission before surgery. The data of 819 patients undergoing isolated CABG were analysed. Patients included in the study were consecutive cases with CABG seen at our department between 2001 and 2007. Patients with concomitant procedures (valve surgery, aortic surgery, etc) were excluded. Patients with missing preoperative NT-proBNP values were also excluded. 
Patients were followed up by request of the national death registry (Statistik Austria).\n[SUBTITLE] Definitions [SUBSECTION] Primary end points of the study included:\nHospital mortality, defined as the incidence of death occurring during admission to hospital for surgery or up to 30 days after CABG\nOverall mortality, defined as hospital deaths and all deaths during the follow-up time.\nSecondary end points of the study were:\nCarotid artery disease, defined as the presence of carotid artery stenosis of ≥50%, or carotid artery occlusion, or post-carotid endarterectomy.\nThe EuroSCORE, used for perioperative risk stratification as described by Nashef and coworkers.8 \nHypertension, diagnosed if current blood pressure was >140 mmHg systolic or >90 mmHg diastolic, or if the patient was currently receiving antihypertensive drugs, or if the patient had a history of hypertension.\nStroke, defined as any neurologic impairment of motor, sensory or cognitive function that persisted for >24 h, or was associated with death, and that could not be explained by other neurologic etiologies (ie, postoperative delirium, dementia, head trauma).\nHistory of neurologic events, defined as a preoperative history of transient ischemic attack or stroke.\nChronic obstructive pulmonary disease (COPD), defined by long-term use of bronchodilators or steroids for lung disease and forced expiratory volume in one second (FEV1) <80% of the predicted value in preoperative spirometry, which was performed routinely at our clinic.\nPeripheral vascular disease, defined as the presence of intermittent claudication, or previous or planned intervention on the abdominal aorta or limb arteries.\nVentilation time, defined as the duration of ventilation needed postoperatively, including potential need for additional ventilation after re-intubation.\nIntensive care unit (ICU) stay, defined as the duration of ICU stay needed after the operation, including potential need for additional ICU stay after readmission.\nPerioperative myocardial infarction, defined as the development of new persistent ST-segment, T- or Q-wave changes in electrocardiogram (ECG), new left bundle branch block, and/or troponin T elevation above the 5×99th centile of the upper reference limit and/or creatine kinase (CK)-MB/CK ratio of >10%.\nNeed for intra-aortic balloon pump (IABP), defined as any use of an IABP after establishing cardiopulmonary bypass or in the postoperative period.\nPostoperative atrial fibrillation, defined as any new episode of atrial fibrillation documented by online monitoring or on ECG during the postoperative period.\nReoperation for bleeding, defined as any incidence of reoperation needed for bleeding complications.\nPrimary end points of the study included:\nHospital mortality, defined as the incidence of death occurring during admission to hospital for surgery or up to 30 days after CABG\nOverall mortality, defined as hospital deaths and all deaths during the follow-up time.\nSecondary end points of the study were:\nCarotid artery disease, defined as the presence of carotid artery stenosis of ≥50%, or carotid artery occlusion, or post-carotid endarterectomy.\nThe EuroSCORE, used for perioperative risk stratification as described by Nashef and coworkers.8 \nHypertension, diagnosed if current blood pressure was >140 mmHg systolic or >90 mmHg diastolic, or if the patient was currently receiving antihypertensive drugs, or if the patient had a history of hypertension.\nStroke, defined as any neurologic impairment of motor, sensory or 
cognitive function that persisted for >24 h, or was associated with death, and that could not be explained by other neurologic etiologies (ie, postoperative delirium, dementia, head trauma).\nHistory of neurologic events, defined as a preoperative history of transient ischemic attack or stroke.\nChronic obstructive pulmonary disease (COPD), defined by long-term use of bronchodilators or steroids for lung disease and forced expiratory volume in one second (FEV1) <80% of the predicted value in preoperative spirometry, which was performed routinely at our clinic.\nPeripheral vascular disease, defined as the presence of intermittent claudication, or previous or planned intervention on the abdominal aorta or limb arteries.\nVentilation time, defined as the duration of ventilation needed postoperatively, including potential need for additional ventilation after re-intubation.\nIntensive care unit (ICU) stay, defined as the duration of ICU stay needed after the operation, including potential need for additional ICU stay after readmission.\nPerioperative myocardial infarction, defined as the development of new persistent ST-segment, T- or Q-wave changes in electrocardiogram (ECG), new left bundle branch block, and/or troponin T elevation above the 5×99th centile of the upper reference limit and/or creatine kinase (CK)-MB/CK ratio of >10%.\nNeed for intra-aortic balloon pump (IABP), defined as any use of an IABP after establishing cardiopulmonary bypass or in the postoperative period.\nPostoperative atrial fibrillation, defined as any new episode of atrial fibrillation documented by online monitoring or on ECG during the postoperative period.\nReoperation for bleeding, defined as any incidence of reoperation needed for bleeding complications.\n[SUBTITLE] Statistical Analysis [SUBSECTION] Continuous variables are shown as median and range, categorical variables as number and percentage. The 1-, 2-, and 3-year survival rates were calculated using a life table. We calculated the best cut-off values of NT-proBNP to predict hospital mortality or overall mortality by using receiver operating characteristic (ROC) curves. The comparison between categorical variables and hospital mortality, overall death, or NT-proBNP >502 ng/l was performed using χ2 analysis. The comparison between continuous variables and hospital mortality, overall death, or NT-proBNP >502 ng/l was performed using a Mann–Whitney test. Survival curves were generated using Kaplan–Meier estimates. Differences in the survival rate were calculated using the log rank test. For multivariate analysis we used a Cox regression model. As covariates the factors commonly known to influence NT-proBNP levels were entered: gender, age (we chose an age >67 years, which was the median of our cohort), body mass index (calculated as body weight [kg]/height2 [m2]; we chose a BMI >27 kg/m2, which was the median of our cohort), serum creatinine levels above the upper reference limit of 1.2 mg/dl. A p value <0.05 was regarded as statistically significant. SPSS™ 15.0 for Windows (SPSS Inc, Chicago, USA) statistical software was used.\nContinuous variables are shown as median and range, categorical variables as number and percentage. The 1-, 2-, and 3-year survival rates were calculated using a life table. We calculated the best cut-off values of NT-proBNP to predict hospital mortality or overall mortality by using receiver operating characteristic (ROC) curves. 
The comparison between categorical variables and hospital mortality, overall death, or NT-proBNP >502 ng/l was performed using χ2 analysis. The comparison between continuous variables and hospital mortality, overall death, or NT-proBNP >502 ng/l was performed using a Mann–Whitney test. Survival curves were generated using Kaplan–Meier estimates. Differences in the survival rate were calculated using the log rank test. For multivariate analysis we used a Cox regression model. As covariates the factors commonly known to influence NT-proBNP levels were entered: gender, age (we chose an age >67 years, which was the median of our cohort), body mass index (calculated as body weight [kg]/height2 [m2]; we chose a BMI >27 kg/m2, which was the median of our cohort), serum creatinine levels above the upper reference limit of 1.2 mg/dl. A p value <0.05 was regarded as statistically significant. SPSS™ 15.0 for Windows (SPSS Inc, Chicago, USA) statistical software was used.", "Primary end points of the study included:\nHospital mortality, defined as the incidence of death occurring during admission to hospital for surgery or up to 30 days after CABG\nOverall mortality, defined as hospital deaths and all deaths during the follow-up time.\nSecondary end points of the study were:\nCarotid artery disease, defined as the presence of carotid artery stenosis of ≥50%, or carotid artery occlusion, or post-carotid endarterectomy.\nThe EuroSCORE, used for perioperative risk stratification as described by Nashef and coworkers.8 \nHypertension, diagnosed if current blood pressure was >140 mmHg systolic or >90 mmHg diastolic, or if the patient was currently receiving antihypertensive drugs, or if the patient had a history of hypertension.\nStroke, defined as any neurologic impairment of motor, sensory or cognitive function that persisted for >24 h, or was associated with death, and that could not be explained by other neurologic etiologies (ie, postoperative delirium, dementia, head trauma).\nHistory of neurologic events, defined as a preoperative history of transient ischemic attack or stroke.\nChronic obstructive pulmonary disease (COPD), defined by long-term use of bronchodilators or steroids for lung disease and forced expiratory volume in one second (FEV1) <80% of the predicted value in preoperative spirometry, which was performed routinely at our clinic.\nPeripheral vascular disease, defined as the presence of intermittent claudication, or previous or planned intervention on the abdominal aorta or limb arteries.\nVentilation time, defined as the duration of ventilation needed postoperatively, including potential need for additional ventilation after re-intubation.\nIntensive care unit (ICU) stay, defined as the duration of ICU stay needed after the operation, including potential need for additional ICU stay after readmission.\nPerioperative myocardial infarction, defined as the development of new persistent ST-segment, T- or Q-wave changes in electrocardiogram (ECG), new left bundle branch block, and/or troponin T elevation above the 5×99th centile of the upper reference limit and/or creatine kinase (CK)-MB/CK ratio of >10%.\nNeed for intra-aortic balloon pump (IABP), defined as any use of an IABP after establishing cardiopulmonary bypass or in the postoperative period.\nPostoperative atrial fibrillation, defined as any new episode of atrial fibrillation documented by online monitoring or on ECG during the postoperative period.\nReoperation for bleeding, defined as any incidence of reoperation needed for 
bleeding complications.", "Continuous variables are shown as median and range, categorical variables as number and percentage. The 1-, 2-, and 3-year survival rates were calculated using a life table. We calculated the best cut-off values of NT-proBNP to predict hospital mortality or overall mortality by using receiver operating characteristic (ROC) curves. The comparison between categorical variables and hospital mortality, overall death, or NT-proBNP >502 ng/l was performed using χ2 analysis. The comparison between continuous variables and hospital mortality, overall death, or NT-proBNP >502 ng/l was performed using a Mann–Whitney test. Survival curves were generated using Kaplan–Meier estimates. Differences in the survival rate were calculated using the log rank test. For multivariate analysis we used a Cox regression model. As covariates the factors commonly known to influence NT-proBNP levels were entered: gender, age (we chose an age >67 years, which was the median of our cohort), body mass index (calculated as body weight [kg]/height2 [m2]; we chose a BMI >27 kg/m2, which was the median of our cohort), serum creatinine levels above the upper reference limit of 1.2 mg/dl. A p value <0.05 was regarded as statistically significant. SPSS™ 15.0 for Windows (SPSS Inc, Chicago, USA) statistical software was used.", "After a median follow up time of 18 (0.5–44) months 33/819 (4%) patients died, including 13/819 (1.6%) hospital deaths. The 1, 2-, and 3-year survival rates (including hospital mortality) were 97%, 93%, and 90%, respectively.\nAnalysis of the ROC curves showed that a serum NT-proBNP cut-off level of 430 ng/l best predicted hospital mortality and a cut-off level of 502 ng/l best predicted overall mortality (Figure 1). Univariate analysis showed that age, preoperative serum creatinine, peripheral vascular disease, and NT-proBNP levels >430 ng/l were significantly associated with hospital mortality. Age, preoperative serum creatinine, diabetes mellitus, history of cerebrovascular accident, chronic obstructive pulmonary disease, left ventricular ejection fraction, and NT-proBNP levels >502 ng/l were significantly associated with overall mortality (Tables 1 and 2).\nThe Kaplan–Meier analysis confirmed a significantly decreased survival rate of patients with NT-proBNP levels >502 ng/l (n = 306) compared with NT-proBNP levels ≤502 ng/l (n = 513) (p = 0.001, Figure 2) The 1, 2-, and 3-year survival rates (including hospital mortality) were 98%, 96%, and 92%, respectively, if NT-proBNP levels were ≤502 ng/l vs. 94%, 87%, and 87%, respectively, if NT-proBNP levels were >502 ng/l.\n[SUBTITLE] Mortality According to Quartiles of Serum NT-proBNP levels [SUBSECTION] Incidence of death during follow-up was significantly higher in the highest (18/205) in comparison with the lowest NT-proBNP quartile (1/203), (p<0.001, Table 3). Cox regression analysis of the different quartiles of NT-proBNP also showed significant differences. (p<0.001, Figure 3)\nMultivariate Cox regression analysis (covariates: gender, age >67 years, BMI >27 kg/m2, serum creatinine >1.2 mg/dl) revealed NT-proBNP as an independent risk factor for mid-term survival (p = 0.025, OR = 3.079, CI  = 1.149–8.247).\nIncidence of death during follow-up was significantly higher in the highest (18/205) in comparison with the lowest NT-proBNP quartile (1/203), (p<0.001, Table 3). Cox regression analysis of the different quartiles of NT-proBNP also showed significant differences. 
(p<0.001, Figure 3)\nMultivariate Cox regression analysis (covariates: gender, age >67 years, BMI >27 kg/m2, serum creatinine >1.2 mg/dl) revealed NT-proBNP as an independent risk factor for mid-term survival (p = 0.025, OR = 3.079, CI  = 1.149–8.247).\n[SUBTITLE] Association Between NT-proBNP Levels and Perioperative Outcome [SUBSECTION] Patients with NT-proBNP levels >502 ng/l had more comorbidities and consecutively a higher EuroSCORE than those with NT-proBNP levels ≤502 ng/l. Postoperatively those patients had a significantly longer ventilation time (p = 0.005), longer ICU stay (p = 0.001), a higher rate of renal failure requiring hemofiltration (p = 0.001), a higher rate of IABPs (p<0.001), and a higher rate of postoperative atrial fibrillation (p = 0.031) than patients with NT-proBNP levels ≤502 ng/l (Table 4).\nPatients with NT-proBNP levels >502 ng/l had more comorbidities and consecutively a higher EuroSCORE than those with NT-proBNP levels ≤502 ng/l. Postoperatively those patients had a significantly longer ventilation time (p = 0.005), longer ICU stay (p = 0.001), a higher rate of renal failure requiring hemofiltration (p = 0.001), a higher rate of IABPs (p<0.001), and a higher rate of postoperative atrial fibrillation (p = 0.031) than patients with NT-proBNP levels ≤502 ng/l (Table 4).", "Incidence of death during follow-up was significantly higher in the highest (18/205) in comparison with the lowest NT-proBNP quartile (1/203), (p<0.001, Table 3). Cox regression analysis of the different quartiles of NT-proBNP also showed significant differences. (p<0.001, Figure 3)\nMultivariate Cox regression analysis (covariates: gender, age >67 years, BMI >27 kg/m2, serum creatinine >1.2 mg/dl) revealed NT-proBNP as an independent risk factor for mid-term survival (p = 0.025, OR = 3.079, CI  = 1.149–8.247).", "Patients with NT-proBNP levels >502 ng/l had more comorbidities and consecutively a higher EuroSCORE than those with NT-proBNP levels ≤502 ng/l. Postoperatively those patients had a significantly longer ventilation time (p = 0.005), longer ICU stay (p = 0.001), a higher rate of renal failure requiring hemofiltration (p = 0.001), a higher rate of IABPs (p<0.001), and a higher rate of postoperative atrial fibrillation (p = 0.031) than patients with NT-proBNP levels ≤502 ng/l (Table 4).", "[SUBTITLE] NT-proBNP and Hospital Mortality [SUBSECTION] In our series ROC curve analysis revealed an NT-proBNP cut-off level of 430 ng/l to best predict hospital mortality and a cut-off level of 502 ng/l for prediction of overall mortality. These levels are comparable to cut-off levels for predicting postoperative cardiac events in patients undergoing vascular surgery, which were reported to be between 280 and 533 pg/ml.1 Nozohoor et al. found that an increased BNP level on admission to the ICU was a risk factor for heart failure following aortic valve replacement.9 Nevertheless, one has to keep in mind that cut-off points vary between different study cohorts. Thus we focused on patients undergoing isolated CABG surgery in our study.\nWe found that age, preoperative serum creatinine, peripheral vascular disease, and NT-proBNP levels >430 ng/l were significantly associated with hospital mortality. Although the first three risk factors are also found to be predictive for hospital mortality in the EuroSCORE, NT-proBNP is not part of this preoperative risk score for cardiac surgery patients.8 In an interesting study, Grabowski et al. 
found a higher early mortality and a decreased PCI success rate (especially an increased number of no-reflow phenomenon) in patients with acute ST elevation myocardial infarction and with high levels of serum BNP.4 \nIn our series ROC curve analysis revealed an NT-proBNP cut-off level of 430 ng/l to best predict hospital mortality and a cut-off level of 502 ng/l for prediction of overall mortality. These levels are comparable to cut-off levels for predicting postoperative cardiac events in patients undergoing vascular surgery, which were reported to be between 280 and 533 pg/ml.1 Nozohoor et al. found that an increased BNP level on admission to the ICU was a risk factor for heart failure following aortic valve replacement.9 Nevertheless, one has to keep in mind that cut-off points vary between different study cohorts. Thus we focused on patients undergoing isolated CABG surgery in our study.\nWe found that age, preoperative serum creatinine, peripheral vascular disease, and NT-proBNP levels >430 ng/l were significantly associated with hospital mortality. Although the first three risk factors are also found to be predictive for hospital mortality in the EuroSCORE, NT-proBNP is not part of this preoperative risk score for cardiac surgery patients.8 In an interesting study, Grabowski et al. found a higher early mortality and a decreased PCI success rate (especially an increased number of no-reflow phenomenon) in patients with acute ST elevation myocardial infarction and with high levels of serum BNP.4 \n[SUBTITLE] NT-proBNP and Mid-term Survival Rates [SUBSECTION] In a series of 98 male patients undergoing different types of heart surgery Hutfless et al. found increased BNP levels in patients who died within 1 year after surgery.10 Kragelund et al. found a decreased long-term survival in 1039 patients with stable coronary artery disease and increased BNP levels.11 Other authors found that BNP and NT-proBNP levels were predictive for survival rates in patients with acute coronary syndromes and acute myocardial infarction.12,13 In our study, mid-term survival rates were significantly decreased in patients with elevated NT-proBNP levels. Furthermore, NT-proBNP remained as a significant risk factor of survival when the commonly known factors influencing BNP levels (ie, gender, age BMI, serum creatinine) were included in the multivariate analysis.\nIn a series of 98 male patients undergoing different types of heart surgery Hutfless et al. found increased BNP levels in patients who died within 1 year after surgery.10 Kragelund et al. found a decreased long-term survival in 1039 patients with stable coronary artery disease and increased BNP levels.11 Other authors found that BNP and NT-proBNP levels were predictive for survival rates in patients with acute coronary syndromes and acute myocardial infarction.12,13 In our study, mid-term survival rates were significantly decreased in patients with elevated NT-proBNP levels. Furthermore, NT-proBNP remained as a significant risk factor of survival when the commonly known factors influencing BNP levels (ie, gender, age BMI, serum creatinine) were included in the multivariate analysis.\n[SUBTITLE] NT-proBNP and Postoperative Complications [SUBSECTION] We found an increased mechanical ventilation time and length of ICU stay in patients with elevated preoperative NT-proBNP levels. In general, those patients exhibited a higher rate of comorbidities, resulting in an increased risk score (EuroSCORE). 
In detail, a higher rate of postoperative renal failure requiring hemofiltration was found, which may be explained by the higher preoperative serum creatinine levels in patients with NT-proBNP levels >502 ng/ml. We also found a higher rate of IABPs associated with higher NT-proBNP levels. This is in good agreement with Hutfless et al., who reported higher BNP levels in patients requiring an IABP postoperatively compared with those who did not.10 In exacerbated COPD elevated BNP levels were associated with a prolonged stay in the ICU.14 \nAtrial fibrillation is a common complication after cardiac surgery. Although it is easily manageable, it causes a (transient) circulatory disturbance that may be critical for the intensive care patient. In our study we found a higher rate of postoperative atrial fibrillation in patients with elevated NT-proBNP levels. Wazni and coworkers found that patients with atrial fibrillation following cardiac surgery exhibited higher BNP levels than patients who remained in sinus rhythm throughout the postoperative course.15 \nWe found an increased mechanical ventilation time and length of ICU stay in patients with elevated preoperative NT-proBNP levels. In general, those patients exhibited a higher rate of comorbidities, resulting in an increased risk score (EuroSCORE). In detail, a higher rate of postoperative renal failure requiring hemofiltration was found, which may be explained by the higher preoperative serum creatinine levels in patients with NT-proBNP levels >502 ng/ml. We also found a higher rate of IABPs associated with higher NT-proBNP levels. This is in good agreement with Hutfless et al., who reported higher BNP levels in patients requiring an IABP postoperatively compared with those who did not.10 In exacerbated COPD elevated BNP levels were associated with a prolonged stay in the ICU.14 \nAtrial fibrillation is a common complication after cardiac surgery. Although it is easily manageable, it causes a (transient) circulatory disturbance that may be critical for the intensive care patient. In our study we found a higher rate of postoperative atrial fibrillation in patients with elevated NT-proBNP levels. Wazni and coworkers found that patients with atrial fibrillation following cardiac surgery exhibited higher BNP levels than patients who remained in sinus rhythm throughout the postoperative course.15 \n[SUBTITLE] Clinical impact of the study [SUBSECTION] Preoperative measurement of NT-proBNP levels can be used, in addition to established risk scores, to determine CABG patients with an increased risk. Since the mid-term survival following coronary bypass surgery is significantly decreased, patients with increased NT-proBNP should be followed up closely. In accordance with this, Mayer et al. recently published results showing a decreased long-term survival rate of coronary patients without clinical manifestation of heart failure and NT-proBNP levels >862 pmol/l.16 \nWe conclude that elevated preoperative serum NT-proBNP levels are associated with a higher postoperative early and mid-term mortality, as well as morbidity, in patients undergoing isolated CABG.\nPreoperative measurement of NT-proBNP levels can be used, in addition to established risk scores, to determine CABG patients with an increased risk. Since the mid-term survival following coronary bypass surgery is significantly decreased, patients with increased NT-proBNP should be followed up closely. In accordance with this, Mayer et al. 
recently published results showing a decreased long-term survival rate of coronary patients without clinical manifestation of heart failure and NT-proBNP levels >862 pmol/l.16 \nWe conclude that elevated preoperative serum NT-proBNP levels are associated with a higher postoperative early and mid-term mortality, as well as morbidity, in patients undergoing isolated CABG.", "In our series ROC curve analysis revealed an NT-proBNP cut-off level of 430 ng/l to best predict hospital mortality and a cut-off level of 502 ng/l for prediction of overall mortality. These levels are comparable to cut-off levels for predicting postoperative cardiac events in patients undergoing vascular surgery, which were reported to be between 280 and 533 pg/ml.1 Nozohoor et al. found that an increased BNP level on admission to the ICU was a risk factor for heart failure following aortic valve replacement.9 Nevertheless, one has to keep in mind that cut-off points vary between different study cohorts. Thus we focused on patients undergoing isolated CABG surgery in our study.\nWe found that age, preoperative serum creatinine, peripheral vascular disease, and NT-proBNP levels >430 ng/l were significantly associated with hospital mortality. Although the first three risk factors are also found to be predictive for hospital mortality in the EuroSCORE, NT-proBNP is not part of this preoperative risk score for cardiac surgery patients.8 In an interesting study, Grabowski et al. found a higher early mortality and a decreased PCI success rate (especially an increased number of no-reflow phenomenon) in patients with acute ST elevation myocardial infarction and with high levels of serum BNP.4 ", "In a series of 98 male patients undergoing different types of heart surgery Hutfless et al. found increased BNP levels in patients who died within 1 year after surgery.10 Kragelund et al. found a decreased long-term survival in 1039 patients with stable coronary artery disease and increased BNP levels.11 Other authors found that BNP and NT-proBNP levels were predictive for survival rates in patients with acute coronary syndromes and acute myocardial infarction.12,13 In our study, mid-term survival rates were significantly decreased in patients with elevated NT-proBNP levels. Furthermore, NT-proBNP remained as a significant risk factor of survival when the commonly known factors influencing BNP levels (ie, gender, age BMI, serum creatinine) were included in the multivariate analysis.", "We found an increased mechanical ventilation time and length of ICU stay in patients with elevated preoperative NT-proBNP levels. In general, those patients exhibited a higher rate of comorbidities, resulting in an increased risk score (EuroSCORE). In detail, a higher rate of postoperative renal failure requiring hemofiltration was found, which may be explained by the higher preoperative serum creatinine levels in patients with NT-proBNP levels >502 ng/ml. We also found a higher rate of IABPs associated with higher NT-proBNP levels. This is in good agreement with Hutfless et al., who reported higher BNP levels in patients requiring an IABP postoperatively compared with those who did not.10 In exacerbated COPD elevated BNP levels were associated with a prolonged stay in the ICU.14 \nAtrial fibrillation is a common complication after cardiac surgery. Although it is easily manageable, it causes a (transient) circulatory disturbance that may be critical for the intensive care patient. 
In our study we found a higher rate of postoperative atrial fibrillation in patients with elevated NT-proBNP levels. Wazni and coworkers found that patients with atrial fibrillation following cardiac surgery exhibited higher BNP levels than patients who remained in sinus rhythm throughout the postoperative course.15 ", "Preoperative measurement of NT-proBNP levels can be used, in addition to established risk scores, to determine CABG patients with an increased risk. Since the mid-term survival following coronary bypass surgery is significantly decreased, patients with increased NT-proBNP should be followed up closely. In accordance with this, Mayer et al. recently published results showing a decreased long-term survival rate of coronary patients without clinical manifestation of heart failure and NT-proBNP levels >862 pmol/l.16 \nWe conclude that elevated preoperative serum NT-proBNP levels are associated with a higher postoperative early and mid-term mortality, as well as morbidity, in patients undergoing isolated CABG." ]
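The record above describes deriving an NT-proBNP cut-off from ROC analysis and then testing NT-proBNP as an independent predictor of mid-term survival in a multivariate Cox model with dichotomised covariates. A minimal Python sketch of that kind of workflow is given below, run on synthetic data; the variable names, the invented values, the use of the Youden index, and the lifelines package are illustrative assumptions rather than details taken from the study.

import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve, roc_auc_score
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
ntprobnp = rng.lognormal(mean=5.5, sigma=1.0, size=n)                            # ng/l, synthetic
died_in_hospital = (rng.random(n) < 0.02 + 0.05 * (ntprobnp > 500)).astype(int)  # synthetic outcome

# Cut-off that best separates survivors from non-survivors (Youden index, max of TPR - FPR)
fpr, tpr, thresholds = roc_curve(died_in_hospital, ntprobnp)
cutoff = thresholds[np.argmax(tpr - fpr)]
print("AUC =", round(roc_auc_score(died_in_hospital, ntprobnp), 2), "cut-off =", round(cutoff), "ng/l")

# Multivariate Cox model for mid-term survival with dichotomised covariates (all synthetic)
df = pd.DataFrame({
    "followup_years": rng.uniform(0.1, 4.0, n),
    "died": rng.integers(0, 2, n),
    "ntprobnp_high": (ntprobnp > cutoff).astype(int),
    "age_gt_67": rng.integers(0, 2, n),
    "bmi_gt_27": rng.integers(0, 2, n),
    "creatinine_gt_1_2": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
})
CoxPHFitter().fit(df, duration_col="followup_years", event_col="died").print_summary()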
[ "intro", "methods", null, null, "results", null, null, "discussion", null, null, null, null ]
[ "Brain type natriuretic peptide", "BNP", "NT-proBNP", "CABG", "Coronary artery disease" ]
Reliability of the Brazilian version of the Functional Assessment of Cancer Therapy-Lung (FACT-L) and the FACT-Lung Symptom Index (FLSI).
21340211
The assessment of quality of life in patients with lung cancer has become an important evaluative endpoint in current clinical trials. For lung cancer patients, one of the most common quality of life tools available is the FACT-L. Despite the amount of data available regarding this questionnaire, there are no data on its performance in Brazilian lung cancer patients.
INTRODUCTION
The FACT-L with the FLSI questionnaire was prospectively administered to 30 consecutive, stable, lung cancer outpatients at baseline and at 2 weeks.
METHODS
The intraclass correlation coefficient between test and retest for the FACT-L ranged from 0.79 to 0.96 and for the FLSI was 0.87. There was no correlation between these questionnaire dimensions and clinical or functional parameters.
RESULTS
The Brazilian version of the FACT-L with FLSI questionnaire is reliable and is quick and simple to apply. This instrument can now be used to properly evaluate the quality of life of Brazilian lung cancer patients.
CONCLUSIONS
[ "Brazil", "Female", "Humans", "Lung Neoplasms", "Male", "Middle Aged", "Prospective Studies", "Quality of Life", "Reproducibility of Results", "Surveys and Questionnaires" ]
3020333
INTRODUCTION
Lung cancer has become a disease of great global impact and remains the leading cause of death from cancer in the world.1 As smoking and environmental pollution cannot be controlled in the short term, the incidence of lung cancer continues to increase, especially in females.2,3 In Brazil, the estimate for 2010 is approximately 18 new cases per 100 000 men and 10 per 100 000 women, corresponding to 17 800 and 9 930 new cases of lung cancer among men and women respectively.4 Advances in lung cancer therapy have been improving survival rates, although the prognosis remains poor. The 5‐year survival rate is around 15% in developed countries1 and 10% in Brazil.4 Therefore, the impact of both disease and treatment on the health and psychosocial functioning of these patients should be considered.5 In this context, quality of life assessment and the analysis of the main symptoms that lead to functional capacity limitation have become issues of utmost importance in the evaluation of lung cancer patients. However, in Brazil, there are few studies that evaluate this aspect of the disease, mainly because of the lack of specific tools adapted to and reproducible for the Brazilian Portuguese language. Several instruments are currently used to evaluate the quality of life in patients with lung cancer. The Functional Assessment of Cancer Therapy‐Lung (FACT‐L) was generated by the Functional Assessment of Chronic Illness Therapy (FACIT) group. It is a specific, multidimensional questionnaire widely used in clinical studies.6-9 Moreover, the FACT‐Lung Symptom Index (FLSI), which is a brief measure involving five common symptoms in lung cancer patients, can be applied in combination with the FACT‐L or alone. Although the FACT‐L is currently used in Brazil, mainly in international multicenter trials, there is no study assessing the reliability of this questionnaire in the Brazilian Portuguese language. Therefore, the purpose of this study was to evaluate the reproducibility of the FACT‐L and the FLSI for Brazilian lung cancer patients.
METHODS
A convenience sample comprising 30 patients with lung cancer was recruited from the outpatient lung cancer clinic of the São Paulo Hospital ‐ Federal University of São Paulo. This study was approved by the Institutional Review Board of our center, and a Term of Informed Consent was signed by all patients. The following inclusion criteria were used: histologically proven lung cancer; 18 years of age or older; a minimum score of 21 on the Mini Mental State Examination (MMSE);10,11 out of chemotherapy/radiotherapy treatment; and clinical stability during the study and at least 10 days before the beginning of the evaluations. Clinical stability was defined as the absence of change in cough, sputum, and dyspnea, assessed by a structured form filled out during outpatient follow‐up, and no hospitalizations or modifications in the therapeutic regimen. An exclusion criterion was the refusal to answer any questionnaire. The sample size was based on previous reliability studies of other quality of life questionnaires related to respiratory diseases in Brazil.12–15 Clinical evaluation and physical examination were performed by a team of physicians, based on a structured form. All patients met the stability criteria. The drug regimen used by the patients remained unchanged during the 15‐day interval between questionnaire applications. In the first visit, the following independent variables were collected: gender (male and female proportion), age (in years), history of tobacco use (yes or no) and consumption (pack–years), histologic subtypes (adenocarcinoma, squamous cell, small cell lung cancer and others),16 staging according to the 1997 TNM classification for non‐small cell lung cancer (NSCLC) patients (stratified from IA to IIIA and from IIIB to IV),17 Karnofsky Performance Status (KPS),18 spirometry (forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC) percentage of predicted and FEV1/FVC percentage),19 MMSE, and FACT‐L and FLSI scores.9 The Portuguese version of the FACT‐L and FLSI was released for use by the FACIT group (the developer of the questionnaire). The translation into Brazilian Portuguese, back translation and review by an expert committee to access the semantic, conceptual, idiomatic, cultural and metabolic equivalences were previously done by that group.20,21 The FACT‐L, version 4, is a combination of the 27‐item FACT‐General (FACT‐G) and the 9‐item Lung Cancer Subscale (LCS). A total FACT‐G score is calculated by summing the physical well‐being (PWB), social/family well‐being (SWB), emotional well‐being (EWB), and functional well‐being (FWB) subscale scores, with a score ranging from 0 to 108. A total FACT‐L score is obtained by summing the FACT‐G score with the LCS (two of the nine items are not scored). The FACT‐L score ranges from 0 to 136.6 The FACT‐L Trial Outcome Index (FACT‐L TOI) is, a priori, an index that sums the PWB, FWB, and LCS into a 21‐item scale. Its score ranges from 0 to 84.6 The FLSI is a symptom index with six questions regarding the five most frequent symptoms reported by lung cancer patients, especially in the advanced stages: dyspnea, fatigue, pain, weight loss, and coughing. Its score ranges from 0 to 24. Higher scores generated by the FACT‐L and FLSI correspond to better quality of life. The score on each aspect of the FACT‐L and FLSI is obtained according to the option chosen by the patient. The available options were: “not at all”, “a little bit”, “somewhat”, “quite a bit”, or “very much”. 
The higher the score obtained by each patient in question, the greater the final score of the subscale and the better the quality of life. However, these scores have a non‐linear behavior. Patients answered the questionnaire after being read each question, all by the same interviewer. The process took place in a calm environment with no interruptions allowed. The questionnaires were reviewed at the end of the interview to avoid any missed questions. The response time was timed in the two visits. All doubts expressed by patients during the questionnaire were documented. Patients were also asked not only about what they felt in terms of question content, but also about the length of the questionnaire. The test–retest design was adopted for the reliability study. The questionnaire was administered by the same researcher twice with a 15‐day interval. The scores obtained on different scales and subscales of FACT‐G were compared with reference values established by the FACIT group.22 For this comparison, we used the minimal clinically significant difference, determined by the same group.6 [SUBTITLE] Statistical analysis [SUBSECTION] Variables were expressed as mean and standard deviation. The intraclass correlation coefficient (ICC) and Kappa reliability coefficient were calculated to assess the reliability of the questionnaire and questions respectively. To compare the two groups, we used the chi‐square test for categorical variables, t test for parametric continuous variables, and Mann–Whitney test for nonparametric continuous variables. For the correlations between spirometry and questionnaire scores, the Spearman's correlation coefficient was used. Statistical analysis was performed with SPSS® software version 13.0. All tests were two‐tailed, and the level of significance was 5%. Variables were expressed as mean and standard deviation. The intraclass correlation coefficient (ICC) and Kappa reliability coefficient were calculated to assess the reliability of the questionnaire and questions respectively. To compare the two groups, we used the chi‐square test for categorical variables, t test for parametric continuous variables, and Mann–Whitney test for nonparametric continuous variables. For the correlations between spirometry and questionnaire scores, the Spearman's correlation coefficient was used. Statistical analysis was performed with SPSS® software version 13.0. All tests were two‐tailed, and the level of significance was 5%.
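The methods above spell out how the composites are formed: FACT‐G is the sum of the PWB, SWB, EWB and FWB subscales (0 to 108), FACT‐L adds the seven scored LCS items (0 to 136), and the Trial Outcome Index sums PWB, FWB and LCS (0 to 84), with the FLSI ranging from 0 to 24. The short Python sketch below mirrors only that arithmetic; it deliberately omits the reverse-coding of negatively worded items and the proration for missing answers prescribed by the official FACIT scoring guidelines, so it illustrates the score ranges rather than providing a full scoring implementation, and the 7/7/6/7 item split assumed in the example is not stated in the text above.

def fact_l_scores(pwb, swb, ewb, fwb, lcs):
    """Sum item scores (each coded 0-4, 'not at all' .. 'very much') into the
    composites described above: FACT-G (0-108), FACT-L (0-136) and the
    21-item FACT-L Trial Outcome Index (0-84)."""
    fact_g = sum(pwb) + sum(swb) + sum(ewb) + sum(fwb)   # 27 FACT-G items
    fact_l = fact_g + sum(lcs)                           # plus the 7 scored LCS items
    toi = sum(pwb) + sum(fwb) + sum(lcs)                 # PWB + FWB + LCS
    return {"FACT-G": fact_g, "FACT-L": fact_l, "FACT-L TOI": toi}

def flsi_score(items):
    """Sum the six FLSI symptom items (0-4 each), giving the 0-24 range quoted above."""
    return sum(items)

# Maximum-score check, assuming the usual 7/7/6/7 item split of the FACT-G subscales
print(fact_l_scores(pwb=[4] * 7, swb=[4] * 7, ewb=[4] * 6, fwb=[4] * 7, lcs=[4] * 7))
# -> {'FACT-G': 108, 'FACT-L': 136, 'FACT-L TOI': 84}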
RESULTS
The main characteristics of the 30 patients who completed the study are shown in Table 1. Some 63.3% of the patients were over 60 years of age. There was no statistically significant difference between genders regarding age, KPS, spirometry, staging, and histological type. Among patients who never smoked, only one had a history of passive smoking. There was a prevalence of smoking habits in males (p = 0.04), with a higher tobacco smoke load for men than for women, consuming a mean of 53.2 pack–years (SD = 31.6) and 29.6 pack–years (SD = 25.5) respectively (p = 0.02). Eight patients (26.7%) were diagnosed with chronic obstructive pulmonary disease (COPD), according to the GOLD guideline.23 The mean values for each scale of the FACT‐L and FLSI are illustrated in Table 2. The intraclass correlation coefficient values for the different scales of the FACT‐L and FLSI showed excellent correlations (Table 2). The kappa coefficient was used to test question reliability, which was less than 0.4 on questions GP1, GP5 (PWB), GS3, GS4, GS5 (SWB), GF1, GF2, GF3, GF5, GF6 (FWB), and LCL4 (LCS). The remaining questions had a moderate agreement. As for the FLSI, the kappa coefficient was moderate on question B1. In the other questions, the kappa coefficient was less than 0.4. There was no correlation between spirometry and any of the questionnaire scales. The mean score of the FACT‐G scales was similar to the reference values for all scales that have these values established.22 Table 3 shows the mean values for the scales of FACT‐G and FACT‐L described in several studies of reliability and cultural adaptation into other languages. The time spent by patients in answering the questionnaire was measured at both visits (Table 4). The response time on the second application was significantly lower (p = 0.001).
null
null
[ "Statistical analysis" ]
[ "Variables were expressed as mean and standard deviation. The intraclass correlation coefficient (ICC) and Kappa reliability coefficient were calculated to assess the reliability of the questionnaire and questions respectively. To compare the two groups, we used the chi‐square test for categorical variables, t test for parametric continuous variables, and Mann–Whitney test for nonparametric continuous variables. For the correlations between spirometry and questionnaire scores, the Spearman's correlation coefficient was used. Statistical analysis was performed with SPSS® software version 13.0. All tests were two‐tailed, and the level of significance was 5%." ]
[ null ]
[ "INTRODUCTION", "METHODS", "Statistical analysis", "RESULTS", "DISCUSSION" ]
[ "Lung cancer has become a disease of great global impact and remains the leading cause of death from cancer in the world.1 As smoking and environmental pollution cannot be controlled in the short term, the incidence of lung cancer continues to increase, especially in females.2,3 In Brazil, the estimate for 2010 is approximately 18 new cases per 100 000 men and 10 per 100 000 women, corresponding to 17 800 and 9 930 new cases of lung cancer among men and women respectively.4 \nAdvances in lung cancer therapy have been improving survival rates, although the prognosis remains poor. The 5‐year survival rate is around 15% in developed countries1 and 10% in Brazil.4 Therefore, the impact of both disease and treatment on the health and psychosocial functioning of these patients should be considered.5 In this context, quality of life assessment and the analysis of the main symptoms that lead to functional capacity limitation have become issues of utmost importance in the evaluation of lung cancer patients. However, in Brazil, there are few studies that evaluate this aspect of the disease, mainly because of the lack of specific tools adapted to and reproducible for the Brazilian Portuguese language.\nSeveral instruments are currently used to evaluate the quality of life in patients with lung cancer. The Functional Assessment of Cancer Therapy‐Lung (FACT‐L) was generated by the Functional Assessment of Chronic Illness Therapy (FACIT) group. It is a specific, multidimensional questionnaire widely used in clinical studies.6-9 Moreover, the FACT‐Lung Symptom Index (FLSI), which is a brief measure involving five common symptoms in lung cancer patients, can be applied in combination with the FACT‐L or alone.\nAlthough the FACT‐L is currently used in Brazil, mainly in international multicenter trails, there is no study assessing the reliability of this questionnaire in the Brazilian Portuguese language. Therefore, the purpose of this study was to evaluate the reproducibility of the FACT‐L and the FLSI for Brazilian lung cancer patients.", "A convenience sample comprising 30 patients with lung cancer was recruited from the outpatient lung cancer clinic of the São Paulo Hospital ‐ Federal University of São Paulo. This study was approved by the Institutional Review Board of our center, and a Term of Informed Consent was signed by all patients.\nThe following inclusion criteria were used: histologically proven lung cancer; 18 years of age or older; a minimum score of 21 on the Mini Mental State Examination (MMSE);10,11 out of chemotherapy/radiotherapy treatment; and clinical stability during the study and at least 10 days before the beginning of the evaluations. Clinical stability was defined as the absence of change in cough, sputum, and dyspnea, assessed by a structured form filled out during outpatient follow‐up, and no hospitalizations or modifications in the therapeutic regimen. An exclusion criterion was the refusal to answer any questionnaire.\nThe sample size was based on previous reliability studies of other quality of life questionnaires related to respiratory diseases in Brazil.12–15 \nClinical evaluation and physical examination were performed by a team of physicians, based on a structured form. All patients met the stability criteria. 
The drug regimen used by the patients remained unchanged during the 15‐day interval between questionnaire applications.\nIn the first visit, the following independent variables were collected: gender (male and female proportion), age (in years), history of tobacco use (yes or no) and consumption (pack–years), histologic subtypes (adenocarcinoma, squamous cell, small cell lung cancer and others),16 staging according to the 1997 TNM classification for non‐small cell lung cancer (NSCLC) patients (stratified from IA to IIIA and from IIIB to IV),17 Karnofsky Performance Status (KPS),18 spirometry (forced expiratory volume in the first second (FEV1) and forced vital capacity (FVC) percentage of predicted and FEV1/FVC percentage),19 MMSE, and FACT‐L and FLSI scores.9 \nThe Portuguese version of the FACT‐L and FLSI was released for use by the FACIT group (the developer of the questionnaire). The translation into Brazilian Portuguese, back translation and review by an expert committee to access the semantic, conceptual, idiomatic, cultural and metabolic equivalences were previously done by that group.20,21 \nThe FACT‐L, version 4, is a combination of the 27‐item FACT‐General (FACT‐G) and the 9‐item Lung Cancer Subscale (LCS).\nA total FACT‐G score is calculated by summing the physical well‐being (PWB), social/family well‐being (SWB), emotional well‐being (EWB), and functional well‐being (FWB) subscale scores, with a score ranging from 0 to 108. A total FACT‐L score is obtained by summing the FACT‐G score with the LCS (two of the nine items are not scored). The FACT‐L score ranges from 0 to 136.6 \nThe FACT‐L Trial Outcome Index (FACT‐L TOI) is, a priori, an index that sums the PWB, FWB, and LCS into a 21‐item scale. Its score ranges from 0 to 84.6 \nThe FLSI is a symptom index with six questions regarding the five most frequent symptoms reported by lung cancer patients, especially in the advanced stages: dyspnea, fatigue, pain, weight loss, and coughing. Its score ranges from 0 to 24.\nHigher scores generated by the FACT‐L and FLSI correspond to better quality of life.\nThe score on each aspect of the FACT‐L and FLSI is obtained according to the option chosen by the patient. The available options were: “not at all”, “a little bit”, “somewhat”, “quite a bit”, or “very much”. The higher the score obtained by each patient in question, the greater the final score of the subscale and the better the quality of life. However, these scores have a non‐linear behavior.\nPatients answered the questionnaire after being read each question, all by the same interviewer. The process took place in a calm environment with no interruptions allowed. The questionnaires were reviewed at the end of the interview to avoid any missed questions. The response time was timed in the two visits.\nAll doubts expressed by patients during the questionnaire were documented. Patients were also asked not only about what they felt in terms of question content, but also about the length of the questionnaire.\nThe test–retest design was adopted for the reliability study. The questionnaire was administered by the same researcher twice with a 15‐day interval.\nThe scores obtained on different scales and subscales of FACT‐G were compared with reference values established by the FACIT group.22 For this comparison, we used the minimal clinically significant difference, determined by the same group.6 \n[SUBTITLE] Statistical analysis [SUBSECTION] Variables were expressed as mean and standard deviation. 
The intraclass correlation coefficient (ICC) and Kappa reliability coefficient were calculated to assess the reliability of the questionnaire and questions respectively. To compare the two groups, we used the chi‐square test for categorical variables, t test for parametric continuous variables, and Mann–Whitney test for nonparametric continuous variables. For the correlations between spirometry and questionnaire scores, the Spearman's correlation coefficient was used. Statistical analysis was performed with SPSS® software version 13.0. All tests were two‐tailed, and the level of significance was 5%.\nVariables were expressed as mean and standard deviation. The intraclass correlation coefficient (ICC) and Kappa reliability coefficient were calculated to assess the reliability of the questionnaire and questions respectively. To compare the two groups, we used the chi‐square test for categorical variables, t test for parametric continuous variables, and Mann–Whitney test for nonparametric continuous variables. For the correlations between spirometry and questionnaire scores, the Spearman's correlation coefficient was used. Statistical analysis was performed with SPSS® software version 13.0. All tests were two‐tailed, and the level of significance was 5%.", "Variables were expressed as mean and standard deviation. The intraclass correlation coefficient (ICC) and Kappa reliability coefficient were calculated to assess the reliability of the questionnaire and questions respectively. To compare the two groups, we used the chi‐square test for categorical variables, t test for parametric continuous variables, and Mann–Whitney test for nonparametric continuous variables. For the correlations between spirometry and questionnaire scores, the Spearman's correlation coefficient was used. Statistical analysis was performed with SPSS® software version 13.0. All tests were two‐tailed, and the level of significance was 5%.", "The main characteristics of the 30 patients who completed the study are shown in Table 1. Some 63.3% of the patients were over 60 years of age.\nThere was no statistically significant difference between genders regarding age, KPS, spirometry, staging, and histological type.\nAmong patients who never smoked, only one had a history of passive smoking. There was a prevalence of smoking habits in males (p = 0.04), with a higher tobacco smoke load for men than for women, consuming a mean of 53.2 pack–years (SD = 31.6) and 29.6 pack–years (SD = 25.5) respectively (p = 0.02).\nEight patients (26.7%) were diagnosed with chronic obstructive pulmonary disease (COPD), according to the GOLD guideline.23 \nThe mean values for each scale of the FACT‐L and FLSI are illustrated in Table 2. The intraclass correlation coefficient values for the different scales of the FACT‐L and FLSI showed excellent correlations (Table 2).\nThe kappa coefficient was used to test question reliability, which was less than 0.4 on questions GP1, GP5 (PWB), GS3, GS4, GS5 (SWB), GF1, GF2, GF3, GF5, GF6 (FWB), and LCL4 (LCS). The remaining questions had a moderate agreement. As for the FLSI, the kappa coefficient was moderate on question B1. 
In the other questions, the kappa coefficient was less than 0.4.\nThere was no correlation between spirometry and any of the questionnaire scales.\nThe mean score of the FACT‐G scales was similar to the reference values for all scales that have these values established.22 Table 3 shows the mean values for the scales of FACT‐G and FACT‐L described in several studies of reliability and cultural adaptation into other languages.\nThe time spent by patients in answering the questionnaire was measured at both visits (Table 4). The response time on the second application was significantly lower (p = 0.001).", "The focus of this study was to analyze the reliability of the FACT‐L and the FLSI, a specific instrument for assessing the quality of life of lung cancer patients in the Brazilian population. It was observed that the Brazilian version shows excellent reliability for this population.\nThe FACT‐L is an instrument that has been widely used in phase I, II, and III clinical trials. It has been translated and adapted into several languages and cultures.2,3,7,22,24–27 A feature of this instrument is the nonlinearity of the scores of its scales, a factor that complicates the interpretation of isolated study data. To facilitate the explanation of the results, the authors of the general questionnaire (FACT‐G) have developed normative values from two reference groups, one from normal adults and another from adults with cancer in general.22 However, these normative values are only for the general questionnaire (FACT‐G) and its scales. When comparing the results obtained in our study with the reference values, we observe that the score was similar in all scales and for the total value. A possible explanation for the fact that our patients have quality of life similar to patients with other cancer types is that, in the study for the establishment of benchmarks, besides including patients with various types of cancer, such as breast cancer, colon and rectum cancer, and cancer of the head and neck, they also included patients with lung cancer. Furthermore, 76.7% of the sample comprised patients treated and in remission from the underlying disease and, therefore, with better quality of life. When our FACT‐L results are compared with those from other studies using the same questionnaire, we find that there is great variability between countries and different cultures,2,3,7,22,24–27 which may be due to differing perceptions of the questions according to each culture and also differences in patient characteristics, such as staging, age, socioeconomic status, among others.\nThe FLSI is an index designed to identify the presence of the major symptoms related to lung cancer, including dyspnea, pain, fatigue, cough, and weight loss.28 These symptoms can negatively influence quality of life. In this study, in addition to the medical evaluation, FLSI was also used to assess the patient's clinical stability.\nThe reliability for all scores of the FACT‐L showed ICC values greater than 0.75, ranging from 0.78 (SWB) to 0.96 (FACT‐L TOI). These values show excellent reliability and are similar to those found in other studies. In the validation study of the Korean version of the FACT‐L,25 the ICC ranged from 0.52 to 0.84. The Chinese version3 varied from 0.76 to 0.82. In the reliability study of the original version, the ICC ranged from 0.56 (SWB) to 0.89 (FACT‐L TOI).7 The FLSI also showed high reliability, which confirms the stability of the sample. 
Most of the FACT‐L questions showed adequate reliability, analyzed by the kappa coefficient.\nIn this study, the reliability analyses were conducted with a convenience sample similar to other studies evaluating the reliability and cultural adaptation of other questionnaires to the Portuguese language.12–15 \nIn our sample, males predominated, in agreement with the worldwide prevalence of lung cancer.1 The mean age was 61.3 years, which is consistent with most studies that include patients with lung cancer.1,5 Regarding histological type, adenocarcinoma was the most prevalent, consistent with several epidemiological studies in developed countries.1 Unfortunately, regarding staging, there was a predominance of stage III and IV, stages with less chance of survival after treatment.29 The mean FEV1 (% predicted) and FEV1/FVC ratio in our study were similar to the results reported by Young et al., who observed a FEV1 (% predicted) mean of 73% and FEV1/FVC mean of 64% in smokers with lung cancer.30 In our study, there was no correlation between the questionnaire and the analyzed lung function parameters, consistent with other studies that found no significant correlation when investigating the effects of altered pulmonary function on the quality of life of cancer patients.31,32 \nThe process of translating the FACT‐L questionnaire and FLSI into the Portuguese language (Brazil) was performed by the FACIT group. Only a few doubts were reported by the patients, which warrants its use in the current form.\nFew patients had difficulty in interpreting the word “muitíssimo”; however, many of the patients interviewed understood the meaning of the term not because it is a familiar word, but because they realized that it was a scale with a progressive score in which the numerical value assigned to the word “muitíssimo” yielded the highest score. The suggested changes were sent to the FACIT group.\nAlterations to this tool were proposed only for question Q1 “Independentemente do seu nível a(c)tual de a(c)tividade sexual, favor de responder à pergunta a seguir. Se preferir não responder, assinale o quadrículo □ e passe para a próxima seção”. Despite not raising doubts, the words “atual” and “atividade” do not require the letter “(c)”, and there is no need to use the preposition “de” after the word “favor”, both grammatical constructions used in the Portuguese language in Portugal.\nWe conclude that the Brazilian version of the FACT‐L questionnaire and FLSI is reproducible, fast, and of simple application, and that they are capable of measuring the quality of life in lung cancer patients in Brazil." ]
[ "intro", "methods", null, "results", "discussion" ]
[ "Quality of life", "Lung cancer", "Questionnaires", "Reproducibility of results", "Validation studies" ]
An evaluation of children with Kawasaki disease in Istanbul: a retrospective follow-up study.
21340213
Kawasaki disease (KD) is an acute, self-limiting vasculitis of unknown etiology. The incidence of KD is increasing worldwide. However, the epidemiological data for KD in Turkey have not been well described.
BACKGROUND
Patients with KD were retrospectively identified from the hospital discharge records between 2002 and 2010. Atypical cases of KD were excluded. A standardized form was used to collect demographic data, clinical information, echocardiography and laboratory results.
METHOD
Thirty-five patients with KD, with a mean age of 2.5 ± 1.9 years, were identified. Eighty-five point seven per cent of patients were under 5 years of age. A seasonal pattern favouring the winter months was noticed. In addition to fever and bilateral conjunctival injection, changes in the oral cavity and lips were the most commonly detected clinical signs in our cases. Coronary artery abnormalities were detected in nine patients. The majority of our patients had started treatment with intravenous immunoglobulin in the first 10 days of the onset of fever, and only one patient required systemic steroids for intravenous immunoglobulin-resistant KD. The coronary artery abnormalities resolved in all nine patients within 8 months.
RESULTS
This study is the most comprehensive series of children from Turkey with KD included in Medline. As adult-onset ischemic heart disease may be due to KD in childhood, further prospective clinical investigations are needed to understand the epidemiology, management and long-term follow-up of the disease.
CONCLUSION
[ "Child", "Child, Preschool", "Female", "Follow-Up Studies", "Humans", "Immunoglobulins, Intravenous", "Immunologic Factors", "Infant", "Male", "Mucocutaneous Lymph Node Syndrome", "Reference Values", "Retrospective Studies", "Turkey" ]
3020335
INTRODUCTION
Kawasaki disease (KD) is an acute, self‐limiting systemic vasculitis of unknown etiology, which mainly affects children aged <5 years. It was first described by Tomisaku Kawasaki in 1967 in Japan.1 KD has now replaced rheumatic fever as the leading cause of acquired heart disease in childhood in the developed world, and is the second most common childhood vasculitis.2,3 Although the incidence of KD varies among countries, it is much higher in children from Asian countries.4,5 The clinical signs of KD are similar to those of many other childhood illnesses. The disease is often complicated by coronary artery abnormalities (CAA), including dilatation and/or aneurysms, and thus is a leading cause of acquired heart disease in children.6,7 Some clinical features other than the classic diagnostic criteria are intense irritability, cough, diarrhea, sterile pyuria, arthritis, arthralgia, redness and induration at the site of a Bacille–Calmette–Guerin (BCG) scar. Patients with prolonged fever and fewer than four of the other principal criteria are diagnosed as atypical or incomplete KD if CAA are present.8 The incentive for this research came from the remarkable lack of knowledge about the epidemiology and features of KD in Turkey. In this article we present the demographic, clinical and laboratory features of children with KD, who were diagnosed and managed in our hospital.
null
null
RESULTS
Thirty‐five patients with KD were treated in the Istanbul American Hospital during the 8‐year period. The mean age of the patients was 2.5±1.9 years (range 2 months to 7 years). Eighty per cent of the cases were diagnosed within the first 10 days, with the longest time to diagnosis being 15 days. The demographic and clinical characteristics of patients are summarized in Table 2. Eighty six per cent (30 cases) of patients were aged <5 years and 14% (five cases) of patients were aged >5 years at the time of diagnosis. Fever was the main clinical sign in all patients and mean body temperature was 39.4±0.9°C at diagnosis. Although intense irritability is not part of the classic diagnostic criteria, it was present in all patients. In one patient there was redness and induration at the site of the BCG scar. Three patients had arthritis and one patient had abdominal pain and diarrhea. Laboratory findings showed that anemia (Hb <11 g/dl) was present in 14 (40%) patients who were all a year old. Elevated serum transaminases were detected in 10 (28.6%) patients. Laboratory values of the patients at diagnosis are shown in Table 3. No positive culture (blood, urine and throat) was found in our patients. Sterile pyuria was present in two patients. Patients were receiving antibiotic therapy at diagnosis, with the exception of two patients. During the 8‐year period, the monthly number of cases was lowest between July and September and peaked in the winter months (Fig 1). The highest incidence was seen in February, with seven patients. CAA were detected in nine (25.7%) cases (six males, three female). One patient had ectasia in the right coronary artery and two patients in the left coronary artery. Six children had coronary artery aneurysms (one in both right and left coronary arteries with pericardial effusion, three had left coronary artery aneurysms and two had right coronary artery aneurysms with mitral valve regurgitation). All children were treated with intravenous immunoglobulin (IVIg) at a dose of 2 g/kg for a period of 10–12 h in addition to high‐dose aspirin (100 mg/kg) during the febrile period, according to current consensus guidelines.10 In one patient, after a second infusion of IVIg, high‐dose (30 mg/kg) methylprednisolone pulse therapy was carried out for a period of 2 h. Symptoms resolved in this child after two intravenous doses of steroids. There was no recurrence in any patient.
null
null
[ "CONCLUSIONS", "Conflict of interest" ]
[ "In summary, it is our belief that KD is not a rare disease in Turkey. This study is the most comprehensive series of children from Turkey with KD included in Medline. As adult‐onset ischemic heart disease may be due to KD in childhood,9,18,28 further prospective clinical investigations are needed to understand the epidemiology, management and long‐term follow‐up the disease in Turkey.", "None." ]
[ null, null ]
[ "INTRODUCTION", "MATERIAL AND METHODS", "RESULTS", "DISCUSSION", "CONCLUSIONS", "Conflict of interest" ]
[ "Kawasaki disease (KD) is an acute, self‐limiting systemic vasculitis of unknown etiology, which mainly affects children aged <5 years. It was first described by Tomisaku Kawasaki in 1967 in Japan.1 KD has now replaced rheumatic fever as the leading cause of acquired heart disease in childhood in the developed world, and is the second most common childhood vasculitis.2,3 Although the incidence of KD varies among countries, it is much higher in children from Asian countries.4,5 The clinical signs of KD are similar to those of many other childhood illnesses. The disease is often complicated by coronary artery abnormalities (CAA), including dilatation and/or aneurysms, and thus is a leading cause of acquired heart disease in children.6,7 Some clinical features other than the classic diagnostic criteria are intense irritability, cough, diarrhea, sterile pyuria, arthritis, arthralgia, redness and induration at the site of a Bacille–Calmette–Guerin (BCG) scar. Patients with prolonged fever and fewer than four of the other principal criteria are diagnosed as atypical or incomplete KD if CAA are present.8 \nThe incentive for this research came from the remarkable lack of knowledge about the epidemiology and features of KD in Turkey. In this article we present the demographic, clinical and laboratory features of children with KD, who were diagnosed and managed in our hospital.", "Patients with KD were identified from hospital discharge records between 2002 and 2010. All the children were being followed up routinely at an outpatient clinic of the American Hospital, Istanbul, Turkey—a private hospital, generally relying on a high socioeconomic population from Istanbul. Diagnosis of KD was made according to American Heart Association guidelines.9 Table 1 shows the standard diagnostic criteria for KD. Medical charts of patients with KD were reviewed using a standardized form to collect demographic data, clinical information, and laboratory test results, retrospectively. All children diagnosed with atypical KD were excluded. Echocardiography was performed during hospitalization and follow‐up in all patients.\nDefinitions of CAA were based on the following criteria: for children aged <5 years, an internal lumen diameter (ILD) ≤3 mm was considered normal and for children aged ≥5 years, an ILD ≤4 mm was considered normal. An ILD of a coronary artery segment enlarged to <1.5 times the normal upper limit was defined as a dilatation, and an ILD enlarged to ≥1.5 times the normal upper limit was defined as an aneurysm. When a coronary artery was larger than normal (dilated) and without a segmental aneurysm, the vessel was considered ectasic.9 Echocardiography was usually repeated within 2 weeks of the onset of illness, during the fourth week, and thereafter depending on the initial findings. All patients underwent laboratory investigations for platelets, leukocyte (white blood cell) count, hemoglobin (Hb), C‐reactive protein (CRP), erythrocyte sedimentation rate (ESR), aspartate aminotransferase, alanine aminotransferase and underwent urine analysis.\nQualitative data are presented as frequencies with percentages and quantitative data as means with standard deviations (SD).", "Thirty‐five patients with KD were treated in the Istanbul American Hospital during the 8‐year period. The mean age of the patients was 2.5±1.9 years (range 2 months to 7 years). Eighty per cent of the cases were diagnosed within the first 10 days, with the longest time to diagnosis being 15 days. 
The demographic and clinical characteristics of patients are summarized in Table 2. Eighty six per cent (30 cases) of patients were aged <5 years and 14% (five cases) of patients were aged >5 years at the time of diagnosis. Fever was the main clinical sign in all patients and mean body temperature was 39.4±0.9°C at diagnosis. Although intense irritability is not part of the classic diagnostic criteria, it was present in all patients. In one patient there was redness and induration at the site of the BCG scar. Three patients had arthritis and one patient had abdominal pain and diarrhea. Laboratory findings showed that anemia (Hb <11 g/dl) was present in 14 (40%) patients who were all a year old. Elevated serum transaminases were detected in 10 (28.6%) patients. Laboratory values of the patients at diagnosis are shown in Table 3. No positive culture (blood, urine and throat) was found in our patients. Sterile pyuria was present in two patients.\nPatients were receiving antibiotic therapy at diagnosis, with the exception of two patients. During the 8‐year period, the monthly number of cases was lowest between July and September and peaked in the winter months (Fig 1). The highest incidence was seen in February, with seven patients.\nCAA were detected in nine (25.7%) cases (six males, three female). One patient had ectasia in the right coronary artery and two patients in the left coronary artery. Six children had coronary artery aneurysms (one in both right and left coronary arteries with pericardial effusion, three had left coronary artery aneurysms and two had right coronary artery aneurysms with mitral valve regurgitation). All children were treated with intravenous immunoglobulin (IVIg) at a dose of 2 g/kg for a period of 10–12 h in addition to high‐dose aspirin (100 mg/kg) during the febrile period, according to current consensus guidelines.10 In one patient, after a second infusion of IVIg, high‐dose (30 mg/kg) methylprednisolone pulse therapy was carried out for a period of 2 h. Symptoms resolved in this child after two intravenous doses of steroids. There was no recurrence in any patient.", "KD is an acute febrile vasculitis that has been reported with increasing incidence among all racial ethnic groups. While Japan has the highest and increasing annual incidence in the world (184.6 per 100,000 children aged <5 years between 2005 and 2006), the epidemiology of KD in Europe has not been well described.11,12 Likewise, the epidemiology of KD in Turkey is also not well known. A nationwide survey found that KD had an incidence of 9% of childhood vasculitides.13 \nThere are many unconfirmed hypotheses about the cause of KD. Although clinical, laboratory and epidemiologic features strongly suggest an infectious cause, the etiology of KD still remains unknown. One possibility is that a viral repiratory infection (represented by a seasonal pattern), juxtaposed with a subsequent secondary bacterial colonization, precipitates a cascade of events that result in an exaggerated immunologic response.14,15 Our study confirms the winter peak of KD (Figure 1), but no clear relationship with respiratory pathogens was shown as virologic and immunologic data were lacking. 
KD is more common in boys than girls and 85% of cases occur in children aged <5 years.11 In our study, we detected a male predominance and age distribution resembling that of previous studies.16 Studies from the United States also show that KD is more common during the winter and that 76% of children are <5 years old.17 \nIn KD, the fever is typically high‐spiking and remittent, with peak temperatures of >39°C and, in many cases, >40°C.18 Our patients' fever pattern was consistent with this classic finding, with a mean body temperature of 39.4±0.9°C at diagnosis. In addition to fever, bilateral conjunctival injection and changes in the oral cavity and lips were the most commonly detected clinical signs in our cases, as found in other studies from Turkey and other parts of the world.19-22 The occurrence of cervical lymphadenopathy in KD is variable. It is seen in only 50–75% of patients, whereas most of the other features are seen in approximately 90%.23 It is usually unilateral and confined to the anterior cervical triangle, and its classic criteria include ≥1 lymph node that is >1.5 cm in diameter.18 In our study, cervical lymphadenopathy was the least common of the principal clinical features (48.6%). The lymphadenopathies in our patients were typically classic, except for one patient who presented with massive cervical node enlargement.\nMultiple clinical findings may be observed in patients with KD. Arthritis or arthralgia can occur in the first week of the illness and tends to affect multiple joints, including the small interphalangeal joints and large weight‐bearing joints.18 In our series, two patients had arthritis in their knees and one patient had arthritis in his ankle.\nChildren with KD are often more irritable than the children with other febrile ilnesses,18 and all the patients in this study were irritable. Gastrointestinal complaints, including diarrhea, vomiting and abdominal pain, occur in about one‐third of patients.24 Rarely, KD can present as an acute surgical abdomen.24 Contrary to previous studies, only one of our 35 patients had abdominal pain and diarrhea. Erythema and induration at the site of BCG vaccination is common in Japan, where BCG is used widely.25 As in Japan, BCG is routinely performed in Turkey, but only one patient had redness and induration at the site of the BCG scar.\nKD is mainly a clinical diagnosis and there are no pathognomonic laboratory tests or findings. However, leukocytosis is typical during the acute stage of KD. Approximately 50% of patients had white blood cell counts >15,000/mm3.18 Similar to other reports,8,10,22 the mean leukocyte count of our study group was 15,896±6,383/mm3. Normocytic anemia may develop, particularly with more prolonged duration of active inflammation.8,18 Marked anemia (Hb <11 g/dl) on admission was noted in 14 (40%) patients who were all over 1 year old and who had prolonged fever for more than 7 days. Thrombocytosis is a characteristic feature of the later phases of the illness. It is rarely present in the first week of the illness, usually appearing in the second week, and peaking in the third week with a gradual return to normal by 4–8 weeks after onset in uncomplicated cases. 
Platelet counts are usually >450,000/mm3 in patients evaluated after day 7 of illness.18 In our series most of the patients were diagnosed in the second week and the mean platelet count was 496,889±208,503/mm3, which is consistent with other studies.18 Elevations of CRP and ESR are almost universal in KD, usually returning to normal values by 6–10 weeks after the onset of illness.18 Mean values of CRP and ESR were high in our patients. Burns et al.26 reported mild to moderate elevations in serum transaminases in ≤40% of patients. Although the mean values of transaminases were not high in our study group, the ratio of patients with high levels (28.6%) was consistent with this multicenter study.\nUrine analysis showed intermittent mild to moderate sterile pyuria in approximately 33% of patients, suggesting urethritis.18 Urine cultures of two patients with sterile pyuria were normal.\nAs the principal clinical findings that fulfill the diagnostic criteria are not specific for KD, patients with other diseases (Table 1) with similar clinical features were excluded from our study.\nThe major sequelae of KD are related to the cardiovascular, and more specifically, the coronary arterial system. Therefore cardiac imaging by echocardiography is a critical part of the evaluation of all patients with suspected KD. For uncomplicated cases, echocardiography is recommended at the time of diagnosis, at 2 weeks, and at 6–8 weeks after the onset of disease.27 We followed the above recommendations and also repeated the echocardiography beyond 8 weeks for all patients with previously normal findings, as other abnormalities in the coronary artery vasculature (aneurysms) and aortic root (dilatation with or without regurgitation) may develop, even among patients with normal baseline echocardiograms.27,28 Additionally, echocardiography was performed with higher frequency for patients with pericardial effusion, mitral valve regurgitation and IVIg‐resistant KD, for closer evaluation and follow‐up. None of our patients with normal baseline echocardiography was shown to develop CAA on follow‐up echocardiograms beyond the 8 weeks. CAA develop in approximately 15–25% of untreated children with the disease and may lead to ischemic heart disease, myocardial infarction or sudden death.18 In our patients, CAA were detected in nine (25.7%) cases. None of them led to myocardial infarction or ischemic heart disease. A recent study from Turkey reported the CAA rate as 33.3% in 24 children with KD.22 Burns et al. described the emergence of the peak mortality as 15–45 days after the onset of fever; and during this time, well‐established coronary vasculitis occurs concomitantly with a marked elevation of the platelet count and a hypercoagulable state.28 The mean duration of fever in our cases was 7.8±2.8 days (range 4–15). Our patients had fewer adverse sequelae, as all were treated before the peak mortality days, thus suggesting the importance of early diagnosis and treatment. The opportunity for early management occurred as the patients were being routinely followed up in our outpatient clinic.\nThe current medical management of KD is IVIg and high‐dose aspirin. IVIg is very effective when given in the first ten days of illness. 
It reduces the risk of CAA from 20–25% to 1–2%.6,9 A subgroup of patients with KD will be resistant to IVIg therapy; these patients are at greatest risk of developing coronary artery aneurysms and long‐term sequelae of the disease.29 Data have demonstrated the usefulness and safety of systemic steroids in patients resistant to IVIg.30 In our series, the majority of our patients were given IVIg before the tenth day and only one patient was IVIg resistant, who was successfully treated with pulse methylprednisolone. Newburger et al. reported that 50–67% of aneurysms resolve within 1–2 years.31 Echocardiographic evaluation of the nine children with CAA in our study was normal within 8 months (three within 6 months, four within 7 months, and two within 8 months). All patients are alive and receiving annual echocardiographic follow‐up.", "In summary, it is our belief that KD is not a rare disease in Turkey. This study is the most comprehensive series of children from Turkey with KD included in Medline. As adult‐onset ischemic heart disease may be due to KD in childhood,9,18,28 further prospective clinical investigations are needed to understand the epidemiology, management and long‐term follow‐up the disease in Turkey.", "None." ]
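The methods section of this record defines coronary artery abnormalities from the internal lumen diameter (ILD): up to 3 mm is normal under 5 years of age, up to 4 mm from 5 years onwards, enlargement to less than 1.5 times the upper normal limit counts as dilatation, and 1.5 times or more as an aneurysm. A small Python sketch of that rule is given below; the function name is invented, and the segmental-versus-diffuse distinction between aneurysm and ectasia, which depends on the echocardiographic appearance rather than the diameter alone, is not captured.

def classify_coronary_segment(ild_mm, age_years):
    """Apply the internal-lumen-diameter cut-offs quoted in the methods above
    to a single coronary artery measurement."""
    upper_normal = 3.0 if age_years < 5 else 4.0        # mm
    if ild_mm <= upper_normal:
        return "normal"
    if ild_mm < 1.5 * upper_normal:
        return "dilatation/ectasia"
    return "aneurysm"

# Example: a 3-year-old with a 5 mm left coronary artery segment
print(classify_coronary_segment(5.0, 3))   # aneurysm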
[ "intro", "materials|methods", "results", "discussion", null, null ]
[ "Kawasaki disease", "Coronary artery disease", "Echocardiography", "Steroids" ]
Forced oscillation technique in the detection of smoking-induced respiratory alterations: diagnostic accuracy and comparison with spirometry.
21340218
Detection of smoking effects is of utmost importance in the prevention of cigarette-induced chronic airway obstruction. The forced oscillation technique offers a simple and detailed approach to investigate the mechanical properties of the respiratory system. However, there have been no data concerning the use of the forced oscillation technique to evaluate respiratory mechanics in groups with different degrees of tobacco consumption.
INTRODUCTION
One hundred and seventy subjects were divided into five groups according to the number of pack-years smoked: four groups of smokers classified as < 20, 20-39, 40-59, and > 60 pack-years and a control group. The four groups of smokers were compared with the control group using receiver operating characteristic (ROC) curves.
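To make the grouping scheme concrete, the sketch below shows how pack-years (mean packs of 20 cigarettes smoked per day multiplied by the years of smoking, as defined in the full-text Methods) could be computed and a subject assigned to the study's exposure strata. This is a minimal Python illustration; the function names and the handling of the exact boundary values are assumptions, since the report only lists the group labels.

```python
# Minimal sketch: pack-years and the study's exposure strata.
# Formula and group labels follow the Methods text; boundary handling
# (e.g. exactly 60 pack-years) is an assumption.

def pack_years(cigarettes_per_day: float, years_smoked: float) -> float:
    """Pack-years = (cigarettes per day / 20 cigarettes per pack) * years smoked."""
    return (cigarettes_per_day / 20.0) * years_smoked

def exposure_group(py: float) -> str:
    """Assign a smoker to the pack-year strata used in the study."""
    if py < 20:
        return "<20"
    elif py < 40:
        return "20-39"
    elif py < 60:
        return "40-59"
    return ">60"

if __name__ == "__main__":
    py = pack_years(30, 25)        # 1.5 packs/day for 25 years -> 37.5 pack-years
    print(py, exposure_group(py))  # 37.5 20-39
```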
METHODS
The early adverse effects of smoking in the group with < 20 pack-years were adequately detected by forced oscillation technique parameters. In this group, the comparisons of the ROC curves showed significantly better diagnostic accuracy (p < 0.01) for forced oscillation technique parameters. On the other hand, in groups of 20-39, 40-59, and > 60 pack-years, the diagnostic performance of the forced oscillation technique was similar to that observed with spirometry.
RESULTS
This study revealed that forced oscillation technique parameters were able to detect early smoking-induced respiratory involvement when pathologic changes are still potentially reversible. These findings support the use of the forced oscillation technique as a versatile clinical diagnostic tool in helping with chronic obstructive lung disease prevention, diagnosis, and treatment.
CONCLUSIONS
[ "Airway Resistance", "Case-Control Studies", "Early Diagnosis", "Humans", "Pulmonary Disease, Chronic Obstructive", "ROC Curve", "Respiratory Function Tests", "Respiratory Mechanics", "Sensitivity and Specificity", "Smoking", "Spirometry", "Time Factors" ]
3020340
INTRODUCTION
Recently, it was reported that the deterioration in pulmonary function associated with the development of chronic obstructive lung disease (COPD) is directly related to both the duration of the smoking habit and the number of pack–years consumed.1,2 A better understanding of how the smoking habit influences the deterioration in respiratory mechanics would be useful in the precocious diagnosis of COPD, which is usually obtained only in the later stages when respiratory function is already impaired. Owing to the high prevalence of and medical costs associated with COPD, the precocious identification and treatment of these patients is important in order to avoid the severe and expensive stages of this disease.3 The alterations in respiratory mechanics due to smoking are usually evaluated using respiratory flows and volumes obtained by spirometry. However, the modifications in respiratory mechanics are not always detected by this test.4 Moreover, some patients are not able to perform spirometry reliably, as it requires good subject cooperation and maximal effort.5 The forced oscillation technique (FOT) offers a simple and detailed approach to investigate the mechanical properties of the respiratory system. This method characterizes the respiratory impedance and its two components, respiratory system resistance (Rrs) and reactance (Xrs). These parameters are usually measured at various frequencies by means of small pressure oscillations (about 2 cmH2O) superimposed at the mouth during spontaneous breathing. The method is simple, requires only passive cooperation, and no forced expiratory maneuvers. Another important advantage, particularly in pathophysiological research, is that the FOT is able to provide information on the mechanical characteristics of the respiratory system that are complementary to spirometry.6–8 The FOT was applied successfully by a number of investigators to obtain a detailed analysis of the respiratory mechanics in smokers compared with non‐smokers,4,9–11 as well as comparisons among non‐smokers, former smokers, and smokers.12 Recently, this technique has also been applied successfully in our laboratory in physiological studies of the aging process,13 in the detection of early respiratory changes in smokers,14 as well as in studies conducted in asthmatic15 and sarcoidotic16 patients. Therefore, the FOT has great potential to increase our knowledge regarding the pathophysiology of smoking, as well as in helping in the clinical diagnosis of respiratory alterations in this disease. However, to the best of our knowledge, there are no available data using the FOT to investigate changes in respiratory impedance in groups with different degrees of tobacco consumption. In this context, the purpose of this study was twofold: (1) to evaluate the ability of the FOT indices to detect smoking‐induced respiratory alterations, with special emphasis on early alterations; and (2) to compare the diagnostic accuracy of FOT and spirometric parameters in groups with different numbers of pack–years. First, we investigated the influence of the increasing pack–years on the FOT parameters. Then, the sensitivity and specificity of the FOT parameters in identifying respiratory alterations in groups with different numbers of pack–years were analyzed. Finally, the diagnostic accuracy of FOT parameters was compared with the figures obtained by spirometric volumes and flows.
METHODS
As it is impossible to answer the questions proposed in this study based on a follow‐up of individual subjects, a cross‐sectional study in comparable groups of healthy individuals and smoking patients with several degrees of tobacco consumption was performed, in a similar way to the work conducted by Verbanck et al.17 Therefore, healthy control subjects with normal spirometry who had never smoked, as well as smoking subjects who were on no regular medications and had no allergic, respiratory, cardiovascular, gastrointestinal, renal, or neurological symptoms were recruited. All subjects had stable health for at least four consecutive weeks and had signed written informed consent. The institutional ethics committee approved the protocol. Baseline data, including age, height, and weight, were obtained from each subject at the time of the procedures. All smokers were current smokers and had been instructed to abstain from smoking for at least 2 h before the testing. The amount of tobacco smoked and the duration of smoking were quantified using the number of pack–years, which was calculated by multiplying the mean number of packs (20 cigarettes) consumed daily by the number of years that the subject had their smoking habit.18 The smoking subjects were then stratified into subgroups of <20, 20–39, 40–59, and >60 pack–years. Smokers were recruited from both university personnel who smoke and patients who visited the smoking cessation clinic. In addition, another 26 patients with documented COPD and a smoking history of >23.5 pack–years were recruited from the outpatient clinic. These patients were stable at the time of testing. Total respiratory resistance and reactance were measured using a forced oscillation system as described previously.19–21 These measurements were conducted in conformity with the recommendations issued by a task force from the European Respiratory Society.8 Briefly, small‐amplitude pressure variations from 4 to 32 Hz generated by a loudspeaker were applied at the mouth, using a mouthpiece. The pressure input was measured with a Honeywell 176 PC pressure transducer (Honeywell Microswitch, Boston, MA, USA) and airway flow with a screen pneumotachograph. The signals were digitized by a personal computer, and their fast Fourier transform (FFT) was computed using blocks of 4,096 points. Three measurements were made of 16 s each, and the result of the test was calculated as the mean of these measurements. To perform the FOT analysis, the volunteer remained in a sitting position, keeping the head in a normal position and breathing at functional residual capacity (FRC) through a mouthpiece. During the measurements, the subjects wore a nose clip and firmly supported their cheeks and submandibular tissue with their hands.8,22,23 The validity of the data was measured by computing the coherence function. Only values with a coherence function of 0.9 or more were considered adequate.11,24,25 Any time the computed coherence was less than this threshold, the maneuver was not considered valid, and the examination was repeated. Whenever adequate coherence measurements could not be obtained according to these criteria, the patient was excluded from the study.16,14,22,23 Resistive impedance data were subjected to linear regression analysis over the frequency range of 4–16 Hz. The resistive impedance at 0 Hz (R0) was extrapolated from this analysis. 
This parameter is related to the total resistance of the respiratory system.24 The mean resistance (Rm), commonly related to the airway caliber,7 was also calculated for this frequency range. Increases in these flow resistive properties are associated with increased work of breathing. Additionally, the slope of the resistive component of the respiratory impedance (S), which is associated with respiratory system homogeneity,12,26 was also obtained from this analysis. Negative values of this parameter reflect abnormal patterns of ventilation distribution, which are related to alterations in ventilation–perfusion relationships. The mean reactance (Xm), a property usually related to respiratory system nonhomogeneity,27 was calculated based on the entire studied frequency range (4–32 Hz). Respiratory mechanical properties were also characterized by the resonance frequency (fr), which is defined as the frequency at which the Xrs equals zero, and the respiratory system dynamic compliance (Crs,dyn). Dynamic compliance reflects the lungs and bronchial wall compliances, the compliance of the chest wall/abdomen compartment, and thoracic gas compression. The time lag between spirometric and FOT measurements was always <15 min. Using a closed circuit spirometer (Vitrace VT‐139; Pro‐médico, Rio de Janeiro, Brazil), measurements of forced vital capacity (FVC), forced expiratory volume for the first second (FEV1), FEV1/FVC, and the ratio of forced expiratory flow (FEF) between 25% and 75% of FVC to FVC (FEF/FVC) were obtained for subjects in a sitting position. These parameters were presented as raw data and percentiles of the predicted values (%). Predicted values for spirometry were obtained from Knudson et al.28 and Pereira et al.29 Forced expiratory maneuvers were repeated until three sequential measurements were obtained. The indices studied were those obtained through the better curve, which was selected based on the higher value of FEV1 plus FVC. Quality control of spirometry is given by the American Thoracic Society (ATS) criteria, with the software allowing the detection of non‐acceptable maneuvers. The sample size for this study was calculated using the software MedCalc version 8.2 (Medicalc Software, Mariakerke, Belgium). It was based on an anticipated comparison of means obtained in preliminary studies and an assumption of type I and type II errors of 5%. The minimal sample size required was 32 subjects per group. In the present study, there were 34 volunteers in each group. Initially, univariate and multiple regression analyses were adjusted for pack–years and age and then applied to identify the association of these variables with the FOT parameters. These analyses were performed using Stata 8.2 software. The volunteers were then stratified and, when the achieved data presented a statistically normal distribution, the data were reanalyzed using one‐way analysis of variance (ANOVA), which was further corrected by the Tukey significant difference test. A nonparametric test (Kruskal–Wallis (KW)), associated with a Mann–Whitney U test, was applied when the data did not present in a normal distribution. These analyses were performed using Statistica 5.0 software. The results are presented as mean±standard deviation. A p value of <0.05 was considered statistically significant. 
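A minimal numerical sketch of how the oscillometric indices described above could be derived from one measured impedance spectrum is given below (Python/NumPy). R0, S and Rm come from a linear fit of resistance over 4-16 Hz, Xm is the mean reactance over the 4-32 Hz range, and fr is taken as the zero crossing of the reactance, following the definitions in the text; the estimate of Crs,dyn from the lowest-frequency reactance and the assumption that reactance rises monotonically with frequency are illustrative choices, not details stated in the paper.

```python
import numpy as np

def fot_indices(freq_hz, rrs, xrs):
    """Derive the oscillometric indices described in the Methods from one
    resistance (rrs) and reactance (xrs) spectrum sampled at freq_hz.
    The input is assumed to span the 4-32 Hz measurement range."""
    freq_hz, rrs, xrs = (np.asarray(v, dtype=float) for v in (freq_hz, rrs, xrs))

    # R0, Rm and S: linear regression of resistance over 4-16 Hz.
    low = (freq_hz >= 4) & (freq_hz <= 16)
    slope, intercept = np.polyfit(freq_hz[low], rrs[low], 1)
    r0 = intercept          # extrapolated resistance at 0 Hz (total resistance)
    s = slope               # frequency dependence of resistance (homogeneity)
    rm = rrs[low].mean()    # mean resistance, related to airway caliber

    # Xm: mean reactance over the whole studied range (4-32 Hz).
    xm = xrs.mean()

    # fr: resonance frequency, where reactance crosses zero.
    # Assumes xrs increases monotonically with frequency over the range.
    fr = float(np.interp(0.0, xrs, freq_hz))

    # Crs,dyn: common low-frequency approximation Xrs ~ -1 / (2*pi*f*Crs,dyn).
    # This formula is an assumption; the paper does not state how Crs,dyn
    # was computed.
    f0, x0 = freq_hz[0], xrs[0]
    crs_dyn = -1.0 / (2.0 * np.pi * f0 * x0) if x0 < 0 else float("nan")

    return {"R0": r0, "Rm": rm, "S": s, "Xm": xm, "fr": fr, "Crs,dyn": crs_dyn}
```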
The performance of the FOT indices in the detection of smoking‐induced respiratory alterations in the several pack–years groups was evaluated by means of receiver operating characteristic (ROC) analysis.30 These evaluations were constructed using MedCalc 8.2. Comparisons of the AUC among parameters obtained from FOT and spirometry were conducted using MedCalc 8.2, according to the theory described by Metz.31 The values of sensitivity, specificity, and area under the curve (AUC) for spirometry and FOT were obtained based on the optimal cut‐off point, as determined from the ROC curve analysis.30
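The ROC analysis summarized above can be reproduced for any single FOT or spirometric index with a short script. The sketch below assumes scikit-learn is available and uses Youden's J to pick the cut-off, since the paper refers only to "the optimal cut-off point" without naming the criterion; argument names and the orientation flag are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def roc_summary(values, is_smoker, higher_is_abnormal=True):
    """AUC, optimal cut-off and the matching sensitivity/specificity for one
    index (e.g. R0) separating a pack-year group (is_smoker = 1) from the
    control group (is_smoker = 0). Set higher_is_abnormal=False for indices
    that fall with disease (e.g. Crs,dyn, Xm)."""
    scores = np.asarray(values, dtype=float)
    if not higher_is_abnormal:
        scores = -scores

    fpr, tpr, thresholds = roc_curve(is_smoker, scores)
    auc = roc_auc_score(is_smoker, scores)

    # Cut-off chosen here by Youden's J (sensitivity + specificity - 1);
    # this criterion is an assumption, not stated in the paper.
    best = int(np.argmax(tpr - fpr))
    cutoff = thresholds[best] if higher_is_abnormal else -thresholds[best]
    return {
        "AUC": auc,
        "cutoff": float(cutoff),
        "sensitivity": float(tpr[best]),
        "specificity": float(1.0 - fpr[best]),
    }
```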
RESULTS
There were no significant differences in weight or height among the groups. However, there were significant differences in age among the groups. In general, the spirometric parameters were highest in normal subjects and decreased significantly as the pack–years increased (p<0.0001). Univariate analyses (Table 2) show that all the FOT parameters presented highly significant correlations with pack–years (p<0.001). All the FOT parameters, except Crs,dyn, also correlated with age. The multivariate analyses showed that the contribution of pack–years was highly significant for all the FOT parameters (p<0.001, Table 3). In contrast, the contribution of age was only significant for fr, Xm and Crs,dyn, whereas for R0, it was not significant, and S and Rm were near the limit of significance. The amount of tobacco smoked significantly increased R0 (KW‐ANOVA, p<0.0001), Rm (KW‐ANOVA, p<0.0001), and S (KW‐ANOVA, p<0.0001), as seen in Figure 1. Mean values of R0 and Rm increased significantly when groups of normal and smoking subjects of <20 pack–years were compared (p<0.0001 and p<0.00001 respectively). R0 and Rm were also increased with higher pack–years, which resulted in increasing statistical significance when compared with the control group. The comparisons between adjacent groups were statistically significant only for R0 when comparing the two highest pack–years groups. On the other hand, S was not statistically significant when comparing the control with the <20 or 20–39 pack–years groups, but was significantly increased in comparison with the two highest pack–years groups (p<0.001). As the amount of tobacco smoked increased, Crs,dyn (KW‐ANOVA, p<0.0001) and Xm (KW‐ANOVA, p<0.0001) were significantly reduced, whereas fr (KW‐ANOVA, p<0.0001) was increased (Figure 2). The differences comparing the control group with the <20 and 20–39 pack–years groups were not significant for fr and Xm (Figure 2A and B). In contrast, the same comparisons resulted in significant reductions in Crs,dyn (Figure 2C). Considering the comparisons between adjacent classes, significant modifications were observed between the two highest pack–years groups in all three of the reactive parameters studied. Figure 3 presents the ROC curves for FOT and spirometric parameters in all the studied groups. The performance of the FOT and spirometric indices in the detection of smoking‐induced respiratory alterations is described in Figure 4. Table 4 shows detailed values of area under the ROC curve (AUC), sensitivity, and specificity for the optimal cut‐off point for the FOT indices. The results of the comparative analysis among the AUC of FOT and spirometric parameters are described in Figure 5. In general, R0 (Figure 5A), Rm (Figure 5C), and Crs,dyn (Figure 5E) presented significantly higher AUC in smoking subjects with <20 pack–years, and AUC similar to that presented by spirometric parameters as the amount of tobacco smoked increased. Spirometric parameters presented significantly higher AUC than S (Figure 5B), fr (Figure 5D), and Xm (Figure 5F) considering groups of smoking subjects with 20–39 pack–years and 40–59 pack–years.
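The group comparisons reported above (an overall Kruskal-Wallis test followed by pairwise Mann-Whitney tests, as described in the statistical methods) could be reproduced along the following lines. This SciPy sketch compares each pack-year group against the controls only and applies no multiplicity correction, which is a simplification rather than the exact procedure used in the study.

```python
from scipy.stats import kruskal, mannwhitneyu

def compare_groups(data):
    """data: dict mapping a group label ('control', '<20', '20-39', '40-59',
    '>60') to a list of values of one FOT parameter. Returns the overall
    Kruskal-Wallis p-value and pairwise Mann-Whitney p-values of each smoking
    group versus the controls."""
    h_stat, p_overall = kruskal(*data.values())

    pairwise = {}
    for label, values in data.items():
        if label == "control":
            continue
        u_stat, p = mannwhitneyu(data["control"], values, alternative="two-sided")
        pairwise[f"control vs {label}"] = p

    return {"kruskal_wallis_p": p_overall, "pairwise_vs_control": pairwise}
```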
null
null
[ "ACKNOWLEDGEMENTS" ]
[ "The authors would like to thank J.A. Mesquita Jr. and J. G. Santos for their assistance. The Brazilian Council for Scientific and Technological Development (CNPq) and the Rio de Janeiro State Research Supporting Foundation (FAPERJ) supported this study." ]
[ null ]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION", "ACKNOWLEDGEMENTS" ]
[ "Recently, it was reported that the deterioration in pulmonary function associated with the development of chronic obstructive lung disease (COPD) is directly related to both the duration of the smoking habit and the number of pack–years consumed.1,2 A better understanding of how the smoking habit influences the deterioration in respiratory mechanics would be useful in the precocious diagnosis of COPD, which is usually obtained only in the later stages when respiratory function is already impaired. Owing to the high prevalence of and medical costs associated with COPD, the precocious identification and treatment of these patients is important in order to avoid the severe and expensive stages of this disease.3 \nThe alterations in respiratory mechanics due to smoking are usually evaluated using respiratory flows and volumes obtained by spirometry. However, the modifications in respiratory mechanics are not always detected by this test.4 Moreover, some patients are not able to perform spirometry reliably, as it requires good subject cooperation and maximal effort.5 The forced oscillation technique (FOT) offers a simple and detailed approach to investigate the mechanical properties of the respiratory system. This method characterizes the respiratory impedance and its two components, respiratory system resistance (Rrs) and reactance (Xrs). These parameters are usually measured at various frequencies by means of small pressure oscillations (about 2 cmH2O) superimposed at the mouth during spontaneous breathing. The method is simple, requires only passive cooperation, and no forced expiratory maneuvers. Another important advantage, particularly in pathophysiological research, is that the FOT is able to provide information on the mechanical characteristics of the respiratory system that are complementary to spirometry.6–8 The FOT was applied successfully by a number of investigators to obtain a detailed analysis of the respiratory mechanics in smokers compared with non‐smokers,4,9–11 as well as comparisons among non‐smokers, former smokers, and smokers.12 Recently, this technique has also been applied successfully in our laboratory in physiological studies of the aging process,13 in the detection of early respiratory changes in smokers,14 as well as in studies conducted in asthmatic15 and sarcoidotic16 patients.\nTherefore, the FOT has great potential to increase our knowledge regarding the pathophysiology of smoking, as well as in helping in the clinical diagnosis of respiratory alterations in this disease. However, to the best of our knowledge, there are no available data using the FOT to investigate changes in respiratory impedance in groups with different degrees of tobacco consumption.\nIn this context, the purpose of this study was twofold: (1) to evaluate the ability of the FOT indices to detect smoking‐induced respiratory alterations, with special emphasis on early alterations; and (2) to compare the diagnostic accuracy of FOT and spirometric parameters in groups with different numbers of pack–years.\nFirst, we investigated the influence of the increasing pack–years on the FOT parameters. Then, the sensitivity and specificity of the FOT parameters in identifying respiratory alterations in groups with different numbers of pack–years were analyzed. 
Finally, the diagnostic accuracy of FOT parameters was compared with the figures obtained by spirometric volumes and flows.", "As it is impossible to answer the questions proposed in this study based on a follow‐up of individual subjects, a cross‐sectional study in comparable groups of healthy individuals and smoking patients with several degrees of tobacco consumption was performed, in a similar way to the work conducted by Verbanck et al.17 Therefore, healthy control subjects with normal spirometry who had never smoked, as well as smoking subjects who were on no regular medications and had no allergic, respiratory, cardiovascular, gastrointestinal, renal, or neurological symptoms were recruited. All subjects had stable health for at least four consecutive weeks and had signed written informed consent. The institutional ethics committee approved the protocol. Baseline data, including age, height, and weight, were obtained from each subject at the time of the procedures. All smokers were current smokers and had been instructed to abstain from smoking for at least 2 h before the testing.\nThe amount of tobacco smoked and the duration of smoking were quantified using the number of pack–years, which was calculated by multiplying the mean number of packs (20 cigarettes) consumed daily by the number of years that the subject had their smoking habit.18 The smoking subjects were then stratified into subgroups of <20, 20–39, 40–59, and >60 pack–years.\nSmokers were recruited from both university personnel who smoke and patients who visited the smoking cessation clinic. In addition, another 26 patients with documented COPD and a smoking history of >23.5 pack–years were recruited from the outpatient clinic. These patients were stable at the time of testing.\nTotal respiratory resistance and reactance were measured using a forced oscillation system as described previously.19–21 These measurements were conducted in conformity with the recommendations issued by a task force from the European Respiratory Society.8 Briefly, small‐amplitude pressure variations from 4 to 32 Hz generated by a loudspeaker were applied at the mouth, using a mouthpiece. The pressure input was measured with a Honeywell 176 PC pressure transducer (Honeywell Microswitch, Boston, MA, USA) and airway flow with a screen pneumotachograph. The signals were digitized by a personal computer, and their fast Fourier transform (FFT) was computed using blocks of 4,096 points. Three measurements were made of 16 s each, and the result of the test was calculated as the mean of these measurements. To perform the FOT analysis, the volunteer remained in a sitting position, keeping the head in a normal position and breathing at functional residual capacity (FRC) through a mouthpiece. During the measurements, the subjects wore a nose clip and firmly supported their cheeks and submandibular tissue with their hands.8,22,23 \nThe validity of the data was measured by computing the coherence function. Only values with a coherence function of 0.9 or more were considered adequate.11,24,25 Any time the computed coherence was less than this threshold, the maneuver was not considered valid, and the examination was repeated. Whenever adequate coherence measurements could not be obtained according to these criteria, the patient was excluded from the study.16,14,22,23 \nResistive impedance data were subjected to linear regression analysis over the frequency range of 4–16 Hz. The resistive impedance at 0 Hz (R0) was extrapolated from this analysis. 
This parameter is related to the total resistance of the respiratory system.24 The mean resistance (Rm), commonly related to the airway caliber,7 was also calculated for this frequency range. Increases in these flow resistive properties are associated with increased work of breathing. Additionally, the slope of the resistive component of the respiratory impedance (S), which is associated with respiratory system homogeneity,12,26 was also obtained from this analysis. Negative values of this parameter reflect abnormal patterns of ventilation distribution, which are related to alterations in ventilation–perfusion relationships.\nThe mean reactance (Xm), a property usually related to respiratory system nonhomogeneity,27 was calculated based on the entire studied frequency range (4–32 Hz). Respiratory mechanical properties were also characterized by the resonance frequency (fr), which is defined as the frequency at which the Xrs equals zero, and the respiratory system dynamic compliance (Crs,dyn). Dynamic compliance reflects the lungs and bronchial wall compliances, the compliance of the chest wall/abdomen compartment, and thoracic gas compression. The time lag between spirometric and FOT measurements was always <15 min.\nUsing a closed circuit spirometer (Vitrace VT‐139; Pro‐médico, Rio de Janeiro, Brazil), measurements of forced vital capacity (FVC), forced expiratory volume for the first second (FEV1), FEV1/FVC, and the ratio of forced expiratory flow (FEF) between 25% and 75% of FVC to FVC (FEF/FVC) were obtained for subjects in a sitting position. These parameters were presented as raw data and percentiles of the predicted values (%). Predicted values for spirometry were obtained from Knudson et al.28 and Pereira et al.29 Forced expiratory maneuvers were repeated until three sequential measurements were obtained. The indices studied were those obtained through the better curve, which was selected based on the higher value of FEV1 plus FVC. Quality control of spirometry is given by the American Thoracic Society (ATS) criteria, with the software allowing the detection of non‐acceptable maneuvers.\nThe sample size for this study was calculated using the software MedCalc version 8.2 (Medicalc Software, Mariakerke, Belgium). It was based on an anticipated comparison of means obtained in preliminary studies and an assumption of type I and type II errors of 5%. The minimal sample size required was 32 subjects per group. In the present study, there were 34 volunteers in each group.\nInitially, univariate and multiple regression analyses were adjusted for pack–years and age and then applied to identify the association of these variables with the FOT parameters. These analyses were performed using Stata 8.2 software. The volunteers were then stratified and, when the achieved data presented a statistically normal distribution, the data were reanalyzed using one‐way analysis of variance (ANOVA), which was further corrected by the Tukey significant difference test. A nonparametric test (Kruskal–Wallis (KW)), associated with a Mann–Whitney U test, was applied when the data did not present in a normal distribution. These analyses were performed using Statistica 5.0 software. The results are presented as mean±standard deviation. 
A p value of <0.05 was considered statistically significant.\nThe performance of the FOT indices in the detection of smoking‐induced respiratory alterations in the several pack–years groups was evaluated by means of receiver operating characteristic (ROC) analysis.30 These evaluations were constructed using MedCalc 8.2.\nComparisons of the AUC among parameters obtained from FOT and spirometry were conducted using MedCalc 8.2, according to the theory described by Metz.31 The values of sensitivity, specificity, and area under the curve (AUC) for spirometry and FOT were obtained based on the optimal cut‐off point, as determined from the ROC curve analysis.30 ", "There were no significant differences in weight or height among the groups. However, there were significant differences in age among the groups. In general, the spirometric parameters were highest in normal subjects and decreased significantly as the pack–years increased (p<0.0001).\nUnivariate analyses (Table 2) show that all the FOT parameters presented highly significant correlations with pack–years (p<0.001). All the FOT parameters, except Crs,dyn, also correlated with age.\nThe multivariate analyses showed that the contribution of pack–years was highly significant for all the FOT parameters (p<0.001, Table 3). In contrast, the contribution of age was only significant for fr, Xm and Crs,dyn, whereas for R0, it was not significant, and S and Rm were near the limit of significance.\nThe amount of tobacco smoked significantly increased R0 (KW‐ANOVA, p<0.0001), Rm (KW‐ANOVA, p<0.0001), and S (KW‐ANOVA, p<0.0001), as seen in Figure 1. Mean values of R0 and Rm increased significantly when groups of normal and smoking subjects of <20 pack–years were compared (p<0.0001 and p<0.00001 respectively). R0 and Rm were also increased with higher pack–years, which resulted in increasing statistical significance when compared with the control group. The comparisons between adjacent groups were statistically significant only for R0 when comparing the two highest pack–years groups. On the other hand, S was not statistically significant when comparing the control with the <20 or 20–39 pack–years groups, but was significantly increased in comparison with the two highest pack–years groups (p<0.001).\nAs the amount of tobacco smoked increased, Crs,dyn (KW‐ANOVA, p<0.0001) and Xm (KW‐ANOVA, p<0.0001) were significantly reduced, whereas fr (KW‐ANOVA, p<0.0001) was increased (Figure 2). The differences comparing the control group with the <20 and 20–39 pack–years groups were not significant for fr and Xm (Figure 2A and B). In contrast, the same comparisons resulted in significant reductions in Crs,dyn (Figure 2C). Considering the comparisons between adjacent classes, significant modifications were observed between the two highest pack–years groups in all three of the reactive parameters studied.\nFigure 3 presents the ROC curves for FOT and spirometric parameters in all the studied groups. The performance of the FOT and spirometric indices in the detection of smoking‐induced respiratory alterations is described in Figure 4. Table 4 shows detailed values of area under the ROC curve (AUC), sensitivity, and specificity for the optimal cut‐off point for the FOT indices.\nThe results of the comparative analysis among the AUC of FOT and spirometric parameters are described in Figure 5. 
In general, R0 (Figure 5A), Rm (Figure 5C), and Crs,dyn (Figure 5E) presented significantly higher AUC in smoking subjects with <20 pack–years, and AUC similar to that presented by spirometric parameters as the amount of tobacco smoked increased. Spirometric parameters presented significantly higher AUC than S (Figure 5B), fr (Figure 5D), and Xm (Figure 5F) considering groups of smoking subjects with 20–39 pack–years and 40–59 pack–years.", "This study documented a significantly deleterious effect of smoking on the impedance of the respiratory system. Although many other published reports have used the FOT to compare control groups with ex‐smokers and/or smoking subjects, to the best of our knowledge, this study is the first study to investigate respiratory impedance in groups with different degrees of tobacco consumption. Earlier studies have found deleterious alterations in the respiratory impedance of smokers. The present study supports these results and also shows that these modifications are proportional to the number of pack–years the subjects smoked.\nIn agreement with earlier studies,18,32 we found that the spirometric parameters decreased as the pack–years increased. Our groups, defined by pack–years, had significant differences in age (Table 1), which is expected because the increase in pack–years demands time for exposure.\nIn the present study, we found that univariate analysis (Table 2) showed a higher relationship between the FOT parameters and pack–years (mean r2 = 0.28) than between the FOT parameters and age (mean r2 = 0.13). In fact, age was not significantly related to the resistive FOT parameters (Table 3), whereas in the reactive parameters, the contributions of age were similar to pack–years in fr and Xm, and negligible in Csr,dyn. Comparing r2 in univariate (Table 2) and multivariate analysis (Table 3), it is apparent that the introduction of age slightly increased this parameter.\nThe significant increase in R0 (Figure 1A, KW‐ANOVA, p<0.0001) and the moderate (r2 = 0.34), but significant (p<0.001), correlation with pack–years link high levels of tobacco consumption with respiratory obstruction. Smoking leads to a series of bronchial modifications that are consistent with these results, including edema, inflammation in the mucosa, and hypertrophy of the mucosal glands, resulting in hypersecretion of mucus. Hypertrophy of smooth muscle and fibrosis of the bronchial wall are two additional factors that can contribute to elevations in both airway and tissue resistances.2 Increased R0 values for smoking subjects have been reported before.4,10–12 The increase in R0 with pack–years can be explained given that modifications resulting from smoking begin in the small, peripheral airways, where large increases in resistance do not significantly increase total resistance. However, as the pack–years increase, the pathophysiological abnormalities start affecting the larger airways. Significant differences were observed comparing the control and the <20 and 20–39 pack–years groups. 
This suggests that R0 values could be useful in detecting initial airway obstructions that are associated with smoking.\nMean resistance (Rm) is associated with the caliber of the central airways.7 Therefore, obstruction of these airways could explain the increases in Rm values (Figure 1B), which could be related to inflammatory alterations.2 More specifically, Rm increased significantly with pack–years (KW‐ANOVA, p<0.0001), showing a significant (p<0.001) relationship, which would explain 24% of the variance in this index. Hayes and colleagues10 found no significant difference in Rm comparing non‐smokers and smokers. In contrast, we found significant differences between the control group and the smoking groups, even with the fewest pack–years group (<20) (Figure 1B). This suggests that Rm could also be useful in detecting early changes from smoking.\nNot all studies that have evaluated S values in smokers agree. Although Làndsér and colleagues9 and Hayes and colleagues10 reported a small, but not significant, difference comparing values of non‐smokers with smokers, significant differences were reported by Brochard and colleagues11 in similar studies conducted among non‐smokers, ex‐smokers, and smokers. Figure 1C shows that the respiratory system of the smoking subjects became more inhomogeneous with increasing pack–years. The significant modification in S (KW‐ANOVA, p<0.0001) had moderate (r2 = 0.36) and significant (p<0.0001) correlation with pack–years, which links increased tobacco consumption with decreased respiratory system homogeneity. This can be explained by inflammatory alterations2 and tissue modifications. It was not statistically possible to separate the mean S values of the control group from those of the <20 and 20–39 pack–years groups. On the other hand, comparisons between the control group and the 40–59 and >60 pack–years groups displayed significant differences (p<0.006), often associated with the more pronounced mechanical modifications present in these groups. Significant differences between adjacent groups can be found when comparing the groups with higher tobacco consumption. The shunt impedance of the upper airway walls, which are mechanically in parallel with the respiratory system, introduces an error that is expected to be large in obstructed patients33 and results in artifactual frequency dependence of the Rrs values. Therefore, besides respiratory system nonhomogeneity, the upper airway wall shunt probably affects the results described in Figure 1C.\nIn the studied groups, increased pack–years of tobacco smoking were significantly associated with a decline in Crs,dyn (Figure 2A, KW‐ANOVA, p<0.0001; r = −0.37, p<0.0001). Interestingly, a highly significant decrease could already be noted comparing the control and the <20 pack–years groups. In contrast to these results, Hayes and colleagues10 found no differences when comparing the Crs,dyn in non‐smokers and asymptomatic smokers with normal spirometry. A reduction in Crs,dyn values reflects a change in pulmonary tissue, chest wall, modification of the distensibility of the airways,7 and/or an increase in airway resistance.34 Therefore, the reduction in the Crs,dyn value observed in the smokers could be associated with a progressive increase in peripheral airway resistance or a reduction in respiratory system compliance. 
There are always highly significant statistical differences in the comparisons between non‐smoking and smoking groups, which suggests the usefulness of Crs,dyn as a sensitive index of smoking effects.\nIncreasing tobacco use in the studied groups was associated with an increase in fr (Figure 2B; KW‐ANOVA, p<0.0001; r = 0.50, p<0.0001). The comparisons of fr values between control and smoking groups do not reveal a significant difference until comparison with the >60 pack–years of tobacco exposure group. This suggests that fr is not useful as an index for the initial effects of smoking.\nThe increase in pack–years in the different groups studied resulted in decreased Xm mean values (Figure 2C; KW‐ANOVA, p<0.0001). The significant inverse correlation between Xm and pack–years (r = −0.57, p<0.0001) reflects the impact of smoking on the reduction in respiratory system homogeneity and dynamic compliance of the studied subjects. Comparing the control and smoking groups, statistically significant modifications were only observed between the control and the 40–59 pack–years group and between the control and the >60 pack–years group. Significant differences between adjacent groups could also be found when comparing the higher pack–years groups. These results suggest that Xm might be useful in detecting the respiratory effects of smoking, predominantly in the more advanced stages.\nAnother point for discussion concerns the characteristics of the subjects investigated in this study. It must be pointed out that the results presented here need to be interpreted with caution. As several of the subjects studied were recruited from the University Hospital smoking cessation and outpatient clinics, these results can probably be limited to describe this population. This is particularly true in groups with high pack–years. New studies are necessary to extrapolate these results to the general population.\nRecently, the National Heart, Lung and Blood Institute recommended that research on new technologies to improve non‐invasive tests of lung function in COPD should be a priority.35 The FOT was suggested by Crapo et al.5 as an attractive alternative for diagnosing obstruction in COPD, as it requires little patient effort and cooperation.\nCosio and colleagues36 pointed out that the pathological changes induced by tobacco use were still potentially reversible up to 770 cigarettes/day × years smoked (38.5 pack–years). Our results comparing R0, Rm, and Crs,dyn in the group with <20 pack–years (mean = 7.3±5.4 pack–years) and the group with 20–39 pack–years (mean = 29.3±5.6 pack–years) suggest that small abnormalities exist prior to 38.5 pack–years (Figures 1 and 2) and that FOT could be useful in detecting the effects of smoking, while they are still potentially reversible. In order to investigate this possibility, ROC curves were elaborated. According to the literature, ROC curves with AUCs between 0.50 and 0.70 indicate low diagnostic accuracy, AUCs between 0.70 and 0.90 indicate moderate accuracy, and AUCs between 0.90 and 1.00 indicate high accuracy.30,37 An AUC >0.80 is usually considered adequate for clinical use.30,37 Thus, S, Xm, and fr reached acceptable values for clinical use only in the group with the higher duration of the smoking habit (Figures 3 and 4, Table 4). In contrast, AUC values for R0, Rm, and Crs,dyn near to those considered for high accuracy measurements were obtained for the 20–39 pack–years group (Figures 3 and 4, Table 4). 
In this condition, Rm was the most adequate to correctly identify the effects of smoking, with a sensitivity of 82.4% and a specificity of 85.3%. As expected, because of the smaller alterations in respiratory mechanics, the <20 pack–years group showed reduced AUC values (Figures 3 and 4, Table 4). Even in this adverse situation, R0 and Rm were able to obtain AUC values considered adequate for clinical use, while Crs,dyn reached the limit value (0.78). Among all the studied parameters, R0 was the best, showing a sensitivity of 73.5% and a specificity of 73.5%. These promising results suggest that the FOT may be useful in prevention of the adverse effects of the smoking habit, non‐invasively detecting early smoking‐induced respiratory abnormalities while the pathologic changes are still potentially reversible. This also suggests that the FOT may be useful as a screening tool in the management of smoking‐induced lung disease.\nDifference in AUC has become one of the most commonly used measures for comparing the performance of two diagnostic systems.38 According to Metz,31 when we have a number of ROC curves to compare, the AUC is usually the best discriminator. In smoking subjects with <20 pack–years, this analysis was clearly in favor of R0 and Rm (Figure 5A and C). The diagnostic accuracy of Crs,dyn was higher than all the spirometric parameters, except FEF25–75% (Figure 5E). There were no statistically significant differences in AUC considering Xm, fr, and S and the spirometric parameters (Figure 5B, D and F). This suggests that R0, Rm, and Crs,dyn values could be more useful than spirometric parameters in detecting early changes associated with smoking.\nThe results obtained in the present study considering patients with 20–39 pack–years show that the area under the ROC curve was significantly larger for R0, Rm, and Crs,dyn than for FEV1(%) and FEV1(L) (Figure 5A, C and E). There were no differences in diagnostic accuracy taking into consideration the other spirometric parameters. On the other hand, several spirometric parameters presented significantly higher AUC values than S, fr, and Xm (Figure 5B, D and F).\nOostven et al.,8 suggested that, in general, the clinical diagnostic capacity of respiratory impedance measurement by the FOT is comparable to that of spirometry. In line with this hypothesis, the comparison of diagnostic accuracy of the FOT and spirometric parameters in the group of patients with 40–59 pack–years revealed similar performances of R0, Rm, and Crs,dyn and spirometric parameters (Figure 5A, C and E). The majority of the parameters obtained by spirometric measurements were significantly more accurate than S, fr, and Xm (Figure 5B, D and F), which suggests that R0, Rm, and Crs,dyn are as useful as spirometry, and that spirometric parameters are more adequate than S, fr, and Xm as indices of smoking effects in this range of pack–years.\nThe spirometric parameters FEV1(%) and FEV1(L) are standard measures of lung function commonly used in the evaluation of patients with COPD.39 The diagnostic performance of R0 (Figure 5A) and Crs,dyn (Figure 5E) was significantly higher than that of FEV1(%) and FEV1(L), and similar to that presented by other spirometric parameters in smoking subjects with >60 pack–years. Rm and S showed accuracies similar to those presented by the spirometric parameters (Figure 5B and C). 
In general, fr and Xm were less accurate than spirometric parameters (Figure 5D and F).\nSpirometric measurements require good cooperation, are also effort dependent, and can lead to temporary alterations in bronchomotor tone due to the deep inspiration required, which can have implications for respiratory mechanics measurements. In contrast, the FOT is easy to perform, requires minimal cooperation from the patient, and no respiratory maneuvers are needed. These practical considerations, together with the results of the present study, indicate that R0, Crs,dyn, Rm, and S may be added to other conventional examinations to help with the clinical evaluation of patients with COPD who are not able to adequately perform spirometric measurements.\nIn conclusion, the smoking habit resulted in changes in respiratory mechanics that were proportional to pack–years. The FOT provided resistive and reactive parameters that were in close agreement with the involved pathophysiology. An important increase in respiratory system resistance and a reduction in dynamic compliance were observed in all the groups studied.\nAccordingly, the parameters with the highest sensitivity and specificity for identifying smoking patients were R0, Rm, and Crs,dyn. Even in the group with the smallest alterations (<20 pack–years), R0 and Rm were able to obtain AUC values considered adequate for clinical use. Our data demonstrated that the FOT might be useful for early detection of obstructive disease related to the smoking habit, which agrees with the literature consensus that the mechanical modifications resulting from smoking should be detected as early as possible in order to advise smoking cessation.\nThe comparison of the diagnostic accuracy of the FOT and spirometric parameters indicated that R0, Rm, and Crs,dyn were more accurate than spirometric indices in diagnosing smaller alterations (<20 pack–years). These parameters presented diagnostic performance similar to the spirometric parameters in groups with higher pack–years.\nThese results suggest that the FOT can be proposed as a complementary method to detect the harmful effects of smoking while they are still potentially reversible, contributing to COPD prevention, diagnosis, and treatment.", "The authors would like to thank J.A. Mesquita Jr. and J. G. Santos for their assistance. The Brazilian Council for Scientific and Technological Development (CNPq) and the Rio de Janeiro State Research Supporting Foundation (FAPERJ) supported this study." ]
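The between-parameter AUC comparisons discussed above were performed in MedCalc following Metz's approach; the sketch below is a simple stand-in that contrasts two indices measured on the same subjects with a paired bootstrap of the AUC difference. It is an alternative illustration, not the method used in the study, and both scores are assumed to be oriented so that higher values indicate abnormality.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_difference(y, score_a, score_b, n_boot=2000, seed=0):
    """Paired bootstrap of the AUC difference between two indices measured on
    the same subjects (e.g. R0 vs. an inverted FEV1). Not the Metz/MedCalc
    procedure used in the study; shown only as a simple stand-in."""
    rng = np.random.default_rng(seed)
    y, a, b = (np.asarray(v) for v in (y, score_a, score_b))
    n = len(y)

    diffs = []
    while len(diffs) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:   # each resample must contain both classes
            continue
        diffs.append(roc_auc_score(y[idx], a[idx]) - roc_auc_score(y[idx], b[idx]))

    diffs = np.asarray(diffs)
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return {"mean_auc_diff": float(diffs.mean()), "ci95": (float(lo), float(hi))}
```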
[ "intro", "methods", "results", "discussion", null ]
[ "Tobacco consumption", "Chronic obstructive pulmonary disease", "Respiratory mechanics", "Forced oscillation technique", "Diagnosis" ]
Association between ear creases and peripheral arterial disease.
21340222
Peripheral arterial disease is a severe manifestation of atherosclerosis that can lead to critical ischemia of the lower limbs and is also associated with high cardiovascular risk. Diagonal lobular and anterior tragal ear creases have been associated with coronary artery disease, but they have not yet been investigated in patients with peripheral arterial disease.
INTRODUCTION
Cross-sectional study including 60 male patients with peripheral arterial disease of the lower limbs and 60 dermatologic outpatients matched for age and gender. The associations were adjusted for other risk factors by conditional logistic regression.
METHODS
The prevalence of diagonal and anterior tragal ear creases was higher among cases (73% vs. 25% and 80% vs. 43%, respectively) than controls; these associations remained significant even when adjusting for other known risk factors of atherosclerosis (odds ratio = 8.1 and 4.1, respectively).
RESULTS
Ear creases are independently associated with peripheral arterial disease and may be an external marker for risk identification.
CONCLUSIONS
[ "Aged", "Case-Control Studies", "Cross-Sectional Studies", "Ear, External", "Humans", "Lower Extremity", "Male", "Peripheral Arterial Disease", "Risk Factors" ]
3020344
INTRODUCTION
Peripheral arterial disease (PAD) is a severe manifestation of atherosclerosis that may be asymptomatic or lead to critical ischemia of the lower limbs, and is also associated with increased risk of death from cardiovascular disease.1 Some known risk factors for PAD include smoking, diabetes mellitus, familial history of atherosclerotic event, hypertension, and hypercholesterolemia.2 The diagonal lobular crease (DLC), described by Frank,3 and anterior tragal crease (ATC), by Miot et al.4 (Figure 1), were identified as factors independently associated with coronary artery disease,5-7 but they have not yet been studied in patients with PAD. Detection of external signs associated with PAD can assist in risk stratification and identification of patients who would benefit from early intervention or modification of risk factors related to disease progression. We studied the prevalence of ear creases in patients with PAD of the lower limbs, in comparison with patients without documented atherosclerotic disease.
METHODS
A cross‐sectional study consisted of interviews with adult male patients treated at the University Hospital. Cases were selected from claudication outpatients, with PAD of the lower limbs confirmed by arteriography. The control group was matched for gender and age and was recruited among dermatologic outpatients, who had not been revascularized (legs or ankle) or who had an ankle‐brachial index not <0.9 or >1.3 (arterial noncompliance). Patients who presented any of the following conditions were excluded from the study: immunosuppression, ear deformity, earring use or arterial obstruction due to other etiologies that could not be attributed to PAD. The main dependent variable was the presence of PAD in the lower limbs, and the main independent variable was the presence of bilateral auricular creases. Other covariates were diabetes mellitus, previous smoking, age, familial and personal past history of atherosclerotic vascular disease (cerebral or myocardial atherosclerotic event), dyslipidemia, body mass index, and hypertension. Patients who did not know their family history were considered to have a negative history. Categorical variables are given as frequencies and bivariately compared by the χ2 test or Fisher's exact test. Continuous variables are represented by mean and standard deviation and compared by the Student t‐test. The association between variables is expressed as an odds ratio from the bivariate analysis subsequently adjusted by conditional logistic regression, comprising covariates with p<0.2. Data were analyzed by the software SPSS 17.0. Two‐tailed p values <0.05 were considered significant. Sample size was defined according to the final multiple logistic model with α = 0.05 and power  =  85%.8 This study was approved by the institutional ethics committee.
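The adjusted odds ratios described above come from a conditional logistic regression on the age- and gender-matched case-control pairs. A minimal sketch is given below, assuming statsmodels (>= 0.10) is available and that the data sit in a pandas DataFrame; the column names and data layout are illustrative, and the bivariate screening of covariates at p < 0.2 is assumed to have been done beforehand.

```python
import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit

def adjusted_odds_ratios(df, covariates):
    """Conditional logistic regression for 1:1 matched case-control pairs.

    df is assumed to contain: 'pad' (1 = case with PAD, 0 = matched control),
    'pair_id' (identifier shared by each case and its matched control), and
    the covariate columns (e.g. ear creases, diabetes, smoking, hypertension).
    Returns the adjusted odds ratios (exponentiated coefficients)."""
    model = ConditionalLogit(df["pad"], df[covariates], groups=df["pair_id"])
    result = model.fit()
    return np.exp(result.params)

# Illustrative call (column names are assumptions, not taken from the paper):
# ors = adjusted_odds_ratios(data, ["ear_crease", "diabetes", "smoking",
#                                   "hypertension", "dyslipidemia"])
```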
RESULTS
Sixty patients with PAD and 60 controls were enrolled in the study. The main clinical and demographic characteristics of patients and controls are listed in Table 1. The bivariate analysis disclosed the higher frequencies of known PAD risk factors among patients and also identified their positive association with ear creases. DLC and ATC were also more prevalent among cases than controls in all age groups (data not shown). Multivariate analysis of the frequency of ear creases adjusted for age, diabetes mellitus, hypertension, myocardial infarction, dyslipidemia, and tobacco reinforced the association between PAD and ear creases (Table 2).
null
null
[]
[]
[]
[ "INTRODUCTION", "METHODS", "RESULTS", "DISCUSSION" ]
[ "Peripheral arterial disease (PAD) is a severe manifestation of atherosclerosis that may be asymptomatic or lead to critical ischemia of the lower limbs, and is also associated with increased risk of death from cardiovascular disease.1 \nSome known risk factors for PAD include smoking, diabetes mellitus, familial history of atherosclerotic event, hypertension, and hypercholesterolemia.2 The diagonal lobular crease (DLC), described by Frank,3 and anterior tragal crease (ATC), by Miot et al.4 (Figure 1), were identified as factors independently associated with coronary artery disease,5-7 but they have not yet been studied in patients with PAD. Detection of external signs associated with PAD can assist in risk stratification and identification of patients who would benefit from early intervention or modification of risk factors related to disease progression.\nWe studied the prevalence of ear creases in patients with PAD of the lower limbs, in comparison with patients without documented atherosclerotic disease.", "A cross‐sectional study consisted of interviews with adult male patients treated at the University Hospital.\nCases were selected from claudication outpatients, with PAD of the lower limbs confirmed by arteriography. The control group was matched for gender and age and was recruited among dermatologic outpatients, who had not been revascularized (legs or ankle) or who had an ankle‐brachial index not <0.9 or >1.3 (arterial noncompliance).\nPatients who presented any of the following conditions were excluded from the study: immunosuppression, ear deformity, earring use or arterial obstruction due to other etiologies that could not be attributed to PAD.\nThe main dependent variable was the presence of PAD in the lower limbs, and the main independent variable was the presence of bilateral auricular creases. Other covariates were diabetes mellitus, previous smoking, age, familial and personal past history of atherosclerotic vascular disease (cerebral or myocardial atherosclerotic event), dyslipidemia, body mass index, and hypertension. Patients who did not know their family history were considered to have a negative history.\nCategorical variables are given as frequencies and bivariately compared by the χ2 test or Fisher's exact test. Continuous variables are represented by mean and standard deviation and compared by the Student t‐test. The association between variables is expressed as an odds ratio from the bivariate analysis subsequently adjusted by conditional logistic regression, comprising covariates with p<0.2.\nData were analyzed by the software SPSS 17.0. Two‐tailed p values <0.05 were considered significant.\nSample size was defined according to the final multiple logistic model with α = 0.05 and power  =  85%.8 \nThis study was approved by the institutional ethics committee.", "Sixty patients with PAD and 60 controls were enrolled in the study. The main clinical and demographic characteristics of patients and controls are listed in Table 1. The bivariate analysis disclosed the higher frequencies of known PAD risk factors among patients and also identified their positive association with ear creases. 
DLC and ATC were also more prevalent among cases than controls in all age groups (data not shown).\nMultivariate analysis of the frequency of ear creases adjusted for age, diabetes mellitus, hypertension, myocardial infarction, dyslipidemia, and tobacco reinforced the association between PAD and ear creases (Table 2).", "Earlobe creases were more common in male patients with PAD than in age‐matched controls. These creases may represent an external sign of microangiopathic lesions in terminal circulation of the ear that occur at the same time as PAD since atherosclerosis is a systemic disease. In other series of patients, overall mortality and sudden death were significantly higher in the group with DLC.9,10 Male patients were selected to minimize the effect that earrings might have had on the ears, and because men are more affected by PAD and atherosclerosis than women. As the prevalence of both PAD and bilateral ear lobe creases increases over time, age matching was performed to normalize these risks between the groups.\nControls did not undergo arteriography for ethical and logistic reasons. The ankle‐brachial index was chosen as the method to select controls owing to its high specificity (99%) and sensitivity (89%) for identification of PAD.11 Moreover, the accidental inclusion of cases among controls would have diluted the association of ear creases and atherosclerosis, rather than strengthened it.\nThe noninvasive identification of external findings, including DLC and ATC, digital clubbing, cyanosis, and hair loss on extremities provides information additional to epidemiological aspects such as smoking, hypertension, diabetes and dyslipidemia in raising a suspicion of atherosclerosis and risk of cardiovascular events.12 \nFurther investigations should be performed with larger samples, heterogeneously distributed according to age, gender, ethnicity, and PAD severity, to provide a stratified risk analysis and a long‐term follow up evaluation of prognostic factors in PAD.\nBilateral DLC and ATC are semiological findings which are quick and easy to evaluate and were independently associated with PAD in this population." ]
[ "intro", "methods", "results", "discussion" ]
[ "Ear", "Atherosclerosis", "Peripheral", "Arterial", "Disease" ]
Comparison of prophylactic and therapeutic use of short-chain fatty acid enemas in diversion colitis: a study in Wistar rats.
21340226
Diversion colitis is a chronic inflammatory process affecting the dysfunctional colon, possibly evolving with mucous and blood discharge. The most favored hypotheses to explain its development is short-chain fatty-acid deficiency in the colon lumen.
INTRODUCTION
Wistar rats were submitted to colostomy with distal colon exclusion. Two control groups (A1 and B1) received rectally administered physiological saline, whereas two experimental groups (A2 and B2) received rectally administered short-chain fatty-acids. The A groups were prophylactically treated (5th to 40th days postoperatively), whereas the B groups were therapeutically treated (after post-operative day 40). The mucosal thickness of the excluded colon was measured histologically. The inflammatory reaction of the mucosal lamina propria and the lymphoid tissue response were quantified through established scores.
METHODS
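As a rough illustration of the weighted histologic scoring mentioned above, the sketch below multiplies each graded variable (intensity 0-5) by a diagnostic weight and sums the products. It is not the authors' scoring sheet: only the x5/x3/x3 weights cited later in this record are taken from the text, and the remaining variable names and weights are hypothetical (Python).

# Illustrative sketch of a weighted histologic score (hypothetical variables and weights).
WEIGHTS = {
    "lymphoid_follicles_present": 5,  # weight cited in the record (x5)
    "large_lymphoid_follicles": 3,    # weight cited in the record (x3)
    "epn_in_colonic_lumen": 3,        # weight cited in the record (x3)
    "epn_in_lamina_propria": 1,       # hypothetical weight
    "lymphocytes": 1,                 # hypothetical weight
}

def histologic_score(intensities: dict) -> int:
    """Sum of (intensity 0-5) x (weight) over the variables scored for one animal."""
    return sum(WEIGHTS[name] * intensity for name, intensity in intensities.items())

example_animal = {"lymphoid_follicles_present": 4, "large_lymphoid_follicles": 3,
                  "epn_in_colonic_lumen": 2, "epn_in_lamina_propria": 1, "lymphocytes": 2}
print(histologic_score(example_animal))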
There was a significant thickness recovery of the colonic mucosa in group B2 animals (p = 0.0001), which also exhibited a significant reduction in the number of eosinophilic polymorphonuclear cells in the lamina propria (p = 0.0126) and in the intestinal lumen (p = 0.0256). Group A2 showed no mucosal thickness recovery and significant increases in the numbers of lymphocytes (p = 0.0006) and eosinophilic polymorphonuclear cells in the lamina propria of the mucosa (p = 0.0022).
RESULTS
Therapeutic use of short-chain fatty-acids significantly reduced eosinophilic polymorphonuclear cell numbers in the intestinal wall and in the colonic lumen; it also reversed the atrophy of the colonic mucosa. Prophylactic use did not impede the development of mucosal atrophy.
CONCLUSION
[ "Animals", "Atrophy", "Colitis", "Colostomy", "Disease Models, Animal", "Enema", "Fatty Acids, Volatile", "Intestinal Mucosa", "Male", "Postoperative Care", "Random Allocation", "Rats", "Rats, Wistar" ]
3020348
INTRODUCTION
After colostomy, the nonfunctioning intestinal segment presents inflammatory alterations comprising a nosological entity known as diversion colitis (DC).1 The preferred treatment for DC is intestinal transit reconstruction which, in most cases, resolves the inflammatory process.2 Ma et al.3 analyzed 21 cases of DC and concluded that moderate chronic inflammation with lymphoplasmocytary infiltrate in the lamina propria, vascular congestion, minimal alterations in crypt architecture and a slight decline in their number were the main histopathologic alterations found in the disease. Additionally, these changes were accompanied by the presence of prominent lymphoid nodules with or without hyperplasia of the germinal centers. Haque et al.4 observed the presence of eosinophilic polymorphonuclear cells (EPNs) in the lamina propria and colonic lumen of children with DC. Pinto et al.5 reported that the onset of significant atrophy in the colonic mucosa of Wistar rats coincides with the 40th day after surgical exclusion of the colon. Grove et al.6 demonstrated the relationship between diet and the presence of short chain fatty acids (SCFAs) in the colonic lumen. Roediger showed that around 70% of colonocyte energy requirements originate in the SCFAs and, further, that 90% of these SCFAs are formed by acetic, n‐propionic and n‐butyric acid, the latter of which serves as the main energy source for colonocytes.7 The SCFAs exert trophic effects on the large bowel through direct contact of the acids with the colonic mucosa.8 Colonic trophism mechanisms occur as a result of increased energetic oxygenation, stimulated flow of blood microcirculation caused by dilated resistance arteries, enterotrophic hormone production and stimulation of the enteric nervous system. This trophism occurs transmurally and is not restricted to the mucosa.8,9 In addition to stimulating collagen maturation, these aforementioned functions are fundamental in colonic physiology and contribute to decreased bacterial translocation, intestinal adaptation in short bowel syndrome, stimulation of healing and increased anastomosis resistance.8-10 Some authors have postulated that the use of SCFAs in the nonfunctioning segment reverses the alteration of DC.2,11 Other studies were not as successful, however, and their authors question this form of treatment.12,13 The present study aimed to assess the use of SCFAs in the nonfunctioning colonic stump of Wistar rats in order to demonstrate microscopically the existence (or lack thereof) of a prophylactic or therapeutic role.
null
null
RESULTS
[SUBTITLE] Histomorphometry [SUBSECTION] Values for colonic mucosal thickness are shown in Figure 2. No significant difference was found between the control (A1) and prophylactic (A2) groups in the prevention of DC in terms of mucosal atrophy (p = 0.1680). However, there was a significant difference between the control group (B1) and treatment group (B2), p = 0.0001. [SUBTITLE] Qualitative Histological Analysis [SUBSECTION] Table 2 shows the different cell types, including the presence and size of MALT lymphoid follicles (LFs), with values expressed in accordance with Myers' modified score. The highest indices observed were the presence of LFs (×5), mainly large size (×3), and number of EPNs (×3) in the colonic lumen; these findings favored the diagnosis of DC. The histologic sections from group A1 (control) exhibiting mucosal atrophy are shown in Figure 3(a). Figure 3(b) shows sections from group A2 (prophylactic) exhibiting mucosal atrophy and slight MALT hyperplasia, and Figure 3(c) shows sections from group B1 (control) exhibiting mucosal atrophy and MALT hyperplasia. Figure 3(d) shows sections from group B2 (treatment) exhibiting normal mucosal thickness and MALT hyperplasia. The assessment of histologic scores by cell type and lymphoid follicles of MALT, which are represented by the sum of the median values in the control (A1) and prophylactic (A2) groups, is shown in Table 3. The results obtained in the qualitative analysis of the samples revealed a significant numerical increase in lymphocytes (p = 0.006) and EPNs in the lamina propria of the colonic mucosa (p = 0.0022) in the prophylactic group (A2) compared to the control group (A1). The prophylactic use of SCFAs (group A2) did not significantly reduce MALT hyperplasia compared to the control group (A1) (p = 0.0670). With respect to total cellularity, there were no significant differences between groups A1 and A2 (p = 0.2233). Table 4 shows a significant reduction in EPNs in the lamina propria of the treatment group (B2) compared to the control group (B1) (p = 0.0126), and a reduction was also observed in the intestinal lumen (p = 0.0256). We also found that the therapeutic use of SCFAs had little effect on the inhibition of MALT hyperplasia when comparing groups B2 and B1 (p = 0.5514). Total cellularity showed no significant difference between groups B1 and B2 (p = 0.0781).
null
null
[ "Animals", "Surgical Study", "Additional Interventions", "Histomorphometry", "Qualitative Histological Analysis", "Statistics", "Histomorphometry", "Qualitative Histological Analysis", "CONCLUSION", "ACKNOWLEDGEMENTS" ]
[ "Forty male Wistar rats weighing between 220 g and 230 g were housed in a room under standard conditions of temperature, light, humidity, water and diet (Labina‐Purina, São Paulo, Brazil) according to specifications described by Reeves et al.14 The animals were supplied by the vivarium of the Center for Experimental Surgery, Department of Surgery of the CCS (Centro de Ciências da Saúde) (Health Science Center) of UFRN (Universidade Federal do Rio Grande do Norte) (Federal University of Rio Grande do Norte). The animals were treated according to the use of Nonhuman Animals in Research: a Guide for Scientists.15 The surgery was performed in the Laboratory of the Operatory Technique discipline (UFRN) with the collaboration of the Department of Pathology (UFRN) and the Postgraduate Program in Health Sciences (PPGCSA (Programa de Pós‐graduação do Centro de Ciências da Saúde), UFRN). This was a prospective, analytical, experimental, intervention study.", "All of the animals were submitted to a 12‐hour fast with the exclusive use of water. Immediately before surgery, a retrograde intestinal wash was performed with physiological saline to remove all fecal matter.5 All surgical procedures were conducted under aseptic conditions. Anesthesia was obtained using pentobarbital (20 mg/Kg intraperitoneally) and ketamine (50 mg/Kg intramuscularly). The animals were allowed to breathe spontaneously throughout the experiment. They were fixed in dorsal decubitus, and trichotomy and asepsis using povidone‐iodine were performed.16 \nThe abdominal cavity was opened by a median laparotomy of around 5 cm to identify the cecum and proximal colon, which was divided 5 cm from the ileocecal valve. The distal colon was submitted to terminal suture. After its sectioning, the distal colonic segment was kept inside the abdominal cavity. The proximal segment was colostomized and exteriorized through the abdominal wall left of the median incision. A single‐barreled end colostomy was performed along with fixation of the stoma to the abdominal wall (primary maturation) with a 6‐0 polypropylene thread. The abdominal wall was then closed with separate 3‐0 cotton sutures (Fig. 1).5 ", "Four groups of ten adult Wistar rats submitted to colostomy underwent additional interventions. The protocol distributed the animals into groups A (A1 and A2) and B (B1 and B2). Two control groups (A1 and B1) received infusions of physiological saline administered rectally, whereas the two experimental groups (A2 and B2) received short‐chain fatty‐acids rectally. The A groups were prophylactically treated (5th to 40th post‐operative days, twice a week), whereas the B groups were therapeutically treated (after post‐operative day 40, for seven days). The intervention posologies are described in table 1.\nThe SCFA solution was developed in the biochemistry laboratory of the Bioscience Center at UFRN and was composed of: 75 mmol/L of sodium acetate; 35 mmol/L of sodium propionate; 20 mmol/L of butyric‐N acid; 2.5 mmol/L of calcium chloride; 7.5 mmol/L of magnesium chloride; 10 mmol of potassium chloride.17 The solutions were elaborated in an iso‐osmolar (280 mosm/L), and the pH was adjusted to seven by using appropriate amounts of NaOH or HCI.", "The mean of four microscopic measures of mucosal thickness in the rats of each subgroup was measured using a Nikon Lobophot 10 × microscope (Nikon, Tokyo, Japan). 
This equipment expresses the mean thickness in millimeters by multiplying the value obtained by the standard correction factor indicated for the lens (0.0078) as a function of its amplification and diameter. The measures were obtained for each of the transversal and longitudinal histological sections of the colonic wall using the final mean for each animal from the values found.5 ", "Different cell type counts were performed to assess neutrophilic polymorphonuclear cells (NPNs), lymphocytes, and EPNs. The lymphoid follicle size of the mucous associated lymphoid tissue (MALT) was assessed in the lamina propria of the mucosa as well as in the colonic wall.\nThe intensity of alterations was graduated on a 6‐point scale ranging from 0 to 5. These values were later converted into whole numbers using our modified version of Myers' index,18 which is based on variables appropriate to the study of DC. This procedure gave rise to a histologic score. A variable was stipulated for each type of variable analyzed, depending on whether it was favorable for DC diagnosis. This value was multiplied by the intensity of the alterations observed in the histologic sections.", "In quantitative assessment, statistical analysis compared the measures obtained in the two paired groups (A1 and A2) and (B1 and B2) using analysis of variance procedures (Student's t‐test). For qualitative assessment, exploratory analysis was performed by building tables and then comparing the mean intensity of the cells observed in each group according to cell type. To determine the existence of significant differences in cellularity between groups (A1 and A2) and (B1 and B2), the non‐parametric Mann‐Whitney test was applied. A significance level of 5% (p‐value ≤ 0.05) was set for all of the results assessed.", "Values for colonic mucosal thickness are shown in Figure 2. No significance difference was found between the control (A1) and prophylactic (A2) groups in the prevention of DC in terms of mucosal atrophy (p  =  0.1680). However, there was a significant difference between the control group (B1) and treatment group (B2), p  =  0.0001.", "Table 2 shows the different cell types, including the presence and size of MALT lymphoid follicles (LFs), with values expressed in accordance with Myers' modified score. The highest indices observed were the presence of LFs (×5), mainly large size (×3), and number of EPNs (×3) in the colonic lumen; these findings favored the diagnosis of DC.\nThe histologic sections from group A1 (control) exhibiting mucosal atrophy are shown in Figure 3(a). Figure 3(b) shows sections from group A2 (prophylactic) exhibiting mucosal atrophy and slight MALT hyperplasia, and Figure 3(c) shows sections from group B1 (control) exhibiting mucosal atrophy and MALT hyperplasia. Figure 3(d) shows sections from group B2 (treatment) exhibiting normal mucosal thickness and MALT hyperplasia.\nThe assessment of histologic scores by cell type and lymphoid follicles of MALT, which are represented by the sum of the median values in the control (A1) and prophylactic (A2) groups, is shown in Table 3. The results obtained in the qualitative analysis of the samples revealed a significant numerical increase in lymphocytes (p  =  0.006) and EPNs in the lamina propria of the colonic mucosa (p  =  0.0022) in the prophylactic group (A2) compared to the control group (A1). The prophylactic use of SCFAs (group A2) did not significantly reduce MALT hyperplasia compared to the control group (A1) (p  =  0.0670). 
With respect to total cellularity, there were no significant differences between groups A1 and A2 (p  =  0.2233).\nTable 4 shows a significant reduction in EPNs in the lamina propria of the treatment group (B2) compared to the control group (B1) (p  =  0.0126), and a reduction was also observed in the intestinal lumen (p  =  0.0256). We also found that the therapeutic use of SCFAs had little effect on the inhibition of MALT hyperplasia when comparing groups B2 and B1 (p  =  0.5514). Total cellularity showed no significant difference between groups B1 and B2 (p  =  0.0781).", "In conclusion, the prophylactic action of SCFAs on DC in terms of colonic mucosal trophism was not confirmed. This study demonstrates the therapeutic use of SCFAs in experimental DC by showing the significant effects of SCFAs on atrophy regression and EPN reduction in the intestinal lumen and lamina propria of the colonic mucosa, despite their lack of interference with the intensity of MALT hyperplasia. Thus, the therapeutic application of SCFAs may be of great significance in clinical practice, especially in patients without associated inflammatory disease. SCFAs may shorten hospitalization and favor better postoperative management in colostomized patients by reducing complications.", "This study was partially funded by a grant (no. 135226/06‐6) from CNPq." ]
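The quantitative and qualitative comparisons described in this record (mean of four microscopic thickness readings converted with the 0.0078 lens correction factor and compared by Student's t-test; histologic scores compared by the Mann-Whitney test) can be sketched as follows. This is not the authors' code, and all measurements and scores shown are hypothetical; it assumes Python with NumPy and SciPy.

# Illustrative sketch of the statistical comparisons described above (hypothetical data).
import numpy as np
from scipy import stats

CORRECTION = 0.0078  # lens correction factor cited in the record (raw reading -> mm)

def mucosal_thickness_mm(raw_readings):
    """Mean of the four microscopic readings for one animal, converted to millimeters."""
    return np.mean(raw_readings) * CORRECTION

example = mucosal_thickness_mm([38, 41, 36, 40])  # one animal, four raw readings -> ~0.30 mm

# Hypothetical per-animal mucosal thickness (mm) for groups B1 (saline) and B2 (SCFA enema).
b1 = np.array([0.28, 0.31, 0.27, 0.30, 0.29, 0.26, 0.28, 0.30, 0.27, 0.29])
b2 = np.array([0.41, 0.39, 0.44, 0.40, 0.42, 0.38, 0.43, 0.41, 0.40, 0.42])
t_stat, p_t = stats.ttest_ind(b1, b2)  # thickness compared by Student's t-test

# Hypothetical per-animal histologic scores: ordinal data compared by Mann-Whitney.
scores_b1 = [14, 16, 15, 13, 17, 15, 16, 14, 15, 16]
scores_b2 = [10, 9, 11, 12, 10, 9, 11, 10, 12, 9]
u_stat, p_u = stats.mannwhitneyu(scores_b1, scores_b2, alternative="two-sided")

print(f"thickness t-test p = {p_t:.4f}; histologic score Mann-Whitney p = {p_u:.4f}")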
[ null, null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Animals", "Surgical Study", "Additional Interventions", "Histomorphometry", "Qualitative Histological Analysis", "Statistics", "RESULTS", "Histomorphometry", "Qualitative Histological Analysis", "DISCUSSION", "CONCLUSION", "ACKNOWLEDGEMENTS" ]
[ "After colostomy, the nonfunctioning intestinal segment presents inflammatory alterations comprising a nosological entity known as diversion colitis (DC).1 The preferred treatment for DC is intestinal transit reconstruction which, in most cases, resolves the inflammatory process.2 \nMa et al.3 analyzed 21 cases of DC and concluded that moderate chronic inflammation with lymphoplasmocytary infiltrate in the lamina propria, vascular congestion, minimal alterations in crypt architecture and a slight decline in their number were the main histopathologic alterations found in the disease. Additionally, these changes were accompanied by the presence of prominent lymphoid nodules with or without hyperplasia of the germinal centers. Haque et al.4 observed the presence of eosinophilic polymorphonuclear cells (EPNs) in the lamina propria and colonic lumen of children with DC. Pinto et al.5 reported that the onset of significant atrophy in the colonic mucosa of Wistar rats coincides with the 40th day after surgical exclusion of the colon.\nGrove et al.6 demonstrated the relationship between diet and the presence of short chain fatty acids (SCFAs) in the colonic lumen. Roediger showed that around 70% of colonocyte energy requirements originate in the SCFAs and, further, that 90% of these SCFAs are formed by acetic, n‐propionic and n‐butyric acid, the latter of which serves as the main energy source for colonocytes.7 \nThe SCFAs exert trophic effects on the large bowel through direct contact of the acids with the colonic mucosa.8 Colonic trophism mechanisms occur as a result of increased energetic oxygenation, stimulated flow of blood microcirculation caused by dilated resistance arteries, enterotrophic hormone production and stimulation of the enteric nervous system. This trophism occurs transmurally and is not restricted to the mucosa.8,9 \nIn addition to stimulating collagen maturation, these aforementioned functions are fundamental in colonic physiology and contribute to decreased bacterial translocation, intestinal adaptation in short bowel syndrome, stimulation of healing and increased anastomosis resistance.8-10 \nSome authors have postulated that the use of SCFAs in the nonfunctioning segment reverses the alteration of DC.2,11 Other studies were not as successful, however, and their authors question this form of treatment.12,13 \nThe present study aimed to assess the use of SCFAs in the nonfunctioning colonic stump of Wistar rats in order to demonstrate microscopically the existence (or lack thereof) of a prophylactic or therapeutic role.", "[SUBTITLE] Animals [SUBSECTION] Forty male Wistar rats weighing between 220 g and 230 g were housed in a room under standard conditions of temperature, light, humidity, water and diet (Labina‐Purina, São Paulo, Brazil) according to specifications described by Reeves et al.14 The animals were supplied by the vivarium of the Center for Experimental Surgery, Department of Surgery of the CCS (Centro de Ciências da Saúde) (Health Science Center) of UFRN (Universidade Federal do Rio Grande do Norte) (Federal University of Rio Grande do Norte). The animals were treated according to the use of Nonhuman Animals in Research: a Guide for Scientists.15 The surgery was performed in the Laboratory of the Operatory Technique discipline (UFRN) with the collaboration of the Department of Pathology (UFRN) and the Postgraduate Program in Health Sciences (PPGCSA (Programa de Pós‐graduação do Centro de Ciências da Saúde), UFRN). 
This was a prospective, analytical, experimental, intervention study.\nForty male Wistar rats weighing between 220 g and 230 g were housed in a room under standard conditions of temperature, light, humidity, water and diet (Labina‐Purina, São Paulo, Brazil) according to specifications described by Reeves et al.14 The animals were supplied by the vivarium of the Center for Experimental Surgery, Department of Surgery of the CCS (Centro de Ciências da Saúde) (Health Science Center) of UFRN (Universidade Federal do Rio Grande do Norte) (Federal University of Rio Grande do Norte). The animals were treated according to the use of Nonhuman Animals in Research: a Guide for Scientists.15 The surgery was performed in the Laboratory of the Operatory Technique discipline (UFRN) with the collaboration of the Department of Pathology (UFRN) and the Postgraduate Program in Health Sciences (PPGCSA (Programa de Pós‐graduação do Centro de Ciências da Saúde), UFRN). This was a prospective, analytical, experimental, intervention study.\n[SUBTITLE] Surgical Study [SUBSECTION] All of the animals were submitted to a 12‐hour fast with the exclusive use of water. Immediately before surgery, a retrograde intestinal wash was performed with physiological saline to remove all fecal matter.5 All surgical procedures were conducted under aseptic conditions. Anesthesia was obtained using pentobarbital (20 mg/Kg intraperitoneally) and ketamine (50 mg/Kg intramuscularly). The animals were allowed to breathe spontaneously throughout the experiment. They were fixed in dorsal decubitus, and trichotomy and asepsis using povidone‐iodine were performed.16 \nThe abdominal cavity was opened by a median laparotomy of around 5 cm to identify the cecum and proximal colon, which was divided 5 cm from the ileocecal valve. The distal colon was submitted to terminal suture. After its sectioning, the distal colonic segment was kept inside the abdominal cavity. The proximal segment was colostomized and exteriorized through the abdominal wall left of the median incision. A single‐barreled end colostomy was performed along with fixation of the stoma to the abdominal wall (primary maturation) with a 6‐0 polypropylene thread. The abdominal wall was then closed with separate 3‐0 cotton sutures (Fig. 1).5 \nAll of the animals were submitted to a 12‐hour fast with the exclusive use of water. Immediately before surgery, a retrograde intestinal wash was performed with physiological saline to remove all fecal matter.5 All surgical procedures were conducted under aseptic conditions. Anesthesia was obtained using pentobarbital (20 mg/Kg intraperitoneally) and ketamine (50 mg/Kg intramuscularly). The animals were allowed to breathe spontaneously throughout the experiment. They were fixed in dorsal decubitus, and trichotomy and asepsis using povidone‐iodine were performed.16 \nThe abdominal cavity was opened by a median laparotomy of around 5 cm to identify the cecum and proximal colon, which was divided 5 cm from the ileocecal valve. The distal colon was submitted to terminal suture. After its sectioning, the distal colonic segment was kept inside the abdominal cavity. The proximal segment was colostomized and exteriorized through the abdominal wall left of the median incision. A single‐barreled end colostomy was performed along with fixation of the stoma to the abdominal wall (primary maturation) with a 6‐0 polypropylene thread. The abdominal wall was then closed with separate 3‐0 cotton sutures (Fig. 
1).5 \n[SUBTITLE] Additional Interventions [SUBSECTION] Four groups of ten adult Wistar rats submitted to colostomy underwent additional interventions. The protocol distributed the animals into groups A (A1 and A2) and B (B1 and B2). Two control groups (A1 and B1) received infusions of physiological saline administered rectally, whereas the two experimental groups (A2 and B2) received short‐chain fatty‐acids rectally. The A groups were prophylactically treated (5th to 40th post‐operative days, twice a week), whereas the B groups were therapeutically treated (after post‐operative day 40, for seven days). The intervention posologies are described in table 1.\nThe SCFA solution was developed in the biochemistry laboratory of the Bioscience Center at UFRN and was composed of: 75 mmol/L of sodium acetate; 35 mmol/L of sodium propionate; 20 mmol/L of butyric‐N acid; 2.5 mmol/L of calcium chloride; 7.5 mmol/L of magnesium chloride; 10 mmol of potassium chloride.17 The solutions were elaborated in an iso‐osmolar (280 mosm/L), and the pH was adjusted to seven by using appropriate amounts of NaOH or HCI.\nFour groups of ten adult Wistar rats submitted to colostomy underwent additional interventions. The protocol distributed the animals into groups A (A1 and A2) and B (B1 and B2). Two control groups (A1 and B1) received infusions of physiological saline administered rectally, whereas the two experimental groups (A2 and B2) received short‐chain fatty‐acids rectally. The A groups were prophylactically treated (5th to 40th post‐operative days, twice a week), whereas the B groups were therapeutically treated (after post‐operative day 40, for seven days). The intervention posologies are described in table 1.\nThe SCFA solution was developed in the biochemistry laboratory of the Bioscience Center at UFRN and was composed of: 75 mmol/L of sodium acetate; 35 mmol/L of sodium propionate; 20 mmol/L of butyric‐N acid; 2.5 mmol/L of calcium chloride; 7.5 mmol/L of magnesium chloride; 10 mmol of potassium chloride.17 The solutions were elaborated in an iso‐osmolar (280 mosm/L), and the pH was adjusted to seven by using appropriate amounts of NaOH or HCI.\n[SUBTITLE] Histomorphometry [SUBSECTION] The mean of four microscopic measures of mucosal thickness in the rats of each subgroup was measured using a Nikon Lobophot 10 × microscope (Nikon, Tokyo, Japan). This equipment expresses the mean thickness in millimeters by multiplying the value obtained by the standard correction factor indicated for the lens (0.0078) as a function of its amplification and diameter. The measures were obtained for each of the transversal and longitudinal histological sections of the colonic wall using the final mean for each animal from the values found.5 \nThe mean of four microscopic measures of mucosal thickness in the rats of each subgroup was measured using a Nikon Lobophot 10 × microscope (Nikon, Tokyo, Japan). This equipment expresses the mean thickness in millimeters by multiplying the value obtained by the standard correction factor indicated for the lens (0.0078) as a function of its amplification and diameter. The measures were obtained for each of the transversal and longitudinal histological sections of the colonic wall using the final mean for each animal from the values found.5 \n[SUBTITLE] Qualitative Histological Analysis [SUBSECTION] Different cell type counts were performed to assess neutrophilic polymorphonuclear cells (NPNs), lymphocytes, and EPNs. 
The lymphoid follicle size of the mucous associated lymphoid tissue (MALT) was assessed in the lamina propria of the mucosa as well as in the colonic wall.\nThe intensity of alterations was graduated on a 6‐point scale ranging from 0 to 5. These values were later converted into whole numbers using our modified version of Myers' index,18 which is based on variables appropriate to the study of DC. This procedure gave rise to a histologic score. A variable was stipulated for each type of variable analyzed, depending on whether it was favorable for DC diagnosis. This value was multiplied by the intensity of the alterations observed in the histologic sections.\nDifferent cell type counts were performed to assess neutrophilic polymorphonuclear cells (NPNs), lymphocytes, and EPNs. The lymphoid follicle size of the mucous associated lymphoid tissue (MALT) was assessed in the lamina propria of the mucosa as well as in the colonic wall.\nThe intensity of alterations was graduated on a 6‐point scale ranging from 0 to 5. These values were later converted into whole numbers using our modified version of Myers' index,18 which is based on variables appropriate to the study of DC. This procedure gave rise to a histologic score. A variable was stipulated for each type of variable analyzed, depending on whether it was favorable for DC diagnosis. This value was multiplied by the intensity of the alterations observed in the histologic sections.\n[SUBTITLE] Statistics [SUBSECTION] In quantitative assessment, statistical analysis compared the measures obtained in the two paired groups (A1 and A2) and (B1 and B2) using analysis of variance procedures (Student's t‐test). For qualitative assessment, exploratory analysis was performed by building tables and then comparing the mean intensity of the cells observed in each group according to cell type. To determine the existence of significant differences in cellularity between groups (A1 and A2) and (B1 and B2), the non‐parametric Mann‐Whitney test was applied. A significance level of 5% (p‐value ≤ 0.05) was set for all of the results assessed.\nIn quantitative assessment, statistical analysis compared the measures obtained in the two paired groups (A1 and A2) and (B1 and B2) using analysis of variance procedures (Student's t‐test). For qualitative assessment, exploratory analysis was performed by building tables and then comparing the mean intensity of the cells observed in each group according to cell type. To determine the existence of significant differences in cellularity between groups (A1 and A2) and (B1 and B2), the non‐parametric Mann‐Whitney test was applied. A significance level of 5% (p‐value ≤ 0.05) was set for all of the results assessed.", "Forty male Wistar rats weighing between 220 g and 230 g were housed in a room under standard conditions of temperature, light, humidity, water and diet (Labina‐Purina, São Paulo, Brazil) according to specifications described by Reeves et al.14 The animals were supplied by the vivarium of the Center for Experimental Surgery, Department of Surgery of the CCS (Centro de Ciências da Saúde) (Health Science Center) of UFRN (Universidade Federal do Rio Grande do Norte) (Federal University of Rio Grande do Norte). 
The animals were treated according to the use of Nonhuman Animals in Research: a Guide for Scientists.15 The surgery was performed in the Laboratory of the Operatory Technique discipline (UFRN) with the collaboration of the Department of Pathology (UFRN) and the Postgraduate Program in Health Sciences (PPGCSA (Programa de Pós‐graduação do Centro de Ciências da Saúde), UFRN). This was a prospective, analytical, experimental, intervention study.", "All of the animals were submitted to a 12‐hour fast with the exclusive use of water. Immediately before surgery, a retrograde intestinal wash was performed with physiological saline to remove all fecal matter.5 All surgical procedures were conducted under aseptic conditions. Anesthesia was obtained using pentobarbital (20 mg/Kg intraperitoneally) and ketamine (50 mg/Kg intramuscularly). The animals were allowed to breathe spontaneously throughout the experiment. They were fixed in dorsal decubitus, and trichotomy and asepsis using povidone‐iodine were performed.16 \nThe abdominal cavity was opened by a median laparotomy of around 5 cm to identify the cecum and proximal colon, which was divided 5 cm from the ileocecal valve. The distal colon was submitted to terminal suture. After its sectioning, the distal colonic segment was kept inside the abdominal cavity. The proximal segment was colostomized and exteriorized through the abdominal wall left of the median incision. A single‐barreled end colostomy was performed along with fixation of the stoma to the abdominal wall (primary maturation) with a 6‐0 polypropylene thread. The abdominal wall was then closed with separate 3‐0 cotton sutures (Fig. 1).5 ", "Four groups of ten adult Wistar rats submitted to colostomy underwent additional interventions. The protocol distributed the animals into groups A (A1 and A2) and B (B1 and B2). Two control groups (A1 and B1) received infusions of physiological saline administered rectally, whereas the two experimental groups (A2 and B2) received short‐chain fatty‐acids rectally. The A groups were prophylactically treated (5th to 40th post‐operative days, twice a week), whereas the B groups were therapeutically treated (after post‐operative day 40, for seven days). The intervention posologies are described in table 1.\nThe SCFA solution was developed in the biochemistry laboratory of the Bioscience Center at UFRN and was composed of: 75 mmol/L of sodium acetate; 35 mmol/L of sodium propionate; 20 mmol/L of butyric‐N acid; 2.5 mmol/L of calcium chloride; 7.5 mmol/L of magnesium chloride; 10 mmol of potassium chloride.17 The solutions were elaborated in an iso‐osmolar (280 mosm/L), and the pH was adjusted to seven by using appropriate amounts of NaOH or HCI.", "The mean of four microscopic measures of mucosal thickness in the rats of each subgroup was measured using a Nikon Lobophot 10 × microscope (Nikon, Tokyo, Japan). This equipment expresses the mean thickness in millimeters by multiplying the value obtained by the standard correction factor indicated for the lens (0.0078) as a function of its amplification and diameter. The measures were obtained for each of the transversal and longitudinal histological sections of the colonic wall using the final mean for each animal from the values found.5 ", "Different cell type counts were performed to assess neutrophilic polymorphonuclear cells (NPNs), lymphocytes, and EPNs. 
The lymphoid follicle size of the mucous associated lymphoid tissue (MALT) was assessed in the lamina propria of the mucosa as well as in the colonic wall.\nThe intensity of alterations was graduated on a 6‐point scale ranging from 0 to 5. These values were later converted into whole numbers using our modified version of Myers' index,18 which is based on variables appropriate to the study of DC. This procedure gave rise to a histologic score. A variable was stipulated for each type of variable analyzed, depending on whether it was favorable for DC diagnosis. This value was multiplied by the intensity of the alterations observed in the histologic sections.", "In quantitative assessment, statistical analysis compared the measures obtained in the two paired groups (A1 and A2) and (B1 and B2) using analysis of variance procedures (Student's t‐test). For qualitative assessment, exploratory analysis was performed by building tables and then comparing the mean intensity of the cells observed in each group according to cell type. To determine the existence of significant differences in cellularity between groups (A1 and A2) and (B1 and B2), the non‐parametric Mann‐Whitney test was applied. A significance level of 5% (p‐value ≤ 0.05) was set for all of the results assessed.", "[SUBTITLE] Histomorphometry [SUBSECTION] Values for colonic mucosal thickness are shown in Figure 2. No significance difference was found between the control (A1) and prophylactic (A2) groups in the prevention of DC in terms of mucosal atrophy (p  =  0.1680). However, there was a significant difference between the control group (B1) and treatment group (B2), p  =  0.0001.\nValues for colonic mucosal thickness are shown in Figure 2. No significance difference was found between the control (A1) and prophylactic (A2) groups in the prevention of DC in terms of mucosal atrophy (p  =  0.1680). However, there was a significant difference between the control group (B1) and treatment group (B2), p  =  0.0001.\n[SUBTITLE] Qualitative Histological Analysis [SUBSECTION] Table 2 shows the different cell types, including the presence and size of MALT lymphoid follicles (LFs), with values expressed in accordance with Myers' modified score. The highest indices observed were the presence of LFs (×5), mainly large size (×3), and number of EPNs (×3) in the colonic lumen; these findings favored the diagnosis of DC.\nThe histologic sections from group A1 (control) exhibiting mucosal atrophy are shown in Figure 3(a). Figure 3(b) shows sections from group A2 (prophylactic) exhibiting mucosal atrophy and slight MALT hyperplasia, and Figure 3(c) shows sections from group B1 (control) exhibiting mucosal atrophy and MALT hyperplasia. Figure 3(d) shows sections from group B2 (treatment) exhibiting normal mucosal thickness and MALT hyperplasia.\nThe assessment of histologic scores by cell type and lymphoid follicles of MALT, which are represented by the sum of the median values in the control (A1) and prophylactic (A2) groups, is shown in Table 3. The results obtained in the qualitative analysis of the samples revealed a significant numerical increase in lymphocytes (p  =  0.006) and EPNs in the lamina propria of the colonic mucosa (p  =  0.0022) in the prophylactic group (A2) compared to the control group (A1). The prophylactic use of SCFAs (group A2) did not significantly reduce MALT hyperplasia compared to the control group (A1) (p  =  0.0670). 
With respect to total cellularity, there were no significant differences between groups A1 and A2 (p  =  0.2233).\nTable 4 shows a significant reduction in EPNs in the lamina propria of the treatment group (B2) compared to the control group (B1) (p  =  0.0126), and a reduction was also observed in the intestinal lumen (p  =  0.0256). We also found that the therapeutic use of SCFAs had little effect on the inhibition of MALT hyperplasia when comparing groups B2 and B1 (p  =  0.5514). Total cellularity showed no significant difference between groups B1 and B2 (p  =  0.0781).\nTable 2 shows the different cell types, including the presence and size of MALT lymphoid follicles (LFs), with values expressed in accordance with Myers' modified score. The highest indices observed were the presence of LFs (×5), mainly large size (×3), and number of EPNs (×3) in the colonic lumen; these findings favored the diagnosis of DC.\nThe histologic sections from group A1 (control) exhibiting mucosal atrophy are shown in Figure 3(a). Figure 3(b) shows sections from group A2 (prophylactic) exhibiting mucosal atrophy and slight MALT hyperplasia, and Figure 3(c) shows sections from group B1 (control) exhibiting mucosal atrophy and MALT hyperplasia. Figure 3(d) shows sections from group B2 (treatment) exhibiting normal mucosal thickness and MALT hyperplasia.\nThe assessment of histologic scores by cell type and lymphoid follicles of MALT, which are represented by the sum of the median values in the control (A1) and prophylactic (A2) groups, is shown in Table 3. The results obtained in the qualitative analysis of the samples revealed a significant numerical increase in lymphocytes (p  =  0.006) and EPNs in the lamina propria of the colonic mucosa (p  =  0.0022) in the prophylactic group (A2) compared to the control group (A1). The prophylactic use of SCFAs (group A2) did not significantly reduce MALT hyperplasia compared to the control group (A1) (p  =  0.0670). With respect to total cellularity, there were no significant differences between groups A1 and A2 (p  =  0.2233).\nTable 4 shows a significant reduction in EPNs in the lamina propria of the treatment group (B2) compared to the control group (B1) (p  =  0.0126), and a reduction was also observed in the intestinal lumen (p  =  0.0256). We also found that the therapeutic use of SCFAs had little effect on the inhibition of MALT hyperplasia when comparing groups B2 and B1 (p  =  0.5514). Total cellularity showed no significant difference between groups B1 and B2 (p  =  0.0781).", "Values for colonic mucosal thickness are shown in Figure 2. No significance difference was found between the control (A1) and prophylactic (A2) groups in the prevention of DC in terms of mucosal atrophy (p  =  0.1680). However, there was a significant difference between the control group (B1) and treatment group (B2), p  =  0.0001.", "Table 2 shows the different cell types, including the presence and size of MALT lymphoid follicles (LFs), with values expressed in accordance with Myers' modified score. The highest indices observed were the presence of LFs (×5), mainly large size (×3), and number of EPNs (×3) in the colonic lumen; these findings favored the diagnosis of DC.\nThe histologic sections from group A1 (control) exhibiting mucosal atrophy are shown in Figure 3(a). Figure 3(b) shows sections from group A2 (prophylactic) exhibiting mucosal atrophy and slight MALT hyperplasia, and Figure 3(c) shows sections from group B1 (control) exhibiting mucosal atrophy and MALT hyperplasia. 
Figure 3(d) shows sections from group B2 (treatment) exhibiting normal mucosal thickness and MALT hyperplasia.\nThe assessment of histologic scores by cell type and lymphoid follicles of MALT, which are represented by the sum of the median values in the control (A1) and prophylactic (A2) groups, is shown in Table 3. The results obtained in the qualitative analysis of the samples revealed a significant numerical increase in lymphocytes (p  =  0.006) and EPNs in the lamina propria of the colonic mucosa (p  =  0.0022) in the prophylactic group (A2) compared to the control group (A1). The prophylactic use of SCFAs (group A2) did not significantly reduce MALT hyperplasia compared to the control group (A1) (p  =  0.0670). With respect to total cellularity, there were no significant differences between groups A1 and A2 (p  =  0.2233).\nTable 4 shows a significant reduction in EPNs in the lamina propria of the treatment group (B2) compared to the control group (B1) (p  =  0.0126), and a reduction was also observed in the intestinal lumen (p  =  0.0256). We also found that the therapeutic use of SCFAs had little effect on the inhibition of MALT hyperplasia when comparing groups B2 and B1 (p  =  0.5514). Total cellularity showed no significant difference between groups B1 and B2 (p  =  0.0781).", "In clinical practice, the objective assessment of DC is made according to three main criteria: (1) analysis of mucosal thickness; (2) study of the role of inflammatory cells of the lamina propria and MALT and; (3) investigation of surface colonocyte alterations. The colonic mucosa exhibited reduced thickness in all of the cases studied. The latter parameter is therefore more reliable than the others, as it can be easily measured.19 \nThe relevant results obtained in the present study show that SCFAs are effective for the treatment of DC as they reverse colonic mucosal atrophy. Therapeutic use of SCFAs has also been shown to reduce the number of EPNs in the lamina propria of the mucosa and colonic lumen, thereby decreasing the inflammatory process. In contrast, therapeutic SCFA use did not benefit the nonfunctioning colonic tissue by reversing lymphoid follicular hyperplasia in MALT.20 SCFAs were ineffective when used as DC prophylaxis, as shown by the fact that they did not impede mucosal atrophy on the 40th day postoperatively.\nThese findings corroborate data described in the literature. Harig et al.,2 for example, observed clinical, endoscopic, and histopathologic reversal in four patients with DC who underwent SCFA treatment in the excluded segment. Worsening resulted with treatment interruption or when the infusion solution was replaced by physiological saline.\nWith the exclusive use of topically infused SCFAs, Kiely et al.11 observed symptom remission and significant improvement in the endoscopic and histopathologic DC findings in three out of five patients studied.\nSengupta et al.21 submitted rats to a fiber‐free diet to establish DC‐like intestinal atrophy. These authors observed that the topical use of butyrate favored increased cellularity in the colonic crypt, elevated mitoses, and consequent cell proliferation and atrophy reversal. The authors also observed that the effect of butyrate was dose‐dependent and that its action was of short duration.\nOliveira‐Neto and Aguilar‐Nascimento22 assessed the infusion effect of a solution containing fibers on the nonfunctioning colonic stump of 11 patients. 
These authors observed a significant decrease in the degree of DC, as well as a significant increase in crypt depth after infusion. Given that SCFAs are a result of fiber degradation caused by anaerobic bacteria in the colonic lumen, these findings suggested that SCFAs were directly responsible for DC remission.\nSome researchers, however, found no relevance in the use of SCFAs as an active agent in DC remission. Guillemot et al. published a double‐blind study with thirteen patients.12 These authors observed no significant endoscopic or histopathologic differences in DC reversal between the SCFAs and physiological saline infusion groups.\nSchauber et al.13 examined 9 patients in a double‐blind study and only 7 of these completed the protocol. All of the subjects had intestinal inflammatory disease and colostomy was indicated. The study showed no significant endoscopic or bacteriologic differences between the control and SCFA groups.\nThe disparity in results between these authors and those who observed data favoring the use of SCFAs may be explained by the differences in clinical indications for colostomy. Recent studies have revealed the inhibitory action of butyrate metabolism in patients with idiopathic ulcerative colitis.23 \nThe trophic action of SCFAs on the large intestine, which occurs through its direct contact with the colonic mucosa, is important in the clinical control of DC.8 This action reduces the signs and symptoms associated with the condition itself (e.g. mucous discharge and transrectal bleeding). Further, SCFAs also prevent the emergence of complications related to mucosal atrophy and colonic epithelium lesion, as well as complications inherent to re‐establishing intestinal transit.\nNeut et al.20 reported that the colonic mucosal atrophy with loss of integrity seen in DC could predispose patients to bacterial translocation by interfering with local immunity and modifying native bacterial flora, both quantitatively and qualitatively. The present study found that the use of SCFAs for a short period of time not only reversed atrophy but also reduced the risk of losing mucosal integrity. Additionally, it did not interfere with MALT hyperplasia of the nonfunctioning colon. The fact that this latter finding has not been reported in the literature makes it scientifically relevant. These results, therefore, favor the preservation of local immunity while avoiding the modification of native bacterial flora. Furthermore, bacterial translocation was not observed in the results reported by Pinto Jr. et al.,24 corroborating the results obtained in this study.\nLim et al.25 raised the hypothesis that DC is a factor predisposing patients to the emergence of idiopathic ulcerative rectocolitis. This occurs through the sensitization of leukocytes in the nonfunctioning colon and subsequent leukocyte aggression toward the endothelium of the functional colon as a result of the emergence of anticolonic self‐antibodies. The therapeutic action of SCFAs observed in this study would reduce the possibility of disease evolution.\nOwing to their high morbidity and mortality rates, intestinal anastomosis fistulae remain the greatest threat to gastrointestinal tract surgeons. Pearce et al.26 observed higher re‐anastomosis dehiscence indices when the interval for transit reconstruction was more than six months, which is sufficient time for atrophy of the nonfunctioning colon wall to occur. 
SCFA enemas facilitated the healing process of colonic anastomosis in rats.10 Possible mechanisms that might mediate this effect include an increase in cell proliferation of the colonic mucosa and the acceleration of collagen maturation.27 The first mechanism accelerates re‐epithelization and increases blood flow, with a consequent rise in oxygen supply. The topical use of SCFAs twice daily for 7 days (the therapeutic methodology used in this study) reversed colonic mucosal atrophy. This therapy would reduce the potential risk of fistula formation in patients undergoing intestinal transit reconstruction.\nThe use of SCFAs is important for proper nutrition of the nonfunctioning colonocyte, because SCFAs reverse colonocyte atrophy and therefore minimize symptoms in some patients. This treatment may be particularly valuable for those patients undergoing intestinal transit reconstruction, however, because the aim of atrophy reversal is to prevent complications inherent to the surgery.", "In conclusion, the prophylactic action of SCFAs on DC in terms of colonic mucosal trophism was not confirmed. This study demonstrates the therapeutic use of SCFAs in experimental DC by showing the significant effects of SCFAs on atrophy regression and EPN reduction in the intestinal lumen and lamina propria of the colonic mucosa, despite their lack of interference with the intensity of MALT hyperplasia. Thus, the therapeutic application of SCFAs may be of great significance in clinical practice, especially in patients without associated inflammatory disease. SCFAs may shorten hospitalization and favor better postoperative management in colostomized patients by reducing complications.", "This study was partially funded by a grant (no. 135226/06‐6) from CNPq." ]
[ "intro", "materials|methods", null, null, null, null, null, null, "results", null, null, "discussion", null, null ]
[ "Colostomy", "Short‐Chain Fatty Acids", "Diversion colitis", "Prophylactic", "Treatment" ]
Passive stiffness of rat skeletal muscle undernourished during fetal development.
21340228
A poor nutritional supply during fetal development affects the physiological functions of the fetus. From a mechanical point of view, skeletal muscle can also be characterized by its resistance to passive stretch.
INTRODUCTION
Male Wistar rats were divided into two groups according to their mothers' diet during pregnancy: a control group (mothers fed a 17% protein diet) and an isocaloric low-protein group (mothers fed a 7.8% protein diet). At birth, all mothers received a standardized meal ad libitum. At the ages of 25 and 90 days, the soleus and extensor digitorum longus (EDL) muscles were removed in order to test their passive mechanical properties. The first mechanical test was an incremental stepwise extension test using fast-velocity stretching (500 mm/s), which enabled us to measure, for each stepwise extension, the dynamic stress (σd) and the steady stress (σs). The second test was a slow-velocity stretch used to calculate normalized stiffness and the tangent modulus from the stress-strain relationship.
METHODS
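A minimal sketch of how the dynamic stress (σd) and steady stress (σs) defined above could be extracted from the force trace of one stepwise extension. It is not the authors' analysis code: the sampling rate, the one-second averaging window, and the synthetic relaxation trace are assumptions (Python with NumPy).

# Illustrative sketch: dynamic and steady stress for one stepwise extension (hypothetical trace).
import numpy as np

def step_stresses(force_mn, pcsa_mm2, fs_hz=1000):
    """Return (sigma_d, sigma_s) in kPa (mN/mm^2 = kPa) for one step held until relaxation plateaus."""
    sigma_d = force_mn.max() / pcsa_mm2            # peak force at the end of the fast stretch
    sigma_s = force_mn[-fs_hz:].mean() / pcsa_mm2  # mean force over the last second of the 80-s hold
    return sigma_d, sigma_s

# Synthetic stress-relaxation trace: decay from a 120 mN peak toward a 60 mN plateau over 80 s.
t = np.arange(0.0, 80.0, 1.0 / 1000)
force = 60.0 + 60.0 * np.exp(-t / 8.0)
print(step_stresses(force, pcsa_mm2=10.0))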
The mechanical measurements showed a marked increase in passive stiffness in both the soleus and EDL muscles of weaned rats. In contrast, no modification was observed in young adult rats.
RESULTS
The increase in passive stiffness in the skeletal muscle of weaned rats subjected to intrauterine undernutrition is most likely due to changes in the muscle's passive (noncontractile) structures.
CONCLUSIONS
[ "Analysis of Variance", "Animals", "Animals, Newborn", "Diet, Protein-Restricted", "Elasticity", "Female", "Fetal Development", "Male", "Malnutrition", "Models, Animal", "Muscle Contraction", "Muscle, Skeletal", "Pregnancy", "Prenatal Exposure Delayed Effects", "Prenatal Nutritional Physiological Phenomena", "Rats", "Rats, Wistar", "Reflex, Stretch", "Weaning" ]
3020350
INTRODUCTION
Numerous studies have shown the influence of nutrient supply on development in utero.1–4 A poor nutrition supply during fetal development affects physiological functions of the fetus and has long‐term consequences at adulthood.5,6 This concept of “programming” represents the mechanism whereby a stimulus or an insult during a critical developmental period has permanent effects on structure, physiology, and metabolism.7 There is evidence of programming affecting structure and function of skeletal muscles postnatally.4 For example, Ozanne et al.3,8 showed alterations in muscle metabolic capacities in rats undernourished during fetal development and lactation. Modifications in muscle fiber type distribution in both young and adult mammals have been reported as well as a decrease in the fiber density in the diaphragm of pups whose mothers had suffered nutritional deprivation.1,9,10 However, few studies have examined the consequence of early undernutrition in mechanical muscle properties. In a previous study, we have shown modifications in both contractile and series elastic properties of rat muscles undernourished during fetal development.11 From a mechanical point of view, skeletal muscle can also be characterized by its resistance to passive stretch. From a functional point of view, muscle passive properties are important to take into account because (1) these characteristics contribute in part to the maximal joint range of motion, (2) part of the force developed by the contracting muscle will be devoted to the stretch of passive antagonist and (3) a relation between passive stiffness and spindle discharge has been shown.12 The aim of this study was to evaluate the effect of a low‐protein diet during fetal life on the passive mechanical properties of a postural muscle (soleus) and a nonpostural muscle (extensor digitorum longus, EDL). This study was conducted in weaned rats and in young adult rats to analyze the short‐ and long‐term effects of this early nutritional manipulation.
null
null
RESULTS
The body mass of pups from UN mothers was significantly lower at 25 days (92.6 ± 5.18 g vs 71.3 ± 1.3 g for the C and UN groups, respectively; p<0.05) and at 90 days (449.5 ± 9.2 g vs 413.2 ± 14 g for the C and UN groups, respectively; p<0.05). Absolute and relative masses of the soleus and EDL muscles were significantly smaller in the UN group than in the C group in both weaned and young adult rats (Fig. 3). Results of the incremental stepwise extension test indicated increases in resistance to passive stretch for each extension in both the soleus and EDL muscles in weaned rats (Fig. 4). At this age, the soleus muscle of UN rats showed increases in dynamic tension of 40%, 48%, 57%, and 52% for the first, second, third, and fourth increments, respectively (Fig. 4). Similar increases were obtained in the EDL muscle. In addition, undernutrition induced an increase in steady tension of about 65% in the soleus and 100% in the EDL (Fig. 4). At 90 days, no difference in either dynamic tension or steady tension was observed in the soleus or the EDL between the control group and the undernourished group. Passive force developed at 25% strain during the stretch–release test was not modified in the soleus and EDL muscles of weaned and young adult rats (Fig. 5). When passive force was expressed in terms of normalized tension (i.e., force divided by PCSA), there was an increase in passive tension in both the soleus and the EDL muscles in weaned rats (Fig. 5). This increase in resistance to passive stretch observed in the soleus and EDL of the UN group was also confirmed by increases in the tangent modulus and in normalized stiffness. In young adult rats, no difference was observed in these parameters between groups in either the soleus or the EDL muscles.
null
null
[ "Experimental animals", "Biomechanical analysis", "Statistical analysis", "CONCLUSIONS" ]
[ "Virgin female Wistar rats (body mass 281.86 ± 14.97 g) were housed individually with males under standardized conditions. On the day the copulation plug was found, the females were isolated and assigned to one of two experimental groups: a control group (C, n = 11) and an undernourished group (UN, n = 11). During gestation, rats of group C were fed a control diet (17% protein) according to the recommendations of AIN‐93G,13 and UN animals received a low‐protein isocaloric diet (7.8%) ad libitum (Table 1). On the first day after birth, all mothers received a control diet (17% of protein) ad libitum and litters were limited to six male pups per mother. At weaning, pups were fed a standardized meal (17% of protein) ad libitum until 60 days old.13 Afterwards, offspring received a 12% protein diet ad libitum until 90 days of age.\nThe protocols used in the present study were in accordance with the guidelines and regulations of the Ethical Hygiene and Safety Committee of the Compiègne University of Technology.", "Rats 25 days old (n = 14) and 90 days old (n = 16) were anesthetized with an intraperitoneal injection of sodium pentobarbital (30 mg/kg of body mass). The soleus and EDL muscles were carefully excised from the hind limb and placed in a dissection chamber containing Ringer's solution (composition in mM: NaCl, 115; NaHCO3, 28; CaCl2, 2.5, MgSO4, 3.1, KCl, 3.5, KH2PO4, 1.4; glucose, 11.1) maintained at 25°C and oxygenated with a gas mixture of 95% O2 and 5% CO2 that resulted in a pH of 7.3. At the end of the experiment all animals was killed in accordance with the animal committee at Compiègne University of Technology. The proximal part of the muscle was fixed to a force transducer and the distal extremity was linked to the moving part of a servocontrolled ergometer described in detail elsewhere.14 The muscle was adjusted from its slack length (Ls), i.e., the muscle length from which a resting tension of 10 mN was obtained. It was then submitted to two different procedures: an incremental stepwise extension test and a stretch‐release test at slow velocity. For each test type, three tests were performed, the first two tests were used for preconditioning the muscle and the third test served for data analysis.\nWith regard to the incremental stepwise extension test, the muscle was stretched by four successive stepwise extensions, initially imposed from Ls. Each stepwise extension consisted of a 5% Ls step at fast velocity (500 mm/s) that was maintained for 80 s to observe a reduction in tension toward a plateau value. After the fourth stepwise extension, the muscle was suddenly released to Ls. This test enabled us to measure, for each extension stepwise, the dynamic force (Fd) that corresponded to the maximal force developed by muscle at the end of the fast extension and the steady force (Fs) at the end of the plateau in length (Fig. 1). Then, Fd and Fs were divided by the physiological cross‐sectional area (PCSA) of the muscle, which yields the dynamic tension (σd) and the steady tension (σs). PCSA of muscle was calculated using the equation PCSA  =  MW/(1.06 × Lf), where MW is muscle mass, 1.06 is the muscle density (in g/cm3), and Lf is the fiber length. Lf corresponds to 72% and 44% of the length of the soleus and the EDL muscles, respectively.15,16 \nThe stretch–release test consisted of stretching the muscles at amplitude up to 125% Ls with a slow velocity (0.1 mm/s) following by a release until Ls with the same velocity (Fig. 2). 
From these data, stress (i.e., passive force normalized in respect of PCSA) and strain (i.e., deformation/Ls) were calculated to construct the stress–strain curve. From this curve, tension at 125% Ls (F125/PCSA), stiffness at 125% Ls and tangent modulus (i.e., slope in the linear portion of the stress‐strain curve) were calculated.", "All data are presented as mean ± SEM. A two‐way (to evaluate the effect of the age and diet on body mass) and three‐way (to evaluate the effect of age, diet and muscle on the other parameters) Analysis of variance (ANOVA) for repeated measurements followed by the Holm Sidak post hoc test were performed. A level of 95% was set as the statistical difference. The statistical treatment of the data was performed with the Sigmastat software (Systat Software, Inc., Chicago, IL).", "This study has permitted understanding of the effect of a prenatal undernutrition on the passive elastic component of the postural muscle (soleus) and a nonpostural muscle (EDL). Prenatal undernutrition showed short‐term alterations in passive stiffness that can be explained in terms of adaptations in passive structures and/or distribution of endosarcomeric and exosarcomeric proteins in the skeletal muscle. However, further biochemical investigations are necessary to establish the effects of this particular nutritional manipulation in a noncontractile protein profile of skeletal muscle." ]
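To make the normalization described above concrete, the following minimal Python sketch computes PCSA from the stated equation (PCSA = MW/(1.06 × Lf)), converts measured forces into dynamic and steady tensions, and estimates the tangent modulus as the slope of the linear portion of a stress–strain curve. All numerical values and the toy passive-force curve are hypothetical placeholders, not data from the study.

```python
import numpy as np

# --- Hypothetical example values (not from the study) ---
muscle_mass_g = 0.12          # MW, muscle wet mass in g
muscle_length_mm = 28.0       # Ls, slack length in mm
fiber_fraction = 0.72         # Lf/muscle length; 0.72 for soleus, 0.44 for EDL

# PCSA = MW / (1.06 * Lf), with muscle density 1.06 g/cm^3
fiber_length_cm = fiber_fraction * muscle_length_mm / 10.0   # mm -> cm
pcsa_cm2 = muscle_mass_g / (1.06 * fiber_length_cm)

# Normalize forces (mN) from a stepwise extension test to tensions (mN/cm^2)
Fd_mN = np.array([12.0, 25.0, 44.0, 70.0])    # dynamic force per 5% Ls step (placeholder)
Fs_mN = np.array([6.0, 13.0, 22.0, 35.0])     # steady force at end of each plateau (placeholder)
sigma_d = Fd_mN / pcsa_cm2                     # dynamic tension
sigma_s = Fs_mN / pcsa_cm2                     # steady tension

# Stress-strain curve from a slow stretch up to 125% Ls (toy exponential passive curve)
length_mm = np.linspace(muscle_length_mm, 1.25 * muscle_length_mm, 100)
force_mN = 0.05 * (np.exp(12.0 * (length_mm / muscle_length_mm - 1.0)) - 1.0)
strain = (length_mm - muscle_length_mm) / muscle_length_mm
stress = force_mN / pcsa_cm2

# Tangent modulus: slope of the (approximately) linear final portion of the curve
linear = strain >= 0.15
tangent_modulus = np.polyfit(strain[linear], stress[linear], 1)[0]

print(f"PCSA = {pcsa_cm2:.3f} cm^2")
print(f"Dynamic tensions (mN/cm^2): {np.round(sigma_d, 1)}")
print(f"Steady tensions  (mN/cm^2): {np.round(sigma_s, 1)}")
print(f"Tension at 125% Ls = {stress[-1]:.1f} mN/cm^2, tangent modulus = {tangent_modulus:.1f} mN/cm^2")
```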
[ null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Experimental animals", "Biomechanical analysis", "Statistical analysis", "RESULTS", "DISCUSSION", "CONCLUSIONS" ]
[ "Numerous studies have shown the influence of nutrient supply on development in utero.1–4 \nA poor nutrition supply during fetal development affects physiological functions of the fetus and has long‐term consequences at adulthood.5,6 This concept of “programming” represents the mechanism whereby a stimulus or an insult during a critical developmental period has permanent effects on structure, physiology, and metabolism.7 There is evidence of programming affecting structure and function of skeletal muscles postnatally.4 For example, Ozanne et al.3,8 showed alterations in muscle metabolic capacities in rats undernourished during fetal development and lactation. Modifications in muscle fiber type distribution in both young and adult mammals have been reported as well as a decrease in the fiber density in the diaphragm of pups whose mothers had suffered nutritional deprivation.1,9,10 However, few studies have examined the consequence of early undernutrition in mechanical muscle properties.\nIn a previous study, we have shown modifications in both contractile and series elastic properties of rat muscles undernourished during fetal development.11 From a mechanical point of view, skeletal muscle can also be characterized by its resistance to passive stretch. From a functional point of view, muscle passive properties are important to take into account because (1) these characteristics contribute in part to the maximal joint range of motion, (2) part of the force developed by the contracting muscle will be devoted to the stretch of passive antagonist and (3) a relation between passive stiffness and spindle discharge has been shown.12 \nThe aim of this study was to evaluate the effect of a low‐protein diet during fetal life on the passive mechanical properties of a postural muscle (soleus) and a nonpostural muscle (extensor digitorum longus, EDL). This study was conducted in weaned rats and in young adult rats to analyze the short‐ and long‐term effects of this early nutritional manipulation.", "[SUBTITLE] Experimental animals [SUBSECTION] Virgin female Wistar rats (body mass 281.86 ± 14.97 g) were housed individually with males under standardized conditions. On the day the copulation plug was found, the females were isolated and assigned to one of two experimental groups: a control group (C, n = 11) and an undernourished group (UN, n = 11). During gestation, rats of group C were fed a control diet (17% protein) according to the recommendations of AIN‐93G,13 and UN animals received a low‐protein isocaloric diet (7.8%) ad libitum (Table 1). On the first day after birth, all mothers received a control diet (17% of protein) ad libitum and litters were limited to six male pups per mother. At weaning, pups were fed a standardized meal (17% of protein) ad libitum until 60 days old.13 Afterwards, offspring received a 12% protein diet ad libitum until 90 days of age.\nThe protocols used in the present study were in accordance with the guidelines and regulations of the Ethical Hygiene and Safety Committee of the Compiègne University of Technology.\nVirgin female Wistar rats (body mass 281.86 ± 14.97 g) were housed individually with males under standardized conditions. On the day the copulation plug was found, the females were isolated and assigned to one of two experimental groups: a control group (C, n = 11) and an undernourished group (UN, n = 11). 
During gestation, rats of group C were fed a control diet (17% protein) according to the recommendations of AIN‐93G,13 and UN animals received a low‐protein isocaloric diet (7.8%) ad libitum (Table 1). On the first day after birth, all mothers received a control diet (17% of protein) ad libitum and litters were limited to six male pups per mother. At weaning, pups were fed a standardized meal (17% of protein) ad libitum until 60 days old.13 Afterwards, offspring received a 12% protein diet ad libitum until 90 days of age.\nThe protocols used in the present study were in accordance with the guidelines and regulations of the Ethical Hygiene and Safety Committee of the Compiègne University of Technology.\n[SUBTITLE] Biomechanical analysis [SUBSECTION] Rats 25 days old (n = 14) and 90 days old (n = 16) were anesthetized with an intraperitoneal injection of sodium pentobarbital (30 mg/kg of body mass). The soleus and EDL muscles were carefully excised from the hind limb and placed in a dissection chamber containing Ringer's solution (composition in mM: NaCl, 115; NaHCO3, 28; CaCl2, 2.5, MgSO4, 3.1, KCl, 3.5, KH2PO4, 1.4; glucose, 11.1) maintained at 25°C and oxygenated with a gas mixture of 95% O2 and 5% CO2 that resulted in a pH of 7.3. At the end of the experiment all animals was killed in accordance with the animal committee at Compiègne University of Technology. The proximal part of the muscle was fixed to a force transducer and the distal extremity was linked to the moving part of a servocontrolled ergometer described in detail elsewhere.14 The muscle was adjusted from its slack length (Ls), i.e., the muscle length from which a resting tension of 10 mN was obtained. It was then submitted to two different procedures: an incremental stepwise extension test and a stretch‐release test at slow velocity. For each test type, three tests were performed, the first two tests were used for preconditioning the muscle and the third test served for data analysis.\nWith regard to the incremental stepwise extension test, the muscle was stretched by four successive stepwise extensions, initially imposed from Ls. Each stepwise extension consisted of a 5% Ls step at fast velocity (500 mm/s) that was maintained for 80 s to observe a reduction in tension toward a plateau value. After the fourth stepwise extension, the muscle was suddenly released to Ls. This test enabled us to measure, for each extension stepwise, the dynamic force (Fd) that corresponded to the maximal force developed by muscle at the end of the fast extension and the steady force (Fs) at the end of the plateau in length (Fig. 1). Then, Fd and Fs were divided by the physiological cross‐sectional area (PCSA) of the muscle, which yields the dynamic tension (σd) and the steady tension (σs). PCSA of muscle was calculated using the equation PCSA  =  MW/(1.06 × Lf), where MW is muscle mass, 1.06 is the muscle density (in g/cm3), and Lf is the fiber length. Lf corresponds to 72% and 44% of the length of the soleus and the EDL muscles, respectively.15,16 \nThe stretch–release test consisted of stretching the muscles at amplitude up to 125% Ls with a slow velocity (0.1 mm/s) following by a release until Ls with the same velocity (Fig. 2). From these data, stress (i.e., passive force normalized in respect of PCSA) and strain (i.e., deformation/Ls) were calculated to construct the stress–strain curve. 
From this curve, tension at 125% Ls (F125/PCSA), stiffness at 125% Ls and tangent modulus (i.e., slope in the linear portion of the stress‐strain curve) were calculated.\nRats 25 days old (n = 14) and 90 days old (n = 16) were anesthetized with an intraperitoneal injection of sodium pentobarbital (30 mg/kg of body mass). The soleus and EDL muscles were carefully excised from the hind limb and placed in a dissection chamber containing Ringer's solution (composition in mM: NaCl, 115; NaHCO3, 28; CaCl2, 2.5, MgSO4, 3.1, KCl, 3.5, KH2PO4, 1.4; glucose, 11.1) maintained at 25°C and oxygenated with a gas mixture of 95% O2 and 5% CO2 that resulted in a pH of 7.3. At the end of the experiment all animals was killed in accordance with the animal committee at Compiègne University of Technology. The proximal part of the muscle was fixed to a force transducer and the distal extremity was linked to the moving part of a servocontrolled ergometer described in detail elsewhere.14 The muscle was adjusted from its slack length (Ls), i.e., the muscle length from which a resting tension of 10 mN was obtained. It was then submitted to two different procedures: an incremental stepwise extension test and a stretch‐release test at slow velocity. For each test type, three tests were performed, the first two tests were used for preconditioning the muscle and the third test served for data analysis.\nWith regard to the incremental stepwise extension test, the muscle was stretched by four successive stepwise extensions, initially imposed from Ls. Each stepwise extension consisted of a 5% Ls step at fast velocity (500 mm/s) that was maintained for 80 s to observe a reduction in tension toward a plateau value. After the fourth stepwise extension, the muscle was suddenly released to Ls. This test enabled us to measure, for each extension stepwise, the dynamic force (Fd) that corresponded to the maximal force developed by muscle at the end of the fast extension and the steady force (Fs) at the end of the plateau in length (Fig. 1). Then, Fd and Fs were divided by the physiological cross‐sectional area (PCSA) of the muscle, which yields the dynamic tension (σd) and the steady tension (σs). PCSA of muscle was calculated using the equation PCSA  =  MW/(1.06 × Lf), where MW is muscle mass, 1.06 is the muscle density (in g/cm3), and Lf is the fiber length. Lf corresponds to 72% and 44% of the length of the soleus and the EDL muscles, respectively.15,16 \nThe stretch–release test consisted of stretching the muscles at amplitude up to 125% Ls with a slow velocity (0.1 mm/s) following by a release until Ls with the same velocity (Fig. 2). From these data, stress (i.e., passive force normalized in respect of PCSA) and strain (i.e., deformation/Ls) were calculated to construct the stress–strain curve. From this curve, tension at 125% Ls (F125/PCSA), stiffness at 125% Ls and tangent modulus (i.e., slope in the linear portion of the stress‐strain curve) were calculated.\n[SUBTITLE] Statistical analysis [SUBSECTION] All data are presented as mean ± SEM. A two‐way (to evaluate the effect of the age and diet on body mass) and three‐way (to evaluate the effect of age, diet and muscle on the other parameters) Analysis of variance (ANOVA) for repeated measurements followed by the Holm Sidak post hoc test were performed. A level of 95% was set as the statistical difference. The statistical treatment of the data was performed with the Sigmastat software (Systat Software, Inc., Chicago, IL).\nAll data are presented as mean ± SEM. 
A two‐way (to evaluate the effect of the age and diet on body mass) and three‐way (to evaluate the effect of age, diet and muscle on the other parameters) Analysis of variance (ANOVA) for repeated measurements followed by the Holm Sidak post hoc test were performed. A level of 95% was set as the statistical difference. The statistical treatment of the data was performed with the Sigmastat software (Systat Software, Inc., Chicago, IL).", "Virgin female Wistar rats (body mass 281.86 ± 14.97 g) were housed individually with males under standardized conditions. On the day the copulation plug was found, the females were isolated and assigned to one of two experimental groups: a control group (C, n = 11) and an undernourished group (UN, n = 11). During gestation, rats of group C were fed a control diet (17% protein) according to the recommendations of AIN‐93G,13 and UN animals received a low‐protein isocaloric diet (7.8%) ad libitum (Table 1). On the first day after birth, all mothers received a control diet (17% of protein) ad libitum and litters were limited to six male pups per mother. At weaning, pups were fed a standardized meal (17% of protein) ad libitum until 60 days old.13 Afterwards, offspring received a 12% protein diet ad libitum until 90 days of age.\nThe protocols used in the present study were in accordance with the guidelines and regulations of the Ethical Hygiene and Safety Committee of the Compiègne University of Technology.", "Rats 25 days old (n = 14) and 90 days old (n = 16) were anesthetized with an intraperitoneal injection of sodium pentobarbital (30 mg/kg of body mass). The soleus and EDL muscles were carefully excised from the hind limb and placed in a dissection chamber containing Ringer's solution (composition in mM: NaCl, 115; NaHCO3, 28; CaCl2, 2.5, MgSO4, 3.1, KCl, 3.5, KH2PO4, 1.4; glucose, 11.1) maintained at 25°C and oxygenated with a gas mixture of 95% O2 and 5% CO2 that resulted in a pH of 7.3. At the end of the experiment all animals was killed in accordance with the animal committee at Compiègne University of Technology. The proximal part of the muscle was fixed to a force transducer and the distal extremity was linked to the moving part of a servocontrolled ergometer described in detail elsewhere.14 The muscle was adjusted from its slack length (Ls), i.e., the muscle length from which a resting tension of 10 mN was obtained. It was then submitted to two different procedures: an incremental stepwise extension test and a stretch‐release test at slow velocity. For each test type, three tests were performed, the first two tests were used for preconditioning the muscle and the third test served for data analysis.\nWith regard to the incremental stepwise extension test, the muscle was stretched by four successive stepwise extensions, initially imposed from Ls. Each stepwise extension consisted of a 5% Ls step at fast velocity (500 mm/s) that was maintained for 80 s to observe a reduction in tension toward a plateau value. After the fourth stepwise extension, the muscle was suddenly released to Ls. This test enabled us to measure, for each extension stepwise, the dynamic force (Fd) that corresponded to the maximal force developed by muscle at the end of the fast extension and the steady force (Fs) at the end of the plateau in length (Fig. 1). Then, Fd and Fs were divided by the physiological cross‐sectional area (PCSA) of the muscle, which yields the dynamic tension (σd) and the steady tension (σs). 
PCSA of muscle was calculated using the equation PCSA  =  MW/(1.06 × Lf), where MW is muscle mass, 1.06 is the muscle density (in g/cm3), and Lf is the fiber length. Lf corresponds to 72% and 44% of the length of the soleus and the EDL muscles, respectively.15,16 \nThe stretch–release test consisted of stretching the muscles at amplitude up to 125% Ls with a slow velocity (0.1 mm/s) following by a release until Ls with the same velocity (Fig. 2). From these data, stress (i.e., passive force normalized in respect of PCSA) and strain (i.e., deformation/Ls) were calculated to construct the stress–strain curve. From this curve, tension at 125% Ls (F125/PCSA), stiffness at 125% Ls and tangent modulus (i.e., slope in the linear portion of the stress‐strain curve) were calculated.", "All data are presented as mean ± SEM. A two‐way (to evaluate the effect of the age and diet on body mass) and three‐way (to evaluate the effect of age, diet and muscle on the other parameters) Analysis of variance (ANOVA) for repeated measurements followed by the Holm Sidak post hoc test were performed. A level of 95% was set as the statistical difference. The statistical treatment of the data was performed with the Sigmastat software (Systat Software, Inc., Chicago, IL).", "The body mass of pups from UN mothers was significantly lower at 25 days (92.6 ± 5.18 g vs 71.3 ± 1.3 g for the C and UN groups, respectively; p<0.05) and 90 days (449.5 ± 9.2g vs 413.2 ± 14g for C and UN group, respectively; p<0.05).\nAbsolute and relative mass of the soleus and EDL muscles was significantly smaller in the UN group than in C group in weaned and young adult rats (Fig. 3).\nResults of the incremental stepwise extension test indicated increases in resistance to passive stretch for each extension in both soleus and EDL muscles in weaned rats (Fig. 4). At this age, soleus muscle of UN rats showed an increase in dynamic tensions by 40%, 48%, 57%, and 52% for the first, second, third, and fourth increment, respectively (Fig. 4). Similar increases were obtained in EDL muscle. In addition, undernutrition induced an increase in steady tension of about 65% in the soleus and 100% in the EDL (Fig. 4). At 90 days, no difference in either dynamic tension or steady tension was observed in the soleus and the EDL between the control group and the undernourished group.\nPassive force developed at 25% strain during the stretch–release test and was not modified in the soleus and EDL muscles of weaned and young adult rat (Fig. 5). When passive force was expressed in terms of normalized tension (i.e., force divided by PCSA), there was an increase in the passive tension in both the soleus and the EDL muscles in weaned rats (Fig. 5). This increase in resistance to passive stretch observed in the soleus and EDL of the UN group was also confirmed by the increase in the tangent modulus and in normalized stiffness. In young adult rats, no difference was observed in these parameters between groups in either the soleus or the EDL muscles.", "The results of present study are in accordance with numerous studies showing poor maternal nutrition during gestation affects fetal growth and development.1,3,6,8,11 Thus, the decrease in the pup weight can be associated with the availability of nutrients for transfer to the fetus, possibly involving metabolic parameters such as glucose and insulin.17 Consequent to maternal nutrient restriction, the soleus and EDL muscle weight was significantly reduced in weaned and young adult rats. 
This muscle atrophy is consistent with the programming of skeletal muscle insulin sensitivity during fetal development8 as it has been shown that insulin‐sensitive tissues undergo important changes in response to maternal protein restriction.18,19 \nA previous study had shown that maternal protein restriction during gestation induced changes in both contractile and series elastics properties.11 In addition to these mechanical changes, the present work has demonstrated that the passive elastic properties are also changed by this early nutritional manipulation. In effect, even if passive force that developed during the slow velocity stretch was not different between nutritional groups, the normalized tension showed an increase in soleus and EDL muscles in UN weaned rats. This increase in resistance to passive stretch was also perceived by the increase in the normalized stiffness and the increase in the tangent modulus. Results of the incremental stepwise test showed that passive stiffness was increased during both the dynamic and the static phases and for short and long stretches.\nMuscle passive stiffness is a function of the parallel elastic component described in Hill's model.20 Its properties are affected by membrane structure and specifically by the concentration and type of collagen.21–25 The effects of nutritional status on the regulation of skeletal muscle collagen content are varied. Roy et al.26 reported no influence of nutritional level in muscle collagen content in pectoralis muscle of broilers but with some differences in the collagen structure of the perimysium. In the gastrocnemius muscle of adult mice deprived of food for 2 days, Jagoe et al.27 showed a decrease in gene expression for many extracellular matrix proteins like collagen. More recently, Stevenson et al.28 studied the transcriptional profile of myotube under starvation conditions. These authors reported a downregulation of genes involved in collagen synthesis and maturation. Nevertheless, the effect of nutritional supply during fetal development seems to be different in the development of connective tissue in skeletal muscle. In swine, Karunaratne et al.29 reported that the smallest littermate, reflecting a poor level of in utero nutritional supply, contained a higher concentration of type I collagen than the largest littermate. In our study, such an increase in the content of collagen could explain the increase in the passive tension observed in the soleus and EDL muscles of undernourished rats.\nIn addition to collagen, other connective proteins are a source of muscle passive tension. Titin, a 3‐MDa elastic filamentous protein, links the Z line to the myosin filament in sarcomeres. Wang et al.30 reported that passive elastic properties of muscle fibers are related to expression of the titin isoform. Moreover, Toursel et al.31 showed a decrease in passive tension in soleus fiber of unloaded rat in relation to a decrease in titin content. Passive stiffness results also from the relation between endosarcomeric and exosarcomeric protein networks constituted by different structural proteins like desmin.32 Lastly, telethonin (Titin‐cap), an important component of the N‐terminal titin anchor in the Z line,33 seems to act on passive stiffness.34 Interestingly, it has been shown that nutritional status changes the gene expression of these proteins. Byrne et al.35 reported an upregulation of cytoskeletal proteins like desmin or telethonin in the muscles of steers after nutritional restriction. 
Oumi et al.36 reported muscle ultrastructure damages induced in rats nourished with a low‐protein diet for 2 weeks after weaning. More precisely, they showed disorganization in some sarcomeres, with a disruption of the Z line appearing jagged. As postulated by Oumi et al.,36 these sarcomere damages could be the result of the “disintegration” of structural proteins like desmin and titin. Muscle disorganization, such as those observed by these authors, could induce an increase in passive stiffness. As a matter of fact, Anderson et al.37 reported an increase in passive stiffness in desmin knockout mice and ascribed this mechanical change to the adaptation of passives structures consequent to the lack of desmin.\nLastly, no modification in the passive stiffness properties was observed between groups in young adult rats. The total recovery of these elastic properties reveals that the changes observed in the weaning rats can be completely reversed after nutritional recovery before the animal reaches the adult age. Nevertheless, it will be interesting to evaluate older animals in order to confirm or invalidate that the in utero low‐protein diet supply has no long‐term consequences in muscle passive mechanical properties.", "This study has permitted understanding of the effect of a prenatal undernutrition on the passive elastic component of the postural muscle (soleus) and a nonpostural muscle (EDL). Prenatal undernutrition showed short‐term alterations in passive stiffness that can be explained in terms of adaptations in passive structures and/or distribution of endosarcomeric and exosarcomeric proteins in the skeletal muscle. However, further biochemical investigations are necessary to establish the effects of this particular nutritional manipulation in a noncontractile protein profile of skeletal muscle." ]
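As an illustration of how Fd and Fs could be read off a recorded force trace from the incremental stepwise extension test described in the methods above, the sketch below uses an entirely simulated trace and arbitrary window choices: the peak force just after each fast 5% Ls step is taken as Fd, and the mean force over the last second of each 80 s hold as Fs. It is not the authors' analysis code.

```python
import numpy as np

# Hypothetical sampled force trace (mN): four 5% Ls steps, each held for 80 s.
fs_hz = 100                                    # sampling rate (Hz)
t = np.arange(0, 4 * 80, 1 / fs_hz)            # four 80 s plateaus
step_times = np.array([0.0, 80.0, 160.0, 240.0])

def simulated_force(t, step_times):
    """Toy stress-relaxation trace: each step adds a peak that decays toward a plateau."""
    f = np.zeros_like(t)
    for i, t0 in enumerate(step_times, start=1):
        active = t >= t0
        f[active] += 10.0 * i * (0.6 + 0.4 * np.exp(-(t[active] - t0) / 5.0))
    return f

force = simulated_force(t, step_times)

# For each step: Fd = peak force just after the fast extension,
#                Fs = mean force over the last second of the 80 s plateau.
Fd, Fs = [], []
for t0 in step_times:
    after_step = (t >= t0) & (t < t0 + 2.0)           # 2 s window containing the peak
    plateau_end = (t >= t0 + 79.0) & (t < t0 + 80.0)  # last second of the hold
    Fd.append(force[after_step].max())
    Fs.append(force[plateau_end].mean())

print("Dynamic forces Fd (mN):", np.round(Fd, 1))
print("Steady forces  Fs (mN):", np.round(Fs, 1))
```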
[ "intro", "materials|methods", null, null, null, "results", "discussion", null ]
[ "Passive stiffness", "Skeletal muscle", "Fetal", "Development", "Undernutrition" ]
Hot water extract of Chlorella vulgaris induced DNA damage and apoptosis.
21340229
The search for foods and spices that can induce apoptosis in cancer cells has been a major research interest over the last decade. Chlorella vulgaris, a unicellular green alga, has been reported to have antioxidant and anti-cancer properties. However, its chemopreventive effects in inhibiting the growth of cancer cells have not been studied in great detail.
INTRODUCTION
HepG2 liver cancer cells and WRL68 normal liver cells were treated with various concentrations (0-4 mg/ml) of hot water extract of C. vulgaris after 24 hours of incubation. The apoptosis rate was evaluated by TUNEL assay, while DNA damage was assessed by Comet assay. Apoptosis-related proteins were evaluated by Western blot analysis.
METHODS
Chlorella vulgaris decreased the number of viable HepG2 cells in a dose dependent manner (p < 0.05), with an IC50 of 1.6 mg/ml. DNA damage as measured by Comet assay was increased in HepG2 cells at all concentrations of Chlorella vulgaris tested. Evaluation of apoptosis by TUNEL assay showed that Chlorella vulgaris induced a higher apoptotic rate (70%) in HepG2 cells compared to normal liver cells, WRL68 (15%). Western blot analysis showed increased expression of pro-apoptotic proteins P53, Bax and caspase-3 in the HepG2 cells compared to normal liver cells WRL68, and decreased expression of the anti-apoptotic protein Bcl-2.
RESULTS
Chlorella vulgaris may have anti-cancer effects by inducing apoptosis signaling cascades via an increased expression of P53, Bax and caspase-3 proteins and through a reduction of Bcl-2 protein, which subsequently lead to increased DNA damage and apoptosis.
CONCLUSIONS
[ "Apoptosis", "Apoptosis Regulatory Proteins", "Chlorella vulgaris", "DNA Damage", "Hep G2 Cells", "Hot Temperature", "Humans", "Plant Extracts", "Tumor Suppressor Protein p53", "Water" ]
3020351
INTRODUCTION
Hepatocellular carcinoma (HCC) is the fifth most common malignancy worldwide.1 In Malaysia, HCC is the thirteenth most common cancer affecting the population.2 The etiology of liver cancer is multifactorial; some of the well-known risk factors include hepatitis B and C viral infection, exposure to chemicals such as aromatic hydrocarbons, and the ingestion of aflatoxin B1.3-5 The process of carcinogenesis involves an initiating event which induces genetic damage, followed by survival and progression of selected clones of the transformed cells to form tumors. Apoptosis, a form of programmed cell death, has been associated with delaying or inhibiting cancer growth.6,7 Much of cancer research over the past two decades has focused on genes that, when mutated, act in either a dominant or recessive manner to enhance proliferation and dysregulate apoptosis, thereby initiating cancer and driving its progression.8,9 A growing body of data in recent years has shown that dietary chemopreventive agents preferentially inhibit growth of cancer cells by targeting signaling molecules, such as caspases, that subsequently lead to the induction of apoptosis.7 Some of these dietary agents include epigallocatechin‐3‐gallate (EGCG), found in green tea,10 curcumin in turmeric,11 genistein in soybeans,12 lycopene in tomatoes,13 anthocyanins in pomegranates14 and isothiocyanates in broccoli.15 There is a large body of evidence that links DNA damage and apoptosis.16,17 The tumor suppressor protein p53 provides an important link between DNA damage and apoptosis as it has been shown to mediate the upregulation of apoptotic proteins such as Bax, caspase‐3 and ‐8, Noxa, PUMA and p53AIP.18 Chlorella vulgaris, a unicellular green alga, has been widely used as a food supplement and credited with high antioxidant and therapeutic abilities.19,20 The supplement can be taken as tablets or capsules, as a food additive, or as a liquid extract. Some of the claimed health benefits include improvement in the control of hypertension, fibromyalgia and ulcerative colitis.21 In vivo studies have revealed significant antitumor and antigenotoxic efficacy of C. vulgaris.20 A study by Hasegawa et al. showed that a glycoprotein, designated ARS‐2, found in C. vulgaris extract has an anti‐cancer effect on fibrosarcoma induced in mice.22 To the best of our knowledge, the effect of C. vulgaris on hepatoma cells has not been studied in great detail. In this study, we investigated the anti‐cancer effect of C. vulgaris extract on the hepatoma cell line HepG2 by evaluating changes in proliferation, DNA damage and apoptosis.
null
null
RESULTS
Chlorella vulgaris inhibited the proliferation of human liver cancer cells (HepG2) in a concentration‐dependent manner over the range 0‐4 mg/ml, as shown by the BrdU proliferation assay, with a 50% reduction (IC50) at 1.6 mg/ml (Fig. 1). The high IC50 is expected, as C. vulgaris is classified as a food and not a drug. Proliferation of normal liver cells (WRL68) was also decreased by treatment with C. vulgaris extract, with a 50% reduction at 1.7 mg/ml. One hundred percent inhibition of proliferation of HepG2 cells and WRL68 cells was achieved at approximately 3 mg/ml and 4 mg/ml, respectively. It was clearly seen that C. vulgaris induced apoptosis in HepG2 cells with distinct nuclear condensation and blebbing of the plasma membrane, typical characteristics of apoptotic bodies (Fig. 2).24 The percentage of apoptosis of HepG2 and WRL68 cells induced by C. vulgaris extract at 2 mg/ml is demonstrated in Figure 3; the rate of apoptosis was significantly higher (70%) in HepG2 cells compared to normal WRL68 cells (15%). WRL68 cells also underwent apoptosis, but at a higher concentration of C. vulgaris (3‐4 mg/ml), perhaps as a result of the cytotoxic effects seen at higher concentrations (Fig. 1). The involvement of pro‐ and antiapoptotic proteins is demonstrated in Figure 4. Figure 4a shows that the level of p53 increased in a dose‐dependent manner and reached maximum induction at 2 h with 2 mg/ml treatment of C. vulgaris. Interestingly, treatment with C. vulgaris resulted in increased expression of Bax (Fig. 4c) and decreased expression of Bcl‐2 (Fig. 4b) in a time‐dependent manner. However, activation of downstream caspase‐8 was not significant (Fig. 4e), whereas a significant increase in the active form of caspase‐3 was observed after 24 h of treatment with C. vulgaris in a time‐dependent manner (Fig. 4d). These results demonstrate that the mitochondrial signaling pathway is involved in C. vulgaris‐induced apoptosis of HepG2 cells. DNA damage in HepG2 cells was increased by C. vulgaris treatment at all concentrations (Fig. 5). At low doses of C. vulgaris extract, HepG2 and WRL68 cells generated smaller comets that were not significant, whilst at 2 mg/ml, C. vulgaris induced more damage in HepG2 cells (80%) compared to WRL68 cells (50%).
null
null
[ "Culturing and Extraction of Chlorella vulgaris", "Cell Culture and Treatment", "Cell Proliferation Assay", "Apoptosis by TUNEL Assay", "DNA Damage Measurement by Comet Assay", "Measurement of p53, Bax, Bcl‐2, Caspase‐3 and ‐8 by Western blotting", "Statistical Analysis", "CONCLUSION", "ACKNOWLEDGEMENTS" ]
[ "Stock of C. vulgaris Beijerinck (strain 072) was obtained from the University of Malaya Algae Culture Collection (UMACC, Malaysia) and grown in Bold's basal media (BBM) with a 12 hours dark 12 hours light cycle. Cells were harvested by centrifugation at 1000 rpm. The algae were dried using a freeze dryer and then mixed in distilled water at a concentration of 10% (w/v). The algal suspension was then boiled at 100°C for 20 minutes using reflux method followed by centrifugation at 10 000 rpm for 20 min. The supernatant was lyophilized using a freeze dryer to obtain the powdered form of C. vulgaris", "Liver cancer cell line HepG2 and normal liver cell line WRL68 were maintained in Eagle's minimal essential medium (EMEM; Flow Labaratories, Sydney Australia) supplemented with 1mM sodium pyruvate (SIGMA, St Louis, MO, USA), 2mM glutamine, 10% fetal calf serum and 100 U/ml penicillin and streptomycin at 37°C in humidified 5% CO2 incubator. Cell proliferation, apoptosis and collection of protein were performed when cells reached 70% confluence density. C. vulgaris extract at various concentrations (0‐4 mg/ml) was added to cell lines 24 hours after incubation.", "A 96‐well plate was seeded with HepG2 and WRL68 cells at a uniform density of 2 × 104 cells per well. Twenty‐four hours after incubation, cells were treated with the hot water extract of C. vulgaris and incubated for a further 24 hours. Cells were labeled with bromodeoxyuridine (BrdU) during the last 3 hours of C. vulgaris extract treatment. The cells were fixed with denaturing solution and the incorporation of BrdU was detected by immunoreaction. After substrate solution was added to each well, the amount of BrdU incorporated was determined by measuring the absorbance at dual wavelengths (450/690 nm) using a scanning multi‐well spectrophotometer [enzyme‐linked immunosorbent assay (ELISA) reader]", "Apoptotic cell death was determined by Dead End™ Colorimetric TUNEL System (Promega, Madison, WI, USA). After 24 hours of treatment with C. vulgaris extract, floating cells and adherent cells in culture were collected in a tube, trypsinized, centrifuged and washed in phosphate buffered saline (PBS). Cells were resuspended and applied to poly‐L‐lysine‐coated slides and air dried. Cells were fixed by immersing slides in 4% formalin in PBS for 25 minutes at room temperature. After washing with PBS, cells were permeabilized by immersing the slides in 0.2% Triton × 100 solution in PBS for 5 minutes at room temperature. Cells were then equilibrated with 100 µl of equilibration buffer (2.5 mM Tris‐HCl pH 6.6, 0.2 M potassium cacodylate, 2.5 mM CoCl2, 0.25 mg/mL BSA.) for 7 minutes. The equilibrated areas were blotted with tissue paper before adding biotinylated nucleotide and TdT reaction mix (100 µl) to the slides. The slides were then covered with coverslips to ensure an even distribution of the reagent before incubating at 37°C for 60 minutes in a humidified chamber. Coverslips were removed and the slides were immersed in 2× saline‐sodium citrate (SSC; sodium chloride 0.15M, trisodium citrate 0.015M) buffer for 15 minutes at room temperature, and washed twice with PBS. Endogenous peroxidases were blocked by immersing the slides in 0.3% hydrogen peroxide for 4 minutes at room temperature and washed again in PBS. Streptavidin horseradish peroxidase (HRP) solution (1∶500 PBS) was added to each slide and incubated for 30 minutes at room temperature. 
After final washing with PBS, diaminobenzedine (DAB) solution was added to the slides for 20 minutes until light brown staining was observed. Finally, each slide was mounted with DPX (BDH, England) to be examined under light microscope.", "The assay was performed as described by Singh et al.23 Thirty microlitres of the HepG2 cell suspension (<400,000 cells/ml) was mixed with 80 µl of 1% low melting point (LMP) agarose and added to fully frosted slides that had been covered with a layer of 1% LMP agarose. Subsequently, the slides were immersed in lysis solution [2.5M NaOH, pH10; 0.1M ethylenediaminetetraacetic acid (EDTA); 0.01M Tris; and 1% Triton × 100] for 1 h at 4°C, followed by electrophoresis solution (300mM NaOH; and 1mM EDTA, pH13) for 20 min to allow DNA unwinding, and electrophoresed for 20 min at 25 V and 300 mA. Finally, the slides were neutralized with 0.4M Tris buffer (pH 7.5), stained with ethidium bromide (5 mg/ml) and analyzed using a fluorescence microscope (Carl Zeiss, Göttingen, Germany). Images of 50 randomly selected cells per experimental point were visually analyzed under the microscope.", "HepG2 cells (1×107/dish) were seeded into a 9 cm dish and treated with various concentrations of hot water extract of C. vulgaris for 2, 6, 12, 18 and 24 hours. Subsequently, cells were incubated in RIPA lysis buffer [50 mM Tris–HCl, 150 mM NaCl, 1 mM ethylene glycol tetraacetic acid (EGTA), 1 mM EDTA, 20 mM NaF, 100 mM Na3VO4, 1% NP‐40, 1% Triton X‐100, 1 mM phenylmethylsulfonyl fluoride (PMSF), 10 µg/ml Aprotinin and 10 µg/ml Leupeptin] (SIGMA, St Louis, MO, USA) on ice for 30 min to lyse the cells. After centrifugation, total protein was determined using a Bio‐Rad protein assay kit (Bio‐Rad, Hercules, CA, USA). Protein was resolved (50µg) by 10–15% SDS‐PAGE and transferred to polyvinyl difluoride (PVDF) membranes. The membrane was blocked with blocking buffer (5% skim milk in 1% Tween 20 in 20 mM Tris‐buffered saline, pH7.5) by incubating for 1 h at room temperature followed by incubation with the appropriate primary antibody (p53, Bax, Bcl‐2, caspase‐3 and ‐8; Chemicon, Billerica, MA, USA) at dilutions of 1∶1000 in blocking buffer for 2 h at room temperature. The membranes were then incubated with the respective secondary antibodies for 1 h and antigens were detected by using the enhanced chemiluminescence blotting detection system (GE Healthcare, Piscataway, NJ, USA).", "Results are expressed as mean ± SD with the experiment repeated at least 3 times. Statistical evaluation was done using the Student's t‐test. A p value of <0.05 was considered significant.", "The results of the present study suggest that the anticancer mechanism of C. vulgaris in hepatoma cells (HepG2) is by inhibiting DNA synthesis, triggering DNA damage and inducing apoptosis. This is shown by the reduced incorporation of BrdU into replicating DNA of HepG2 cells and an increase in the number of DNA damage‐inducing and apoptotic proteins, Bax and caspase‐3 in HepG2 cells treated with C. vulgaris. The possible mechanism is speculated to be an increase in P53, Bax and caspase‐3 expression that would subsequently lead to apoptosis.", "This research was supported by a grant from Ministry of Science, Technology and Innovation (MOSTI)" ]
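The BrdU proliferation assay described in the methods above yields a dose–response curve from which the reported IC50 (about 1.6 mg/ml for HepG2) was read. One common way to estimate such an IC50, shown here as a hedged sketch rather than the authors' procedure, is to fit a four-parameter logistic curve to percent-proliferation values; the function name four_pl and all readings below are invented for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical BrdU-proliferation readings (% of untreated control) across the tested
# concentration range of the hot water extract; values are illustrative only.
conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0])               # mg/ml
proliferation = np.array([100.0, 92.0, 75.0, 55.0, 35.0, 18.0, 5.0, 2.0])  # % of control

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Exclude the zero dose from the fit (x = 0 can misbehave if the Hill slope
# wanders to non-positive values during optimization).
popt, _ = curve_fit(four_pl, conc[1:], proliferation[1:], p0=[0.0, 100.0, 1.6, 2.0])
bottom, top, ic50, hill = popt
print(f"Estimated IC50 ≈ {ic50:.2f} mg/ml (Hill slope {hill:.2f})")
```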
[ null, null, null, null, null, null, null, null, null ]
[ "INTRODUCTION", "MATERIALS AND METHODS", "Culturing and Extraction of Chlorella vulgaris", "Cell Culture and Treatment", "Cell Proliferation Assay", "Apoptosis by TUNEL Assay", "DNA Damage Measurement by Comet Assay", "Measurement of p53, Bax, Bcl‐2, Caspase‐3 and ‐8 by Western blotting", "Statistical Analysis", "RESULTS", "DISCUSSION", "CONCLUSION", "ACKNOWLEDGEMENTS" ]
[ "Hepatocellular carcinoma (HCC) is the fifth most common malignancy worldwide.1 In Malaysia, HCC is the thirteenth most common cancer affecting the population.2 \nThe etiology of liver cancer is multi factorial; some of the well known risk factors include hepatitis B and C viral infection, exposure to chemicals such as aromatic hydrocarbons, and the ingestion of aflatoxin B1.3-5 \nThe process of carcinogenesis involves an initiating event which induces genetic damage, followed by survival and progression of selected clones of the transformed cells to form tumors. Apoptosis, a form of programmed cell death, has been associated with delaying or inhibiting cancer growth.6,7 Much of cancer research over the past two decades has focused on genes that, when mutated, act in either a dominant or recessive manner to enhance proliferation with dysregulation of apoptosis that is responsible for initiating cancer and its progression.8,9 Many emerging data in recent years have shown that dietary chemopreventive agents preferentially inhibit growth of cancer cells by targeting signaling molecules, such as caspases, that subsequently lead to the induction of apoptosis.7 Some of these dietary agents include epigallocatechin‐3‐gallate (EGCG), found in green tea,10 curcumin in turmeric,11 genistein in soybeans,12 lycopene in tomatoes,13 anthocyanins in pomegranates14 and isothiocyanates in broccoli.15 \nThere is a large body of evidence that links DNA damage and apoptosis.16,17 The tumor suppressor protein p53 provides an important link between DNA damage and apoptosis as it has been shown to mediate the upregulation of apoptotic proteins such as Bax, caspase‐3 and ‐8, Noxa, PUMA and p53AIP.18 \nChlorella vulgaris, a unicellular green algae, has been widely used as a food supplement and credited with high antioxidant and therapeutic abilities.19,20 The supplement can be taken in the form of tablets, capsules, as a food additive or extracted as a liquid. Some health claims and benefits include improvement in the control of hypertension, fibromyalgia and ulcerative colitis.21 In vivo studies have revealed the significant antitumor and antigenotoxic efficacy of C. vulgaris.20 A study by Hasegawa et. al showed that a glycoprotein, designated ARS‐2, found in C. vulgaris extract has an anti‐cancer effect on mice‐induced fibrosarcoma.22 \nTo the best of our knowledge, the effect of C. vulgaris on hepatoma cells has not been studied in great detail. In this study, we investigated the anti‐cancer effect of C. vulgaris extract on the hepatoma cell line HepG2 by evaluating changes in proliferation, DNA damage and apoptosis.", "[SUBTITLE] Culturing and Extraction of Chlorella vulgaris [SUBSECTION] Stock of C. vulgaris Beijerinck (strain 072) was obtained from the University of Malaya Algae Culture Collection (UMACC, Malaysia) and grown in Bold's basal media (BBM) with a 12 hours dark 12 hours light cycle. Cells were harvested by centrifugation at 1000 rpm. The algae were dried using a freeze dryer and then mixed in distilled water at a concentration of 10% (w/v). The algal suspension was then boiled at 100°C for 20 minutes using reflux method followed by centrifugation at 10 000 rpm for 20 min. The supernatant was lyophilized using a freeze dryer to obtain the powdered form of C. vulgaris\nStock of C. vulgaris Beijerinck (strain 072) was obtained from the University of Malaya Algae Culture Collection (UMACC, Malaysia) and grown in Bold's basal media (BBM) with a 12 hours dark 12 hours light cycle. 
Cells were harvested by centrifugation at 1000 rpm. The algae were dried using a freeze dryer and then mixed in distilled water at a concentration of 10% (w/v). The algal suspension was then boiled at 100°C for 20 minutes using reflux method followed by centrifugation at 10 000 rpm for 20 min. The supernatant was lyophilized using a freeze dryer to obtain the powdered form of C. vulgaris\n[SUBTITLE] Cell Culture and Treatment [SUBSECTION] Liver cancer cell line HepG2 and normal liver cell line WRL68 were maintained in Eagle's minimal essential medium (EMEM; Flow Labaratories, Sydney Australia) supplemented with 1mM sodium pyruvate (SIGMA, St Louis, MO, USA), 2mM glutamine, 10% fetal calf serum and 100 U/ml penicillin and streptomycin at 37°C in humidified 5% CO2 incubator. Cell proliferation, apoptosis and collection of protein were performed when cells reached 70% confluence density. C. vulgaris extract at various concentrations (0‐4 mg/ml) was added to cell lines 24 hours after incubation.\nLiver cancer cell line HepG2 and normal liver cell line WRL68 were maintained in Eagle's minimal essential medium (EMEM; Flow Labaratories, Sydney Australia) supplemented with 1mM sodium pyruvate (SIGMA, St Louis, MO, USA), 2mM glutamine, 10% fetal calf serum and 100 U/ml penicillin and streptomycin at 37°C in humidified 5% CO2 incubator. Cell proliferation, apoptosis and collection of protein were performed when cells reached 70% confluence density. C. vulgaris extract at various concentrations (0‐4 mg/ml) was added to cell lines 24 hours after incubation.\n[SUBTITLE] Cell Proliferation Assay [SUBSECTION] A 96‐well plate was seeded with HepG2 and WRL68 cells at a uniform density of 2 × 104 cells per well. Twenty‐four hours after incubation, cells were treated with the hot water extract of C. vulgaris and incubated for a further 24 hours. Cells were labeled with bromodeoxyuridine (BrdU) during the last 3 hours of C. vulgaris extract treatment. The cells were fixed with denaturing solution and the incorporation of BrdU was detected by immunoreaction. After substrate solution was added to each well, the amount of BrdU incorporated was determined by measuring the absorbance at dual wavelengths (450/690 nm) using a scanning multi‐well spectrophotometer [enzyme‐linked immunosorbent assay (ELISA) reader]\nA 96‐well plate was seeded with HepG2 and WRL68 cells at a uniform density of 2 × 104 cells per well. Twenty‐four hours after incubation, cells were treated with the hot water extract of C. vulgaris and incubated for a further 24 hours. Cells were labeled with bromodeoxyuridine (BrdU) during the last 3 hours of C. vulgaris extract treatment. The cells were fixed with denaturing solution and the incorporation of BrdU was detected by immunoreaction. After substrate solution was added to each well, the amount of BrdU incorporated was determined by measuring the absorbance at dual wavelengths (450/690 nm) using a scanning multi‐well spectrophotometer [enzyme‐linked immunosorbent assay (ELISA) reader]\n[SUBTITLE] Apoptosis by TUNEL Assay [SUBSECTION] Apoptotic cell death was determined by Dead End™ Colorimetric TUNEL System (Promega, Madison, WI, USA). After 24 hours of treatment with C. vulgaris extract, floating cells and adherent cells in culture were collected in a tube, trypsinized, centrifuged and washed in phosphate buffered saline (PBS). Cells were resuspended and applied to poly‐L‐lysine‐coated slides and air dried. 
Cells were fixed by immersing slides in 4% formalin in PBS for 25 minutes at room temperature. After washing with PBS, cells were permeabilized by immersing the slides in 0.2% Triton × 100 solution in PBS for 5 minutes at room temperature. Cells were then equilibrated with 100 µl of equilibration buffer (2.5 mM Tris‐HCl pH 6.6, 0.2 M potassium cacodylate, 2.5 mM CoCl2, 0.25 mg/mL BSA.) for 7 minutes. The equilibrated areas were blotted with tissue paper before adding biotinylated nucleotide and TdT reaction mix (100 µl) to the slides. The slides were then covered with coverslips to ensure an even distribution of the reagent before incubating at 37°C for 60 minutes in a humidified chamber. Coverslips were removed and the slides were immersed in 2× saline‐sodium citrate (SSC; sodium chloride 0.15M, trisodium citrate 0.015M) buffer for 15 minutes at room temperature, and washed twice with PBS. Endogenous peroxidases were blocked by immersing the slides in 0.3% hydrogen peroxide for 4 minutes at room temperature and washed again in PBS. Streptavidin horseradish peroxidase (HRP) solution (1∶500 PBS) was added to each slide and incubated for 30 minutes at room temperature. After final washing with PBS, diaminobenzedine (DAB) solution was added to the slides for 20 minutes until light brown staining was observed. Finally, each slide was mounted with DPX (BDH, England) to be examined under light microscope.\nApoptotic cell death was determined by Dead End™ Colorimetric TUNEL System (Promega, Madison, WI, USA). After 24 hours of treatment with C. vulgaris extract, floating cells and adherent cells in culture were collected in a tube, trypsinized, centrifuged and washed in phosphate buffered saline (PBS). Cells were resuspended and applied to poly‐L‐lysine‐coated slides and air dried. Cells were fixed by immersing slides in 4% formalin in PBS for 25 minutes at room temperature. After washing with PBS, cells were permeabilized by immersing the slides in 0.2% Triton × 100 solution in PBS for 5 minutes at room temperature. Cells were then equilibrated with 100 µl of equilibration buffer (2.5 mM Tris‐HCl pH 6.6, 0.2 M potassium cacodylate, 2.5 mM CoCl2, 0.25 mg/mL BSA.) for 7 minutes. The equilibrated areas were blotted with tissue paper before adding biotinylated nucleotide and TdT reaction mix (100 µl) to the slides. The slides were then covered with coverslips to ensure an even distribution of the reagent before incubating at 37°C for 60 minutes in a humidified chamber. Coverslips were removed and the slides were immersed in 2× saline‐sodium citrate (SSC; sodium chloride 0.15M, trisodium citrate 0.015M) buffer for 15 minutes at room temperature, and washed twice with PBS. Endogenous peroxidases were blocked by immersing the slides in 0.3% hydrogen peroxide for 4 minutes at room temperature and washed again in PBS. Streptavidin horseradish peroxidase (HRP) solution (1∶500 PBS) was added to each slide and incubated for 30 minutes at room temperature. After final washing with PBS, diaminobenzedine (DAB) solution was added to the slides for 20 minutes until light brown staining was observed. 
Finally, each slide was mounted with DPX (BDH, England) to be examined under light microscope.\n[SUBTITLE] DNA Damage Measurement by Comet Assay [SUBSECTION] The assay was performed as described by Singh et al.23 Thirty microlitres of the HepG2 cell suspension (<400,000 cells/ml) was mixed with 80 µl of 1% low melting point (LMP) agarose and added to fully frosted slides that had been covered with a layer of 1% LMP agarose. Subsequently, the slides were immersed in lysis solution [2.5M NaOH, pH10; 0.1M ethylenediaminetetraacetic acid (EDTA); 0.01M Tris; and 1% Triton × 100] for 1 h at 4°C, followed by electrophoresis solution (300mM NaOH; and 1mM EDTA, pH13) for 20 min to allow DNA unwinding, and electrophoresed for 20 min at 25 V and 300 mA. Finally, the slides were neutralized with 0.4M Tris buffer (pH 7.5), stained with ethidium bromide (5 mg/ml) and analyzed using a fluorescence microscope (Carl Zeiss, Göttingen, Germany). Images of 50 randomly selected cells per experimental point were visually analyzed under the microscope.\nThe assay was performed as described by Singh et al.23 Thirty microlitres of the HepG2 cell suspension (<400,000 cells/ml) was mixed with 80 µl of 1% low melting point (LMP) agarose and added to fully frosted slides that had been covered with a layer of 1% LMP agarose. Subsequently, the slides were immersed in lysis solution [2.5M NaOH, pH10; 0.1M ethylenediaminetetraacetic acid (EDTA); 0.01M Tris; and 1% Triton × 100] for 1 h at 4°C, followed by electrophoresis solution (300mM NaOH; and 1mM EDTA, pH13) for 20 min to allow DNA unwinding, and electrophoresed for 20 min at 25 V and 300 mA. Finally, the slides were neutralized with 0.4M Tris buffer (pH 7.5), stained with ethidium bromide (5 mg/ml) and analyzed using a fluorescence microscope (Carl Zeiss, Göttingen, Germany). Images of 50 randomly selected cells per experimental point were visually analyzed under the microscope.\n[SUBTITLE] Measurement of p53, Bax, Bcl‐2, Caspase‐3 and ‐8 by Western blotting [SUBSECTION] HepG2 cells (1×107/dish) were seeded into a 9 cm dish and treated with various concentrations of hot water extract of C. vulgaris for 2, 6, 12, 18 and 24 hours. Subsequently, cells were incubated in RIPA lysis buffer [50 mM Tris–HCl, 150 mM NaCl, 1 mM ethylene glycol tetraacetic acid (EGTA), 1 mM EDTA, 20 mM NaF, 100 mM Na3VO4, 1% NP‐40, 1% Triton X‐100, 1 mM phenylmethylsulfonyl fluoride (PMSF), 10 µg/ml Aprotinin and 10 µg/ml Leupeptin] (SIGMA, St Louis, MO, USA) on ice for 30 min to lyse the cells. After centrifugation, total protein was determined using a Bio‐Rad protein assay kit (Bio‐Rad, Hercules, CA, USA). Protein was resolved (50µg) by 10–15% SDS‐PAGE and transferred to polyvinyl difluoride (PVDF) membranes. The membrane was blocked with blocking buffer (5% skim milk in 1% Tween 20 in 20 mM Tris‐buffered saline, pH7.5) by incubating for 1 h at room temperature followed by incubation with the appropriate primary antibody (p53, Bax, Bcl‐2, caspase‐3 and ‐8; Chemicon, Billerica, MA, USA) at dilutions of 1∶1000 in blocking buffer for 2 h at room temperature. The membranes were then incubated with the respective secondary antibodies for 1 h and antigens were detected by using the enhanced chemiluminescence blotting detection system (GE Healthcare, Piscataway, NJ, USA).\nHepG2 cells (1×107/dish) were seeded into a 9 cm dish and treated with various concentrations of hot water extract of C. vulgaris for 2, 6, 12, 18 and 24 hours. 
Subsequently, cells were incubated in RIPA lysis buffer [50 mM Tris–HCl, 150 mM NaCl, 1 mM ethylene glycol tetraacetic acid (EGTA), 1 mM EDTA, 20 mM NaF, 100 mM Na3VO4, 1% NP‐40, 1% Triton X‐100, 1 mM phenylmethylsulfonyl fluoride (PMSF), 10 µg/ml Aprotinin and 10 µg/ml Leupeptin] (SIGMA, St Louis, MO, USA) on ice for 30 min to lyse the cells. After centrifugation, total protein was determined using a Bio‐Rad protein assay kit (Bio‐Rad, Hercules, CA, USA). Protein was resolved (50µg) by 10–15% SDS‐PAGE and transferred to polyvinyl difluoride (PVDF) membranes. The membrane was blocked with blocking buffer (5% skim milk in 1% Tween 20 in 20 mM Tris‐buffered saline, pH7.5) by incubating for 1 h at room temperature followed by incubation with the appropriate primary antibody (p53, Bax, Bcl‐2, caspase‐3 and ‐8; Chemicon, Billerica, MA, USA) at dilutions of 1∶1000 in blocking buffer for 2 h at room temperature. The membranes were then incubated with the respective secondary antibodies for 1 h and antigens were detected by using the enhanced chemiluminescence blotting detection system (GE Healthcare, Piscataway, NJ, USA).\n[SUBTITLE] Statistical Analysis [SUBSECTION] Results are expressed as mean ± SD with the experiment repeated at least 3 times. Statistical evaluation was done using the Student's t‐test. A p value of <0.05 was considered significant.\nResults are expressed as mean ± SD with the experiment repeated at least 3 times. Statistical evaluation was done using the Student's t‐test. A p value of <0.05 was considered significant.", "Stock of C. vulgaris Beijerinck (strain 072) was obtained from the University of Malaya Algae Culture Collection (UMACC, Malaysia) and grown in Bold's basal media (BBM) with a 12 hours dark 12 hours light cycle. Cells were harvested by centrifugation at 1000 rpm. The algae were dried using a freeze dryer and then mixed in distilled water at a concentration of 10% (w/v). The algal suspension was then boiled at 100°C for 20 minutes using reflux method followed by centrifugation at 10 000 rpm for 20 min. The supernatant was lyophilized using a freeze dryer to obtain the powdered form of C. vulgaris", "Liver cancer cell line HepG2 and normal liver cell line WRL68 were maintained in Eagle's minimal essential medium (EMEM; Flow Labaratories, Sydney Australia) supplemented with 1mM sodium pyruvate (SIGMA, St Louis, MO, USA), 2mM glutamine, 10% fetal calf serum and 100 U/ml penicillin and streptomycin at 37°C in humidified 5% CO2 incubator. Cell proliferation, apoptosis and collection of protein were performed when cells reached 70% confluence density. C. vulgaris extract at various concentrations (0‐4 mg/ml) was added to cell lines 24 hours after incubation.", "A 96‐well plate was seeded with HepG2 and WRL68 cells at a uniform density of 2 × 104 cells per well. Twenty‐four hours after incubation, cells were treated with the hot water extract of C. vulgaris and incubated for a further 24 hours. Cells were labeled with bromodeoxyuridine (BrdU) during the last 3 hours of C. vulgaris extract treatment. The cells were fixed with denaturing solution and the incorporation of BrdU was detected by immunoreaction. After substrate solution was added to each well, the amount of BrdU incorporated was determined by measuring the absorbance at dual wavelengths (450/690 nm) using a scanning multi‐well spectrophotometer [enzyme‐linked immunosorbent assay (ELISA) reader]", "Apoptotic cell death was determined by Dead End™ Colorimetric TUNEL System (Promega, Madison, WI, USA). 
After 24 hours of treatment with C. vulgaris extract, floating cells and adherent cells in culture were collected in a tube, trypsinized, centrifuged and washed in phosphate buffered saline (PBS). Cells were resuspended and applied to poly‐L‐lysine‐coated slides and air dried. Cells were fixed by immersing slides in 4% formalin in PBS for 25 minutes at room temperature. After washing with PBS, cells were permeabilized by immersing the slides in 0.2% Triton × 100 solution in PBS for 5 minutes at room temperature. Cells were then equilibrated with 100 µl of equilibration buffer (2.5 mM Tris‐HCl pH 6.6, 0.2 M potassium cacodylate, 2.5 mM CoCl2, 0.25 mg/mL BSA.) for 7 minutes. The equilibrated areas were blotted with tissue paper before adding biotinylated nucleotide and TdT reaction mix (100 µl) to the slides. The slides were then covered with coverslips to ensure an even distribution of the reagent before incubating at 37°C for 60 minutes in a humidified chamber. Coverslips were removed and the slides were immersed in 2× saline‐sodium citrate (SSC; sodium chloride 0.15M, trisodium citrate 0.015M) buffer for 15 minutes at room temperature, and washed twice with PBS. Endogenous peroxidases were blocked by immersing the slides in 0.3% hydrogen peroxide for 4 minutes at room temperature and washed again in PBS. Streptavidin horseradish peroxidase (HRP) solution (1∶500 PBS) was added to each slide and incubated for 30 minutes at room temperature. After final washing with PBS, diaminobenzedine (DAB) solution was added to the slides for 20 minutes until light brown staining was observed. Finally, each slide was mounted with DPX (BDH, England) to be examined under light microscope.", "The assay was performed as described by Singh et al.23 Thirty microlitres of the HepG2 cell suspension (<400,000 cells/ml) was mixed with 80 µl of 1% low melting point (LMP) agarose and added to fully frosted slides that had been covered with a layer of 1% LMP agarose. Subsequently, the slides were immersed in lysis solution [2.5M NaOH, pH10; 0.1M ethylenediaminetetraacetic acid (EDTA); 0.01M Tris; and 1% Triton × 100] for 1 h at 4°C, followed by electrophoresis solution (300mM NaOH; and 1mM EDTA, pH13) for 20 min to allow DNA unwinding, and electrophoresed for 20 min at 25 V and 300 mA. Finally, the slides were neutralized with 0.4M Tris buffer (pH 7.5), stained with ethidium bromide (5 mg/ml) and analyzed using a fluorescence microscope (Carl Zeiss, Göttingen, Germany). Images of 50 randomly selected cells per experimental point were visually analyzed under the microscope.", "HepG2 cells (1×107/dish) were seeded into a 9 cm dish and treated with various concentrations of hot water extract of C. vulgaris for 2, 6, 12, 18 and 24 hours. Subsequently, cells were incubated in RIPA lysis buffer [50 mM Tris–HCl, 150 mM NaCl, 1 mM ethylene glycol tetraacetic acid (EGTA), 1 mM EDTA, 20 mM NaF, 100 mM Na3VO4, 1% NP‐40, 1% Triton X‐100, 1 mM phenylmethylsulfonyl fluoride (PMSF), 10 µg/ml Aprotinin and 10 µg/ml Leupeptin] (SIGMA, St Louis, MO, USA) on ice for 30 min to lyse the cells. After centrifugation, total protein was determined using a Bio‐Rad protein assay kit (Bio‐Rad, Hercules, CA, USA). Protein was resolved (50µg) by 10–15% SDS‐PAGE and transferred to polyvinyl difluoride (PVDF) membranes. 
The membrane was blocked with blocking buffer (5% skim milk in 1% Tween 20 in 20 mM Tris‐buffered saline, pH7.5) by incubating for 1 h at room temperature followed by incubation with the appropriate primary antibody (p53, Bax, Bcl‐2, caspase‐3 and ‐8; Chemicon, Billerica, MA, USA) at dilutions of 1∶1000 in blocking buffer for 2 h at room temperature. The membranes were then incubated with the respective secondary antibodies for 1 h and antigens were detected by using the enhanced chemiluminescence blotting detection system (GE Healthcare, Piscataway, NJ, USA).", "Results are expressed as mean ± SD with the experiment repeated at least 3 times. Statistical evaluation was done using the Student's t‐test. A p value of <0.05 was considered significant.", "Chlorella vulgaris inhibited the proliferation of human liver cancer cells (HepG2) in a concentration‐dependent manner, ranging from 0‐4 mg/ml as shown by BrdU proliferation assay with a 50% reduction at 1.6 mg/ml, (IC50) (Fig. 1). The high IC50 is expected as C. vulgaris is classified as a food and not a drug. Proliferation of normal liver cells (WRL68) was decreased when treated with C. vulgaris extract, resulting in a 50% reduction at 1.7 mg/ml. One hundred percent inhibition of proliferation of HepG2 cells and WRL68 cells was achieved at approximately 3 mg/ml and 4mg/ml, respectively.\nIt was clearly seen that C. vulgaris induced apoptosis in HepG2 cells with distinct nuclear condensation and blebbing of the plasma membrane, a typical characteristic of apoptotic bodies (Fig. 2).24 \nThe percentange of apoptosis of HepG2 and WRL68 cells as induced by C. vulgaris extract at 2mg/ml is demonstrated in Figure 3; the rate of apoptosis was significantly higher (70%) in HepG2 cells compared to normal WRL68 cells (15%). WRL68 cells also underwent apoptosis but at a higher concentration of C. vulgaris (3‐4 mg/ml), perhaps as a result of the cytotoxicity effects seen at higher concentrations (Fig. 1)\nThe involvement of pro‐ and antiapoptotic proteins is demonstrated in Figure 4. Figure 4a shows that the level of p53 increased in a dose‐dependent manner and reached maximum induction at 2 h with 2 mg/ml treatment of C. vulgaris. Interestingly, treatment with C. vulgaris resulted in increased expression of Bax (Fig. 4c), and a decreased expression of Bcl‐2 (Fig. 4b) in a time‐dependent manner. However, the activation of the mitochondrial apoptotic pathway in downstream caspase‐8 was not significant (Fig. 4e), but a significant increase in the active form of caspase‐3 was observed after 24 h of treatment with C.vulgaris in a time‐dependent manner (Fig. 4d). These results demonstrate that the mitochondrial signaling pathway is involved in C. vulgaris‐induced apoptosis of HepG2 cells.\nDNA damage in HepG2 cells was increased by C. vulgaris treatments at all concentrations (Fig. 5). At low doses of C. vulgaris extract, HepG2 and WRL68 cells generated smaller comets that were not significant, whilst at 2 mg/ml, C. vulgaris induced more damage in HepG2 cells (80%) compared to WRL68 (50%).", "Many chemopreventive agents have been associated with antiproliferative and apoptotic effects on cancer cells because of their high antioxidant activity, targeting signaling molecules, and preventing or protecting cells from further damage or transformation into cancer cells.7 Chlorella vulgaris has been shown to exhibit high antioxidant activity compared to other microalgae.25 The antiproliferative effect of C. vulgaris (Fig. 
1) could be caused by an acidic glycoprotein found to have antitumour and antioxidant properties in tumour‐bearing mice.22 \nDNA damage is a common event in life following which, repair mechanisms and apoptosis will be activated to maintain genome integrity. The tumor‐suppressor protein p53 provides an important link between DNA damage and apoptosis. DNA damage results in cell cycle arrest at checkpoints or at G1 or G2 stage to inhibit cell cycle progression and to induce apoptosis, thus protecting cells against further damage.24 Antioxidants are important inhibitors of lipid peroxidation as a defence mechanism against oxidative damage.26 Plants and microalgae have developed a protective mechanism by possessing antioxidant compounds mainly with phenolic moieties to eradicate the accumulation of reactive oxygen species (ROS) produced by ultraviolet (UV) radiation or heat from sunlight.26 Our study showed that C. vulgaris, with its high antioxidant activity, protected normal WRL68 cells against severe DNA damage but induced severe DNA damage in HepG2 cells.\nThe induction of apoptosis is known to be an efficient strategy for cancer therapy. Many Chinese herbal remedies such as Ganoderma lucidum, Rubus coreanum, Paeoniae Radix and Phyllanthus urinaria have been shown to trigger the apoptotic pathway in MCF‐7 human breast cancer cells, HT‐29 human colon cancer cells, HepG2 human hepatoma cells and Lewis lung carcinoma cells, respectively.27-30 Apoptosis is modulated by antiapoptotic and proapoptotic effectors, which involve a large number of proteins. The propapoptotic and antiapoptotic members of the Bcl‐2 family act as a rheostat in regulating programmed cell death and as a target of anticancer therapy.31 The ratio of death antagonists (Bcl2, Bcl‐xL) to agonists (Bax, Bcl‐xs, Bad, Bid) determines whether a cell will respond to an apoptotic stimulus. Down‐regulation of the death suppressor Bcl‐2 protein could repress tumor growth by promoting programmed cell death.32-34.\nTo investigate the mechanisms of C. vulgaris‐induced apoptosis, we assessed the expression and activity of tumor suppressor protein P53 and a host of pro‐ and antiapoptotic proteins: Bcl‐2, Bax, and caspases‐3 and ‐8. Our results showed that C. vulgaris mediated apoptosis in a P53‐dependent manner with increased expression of Bax and decreased expression of Bcl‐2 proteins in a time‐dependent manner. The activation of p53 and related family members can either enforce cell cycle arrest or induce apoptosis. The p53‐dependent responses are directed against the damaged cell to protect the organism. The rules that govern the choice between growth, arrest and apoptosis are likely to be enforced by other proteins that can antagonize or synergize with p53 to regulate apoptosis.35 The maximal increase of Bax expression in HepG2 cells after 18 h incubation with C. vulgaris may be the result of the increase of P53 accumulation that peaked after 2 h.\nAs a herbal extract, C. vulgaris contains a variety of compounds including antioxidants and a glycoprotein that may act on different pathways of tumor cell growth and survival, triggering an antagonistic effect of Bax and Bcl‐2.\nApoptotic signals are generally believed to be mediated through a hierarchy of caspase activation controlled by one of two distinct pathways that are associated with either mitochondrial caspase‐8 or ‐9. The initiating caspases then converge on the central effector caspases, caspases‐3 and ‐7. 
Although we did not find significant activation of downstream caspase‐8 there was, however, a significant increase of the active form of caspase‐3 after 24 h of treatment with C. vulgaris, in a time‐dependent manner. These results demonstrated that the mitochondrial signaling pathway is involved in C. vulgaris‐induced apoptosis of HepG2 cells. It could be postulated that C. vulgaris did not induce the initiator caspase‐8 but it may have activated the other initiator caspase‐9. Among the ten different members of caspases identified in mammalian cells, caspase‐3 may serve as a general mediator of apoptosis. When cells are undergoing apoptosis, executioner or effector caspase‐3 triggers cellular proteins such as poly (ADP‐ribose) polymerase and DNA fragmentation factor, resulting in the characteristic changes of apoptosis.36 Caspase‐3 is synthesized as a 33 kDa inactive proenzyme that requires proteolytic activation. Our results showed that a high level of proenzyme of caspase‐3 was present in untreated tumor cells, and active caspase‐3 gradually increased after C. vulgaris treatment, suggesting that C. vulgaris induced apoptosis through a caspase‐3‐dependent mechanism.", "The results of the present study suggest that the anticancer mechanism of C. vulgaris in hepatoma cells (HepG2) is by inhibiting DNA synthesis, triggering DNA damage and inducing apoptosis. This is shown by the reduced incorporation of BrdU into replicating DNA of HepG2 cells and an increase in the number of DNA damage‐inducing and apoptotic proteins, Bax and caspase‐3 in HepG2 cells treated with C. vulgaris. The possible mechanism is speculated to be an increase in P53, Bax and caspase‐3 expression that would subsequently lead to apoptosis.", "This research was supported by a grant from Ministry of Science, Technology and Innovation (MOSTI)" ]
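The statistical analysis described in the methods above expresses results as mean ± SD over at least three repeats and compares groups with Student's t-test at p < 0.05. The following is a minimal sketch of that kind of comparison in Python (scipy); the absorbance values are made up purely for illustration and this is not the authors' analysis code.

```python
# Sketch of the reported statistical approach: mean +/- SD over >= 3 repeats,
# compared between treated and untreated cultures with Student's t-test (p < 0.05).
# The numbers below are made-up illustrative values, not data from the study.
import numpy as np
from scipy import stats

untreated = np.array([1.02, 0.98, 1.05])   # e.g. BrdU absorbance readings, 3 repeats
treated   = np.array([0.51, 0.47, 0.55])   # e.g. readings after extract treatment

print(f"untreated: {untreated.mean():.2f} +/- {untreated.std(ddof=1):.2f}")
print(f"treated:   {treated.mean():.2f} +/- {treated.std(ddof=1):.2f}")

# Student's t-test (equal variances assumed, as in the classical test)
t_stat, p_value = stats.ttest_ind(untreated, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant = {p_value < 0.05}")
```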
[ "intro", "materials|methods", null, null, null, null, null, null, null, "results", "discussion", null, null ]
[ "HepG2", "Chlorella vulgaris", "DNA damage", "chemopreventive", "apoptosis" ]
Titanium elastic nailing versus hip spica cast in treatment of femoral-shaft fractures in children.
21340544
There is no consensus on treatment of closed femoral-shaft fractures in children. We compared hip spica cast with titanium elastic nailing (TEN) in the treatment of femoral-shaft fractures in children.
BACKGROUND
Forty-six children, 6-12 years old, with simple femoral-shaft fractures were randomized to receive skeletal traction followed by hip spica cast (n = 23) or TEN (n = 23). Length of hospital stay, time to start walking with aids, time to start independent walking, time absent from school, parent satisfaction, and range of knee motion were compared between the two groups 6 months after injury.
MATERIALS AND METHODS
The two groups were similar in background characteristics. Compared with the children treated with spica cast, those treated with TEN had shorter hospital stay (P < 0.001) and took a shorter time to start walking with support or independently (P < 0.001), returned to school sooner (P < 0.001), and had higher parent satisfaction (P = 0.003). Range of knee motion was 138.7 ± 3.4° in the spica cast group and 133.5 ± 13.4° in the TEN group (P = 0.078). Three patients (13.0%) in the spica cast group compared with none in the TEN group had malunion (P = 0.117). Postoperative infection was observed in three patients (13.0%) in the TEN group.
RESULTS
The results showed significant benefits of TEN compared with traction and hip spica cast in the treatment of femoral-shaft fractures in children. Further trials with longer follow-ups and comparison of TEN with other methods, such as external fixation, in children's femoral fractures are warranted.
CONCLUSIONS
[ "Bone Nails", "Casts, Surgical", "Child", "Female", "Femoral Fractures", "Follow-Up Studies", "Humans", "Length of Stay", "Male", "Range of Motion, Articular", "Recovery of Function", "Titanium", "Traction", "Treatment Outcome", "Walking" ]
3052430
Introduction
Femoral-shaft fractures are among the most common fractures of the lower extremity in children, with an annual incidence of up to 1 per 5,000 [1, 2]. There are several different options for treating femoral-shaft fractures in children, including skeletal or skin traction, early or immediate application of a hip spica cast, pontoon spica, closed reduction and minimally invasive plate osteosynthesis, external fixation, plate fixation, and internal fixation with the insertion of intramedullary nails [3, 4]. Selecting the management strategy is dependent on factors such as the presence of other associated injuries or multiple trauma, fracture properties, age, and socioeconomic factors. Because of its clinical effectiveness and low rate of complications, elastic stable intramedullary nailing for fractures of long bones in the skeletally immature patient (e.g., children) has gained widespread popularity. Titanium elastic nailing (TEN) is commonly used to stabilize femoral fractures in school-aged children, but there have been few controlled studies and with only relatively short-term follow-ups assessing the risks and benefits of this procedure compared with those of the traditional traction and application of a spica cast. The results of previous prospective and retrospective studies comparing TEN with traction and a spica cast were mostly in favor of TEN, considering recovery time, complication rate, and in some cases hospital charges [2, 5, 6]. According to the lack of data in this regard, we designed a prospective randomized controlled study to compare TEN with traction and a spica cast in treating femoral fractures in children in terms of recovery and complications.
null
null
Results
During the study period, 55 children presented to the centers with femoral-shaft fractures. Of these, 51 met the inclusion criteria (four patients had open fractures). Five patients did not agree to participate in the study protocol, so 46 children with simple closed femoral fractures (23 in each group) entered and completed the study. There was no significant difference between the two groups in terms of age and gender (P > 0.05). Compared with children treated with spica cast, those treated with TEN had a shorter hospital stay (P < 0.001), took a shorter time to start walking with support or independently (P < 0.001), returned to school sooner (P < 0.001), and had better parent satisfaction (P = 0.003). The range of knee motion was 138.7 ± 3.4° in the spica cast group and 133.5 ± 13.4° in the TEN group, with no significant difference (P = 0.078) (Table 1).

Table 1 Comparison of outcomes between groups
                                            TEN (n = 23)          Spica cast (n = 23)   P value
Age                                         7.1 ± 1.8             6.5 ± 1.5             0.225*
Male/female                                 15 (65.2%)/8 (34.7%)  16 (69.5%)/7 (30.4%)  0.500**
Hospital stay (days)                        6.9 ± 2.9             20.5 ± 5.8            <0.001*
Time to start walking with aids (days)      17.6 ± 10.2           65.6 ± 10.7           <0.001*
Time to start walking independently (days)  35.2 ± 13.2           80.0 ± 10.1           <0.001*
Time to return to school (days)             31.5 ± 13.4           64.3 ± 19.6           <0.001*
Parent satisfaction: excellent              12 (52.1%)            2 (8.6%)              0.003**
Parent satisfaction: good                   11 (47.8%)            15 (65.2%)
Parent satisfaction: moderate               0                     2 (8.6%)
Parent satisfaction: weak                   0                     4 (17.3%)
Knee range of motion (degrees)              133.5 ± 13.4          138.7 ± 3.4           0.078*
Malunion                                    0                     3 (13.0%)             0.117**
Infection                                   3 (13.0%)             0                     0.117**
Data are presented as mean ± standard deviation or number (%)
* Independent sample t-test
** Chi-square test

Three patients (13.0%) in the spica cast group had malunion, whereas none occurred in the TEN group (P = 0.117). Three patients (13.0%) had postoperative infection, all in the TEN group; none was observed in the spica cast group (P = 0.117) (Table 1). There was also one transient peroneal nerve injury after TEN, which resolved spontaneously. No arterial injury occurred in any patient during the procedures.
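The between-group comparisons above use an independent sample t-test for continuous outcomes and a chi-square test for categorical outcomes (analysed in SPSS by the authors). A minimal sketch of the same style of test, using only the summary statistics published in Table 1 (hospital stay 6.9 ± 2.9 vs 20.5 ± 5.8 days and malunion 0/23 vs 3/23); it is not the authors' analysis code, and the exact p values need not match theirs.

```python
# Sketch: reproducing the style of between-group tests reported in Table 1,
# from the published summary statistics only (not the authors' SPSS analysis).
from scipy import stats

# Continuous outcome: hospital stay (days), mean +/- SD, n = 23 per group
t_stat, p_cont = stats.ttest_ind_from_stats(
    mean1=6.9,  std1=2.9, nobs1=23,   # TEN group
    mean2=20.5, std2=5.8, nobs2=23)   # spica cast group
print(f"Hospital stay: t = {t_stat:.2f}, p = {p_cont:.2g}")

# Categorical outcome: malunion, 0/23 (TEN) vs 3/23 (spica cast)
table = [[0, 23], [3, 20]]            # rows: groups; columns: malunion yes / no
chi2, p_cat, dof, expected = stats.chi2_contingency(table)
print(f"Malunion: chi2 = {chi2:.2f}, p = {p_cat:.3f}")
# With expected cell counts this small, Fisher's exact test is often preferred:
odds_ratio, p_fisher = stats.fisher_exact(table)
print(f"Fisher exact p = {p_fisher:.3f}")
```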
Conclusion
The results showed significant benefits for TEN compared with traction and hip spica cast in treating femoral-shaft fractures in children. Complication rates associated with hip spica cast was also higher than that associated with TEN. Further trials with longer follow-ups and comparison of TEN with other methods, such as external fixation, in children’s femoral-shaft fractures are warranted.
[]
[]
[]
[ "Introduction", "Materials and methods", "Results", "Discussion", "Conclusion" ]
[ "Femoral-shaft fractures are among the most common fractures of the lower extremity in children, with an annual incidence of up to 1 per 5,000 [1, 2]. There are several different options for treating femoral-shaft fractures in children, including skeletal or skin traction, early or immediate application of a hip spica cast, pontoon spica, closed reduction and minimally invasive plate osteosynthesis, external fixation, plate fixation, and internal fixation with the insertion of intramedullary nails [3, 4]. Selecting the management strategy is dependent on factors such as the presence of other associated injuries or multiple trauma, fracture properties, age, and socioeconomic factors.\nBecause of its clinical effectiveness and low rate of complications, elastic stable intramedullary nailing for fractures of long bones in the skeletally immature patient (e.g., children) has gained widespread popularity. Titanium elastic nailing (TEN) is commonly used to stabilize femoral fractures in school-aged children, but there have been few controlled studies and with only relatively short-term follow-ups assessing the risks and benefits of this procedure compared with those of the traditional traction and application of a spica cast. The results of previous prospective and retrospective studies comparing TEN with traction and a spica cast were mostly in favor of TEN, considering recovery time, complication rate, and in some cases hospital charges [2, 5, 6]. According to the lack of data in this regard, we designed a prospective randomized controlled study to compare TEN with traction and a spica cast in treating femoral fractures in children in terms of recovery and complications.", "This randomized controlled trial was conducted from February 2009 to January 2010 in the Department of Orthopedic Surgery of two university hospitals in Isfahan, Iran. Children 6–12 years of age with simple femoral-shaft fractures participated in the study consecutively. Exclusion criteria were segmental Winquist types III and IV comminuted fractures, previously diagnosed neuromuscular disease (e.g., cerebral palsy), metabolic bone disorders (e.g., osteomalacia), or pathological fractures. Considering α = 0.05, study power = 80%, and d = 5 days as the minimal expected difference between the two groups, a sample size of 22 patients was considered for each group. Parents of all children gave informed consent prior to the study, which was authorized by the local Scientific Ethical Committee of Isfahan University of Medical Sciences, Isfahan, Iran, and was performed in accordance with the ethical standards of the 1964 Declaration of Helsinki as revised in 2000. Also, the study has been registered at http://www.clinicaltrials.gov (NCT01190696).\nUsing random allocation software, patients were divided into two groups of TEN and spica cast and were treated by a single orthopedic surgeon. Hip-supported long-limb casting splints without skeletal traction was applied for all patients at admission for controlling pain and preventing deformities. For patients in the TEN group, the standard TEN technique was applied according to the method described by Flynn and colleagues [6]. Operation was done under general anesthesia on a fracture table. After a linear incision, opening the fascia, and passing the muscle fibers, a hole was opened in the bone and enlarged. Then, each titanium elastic nail was retrogradely placed through the distal part of the femur. Each nail was 40% of the canal diameter at the narrowest site of the femoral shaft. 
Reduction and fixation were done under C-arm image intensifier. All patients received first-generation cephalosporin prophylaxis, which was initiated 12 h preoperatively and continued 24–48 h postoperatively [6]. Patients in the spica cast group were treated with skeletal traction for about 3 weeks and then with a spica cast. The traction pin was inserted in the distal part of the femur in the operating room under general anesthesia. Control radiography was carried out after the traction and later at 1-week intervals. The pin was removed after sufficient callus consolidation had been achieved, and a one-and-a-half hip spica was applied (with the hips at 20–30° of flexion and the limb in 10–15° external rotation) in the operating room under general anesthesia. The cast was maintained for about 1 month; after cast removal, patients were referred for physical therapy for initial gait training and additional physical therapy if a satisfactory range of motion was not achieved.\nThe length of hospital stay was recorded, and follow-up visits were made at 2, 4, 12, and 24 weeks after discharge. Limb alignment and rotation, range of knee motion, and incision and skin infections were assessed at each visit. Recovery milestones were time to start walking with aids, time to start independent walking, time absent from school, and parental satisfaction, which ranged from weak = 0 to excellent = 4.\nMajor complications were defined as those leading to unscheduled operative treatment, malunion, or nonunion. Nonunion was defined as the absence of osseous union >6 months after the injury.\nData were analyzed using SPSS software (Windows version 16.0) by independent sample t-test and chi-square test for comparing means and categorical data, respectively, between groups.", "During the study period, 55 children presented to the centers with femoral-shaft fractures. Of these, 51 met the inclusion criteria (four patients had open fractures). Five patients did not agree to participate in the study protocol, so 46 children with simple closed femoral fractures (23 in each group) entered and completed the study. There was no significant difference between the two groups in terms of age and gender (P > 0.05). Compared with children treated with spica cast, those treated with TEN had a shorter hospital stay (P < 0.001), took a shorter time to start walking with support or independently (P < 0.001), returned to school sooner (P < 0.001), and had better parent satisfaction (P = 0.003).
The range of knee motion was 138.7 ± 3.4° in the spica cast group and 133.5 ± 13.4° in the TEN group, with no significant difference (P = 0.078) (Table 1).

Table 1 Comparison of outcomes between groups
                                            TEN (n = 23)          Spica cast (n = 23)   P value
Age                                         7.1 ± 1.8             6.5 ± 1.5             0.225*
Male/female                                 15 (65.2%)/8 (34.7%)  16 (69.5%)/7 (30.4%)  0.500**
Hospital stay (days)                        6.9 ± 2.9             20.5 ± 5.8            <0.001*
Time to start walking with aids (days)      17.6 ± 10.2           65.6 ± 10.7           <0.001*
Time to start walking independently (days)  35.2 ± 13.2           80.0 ± 10.1           <0.001*
Time to return to school (days)             31.5 ± 13.4           64.3 ± 19.6           <0.001*
Parent satisfaction: excellent              12 (52.1%)            2 (8.6%)              0.003**
Parent satisfaction: good                   11 (47.8%)            15 (65.2%)
Parent satisfaction: moderate               0                     2 (8.6%)
Parent satisfaction: weak                   0                     4 (17.3%)
Knee range of motion (degrees)              133.5 ± 13.4          138.7 ± 3.4           0.078*
Malunion                                    0                     3 (13.0%)             0.117**
Infection                                   3 (13.0%)             0                     0.117**
Data are presented as mean ± standard deviation or number (%)
* Independent sample t-test
** Chi-square test

Three patients (13.0%) in the spica cast group had malunion, whereas none occurred in the TEN group (P = 0.117). Three patients (13.0%) had postoperative infection, all in the TEN group; none was observed in the spica cast group (P = 0.117) (Table 1). There was also one transient peroneal nerve injury after TEN, which resolved spontaneously. No arterial injury occurred in any patient during the procedures.", "Although spica casting with skeletal traction is traditionally used for femoral-shaft fractures in children, recent studies have shown its possible effects on social, economic, educational, and emotional costs. In contrast, elastic intramedullary nailing of femoral-shaft fractures has gained extensive popularity because of its better clinical and psycho-socioeconomic outcomes with lower risk of complications [5–7]. In our study, we showed the benefits of the TEN surgical method versus traction and spica casting with respect to hospital stay, time to start walking with support or independently, returning to school, and parent satisfaction. Our findings were in agreement with the results of many studies that showed the efficacy and benefits of elastic nails for treating femoral-shaft fractures. Ligier et al. [8] used elastic intramedullary nails (anterograde or retrograde) with Kirschner wires or pins. They reported more desirable outcomes in >120 femoral-shaft fractures treated with TEN. In Reeve et al.'s study [9], 41 patients with femoral fractures were treated with traction and casting, and 49 cases underwent intramedullary nailing surgery. They showed that complications were higher in the traction and casting group in comparison with the group undergoing surgery.\nIn our study, the duration of hospital stay was significantly longer in the traction and spica cast group than in the TEN group. This is in conformity with other studies [6, 9–11], which reported shorter hospital stays with TEN, but is in contrast to Saseendar's study [12].
This difference was due to the fact in Saseendar’s study, patients in the TEN group were discharged only after suture removal to have a closer follow-up for the presence of early postoperative complications (if any), and the spica patients were usually discharged a day or two following spica casting after assessing for the presence of plaster-of-Paris-related complications.\n Our findings showed shorter time to start walking with support or independently and sooner return to school in the TEN group compared with the spica casting group. It is probably because of better contact of the fracture surfaces and anatomical reduction in patients who underwent TEN surgery. Such earlier recovery milestones have also been shown by Greisberg et al. [10] and Flynn et al. [6]. \nIn our study, a higher rate of malunion was observed in the traction and spica group compared with the TEN group. This finding conforms to the results of a similar study conducted by Kirby et al., which compared traction and cast with intramedullary nailing and reported malunion only in the traction and casting group [13]. In other studies, the rate of malunion in the traction and cast group was higher than that in the TEN group [11, 14].\nOur study had certain limitations. Treatment cost, limb length, and angulation degree were not measured in either group. As with any other new procedure, we had a small sample size, and thus the results could show falsely high complication rates.", "The results showed significant benefits for TEN compared with traction and hip spica cast in treating femoral-shaft fractures in children. Complication rates associated with hip spica cast was also higher than that associated with TEN. Further trials with longer follow-ups and comparison of TEN with other methods, such as external fixation, in children’s femoral-shaft fractures are warranted." ]
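The methods above report a sample-size calculation with α = 0.05, power = 80%, and a minimal expected between-group difference of d = 5 days, yielding 22 patients per group. Below is a minimal sketch of the standard two-sample formula behind such a calculation; the within-group standard deviation is not stated in the paper, so the value used here is an assumption chosen only to illustrate the arithmetic.

```python
# Sketch of a standard two-sample sample-size calculation (normal approximation):
#   n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / d^2
# alpha, power and d come from the methods text; sigma is an assumed value,
# since the SD the authors used is not reported.
from scipy.stats import norm

alpha, power = 0.05, 0.80
d = 5.0        # minimal expected difference between groups (days)
sigma = 5.9    # assumed within-group SD (days) -- illustrative only

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96
z_beta = norm.ppf(power)            # ~0.84
n_per_group = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / d ** 2
print(f"n per group ~= {n_per_group:.1f}")   # ~22 under these assumptions
```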
[ "introduction", "materials|methods", "results", "discussion", "conclusion" ]
[ "Spica cast", "Titanium elastic nailing", "Femoral-shaft fracture", "Pediatrics" ]
DTI studies in patients with Alzheimer's disease, mild cognitive impairment, or normal cognition with evaluation of the intrinsic background gradients.
21340578
The objective of the study was to explore the impact of the background gradients on diffusion tensor (DT) magnetic resonance imaging (DT-MRI) in patients with Alzheimer's disease (AD), mild cognitive impairment (MCI), or cognitively normal (CN) aging.
INTRODUCTION
Two DT-MRI sets with positive and negative polarities of the diffusion-sensitizing gradients were obtained in 15 AD patients, 18 MCI patients, and 16 CN control subjects. The maps of mean diffusivity (MD) and fractional anisotropy (FA) were computed separately for positive (p: pMD and pFA) and negative (n: nMD and nFA) polarities, and we computed the geometric mean (gm) of the DT-MRI to obtain the gmFA and gmMD with reducing the background gradient effects. Regional variations were assessed across the groups using one-way analysis of variance.
METHODS
Increased regional gmMD values in the AD subjects, as compared to the regional gmMD values in the MCI and CN subjects, were found primarily in the frontal, limbic, and temporal lobe regions. We also found increased nMD and pMD values in the AD subjects compared to those values in the MCI and CN subjects, including in the temporal lobe and the left limbic parahippocampal gyrus white matter. Results of comparisons among the three methods showed that the left limbic parahippocampal gyrus and right temporal gyrus were the increased MD in the AD patients for all three methods.
RESULTS
Background gradients affect the DT-MRI measurements in AD patients. Geometric average diffusion measures can be useful to minimize the intrinsic local magnetic susceptibility variations in brain tissue.
CONCLUSION
[ "Aged", "Aged, 80 and over", "Aging", "Alzheimer Disease", "Brain", "Case-Control Studies", "Cognition", "Cognitive Dysfunction", "Diffusion Magnetic Resonance Imaging", "Female", "Frontal Lobe", "Hippocampus", "Humans", "Image Processing, Computer-Assisted", "Male", "Temporal Lobe" ]
3184226
Introduction
Diffusion tensor (DT) magnetic resonance imaging (DT-MRI) is sensitive to the directionality of the random motion of water in tissue, and it involves the application of external diffusion-sensitizing magnetic field gradients along different orientations to quantify the properties of diffusion. Numerous DT-MRI studies of neurodegenerative diseases have reported abnormal diffusion values in the brain, including in Alzheimer's disease (AD), a devastating condition that leads to progressive memory loss and rapid cognitive decline. Although AD is generally considered to affect primarily the gray matter, several DT-MRI studies have found changes in isotropic and anisotropic diffusion in white matter associated with AD progression [1–6]. The diffusion abnormalities in AD were predominantly found in the posterior regions of the brain, such as the hippocampal gyrus, the temporal white matter, the splenium of the corpus callosum, and the posterior cingulum. In patients with mild cognitive impairment (MCI), which is considered to represent a transitional stage between normal aging and AD, the changes seem to parallel those in AD, with similar posterior regions showing abnormalities. In contrast to AD and MCI, the diffusion abnormalities in subjects with age-associated changes (cognitively normal, CN) occur in the frontal regions, and specifically in the frontal white matter, the anterior cingulum, and the genu of the corpus callosum [7]. Although the marked differences seen on DT-MRI between AD or MCI and normal aging have been considered as potential imaging markers [8–10], the underlying mechanism of the DT-MRI changes remains largely unexplained. In particular, local variations in cell density, oligodendrocytes, myelination, and also amyloid plaques, which are a hallmark of AD [11–13], can be the source of local magnetic susceptibility variations, which in turn can alter water diffusion. Furthermore, it has been shown in rat brain [14] that brain iron, which occurs in high concentrations in oligodendrocytes and plaques [15–17], can modulate the diffusion measurements. These findings suggest that local magnetic susceptibility variations in brain tissue may contribute to the DT-MRI abnormalities seen in AD and MCI, in the form of intrinsic susceptibility-dependent background gradients that add to the external diffusion-weighting gradients. Previous DT-MRI studies did not take these local variations into account, so it remains unclear where in the brain, and in which patient groups, the diffusion measures differ when the background gradients are and are not considered. The overall goal of this study was therefore to investigate whether intrinsic background gradients contribute to the pattern of regional diffusion abnormalities in patients with AD, MCI, and CN, potentially reflecting the underlying pathological processes associated with brain iron. Specifically, we hypothesized that AD patients show a systematic pattern of higher regional background gradients compared to the MCI and CN subjects.
null
null
Results
The demographic characteristics of the subjects are summarized in Table 1. There were no significant differences in age and gender across the groups. As expected, the MMSE scores were significantly lower for the AD patients as compared to those of the other groups (p < 0.05), but the MMSE scores did not significantly differ between the MCI and CN subjects (p > 0.05). Figure 1 shows the results of the voxel-wise comparisons of the pMD, nMD, and gmMD maps between the AD and MCI groups (Fig. 1a) and between the AD and CN groups (Fig. 1b) based on one-way ANOVA tests. Compared to the MCI patients, the AD patients had increased pMD values mainly in the temporal and frontal lobes. The AD patients also had increased nMD values (AD > MCI) mainly in the temporal lobe. Moreover, we found that the AD patients had increased gmMD values (AD > MCI) predominantly in the right superior temporal gyrus, the left limbic parahippocampal gyrus white matter, and the left superior and medial frontal gyrus. There were no significantly decreased MD values in the gmMD, pMD, and nMD maps from the patients with AD as compared with that of the MCI subjects. The detailed results are summarized in Table 2. Fig. 1Results of the voxel-wise comparisons of mean diffusivity (MD; gmMD, nMD, pMD) between the Alzheimer's disease (AD) and mild cognitive impairment (MCI) groups (a) and between the AD and cognitive normal (CN) groups (b) using one-way ANOVA tests. There were no decreased MD values for all three maps of gmMD, pMD, and nMD in the patients with AD as compared with that of the MCI or CN patients. There were no significant differences between the MCI and CN patients Table 2The significantly different regions when comparing between the AD and MCI groups (AD > MCI) using the MD with (gmMD) and without (nMD and pMD) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as the cluster level; one-way ANOVA tests)Talairach coordinateClusterTZRegionBA ROI# X Y Z gm50−4−226025.565.27R. temporal subgyrus, WM50−18−46024.834.64R. superior temporal gyrus, GM2258−6−46024.84.61R. superior temporal gyrus, WMROI1−32−24−222645.325.07L. limbic parahippocampal gyrus, WMROI238−72323234.944.73R. occipital subgyrus, WM−462143034.814.62L. superior frontal gyrus, GM9−450203034.654.47L. medial frontal gyrus, WM−662−43034.414.26L. medial frontal gyrus, WMNeg50−6−201,0415.785.46R. temporal subgyrus, WM50−18−41,0415.024.8R. superior temporal gyrus, GM2258−6−41,0415.034.81R. superior temporal gyrus, WMROI1−30−22−225465.735.41L. limbic parahippocampal gyrus, WM38−72304325.184.95R. temporal subgyrus, WM36−7264323.943.83R. middle occipital gyrus, WM−26−6−325464.354.21L. limbic parahippocampal gyrus, WM−322−225464.494.33L. limbic parahippocampal gyrus, WMPos40−74301,7594.824.63R. middle temporal gyrus, WMROI3−462101,0245.385.11L. medial frontal gyrus, GM−450201,0245.445.17L. medial frontal gyrus, WM10−662−41,0245.234.98L. medial frontal gyrus, WM24−86161,7595.745.42R. occipital ceneus, WMROI454−6461,75954.78R. middle temporal gyrus, WM−56−44325235.224.98L. parietal supramarginal gyrus, WM−50−38305234.414.26L. interior parietal lobule, WM−58−34185234.44.25L. superior temporal gyrus, WM−302−227324.494.33L. limbic parahippocampal gyrus, GM34−24−4−327323.653.56L. limbic uncus, WMROI550−4−227795.345.09R. temporal subgyrus, WM−32−26−227325.014.79L. limbic parahippocampal gyrus, WMROI24040264084.714.52R. middle frontal gyrus, GM950−6−47794.64.43R. sublobar insula4830284084.524.36R. middle frontal gyrus, GM950−18−47794.664.49R. 
superior temporal gyrus, GM22464684084.033.91R. middle frontal gyrus, WM gm geometric mean, Neg negative, Pos positive, BA Brodmann area, R. right, L. left, WM white matter, GM gray matter Results of the voxel-wise comparisons of mean diffusivity (MD; gmMD, nMD, pMD) between the Alzheimer's disease (AD) and mild cognitive impairment (MCI) groups (a) and between the AD and cognitive normal (CN) groups (b) using one-way ANOVA tests. There were no decreased MD values for all three maps of gmMD, pMD, and nMD in the patients with AD as compared with that of the MCI or CN patients. There were no significant differences between the MCI and CN patients The significantly different regions when comparing between the AD and MCI groups (AD > MCI) using the MD with (gmMD) and without (nMD and pMD) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as the cluster level; one-way ANOVA tests) gm geometric mean, Neg negative, Pos positive, BA Brodmann area, R. right, L. left, WM white matter, GM gray matter Compared to the CN subjects, the AD patients had increased pMD values mainly in the temporal gyrus. The AD patients also had increased nMD values (AD > CN) mainly in the temporal gyrus. Moreover, we found that the AD patients had increased gmMD values (AD > CN, Fig. 1b) predominantly in the left limbic parahippocampal gyrus, the left limbic uncus, the left and right temporal subgyrus, and the right middle temporal gyrus. These detailed results are summarized in Table 3. Table 3The significantly different regions when comparing between the AD and CN groups (AD > CN) using the MD with (gmMD) and without (nMD and pMD) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as the cluster level; one-way ANOVA tests)Talairach coordinateClusterTZRegionBA ROI# X Y Z gm−32−24−222895.184.94L. limbic parahippocampal gyrus, WMROI2−26−8−282894.324.18L. limbic uncus, GM, amygdala−300−202894.294.15L. temporal subgyrus, WM50−2−221574.994.78R. temporal subgyrus, WM40−74285304.94.7R. middle temporal gyrus, WMROI356−6485304.694.51R. middle temporal gyrus, WM32−66345304.594.42R. temporal subgyrus, WMNeg−32−24−224155.475.19L. limbic parahippocampal gyrus, WMROI2−26−8−304154.94.69L. limbic parahippocampal gyrus, WM−300−204154.224.09L. temporal subgyrus, WM50−2−222945.094.86R. temporal subgyrus, WM40−74286224.854.65R. middle temporal gyrus, WMROI334−66346224.74.52R. temporal subgyrus, WM48−18−42944.44.24R. sublobar insula, GM1346−6026224.474.31R. temporal subgyrus, WMPos−32−26−224094.874.67L. limbic parahippocampal gyrus, WMROI2−28−2−204094.454.29L. limbic parahippocampal gyrus, WM42−74281,0105.224.95R. middle temporal gyrus, WMROI354−6481,0105.285.03R. middle temporal gyrus, WM30−66341,0104.814.62R. occipital subgyrus, WM26−84164855.124.89R. occipital cuneus, WMROI430−9444854.734.55R. occipital subgyrus, WM38−94−24854.394.24R. inferior occipital gyrus, WM R. right, L. left The significantly different regions when comparing between the AD and CN groups (AD > CN) using the MD with (gmMD) and without (nMD and pMD) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as the cluster level; one-way ANOVA tests) R. right, L. left Figure 2 shows the differences in the pFA maps between the MCI or CN and AD subjects. Compared to the MCI patients, the AD patients had increased pFA values (AD > MCI) mainly in the temporal and frontal gyrus and the posterior cingulate. The detailed results are also summarized in Table 4. 
Compared to the CN subjects, the AD patients also had increased pFA values (AD > CN) mainly in the left inferior and superior frontal gyrus. We did not find any significant differences in the pFA between the MCI and CN subjects. Similarly, we did not find any significant differences in the gmFA or nFA across the three groups. The results of the FA are summarized in Table 5. Fig. 2Results of the voxel-wise comparisons of factional anisotropy (FA; gmFA, nFA, pFA) among the three AD, MCI, and CN groups using one-way ANOVA tests. The pFA maps were not significantly different between the MCI and CN groups. The gmFA and nFA maps were not significantly different among the three AD, MCI, and CN groups at all Table 4The significantly different regions when comparing between the AD and MCI groups (AD > MCI) using the FA maps with (gmFA) and without (nFA and pFA) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as a cluster level; one-way ANOVA tests)Talairach coordinateClusterTZRegionBA ROI# X Y Z gmNo significant differenceNegNo significant differencePos−58−32144,9755.955.6L. superior temporal gyrus, WMROI6−542484,9755.895.56L. inferior frontal gyrus, GM4552−12221,5505.755.43R. sublobar extranuclear, WM502101,5505.465.18R. sublobar insula, WM38−24181,5505.914.71R. sublobar insula, GM34−30−163835.515.23R. limbic parahippocampal gyrus, WM50−40−223833.93.79R. temporal fusiform gyrus, GM1364−16−64594.914.71R. superior temporal gyrus, WMROI758−26−44595.264.12R. superior temporal gyrus, WM37 ROI8−2−74267614.834.63L. occipital cuneus, GM18−2−60307614.654.47L. limbic posterior cingulate−2−88187614.474.31L. occipital cuneus, GM18−56−66104044.724.53L. middle temporal gyrus, WM−60−5824044.244.1L. middle temporal gyrus, WM R. right, L. left Table 5The significantly different regions when comparing between the AD and CN groups (AD > CN) using the FA maps with (gmFA) and without (nFA and pFA) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as a cluster level; one-way ANOVA tests)Talairach coordinateClusterTZRegionBA ROI# X Y Z gmNo significant differenceNegNo significant differencePos−4842−42,9866.496.05L. inferior frontal gyrus, WM−562282,9865.785.46L. inferior frontal gyrus−3656163574.814.62L. superior frontal gyrus, GM10−245643574.394.24L. superior frontal gyrus, WMROI9−2854243574.294.15L. superior frontal gyrus, WMROI10 R. right, L. left Results of the voxel-wise comparisons of factional anisotropy (FA; gmFA, nFA, pFA) among the three AD, MCI, and CN groups using one-way ANOVA tests. The pFA maps were not significantly different between the MCI and CN groups. The gmFA and nFA maps were not significantly different among the three AD, MCI, and CN groups at all The significantly different regions when comparing between the AD and MCI groups (AD > MCI) using the FA maps with (gmFA) and without (nFA and pFA) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as a cluster level; one-way ANOVA tests) R. right, L. left The significantly different regions when comparing between the AD and CN groups (AD > CN) using the FA maps with (gmFA) and without (nFA and pFA) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as a cluster level; one-way ANOVA tests) R. right, L. 
left The significant regions that overlap in all three DT-MRI sets (positive gradients, negative gradients, and geometric mean) as well as those regions that differ are listed in Table 6 for the MD (pMD, nMD, and gmMD) and in Table 7 for the FA measures. The overlapping regions of increased MD in the AD patients relative to the MCI subjects (AD > MCI) included the right superior temporal gyrus and left limbic parahippocampal gyrus. In contrast, the nonoverlapping regions of increased MD in the AD patients relative to the MCI patients (AD > MCI) included: the right occipital subgyrus and the left medial and superior frontal gyrus for the gmMD; the right middle occipital gyrus for the nMD; the right middle temporal gyrus, the left medial frontal gyrus, the right occipital cuneus, the left parietal supramarginal gyrus, the left limbic parahippocampal gyrus, the left limbic uncus, the right insula, and the right middle frontal gyrus for the pMD. Table 6Summaries of the common and different regions from Tables 2 and 3 for the mean diffusivity (MD)RegionCommon regions (AD > MCI)ROI1R. superior temporal gyrus, WMR. temporal subgyrus, WMR. superior temporal gyrus, GMROI2L. limbic parahippocampal gyrus, WMDifferent regions (AD > MCI)gmMDR. occipital subgyrus, WML. superior frontal gyrus, GML. medial frontal gyrus, WMnMDR. middle occipital gyrus, WMpMD (ROI3)R. middle temporal gyrus, WMR. and L. medial frontal gyrus, GM and WMROI4R. occipital ceneus, WML. parietal supramarginal gyrus, WML. interior parietal lobule, WML. limbic parahippocampal gyrus, GMROI5L. limbic uncus, WMR. sublobar insulaCommon regions (AD > CN)ROI2L. limbic parahippocampal gyrus, WMROI3R. middle temporal gyrus, WMDifferent regions (AD > CN)gmMDL. limbic uncus, GM, AmygdalaR. and L. temporal subgyrus, WMnMDR. and L. temporal subgyrus, WMR. sublobar insula, GMpMDR. temporal subgyrus, WMROI4R. occipital cuneus, WMR. occipital subgyrus, WMR. inferior occipital gyrus, WM R. right, L. left Table 7Summaries of the common and different regions from Tables 2 and 3 for fractional anisotropy (FA)RegionCommon regions (AD > MCI)No common regionDifferent regions (AD > MCI)pFA (ROI 6,7,8)R. and L. superior temporal gyrus, WML. inferior frontal gyrus, GMR. sublobar extranuclear, WMR. sublobar insula, GM and WMR. limbic parahippocampal gyrus, WMR. temporal fusiform gyrus, GML. occipital cuneus, GML. limbic posterior cingulateL. middle temporal gyrus, WMCommon regions (AD > CN)No common regionDifferent regions (AD > CN)pFAL. inferior frontal gyrus, WMROI 9, 10L. superior frontal gyrus, GM and WM R. right, L. left Summaries of the common and different regions from Tables 2 and 3 for the mean diffusivity (MD) R. right, L. left Summaries of the common and different regions from Tables 2 and 3 for fractional anisotropy (FA) R. right, L. left Similarly, the overlapping regions of increased MD in the AD patients relative to that of the control subjects (AD > CN) included the left limbic parahippocampal gyrus and right middle temporal gyrus. In contrast, the nonoverlapping regions of increased MD in the AD patients relative to that of the control subjects included: the left limbic uncus and the left and right temporal subgyrus for the gmMD; the left and right temporal subgyrus and the right sublobar insula for the nMD; the right temporal subgyrus, the right occipital subgyrus, the right occipital cuneus and the right inferior occipital gyrus for the pMD. 
Because the gmFA and nFA differences between the AD patients and the other two groups of subjects were not significant, there were no overlapping regions for the FA measures. Tables 8 and 9 list the results of ROI analyses for the MD and the FA, respectively. As we can see in the tables, MD values were significantly different between AD and MCI or between AD and CN. There was no significant difference between MCI and CN. Those results were the same as those from VBM analyses. Table 8The ROI data and results of the corresponding statistical test for the mean diffusivity (MD) valuesROISubjects p valueADMCICNAD–MCIAD–CNMCI–CNSuperior temporal gyrus, ROI1, X ±58, Y −6, z −4Rtgm1.045±0.1560.875±0.0720.908±0.0840.000260.004900.22578Rtneg1.049±0.1610.870±0.0760.907±0.0910.000220.005000.21123Rtpos1.042±0.1530.879±0.0690.909±0.0790.000300.004500.24897Ltgm1.023±0.1550.911±0.1170.955±0.0990.024890.154340.25234Ltneg1.027±0.1630.917±0.1270.960±0.1080.037550.185080.30315Ltpos1.022±0.1500.907±0.1100.954±0.0920.016350.134880.19277Limbic parahippocampal gyrus, ROI 2, X ±32, y −24, z −22Rtgm0.100±0.1110.891±0.0710.882±0.0600.001870.000910.69383Rtneg0.933±0.1110.877±0.0740.875±0.0610.001120.000880.93385Rtpos1.007±0.1110.906±0.0740.890±0.0580.003950.000880.49030Ltgm1.036±0.1160.879±0.0600.882±0.0750.000020.000120.92913Ltneg1.031±0.1110.868±0.0580.870±0.0740.000010.000050.94042Ltpos1.036±0.1180.890±0.0620.893±0.0780.000080.000390.90633Middle temporal gyrus, ROI 3, X ±40, y −74, z 28Rtgm0.963±0.0990.837±0.0620.832±0.0580.000110.000090.78235Rtneg0.963±0.1080.832±0.0620.836±0.0620.000130.000340.89277Rtpos0.966±0.1050.843±0.0640.829±0.0540.000250.000080.49316Ltgm0.944±0.1150.852±0.0590.875±0.0780.005900.059850.33837Ltneg0.944±0.1180.846±0.0650.871±0.0770.005110.050680.30961Ltpos0.947±0.1140.859±0.0540.879±0.0810.010480.149420.23174Occipital cuneus, ROI 4, X ±26, y −84, z 16Rtgm0.948±0.0900.829±0.0510.839±0.0580.000040.000350.61712Rtneg0.934±0.0820.828±0.0590.839±0.0600.000150.000850.59032Rtpos0.962±0.1060.830±0.0460.838±0.0570.000040.000310.65473Ltgm0.924±0.1030.853±0.0800.879±0.0630.035070.150810.31487Ltneg0.920±0.1040.853±0.0870.873±0.0630.051820.132210.45565Ltpos0.927±0.1070.853±0.0750.884±0.0640.027150.183120.21208Limbic uncus, ROI 5, X ±24, y −4, z −32Rtgm1.047±0.1420.950±0.1030.928±0.0660.031110.005320.46841Rtneg1.040±0.1650.933±0.1110.918±0.0810.034750.013530.65553Rtpos1.053±0.1350.967±0.1000.943±0.0650.045220.006920.42162Ltgm1.055±0.1180.962±0.0710.912±0.0970.000530.000900.61871Ltneg1.053±0.1190.912±0.0740.899±0.0990.000230.000490.66891Ltpos1.062±0.1240.939±0.0690.923±0.0980.001180.001660.57150 Rt right, Lt left, AD Alzheimer's disease, MCI mild cognitive impairment, CN cognitive normal, Gm geometric mean, neg negative, p positiveThe ROIs are listed in the table above. 
The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups Table 9The ROI data and results of the corresponding statistical test for the fractional anisotropy (FA) valuesROISubjects p valueADMCICNAD–MCIAD–CNMCI–CNSuperior temporal gyrus, ROI 6, x ±58, y −32, z +14Rtgm0.238±0.0150.225±0.0140.233±0.0340.022070.658770.35914Rtneg0.293±0.0220.269±0.0140.280±0.0360.000770.247910.23248Rtpos0.292±0.0270.269±0.0160.276±0.0460.004670.251590.54348Ltgm0.227±0.0160.204±0.0140.213±0.0160.000140.023290.10851Ltneg0.282±0.0260.250±0.0170.266±0.0210.000240.075570.02310Ltpos0.296±0.0270.255±0.0200.267±0.0230.000020.003410.12132Superior temporal gyrus, ROI 7, x ±64, y −16, z −6Rtgm0.212±0.0230.190±0.0130.194±0.0120.001050.008070.32091Rtneg0.264±0.0280.237±0.0140.246±0.0200.001370.044800.18303Rtpos0.271±0.0360.236±0.0150.244±0.0180.000620.011910.14461Ltgm0.120±0.0170.190±0.0160.181±0.0190.101160.006930.13342Ltneg0.251±0.0150.240±0.0210.233±0.0290.120770.041210.40239Ltpos0.265±0.0360.238±0.0150.229±0.0210.006670.002020.18436Superior temporal gyrus, ROI 8, x ±58, y −26, z −4Rtgm0.231±0.0230.210±0.0160.217±0.2340.004770.119930.29256Rtneg0.276±0.0260.254±0.0210.265±0.0300.011280.319820.19190Rtpos0.284±0.0320.249±0.0130.262±0.0280.000180.051070.08649Ltgm0.223±0.0260.218±0.0200.220±0.0200.557590.679710.85453Ltneg0.270±0.0200.266±0.0260.268±0.0290.620740.848380.80622Ltpos0.281±0.0300.263±0.0180.269±0.0260.038180.222050.44215Superior frontal gyrus, ROI 9, x ±24, y +56, z +4Rtgm0.253±0.0270.255±0.0260.254±0.0330.879670.939170.95524Rtneg0.303±0.0220.293±0.0230.301±0.0330.241570.909150.40468Rtpos0.311±0.0300.302±0.0360.295±0.0300.473680.151430.51314Ltgm0.268±0.0250.244±0.0270.247±0.0250.014530.029800.73554Ltneg0.311±0.0260.289±0.0340.302±0.0300.051340.398980.24469Ltpos0.347±0.0450.303±0.0340.298±0.0300.002980.001250.67386Superior frontal gyrus, ROI 10, x ±28, y +58, z +24Rtgm0.205±0.0270.188±0.0270.182±0.0280.088420.030570.54784Rtneg0.249±0.0420.235±0.0320.235±0.0330.283680.302110.98821Rtpos0.279±0.0460.248±0.0410.239±0.0370.047140.011250.49593Ltgm0.203±0.0270.178±0.0270.168±0.0420.009850.009890.44181Ltneg0.258±0.0280.229±0.0360.223±0.0520.015800.029850.73585Ltpos0.272±0.0390.241±0.0450.221±0.0570.050280.008000.26447 Rt right, Lt left, AD Alzheimer's disease, MCI mild cognitive impairment, CN cognitive normal, Gm geometric mean, neg negative, p positiveThe ROIs are listed in the table above. The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups The ROI data and results of the corresponding statistical test for the mean diffusivity (MD) values Rt right, Lt left, AD Alzheimer's disease, MCI mild cognitive impairment, CN cognitive normal, Gm geometric mean, neg negative, p positive The ROIs are listed in the table above. The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups The ROI data and results of the corresponding statistical test for the fractional anisotropy (FA) values Rt right, Lt left, AD Alzheimer's disease, MCI mild cognitive impairment, CN cognitive normal, Gm geometric mean, neg negative, p positive The ROIs are listed in the table above. The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups
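The ROI comparisons above are based on MD and FA values and use a Bonferroni-style threshold of p = 0.05/3 ≈ 0.016 for the three pairwise group contrasts. As a generic illustration of how the two diffusion indices are defined from the tensor eigenvalues (the study itself computed its maps with in-house IDL software, not the code below), a short sketch follows; the eigenvalues are arbitrary example numbers.

```python
# Generic definitions of mean diffusivity (MD) and fractional anisotropy (FA)
# from the three diffusion-tensor eigenvalues, plus the Bonferroni-style
# per-comparison threshold (0.05 / 3) used for the three pairwise group tests.
# Example eigenvalues are arbitrary; the study computed its maps in IDL.
import numpy as np

lam = np.array([1.7e-3, 0.4e-3, 0.3e-3])   # eigenvalues in mm^2/s (illustrative)

md = lam.mean()                            # MD = (l1 + l2 + l3) / 3
fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.2f}")

alpha_corrected = 0.05 / 3                 # threshold used for the ROI tests
print(f"per-comparison significance level = {alpha_corrected:.3f}")
```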
Conclusions
Accurate DT-MRI measurements require considering the effects from background gradients, and especially in patients with pathological brain conditions such as AD. Furthermore, geometric average diffusion measures (e.g., gmMD) can be useful to minimize the intrinsic local magnetic susceptibility variations in brain tissue. As we demonstrated for the case of AD, these maps may provide complementary information to the standard DTI maps.
[ "Theoretical background", "Subjects", "MRI acquisition", "DT-MRI preprocessing", "Postprocessing and statistical analyses", "Image coregistration", "Spatial normalization", "Statistical analyses" ]
[ "Neeman et al. [18] reported the use of diffusion-encoding schemes that were made up of couples of gradients with positive and negative polarities in order to minimize a cross-term effect in the case of static field inhomogeneities. To minimize the scalar effects of the unknown cross-term effect of the background and the diffusion-encoding gradients, the geometric mean (gm) operation was used for both the positive and negative polarities of the diffusion-encoding gradients by applying the following equations [18, 19]:\n1\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$ {S_{gm}} = \\sqrt {{{S_p} \\cdot {S_n}}} = {S_0}Exp\\left( { - {b_{gm}} \\cdot ADC} \\right) $$\\end{document}\n2\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$ {b_{gm}} = \\frac{{{b_p} + {b_n}}}{2} = - {\\gamma^2}{\\delta^2}\\left( {aG_d^2 + bG_b^2 + cG_{img}^2 + f{G_b} \\cdot {G_{img}}} \\right) $$\\end{document}\n3\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$ {b_{gm}} \\cong - {\\gamma^2}{\\delta^2}\\left( {aG_d^2 + cG_{img}^2} \\right) $$\\end{document}where the geometric mean operation is defined as \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$ \\sqrt {{{S_p} * {S_n}}} $$\\end{document} where S\np and S\nn are the signals acquired with positive (p) and negative (n) gradients polarities, S\n0 is the signal acquired without a diffusion gradient, the b value = 0 s/mm2, b\np is the b value with using positive diffusion gradients, b\nn is the b value with using negative diffusion gradients, b\ngm is the b value calculated by b\np and b\nn, γ is the gyromagnetic ratio, δ is the duration of the externally applied diffusion gradient, G\nd is the amplitude of the known diffusion-encoding gradients, G\nb is an amplitude of the unknown background gradients, G\nimg is the amplitude of the known imaging gradients, and a, b, c, and f are coefficients. Please note that with this calculation, the cross terms, both G\nb * G\nd and G\nimg * G\nd, disappeared. The background gradients \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$ G_b^2 $$\\end{document} and the cross term, G\nb * G\nimg, in Eq. 
2 can be ignored because those values are much smaller than \\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$ G_d^2 $$\\end{document}. Therefore, the b\ngm value can finally be considered as Eq. 3.", "Table 1 lists the demographic data of the subjects, including gender, age, and the Mini-Mental State Examination Score (MMSE), which is a general measure of cognitive performance. Fifteen patients diagnosed with AD (mean age 75 years, standard deviation (SD) 9.2, age range 61–86 years, 9 males and 6 females, MMSE range 7–28, mean MMSE 21.9) based on the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorder's Association (NINCDS-ADRDA) criteria were studied using a 1.5T clinical MRI system. In addition, 18 patients diagnosed with MCI (mean age 72 years, SD 8.5, age range 56–96 years, 7 males and 11 females, MMSE range 23–30, mean MMSE 28.5) and 16 CN control subjects (mean age 73 years, SD 9.5, age range 62–85 years, 9 males and 7 females, MMSE range 28–30, mean MMSE 29.3) were recruited as well. The diagnosis of MCI followed Petersen's criteria [20]. Prior to the onset of this study, informed consent was obtained from all subjects, and the protocol was approved by the local institutional review board in the USA. All experiments on human subjects were conducted in accordance with the Declaration of Helsinki.\nTable 1Demographic data and the neuropsychologic test resultsADMCICNSubjects151816Agea (years)76.6 (9.1)72.6 (8.5)73.1 (9.5)Gender Male979 Female6117MMSE21.9 (5.5)b\n28.5 (1.8)29.3 (0.8)\nAD Alzheimer's disease, MCI mild cognitive impairment, CN cognitively normal, MMSE Mini-Mental State Examination ScoreThe data are presented as the mean (standard deviation)\naThere are no statistically significant differences between the groups (p > 0.05)\nbThere are statistically significant differences between the AD group and the other groups (p < 0.0005), but not between the MCI and CN groups (p > 0.215)\n\nDemographic data and the neuropsychologic test results\n\nAD Alzheimer's disease, MCI mild cognitive impairment, CN cognitively normal, MMSE Mini-Mental State Examination Score\nThe data are presented as the mean (standard deviation)\n\naThere are no statistically significant differences between the groups (p > 0.05)\n\nbThere are statistically significant differences between the AD group and the other groups (p < 0.0005), but not between the MCI and CN groups (p > 0.215)", "The DT-MRI measurements were performed using a single-shot echo-planar imaging (EPI) sequence with inversion-prepared magnetization to suppress the cerebrospinal fluid (CSF) [21]. CSF suppression was used to reduce errors in the diffusion measurements from the partial volume effects in the voxels that represent CSF. A double refocusing spin-echo acquisition with bipolar external diffusion gradients [22] was employed to minimize the artifacts due to eddy currents. Six diffusion encoding directions [23] and five diffusion sensitivities (b values 0, 160, 360, 640, and 1,000 s/mm2) were acquired to determine the apparent diffusion coefficients and the diffusion tensor for each voxel. 
Furthermore, two DT-MRI datasets were acquired with alternating polarities of the external diffusion-sensitizing gradients (positive +G_d and negative −G_d) to investigate the effects of background gradients on the DT-MRI measures of the patients with AD. The other imaging parameters were as follows: repetition time (TR)/echo time (TE)/inversion time (TI) = 5,000/100/3,000 ms with 2.4 × 2.4 mm2 in-plane resolution and 19 slices of 5-mm slice thickness without a gap, which covered approximately 80% of the brain.\nIn addition to the DT-MRI scan, sagittal structural volumetric T1-weighted (T1W) images were acquired as follows: TR/TE/TI = 10/4/300 ms, flip angle = 15°, and spatial resolution = 1 × 1 × 1.5 mm. The intermediate (or proton density)-weighted (PD) and T2-weighted (T2W) axial images were also acquired using a multislice double spin-echo (DSE) sequence. The imaging parameters for DSE were as follows: TR/TE1/TE2 = 5,000/20/80 ms with 1.25 × 1 mm in-plane resolution and a 3-mm slice thickness, with contiguous slices covering the entire brain. The structural images allowed registration between the structural data and the DT-MRI data and spatial normalization of the DT-MRI indices into a reference space (vide infra). In addition, a T2-weighted spin-echo EPI image (referred to below as a reference EPI image) was acquired at the same resolution and orientation as the diffusion scans, but with whole brain coverage and without inversion preparation, to improve registration of the DT-MRI data to the structural images. Acquisition of the reference EPI was necessary because the DT-MRI slices did not cover the whole brain and the EPI data without diffusion gradients (EPI at b = 0 s/mm2) had limited structural information because of the inversion pulse.", "In order to map the diffusion indices of the mean diffusivity (MD) and the fractional anisotropy (FA) from the DT-MRI data obtained with the positive (+G_d) and negative (−G_d) diffusion-encoding gradients, we developed in-house software in Interactive Data Language (IDL; Research Systems, Inc., Boulder, CO). The MD and FA maps (pMD and pFA) were calculated using only the positive polarity of the diffusion-encoding gradients, and the corresponding maps (nMD and nFA) were calculated using only the negative polarity, in both cases without taking a background gradient effect into account. In addition, the maps of the geometric means of MD and FA (gmMD and gmFA) were calculated to minimize the unknown cross-term effects of the background and the diffusion-encoding gradients. 
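For orientation, the following sketch shows one common way to obtain MD and FA for a single voxel from signals acquired along six encoding directions at several b values: a log-linear least-squares fit of the diffusion tensor followed by an eigenvalue decomposition. It is a generic Python/numpy illustration assuming unit gradient directions and noise-free signals; the study itself used in-house IDL software, so this is not a reproduction of that code.

```python
import numpy as np

def fit_tensor(signals, S0, bvals, bvecs):
    """Log-linear least-squares diffusion tensor fit for a single voxel.

    signals : (N,) diffusion-weighted signals
    S0      : signal without diffusion weighting (b = 0)
    bvals   : (N,) b values in s/mm^2
    bvecs   : (N, 3) unit gradient directions
    """
    g = np.asarray(bvecs, float)
    # Design matrix for the six unique tensor elements Dxx, Dyy, Dzz, Dxy, Dxz, Dyz
    B = np.column_stack([g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
                         2 * g[:, 0] * g[:, 1],
                         2 * g[:, 0] * g[:, 2],
                         2 * g[:, 1] * g[:, 2]]) * np.asarray(bvals, float)[:, None]
    y = -np.log(np.clip(np.asarray(signals, float) / S0, 1e-12, None))
    d, *_ = np.linalg.lstsq(B, y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

def md_fa(D):
    """Mean diffusivity and fractional anisotropy from a 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(D)
    md = lam.mean()
    denom = np.sqrt((lam**2).sum())
    fa = 0.0 if denom == 0 else np.sqrt(1.5 * ((lam - md)**2).sum()) / denom
    return md, fa
```

Applied separately to the signals from the positive gradients, the negative gradients, and their geometric mean, the same fit yields the pMD/pFA, nMD/nFA, and gmMD/gmFA maps described in the text.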
The geometric means of FA and MD were calculated by taking the product of the signals acquired with the positive and negative polarities of the diffusion-encoding gradients according to $\\sqrt{S_p \\cdot S_n}$, where S_p and S_n are the diffusion-weighted signals acquired with the positive (p) and negative (n) gradient polarities, respectively [18, 19]. The calculations of the geometric mean used the same b values as the ones used for the DT-MRI with the positive and negative diffusion-encoding gradients. A more detailed description of the geometric mean computation can be found in the “Theoretical background” section. All the maps were calculated before performing image coregistration.", "[SUBTITLE] Image coregistration [SUBSECTION] The raw DT-MRI and DT_b0 EPI were assumed to be intrinsically aligned. This assumption is reasonable because first, both datasets are subjected to similar geometrical distortions; second, additional distortions due to the eddy currents induced by the diffusion-encoding gradients are minimal since we used a double refocusing sequence; and third, the datasets were acquired in an interleaved fashion to reduce movement effects between the frames. The DT_b0 EPI data (TE = 100 ms, inversion-prepared, 19 slices) were coregistered to the reference EPI data (TE = 100 ms without inversion, but with whole brain coverage) using an affine transformation available with Statistical Parametric Mapping software (SPM2, Wellcome Department of Cognitive Neurology, London, UK). The reference EPI data were then further coregistered to the T2W images, which were in turn coregistered to the PD images (acquired together with the T2W using the double spin-echo acquisition). Finally, the PD images, and thus the DT_b0 EPI data and the maps of the DT-MRI data, were coregistered to the 3D T1W images. These steps allowed a reliable coregistration between the MD and FA maps and the anatomical 3D T1W images.\n[SUBTITLE] Spatial normalization [SUBSECTION] A study-specific template was created by transforming the 3D T1W images from all the subjects in this study into the space of the T1W template provided with SPM2 (from the SPM2 website), using affine transformations, and by averaging all the transformed 3D T1W images. We created this template because our study population included subjects with brain disease and was considerably older than the populations used for standard templates, such as the Montréal Neurological Institute (MNI) template. After creating the study-specific template, all the 3D T1W images from the individual subjects were again spatially normalized to this study-specific template using a 12-parameter nonlinear transformation [24, 25]. The same transformation parameters were then applied to normalize all the MD and FA maps, which were also interpolated to the 2 mm × 2 mm × 2 mm voxel size of the brain template. The maps from the positive and negative external diffusion-sensitizing gradients (p/nFA and p/nMD) and the corresponding maps of the geometric means (gmFA and gmMD) were then smoothed using an 8 × 8 × 12 mm Gaussian kernel.\n[SUBTITLE] Statistical analyses [SUBSECTION] In order to investigate the differences in diffusion abnormalities across the groups, voxel-wise one-way analysis of variance (ANOVA) tests were performed on the gmMD and gmFA maps within the framework of SPM2. These analyses were repeated for the pMD and pFA maps as well as for the nMD and nFA maps. The aim of this analysis was to determine where in the brain, and in which groups, the gmMD values differed from the nMD or pMD values. To account for multiple comparisons in the voxel-by-voxel tests, the concept of a false discovery rate (FDR) [26] was used, and a significance threshold of FDR p = 0.01 was applied. Based on the results of these tests, we obtained the areas common to the three MD measures (nMD, pMD, and gmMD) to evaluate the advantages of the geometric mean operation.\nIn addition to the voxel-based analyses, we also analyzed the data by defining regions of interest (ROIs). The ROIs were defined according to the results of the voxel-based analyses. For the MD, the ROIs were the right and left superior temporal gyrus (ROI1), the right and left limbic parahippocampal gyrus (ROI2), the right and left middle temporal gyrus (ROI3), the right and left occipital cuneus (ROI4), and the right and left limbic uncus (ROI5). For the FA, the ROIs were the right and left superior temporal gyrus (ROI 6, 7, and 8) and the right and left superior frontal gyrus (ROI 9, 10). The significance level was set at p = 0.016 (0.05/3) to correct for the three repeated comparisons among the three groups.", "The raw DT-MRI and DT_b0 EPI were assumed to be intrinsically aligned. This assumption is reasonable because first, both datasets are subjected to similar geometrical distortions; second, additional distortions due to the eddy currents induced by the diffusion-encoding gradients are minimal since we used a double refocusing sequence; and third, the datasets were acquired in an interleaved fashion to reduce movement effects between the frames. The DT_b0 EPI data (TE = 100 ms, inversion-prepared, 19 slices) were coregistered to the reference EPI data (TE = 100 ms without inversion, but with whole brain coverage) using an affine transformation available with Statistical Parametric Mapping software (SPM2, Wellcome Department of Cognitive Neurology, London, UK). The reference EPI data were then further coregistered to the T2W images, which were in turn coregistered to the PD images (acquired together with the T2W using the double spin-echo acquisition). Finally, the PD images, and thus the DT_b0 EPI data and the maps of the DT-MRI data, were coregistered to the 3D T1W images. These steps allowed a reliable coregistration between the MD and FA maps and the anatomical 3D T1W images.", "A study-specific template was created by transforming the 3D T1W images from all the subjects in this study into the space of the T1W template provided with SPM2 (from the SPM2 website), using affine transformations, and by averaging all the transformed 3D T1W images. 
We created this template because our study population included subjects with brain disease and was considerably older than the populations used for standard templates, such as the Montréal Neurological Institute (MNI) template. After creating the study-specific template, all the 3D T1W images from the individual subjects were again spatially normalized to this study-specific template using a 12-parameter nonlinear transformation [24, 25]. The same transformation parameters were then applied to normalize all the MD and FA maps, which were also interpolated to the 2 mm × 2 mm × 2 mm voxel size of the brain template. The maps from the positive and negative external diffusion-sensitizing gradients (p/nFA and p/nMD) and the corresponding maps of the geometric means (gmFA and gmMD) were then smoothed using an 8 × 8 × 12 mm Gaussian kernel.", 
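The chained coregistrations described above (DT_b0 EPI → reference EPI → T2W → PD → 3D T1W) can be viewed as a single composite affine transformation, so that each MD or FA map needs to be interpolated only once. The sketch below illustrates this composition and the resampling step with numpy/scipy; the identity matrices are placeholders for the transformations that would actually be estimated (here by SPM2), so all values are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import affine_transform

# Hypothetical 4x4 voxel-to-voxel affines for each step of the chain; in practice
# these would be estimated by the coregistration software, not identity matrices.
A_dtb0_to_ref = np.eye(4)   # DT b=0 EPI    -> reference EPI
A_ref_to_t2w  = np.eye(4)   # reference EPI -> T2W
A_t2w_to_pd   = np.eye(4)   # T2W           -> PD
A_pd_to_t1w   = np.eye(4)   # PD            -> 3D T1W

# Compose the chain once so each MD/FA map is interpolated only a single time
A_dt_to_t1w = A_pd_to_t1w @ A_t2w_to_pd @ A_ref_to_t2w @ A_dtb0_to_ref

def resample_to_t1(map_dt, composite_affine, t1_shape):
    """Resample a DT-derived map (e.g. MD or FA) onto the T1W voxel grid."""
    inv = np.linalg.inv(composite_affine)   # maps output (T1) voxels to input (DT) voxels
    return affine_transform(map_dt, inv[:3, :3], offset=inv[:3, 3],
                            output_shape=t1_shape, order=1)
```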
[ null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Theoretical background", "Subjects", "MRI acquisition", "DT-MRI preprocessing", "Postprocessing and statistical analyses", "Image coregistration", "Spatial normalization", "Statistical analyses", "Results", "Discussion", "Conclusions" ]
[ "Diffusion tensor (DT) magnetic resonance imaging (DT-MRI) is sensitive to the directionality of the random motion of water in tissue, and it involves the application of external diffusion-sensitizing magnetic field gradients along different orientations to quantify the properties of diffusion. Numerous DT-MRI studies of neurodegenerative diseases have reported abnormal diffusion values in the brain, including Alzheimer's disease (AD), which is a devastating condition that leads to progressive memory loss and rapid cognitive decline. Although AD is generally considered to affect primarily the gray matter, several studies have found changes of the isotropic and anisotropic diffusion in white matter associated with AD progression by using DT-MRI [1–6]. The diffusion abnormalities in AD were predominantly found in the posterior regions of the brain such as the hippocampal gyrus, the temporal white matter, the splenium of the corpus callosum, and the posterior cingulum. In patients with mild cognitive impairment (MCI), which is considered to represent a transitional stage between normal aging and AD, the changes seem to parallel those in AD with similar posterior regions showing abnormalities. In contrast to AD and MCI, the diffusion abnormalities in subjects with age-associated changes (cognitively normal, CN) occur in the frontal regions, and specifically in the frontal white matter, the anterior cingulum, and the genu of the corpus callosum [7].\nAlthough the marked differences seen on DT-MRI between AD or MCI and normal aging have been considered as potential imaging markers [8–10], the underlying mechanism of the DT-MRI changes remains largely unexplained. In particular, the local variations in cell density, oligodentrocytes, myelination, and also amyloid plaques, which are a hallmark of AD [11–13], can be the source of local magnetic susceptibility variations, which in turn can alter water diffusion. Furthermore, it has been shown in rat brain [14] that brain iron, which occurs in high concentrations in oligodentrocytes and plaques [15–17], can modulate the diffusion measurements. These finding suggests that local magnetic susceptibility variations in brain tissue may contribute to the DT-MRI abnormalities seen in AD and MCI in the form of intrinsic susceptibility-dependent background gradients that add to the external diffusion weighting gradients. The previous DT-MRI studies did not take into account the local variations in brain.\nAll investigators may be interested in knowing where in the brain and for which patients these different diffusion effects are with and without considering the background gradients. So, the overall goal of this study was to investigate whether intrinsic background gradients contribute to the pattern of regional diffusion abnormalities in patients with AD, MCI, and CN, and this potentially reflects the underlying pathological processes associated with brain iron. Specifically, we hypothesized that AD patients show a systematic pattern of higher regional background gradients compared to the MCI and CN subjects.", "[SUBTITLE] Theoretical background [SUBSECTION] Neeman et al. [18] reported the use of diffusion-encoding schemes that were made up of couples of gradients with positive and negative polarities in order to minimize a cross-term effect in the case of static field inhomogeneities. 
To minimize the scalar effect of the unknown cross term between the background and the diffusion-encoding gradients, the geometric mean (gm) operation was applied to the signals acquired with the positive and negative polarities of the diffusion-encoding gradients, according to the following equations [18, 19]:\n(1) $$ S_{gm} = \\sqrt{S_p \\cdot S_n} = S_0 \\exp\\left( -b_{gm} \\cdot ADC \\right) $$\n(2) $$ b_{gm} = \\frac{b_p + b_n}{2} = -\\gamma^2 \\delta^2 \\left( a G_d^2 + b G_b^2 + c G_{img}^2 + f G_b \\cdot G_{img} \\right) $$\n(3) $$ b_{gm} \\cong -\\gamma^2 \\delta^2 \\left( a G_d^2 + c G_{img}^2 \\right) $$\nwhere the geometric mean operation is defined as $\\sqrt{S_p \\cdot S_n}$, S_p and S_n are the signals acquired with the positive (p) and negative (n) gradient polarities, S_0 is the signal acquired without a diffusion gradient (b value = 0 s/mm2), b_p and b_n are the b values obtained with the positive and negative diffusion gradients, b_gm is the b value calculated from b_p and b_n, γ is the gyromagnetic ratio, δ is the duration of the externally applied diffusion gradient, G_d is the amplitude of the known diffusion-encoding gradients, G_b is the amplitude of the unknown background gradients, G_img is the amplitude of the known imaging gradients, and a, b, c, and f are coefficients. Note that with this calculation the cross terms G_b · G_d and G_img · G_d disappear. The background gradient term G_b^2 and the cross term G_b · G_img in Eq. 2 can be ignored because they are much smaller than G_d^2. Therefore, the b_gm value can finally be written as Eq. 3.\n[SUBTITLE] Subjects [SUBSECTION] Table 1 lists the demographic data of the subjects, including gender, age, and the Mini-Mental State Examination Score (MMSE), which is a general measure of cognitive performance. Fifteen patients diagnosed with AD (mean age 75 years, standard deviation (SD) 9.2, age range 61–86 years, 9 males and 6 females, MMSE range 7–28, mean MMSE 21.9) based on the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorder's Association (NINCDS-ADRDA) criteria were studied using a 1.5T clinical MRI system. In addition, 18 patients diagnosed with MCI (mean age 72 years, SD 8.5, age range 56–96 years, 7 males and 11 females, MMSE range 23–30, mean MMSE 28.5) and 16 CN control subjects (mean age 73 years, SD 9.5, age range 62–85 years, 9 males and 7 females, MMSE range 28–30, mean MMSE 29.3) were recruited as well. The diagnosis of MCI followed Petersen's criteria [20]. Prior to the onset of this study, informed consent was obtained from all subjects, and the protocol was approved by the local institutional review board in the USA. All experiments on human subjects were conducted in accordance with the Declaration of Helsinki.\nTable 1 Demographic data and the neuropsychologic test results\n| | AD | MCI | CN |\n| Subjects | 15 | 18 | 16 |\n| Age(a) (years) | 76.6 (9.1) | 72.6 (8.5) | 73.1 (9.5) |\n| Gender (male/female) | 9/6 | 7/11 | 9/7 |\n| MMSE | 21.9 (5.5)(b) | 28.5 (1.8) | 29.3 (0.8) |\nAD Alzheimer's disease, MCI mild cognitive impairment, CN cognitively normal, MMSE Mini-Mental State Examination Score. The data are presented as the mean (standard deviation).\n(a) There are no statistically significant differences between the groups (p > 0.05)\n(b) There are statistically significant differences between the AD group and the other groups (p < 0.0005), but not between the MCI and CN groups (p > 0.215)\n[SUBTITLE] MRI acquisition [SUBSECTION] The DT-MRI measurements were performed using a single-shot echo-planar imaging (EPI) sequence with inversion-prepared magnetization to suppress the cerebrospinal fluid (CSF) [21]. CSF suppression was used to reduce errors in the diffusion measurements from the partial volume effects in the voxels that represent CSF. A double refocusing spin-echo acquisition with bipolar external diffusion gradients [22] was employed to minimize the artifacts due to eddy currents. Six diffusion encoding directions [23] and five diffusion sensitivities (b values 0, 160, 360, 640, and 1,000 s/mm2) were acquired to determine the apparent diffusion coefficients and the diffusion tensor for each voxel. Furthermore, two DT-MRI datasets were acquired with alternating polarities of the external diffusion-sensitizing gradients (positive +G_d and negative −G_d) to investigate the effects of background gradients on the DT-MRI measures of the patients with AD. 
The other imaging parameters were as follows: repetition time (TR)/echo time (TE)/inversion time (TI) = 5,000/100/3,000 ms with 2.4 × 2.4 mm2 in-plane resolution and 19 slices of 5-mm slice thickness without a gap, which covered approximately 80% of the brain.\nIn addition to the DT-MRI scan, sagittal structural volumetric T1-weighted (T1W) images were acquired as follows: TR/TE/TI = 10/4/300 ms, flip angle = 15°, and spatial resolution = 1 × 1 × 1.5 mm. The intermediate (or proton density)-weighted (PD) and T2-weighted (T2W) axial images were also acquired using a multislice double spin-echo (DSE) sequence. The imaging parameters for DSE were as follows: TR/TE1/TE2 = 5,000/20/80 ms with 1.25 × 1 mm in-plane resolution and a 3-mm slice thickness, with contiguous slices covering the entire brain. The structural images allowed registration between the structural data and the DT-MRI data and spatial normalization of the DT-MRI indices into a reference space (vide infra). In addition, a T2-weighted spin-echo EPI image (referred to below as a reference EPI image) was acquired at the same resolution and orientation as the diffusion scans, but with whole brain coverage and without inversion preparation, to improve registration of the DT-MRI data to the structural images. Acquisition of the reference EPI was necessary because the DT-MRI slices did not cover the whole brain and the EPI data without diffusion gradients (EPI at b = 0 s/mm2) had limited structural information because of the inversion pulse.\n[SUBTITLE] DT-MRI preprocessing [SUBSECTION] In order to map the diffusion indices of the mean diffusivity (MD) and the fractional anisotropy (FA) from the DT-MRI data obtained with the positive (+G_d) and negative (−G_d) diffusion-encoding gradients, we developed in-house software in Interactive Data Language (IDL; Research Systems, Inc., Boulder, CO). The MD and FA maps (pMD and pFA) were calculated using only the positive polarity of the diffusion-encoding gradients, and the corresponding maps (nMD and nFA) were calculated using only the negative polarity, in both cases without taking a background gradient effect into account. In addition, the maps of the geometric means of MD and FA (gmMD and gmFA) were calculated to minimize the unknown cross-term effects of the background and the diffusion-encoding gradients. The geometric means of FA and MD were calculated by taking the product of the signals acquired with the positive and negative polarities of the diffusion-encoding gradients according to $\\sqrt{S_p \\cdot S_n}$, where S_p and S_n are the diffusion-weighted signals acquired with the positive (p) and negative (n) gradient polarities, respectively [18, 19]. The calculations of the geometric mean used the same b values as the ones used for the DT-MRI with the positive and negative diffusion-encoding gradients. A more detailed description of the geometric mean computation can be found in the “Theoretical background” section. All the maps were calculated before performing image coregistration.\n[SUBTITLE] Postprocessing and statistical analyses [SUBSECTION] [SUBTITLE] Image coregistration [SUBSECTION] The raw DT-MRI and DT_b0 EPI were assumed to be intrinsically aligned. This assumption is reasonable because first, both datasets are subjected to similar geometrical distortions; second, additional distortions due to the eddy currents induced by the diffusion-encoding gradients are minimal since we used a double refocusing sequence; and third, the datasets were acquired in an interleaved fashion to reduce movement effects between the frames. The DT_b0 EPI data (TE = 100 ms, inversion-prepared, 19 slices) were coregistered to the reference EPI data (TE = 100 ms without inversion, but with whole brain coverage) using an affine transformation available with Statistical Parametric Mapping software (SPM2, Wellcome Department of Cognitive Neurology, London, UK). The reference EPI data were then further coregistered to the T2W images, which were in turn coregistered to the PD images (acquired together with the T2W using the double spin-echo acquisition). Finally, the PD images, and thus the DT_b0 EPI data and the maps of the DT-MRI data, were coregistered to the 3D T1W images. 
These steps allowed a reliable coregistration between the MD and FA maps and the anatomical 3D T1W images.\nThe raw DT-MRI and DT_b0 EPI were assumed to be intrinsically aligned. This assumption is reasonable because first, both datasets are subjected to similar geometrical distortions; second, additional distortions due to the eddy currents induced by the diffusion-encoding gradients are minimal since we used a double refocusing sequence; and third, the datasets were acquired in an interleaved fashion to reduce movement effects between the frames. The DT_b0 EPI data (TE = 100 ms and inversion-prepared and 19 slices) were coregistered to the reference EPI data (TE = 100 ms without inversion, but with whole brain coverage) using an affine transformation available with Statistical Parameter Mapping software (SPM2, Wellcome Department of Cognitive Neurology, England, UK). The reference EPI data were then further coregistered to the T2W images, which were in turn coregistered to the PD images (acquired together with the T2W using double spin-echo acquisition). Finally, the PD images, and thus the DT_b0 EPI data and the maps of the DT-MRI data were coregistered to the 3D T1W images. These steps allowed a reliable coregistration between the MD and FA maps and the anatomical 3D T1W images.\n[SUBTITLE] Spatial normalization [SUBSECTION] A study-specific template was created by transforming the 3D T1W images from all the subjects in this study into a T1W SPM2 template space using affine transformations and by averaging all the transferred 3D T1W images provided from the SPM2 website. We created this template since our study population had brain disease and the study population was considerably older than the populations used in the standard templates, such as the Montréal Neurological Institute (MNI) template. After creating the study specific template, all the 3D T1W images from the individual subjects were again spatially normalized to this study specific template using a 12-parameter nonlinear transformation [24, 25]. The same transformation parameters were then applied to normalize all the MD and FA maps, which were also interpolated to the 2 mm × 2 mm × 2 mm voxel size of the brain template. The maps from the positive and negative external diffusion-sensitizing gradients (p/nFA and p/nMD) and the corresponding maps of the geometric means (gmFA and gmMD) were then smoothed using an 8 × 8×12 mm Gaussian kernel.\nA study-specific template was created by transforming the 3D T1W images from all the subjects in this study into a T1W SPM2 template space using affine transformations and by averaging all the transferred 3D T1W images provided from the SPM2 website. We created this template since our study population had brain disease and the study population was considerably older than the populations used in the standard templates, such as the Montréal Neurological Institute (MNI) template. After creating the study specific template, all the 3D T1W images from the individual subjects were again spatially normalized to this study specific template using a 12-parameter nonlinear transformation [24, 25]. The same transformation parameters were then applied to normalize all the MD and FA maps, which were also interpolated to the 2 mm × 2 mm × 2 mm voxel size of the brain template. 
The maps from the positive and negative external diffusion-sensitizing gradients (p/nFA and p/nMD) and the corresponding maps of the geometric means (gmFA and gmMD) were then smoothed using an 8 × 8×12 mm Gaussian kernel.\n[SUBTITLE] Statistical analyses [SUBSECTION] In order to investigate the differences in diffusion abnormalities across the groups, voxel-wise one-way analysis of variance (ANOVA) tests were performed on the gmMD and gmFA maps within the framework of SPM2. These analyses were repeated for the pMD and pFA maps as well as for the nMD and nFA maps. In this analysis, we want to know where in the brain and for which patients were the gmMDs different from either the nMD or pMD. To account for multiple comparisons in the voxel-by-voxel tests, the concept of a false discovery rate (FDR) [26] was used, and a threshold for the significance of FDR p = 0.01 was applied. Based on the results of the tests, we obtained common areas of the three MDs that were nMD, pMD, and gmMD to evaluate the advantages of the geometrical mean operation.\nIn addition to the voxel-based analyses, we also analyzed the data with defining a region-of-interest (ROI). The ROIs were defined according to the results of voxel-based analyses. For the MD, the ROIs were the right and left superior temporal gyrus (ROI1), the right and left limbic parahippocampal gyrus (ROI2), and the right and left middle temporal gyrus (ROI3) and the right and left occipital cuneus (ROI4) and the right left limbic uncus (ROI5). For the FA, the ROIs were the right and left superior temporal gyrus (ROI 6, 7, and 8) and the right and left superior frontal gyrus (ROI 9, 10). The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups.\nIn order to investigate the differences in diffusion abnormalities across the groups, voxel-wise one-way analysis of variance (ANOVA) tests were performed on the gmMD and gmFA maps within the framework of SPM2. These analyses were repeated for the pMD and pFA maps as well as for the nMD and nFA maps. In this analysis, we want to know where in the brain and for which patients were the gmMDs different from either the nMD or pMD. To account for multiple comparisons in the voxel-by-voxel tests, the concept of a false discovery rate (FDR) [26] was used, and a threshold for the significance of FDR p = 0.01 was applied. Based on the results of the tests, we obtained common areas of the three MDs that were nMD, pMD, and gmMD to evaluate the advantages of the geometrical mean operation.\nIn addition to the voxel-based analyses, we also analyzed the data with defining a region-of-interest (ROI). The ROIs were defined according to the results of voxel-based analyses. For the MD, the ROIs were the right and left superior temporal gyrus (ROI1), the right and left limbic parahippocampal gyrus (ROI2), and the right and left middle temporal gyrus (ROI3) and the right and left occipital cuneus (ROI4) and the right left limbic uncus (ROI5). For the FA, the ROIs were the right and left superior temporal gyrus (ROI 6, 7, and 8) and the right and left superior frontal gyrus (ROI 9, 10). The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups.\n[SUBTITLE] Image coregistration [SUBSECTION] The raw DT-MRI and DT_b0 EPI were assumed to be intrinsically aligned. 
This assumption is reasonable because first, both datasets are subjected to similar geometrical distortions; second, additional distortions due to the eddy currents induced by the diffusion-encoding gradients are minimal since we used a double refocusing sequence; and third, the datasets were acquired in an interleaved fashion to reduce movement effects between the frames. The DT_b0 EPI data (TE = 100 ms and inversion-prepared and 19 slices) were coregistered to the reference EPI data (TE = 100 ms without inversion, but with whole brain coverage) using an affine transformation available with Statistical Parameter Mapping software (SPM2, Wellcome Department of Cognitive Neurology, England, UK). The reference EPI data were then further coregistered to the T2W images, which were in turn coregistered to the PD images (acquired together with the T2W using double spin-echo acquisition). Finally, the PD images, and thus the DT_b0 EPI data and the maps of the DT-MRI data were coregistered to the 3D T1W images. These steps allowed a reliable coregistration between the MD and FA maps and the anatomical 3D T1W images.\nThe raw DT-MRI and DT_b0 EPI were assumed to be intrinsically aligned. This assumption is reasonable because first, both datasets are subjected to similar geometrical distortions; second, additional distortions due to the eddy currents induced by the diffusion-encoding gradients are minimal since we used a double refocusing sequence; and third, the datasets were acquired in an interleaved fashion to reduce movement effects between the frames. The DT_b0 EPI data (TE = 100 ms and inversion-prepared and 19 slices) were coregistered to the reference EPI data (TE = 100 ms without inversion, but with whole brain coverage) using an affine transformation available with Statistical Parameter Mapping software (SPM2, Wellcome Department of Cognitive Neurology, England, UK). The reference EPI data were then further coregistered to the T2W images, which were in turn coregistered to the PD images (acquired together with the T2W using double spin-echo acquisition). Finally, the PD images, and thus the DT_b0 EPI data and the maps of the DT-MRI data were coregistered to the 3D T1W images. These steps allowed a reliable coregistration between the MD and FA maps and the anatomical 3D T1W images.\n[SUBTITLE] Spatial normalization [SUBSECTION] A study-specific template was created by transforming the 3D T1W images from all the subjects in this study into a T1W SPM2 template space using affine transformations and by averaging all the transferred 3D T1W images provided from the SPM2 website. We created this template since our study population had brain disease and the study population was considerably older than the populations used in the standard templates, such as the Montréal Neurological Institute (MNI) template. After creating the study specific template, all the 3D T1W images from the individual subjects were again spatially normalized to this study specific template using a 12-parameter nonlinear transformation [24, 25]. The same transformation parameters were then applied to normalize all the MD and FA maps, which were also interpolated to the 2 mm × 2 mm × 2 mm voxel size of the brain template. 
The maps from the positive and negative external diffusion-sensitizing gradients (p/nFA and p/nMD) and the corresponding maps of the geometric means (gmFA and gmMD) were then smoothed using an 8 × 8×12 mm Gaussian kernel.\nA study-specific template was created by transforming the 3D T1W images from all the subjects in this study into a T1W SPM2 template space using affine transformations and by averaging all the transferred 3D T1W images provided from the SPM2 website. We created this template since our study population had brain disease and the study population was considerably older than the populations used in the standard templates, such as the Montréal Neurological Institute (MNI) template. After creating the study specific template, all the 3D T1W images from the individual subjects were again spatially normalized to this study specific template using a 12-parameter nonlinear transformation [24, 25]. The same transformation parameters were then applied to normalize all the MD and FA maps, which were also interpolated to the 2 mm × 2 mm × 2 mm voxel size of the brain template. The maps from the positive and negative external diffusion-sensitizing gradients (p/nFA and p/nMD) and the corresponding maps of the geometric means (gmFA and gmMD) were then smoothed using an 8 × 8×12 mm Gaussian kernel.\n[SUBTITLE] Statistical analyses [SUBSECTION] In order to investigate the differences in diffusion abnormalities across the groups, voxel-wise one-way analysis of variance (ANOVA) tests were performed on the gmMD and gmFA maps within the framework of SPM2. These analyses were repeated for the pMD and pFA maps as well as for the nMD and nFA maps. In this analysis, we want to know where in the brain and for which patients were the gmMDs different from either the nMD or pMD. To account for multiple comparisons in the voxel-by-voxel tests, the concept of a false discovery rate (FDR) [26] was used, and a threshold for the significance of FDR p = 0.01 was applied. Based on the results of the tests, we obtained common areas of the three MDs that were nMD, pMD, and gmMD to evaluate the advantages of the geometrical mean operation.\nIn addition to the voxel-based analyses, we also analyzed the data with defining a region-of-interest (ROI). The ROIs were defined according to the results of voxel-based analyses. For the MD, the ROIs were the right and left superior temporal gyrus (ROI1), the right and left limbic parahippocampal gyrus (ROI2), and the right and left middle temporal gyrus (ROI3) and the right and left occipital cuneus (ROI4) and the right left limbic uncus (ROI5). For the FA, the ROIs were the right and left superior temporal gyrus (ROI 6, 7, and 8) and the right and left superior frontal gyrus (ROI 9, 10). The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups.\nIn order to investigate the differences in diffusion abnormalities across the groups, voxel-wise one-way analysis of variance (ANOVA) tests were performed on the gmMD and gmFA maps within the framework of SPM2. These analyses were repeated for the pMD and pFA maps as well as for the nMD and nFA maps. In this analysis, we want to know where in the brain and for which patients were the gmMDs different from either the nMD or pMD. To account for multiple comparisons in the voxel-by-voxel tests, the concept of a false discovery rate (FDR) [26] was used, and a threshold for the significance of FDR p = 0.01 was applied. 
", "Neeman et al. [18] reported the use of diffusion-encoding schemes made up of pairs of gradients with positive and negative polarities in order to minimize the cross-term effect in the case of static field inhomogeneities. To minimize the scalar effects of the unknown cross term between the background and the diffusion-encoding gradients, the geometric mean (gm) operation was applied to the signals acquired with both the positive and negative polarities of the diffusion-encoding gradients according to the following equations [18, 19]:

1. $S_{gm} = \sqrt{S_p \cdot S_n} = S_0 \exp\left(-b_{gm} \cdot ADC\right)$

2. $b_{gm} = \frac{b_p + b_n}{2} = -\gamma^2 \delta^2 \left(a G_d^2 + b G_b^2 + c G_{img}^2 + f G_b \cdot G_{img}\right)$

3. $b_{gm} \cong -\gamma^2 \delta^2 \left(a G_d^2 + c G_{img}^2\right)$

where the geometric mean operation is defined as $\sqrt{S_p \cdot S_n}$, $S_p$ and $S_n$ are the signals acquired with positive (p) and negative (n) gradient polarities, $S_0$ is the signal acquired without a diffusion gradient (b value = 0 s/mm2), $b_p$ is the b value obtained with the positive diffusion gradients, $b_n$ is the b value obtained with the negative diffusion gradients, $b_{gm}$ is the b value calculated from $b_p$ and $b_n$, $\gamma$ is the gyromagnetic ratio, $\delta$ is the duration of the externally applied diffusion gradient, $G_d$ is the amplitude of the known diffusion-encoding gradients, $G_b$ is the amplitude of the unknown background gradients, $G_{img}$ is the amplitude of the known imaging gradients, and a, b, c, and f are coefficients. Note that with this calculation the cross terms $G_b \cdot G_d$ and $G_{img} \cdot G_d$ disappear. The background gradient term $G_b^2$ and the cross term $G_b \cdot G_{img}$ in Eq. 2 can be ignored because these values are much smaller than $G_d^2$. Therefore, the $b_{gm}$ value can finally be written as Eq. 3.", "Table 1 lists the demographic data of the subjects, including gender, age, and the Mini-Mental State Examination Score (MMSE), which is a general measure of cognitive performance. Fifteen patients diagnosed with AD (mean age 75 years, standard deviation (SD) 9.2, age range 61–86 years, 9 males and 6 females, MMSE range 7–28, mean MMSE 21.9) based on the National Institute of Neurological and Communicative Disorders and Stroke–Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) criteria were studied using a 1.5-T clinical MRI system. In addition, 18 patients diagnosed with MCI (mean age 72 years, SD 8.5, age range 56–96 years, 7 males and 11 females, MMSE range 23–30, mean MMSE 28.5) and 16 CN control subjects (mean age 73 years, SD 9.5, age range 62–85 years, 9 males and 7 females, MMSE range 28–30, mean MMSE 29.3) were recruited as well. The diagnosis of MCI followed Petersen's criteria [20]. Prior to the onset of this study, informed consent was obtained from all subjects, and the protocol was approved by the local institutional review board in the USA.
All experiments on human subjects were conducted in accordance with the Declaration of Helsinki.

Table 1 Demographic data and the neuropsychologic test results

                  AD             MCI            CN
Subjects          15             18             16
Age (years)^a     76.6 (9.1)     72.6 (8.5)     73.1 (9.5)
Gender, male      9              7              9
Gender, female    6              11             7
MMSE              21.9 (5.5)^b   28.5 (1.8)     29.3 (0.8)

AD Alzheimer's disease, MCI mild cognitive impairment, CN cognitively normal, MMSE Mini-Mental State Examination Score
The data are presented as the mean (standard deviation)
^a There are no statistically significant differences between the groups (p > 0.05)
^b There are statistically significant differences between the AD group and the other groups (p < 0.0005), but not between the MCI and CN groups (p > 0.215)", "The DT-MRI measurements were performed using a single-shot echo-planar imaging (EPI) sequence with inversion-prepared magnetization to suppress the cerebrospinal fluid (CSF) [21]. CSF suppression was used to reduce errors in the diffusion measurements caused by partial volume effects in voxels containing CSF. A double refocusing spin-echo acquisition with bipolar external diffusion gradients [22] was employed to minimize artifacts due to eddy currents. Six diffusion-encoding directions [23] and five diffusion sensitivities (b values 0, 160, 360, 640, and 1,000 s/mm2) were acquired to determine the apparent diffusion coefficients and the diffusion tensor for each voxel. Furthermore, two DT-MRI datasets were acquired with alternating polarities of the external diffusion-sensitizing gradients (positive +Gd and negative −Gd) to investigate the effects of background gradients on the DT-MRI measures of the patients with AD. The other imaging parameters were as follows: repetition time (TR)/echo time (TE)/inversion time (TI) = 5,000/100/3,000 ms with 2.4 × 2.4 mm2 in-plane resolution and 19 slices of 5-mm thickness without a gap, covering approximately 80% of the brain.

In addition to the DT-MRI scan, sagittal structural volumetric T1-weighted (T1W) images were acquired as follows: TR/TE/TI = 10/4/300 ms, flip angle = 15°, and spatial resolution = 1 × 1 × 1.5 mm. Intermediate (proton density)-weighted (PD) and T2-weighted (T2W) axial images were also acquired using a multislice double spin-echo (DSE) sequence. The imaging parameters for the DSE were as follows: TR/TE1/TE2 = 5,000/20/80 ms with 1.25 × 1 mm in-plane resolution, a 3-mm slice thickness, and contiguous slices covering the entire brain. The structural images allowed registration between the structural data and the DT-MRI data and spatial normalization of the DT-MRI indices into a reference space (vide infra).
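To make the alternating-polarity acquisition just described and the geometric-mean combination of Eqs. 1–3 concrete, here is a minimal Python sketch that combines the two polarities and derives MD and FA from a log-linear tensor fit for a single voxel. The function names and the least-squares fit are simplifying assumptions and are not the authors' in-house IDL implementation.

```python
import numpy as np


def geometric_mean_dwi(s_pos, s_neg):
    """Combine diffusion-weighted signals acquired with +Gd and -Gd.

    S_gm = sqrt(S_p * S_n) cancels the cross term between the applied
    diffusion gradients and the unknown background gradients (Eqs. 1-3);
    the effective b value is b_gm = (b_p + b_n) / 2.
    """
    return np.sqrt(s_pos * s_neg)


def tensor_md_fa(signals, s0, bvals, bvecs):
    """Log-linear single-voxel tensor fit returning (MD, FA).

    signals : (n_meas,) diffusion-weighted signals (e.g., geometric means)
    s0      : signal without diffusion weighting (b = 0)
    bvals   : (n_meas,) b values in s/mm^2
    bvecs   : (n_meas, 3) unit diffusion-encoding directions
    """
    g = np.asarray(bvecs, dtype=float)
    # Design matrix for the six unique tensor elements
    # (Dxx, Dyy, Dzz, Dxy, Dxz, Dyz) in the model ln(S0/S) = b g^T D g.
    design = np.column_stack([
        g[:, 0] ** 2, g[:, 1] ** 2, g[:, 2] ** 2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ]) * np.asarray(bvals, dtype=float)[:, None]
    y = np.log(s0 / np.asarray(signals, dtype=float))
    d = np.linalg.lstsq(design, y, rcond=None)[0]
    tensor = np.array([[d[0], d[3], d[4]],
                       [d[3], d[1], d[5]],
                       [d[4], d[5], d[2]]])
    evals = np.clip(np.linalg.eigvalsh(tensor), 0.0, None)
    md = evals.mean()
    denom = np.sum(evals ** 2)
    fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / denom) if denom > 0 else 0.0
    return md, fa
```

Applying the fit to the positive-polarity signals alone, the negative-polarity signals alone, or their geometric mean would yield the pMD/pFA, nMD/nFA, and gmMD/gmFA values, respectively.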
In addition, a T2-weighted spin-echo EPI image (referred to below as the reference EPI image) was acquired at the same resolution and orientation as the diffusion scans, but with whole-brain coverage and without inversion preparation, to improve registration of the DT-MRI data to the structural images. Acquisition of the reference EPI was necessary because the DT-MRI slices did not cover the whole brain and the EPI data acquired without diffusion gradients (EPI at b = 0 s/mm2) had limited structural information because of the inversion pulse.", "In order to map the diffusion indices of mean diffusivity (MD) and fractional anisotropy (FA) from the DT-MRI data obtained with the positive (+Gd) and negative (−Gd) diffusion-encoding gradients, we developed in-house software in Interactive Data Language (IDL; Research Systems, Inc., Boulder, CO). The MD and FA maps calculated using only the positive polarity of the diffusion-encoding gradients (pMD and pFA) and those calculated using only the negative polarity (nMD and nFA) were computed separately, without taking a background gradient effect into account. In addition, maps of the geometric means of MD and FA (gmMD and gmFA) were calculated to minimize the unknown cross-term effects of the background and the diffusion-encoding gradients. The geometric means of FA and MD were calculated from the product of the signals acquired with the positive and negative polarities of the diffusion-encoding gradients according to $\sqrt{S_p \cdot S_n}$, where $S_p$ and $S_n$ are the diffusion-weighted signals acquired with the positive (p) and negative (n) gradient polarities, respectively [18, 19]. The calculation of the geometric mean used the same b values as the DT-MRI acquisitions with the positive and negative diffusion-encoding gradients. A more detailed description of the geometric-mean computation can be found in the "Theoretical background" section. All the maps were calculated before performing image coregistration.", "[SUBTITLE] Image coregistration [SUBSECTION] The raw DT-MRI and DT_b0 EPI were assumed to be intrinsically aligned. This assumption is reasonable because, first, both datasets are subject to similar geometrical distortions; second, additional distortions due to the eddy currents induced by the diffusion-encoding gradients are minimal since we used a double refocusing sequence; and third, the datasets were acquired in an interleaved fashion to reduce movement effects between the frames. The DT_b0 EPI data (TE = 100 ms, inversion-prepared, 19 slices) were coregistered to the reference EPI data (TE = 100 ms without inversion, but with whole-brain coverage) using an affine transformation available in the Statistical Parametric Mapping software (SPM2; Wellcome Department of Cognitive Neurology, London, UK). The reference EPI data were then further coregistered to the T2W images, which were in turn coregistered to the PD images (acquired together with the T2W images using the double spin-echo acquisition). Finally, the PD images, and thus the DT_b0 EPI data and the maps of the DT-MRI data, were coregistered to the 3D T1W images.
These steps allowed a reliable coregistration between the MD and FA maps and the anatomical 3D T1W images.

[SUBTITLE] Spatial normalization [SUBSECTION] A study-specific template was created by transforming the 3D T1W images from all the subjects in this study into the space of the T1W SPM2 template (provided on the SPM2 website) using affine transformations and by averaging all the transformed 3D T1W images. We created this template because our study population had brain disease and was considerably older than the populations used for standard templates, such as the Montréal Neurological Institute (MNI) template. After creating the study-specific template, the 3D T1W images from the individual subjects were again spatially normalized to this study-specific template using a 12-parameter nonlinear transformation [24, 25]. The same transformation parameters were then applied to normalize all the MD and FA maps, which were also interpolated to the 2 mm × 2 mm × 2 mm voxel size of the brain template. The maps from the positive and negative external diffusion-sensitizing gradients (p/nFA and p/nMD) and the corresponding maps of the geometric means (gmFA and gmMD) were then smoothed using an 8 × 8 × 12 mm Gaussian kernel.

[SUBTITLE] Statistical analyses [SUBSECTION] In order to investigate the differences in diffusion abnormalities across the groups, voxel-wise one-way analysis of variance (ANOVA) tests were performed on the gmMD and gmFA maps within the framework of SPM2. These analyses were repeated for the pMD and pFA maps as well as for the nMD and nFA maps. In this analysis, we wanted to determine where in the brain, and for which patient groups, the gmMD differed from either the nMD or the pMD. To account for multiple comparisons in the voxel-by-voxel tests, the concept of a false discovery rate (FDR) [26] was used, and a threshold of FDR p = 0.01 was applied for significance. Based on the results of these tests, we obtained the areas common to the three MD maps (nMD, pMD, and gmMD) to evaluate the advantages of the geometric-mean operation.

In addition to the voxel-based analyses, we also analyzed the data by defining regions of interest (ROIs). The ROIs were defined according to the results of the voxel-based analyses. For the MD, the ROIs were the right and left superior temporal gyrus (ROI1), the right and left limbic parahippocampal gyrus (ROI2), the right and left middle temporal gyrus (ROI3), the right and left occipital cuneus (ROI4), and the right and left limbic uncus (ROI5). For the FA, the ROIs were the right and left superior temporal gyrus (ROIs 6, 7, and 8) and the right and left superior frontal gyrus (ROIs 9 and 10). The significance level was set at p = 0.016 (0.05/3) because the same comparison was repeated for the three group pairs.", "The raw DT-MRI and DT_b0 EPI were assumed to be intrinsically aligned.
This assumption is reasonable because first, both datasets are subjected to similar geometrical distortions; second, additional distortions due to the eddy currents induced by the diffusion-encoding gradients are minimal since we used a double refocusing sequence; and third, the datasets were acquired in an interleaved fashion to reduce movement effects between the frames. The DT_b0 EPI data (TE = 100 ms and inversion-prepared and 19 slices) were coregistered to the reference EPI data (TE = 100 ms without inversion, but with whole brain coverage) using an affine transformation available with Statistical Parameter Mapping software (SPM2, Wellcome Department of Cognitive Neurology, England, UK). The reference EPI data were then further coregistered to the T2W images, which were in turn coregistered to the PD images (acquired together with the T2W using double spin-echo acquisition). Finally, the PD images, and thus the DT_b0 EPI data and the maps of the DT-MRI data were coregistered to the 3D T1W images. These steps allowed a reliable coregistration between the MD and FA maps and the anatomical 3D T1W images.", "A study-specific template was created by transforming the 3D T1W images from all the subjects in this study into a T1W SPM2 template space using affine transformations and by averaging all the transferred 3D T1W images provided from the SPM2 website. We created this template since our study population had brain disease and the study population was considerably older than the populations used in the standard templates, such as the Montréal Neurological Institute (MNI) template. After creating the study specific template, all the 3D T1W images from the individual subjects were again spatially normalized to this study specific template using a 12-parameter nonlinear transformation [24, 25]. The same transformation parameters were then applied to normalize all the MD and FA maps, which were also interpolated to the 2 mm × 2 mm × 2 mm voxel size of the brain template. The maps from the positive and negative external diffusion-sensitizing gradients (p/nFA and p/nMD) and the corresponding maps of the geometric means (gmFA and gmMD) were then smoothed using an 8 × 8×12 mm Gaussian kernel.", "In order to investigate the differences in diffusion abnormalities across the groups, voxel-wise one-way analysis of variance (ANOVA) tests were performed on the gmMD and gmFA maps within the framework of SPM2. These analyses were repeated for the pMD and pFA maps as well as for the nMD and nFA maps. In this analysis, we want to know where in the brain and for which patients were the gmMDs different from either the nMD or pMD. To account for multiple comparisons in the voxel-by-voxel tests, the concept of a false discovery rate (FDR) [26] was used, and a threshold for the significance of FDR p = 0.01 was applied. Based on the results of the tests, we obtained common areas of the three MDs that were nMD, pMD, and gmMD to evaluate the advantages of the geometrical mean operation.\nIn addition to the voxel-based analyses, we also analyzed the data with defining a region-of-interest (ROI). The ROIs were defined according to the results of voxel-based analyses. For the MD, the ROIs were the right and left superior temporal gyrus (ROI1), the right and left limbic parahippocampal gyrus (ROI2), and the right and left middle temporal gyrus (ROI3) and the right and left occipital cuneus (ROI4) and the right left limbic uncus (ROI5). 
For the FA, the ROIs were the right and left superior temporal gyrus (ROI 6, 7, and 8) and the right and left superior frontal gyrus (ROI 9, 10). The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups.", "The demographic characteristics of the subjects are summarized in Table 1. There were no significant differences in age and gender across the groups. As expected, the MMSE scores were significantly lower for the AD patients as compared to those of the other groups (p < 0.05), but the MMSE scores did not significantly differ between the MCI and CN subjects (p > 0.05).\nFigure 1 shows the results of the voxel-wise comparisons of the pMD, nMD, and gmMD maps between the AD and MCI groups (Fig. 1a) and between the AD and CN groups (Fig. 1b) based on one-way ANOVA tests. Compared to the MCI patients, the AD patients had increased pMD values mainly in the temporal and frontal lobes. The AD patients also had increased nMD values (AD > MCI) mainly in the temporal lobe. Moreover, we found that the AD patients had increased gmMD values (AD > MCI) predominantly in the right superior temporal gyrus, the left limbic parahippocampal gyrus white matter, and the left superior and medial frontal gyrus. There were no significantly decreased MD values in the gmMD, pMD, and nMD maps from the patients with AD as compared with that of the MCI subjects. The detailed results are summarized in Table 2.\nFig. 1Results of the voxel-wise comparisons of mean diffusivity (MD; gmMD, nMD, pMD) between the Alzheimer's disease (AD) and mild cognitive impairment (MCI) groups (a) and between the AD and cognitive normal (CN) groups (b) using one-way ANOVA tests. There were no decreased MD values for all three maps of gmMD, pMD, and nMD in the patients with AD as compared with that of the MCI or CN patients. There were no significant differences between the MCI and CN patients\nTable 2The significantly different regions when comparing between the AD and MCI groups (AD > MCI) using the MD with (gmMD) and without (nMD and pMD) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as the cluster level; one-way ANOVA tests)Talairach coordinateClusterTZRegionBA ROI#\nX\n\nY\n\nZ\ngm50−4−226025.565.27R. temporal subgyrus, WM50−18−46024.834.64R. superior temporal gyrus, GM2258−6−46024.84.61R. superior temporal gyrus, WMROI1−32−24−222645.325.07L. limbic parahippocampal gyrus, WMROI238−72323234.944.73R. occipital subgyrus, WM−462143034.814.62L. superior frontal gyrus, GM9−450203034.654.47L. medial frontal gyrus, WM−662−43034.414.26L. medial frontal gyrus, WMNeg50−6−201,0415.785.46R. temporal subgyrus, WM50−18−41,0415.024.8R. superior temporal gyrus, GM2258−6−41,0415.034.81R. superior temporal gyrus, WMROI1−30−22−225465.735.41L. limbic parahippocampal gyrus, WM38−72304325.184.95R. temporal subgyrus, WM36−7264323.943.83R. middle occipital gyrus, WM−26−6−325464.354.21L. limbic parahippocampal gyrus, WM−322−225464.494.33L. limbic parahippocampal gyrus, WMPos40−74301,7594.824.63R. middle temporal gyrus, WMROI3−462101,0245.385.11L. medial frontal gyrus, GM−450201,0245.445.17L. medial frontal gyrus, WM10−662−41,0245.234.98L. medial frontal gyrus, WM24−86161,7595.745.42R. occipital ceneus, WMROI454−6461,75954.78R. middle temporal gyrus, WM−56−44325235.224.98L. parietal supramarginal gyrus, WM−50−38305234.414.26L. interior parietal lobule, WM−58−34185234.44.25L. superior temporal gyrus, WM−302−227324.494.33L. 
limbic parahippocampal gyrus, GM34−24−4−327323.653.56L. limbic uncus, WMROI550−4−227795.345.09R. temporal subgyrus, WM−32−26−227325.014.79L. limbic parahippocampal gyrus, WMROI24040264084.714.52R. middle frontal gyrus, GM950−6−47794.64.43R. sublobar insula4830284084.524.36R. middle frontal gyrus, GM950−18−47794.664.49R. superior temporal gyrus, GM22464684084.033.91R. middle frontal gyrus, WM\ngm geometric mean, Neg negative, Pos positive, BA Brodmann area, R. right, L. left, WM white matter, GM gray matter\n\nResults of the voxel-wise comparisons of mean diffusivity (MD; gmMD, nMD, pMD) between the Alzheimer's disease (AD) and mild cognitive impairment (MCI) groups (a) and between the AD and cognitive normal (CN) groups (b) using one-way ANOVA tests. There were no decreased MD values for all three maps of gmMD, pMD, and nMD in the patients with AD as compared with that of the MCI or CN patients. There were no significant differences between the MCI and CN patients\nThe significantly different regions when comparing between the AD and MCI groups (AD > MCI) using the MD with (gmMD) and without (nMD and pMD) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as the cluster level; one-way ANOVA tests)\n\ngm geometric mean, Neg negative, Pos positive, BA Brodmann area, R. right, L. left, WM white matter, GM gray matter\nCompared to the CN subjects, the AD patients had increased pMD values mainly in the temporal gyrus. The AD patients also had increased nMD values (AD > CN) mainly in the temporal gyrus. Moreover, we found that the AD patients had increased gmMD values (AD > CN, Fig. 1b) predominantly in the left limbic parahippocampal gyrus, the left limbic uncus, the left and right temporal subgyrus, and the right middle temporal gyrus. These detailed results are summarized in Table 3.\nTable 3The significantly different regions when comparing between the AD and CN groups (AD > CN) using the MD with (gmMD) and without (nMD and pMD) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as the cluster level; one-way ANOVA tests)Talairach coordinateClusterTZRegionBA ROI#\nX\n\nY\n\nZ\ngm−32−24−222895.184.94L. limbic parahippocampal gyrus, WMROI2−26−8−282894.324.18L. limbic uncus, GM, amygdala−300−202894.294.15L. temporal subgyrus, WM50−2−221574.994.78R. temporal subgyrus, WM40−74285304.94.7R. middle temporal gyrus, WMROI356−6485304.694.51R. middle temporal gyrus, WM32−66345304.594.42R. temporal subgyrus, WMNeg−32−24−224155.475.19L. limbic parahippocampal gyrus, WMROI2−26−8−304154.94.69L. limbic parahippocampal gyrus, WM−300−204154.224.09L. temporal subgyrus, WM50−2−222945.094.86R. temporal subgyrus, WM40−74286224.854.65R. middle temporal gyrus, WMROI334−66346224.74.52R. temporal subgyrus, WM48−18−42944.44.24R. sublobar insula, GM1346−6026224.474.31R. temporal subgyrus, WMPos−32−26−224094.874.67L. limbic parahippocampal gyrus, WMROI2−28−2−204094.454.29L. limbic parahippocampal gyrus, WM42−74281,0105.224.95R. middle temporal gyrus, WMROI354−6481,0105.285.03R. middle temporal gyrus, WM30−66341,0104.814.62R. occipital subgyrus, WM26−84164855.124.89R. occipital cuneus, WMROI430−9444854.734.55R. occipital subgyrus, WM38−94−24854.394.24R. inferior occipital gyrus, WM\nR. right, L. left\n\nThe significantly different regions when comparing between the AD and CN groups (AD > CN) using the MD with (gmMD) and without (nMD and pMD) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as the cluster level; one-way ANOVA tests)\n\nR. 
right, L. left\nFigure 2 shows the differences in the pFA maps between the MCI or CN and AD subjects. Compared to the MCI patients, the AD patients had increased pFA values (AD > MCI) mainly in the temporal and frontal gyrus and the posterior cingulate. The detailed results are also summarized in Table 4. Compared to the CN subjects, the AD patients also had increased pFA values (AD > CN) mainly in the left inferior and superior frontal gyrus. We did not find any significant differences in the pFA between the MCI and CN subjects. Similarly, we did not find any significant differences in the gmFA or nFA across the three groups. The results of the FA are summarized in Table 5.\nFig. 2Results of the voxel-wise comparisons of factional anisotropy (FA; gmFA, nFA, pFA) among the three AD, MCI, and CN groups using one-way ANOVA tests. The pFA maps were not significantly different between the MCI and CN groups. The gmFA and nFA maps were not significantly different among the three AD, MCI, and CN groups at all\nTable 4The significantly different regions when comparing between the AD and MCI groups (AD > MCI) using the FA maps with (gmFA) and without (nFA and pFA) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as a cluster level; one-way ANOVA tests)Talairach coordinateClusterTZRegionBA ROI#\nX\n\nY\n\nZ\ngmNo significant differenceNegNo significant differencePos−58−32144,9755.955.6L. superior temporal gyrus, WMROI6−542484,9755.895.56L. inferior frontal gyrus, GM4552−12221,5505.755.43R. sublobar extranuclear, WM502101,5505.465.18R. sublobar insula, WM38−24181,5505.914.71R. sublobar insula, GM34−30−163835.515.23R. limbic parahippocampal gyrus, WM50−40−223833.93.79R. temporal fusiform gyrus, GM1364−16−64594.914.71R. superior temporal gyrus, WMROI758−26−44595.264.12R. superior temporal gyrus, WM37 ROI8−2−74267614.834.63L. occipital cuneus, GM18−2−60307614.654.47L. limbic posterior cingulate−2−88187614.474.31L. occipital cuneus, GM18−56−66104044.724.53L. middle temporal gyrus, WM−60−5824044.244.1L. middle temporal gyrus, WM\nR. right, L. left\nTable 5The significantly different regions when comparing between the AD and CN groups (AD > CN) using the FA maps with (gmFA) and without (nFA and pFA) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as a cluster level; one-way ANOVA tests)Talairach coordinateClusterTZRegionBA ROI#\nX\n\nY\n\nZ\ngmNo significant differenceNegNo significant differencePos−4842−42,9866.496.05L. inferior frontal gyrus, WM−562282,9865.785.46L. inferior frontal gyrus−3656163574.814.62L. superior frontal gyrus, GM10−245643574.394.24L. superior frontal gyrus, WMROI9−2854243574.294.15L. superior frontal gyrus, WMROI10\nR. right, L. left\n\nResults of the voxel-wise comparisons of factional anisotropy (FA; gmFA, nFA, pFA) among the three AD, MCI, and CN groups using one-way ANOVA tests. The pFA maps were not significantly different between the MCI and CN groups. The gmFA and nFA maps were not significantly different among the three AD, MCI, and CN groups at all\nThe significantly different regions when comparing between the AD and MCI groups (AD > MCI) using the FA maps with (gmFA) and without (nFA and pFA) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as a cluster level; one-way ANOVA tests)\n\nR. right, L. 
left\nThe significantly different regions when comparing between the AD and CN groups (AD > CN) using the FA maps with (gmFA) and without (nFA and pFA) minimizing the effects of the background gradients (corrected FDR (p < 0.01) as a cluster level; one-way ANOVA tests)\n\nR. right, L. left\nThe significant regions that overlap in all three DT-MRI sets (positive gradients, negative gradients, and geometric mean) as well as those regions that differ are listed in Table 6 for the MD (pMD, nMD, and gmMD) and in Table 7 for the FA measures. The overlapping regions of increased MD in the AD patients relative to the MCI subjects (AD > MCI) included the right superior temporal gyrus and left limbic parahippocampal gyrus. In contrast, the nonoverlapping regions of increased MD in the AD patients relative to the MCI patients (AD > MCI) included: the right occipital subgyrus and the left medial and superior frontal gyrus for the gmMD; the right middle occipital gyrus for the nMD; the right middle temporal gyrus, the left medial frontal gyrus, the right occipital cuneus, the left parietal supramarginal gyrus, the left limbic parahippocampal gyrus, the left limbic uncus, the right insula, and the right middle frontal gyrus for the pMD.\nTable 6Summaries of the common and different regions from Tables 2 and 3 for the mean diffusivity (MD)RegionCommon regions (AD > MCI)ROI1R. superior temporal gyrus, WMR. temporal subgyrus, WMR. superior temporal gyrus, GMROI2L. limbic parahippocampal gyrus, WMDifferent regions (AD > MCI)gmMDR. occipital subgyrus, WML. superior frontal gyrus, GML. medial frontal gyrus, WMnMDR. middle occipital gyrus, WMpMD (ROI3)R. middle temporal gyrus, WMR. and L. medial frontal gyrus, GM and WMROI4R. occipital ceneus, WML. parietal supramarginal gyrus, WML. interior parietal lobule, WML. limbic parahippocampal gyrus, GMROI5L. limbic uncus, WMR. sublobar insulaCommon regions (AD > CN)ROI2L. limbic parahippocampal gyrus, WMROI3R. middle temporal gyrus, WMDifferent regions (AD > CN)gmMDL. limbic uncus, GM, AmygdalaR. and L. temporal subgyrus, WMnMDR. and L. temporal subgyrus, WMR. sublobar insula, GMpMDR. temporal subgyrus, WMROI4R. occipital cuneus, WMR. occipital subgyrus, WMR. inferior occipital gyrus, WM\nR. right, L. left\nTable 7Summaries of the common and different regions from Tables 2 and 3 for fractional anisotropy (FA)RegionCommon regions (AD > MCI)No common regionDifferent regions (AD > MCI)pFA (ROI 6,7,8)R. and L. superior temporal gyrus, WML. inferior frontal gyrus, GMR. sublobar extranuclear, WMR. sublobar insula, GM and WMR. limbic parahippocampal gyrus, WMR. temporal fusiform gyrus, GML. occipital cuneus, GML. limbic posterior cingulateL. middle temporal gyrus, WMCommon regions (AD > CN)No common regionDifferent regions (AD > CN)pFAL. inferior frontal gyrus, WMROI 9, 10L. superior frontal gyrus, GM and WM\nR. right, L. left\n\nSummaries of the common and different regions from Tables 2 and 3 for the mean diffusivity (MD)\n\nR. right, L. left\nSummaries of the common and different regions from Tables 2 and 3 for fractional anisotropy (FA)\n\nR. right, L. left\nSimilarly, the overlapping regions of increased MD in the AD patients relative to that of the control subjects (AD > CN) included the left limbic parahippocampal gyrus and right middle temporal gyrus. 
In contrast, the nonoverlapping regions of increased MD in the AD patients relative to that of the control subjects included: the left limbic uncus and the left and right temporal subgyrus for the gmMD; the left and right temporal subgyrus and the right sublobar insula for the nMD; the right temporal subgyrus, the right occipital subgyrus, the right occipital cuneus and the right inferior occipital gyrus for the pMD. Because the gmFA and nFA differences between the AD patients and the other two groups of subjects were not significant, there were no overlapping regions for the FA measures.\nTables 8 and 9 list the results of ROI analyses for the MD and the FA, respectively. As we can see in the tables, MD values were significantly different between AD and MCI or between AD and CN. There was no significant difference between MCI and CN. Those results were the same as those from VBM analyses.\nTable 8The ROI data and results of the corresponding statistical test for the mean diffusivity (MD) valuesROISubjects\np valueADMCICNAD–MCIAD–CNMCI–CNSuperior temporal gyrus, ROI1, X ±58, Y −6, z −4Rtgm1.045±0.1560.875±0.0720.908±0.0840.000260.004900.22578Rtneg1.049±0.1610.870±0.0760.907±0.0910.000220.005000.21123Rtpos1.042±0.1530.879±0.0690.909±0.0790.000300.004500.24897Ltgm1.023±0.1550.911±0.1170.955±0.0990.024890.154340.25234Ltneg1.027±0.1630.917±0.1270.960±0.1080.037550.185080.30315Ltpos1.022±0.1500.907±0.1100.954±0.0920.016350.134880.19277Limbic parahippocampal gyrus, ROI 2, X ±32, y −24, z −22Rtgm0.100±0.1110.891±0.0710.882±0.0600.001870.000910.69383Rtneg0.933±0.1110.877±0.0740.875±0.0610.001120.000880.93385Rtpos1.007±0.1110.906±0.0740.890±0.0580.003950.000880.49030Ltgm1.036±0.1160.879±0.0600.882±0.0750.000020.000120.92913Ltneg1.031±0.1110.868±0.0580.870±0.0740.000010.000050.94042Ltpos1.036±0.1180.890±0.0620.893±0.0780.000080.000390.90633Middle temporal gyrus, ROI 3, X ±40, y −74, z 28Rtgm0.963±0.0990.837±0.0620.832±0.0580.000110.000090.78235Rtneg0.963±0.1080.832±0.0620.836±0.0620.000130.000340.89277Rtpos0.966±0.1050.843±0.0640.829±0.0540.000250.000080.49316Ltgm0.944±0.1150.852±0.0590.875±0.0780.005900.059850.33837Ltneg0.944±0.1180.846±0.0650.871±0.0770.005110.050680.30961Ltpos0.947±0.1140.859±0.0540.879±0.0810.010480.149420.23174Occipital cuneus, ROI 4, X ±26, y −84, z 16Rtgm0.948±0.0900.829±0.0510.839±0.0580.000040.000350.61712Rtneg0.934±0.0820.828±0.0590.839±0.0600.000150.000850.59032Rtpos0.962±0.1060.830±0.0460.838±0.0570.000040.000310.65473Ltgm0.924±0.1030.853±0.0800.879±0.0630.035070.150810.31487Ltneg0.920±0.1040.853±0.0870.873±0.0630.051820.132210.45565Ltpos0.927±0.1070.853±0.0750.884±0.0640.027150.183120.21208Limbic uncus, ROI 5, X ±24, y −4, z −32Rtgm1.047±0.1420.950±0.1030.928±0.0660.031110.005320.46841Rtneg1.040±0.1650.933±0.1110.918±0.0810.034750.013530.65553Rtpos1.053±0.1350.967±0.1000.943±0.0650.045220.006920.42162Ltgm1.055±0.1180.962±0.0710.912±0.0970.000530.000900.61871Ltneg1.053±0.1190.912±0.0740.899±0.0990.000230.000490.66891Ltpos1.062±0.1240.939±0.0690.923±0.0980.001180.001660.57150\nRt right, Lt left, AD Alzheimer's disease, MCI mild cognitive impairment, CN cognitive normal, Gm geometric mean, neg negative, p positiveThe ROIs are listed in the table above. 
The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups\nTable 9The ROI data and results of the corresponding statistical test for the fractional anisotropy (FA) valuesROISubjects\np valueADMCICNAD–MCIAD–CNMCI–CNSuperior temporal gyrus, ROI 6, x ±58, y −32, z +14Rtgm0.238±0.0150.225±0.0140.233±0.0340.022070.658770.35914Rtneg0.293±0.0220.269±0.0140.280±0.0360.000770.247910.23248Rtpos0.292±0.0270.269±0.0160.276±0.0460.004670.251590.54348Ltgm0.227±0.0160.204±0.0140.213±0.0160.000140.023290.10851Ltneg0.282±0.0260.250±0.0170.266±0.0210.000240.075570.02310Ltpos0.296±0.0270.255±0.0200.267±0.0230.000020.003410.12132Superior temporal gyrus, ROI 7, x ±64, y −16, z −6Rtgm0.212±0.0230.190±0.0130.194±0.0120.001050.008070.32091Rtneg0.264±0.0280.237±0.0140.246±0.0200.001370.044800.18303Rtpos0.271±0.0360.236±0.0150.244±0.0180.000620.011910.14461Ltgm0.120±0.0170.190±0.0160.181±0.0190.101160.006930.13342Ltneg0.251±0.0150.240±0.0210.233±0.0290.120770.041210.40239Ltpos0.265±0.0360.238±0.0150.229±0.0210.006670.002020.18436Superior temporal gyrus, ROI 8, x ±58, y −26, z −4Rtgm0.231±0.0230.210±0.0160.217±0.2340.004770.119930.29256Rtneg0.276±0.0260.254±0.0210.265±0.0300.011280.319820.19190Rtpos0.284±0.0320.249±0.0130.262±0.0280.000180.051070.08649Ltgm0.223±0.0260.218±0.0200.220±0.0200.557590.679710.85453Ltneg0.270±0.0200.266±0.0260.268±0.0290.620740.848380.80622Ltpos0.281±0.0300.263±0.0180.269±0.0260.038180.222050.44215Superior frontal gyrus, ROI 9, x ±24, y +56, z +4Rtgm0.253±0.0270.255±0.0260.254±0.0330.879670.939170.95524Rtneg0.303±0.0220.293±0.0230.301±0.0330.241570.909150.40468Rtpos0.311±0.0300.302±0.0360.295±0.0300.473680.151430.51314Ltgm0.268±0.0250.244±0.0270.247±0.0250.014530.029800.73554Ltneg0.311±0.0260.289±0.0340.302±0.0300.051340.398980.24469Ltpos0.347±0.0450.303±0.0340.298±0.0300.002980.001250.67386Superior frontal gyrus, ROI 10, x ±28, y +58, z +24Rtgm0.205±0.0270.188±0.0270.182±0.0280.088420.030570.54784Rtneg0.249±0.0420.235±0.0320.235±0.0330.283680.302110.98821Rtpos0.279±0.0460.248±0.0410.239±0.0370.047140.011250.49593Ltgm0.203±0.0270.178±0.0270.168±0.0420.009850.009890.44181Ltneg0.258±0.0280.229±0.0360.223±0.0520.015800.029850.73585Ltpos0.272±0.0390.241±0.0450.221±0.0570.050280.008000.26447\nRt right, Lt left, AD Alzheimer's disease, MCI mild cognitive impairment, CN cognitive normal, Gm geometric mean, neg negative, p positiveThe ROIs are listed in the table above. The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups\n\nThe ROI data and results of the corresponding statistical test for the mean diffusivity (MD) values\n\nRt right, Lt left, AD Alzheimer's disease, MCI mild cognitive impairment, CN cognitive normal, Gm geometric mean, neg negative, p positive\nThe ROIs are listed in the table above. The significant level was used with p = 0.016 (p = 0.05/3 times repeated) because we repeated the same tasks three times among the three groups\nThe ROI data and results of the corresponding statistical test for the fractional anisotropy (FA) values\n\nRt right, Lt left, AD Alzheimer's disease, MCI mild cognitive impairment, CN cognitive normal, Gm geometric mean, neg negative, p positive\nThe ROIs are listed in the table above. 
The significance level was set at p = 0.016 (0.05/3) because the same comparison was repeated for the three group pairs", "A new finding of this study is that in AD patients the background gradients have a significantly greater effect on the MD than in the MCI or CN subjects. This finding is consistent with our hypothesis that elevated iron-rich processes in AD patients, such as the accumulation of amyloid plaques, induce local variations in magnetic susceptibility, and that these are detectable as regional changes in the background gradients. The finding also suggests that the contributions to the background gradients are heterogeneous. Finally, our results from the regular DT-MRI maps, obtained using diffusion-sensitizing gradients with positive polarity alone, are consistent with the results of previous DT-MRI studies on AD, MCI, and CN subjects.

AD was associated with a systematic pattern of diffusion alterations in the MD maps relative to MCI and CN even when the geometric mean values of diffusivity (gmMD) were computed, which should in principle diminish the effects of the background gradients. The increased gmMD values in the AD subjects were found in the region of the right superior temporal gyrus. A previous DT-MRI study that used single-polarity diffusion-encoding gradients reported elevated MD values in AD subjects relative to MCI subjects in the same region [5]. In addition, we also found increased gmMD values in the AD subjects in the left superior frontal gyrus gray matter and the left medial frontal gyrus white matter. Furthermore, compared to the CN subjects, we found increased gmMD values in the AD subjects in the limbic parahippocampal and right middle temporal gyrus and in the left and right temporal white matter. These areas are also consistent with those reported in previous single-polarity DT-MRI studies in AD patients [1, 3, 4, 27]. Furthermore, we found increased gmMD values in the AD subjects involving the left limbic uncus gray matter, a region that has not previously been reported in DT-MRI studies. These findings are consistent with our results from comparing the MD maps derived from diffusion-sensitizing gradients of different polarity, and they further support our hypothesis that intrinsic susceptibility variations of the brain contribute to the regional diffusion abnormalities in AD patients.

One of our findings was that the regional distribution of elevated gmMD values in the AD patients did not completely overlap with the corresponding pattern of nMD and pMD variations. This regional difference could reflect the heterogeneity of AD pathology or different stages of the disease, as well as variations in the processes leading to local changes in magnetic susceptibility. For example, the regions where all three diffusion measurements (pMD, nMD, and gmMD) show significant alterations might be related primarily to alterations in cell density that change the MD but do not substantially alter the magnetic susceptibility. On the other hand, the regions without a significant change in the gmMD, but with significant changes in the nMD and pMD, could mainly reflect local magnetic susceptibility variations related to the paramagnetic effects of iron-rich processes involving oligodendrocytes and amyloid plaques. Sensitivity differences between the various MD maps cannot explain the regional discordance between the changes in the pMD and nMD.
We cannot directly compare the sensitivity of the gmMD maps with that of the pMD or nMD maps because the gmMD map is derived by averaging two DT-MRI signals and therefore generally has a higher signal-to-noise ratio than the individual pMD and nMD maps, which are based on a single DT-MRI signal. For the same reason, the geometrically calculated MD (gmMD) may differentiate the AD or MCI patients from the other groups more accurately than the MD obtained with a single polarity.

We tested these concepts in a water phantom before applying them to the human brain. Between the positive and negative diffusion gradients, the mean ADC value (the average of the tensor components, i.e., the mean eigenvalue) differed by about 0.00004 mm2/s and the FA value differed by about 0.00188. This indicates that our human results represent reliable DTI measurements without any systematic bias due to the different gradient settings (positive vs. negative).

In the voxel-wise comparisons of the human brain data, the results from the geometric mean appear highly similar to those from the negative gradients, while the positive gradients show very different results. A possible explanation is the following. The largest contribution of the cross terms tends to occur when the applied diffusion gradient and the intrinsic background magnetic field gradient have the same direction and sign as the frequency-encoding direction. With the TE value used, this would mean more affected voxels for positive diffusion and read gradients than for negative diffusion and positive read gradients. This is supported by the comparable diffusion rates obtained for all gradient directions in an isotropic phantom experiment with the same acquisition parameters.

Our DTI findings using diffusion-sensitizing gradients with the single positive polarity are largely consistent with the findings of previous DT-MRI studies that used a single gradient polarity [1, 3–5, 27]. Specifically, our finding in AD patients of increased pMD values in the right middle temporal gyrus white matter and the left superior temporal gyrus white matter, as compared to the MCI patients, is consistent with a previous report [5]. Similarly, our results in AD patients of increased pMD values in the limbic parahippocampal gyrus white matter, the right temporal gyrus white matter, and the right occipital white matter are consistent with the results of previous studies [1, 3, 4, 27].

Previous studies have also demonstrated that the background gradient effects in DT-MRI can be amplified by an interaction (also known as the "cross-term effect") between the external and background gradients. Several diffusion studies have reported significant cross-term effects on measurements of the apparent diffusion coefficient (ADC) in phantoms [28], rat brain [14], and pig spinal cord [29]. Indeed, several studies have shown that cross-term effects can also depend, at least in part, on the type of sequence and the external gradient patterns [30, 31], and these effects can in principle be minimized by the combined use of a multispin-echo preparation and pulsed gradients for diffusion encoding [32] or by the combined use of asymmetric bipolar diffusion-encoding gradients and a twice-refocused spin-echo preparation [22]. A further class of experimental strategies that also accounts for the presence of field inhomogeneities is based on the use of bipolar gradient pulses [22, 33, 34].
However, some ADC studies have found no detectable cross-term effects [35], including a DT-MRI study in normal young human brain [36]. Our results generalize the cross-term findings by demonstrating that the effect can vary substantially between different brain conditions. Our observation in AD patients that not all the variations in the pMD or nMD maps were also present in the gmMD maps emphasizes the importance of accounting for the background gradients when comparing DT-MRI data between groups with different brain conditions. In particular, the findings indicate that a bias toward increased MD variability can be introduced if the background gradients are not considered. For the FA index, there were no differences between the positive and negative acquisitions. This is not surprising because FA is derived by taking ratios of the eigenvalues; the sensitivity to detect an effect on FA is therefore substantially diminished compared to that of MD, which is simply the average of the eigenvalues.

Finally, although some previous investigators [2, 37, 38] have found significant differences in FA and MD between MCI and CN subjects, we did not find any significant differences in either FA or MD between these two groups. One explanation is that the MCI subjects in this study were only mildly impaired, as reflected by their average MMSE score, which did not significantly differ from that of the normal subjects. Another possibility is that our MCI group included only a few subjects with preclinical AD pathology as compared to the MCI groups in other previous studies.

In general, FA values in the white matter can be contaminated by signal from CSF spaces, which are more pronounced in AD than in CN subjects. Such contamination would tend to increase MD and decrease FA values, as reported in several previous papers. However, in our study we used a CSF-suppressed DTI method to minimize partial volume effects, so increased MD values in a voxel may not be directly related to CSF contamination. We think that they may instead be related to a loss of integrity of the neuronal structure; in this case, the FA value can either decrease or increase depending on the type of neuronal loss in the voxel. Previous findings of decreased FA in AD may be due to (1) neuronal loss in the voxel (microstructural changes) or (2) increased contributions of CSF signal caused by loss of cortical GM (atrophy; macroscopic changes), because many of the previous studies did not use an inversion-recovery DTI sequence; CSF signal may therefore have contributed substantially to the decreased FA values in AD reported previously. In contrast, previous findings and our own finding of increased FA in AD may be due to selective loss of fibers in a certain direction rather than loss of every neuron in the voxel (selective neuronal loss and microstructural changes). In addition, our finding of increased FA in AD may partly reflect a smaller contribution of CSF signal. If brain atrophy contributes to the FA changes, then our finding may be unrelated to microstructural alteration. A recent paper has shown increased FA values in such patients even though no CSF suppression technique was used: Teipel et al. found increased FA values in AD patients compared with elderly controls [39]. These increased FA values in AD may be associated with a decrease in crossing fibers or other nonparallel organization [40]. Teipel et al. [39] also found decreased FA values in AD patients compared with elderly controls, which we did not observe in the present study.
The largest difference between the two studies is the choice of statistical threshold. Teipel et al. [39] used an uncorrected p value of 0.01, whereas we used an FDR-corrected p value of 0.01 to account for multiple comparisons and reduce false-positive findings. When we reinvestigated our results with an uncorrected p value of 0.01, we also found decreased FA values in AD patients compared with elderly controls in the temporal and frontal lobes, similar to the findings of Teipel et al. [39]. In addition, there is usually more gray matter loss in AD than in elderly normal subjects. Within a voxel that contains both gray matter and white matter, the FA in our study can also be increased when there is gray matter loss without a change in white matter volume, because we acquired the DTI data while suppressing the CSF signal associated with brain atrophy; in this case, the relative contribution of white matter in the voxel is increased. In general, increased MD and decreased FA have been reported in AD compared with normal subjects. This decreased FA may simply reflect increased atrophy in a given voxel, because FA data have usually been analyzed without correcting for the contribution of atrophy. In this study, we used an inversion-recovery DT-MRI sequence, which minimizes CSF contamination within a voxel. During the FA analysis we can therefore detect tissue alterations in a voxel independent of the contribution of brain atrophy, which mostly affects gray matter. Therefore, it may be very important to correct for brain atrophy before analyzing FA maps, especially FA maps obtained in patients with AD.

In this study, MD values in some ROIs were higher in CN than in MCI subjects, and FA values in some ROIs were higher in MCI than in CN subjects. We cannot fully explain why this happens, but we offer the following considerations. Although we divided the subjects into MCI and CN groups based on full neuropsychologic testing, the MMSE score did not significantly differ between the two groups. In addition, although there was no significant age difference between the two groups, the MCI group was somewhat younger than the CN group. In both the voxel-wise and the ROI-based comparisons, there were no significant differences in any index between these two groups. Although several previous studies found increased MD values and decreased FA values in AD compared with CN, some studies did not find any differences between MCI and CN groups for either index [40]. In this case, the pathological changes seen in some areas in AD may not yet affect individuals with MCI. Furthermore, as can be seen in Table 8, MD values in MCI were higher than those in CN in the right limbic parahippocampal gyrus, the left middle temporal gyrus, and the right and left limbic uncus, as expected. In addition, as can be seen in Table 9, FA values in most ROIs were higher in CN than in MCI, especially in the right and left superior temporal gyrus, as expected. The pathological changes in those areas may be starting to affect individuals with MCI. This may indicate regional variation of the MD and FA values in patients with MCI compared with CN subjects.

Several limitations of our study ought to be mentioned. Firstly, none of the AD patients had a definite diagnosis confirmed by autopsy; therefore, the DTI alterations in the AD patients might not be related to AD at all. Another limitation is that we did not investigate the spatially variant background gradients for higher-order diffusion effects, such as kurtosis.
Therefore, the contributions from these other effects might have biased our results. Another technical limitation is that the geometric-mean maps, which are derived by averaging two signals, have a higher SNR than the individual diffusivity maps obtained without averaging, and this leads to differences in sensitivity; the findings of regional discrepancies between these maps are therefore difficult to interpret. Finally, in this study we mainly used a voxel-based analysis of the DTI indices in both the MD and FA maps. The optimization of voxel-based analysis of DT-MRI data is still under investigation. Although voxel-based analysis allows a whole-brain investigation without a specific a priori hypothesis, the results may be affected by several factors, such as the spatial normalization of DT-MRI data acquired with an EPI sequence, the coregistration between the anatomical magnetic resonance images and the maps of the DTI indices, the size of the smoothing kernel, and the registration of the individual MD or FA maps onto a common space. Therefore, optimization of the preprocessing and postprocessing steps is required to minimize any errors introduced during processing [41, 42].", "Accurate DT-MRI measurements require consideration of the effects of background gradients, especially in patients with pathological brain conditions such as AD. Furthermore, geometric-mean diffusion measures (e.g., gmMD) can be useful for minimizing the effects of intrinsic local magnetic susceptibility variations in brain tissue. As we demonstrated for the case of AD, these maps may provide information complementary to the standard DTI maps." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, null, null, "results", "discussion", "conclusion" ]
[ "Diffusion tensor imaging", "Background gradients", "Alzheimer's disease", "Mild cognitive impairment", "Geometric mean analysis" ]
Low transmission rate of 2009 H1N1 Influenza during a long-distance bus trip.
21340580
Current data on the risk of transmission of 2009 H1N1 Influenza in public transportation systems (e.g., public trains, buses, airplanes) are conflicting. The main transmission route of this virus is thought to be via droplets, but airborne transmission has not been completely ruled out.
BACKGROUND
This is a contact tracing investigation of a young woman subsequently diagnosed with the 2009 H1N1 Influenza virus who was symptomatic during a long-distance bus trip from Spain to Switzerland. Fever and cough had begun 24 h earlier, 2 h before she stepped onto the bus for the long-distance trip. After the 2009 H1N1 virus had been confirmed in the patient, the other bus travellers were contacted by telephone on days 7 and 10 after the bus trip.
METHODS
Of the 72 individuals travelling on the bus with the H1N1-infected young woman, 52 (72%) could be contacted. Only one of these 52 developed fever, with onset of symptoms 3 days after the bus trip, and rRT-PCR analysis of the nasopharyngeal swab showed the infection to be caused by the 2009 H1N1 virus. One other person complained of coughing 1 day after the bus trip, but without fever, and no further investigation was carried out. All other passengers remained without fever, coughing, or arthralgia. The risk of transmission was calculated as 1.96% (95% confidence interval 0-5.76%).
RESULTS
The transmission rate of 2009 H1N1 Influenza was low on a long-distance bus trip.
CONCLUSION
[ "Adult", "Contact Tracing", "Female", "Humans", "Influenza A Virus, H1N1 Subtype", "Influenza, Human", "Male", "Nasopharynx", "Reverse Transcriptase Polymerase Chain Reaction", "Risk Assessment", "Spain", "Switzerland", "Travel", "Young Adult" ]
7099280
Introduction
We report here a contact tracing investigation of a young woman with confirmed 2009 H1N1 Influenza who was symptomatic during a long-distance bus trip from Spain to Switzerland. Up to 10 August 2009, 56% of confirmed Swiss cases of 2009 H1N1 Influenza were due to infection outside of the country, with only 14% postulated to be due to transmission in Switzerland itself [1]. The principal transmission mode of 2009 H1N1 influenza is still under debate. Analysis of cough revealed that >99.9% of the expectorated particles are >8 μm [2] and are therefore defined as droplets. Particles in the size range of 5–10 μm have been shown to be capable of penetrating deeply into the tracheobronchial region (50% of 10-μm particles), but for particles >20 μm, there is essentially no penetration beyond the trachea [3]. While aerosols remain in the air, droplets fall to the ground, with a settling velocity that is in proportion to their diameter [3]. Based on these known facts, the main transmission route of the 2009 H1N1 virus is believed to be droplet exposure of mucosal surfaces [4]. As such, the hallmarks of transmission precaution are good hand hygiene and the wearing of gloves and surgical masks [5]. However, airborne transmission by small-particle aerosols may also occur [3, 6]. As with most respiratory pathogens, including influenza, the relative contribution of each of these types of transmission has not been adequately ascertained. Up to August 2010, the Centers for Disease Control and Prevention (CDC) recommended that healthcare personnel who are in close contact with patients with suspected or confirmed 2009 H1N1 Influenza take respiratory protection measures that are at least as protective as a fit-tested disposable N95 respirator. Most authorities recommended that persons with suspected 2009 H1N1 Influenza who are not severely ill should remain at home until they have been at least 24 h without fever or symptoms of fever, to limit further transmission. However, shedding and possible transmission of viral particles can occur 24 h before the first signs appear, and a substantial proportion of seasonal influenza as well as 2009 H1N1 Influenza is mild or even subclinical and not recognized by the patient [7, 8]. These persons can be infectious for others. For hospitalized patients, strategies to prevent transmission have been developed and standardized [9]. These include administration of influenza vaccine, implementation of respiratory hygiene and cough etiquette, appropriate management of ill healthcare workers, adherence to infection control precautions for all patient-care activities and aerosol-generating procedures, and implementation of environmental and engineering infection control measures. However, in the ambulatory setting, it may be difficult to guarantee separate waiting rooms for patients with suspected 2009 H1N1 Influenza (or other respiratory pathogens). Furthermore, symptoms may not be obvious, so these people are often placed in a waiting room together with other patients, possibly using the same lavatories and examination rooms. The risk of influenza transmission in public areas (e.g., public trains, busses, and airplanes) has not been defined, and clinical studies and mathematical models show conflicting results [10–12]. Typical airborne pathogens, such as tuberculosis, have been well studied, but to date little is known about the influenza virus, whose main transmission route is either direct, from person to person via droplets, or indirect, via a contaminated surface. 
On hard surfaces, the influenza virus is infective for up to 24 h; the survival time is much shorter on cloth, paper, and tissues, i.e., 8–12 h; on hands after transfer from environmental sources, the virus survives for only 5 min [13].
null
null
Results
Data were available for 52/72 (72%) of the bus passengers; the remainder could not be contacted after four attempts. One person became ill on day 3 after the trip, and further investigation revealed the presence of H1N1 virus by rRT-PCR analysis of a nasopharyngeal swab. One other person, who had already reported symptoms of influenza in Spain prior to boarding the bus, also tested positive for H1N1. Genotyping of the two persons who were ill during the bus trip could not identify a common source. Unfortunately, the amount of nucleic acid amplified from the secondary case was too small to allow genotyping, so transmission could not be confirmed by genotyping. One other person complained of cough without fever that had begun on day 3 after the return home from Spain but was still present on day 13. As fever was not present on days 3 and 13, the case definition for suspected influenza was not fulfilled and we postulated an unspecific viral upper respiratory tract infection; however, no nasopharyngeal swab was obtained from this individual. Six of the individuals contacted complained of fever during the holiday in Spain, but no further investigation was made as their symptoms had resolved at the time of the first telephone call. Four of these six individuals also complained of concomitant cough and arthralgia, while two of the six reported fever and cough only. The persons contacted spent their holiday at three different locations, in six, one, and five different hotels, respectively. The six persons reporting fever during the holiday were staying at four different hotels in two different places, with one cluster of three persons in the same hotel and in close contact. The passenger with the first signs of infection on day 3 after his return and proven 2009 H1N1 Influenza was probably infected during the bus trip. He shared the hotel with the cluster of persons with fever during their stay in Spain but had no other contact with them. He did not share the hotel with the two persons with proven 2009 H1N1 Influenza. The other person, who complained of cough and arthralgia but without fever, did not meet the case definition. Therefore, transmission of the 2009 H1N1 Influenza virus may have occurred in only 1/51 persons, with two proven index cases with fever and coughing during the bus trip. This person did not sit in close proximity to the two infected persons (see Fig. 1): the index person sat at the opposite window three rows behind, and the other infected person at the same window seat but eight rows in front of, the patient who became ill 3 days later. We calculated the risk of transmission for laboratory-confirmed symptomatic cases as 1.96% (95% confidence interval 0–5.76%). Fig. 1 Seating arrangements of passengers on the long-distance bus
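The transmission risk quoted above can be reproduced with a few lines. This is a sketch only: it assumes 51 exposed passengers at risk (as stated above, 1/51 persons) with one secondary case, and a normal-approximation (Wald) confidence interval truncated at zero, which agrees with the reported 1.96% (0–5.76%) up to rounding; the original authors' exact interval method is not stated in this record.

from math import sqrt

cases, at_risk = 1, 51                 # one secondary case among 51 exposed passengers
p = cases / at_risk                    # attack rate
se = sqrt(p * (1 - p) / at_risk)       # standard error of a proportion
lo = max(0.0, p - 1.96 * se)           # Wald interval, truncated at zero
hi = p + 1.96 * se
print(f"attack rate {p:.2%}, 95% CI {lo:.2%}-{hi:.2%}")
# -> attack rate 1.96%, 95% CI 0.00%-5.77% (the record reports 0-5.76%, consistent up to rounding)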
null
null
[ "Methods", "" ]
[ "On 1 August 2009, a 19-year-old female patient was admitted to the emergency department of our hospital with complaints of fever (39.2°C), cough, and arthralgia. The symptoms had begun 24 h earlier, just before she stepped onto a long-distance bus in Spain to return to Switzerland after a 1-week holiday. A nasopharyngeal swap was obtained, and 2009 H1N1 Influenza was confirmed by real time rRT-PCR, as previously described [14]. Confirmatory testing and genotyping was performed by the World Health Organization (WHO) collaborating reference center of Influenza (Geneva, Switzerland). The transportation company was contacted, and a complete passenger list of all those on the bus (72 passengers), with seating plan, was obtained. These busses make one round-trip weekly, collecting people from different parts in Spain and bringing them to different parts of Switzerland. During the summer of 2009, the infection rate of 2009 H1N1 Influenza was significantly higher in Spain than in Switzerland. Attempts were made to contact all passengers by telephone. When this was successful, the passengers were asked whether they had symptoms of fever (≥38°C), coughing, and/or arthralgia, and if so, when these symptoms first appeared. A suspected case was defined as the onset of fever (≥38°C) and cough or sore throat [15]. According to the exposure criteria published by the CDC on 1 May 2009, all passengers were included in the survey, as they had “travelled to a community, which has one or more confirmed swine-origin influenza A (H1N1) cases” [15]. Persons without symptoms who did not feel feverish were not asked to take their temperature. The first telephone call was made 6 days after the passengers had returned from Spain. If no fever had occurred by day 6, the persons were contacted a second time on between days 10 and 13.\nIf no symptoms of influenza had presented by day 10 after the possible contagion during the bus trip, it was considered that no infection had occurred. The bus was double floored, 13.9 m length, with an integrated ventilation system, air inlets on the roof, adjustable nozzles above each passenger and venting in the front of the bus by negative pressure, without air recirculation or HEPA-filters.\nThe average age of the passengers contacted was 19.7 ± 7 years, of whom 18 (34.6%) were male.", "Below is the link to the electronic supplementary material.\nSupplementary material (DOC 957 kb)\n\nSupplementary material (DOC 957 kb)" ]
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Electronic supplementary material", "" ]
[ "We report here a contact tracing investigation of a young woman with confirmed 2009 H1N1 Influenza who was symptomatic during a long-distance bus trip from Spain to Switzerland. Up to 10 August 2009, 56% of confirmed Swiss cases of 2009 H1N1 Influenza were due to infection outside of the country, with only 14% postulated to be due to transmission in Switzerland itself [1]. The principle transmission mode of 2009 N1N1 influenza is still under debate. Analysis of cough revealed that >99.9% of the expectorated particles are >8 μm [2] and therefore defined as droplets. Particles in the size range of 5–10 μm have to shown to be capable of penetrating deeply into the tracheobronchial region (50% of 10-μm particles), but for particles >20 μm, there is essentially no penetration beyond the trachea [3]. While aerosols remain in the air, droplets fall to the ground, with a settling velocity that is in proportion to their diameter [3]. Based on these known facts, the main transmission route of 2009 H1N1 virus is believed to be by droplet exposure of mucosal surfaces [4]. As such, the hallmarks of transmission precaution are good hand hygiene and the wearing of gloves and surgical masks [5]. However, airborne transmission by small particle aerosols may also occur [3, 6]. As with most respiratory pathogens, including influenza, the relative contribution of each of these types of transmission has not been adequately ascertained. Up to August 2010, the Centers of Disease Control and Prevention (CDC) recommended that healthcare personnel who are in close contact with patients with suspected or confirmed 2009 H1N1 Influenza take respiratory protection measures that are at least as protective as a fit-tested disposable N95 respirator. Most authorities recommended that persons with suspected 2009 H1N1 Influenza who are not severely ill should remain at home until they are at least 24 h without fever or symptoms of fever to limit further transmissions. However, the production and possible transmission of viral particles is possible 24 h before the first signs occur, and a substantial proportion of seasonal influenza as well as the 2009 H1N1 Influenza is mild or even subclinical and not recognized by the patient [7, 8]. These persons can be infectious for others.\nFor hospitalized patients, strategies to prevent transmission have been developed and standardized [9]. These include administration of influenza vaccine, implementation of respiratory hygiene and cough etiquette, appropriate management of ill healthcare workers, adherence to infection control precautions for all patient-care activities and aerosol-generating procedures, and implementation of environmental and engineering infection control measures.\nHowever, in the ambulatory setting, it may be difficult to guarantee special waiting rooms for patients with suspected 2009 H1N1 Influenza (or other respiratory pathogens). Furthermore, symptoms may not be obvious, so these people are often placed in a waiting room together with other patients, eventually using the same lavatories and examination rooms. The risk of influenza transmission in public areas (e.g., public trains, busses, and airplanes) has not been defined, and clinical studies and mathematical models show conflicting results [10–12]. Typical airborne pathogens, such as tuberculosis, have been well studied, but to date little is known on the influenza virus where the main transmission route is either directly from person-to-person via droplets, or indirectly via a contaminated surface. 
On hard surfaces, the influenza virus is infective for up to 24 h; the survival time is much shorter on cloth, paper, and tissues, i.e., 8–12 h; on hands after transfer from environmental sources, the virus survives for only 5 min [13].", "On 1 August 2009, a 19-year-old female patient was admitted to the emergency department of our hospital with complaints of fever (39.2°C), cough, and arthralgia. The symptoms had begun 24 h earlier, just before she stepped onto a long-distance bus in Spain to return to Switzerland after a 1-week holiday. A nasopharyngeal swap was obtained, and 2009 H1N1 Influenza was confirmed by real time rRT-PCR, as previously described [14]. Confirmatory testing and genotyping was performed by the World Health Organization (WHO) collaborating reference center of Influenza (Geneva, Switzerland). The transportation company was contacted, and a complete passenger list of all those on the bus (72 passengers), with seating plan, was obtained. These busses make one round-trip weekly, collecting people from different parts in Spain and bringing them to different parts of Switzerland. During the summer of 2009, the infection rate of 2009 H1N1 Influenza was significantly higher in Spain than in Switzerland. Attempts were made to contact all passengers by telephone. When this was successful, the passengers were asked whether they had symptoms of fever (≥38°C), coughing, and/or arthralgia, and if so, when these symptoms first appeared. A suspected case was defined as the onset of fever (≥38°C) and cough or sore throat [15]. According to the exposure criteria published by the CDC on 1 May 2009, all passengers were included in the survey, as they had “travelled to a community, which has one or more confirmed swine-origin influenza A (H1N1) cases” [15]. Persons without symptoms who did not feel feverish were not asked to take their temperature. The first telephone call was made 6 days after the passengers had returned from Spain. If no fever had occurred by day 6, the persons were contacted a second time on between days 10 and 13.\nIf no symptoms of influenza had presented by day 10 after the possible contagion during the bus trip, it was considered that no infection had occurred. The bus was double floored, 13.9 m length, with an integrated ventilation system, air inlets on the roof, adjustable nozzles above each passenger and venting in the front of the bus by negative pressure, without air recirculation or HEPA-filters.\nThe average age of the passengers contacted was 19.7 ± 7 years, of whom 18 (34.6%) were male.", "Data were available for 52/72 (72%) of the bus passengers; the remainder could not be contacted after four attempts. One person became ill on day 3 after the trip, and further investigation revealed the presence of H1N1 virus by rRT-PCR analysis of a nasopharyngeal swap. One other person who also reported having symptoms of influenza in Spain, even prior to boarding the bus, also tested positive for H1N1. Genotyping of the two persons who were ill during the bus trip could not identify a common source. Unfortunately, the amount of the nucleic acid amplified from the secondary case was too small to allow genotyping, so transmission could not be confirmed by genotyping. One other person complained of cough without fever that had begun on day 3 after the return home from Spain but which was still present on day 13. 
As fever was not present on days 3 and 13, the case definition for suspected influenza was not fulfilled and we postulated an unspecific viral upper respiratory tract infection; however, no nasopharyngeal swab was obtained from this individual. Six of the individuals contacted complained of fever during the holiday in Spain, but no further investigation was made as their symptoms had resolved at the time of the first telephone call. Four of these six individuals also complained of concomitant cough and arthralgia, while two of the six reported fever and cough only.\nThe persons contacted spent their holiday at three different locations, in six, one, and five different hotels, respectively.\nThe six persons reporting fever during the holiday were staying at four different hotels in two different places, with one cluster of three persons in the same hotel and in close contact.\nThe passenger with the first signs of infection on day 3 after his return and proven 2009 H1N1 Influenza was probably infected during the bus trip. He shared the hotel with the cluster of persons with fever during their stay in Spain but had no other contact with them. He did not share the hotel with the two persons with proven 2009 H1N1 Influenza. The other person, who complained of cough and arthralgia but without fever, did not meet the case definition. Therefore, transmission of the 2009 H1N1 Influenza virus may have occurred in only 1/51 persons, with two proven index cases with fever and coughing during the bus trip. This person did not sit in close proximity to the two infected persons (see Fig. 1): the index person sat at the opposite window three rows behind, and the other infected person at the same window seat but eight rows in front of, the patient who became ill 3 days later. We calculated the risk of transmission for laboratory-confirmed symptomatic cases as 1.96% (95% confidence interval 0–5.76%).\nFig. 1 Seating arrangements of passengers on the long-distance bus", "Since April 2009, when the first cases of the 2009 H1N1 Influenza infections were identified, the pandemic has spread throughout the world. Early studies reported a high hospitalization rate, but with growing experience in dealing with this new virus, it has become clear that the disease has a relatively mild course in the majority of patients, with most not requiring hospitalization [16]. However, in contrast to seasonal forms of Influenza A, young people are more affected, and pregnant women are especially at risk of developing severe disease. Consequently, rapid antiviral treatment is recommended.\nThe transmission rate of the 2009 H1N1 virus seems to be higher than that of seasonal influenza [16], explaining its rapid spread throughout the world. Health authorities have warned that many segments of public life could be affected (e.g., transportation, education, healthcare systems). For hospitalized patients, guidelines are in place for dealing with those individuals harboring the 2009 H1N1 virus, but outside of hospitals or even in the ambulatory care setting, it has often proven difficult to establish strict guidelines.\nIt is believed that most transmission of influenza virus occurs directly from person to person or via droplets falling onto hard surfaces, where the virus can survive for up to 48 h. However, airborne transmission cannot be ruled out completely, and small droplet nuclei containing influenza virus have been found in waiting rooms in an emergency department [17]. 
The individuals investigated in this study were on the same bus for more than 12 h, and at least two persons had symptomatic, documented 2009 H1N1 Influenza. Only one other person developed H1N1 Influenza, resulting in a transmission rate of 1.96%. The average age of our population was 19.7 ± 7 years. Younger people are, in contrast to seasonal influenza, more vulnerable to the 2009 H1N1 virus [18, 19]. Consequently, our data emphasize that airborne transmission may not be the main route of spread of the 2009 H1N1 Influenza virus. However, several authors propose that in special situations, airborne transmission of influenza virus may be underestimated [20]. Other pathogens that are usually transmitted by direct contact can, under certain conditions, spread through the air [21]. A significant increase in dispersal as well as transmission to patients, and even outbreaks in hospital wards, have been demonstrated for Staphylococcus aureus and linked to concomitant upper respiratory tract infection in otherwise healthy nasal carriers [22–24]. This phenomenon is called the “cloud adult” and was also proposed for “superspreaders” in the severe acute respiratory syndrome (SARS) epidemic [25].\nWe believe that the situation of this long-distance bus trip, with a relatively low transmission rate, cannot be transferred directly to other public transport modes. It was a night bus with just a few stops, and the passengers were sleeping most of the time. Consequently, the movement of passengers in the bus as well as boarding events were much less frequent than on a city bus or in a public train. As the movements of passengers may facilitate the dispersion of droplets to surfaces as well as person-to-person contacts, we postulate that the transmission rate on public transport systems may be higher. However, in an outbreak of 2009 H1N1 Influenza in two school classes in the UK, there was no evidence of transmission on a school bus where the children were exposed for more than 50 min to a symptomatic case [26].\nThis investigation has its limitations. Some people contract only mild or even asymptomatic 2009 H1N1 Influenza [27]. We contacted the passengers by telephone only and did not investigate any further if the passenger considered him/herself not to be ill. Thus, the real transmission rate could have been higher. We postulate that one person was infected during the bus trip, but transmission even before boarding or shortly after the bus trip cannot be completely ruled out.", "[SUBTITLE] [SUBSECTION] Below is the link to the electronic supplementary material.\nSupplementary material (DOC 957 kb)", "Below is the link to the electronic supplementary material.\nSupplementary material (DOC 957 kb)" ]
[ "introduction", null, "results", "discussion", "supplementary-material", null ]
[ "Influenza", "Influenza Virus", "Cough", "Severe Acute Respiratory Syndrome", "H1N1 Virus" ]
Randomized phase III study comparing the efficacy and safety of irinotecan plus S-1 with S-1 alone as first-line treatment for advanced gastric cancer (study GC0301/TOP-002).
21340666
Irinotecan hydrochloride and S-1, an oral fluoropyrimidine, have shown antitumor activity against advanced gastric cancer as single agents in phase I/II studies. The combination of irinotecan and S-1 (IRI-S) is also active against advanced gastric cancer. This study was conducted to compare the efficacy and safety of IRI-S versus S-1 monotherapy in patients with advanced or recurrent gastric cancer.
BACKGROUND
Patients were randomly assigned to oral S-1 (80 mg/m² daily for 28 days every 6 weeks) or oral S-1 (80 mg/m² daily for 21 days every 5 weeks) plus irinotecan (80 mg/m² by intravenous infusion on days 1 and 15 every 5 weeks) (IRI-S). The primary endpoint was overall survival. Secondary endpoints included the time to treatment failure, 1- and 2-year survival rates, response rate, and safety.
METHODS
The median survival time with IRI-S versus S-1 monotherapy was 12.8 versus 10.5 months (P = 0.233), time to treatment failure was 4.5 versus 3.6 months (P = 0.157), and the 1-year survival rate was 52.0 versus 44.9%, respectively. The response rate was significantly higher for IRI-S than for S-1 monotherapy (41.5 vs. 26.9%, P = 0.035). Neutropenia and diarrhea occurred more frequently with IRI-S, but were manageable. Patients treated with IRI-S received more courses of therapy at a relative dose intensity similar to that of S-1 monotherapy.
RESULTS
Although IRI-S achieved longer median survival than S-1 monotherapy and was well tolerated, it did not show significant superiority in this study.
CONCLUSIONS
[ "Adult", "Aged", "Antimetabolites, Antineoplastic", "Antineoplastic Agents, Phytogenic", "Antineoplastic Combined Chemotherapy Protocols", "Camptothecin", "Disease Progression", "Drug Combinations", "Female", "Follow-Up Studies", "Humans", "Irinotecan", "Male", "Middle Aged", "Neoplasm Recurrence, Local", "Oxonic Acid", "Stomach Neoplasms", "Tegafur", "Treatment Outcome", "Young Adult" ]
3056989
Introduction
Gastric cancer is the second leading cause of cancer-related deaths after lung cancer in Japan, and it was responsible for approximately 50,000 deaths in 2005 [1]. While surgery and appropriate adjuvant chemotherapy have resulted in superior stage-by-stage survival when compared with that in other parts of the world [2], the prognosis of unresectable or recurrent gastric cancer remains dismal. The development of more effective chemotherapeutic regimens is therefore warranted. In Western countries where a combination of 5-fluorouracil (5-FU) and cisplatin (CDDP) [3] has served as a reference arm in several phase III studies [4–6], triplets employing epirubicin [7] or docetaxel [5] in addition to this combination are the current standards, with modifications such as the replacement of CDDP with oxaliplatin and the replacement of infusional 5-FU with oral agents such as capecitabine [8]. Failure with the first-line treatment usually denotes the termination of chemotherapy, and second-line treatments are rarely considered outside of clinical trials. In Japan, where a phase III study (JCOG9205) failed to show superiority of a 5-FU/CDDP combination over 5-FU alone [9], the 5-FU monotherapy remained a standard of care, and other cytotoxic agents were usually delivered sequentially as second-line and third-line therapies rather than concurrently as combination therapy. With this strategy, the median survival time (MST) of patients with advanced gastric cancer whose treatment started with infusional 5-FU alone actually reached 10.8 months [9]. In the 1990s, S-1 (TS-1; Taiho Pharmaceutical, Tokyo, Japan), an oral derivative of 5-FU, was developed for the treatment of gastric cancer [10–12]. With an exceptionally high response rate of 46% as a single agent, this drug rapidly established itself as a community standard in Japan and was used widely in clinical practice. Phase III trials eventually proved the non-inferiority of S-1 when compared with infusional 5-FU in the advanced/metastatic setting [13], along with the superiority of S-1 monotherapy over observation alone in the postoperative adjuvant setting [14]. In addition, S-1 was found to be a unique cytotoxic drug, in that Japanese patients tolerated higher doses than Western patients, due to differences in the gene polymorphism of relevant enzymes [15]. Thus, the development of novel chemotherapeutic regimens in Japan during the 2000s has inevitably centered around this drug. The establishment of doublets to enhance response rates and improve on survival was the next important step, and several phase I/II studies were performed to explore combinations of S-1 with other cytotoxic drugs such as CDDP [16], docetaxel [17], paclitaxel [18], and irinotecan (Yakult Honsha, Tokyo, Japan; Daiichi Sankyo, Tokyo, Japan) [19]. All these combinations were found to be promising, with response rates of around 50% and relatively favorable safety profiles. A series of phase III trials comparing these doublets with S-1 monotherapy were subsequently planned and conducted to seek optimal first-line treatments. Of these, a phase III trial to explore S-1/CDDP was the first to complete accrual, and a significant improvement in MST of this combination over S-1 monotherapy was proven [20]. The present study, entitled GC0301/TOP-002, represents another of these attempts, exploring the efficacy of a combination of S-1 and irinotecan (IRI-S). 
The dose and schedule for this combination had been established by a phase I trial [21], and treatment at the recommended dose has shown a response rate of 47.8% [95% confidence interval (CI) 27.4–68.2%] with an MST of 394 days in a phase II study [19]. Given these earlier results and the synergistic effect of irinotecan and 5-FU observed in preclinical studies, the results of this present trial have been eagerly awaited.
null
null
Results
[SUBTITLE] Patient characteristics [SUBSECTION] Between June 2004 and November 2005, a total of 326 patients (S-1 monotherapy, n = 162; IRI-S, n = 164) were enrolled from 54 institutions and randomized (Fig. 1). Seven patients were subsequently found to be ineligible or withdrew before receiving any treatment. Another 4 patients were found to be ineligible after starting treatment and were not included in the analysis. Therefore, 315 patients (S-1 monotherapy, n = 160; IRI-S, n = 155) were evaluable and were included in the full analysis set to assess overall survival and TTF. In addition, 187 patients were evaluable for tumor response. Baseline patient characteristics are shown in Table 1.
Fig. 1 Patient disposition. FAS Full analysis set, IRI-S S-1 plus irinotecan, PPS per-protocol set, TTF time to treatment failure
Table 1 Baseline characteristics and prior therapy [S-1 n (%) / IRI-S n (%) / Total n (%)]
Patients randomized: 162 / 164 / 326
Patients receiving at least one dose of study medication (full analysis set): 160 / 155 / 315
Sex, male: 127 (79) / 110 (71) / 237 (75); female: 33 (21) / 45 (29) / 78 (25)
Age (years), median (range): 63 (27–75) / 63 (33–75) / 63 (27–75)
ECOG performance status 0: 109 (68) / 102 (66) / 211 (67); 1: 46 (29) / 48 (31) / 94 (30); 2: 5 (3) / 5 (3) / 10 (3)
Tumor histology, intestinal: 71 (44) / 61 (39) / 132 (42); diffuse: 88 (55) / 93 (60) / 181 (57); other: 1 (1) / 1 (1) / 2 (1)
Resection of primary tumor, yes: 93 (58) / 93 (60) / 186 (59); no: 67 (42) / 62 (40) / 129 (41)
Disease status, advanced: 133 (83) / 129 (83) / 262 (83); recurrent with prior adjuvant chemotherapy: 5 (3) / 5 (3) / 10 (3); recurrent without prior adjuvant chemotherapy: 22 (14) / 21 (14) / 43 (14)
IRI-S S-1 plus irinotecan, ECOG Eastern Cooperative Oncology Group
[SUBTITLE] Treatments given [SUBSECTION] The median number of treatment courses was three (range 1–19) for S-1 monotherapy (6-week courses) and four (range 1–25) for IRI-S (5-week courses). The main reasons for treatment discontinuation were disease progression [S-1 monotherapy vs. IRI-S, 116/160 (72.5%) vs. 89/155 (57.4%)], adverse events [12/160 (7.5%) vs. 23/155 (14.8%)], attending physician’s decision [18/160 (11.3%) vs. 18/155 (11.6%)], and consent withdrawal [11/160 (6.9%) vs. 17/155 (11.0%)]. The median TTF was 3.6 months (95% CI 2.9–4.1) and 4.5 months (95% CI 3.7–5.3), respectively (P = 0.157). The relative dose intensity was 88.9% for S-1 monotherapy, versus 90.0% for S-1 and 86.2% for irinotecan among those treated with IRI-S. Most patients in both groups received the scheduled dose of chemotherapy. Second-line chemotherapy was administered to 240 patients (76%; S-1 monotherapy, n = 112; IRI-S, n = 128) (Table 2). The most common second-line therapy in both groups was a taxane alone (S-1 monotherapy, 26.9%; IRI-S, 40.6%). Among patients initially treated with S-1, 13 received crossover treatment with IRI-S, while 31 patients originally treated with IRI-S received second-line S-1 monotherapy.
Table 2 Second-line chemotherapy [S-1 (n = 160) n (%) / IRI-S (n = 155) n (%)]
IRI-S: 13 (8.1) / –
Irinotecan-based regimen (irinotecan/cisplatin, irinotecan/taxane): 27 (16.9) / 4 (2.6)
S-1 alone: – / 31 (20.0)
S-1-based regimen (S-1/cisplatin, S-1/taxane): 9 (5.6) / 11 (7.1)
Taxane alone: 43 (26.9) / 63 (40.6)
Others: 20 (12.5) / 19 (12.3)
None: 48 (30.0) / 27 (17.4)
IRI-S S-1 plus irinotecan
[SUBTITLE] Response and survival [SUBSECTION] The overall response rate was determined in 187 patients evaluable by the RECIST, and was significantly higher with IRI-S than with S-1 monotherapy (39/94, 41.5% vs. 25/93, 26.9%; P = 0.035) (Table 3). The MST at the predetermined cut-off date was 12.8 months with IRI-S compared with 10.5 months with S-1 monotherapy (HR 0.856, P = 0.233) (Fig. 2), but the difference was not statistically significant. The 1-year survival rates were 44.9% (95% CI 37.2–52.6%) with S-1 monotherapy and 52.0% (95% CI 44.1–59.9%) with IRI-S, while the 2-year survival rates were 19.5% (95% CI 12.6–26.4%) and 18.0% (95% CI 11.2–24.8%), respectively. MST was additionally calculated as an exploratory analysis after 2.5 years of follow-up, but the result was identical to the initial analysis at 12.8 months for IRI-S and at 10.5 months for S-1 monotherapy (HR 0.927; log-rank test P = 0.536). Again, the difference was not statistically significant.
Table 3 Response to treatment [S-1 (n = 93) n (%) / IRI-S (n = 94) n (%)]
Complete response: 0 (0) / 0 (0)
Partial response: 25 (27) / 39 (41)
Stable disease: 35 (38) / 40 (43)
Progressive disease: 30 (32) / 12 (13)
Not assessable: 3 (3) / 3 (3)
Overall response rate: 26.9% (95% CI 18.2–37.1) / 41.5% (95% CI 31.4–52.1); *P = 0.035 (χ2 test)
CI confidence interval
Fig. 2 Kaplan–Meier estimates of overall survival (a) and time to treatment failure (b) for 315 evaluable patients treated with S-1 monotherapy or S-1 plus irinotecan (IRI-S). MST Median survival time, TTF time to treatment failure, CI confidence interval
[SUBTITLE] Prognostic factors of all patients and factors that favored treatment with IRI-S [SUBSECTION] Baseline risk factors with a significant influence on the overall survival of all patients accrued (P < 0.05) were performance status (HR 1.348, 95% CI 1.079–1.686, Wald test P = 0.009), tumor histology (HR 1.720, 95% CI 1.161–2.548, P = 0.007), target lesion (HR 1.525, 95% CI 1.164–1.999, P = 0.002), and surgery for the primary tumor (HR 0.698, 95% CI 0.538–0.906, P = 0.007). Stratified analysis according to baseline patient characteristics (Fig. 3) showed that IRI-S was significantly more effective than S-1 monotherapy for patients with diffuse-type histology (HR 0.632, 95% CI 0.454–0.880) and for those with an ECOG performance status of 1 or 2 (HR 0.614, 95% CI 0.401–0.940). No differences were observed for the other factors assessed.
Fig. 3 Subset analysis of overall survival stratified by baseline patient characteristics. CI Confidence interval, ECOG Eastern Cooperative Oncology Group, RECIST Response Evaluation Criteria in Solid Tumors
[SUBTITLE] Safety [SUBSECTION] Adverse events that occurred in each group are listed in Table 4. The incidence of major hematological toxicities was higher with IRI-S than with S-1 monotherapy. Grade 3 or 4 neutropenia was observed in 10.6% of patients treated with S-1 monotherapy versus 27.1% of patients treated with IRI-S, while the corresponding incidences of infection/febrile neutropenia were 3.8 versus 1.9%. The most common grade 3 or 4 non-hematological toxicities were diarrhea (S-1 monotherapy vs. IRI-S, 5.6 vs. 16.1%), anorexia (18.8 vs. 17.4%), nausea (5.6 vs. 7.1%), and vomiting (1.9 vs. 3.2%). Hand-foot skin reaction, a characteristic adverse event associated with some oral fluoropyrimidines, was confined to grade 2 or less and was observed in only 4.4 and 5.2% of patients treated with S-1 monotherapy and IRI-S, respectively. There were no treatment-related deaths among patients treated with S-1 monotherapy, whereas two patients in the IRI-S group died of potentially treatment-related conditions (severe bone marrow dysfunction, and multiple organ failure that was probably associated with multiple duodenal ulcers).
Table 4 Summary of adverse events [S-1 (n = 160): all grades n (%), grade 3/4 n (%) / IRI-S (n = 155): all grades n (%), grade 3/4 n (%)]
Anemia: 83 (51.9), 19 (11.5) / 113 (72.9), 24 (15.5)
Leukopenia: 83 (51.9), 5 (3.1) / 115 (74.2), 18 (11.6)
Neutropenia: 86 (53.8), 17 (10.6) / 113 (72.9), 42 (27.1)
Infection/febrile neutropenia: 28 (17.5), 6 (3.8) / 40 (25.8), 3 (1.9)
Thrombocytopenia: 18 (11.3), 6 (3.8) / 17 (11.0), 2 (1.3)
Increased AST: 75 (46.9), 8 (5.0) / 69 (44.5), 5 (3.2)
Increased ALT: 58 (36.3), 3 (1.9) / 69 (44.5), 3 (1.9)
Increased bilirubin: 74 (46.3), 9 (5.6) / 56 (36.1), 5 (3.2)
Increased creatinine: 17 (10.6), 2 (1.3) / 19 (12.3), 3 (1.9)
Fatigue: 101 (63.1), 12 (7.5) / 123 (79.4), 10 (6.5)
Alopecia: 13 (8.1), 0 (0.0) / 87 (56.1), 0 (0.0)
Anorexia: 104 (65.0), 30 (18.8) / 125 (80.6), 27 (17.4)
Diarrhea: 63 (39.4), 9 (5.6) / 103 (66.5), 25 (16.1)
Nausea: 84 (52.5), 9 (5.6) / 115 (74.2), 11 (7.1)
Vomiting: 60 (37.5), 3 (1.9) / 68 (43.9), 5 (3.2)
Stomatitis/pharyngitis: 27 (16.9), 2 (1.3) / 34 (21.9), 4 (2.6)
Hand-foot skin reaction: 7 (4.4), 0 (0.0) / 8 (5.2), 0 (0.0)
Pigmentation changes: 74 (46.3), 0 (0.0) / 77 (49.7), 0 (0.0)
Adverse events were graded according to National Cancer Institute Common Toxicity Criteria, version 2.0. ALT alanine aminotransferase, AST aspartate aminotransferase, IRI-S S-1 plus irinotecan
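The response-rate comparison above (25/93 vs. 39/94 responders, P = 0.035 by χ2 test) can be checked with a short sketch. It assumes a Pearson χ2 test on the 2 × 2 table of responders versus non-responders without continuity correction, which reproduces the reported P value; this is an illustration, not the study's original analysis code.

from scipy.stats import chi2_contingency

# Rows: S-1 monotherapy, IRI-S; columns: responders (CR + PR), non-responders
table = [[25, 93 - 25],
         [39, 94 - 39]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")   # roughly chi2 = 4.43, p = 0.035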
null
null
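For context on the survival comparison above, the number of deaths a log-rank test needs can be approximated with Schoenfeld's formula. The sketch below assumes the design parameters quoted in this record's statistical analysis section (detection of a 40% improvement in MST, which corresponds to a hazard ratio of 1/1.4 under an exponential, proportional-hazards assumption, with two-sided alpha = 0.05, 80% power, and 1:1 allocation); the trial's own calculation was performed with nQuery Advisor and may rest on additional assumptions, so this figure is only approximate.

from math import log
from scipy.stats import norm

hr = 1 / 1.4                     # 40% longer median survival <=> HR of about 0.714 (exponential assumption)
alpha, power = 0.05, 0.80
z_a = norm.ppf(1 - alpha / 2)    # 1.96
z_b = norm.ppf(power)            # 0.84

# Schoenfeld approximation: required number of deaths for a 1:1 randomized log-rank comparison
events = 4 * (z_a + z_b) ** 2 / log(hr) ** 2
print(round(events))             # about 277 deaths; with near-complete follow-up this is
                                 # roughly 140 patients per arm, in the range of the reported 142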
[ "Patients and methods", "Eligibility", "Treatment schedule", "Assessment of response and toxicity", "Statistical analysis", "Patient characteristics", "Treatments given", "Response and survival", "Prognostic factors of all patients and factors that favored treatment with IRI-S", "Safety" ]
[ "[SUBTITLE] Eligibility [SUBSECTION] The eligibility criteria were histologically and cytologically confirmed unresectable or recurrent gastric adenocarcinoma; oral food intake possible; age between 20 and 75 years; no prior radiotherapy or chemotherapy; expected survival for ≥12 weeks; Eastern Cooperative Oncology Group (ECOG) performance status of 0–2; and adequate major organ function before chemotherapy (leukocyte count of 4,000–12,000/mm3, hemoglobin ≥ 8.0 g/dl, platelet count ≥ 100,000/mm3, total bilirubin ≤ 1.5 mg/dl, aspartate aminotransferase ≤ 100 IU/l, alanine aminotransferase ≤ 100 IU/l, creatinine ≤ 1.2 mg/dl). The main exclusion criteria were massive ascites, active concomitant malignancy, uncontrolled diabetes mellitus, and pregnancy or breast-feeding. Written informed consent was obtained from each patient. Institutional review board approval was obtained at each participating institution. An independent data monitoring committee evaluated safety throughout this study. The study was performed in accordance with the Declaration of Helsinki and Good Clinical Practice Guidelines. This trial was registered with the Japan Pharmaceutical Information Center (JapicCTI-050083).\nThe eligibility criteria were histologically and cytologically confirmed unresectable or recurrent gastric adenocarcinoma; oral food intake possible; age between 20 and 75 years; no prior radiotherapy or chemotherapy; expected survival for ≥12 weeks; Eastern Cooperative Oncology Group (ECOG) performance status of 0–2; and adequate major organ function before chemotherapy (leukocyte count of 4,000–12,000/mm3, hemoglobin ≥ 8.0 g/dl, platelet count ≥ 100,000/mm3, total bilirubin ≤ 1.5 mg/dl, aspartate aminotransferase ≤ 100 IU/l, alanine aminotransferase ≤ 100 IU/l, creatinine ≤ 1.2 mg/dl). The main exclusion criteria were massive ascites, active concomitant malignancy, uncontrolled diabetes mellitus, and pregnancy or breast-feeding. Written informed consent was obtained from each patient. Institutional review board approval was obtained at each participating institution. An independent data monitoring committee evaluated safety throughout this study. The study was performed in accordance with the Declaration of Helsinki and Good Clinical Practice Guidelines. This trial was registered with the Japan Pharmaceutical Information Center (JapicCTI-050083).\n[SUBTITLE] Treatment schedule [SUBSECTION] In the S-1 monotherapy group, patients received oral S-1 twice daily for 28 days every 6 weeks. In the IRI-S group, S-1 (80 mg/m2) was given orally for 21 days and irinotecan (80 mg/m2) was infused intravenously on days 1 and 15 every 5 weeks. In both groups, the dose of S-1 was based on body surface area: 40 mg if the area was <1.25 m2; 50 mg for 1.25–1.5 m2, and 60 mg for ≥1.5 m2. Dose modification criteria were defined in the protocol. Treatment was discontinued if there was documented disease progression, unacceptable toxicity, or withdrawal of consent.\nIn the S-1 monotherapy group, patients received oral S-1 twice daily for 28 days every 6 weeks. In the IRI-S group, S-1 (80 mg/m2) was given orally for 21 days and irinotecan (80 mg/m2) was infused intravenously on days 1 and 15 every 5 weeks. In both groups, the dose of S-1 was based on body surface area: 40 mg if the area was <1.25 m2; 50 mg for 1.25–1.5 m2, and 60 mg for ≥1.5 m2. Dose modification criteria were defined in the protocol. 
Treatment was discontinued if there was documented disease progression, unacceptable toxicity, or withdrawal of consent.\n[SUBTITLE] Assessment of response and toxicity [SUBSECTION] All patients who had at least one measurable lesion were evaluated for tumor response according to the Response Evaluation Criteria in Solid Tumors (RECIST) [22]. All radiologic assessments were confirmed by extramural review. Toxicity was evaluated according to the National Cancer Institute Common Toxicity Criteria (version 2.0).\nAll patients who had at least one measurable lesion were evaluated for tumor response according to the Response Evaluation Criteria in Solid Tumors (RECIST) [22]. All radiologic assessments were confirmed by extramural review. Toxicity was evaluated according to the National Cancer Institute Common Toxicity Criteria (version 2.0).\n[SUBTITLE] Statistical analysis [SUBSECTION] Eligible patients were registered with the data center and randomized by centralized dynamic allocation with stratification for advanced/recurrent disease (with or without adjuvant chemotherapy), performance status (0/1/2), and institution. The full analysis set was defined as all patients who received treatment at least once and met all inclusion criteria. The per-protocol set was defined as all patients who received treatment at least once and had no major protocol violations.\nThe primary endpoint was overall survival, which was compared between groups using the stratified log-rank test. Secondary endpoints were the time to treatment failure (TTF), the 1- and 2-year survival rates, the response rate, and safety. Overall survival time was defined as the interval from the date of registration to the date of death (patients who remained alive at the final follow-up were censored at that time). Survival curves were estimated by the Kaplan–Meier method, and differences were analyzed with the stratified log-rank test. Hazard ratios (HRs) for various prognostic factors were calculated using a stratified Cox proportional hazards model. TTF was defined as the time from the date of registration to the date of detection of progressive disease, death, or treatment discontinuation.\nIn addition, subset analyses were conducted, using the Cox proportional hazards model, to identify factors that influenced overall survival in each group. As well as the predetermined variables such as gender, age, performance status, and disease status (whether the disease was unresectable or recurrent), subset analyses were conducted for 6 additional variables; the presence or absence of a measurable lesion by the RECIST, hepatic metastasis, peritoneal metastasis, existent of primary focus, metastasis the number of metastatic foci, and tumor histology. All analyses were performed using SAS system version 8.2 (SAS Institute, Cary, NC, USA).\nThis study was designed to detect a 40% improvement in MST at a two-tailed significance level of P ≤ 0.05 with 80% power. The MST for S-1 monotherapy was assumed to be 8.5 months, based on the results of previous phase I/II studies [12, 23]. A total of 142 patients per group were required according to calculations made with nQuery Advisor version 4.0 (Statistical Solutions, Boston, MA, USA), and the sample size was set as 300 (150 patients per group).\nWe initially planned to continue follow-up for ≥1.5 years after the registration of all patients, with a cut-off date of April 2007. 
However, an unexpectedly high survival rate of 22% (68 of 315 patients) at the cut-off date prompted the Coordinating Committee, the medical expert, and the biostatistician to advise the sponsor to continue follow-up for a further year before performing the final analysis. Thus, the MST was also calculated using 2.5-year follow-up data.\nEligible patients were registered with the data center and randomized by centralized dynamic allocation with stratification for advanced/recurrent disease (with or without adjuvant chemotherapy), performance status (0/1/2), and institution. The full analysis set was defined as all patients who received treatment at least once and met all inclusion criteria. The per-protocol set was defined as all patients who received treatment at least once and had no major protocol violations.\nThe primary endpoint was overall survival, which was compared between groups using the stratified log-rank test. Secondary endpoints were the time to treatment failure (TTF), the 1- and 2-year survival rates, the response rate, and safety. Overall survival time was defined as the interval from the date of registration to the date of death (patients who remained alive at the final follow-up were censored at that time). Survival curves were estimated by the Kaplan–Meier method, and differences were analyzed with the stratified log-rank test. Hazard ratios (HRs) for various prognostic factors were calculated using a stratified Cox proportional hazards model. TTF was defined as the time from the date of registration to the date of detection of progressive disease, death, or treatment discontinuation.\nIn addition, subset analyses were conducted, using the Cox proportional hazards model, to identify factors that influenced overall survival in each group. As well as the predetermined variables such as gender, age, performance status, and disease status (whether the disease was unresectable or recurrent), subset analyses were conducted for 6 additional variables; the presence or absence of a measurable lesion by the RECIST, hepatic metastasis, peritoneal metastasis, existent of primary focus, metastasis the number of metastatic foci, and tumor histology. All analyses were performed using SAS system version 8.2 (SAS Institute, Cary, NC, USA).\nThis study was designed to detect a 40% improvement in MST at a two-tailed significance level of P ≤ 0.05 with 80% power. The MST for S-1 monotherapy was assumed to be 8.5 months, based on the results of previous phase I/II studies [12, 23]. A total of 142 patients per group were required according to calculations made with nQuery Advisor version 4.0 (Statistical Solutions, Boston, MA, USA), and the sample size was set as 300 (150 patients per group).\nWe initially planned to continue follow-up for ≥1.5 years after the registration of all patients, with a cut-off date of April 2007. However, an unexpectedly high survival rate of 22% (68 of 315 patients) at the cut-off date prompted the Coordinating Committee, the medical expert, and the biostatistician to advise the sponsor to continue follow-up for a further year before performing the final analysis. 
Thus, the MST was also calculated using 2.5-year follow-up data.", "The eligibility criteria were histologically and cytologically confirmed unresectable or recurrent gastric adenocarcinoma; oral food intake possible; age between 20 and 75 years; no prior radiotherapy or chemotherapy; expected survival for ≥12 weeks; Eastern Cooperative Oncology Group (ECOG) performance status of 0–2; and adequate major organ function before chemotherapy (leukocyte count of 4,000–12,000/mm3, hemoglobin ≥ 8.0 g/dl, platelet count ≥ 100,000/mm3, total bilirubin ≤ 1.5 mg/dl, aspartate aminotransferase ≤ 100 IU/l, alanine aminotransferase ≤ 100 IU/l, creatinine ≤ 1.2 mg/dl). The main exclusion criteria were massive ascites, active concomitant malignancy, uncontrolled diabetes mellitus, and pregnancy or breast-feeding. Written informed consent was obtained from each patient. Institutional review board approval was obtained at each participating institution. An independent data monitoring committee evaluated safety throughout this study. The study was performed in accordance with the Declaration of Helsinki and Good Clinical Practice Guidelines. This trial was registered with the Japan Pharmaceutical Information Center (JapicCTI-050083).", "In the S-1 monotherapy group, patients received oral S-1 twice daily for 28 days every 6 weeks. In the IRI-S group, S-1 (80 mg/m2) was given orally for 21 days and irinotecan (80 mg/m2) was infused intravenously on days 1 and 15 every 5 weeks. In both groups, the dose of S-1 was based on body surface area: 40 mg if the area was <1.25 m2; 50 mg for 1.25–1.5 m2, and 60 mg for ≥1.5 m2. Dose modification criteria were defined in the protocol. Treatment was discontinued if there was documented disease progression, unacceptable toxicity, or withdrawal of consent.", "All patients who had at least one measurable lesion were evaluated for tumor response according to the Response Evaluation Criteria in Solid Tumors (RECIST) [22]. All radiologic assessments were confirmed by extramural review. Toxicity was evaluated according to the National Cancer Institute Common Toxicity Criteria (version 2.0).", "Eligible patients were registered with the data center and randomized by centralized dynamic allocation with stratification for advanced/recurrent disease (with or without adjuvant chemotherapy), performance status (0/1/2), and institution. The full analysis set was defined as all patients who received treatment at least once and met all inclusion criteria. The per-protocol set was defined as all patients who received treatment at least once and had no major protocol violations.\nThe primary endpoint was overall survival, which was compared between groups using the stratified log-rank test. Secondary endpoints were the time to treatment failure (TTF), the 1- and 2-year survival rates, the response rate, and safety. Overall survival time was defined as the interval from the date of registration to the date of death (patients who remained alive at the final follow-up were censored at that time). Survival curves were estimated by the Kaplan–Meier method, and differences were analyzed with the stratified log-rank test. Hazard ratios (HRs) for various prognostic factors were calculated using a stratified Cox proportional hazards model. 
TTF was defined as the time from the date of registration to the date of detection of progressive disease, death, or treatment discontinuation.\nIn addition, subset analyses were conducted, using the Cox proportional hazards model, to identify factors that influenced overall survival in each group. As well as the predetermined variables such as gender, age, performance status, and disease status (whether the disease was unresectable or recurrent), subset analyses were conducted for 6 additional variables; the presence or absence of a measurable lesion by the RECIST, hepatic metastasis, peritoneal metastasis, existent of primary focus, metastasis the number of metastatic foci, and tumor histology. All analyses were performed using SAS system version 8.2 (SAS Institute, Cary, NC, USA).\nThis study was designed to detect a 40% improvement in MST at a two-tailed significance level of P ≤ 0.05 with 80% power. The MST for S-1 monotherapy was assumed to be 8.5 months, based on the results of previous phase I/II studies [12, 23]. A total of 142 patients per group were required according to calculations made with nQuery Advisor version 4.0 (Statistical Solutions, Boston, MA, USA), and the sample size was set as 300 (150 patients per group).\nWe initially planned to continue follow-up for ≥1.5 years after the registration of all patients, with a cut-off date of April 2007. However, an unexpectedly high survival rate of 22% (68 of 315 patients) at the cut-off date prompted the Coordinating Committee, the medical expert, and the biostatistician to advise the sponsor to continue follow-up for a further year before performing the final analysis. Thus, the MST was also calculated using 2.5-year follow-up data.", "Between June 2004 and November 2005, a total of 326 patients (S-1 monotherapy, n = 162; IRI-S, n = 164) were enrolled from 54 institutions and randomized (Fig. 1). Seven patients were subsequently found to be ineligible or withdrew before receiving any treatment. Another 4 patients were found to be ineligible after starting treatment and were not included in the analysis. Therefore, 315 patients (S-1 monotherapy, n = 160; IRI-S, n = 155) were evaluable and were included in the full analysis set to assess overall survival and TTF. In addition, 187 patients were evaluable for tumor response. Baseline patient characteristics are shown in Table 1.Fig. 1Patient disposition. FAS Full analysis set, IRI-S S-1 plus irinotecan, PPS per-protocol set, TTF time to treatment failure\nTable 1Baseline characteristics and prior therapyCharacteristicTreatmentS-1IRI-STotal\nn\n%\nn\n%\nn\n%Patients randomized162164326Patients receiving at least one dose of study medication (full analysis set)160155315Sex Male127791107123775 Female332145297825Age (years) Median636363 Range27–7533–7527–75ECOG performance status 0109681026621167 1462948319430 25353103Tumor histology Intestinal7144613913242 Diffuse8855936018157 Other111121Resection of primary tumor +9358936018659 −6742624012941Advanced133831298326283Recurrent Adjuvant chemotherapy (+)5353103 Adjuvant chemotherapy (−)221421144314\nIRI-S S-1 plus irinotecan, ECOG Eastern Cooperative Oncology Group\n\nPatient disposition. 
FAS Full analysis set, IRI-S S-1 plus irinotecan, PPS per-protocol set, TTF time to treatment failure\nBaseline characteristics and prior therapy\n\nIRI-S S-1 plus irinotecan, ECOG Eastern Cooperative Oncology Group", "The median number of treatment courses was three (range 1–19) for S-1 monotherapy whose duration was 6 weeks, and four (range 1–25) for IRI-S whose duration was 5 weeks. The main reasons for treatment discontinuation were disease progression [S-1 monotherapy vs. IRI-S, 116/160 (72.5%) vs. 89/155 (57.4%)], adverse events [12/160 (7.5%) vs. 23/155 (14.8%)], attending physician’s decision [18/160 (11.3%) vs. 18/155 (11.6%)], and consent withdrawal [11/160 (6.9%) vs. 17/155 (11.0%)]. The median TTF was 3.6 months (95% CI 2.9–4.1) and 4.5 months (95% CI 3.7–5.3), respectively (P = 0.157). The relative dose intensity was 88.9% for S-1 monotherapy, versus 90.0% for S-1 and 86.2% for irinotecan among those treated with IRI-S. Most patients in both groups received the scheduled dose of chemotherapy.\nSecond-line chemotherapy was administered to 240 patients (76%; S-1 monotherapy, n = 112; IRI-S, n = 128) (Table 2). The most common second-line therapy in both groups was a taxane alone (S-1 monotherapy, 26.9%; IRI-S, 40.6%). Among patients initially treated with S-1, 13 received crossover treatment with IRI-S, while 31 patients originally treated with IRI-S received second-line S-1 monotherapy.Table 2Second-line chemotherapyRegimenS-1 (n = 160)IRI-S (n = 155)\nn\n%\nn\n%IRI-S138.1––Irinotecan-based regimena\n2716.942.6S-1 alone––3120.0S-1-based regimenb\n95.6117.1Taxane alone4326.96340.6Others2012.51912.3None4830.02717.4\nIRI-S S-1 plus irinotecan\naIrinotecan/cisplatin, irinotecan/taxane\nbS-1/cisplatin, S-1/taxane\n\nSecond-line chemotherapy\n\nIRI-S S-1 plus irinotecan\n\naIrinotecan/cisplatin, irinotecan/taxane\n\nbS-1/cisplatin, S-1/taxane", "The overall response rate was determined in 187 patients evaluable by the RECIST, and was significantly higher with IRI-S than with S-1 monotherapy (39/94, 41.5% vs. 25/93, 26.9%; P = 0.035) (Table 3).Table 3Response to treatmentS-1 (n = 93)IRI-S (n = 94)\nn\n%\nn\n%Complete response0000Partial response25273941Stable disease35384043Progressive disease30321213Not assessable3333Overall response rate26.941.5*95% CI18.2–37.131.4–52.1\nCI confidence interval*  P = 0.035 (χ2 test)\n\nResponse to treatment\n\nCI confidence interval\n*  P = 0.035 (χ2 test)\nThe MST at the predetermined cut-off date was 12.8 months with IRI-S compared with 10.5 months with S-1 monotherapy (HR 0.856, P = 0.233) (Fig. 2), but the difference was not statistically significant. The 1-year survival rates were 44.9% [95% CI 37.2–52.6%] with S-1 monotherapy and 52.0% (95% CI 44.1–59.9%) with IRI-S, while the 2-year survival rates were 19.5% (95% CI 12.6–26.4%) and 18.0% (95% CI 11.2–24.8%), respectively.Fig. 2Kaplan–Meier estimates of overall survival (a) and time to treatment failure (b) for 315 evaluable patients treated with S-1 monotherapy or S-1 plus irinotecan (IRI-S). MST Median survival time, TTF time to treatment failure, CI confidence interval\n\nKaplan–Meier estimates of overall survival (a) and time to treatment failure (b) for 315 evaluable patients treated with S-1 monotherapy or S-1 plus irinotecan (IRI-S). 
MST Median survival time, TTF time to treatment failure, CI confidence interval\nMST was additionally calculated as an exploratory analysis after 2.5 years of follow-up, but the result was identical to the initial analysis at 12.8 months for IRI-S and at 10.5 months for S-1 monotherapy (HR 0.927; log-rank test P = 0.536). Again, the difference was not statistically significant.", "Baseline risk factors with a significant influence on the overall survival of all patients accrued (P < 0.05) were performance status (HR 1.348, 95% CI 1.079–1.686, Wald test P = 0.009), tumor histology (HR 1.720, 95% CI 1.161–2.548, P = 0.007), target lesion (HR 1.525, 95% CI 1.164–1.999, P = 0.002), and surgery for the primary tumor (HR 0.698, 95% CI 0.538–0.906, P = 0.007).\nStratified analysis according to baseline patient characteristics (Fig. 3) showed that IRI-S was significantly more effective than S-1 monotherapy for patients with diffuse-type histology (HR 0.632, 95% CI 0.454–0.880) and for those with an ECOG performance status of 1 or 2 (HR 0.614, 95% CI 0.401–0.940). No differences were observed for the other factors assessed.Fig. 3Subset analysis of overall survival stratified by baseline patient characteristics. CI Confidence interval, ECOG Eastern Cooperative Oncology Group, RECIST Response Evaluation Criteria in Solid Tumors\n\nSubset analysis of overall survival stratified by baseline patient characteristics. CI Confidence interval, ECOG Eastern Cooperative Oncology Group, RECIST Response Evaluation Criteria in Solid Tumors", "Adverse events that occurred in each group are listed in Table 4. The incidence of major hematological toxicities was higher with IRI-S than with S-1 monotherapy. Grade 3 or 4 neutropenia was observed in 10.6% of patients treated with S-1 monotherapy versus 27.1% of patients treated with IRI-S, while the corresponding incidences of infection/febrile neutropenia were 3.8 versus 1.9%. The most common grade 3 or 4 non-hematological toxicities were diarrhea (S-1 monotherapy vs. IRI-S, 5.6 vs. 16.1%), anorexia (18.8 vs. 17.4%), nausea (5.6 vs. 7.1%), and vomiting (1.9 vs. 3.2%). Hand-foot skin reaction, a characteristic adverse event associated with some oral fluoropyrimidines, was confined to grade 2 or less and was observed in only 4.4 and 5.2% of patients treated with S-1 monotherapy and IRI-S, respectively. 
There were no treatment-related deaths among patients treated with S-1 monotherapy, whereas two patients in the IRI-S group died of potentially treatment-related conditions (severe bone marrow dysfunction, multiple organ failure that was probably associated with multiple duodenal ulcers).

Table 4 Summary of adverse events

Adverse event                    S-1 (n = 160)                IRI-S (n = 155)
                                 All events    Grade 3/4      All events    Grade 3/4
                                 n (%)         n (%)          n (%)         n (%)
Anemia                           83 (51.9)     19 (11.5)      113 (72.9)    24 (15.5)
Leukopenia                       83 (51.9)     5 (3.1)        115 (74.2)    18 (11.6)
Neutropenia                      86 (53.8)     17 (10.6)      113 (72.9)    42 (27.1)
Infection/febrile neutropenia    28 (17.5)     6 (3.8)        40 (25.8)     3 (1.9)
Thrombocytopenia                 18 (11.3)     6 (3.8)        17 (11.0)     2 (1.3)
Increased AST                    75 (46.9)     8 (5.0)        69 (44.5)     5 (3.2)
Increased ALT                    58 (36.3)     3 (1.9)        69 (44.5)     3 (1.9)
Increased bilirubin              74 (46.3)     9 (5.6)        56 (36.1)     5 (3.2)
Increased creatinine             17 (10.6)     2 (1.3)        19 (12.3)     3 (1.9)
Fatigue                          101 (63.1)    12 (7.5)       123 (79.4)    10 (6.5)
Alopecia                         13 (8.1)      0 (0.0)        87 (56.1)     0 (0.0)
Anorexia                         104 (65.0)    30 (18.8)      125 (80.6)    27 (17.4)
Diarrhea                         63 (39.4)     9 (5.6)        103 (66.5)    25 (16.1)
Nausea                           84 (52.5)     9 (5.6)        115 (74.2)    11 (7.1)
Vomiting                         60 (37.5)     3 (1.9)        68 (43.9)     5 (3.2)
Stomatitis/pharyngitis           27 (16.9)     2 (1.3)        34 (21.9)     4 (2.6)
Hand-foot skin reaction          7 (4.4)       0 (0.0)        8 (5.2)       0 (0.0)
Pigmentation changes             74 (46.3)     0 (0.0)        77 (49.7)     0 (0.0)

Adverse events were graded according to National Cancer Institute Common Toxicity Criteria, version 2.0\nALT alanine aminotransferase, AST aspartate aminotransferase, IRI-S S-1 plus irinotecan" ]
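The statistical-analysis passages above describe the trial's survival methodology: Kaplan–Meier estimation of overall survival and TTF, a stratified log-rank comparison of the two arms, and stratified Cox proportional hazards models for hazard ratios, all run in SAS 8.2. As an illustration only, the sketch below shows how the same estimators could be applied to hypothetical data in Python with the lifelines package; the data frame, the column names, and the use of performance status as the only stratification factor are assumptions made for the example, not trial data or the trial's exact SAS procedures.

```python
# Illustrative re-implementation of the trial's survival analyses on synthetic data.
# Assumes the `lifelines` package; the original analyses were run in SAS 8.2.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 315  # size of the full analysis set in the trial

# Hypothetical per-patient data: overall survival in months, death indicator,
# treatment arm (0 = S-1 monotherapy, 1 = IRI-S), and ECOG performance status.
df = pd.DataFrame({
    "os_months": rng.exponential(scale=15.0, size=n),
    "death": rng.integers(0, 2, size=n),   # 1 = died, 0 = censored at last follow-up
    "iri_s": rng.integers(0, 2, size=n),   # randomized arm
    "ps": rng.integers(0, 3, size=n),      # ECOG PS 0/1/2 (a stratification factor)
})

# Kaplan-Meier estimate and median survival time per arm.
for arm, label in [(0, "S-1"), (1, "IRI-S")]:
    sub = df[df["iri_s"] == arm]
    kmf = KaplanMeierFitter()
    kmf.fit(sub["os_months"], event_observed=sub["death"], label=label)
    print(label, "median OS (months):", kmf.median_survival_time_)

# Unstratified log-rank test between arms (the trial used a stratified version).
a, b = df[df["iri_s"] == 0], df[df["iri_s"] == 1]
res = logrank_test(a["os_months"], b["os_months"],
                   event_observed_A=a["death"], event_observed_B=b["death"])
print("log-rank p-value:", res.p_value)

# Cox proportional hazards model for the treatment effect, stratified by PS;
# exp(coef) for `iri_s` corresponds to the hazard ratio reported for the comparison.
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "iri_s", "ps"]],
        duration_col="os_months", event_col="death", strata=["ps"])
cph.print_summary()
```

The plain `logrank_test` helper shown here is unstratified; fitting a Cox model with `strata` and reading its score test is the usual way to approximate the stratified log-rank comparison described in the protocol.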
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Patients and methods", "Eligibility", "Treatment schedule", "Assessment of response and toxicity", "Statistical analysis", "Results", "Patient characteristics", "Treatments given", "Response and survival", "Prognostic factors of all patients and factors that favored treatment with IRI-S", "Safety", "Discussion" ]
[ "Gastric cancer is the second leading cause of cancer-related deaths after lung cancer in Japan, and it was responsible for approximately 50,000 deaths in 2005 [1]. While surgery and appropriate adjuvant chemotherapy have resulted in superior stage-by-stage survival when compared with that in other parts of the world [2], the prognosis of unresectable or recurrent gastric cancer remains dismal. The development of more effective chemotherapeutic regimens is therefore warranted.\nIn Western countries where a combination of 5-fluorouracil (5-FU) and cisplatin (CDDP) [3] has served as a reference arm in several phase III studies [4–6], triplets employing epirubicin [7] or docetaxel [5] in addition to this combination are the current standards, with modifications such as the replacement of CDDP with oxaliplatin and the replacement of infusional 5-FU with oral agents such as capecitabine [8]. Failure with the first-line treatment usually denotes the termination of chemotherapy, and second-line treatments are rarely considered outside of clinical trials. In Japan, where a phase III study (JCOG9205) failed to show superiority of a 5-FU/CDDP combination over 5-FU alone [9], the 5-FU monotherapy remained a standard of care, and other cytotoxic agents were usually delivered sequentially as second-line and third-line therapies rather than concurrently as combination therapy. With this strategy, the median survival time (MST) of patients with advanced gastric cancer whose treatment started with infusional 5-FU alone actually reached 10.8 months [9].\nIn the 1990s, S-1 (TS-1; Taiho Pharmaceutical, Tokyo, Japan), an oral derivative of 5-FU, was developed for the treatment of gastric cancer [10–12]. With an exceptionally high response rate of 46% as a single agent, this drug rapidly established itself as a community standard in Japan and was used widely in clinical practice. Phase III trials eventually proved the non-inferiority of S-1 when compared with infusional 5-FU in the advanced/metastatic setting [13], along with the superiority of S-1 monotherapy over observation alone in the postoperative adjuvant setting [14]. In addition, S-1 was found to be a unique cytotoxic drug, in that Japanese patients tolerated higher doses than Western patients, due to differences in the gene polymorphism of relevant enzymes [15]. Thus, the development of novel chemotherapeutic regimens in Japan during the 2000s has inevitably centered around this drug.\nThe establishment of doublets to enhance response rates and improve on survival was the next important step, and several phase I/II studies were performed to explore combinations of S-1 with other cytotoxic drugs such as CDDP [16], docetaxel [17], paclitaxel [18], and irinotecan (Yakult Honsha, Tokyo, Japan; Daiichi Sankyo, Tokyo, Japan) [19]. All these combinations were found to be promising, with response rates of around 50% and relatively favorable safety profiles. A series of phase III trials comparing these doublets with S-1 monotherapy were subsequently planned and conducted to seek optimal first-line treatments. Of these, a phase III trial to explore S-1/CDDP was the first to complete accrual, and a significant improvement in MST of this combination over S-1 monotherapy was proven [20]. The present study, entitled GC0301/TOP-002, represents another of these attempts, exploring the efficacy of a combination of S-1 and irinotecan (IRI-S). 
The dose and schedule for this combination had been established by a phase I trial [21], and treatment at the recommended dose has shown a response rate of 47.8% [95% confidence interval (CI) 27.4–68.2%] with an MST of 394 days in a phase II study [19]. Given these earlier results and the synergistic effect of irinotecan and 5-FU observed in preclinical studies, the results of this present trial have been eagerly awaited.", "[SUBTITLE] Eligibility [SUBSECTION] The eligibility criteria were histologically and cytologically confirmed unresectable or recurrent gastric adenocarcinoma; oral food intake possible; age between 20 and 75 years; no prior radiotherapy or chemotherapy; expected survival for ≥12 weeks; Eastern Cooperative Oncology Group (ECOG) performance status of 0–2; and adequate major organ function before chemotherapy (leukocyte count of 4,000–12,000/mm3, hemoglobin ≥ 8.0 g/dl, platelet count ≥ 100,000/mm3, total bilirubin ≤ 1.5 mg/dl, aspartate aminotransferase ≤ 100 IU/l, alanine aminotransferase ≤ 100 IU/l, creatinine ≤ 1.2 mg/dl). The main exclusion criteria were massive ascites, active concomitant malignancy, uncontrolled diabetes mellitus, and pregnancy or breast-feeding. Written informed consent was obtained from each patient. Institutional review board approval was obtained at each participating institution. An independent data monitoring committee evaluated safety throughout this study. The study was performed in accordance with the Declaration of Helsinki and Good Clinical Practice Guidelines. This trial was registered with the Japan Pharmaceutical Information Center (JapicCTI-050083).\nThe eligibility criteria were histologically and cytologically confirmed unresectable or recurrent gastric adenocarcinoma; oral food intake possible; age between 20 and 75 years; no prior radiotherapy or chemotherapy; expected survival for ≥12 weeks; Eastern Cooperative Oncology Group (ECOG) performance status of 0–2; and adequate major organ function before chemotherapy (leukocyte count of 4,000–12,000/mm3, hemoglobin ≥ 8.0 g/dl, platelet count ≥ 100,000/mm3, total bilirubin ≤ 1.5 mg/dl, aspartate aminotransferase ≤ 100 IU/l, alanine aminotransferase ≤ 100 IU/l, creatinine ≤ 1.2 mg/dl). The main exclusion criteria were massive ascites, active concomitant malignancy, uncontrolled diabetes mellitus, and pregnancy or breast-feeding. Written informed consent was obtained from each patient. Institutional review board approval was obtained at each participating institution. An independent data monitoring committee evaluated safety throughout this study. The study was performed in accordance with the Declaration of Helsinki and Good Clinical Practice Guidelines. This trial was registered with the Japan Pharmaceutical Information Center (JapicCTI-050083).\n[SUBTITLE] Treatment schedule [SUBSECTION] In the S-1 monotherapy group, patients received oral S-1 twice daily for 28 days every 6 weeks. In the IRI-S group, S-1 (80 mg/m2) was given orally for 21 days and irinotecan (80 mg/m2) was infused intravenously on days 1 and 15 every 5 weeks. In both groups, the dose of S-1 was based on body surface area: 40 mg if the area was <1.25 m2; 50 mg for 1.25–1.5 m2, and 60 mg for ≥1.5 m2. Dose modification criteria were defined in the protocol. Treatment was discontinued if there was documented disease progression, unacceptable toxicity, or withdrawal of consent.\nIn the S-1 monotherapy group, patients received oral S-1 twice daily for 28 days every 6 weeks. 
In the IRI-S group, S-1 (80 mg/m2) was given orally for 21 days and irinotecan (80 mg/m2) was infused intravenously on days 1 and 15 every 5 weeks. In both groups, the dose of S-1 was based on body surface area: 40 mg if the area was <1.25 m2; 50 mg for 1.25–1.5 m2, and 60 mg for ≥1.5 m2. Dose modification criteria were defined in the protocol. Treatment was discontinued if there was documented disease progression, unacceptable toxicity, or withdrawal of consent.\n[SUBTITLE] Assessment of response and toxicity [SUBSECTION] All patients who had at least one measurable lesion were evaluated for tumor response according to the Response Evaluation Criteria in Solid Tumors (RECIST) [22]. All radiologic assessments were confirmed by extramural review. Toxicity was evaluated according to the National Cancer Institute Common Toxicity Criteria (version 2.0).\nAll patients who had at least one measurable lesion were evaluated for tumor response according to the Response Evaluation Criteria in Solid Tumors (RECIST) [22]. All radiologic assessments were confirmed by extramural review. Toxicity was evaluated according to the National Cancer Institute Common Toxicity Criteria (version 2.0).\n[SUBTITLE] Statistical analysis [SUBSECTION] Eligible patients were registered with the data center and randomized by centralized dynamic allocation with stratification for advanced/recurrent disease (with or without adjuvant chemotherapy), performance status (0/1/2), and institution. The full analysis set was defined as all patients who received treatment at least once and met all inclusion criteria. The per-protocol set was defined as all patients who received treatment at least once and had no major protocol violations.\nThe primary endpoint was overall survival, which was compared between groups using the stratified log-rank test. Secondary endpoints were the time to treatment failure (TTF), the 1- and 2-year survival rates, the response rate, and safety. Overall survival time was defined as the interval from the date of registration to the date of death (patients who remained alive at the final follow-up were censored at that time). Survival curves were estimated by the Kaplan–Meier method, and differences were analyzed with the stratified log-rank test. Hazard ratios (HRs) for various prognostic factors were calculated using a stratified Cox proportional hazards model. TTF was defined as the time from the date of registration to the date of detection of progressive disease, death, or treatment discontinuation.\nIn addition, subset analyses were conducted, using the Cox proportional hazards model, to identify factors that influenced overall survival in each group. As well as the predetermined variables such as gender, age, performance status, and disease status (whether the disease was unresectable or recurrent), subset analyses were conducted for 6 additional variables; the presence or absence of a measurable lesion by the RECIST, hepatic metastasis, peritoneal metastasis, existent of primary focus, metastasis the number of metastatic foci, and tumor histology. All analyses were performed using SAS system version 8.2 (SAS Institute, Cary, NC, USA).\nThis study was designed to detect a 40% improvement in MST at a two-tailed significance level of P ≤ 0.05 with 80% power. The MST for S-1 monotherapy was assumed to be 8.5 months, based on the results of previous phase I/II studies [12, 23]. 
A total of 142 patients per group were required according to calculations made with nQuery Advisor version 4.0 (Statistical Solutions, Boston, MA, USA), and the sample size was set as 300 (150 patients per group).\nWe initially planned to continue follow-up for ≥1.5 years after the registration of all patients, with a cut-off date of April 2007. However, an unexpectedly high survival rate of 22% (68 of 315 patients) at the cut-off date prompted the Coordinating Committee, the medical expert, and the biostatistician to advise the sponsor to continue follow-up for a further year before performing the final analysis. Thus, the MST was also calculated using 2.5-year follow-up data.\nEligible patients were registered with the data center and randomized by centralized dynamic allocation with stratification for advanced/recurrent disease (with or without adjuvant chemotherapy), performance status (0/1/2), and institution. The full analysis set was defined as all patients who received treatment at least once and met all inclusion criteria. The per-protocol set was defined as all patients who received treatment at least once and had no major protocol violations.\nThe primary endpoint was overall survival, which was compared between groups using the stratified log-rank test. Secondary endpoints were the time to treatment failure (TTF), the 1- and 2-year survival rates, the response rate, and safety. Overall survival time was defined as the interval from the date of registration to the date of death (patients who remained alive at the final follow-up were censored at that time). Survival curves were estimated by the Kaplan–Meier method, and differences were analyzed with the stratified log-rank test. Hazard ratios (HRs) for various prognostic factors were calculated using a stratified Cox proportional hazards model. TTF was defined as the time from the date of registration to the date of detection of progressive disease, death, or treatment discontinuation.\nIn addition, subset analyses were conducted, using the Cox proportional hazards model, to identify factors that influenced overall survival in each group. As well as the predetermined variables such as gender, age, performance status, and disease status (whether the disease was unresectable or recurrent), subset analyses were conducted for 6 additional variables; the presence or absence of a measurable lesion by the RECIST, hepatic metastasis, peritoneal metastasis, existent of primary focus, metastasis the number of metastatic foci, and tumor histology. All analyses were performed using SAS system version 8.2 (SAS Institute, Cary, NC, USA).\nThis study was designed to detect a 40% improvement in MST at a two-tailed significance level of P ≤ 0.05 with 80% power. The MST for S-1 monotherapy was assumed to be 8.5 months, based on the results of previous phase I/II studies [12, 23]. A total of 142 patients per group were required according to calculations made with nQuery Advisor version 4.0 (Statistical Solutions, Boston, MA, USA), and the sample size was set as 300 (150 patients per group).\nWe initially planned to continue follow-up for ≥1.5 years after the registration of all patients, with a cut-off date of April 2007. However, an unexpectedly high survival rate of 22% (68 of 315 patients) at the cut-off date prompted the Coordinating Committee, the medical expert, and the biostatistician to advise the sponsor to continue follow-up for a further year before performing the final analysis. 
Thus, the MST was also calculated using 2.5-year follow-up data.", "The eligibility criteria were histologically and cytologically confirmed unresectable or recurrent gastric adenocarcinoma; oral food intake possible; age between 20 and 75 years; no prior radiotherapy or chemotherapy; expected survival for ≥12 weeks; Eastern Cooperative Oncology Group (ECOG) performance status of 0–2; and adequate major organ function before chemotherapy (leukocyte count of 4,000–12,000/mm3, hemoglobin ≥ 8.0 g/dl, platelet count ≥ 100,000/mm3, total bilirubin ≤ 1.5 mg/dl, aspartate aminotransferase ≤ 100 IU/l, alanine aminotransferase ≤ 100 IU/l, creatinine ≤ 1.2 mg/dl). The main exclusion criteria were massive ascites, active concomitant malignancy, uncontrolled diabetes mellitus, and pregnancy or breast-feeding. Written informed consent was obtained from each patient. Institutional review board approval was obtained at each participating institution. An independent data monitoring committee evaluated safety throughout this study. The study was performed in accordance with the Declaration of Helsinki and Good Clinical Practice Guidelines. This trial was registered with the Japan Pharmaceutical Information Center (JapicCTI-050083).", "In the S-1 monotherapy group, patients received oral S-1 twice daily for 28 days every 6 weeks. In the IRI-S group, S-1 (80 mg/m2) was given orally for 21 days and irinotecan (80 mg/m2) was infused intravenously on days 1 and 15 every 5 weeks. In both groups, the dose of S-1 was based on body surface area: 40 mg if the area was <1.25 m2; 50 mg for 1.25–1.5 m2, and 60 mg for ≥1.5 m2. Dose modification criteria were defined in the protocol. Treatment was discontinued if there was documented disease progression, unacceptable toxicity, or withdrawal of consent.", "All patients who had at least one measurable lesion were evaluated for tumor response according to the Response Evaluation Criteria in Solid Tumors (RECIST) [22]. All radiologic assessments were confirmed by extramural review. Toxicity was evaluated according to the National Cancer Institute Common Toxicity Criteria (version 2.0).", "Eligible patients were registered with the data center and randomized by centralized dynamic allocation with stratification for advanced/recurrent disease (with or without adjuvant chemotherapy), performance status (0/1/2), and institution. The full analysis set was defined as all patients who received treatment at least once and met all inclusion criteria. The per-protocol set was defined as all patients who received treatment at least once and had no major protocol violations.\nThe primary endpoint was overall survival, which was compared between groups using the stratified log-rank test. Secondary endpoints were the time to treatment failure (TTF), the 1- and 2-year survival rates, the response rate, and safety. Overall survival time was defined as the interval from the date of registration to the date of death (patients who remained alive at the final follow-up were censored at that time). Survival curves were estimated by the Kaplan–Meier method, and differences were analyzed with the stratified log-rank test. Hazard ratios (HRs) for various prognostic factors were calculated using a stratified Cox proportional hazards model. 
TTF was defined as the time from the date of registration to the date of detection of progressive disease, death, or treatment discontinuation.\nIn addition, subset analyses were conducted, using the Cox proportional hazards model, to identify factors that influenced overall survival in each group. As well as the predetermined variables such as gender, age, performance status, and disease status (whether the disease was unresectable or recurrent), subset analyses were conducted for 6 additional variables; the presence or absence of a measurable lesion by the RECIST, hepatic metastasis, peritoneal metastasis, existent of primary focus, metastasis the number of metastatic foci, and tumor histology. All analyses were performed using SAS system version 8.2 (SAS Institute, Cary, NC, USA).\nThis study was designed to detect a 40% improvement in MST at a two-tailed significance level of P ≤ 0.05 with 80% power. The MST for S-1 monotherapy was assumed to be 8.5 months, based on the results of previous phase I/II studies [12, 23]. A total of 142 patients per group were required according to calculations made with nQuery Advisor version 4.0 (Statistical Solutions, Boston, MA, USA), and the sample size was set as 300 (150 patients per group).\nWe initially planned to continue follow-up for ≥1.5 years after the registration of all patients, with a cut-off date of April 2007. However, an unexpectedly high survival rate of 22% (68 of 315 patients) at the cut-off date prompted the Coordinating Committee, the medical expert, and the biostatistician to advise the sponsor to continue follow-up for a further year before performing the final analysis. Thus, the MST was also calculated using 2.5-year follow-up data.", "[SUBTITLE] Patient characteristics [SUBSECTION] Between June 2004 and November 2005, a total of 326 patients (S-1 monotherapy, n = 162; IRI-S, n = 164) were enrolled from 54 institutions and randomized (Fig. 1). Seven patients were subsequently found to be ineligible or withdrew before receiving any treatment. Another 4 patients were found to be ineligible after starting treatment and were not included in the analysis. Therefore, 315 patients (S-1 monotherapy, n = 160; IRI-S, n = 155) were evaluable and were included in the full analysis set to assess overall survival and TTF. In addition, 187 patients were evaluable for tumor response. Baseline patient characteristics are shown in Table 1.Fig. 1Patient disposition. FAS Full analysis set, IRI-S S-1 plus irinotecan, PPS per-protocol set, TTF time to treatment failure\nTable 1Baseline characteristics and prior therapyCharacteristicTreatmentS-1IRI-STotal\nn\n%\nn\n%\nn\n%Patients randomized162164326Patients receiving at least one dose of study medication (full analysis set)160155315Sex Male127791107123775 Female332145297825Age (years) Median636363 Range27–7533–7527–75ECOG performance status 0109681026621167 1462948319430 25353103Tumor histology Intestinal7144613913242 Diffuse8855936018157 Other111121Resection of primary tumor +9358936018659 −6742624012941Advanced133831298326283Recurrent Adjuvant chemotherapy (+)5353103 Adjuvant chemotherapy (−)221421144314\nIRI-S S-1 plus irinotecan, ECOG Eastern Cooperative Oncology Group\n\nPatient disposition. 
FAS Full analysis set, IRI-S S-1 plus irinotecan, PPS per-protocol set, TTF time to treatment failure\nBaseline characteristics and prior therapy\n\nIRI-S S-1 plus irinotecan, ECOG Eastern Cooperative Oncology Group\nBetween June 2004 and November 2005, a total of 326 patients (S-1 monotherapy, n = 162; IRI-S, n = 164) were enrolled from 54 institutions and randomized (Fig. 1). Seven patients were subsequently found to be ineligible or withdrew before receiving any treatment. Another 4 patients were found to be ineligible after starting treatment and were not included in the analysis. Therefore, 315 patients (S-1 monotherapy, n = 160; IRI-S, n = 155) were evaluable and were included in the full analysis set to assess overall survival and TTF. In addition, 187 patients were evaluable for tumor response. Baseline patient characteristics are shown in Table 1.Fig. 1Patient disposition. FAS Full analysis set, IRI-S S-1 plus irinotecan, PPS per-protocol set, TTF time to treatment failure\nTable 1Baseline characteristics and prior therapyCharacteristicTreatmentS-1IRI-STotal\nn\n%\nn\n%\nn\n%Patients randomized162164326Patients receiving at least one dose of study medication (full analysis set)160155315Sex Male127791107123775 Female332145297825Age (years) Median636363 Range27–7533–7527–75ECOG performance status 0109681026621167 1462948319430 25353103Tumor histology Intestinal7144613913242 Diffuse8855936018157 Other111121Resection of primary tumor +9358936018659 −6742624012941Advanced133831298326283Recurrent Adjuvant chemotherapy (+)5353103 Adjuvant chemotherapy (−)221421144314\nIRI-S S-1 plus irinotecan, ECOG Eastern Cooperative Oncology Group\n\nPatient disposition. FAS Full analysis set, IRI-S S-1 plus irinotecan, PPS per-protocol set, TTF time to treatment failure\nBaseline characteristics and prior therapy\n\nIRI-S S-1 plus irinotecan, ECOG Eastern Cooperative Oncology Group\n[SUBTITLE] Treatments given [SUBSECTION] The median number of treatment courses was three (range 1–19) for S-1 monotherapy whose duration was 6 weeks, and four (range 1–25) for IRI-S whose duration was 5 weeks. The main reasons for treatment discontinuation were disease progression [S-1 monotherapy vs. IRI-S, 116/160 (72.5%) vs. 89/155 (57.4%)], adverse events [12/160 (7.5%) vs. 23/155 (14.8%)], attending physician’s decision [18/160 (11.3%) vs. 18/155 (11.6%)], and consent withdrawal [11/160 (6.9%) vs. 17/155 (11.0%)]. The median TTF was 3.6 months (95% CI 2.9–4.1) and 4.5 months (95% CI 3.7–5.3), respectively (P = 0.157). The relative dose intensity was 88.9% for S-1 monotherapy, versus 90.0% for S-1 and 86.2% for irinotecan among those treated with IRI-S. Most patients in both groups received the scheduled dose of chemotherapy.\nSecond-line chemotherapy was administered to 240 patients (76%; S-1 monotherapy, n = 112; IRI-S, n = 128) (Table 2). The most common second-line therapy in both groups was a taxane alone (S-1 monotherapy, 26.9%; IRI-S, 40.6%). 
Among patients initially treated with S-1, 13 received crossover treatment with IRI-S, while 31 patients originally treated with IRI-S received second-line S-1 monotherapy.Table 2Second-line chemotherapyRegimenS-1 (n = 160)IRI-S (n = 155)\nn\n%\nn\n%IRI-S138.1––Irinotecan-based regimena\n2716.942.6S-1 alone––3120.0S-1-based regimenb\n95.6117.1Taxane alone4326.96340.6Others2012.51912.3None4830.02717.4\nIRI-S S-1 plus irinotecan\naIrinotecan/cisplatin, irinotecan/taxane\nbS-1/cisplatin, S-1/taxane\n\nSecond-line chemotherapy\n\nIRI-S S-1 plus irinotecan\n\naIrinotecan/cisplatin, irinotecan/taxane\n\nbS-1/cisplatin, S-1/taxane\nThe median number of treatment courses was three (range 1–19) for S-1 monotherapy whose duration was 6 weeks, and four (range 1–25) for IRI-S whose duration was 5 weeks. The main reasons for treatment discontinuation were disease progression [S-1 monotherapy vs. IRI-S, 116/160 (72.5%) vs. 89/155 (57.4%)], adverse events [12/160 (7.5%) vs. 23/155 (14.8%)], attending physician’s decision [18/160 (11.3%) vs. 18/155 (11.6%)], and consent withdrawal [11/160 (6.9%) vs. 17/155 (11.0%)]. The median TTF was 3.6 months (95% CI 2.9–4.1) and 4.5 months (95% CI 3.7–5.3), respectively (P = 0.157). The relative dose intensity was 88.9% for S-1 monotherapy, versus 90.0% for S-1 and 86.2% for irinotecan among those treated with IRI-S. Most patients in both groups received the scheduled dose of chemotherapy.\nSecond-line chemotherapy was administered to 240 patients (76%; S-1 monotherapy, n = 112; IRI-S, n = 128) (Table 2). The most common second-line therapy in both groups was a taxane alone (S-1 monotherapy, 26.9%; IRI-S, 40.6%). Among patients initially treated with S-1, 13 received crossover treatment with IRI-S, while 31 patients originally treated with IRI-S received second-line S-1 monotherapy.Table 2Second-line chemotherapyRegimenS-1 (n = 160)IRI-S (n = 155)\nn\n%\nn\n%IRI-S138.1––Irinotecan-based regimena\n2716.942.6S-1 alone––3120.0S-1-based regimenb\n95.6117.1Taxane alone4326.96340.6Others2012.51912.3None4830.02717.4\nIRI-S S-1 plus irinotecan\naIrinotecan/cisplatin, irinotecan/taxane\nbS-1/cisplatin, S-1/taxane\n\nSecond-line chemotherapy\n\nIRI-S S-1 plus irinotecan\n\naIrinotecan/cisplatin, irinotecan/taxane\n\nbS-1/cisplatin, S-1/taxane\n[SUBTITLE] Response and survival [SUBSECTION] The overall response rate was determined in 187 patients evaluable by the RECIST, and was significantly higher with IRI-S than with S-1 monotherapy (39/94, 41.5% vs. 25/93, 26.9%; P = 0.035) (Table 3).Table 3Response to treatmentS-1 (n = 93)IRI-S (n = 94)\nn\n%\nn\n%Complete response0000Partial response25273941Stable disease35384043Progressive disease30321213Not assessable3333Overall response rate26.941.5*95% CI18.2–37.131.4–52.1\nCI confidence interval*  P = 0.035 (χ2 test)\n\nResponse to treatment\n\nCI confidence interval\n*  P = 0.035 (χ2 test)\nThe MST at the predetermined cut-off date was 12.8 months with IRI-S compared with 10.5 months with S-1 monotherapy (HR 0.856, P = 0.233) (Fig. 2), but the difference was not statistically significant. The 1-year survival rates were 44.9% [95% CI 37.2–52.6%] with S-1 monotherapy and 52.0% (95% CI 44.1–59.9%) with IRI-S, while the 2-year survival rates were 19.5% (95% CI 12.6–26.4%) and 18.0% (95% CI 11.2–24.8%), respectively.Fig. 2Kaplan–Meier estimates of overall survival (a) and time to treatment failure (b) for 315 evaluable patients treated with S-1 monotherapy or S-1 plus irinotecan (IRI-S). 
MST Median survival time, TTF time to treatment failure, CI confidence interval\n\nKaplan–Meier estimates of overall survival (a) and time to treatment failure (b) for 315 evaluable patients treated with S-1 monotherapy or S-1 plus irinotecan (IRI-S). MST Median survival time, TTF time to treatment failure, CI confidence interval\nMST was additionally calculated as an exploratory analysis after 2.5 years of follow-up, but the result was identical to the initial analysis at 12.8 months for IRI-S and at 10.5 months for S-1 monotherapy (HR 0.927; log-rank test P = 0.536). Again, the difference was not statistically significant.\nThe overall response rate was determined in 187 patients evaluable by the RECIST, and was significantly higher with IRI-S than with S-1 monotherapy (39/94, 41.5% vs. 25/93, 26.9%; P = 0.035) (Table 3).Table 3Response to treatmentS-1 (n = 93)IRI-S (n = 94)\nn\n%\nn\n%Complete response0000Partial response25273941Stable disease35384043Progressive disease30321213Not assessable3333Overall response rate26.941.5*95% CI18.2–37.131.4–52.1\nCI confidence interval*  P = 0.035 (χ2 test)\n\nResponse to treatment\n\nCI confidence interval\n*  P = 0.035 (χ2 test)\nThe MST at the predetermined cut-off date was 12.8 months with IRI-S compared with 10.5 months with S-1 monotherapy (HR 0.856, P = 0.233) (Fig. 2), but the difference was not statistically significant. The 1-year survival rates were 44.9% [95% CI 37.2–52.6%] with S-1 monotherapy and 52.0% (95% CI 44.1–59.9%) with IRI-S, while the 2-year survival rates were 19.5% (95% CI 12.6–26.4%) and 18.0% (95% CI 11.2–24.8%), respectively.Fig. 2Kaplan–Meier estimates of overall survival (a) and time to treatment failure (b) for 315 evaluable patients treated with S-1 monotherapy or S-1 plus irinotecan (IRI-S). MST Median survival time, TTF time to treatment failure, CI confidence interval\n\nKaplan–Meier estimates of overall survival (a) and time to treatment failure (b) for 315 evaluable patients treated with S-1 monotherapy or S-1 plus irinotecan (IRI-S). MST Median survival time, TTF time to treatment failure, CI confidence interval\nMST was additionally calculated as an exploratory analysis after 2.5 years of follow-up, but the result was identical to the initial analysis at 12.8 months for IRI-S and at 10.5 months for S-1 monotherapy (HR 0.927; log-rank test P = 0.536). Again, the difference was not statistically significant.\n[SUBTITLE] Prognostic factors of all patients and factors that favored treatment with IRI-S [SUBSECTION] Baseline risk factors with a significant influence on the overall survival of all patients accrued (P < 0.05) were performance status (HR 1.348, 95% CI 1.079–1.686, Wald test P = 0.009), tumor histology (HR 1.720, 95% CI 1.161–2.548, P = 0.007), target lesion (HR 1.525, 95% CI 1.164–1.999, P = 0.002), and surgery for the primary tumor (HR 0.698, 95% CI 0.538–0.906, P = 0.007).\nStratified analysis according to baseline patient characteristics (Fig. 3) showed that IRI-S was significantly more effective than S-1 monotherapy for patients with diffuse-type histology (HR 0.632, 95% CI 0.454–0.880) and for those with an ECOG performance status of 1 or 2 (HR 0.614, 95% CI 0.401–0.940). No differences were observed for the other factors assessed.Fig. 3Subset analysis of overall survival stratified by baseline patient characteristics. 
CI Confidence interval, ECOG Eastern Cooperative Oncology Group, RECIST Response Evaluation Criteria in Solid Tumors\n\nSubset analysis of overall survival stratified by baseline patient characteristics. CI Confidence interval, ECOG Eastern Cooperative Oncology Group, RECIST Response Evaluation Criteria in Solid Tumors\nBaseline risk factors with a significant influence on the overall survival of all patients accrued (P < 0.05) were performance status (HR 1.348, 95% CI 1.079–1.686, Wald test P = 0.009), tumor histology (HR 1.720, 95% CI 1.161–2.548, P = 0.007), target lesion (HR 1.525, 95% CI 1.164–1.999, P = 0.002), and surgery for the primary tumor (HR 0.698, 95% CI 0.538–0.906, P = 0.007).\nStratified analysis according to baseline patient characteristics (Fig. 3) showed that IRI-S was significantly more effective than S-1 monotherapy for patients with diffuse-type histology (HR 0.632, 95% CI 0.454–0.880) and for those with an ECOG performance status of 1 or 2 (HR 0.614, 95% CI 0.401–0.940). No differences were observed for the other factors assessed.Fig. 3Subset analysis of overall survival stratified by baseline patient characteristics. CI Confidence interval, ECOG Eastern Cooperative Oncology Group, RECIST Response Evaluation Criteria in Solid Tumors\n\nSubset analysis of overall survival stratified by baseline patient characteristics. CI Confidence interval, ECOG Eastern Cooperative Oncology Group, RECIST Response Evaluation Criteria in Solid Tumors\n[SUBTITLE] Safety [SUBSECTION] Adverse events that occurred in each group are listed in Table 4. The incidence of major hematological toxicities was higher with IRI-S than with S-1 monotherapy. Grade 3 or 4 neutropenia was observed in 10.6% of patients treated with S-1 monotherapy versus 27.1% of patients treated with IRI-S, while the corresponding incidences of infection/febrile neutropenia were 3.8 versus 1.9%. The most common grade 3 or 4 non-hematological toxicities were diarrhea (S-1 monotherapy vs. IRI-S, 5.6 vs. 16.1%), anorexia (18.8 vs. 17.4%), nausea (5.6 vs. 7.1%), and vomiting (1.9 vs. 3.2%). Hand-foot skin reaction, a characteristic adverse event associated with some oral fluoropyrimidines, was confined to grade 2 or less and was observed in only 4.4 and 5.2% of patients treated with S-1 monotherapy and IRI-S, respectively. 
There were no treatment-related deaths among patients treated with S-1 monotherapy, whereas two patients in the IRI-S died of potentially treatment-related conditions (severe bone marrow dysfunction, multiple organ failure that was probably associated with multiple duodenal ulcers).Table 4Summary of adverse eventsS-1 (n = 160)IRI-S (n = 155)All eventsGrade 3/4All eventsGrade 3/4\nn\n%\nn\n%\nn\n%\nn\n%Anemia8351.91911.511372.92415.5Leukopenia8351.953.111574.21811.6Neutropenia8653.81710.611372.94227.1Infection/febrile neutropenia2817.563.84025.831.9Thrombocytopenia1811.363.81711.021.3Increased AST7546.985.06944.553.2Increased ALT5836.331.96944.531.9Increased bilirubin7446.395.65636.153.2Increased creatinine1710.621.31912.331.9Fatigue10163.1127.512379.4106.5Alopecia138.100.08756.100.0Anorexia10465.03018.812580.62717.4Diarrhea6339.495.610366.52516.1Nausea8452.595.611574.2117.1Vomiting6037.531.96843.953.2Stomatitis/pharyngitis2716.921.33421.942.6Hand-foot skin reaction74.400.085.200.0Pigmentation changes7446.300.07749.700.0Adverse events were graded according to National Cancer Institute Common Toxicity Criteria, version 2.0\nALT alanine aminotransferase, AST aspartate aminotransferase, IRI-S S-1 plus irinotecan\n\nSummary of adverse events\nAdverse events were graded according to National Cancer Institute Common Toxicity Criteria, version 2.0\n\nALT alanine aminotransferase, AST aspartate aminotransferase, IRI-S S-1 plus irinotecan\nAdverse events that occurred in each group are listed in Table 4. The incidence of major hematological toxicities was higher with IRI-S than with S-1 monotherapy. Grade 3 or 4 neutropenia was observed in 10.6% of patients treated with S-1 monotherapy versus 27.1% of patients treated with IRI-S, while the corresponding incidences of infection/febrile neutropenia were 3.8 versus 1.9%. The most common grade 3 or 4 non-hematological toxicities were diarrhea (S-1 monotherapy vs. IRI-S, 5.6 vs. 16.1%), anorexia (18.8 vs. 17.4%), nausea (5.6 vs. 7.1%), and vomiting (1.9 vs. 3.2%). Hand-foot skin reaction, a characteristic adverse event associated with some oral fluoropyrimidines, was confined to grade 2 or less and was observed in only 4.4 and 5.2% of patients treated with S-1 monotherapy and IRI-S, respectively. 
There were no treatment-related deaths among patients treated with S-1 monotherapy, whereas two patients in the IRI-S died of potentially treatment-related conditions (severe bone marrow dysfunction, multiple organ failure that was probably associated with multiple duodenal ulcers).Table 4Summary of adverse eventsS-1 (n = 160)IRI-S (n = 155)All eventsGrade 3/4All eventsGrade 3/4\nn\n%\nn\n%\nn\n%\nn\n%Anemia8351.91911.511372.92415.5Leukopenia8351.953.111574.21811.6Neutropenia8653.81710.611372.94227.1Infection/febrile neutropenia2817.563.84025.831.9Thrombocytopenia1811.363.81711.021.3Increased AST7546.985.06944.553.2Increased ALT5836.331.96944.531.9Increased bilirubin7446.395.65636.153.2Increased creatinine1710.621.31912.331.9Fatigue10163.1127.512379.4106.5Alopecia138.100.08756.100.0Anorexia10465.03018.812580.62717.4Diarrhea6339.495.610366.52516.1Nausea8452.595.611574.2117.1Vomiting6037.531.96843.953.2Stomatitis/pharyngitis2716.921.33421.942.6Hand-foot skin reaction74.400.085.200.0Pigmentation changes7446.300.07749.700.0Adverse events were graded according to National Cancer Institute Common Toxicity Criteria, version 2.0\nALT alanine aminotransferase, AST aspartate aminotransferase, IRI-S S-1 plus irinotecan\n\nSummary of adverse events\nAdverse events were graded according to National Cancer Institute Common Toxicity Criteria, version 2.0\n\nALT alanine aminotransferase, AST aspartate aminotransferase, IRI-S S-1 plus irinotecan", "Between June 2004 and November 2005, a total of 326 patients (S-1 monotherapy, n = 162; IRI-S, n = 164) were enrolled from 54 institutions and randomized (Fig. 1). Seven patients were subsequently found to be ineligible or withdrew before receiving any treatment. Another 4 patients were found to be ineligible after starting treatment and were not included in the analysis. Therefore, 315 patients (S-1 monotherapy, n = 160; IRI-S, n = 155) were evaluable and were included in the full analysis set to assess overall survival and TTF. In addition, 187 patients were evaluable for tumor response. Baseline patient characteristics are shown in Table 1.Fig. 1Patient disposition. FAS Full analysis set, IRI-S S-1 plus irinotecan, PPS per-protocol set, TTF time to treatment failure\nTable 1Baseline characteristics and prior therapyCharacteristicTreatmentS-1IRI-STotal\nn\n%\nn\n%\nn\n%Patients randomized162164326Patients receiving at least one dose of study medication (full analysis set)160155315Sex Male127791107123775 Female332145297825Age (years) Median636363 Range27–7533–7527–75ECOG performance status 0109681026621167 1462948319430 25353103Tumor histology Intestinal7144613913242 Diffuse8855936018157 Other111121Resection of primary tumor +9358936018659 −6742624012941Advanced133831298326283Recurrent Adjuvant chemotherapy (+)5353103 Adjuvant chemotherapy (−)221421144314\nIRI-S S-1 plus irinotecan, ECOG Eastern Cooperative Oncology Group\n\nPatient disposition. FAS Full analysis set, IRI-S S-1 plus irinotecan, PPS per-protocol set, TTF time to treatment failure\nBaseline characteristics and prior therapy\n\nIRI-S S-1 plus irinotecan, ECOG Eastern Cooperative Oncology Group", "The median number of treatment courses was three (range 1–19) for S-1 monotherapy whose duration was 6 weeks, and four (range 1–25) for IRI-S whose duration was 5 weeks. The main reasons for treatment discontinuation were disease progression [S-1 monotherapy vs. IRI-S, 116/160 (72.5%) vs. 89/155 (57.4%)], adverse events [12/160 (7.5%) vs. 23/155 (14.8%)], attending physician’s decision [18/160 (11.3%) vs. 
18/155 (11.6%)], and consent withdrawal [11/160 (6.9%) vs. 17/155 (11.0%)]. The median TTF was 3.6 months (95% CI 2.9–4.1) and 4.5 months (95% CI 3.7–5.3), respectively (P = 0.157). The relative dose intensity was 88.9% for S-1 monotherapy, versus 90.0% for S-1 and 86.2% for irinotecan among those treated with IRI-S. Most patients in both groups received the scheduled dose of chemotherapy.\nSecond-line chemotherapy was administered to 240 patients (76%; S-1 monotherapy, n = 112; IRI-S, n = 128) (Table 2). The most common second-line therapy in both groups was a taxane alone (S-1 monotherapy, 26.9%; IRI-S, 40.6%). Among patients initially treated with S-1, 13 received crossover treatment with IRI-S, while 31 patients originally treated with IRI-S received second-line S-1 monotherapy.Table 2Second-line chemotherapyRegimenS-1 (n = 160)IRI-S (n = 155)\nn\n%\nn\n%IRI-S138.1––Irinotecan-based regimena\n2716.942.6S-1 alone––3120.0S-1-based regimenb\n95.6117.1Taxane alone4326.96340.6Others2012.51912.3None4830.02717.4\nIRI-S S-1 plus irinotecan\naIrinotecan/cisplatin, irinotecan/taxane\nbS-1/cisplatin, S-1/taxane\n\nSecond-line chemotherapy\n\nIRI-S S-1 plus irinotecan\n\naIrinotecan/cisplatin, irinotecan/taxane\n\nbS-1/cisplatin, S-1/taxane", "The overall response rate was determined in 187 patients evaluable by the RECIST, and was significantly higher with IRI-S than with S-1 monotherapy (39/94, 41.5% vs. 25/93, 26.9%; P = 0.035) (Table 3).Table 3Response to treatmentS-1 (n = 93)IRI-S (n = 94)\nn\n%\nn\n%Complete response0000Partial response25273941Stable disease35384043Progressive disease30321213Not assessable3333Overall response rate26.941.5*95% CI18.2–37.131.4–52.1\nCI confidence interval*  P = 0.035 (χ2 test)\n\nResponse to treatment\n\nCI confidence interval\n*  P = 0.035 (χ2 test)\nThe MST at the predetermined cut-off date was 12.8 months with IRI-S compared with 10.5 months with S-1 monotherapy (HR 0.856, P = 0.233) (Fig. 2), but the difference was not statistically significant. The 1-year survival rates were 44.9% [95% CI 37.2–52.6%] with S-1 monotherapy and 52.0% (95% CI 44.1–59.9%) with IRI-S, while the 2-year survival rates were 19.5% (95% CI 12.6–26.4%) and 18.0% (95% CI 11.2–24.8%), respectively.Fig. 2Kaplan–Meier estimates of overall survival (a) and time to treatment failure (b) for 315 evaluable patients treated with S-1 monotherapy or S-1 plus irinotecan (IRI-S). MST Median survival time, TTF time to treatment failure, CI confidence interval\n\nKaplan–Meier estimates of overall survival (a) and time to treatment failure (b) for 315 evaluable patients treated with S-1 monotherapy or S-1 plus irinotecan (IRI-S). MST Median survival time, TTF time to treatment failure, CI confidence interval\nMST was additionally calculated as an exploratory analysis after 2.5 years of follow-up, but the result was identical to the initial analysis at 12.8 months for IRI-S and at 10.5 months for S-1 monotherapy (HR 0.927; log-rank test P = 0.536). Again, the difference was not statistically significant.", "Baseline risk factors with a significant influence on the overall survival of all patients accrued (P < 0.05) were performance status (HR 1.348, 95% CI 1.079–1.686, Wald test P = 0.009), tumor histology (HR 1.720, 95% CI 1.161–2.548, P = 0.007), target lesion (HR 1.525, 95% CI 1.164–1.999, P = 0.002), and surgery for the primary tumor (HR 0.698, 95% CI 0.538–0.906, P = 0.007).\nStratified analysis according to baseline patient characteristics (Fig. 
3) showed that IRI-S was significantly more effective than S-1 monotherapy for patients with diffuse-type histology (HR 0.632, 95% CI 0.454–0.880) and for those with an ECOG performance status of 1 or 2 (HR 0.614, 95% CI 0.401–0.940). No differences were observed for the other factors assessed.Fig. 3Subset analysis of overall survival stratified by baseline patient characteristics. CI Confidence interval, ECOG Eastern Cooperative Oncology Group, RECIST Response Evaluation Criteria in Solid Tumors\n\nSubset analysis of overall survival stratified by baseline patient characteristics. CI Confidence interval, ECOG Eastern Cooperative Oncology Group, RECIST Response Evaluation Criteria in Solid Tumors", "Adverse events that occurred in each group are listed in Table 4. The incidence of major hematological toxicities was higher with IRI-S than with S-1 monotherapy. Grade 3 or 4 neutropenia was observed in 10.6% of patients treated with S-1 monotherapy versus 27.1% of patients treated with IRI-S, while the corresponding incidences of infection/febrile neutropenia were 3.8 versus 1.9%. The most common grade 3 or 4 non-hematological toxicities were diarrhea (S-1 monotherapy vs. IRI-S, 5.6 vs. 16.1%), anorexia (18.8 vs. 17.4%), nausea (5.6 vs. 7.1%), and vomiting (1.9 vs. 3.2%). Hand-foot skin reaction, a characteristic adverse event associated with some oral fluoropyrimidines, was confined to grade 2 or less and was observed in only 4.4 and 5.2% of patients treated with S-1 monotherapy and IRI-S, respectively. There were no treatment-related deaths among patients treated with S-1 monotherapy, whereas two patients in the IRI-S died of potentially treatment-related conditions (severe bone marrow dysfunction, multiple organ failure that was probably associated with multiple duodenal ulcers).Table 4Summary of adverse eventsS-1 (n = 160)IRI-S (n = 155)All eventsGrade 3/4All eventsGrade 3/4\nn\n%\nn\n%\nn\n%\nn\n%Anemia8351.91911.511372.92415.5Leukopenia8351.953.111574.21811.6Neutropenia8653.81710.611372.94227.1Infection/febrile neutropenia2817.563.84025.831.9Thrombocytopenia1811.363.81711.021.3Increased AST7546.985.06944.553.2Increased ALT5836.331.96944.531.9Increased bilirubin7446.395.65636.153.2Increased creatinine1710.621.31912.331.9Fatigue10163.1127.512379.4106.5Alopecia138.100.08756.100.0Anorexia10465.03018.812580.62717.4Diarrhea6339.495.610366.52516.1Nausea8452.595.611574.2117.1Vomiting6037.531.96843.953.2Stomatitis/pharyngitis2716.921.33421.942.6Hand-foot skin reaction74.400.085.200.0Pigmentation changes7446.300.07749.700.0Adverse events were graded according to National Cancer Institute Common Toxicity Criteria, version 2.0\nALT alanine aminotransferase, AST aspartate aminotransferase, IRI-S S-1 plus irinotecan\n\nSummary of adverse events\nAdverse events were graded according to National Cancer Institute Common Toxicity Criteria, version 2.0\n\nALT alanine aminotransferase, AST aspartate aminotransferase, IRI-S S-1 plus irinotecan", "This study was conducted to determine whether IRI-S could prolong MST compared with S-1 monotherapy. Basic studies have indicated that irinotecan has a multifactorial synergistic effect with the anti-tumor activity of 5-FU [24, 25]. 
In addition, several trials exploring combinations of S-1 and irinotecan have reported promising response rates [19, 23, 26, 27]; the dose and schedule in the present study were selected based on the lower incidence of grade 3 neutropenia and gastrointestinal toxicity evidenced in the phase II studies among these trials.\nAlthough the combination therapy in the present study achieved a significantly higher response rate, the initial expectation that the addition of irinotecan would improve the MST by 40% was not met. Thus, the combination of S-1 and CDDP remains the first-line chemotherapy that can be recommended for Japanese patients, while patients who are frail or those who wish to avoid the short hospital stay required for hydration could turn to S-1 monotherapy. Another standard treatment could become available pending the results of a phase III trial comparing S-1 with an S-1/docetaxel combination [17]. A combination of CDDP with 5-FU or its derivative capecitabine has been used as a platform for molecularly targeted agents in recent international trials [28]; at present, therefore, platinum agents seem indispensable in the first-line treatment of gastric cancer.\nIrinotecan has often been delivered in combination with CDDP for gastric cancer in the West [29]. This combination was also explored in Japan in a phase II trial [30] and subsequently in a phase III trial [13], but failed to show statistically significant superiority over infusional 5-FU alone. Irinotecan was more recently found to be similarly effective to CDDP when delivered with 5-FU [31], with the benefit of a more favorable toxicity profile. The combination then went on to be compared with a 5-FU/CDDP combination [4], but, again, failed to show a survival advantage. With similar results obtained from the present study, irinotecan-based chemotherapy would no longer be expected to surpass 5-FU or its derivatives, with or without CDDP, in the first-line setting.\nOur stratified analysis revealed that IRI-S had a significant effect on overall survival in patients with diffuse-type histology and in those with an ECOG performance status of 1 or 2 (Fig. 3); that is, IRI-S was more effective in symptomatic patients. This finding may be related to its higher response rate: tumor shrinkage with subsequent attenuation of clinical symptoms may have led to longer survival. The effect of IRI-S in cancer with diffuse-type histology was in line with the subset analysis of another phase III study, in which an irinotecan/CDDP combination improved the survival of patients with undifferentiated gastric cancer [13]. However, these data contradict those of a phase II study of the combination of S-1 and irinotecan [19], in which a higher response rate was observed for intestinal-type histology. It therefore does not seem feasible at this time to identify patients who may benefit from IRI-S using easily accessible clinicopathologic factors.\nAs mentioned previously, cytotoxic drugs tend to be used sequentially as second-line and third-line therapies in some countries, including Japan. Recently, Thuss-Patience et al. [32] reported on second-line treatment for metastatic gastric cancer, and stated that irinotecan monotherapy significantly extended survival compared with best supportive care.
A retrospective study exploring a combination of irinotecan and CDDP for patients who had failed first-line therapy with S-1 showed a promising response rate of 28.6% and an MST of 9.4 months from the first day of second-line treatment [33]. Another retrospective study, also in the second-line setting, showed promising MSTs, ranging from 9.5 to 10.1 months [34]. These studies suggest a role for irinotecan after the failure of a 5-FU-based first-line treatment, provided that patients retain sufficient performance status to tolerate this drug. Because definite evidence remains unavailable, further prospective studies in the second-line and third-line settings are warranted to confirm the place of irinotecan in the treatment of gastric cancer. Moreover, first-line IRI-S uses up one of the promising drug combinations for second-line treatment, without sufficiently prolonging TTF compared with S-1 monotherapy; this could partially explain why the combination failed to achieve a significant gain in MST in the present study.\nIRI-S was generally well tolerated in the present study. The dose intensity of S-1 in patients treated with IRI-S was equivalent to that in patients receiving S-1 monotherapy, demonstrating the good tolerability of IRI-S. The most common grade 3 or 4 adverse events associated with this regimen were neutropenia (27.1%) and diarrhea (16.1%), both of which were more frequent than in patients receiving S-1 monotherapy. Even so, IRI-S appears to be better tolerated than either the S-1/CDDP or irinotecan/CDDP regimens explored in other phase III studies [13, 20]. Grade 3 or 4 neutropenia was less common with IRI-S than with the S-1/CDDP and irinotecan/CDDP regimens (27 vs. 40% and 65%, respectively), as were anorexia (17 vs. 30% and 33%) and nausea (7 vs. 12% and 21%). Only diarrhea was more common with IRI-S than with the S-1/CDDP and irinotecan/CDDP regimens (16 vs. 4% and 9%, respectively) [13, 20]. It should be noted, however, that in the present study two patients who received IRI-S died of potentially treatment-related conditions. Testing for uridine 5′-diphospho-glucuronosyltransferase gene polymorphisms, which had not been approved at the time the trial was conducted, could now identify the small number of patients likely to suffer overt adverse reactions to IRI-S [35].\nAlthough manageable in most cases, IRI-S was more toxic than S-1 monotherapy. To conclude, the improvement in the response rate observed with IRI-S did not translate into the predicted prolongation of MST." ]
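The survival results in this record are reported as median survival times and subgroup hazard ratios with 95% confidence intervals (for example, HR 0.632, 95% CI 0.454–0.880 for diffuse-type histology). The trial's own analysis is not reproduced here; purely as an illustration of how such figures relate to event counts and follow-up, the sketch below uses the textbook person-time (exponential) approximation of a hazard ratio with a log-scale confidence interval. All numbers in the example are invented, and a real analysis would use a Cox model or log-rank test on patient-level data.

```python
import math

def hazard_ratio(events_a, person_time_a, events_b, person_time_b, z=1.96):
    """Person-time (exponential) approximation of a hazard ratio, group A vs. B.

    Returns (HR, lower 95% CI, upper 95% CI). The standard error of log(HR)
    is approximated as sqrt(1/d_A + 1/d_B), which ignores the censoring
    details a full Cox model would handle.
    """
    rate_a = events_a / person_time_a      # events per unit follow-up, group A
    rate_b = events_b / person_time_b      # events per unit follow-up, group B
    hr = rate_a / rate_b
    se_log_hr = math.sqrt(1.0 / events_a + 1.0 / events_b)
    lower = math.exp(math.log(hr) - z * se_log_hr)
    upper = math.exp(math.log(hr) + z * se_log_hr)
    return hr, lower, upper

# Hypothetical subgroup: 60 deaths over 700 patient-months in one arm versus
# 70 deaths over 520 patient-months in the other. Not trial data.
print("HR %.3f (95%% CI %.3f-%.3f)" % hazard_ratio(60, 700, 70, 520))
```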
[ "introduction", null, null, null, null, null, "results", null, null, null, null, null, "discussion" ]
[ "Irinotecan S-1", "Gastric cancer", "Phase III", "Randomized controlled trial" ]
Involvement of promoter methylation in the regulation of Pregnane X receptor in colon cancer cells.
21342487
Pregnane X receptor (PXR) is a key transcription factor that regulates drug-metabolizing enzymes such as cytochrome P450 (CYP) 3A4 and plays important roles in intestinal first-pass metabolism. Although there is large inter-individual heterogeneity in intestinal CYP3A4 expression and activity, the mechanism driving these differences is not sufficiently explained by genetic variability of PXR or CYP3A4. We examined whether epigenetic mechanisms are involved in the regulation of the PXR/CYP3A4 pathway in colon cancer cells.
BACKGROUND
mRNA levels of PXR, CYP3A4 and vitamin D receptor (VDR) were evaluated by quantitative real-time PCR on 6 colon cancer cell lines (Caco-2, HT29, HCT116, SW48, LS180, and LoVo). DNA methylation status was also examined by bisulfite sequencing of the 6 cell lines and 18 colorectal cancer tissue samples. DNA methylation was reversed by the treatment of these cell lines with 5-aza-2'-deoxycytidine (5-aza-dC).
METHODS
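The methods summarized above rely on bisulfite sequencing to read out DNA methylation. The principle is that sodium bisulfite converts unmethylated cytosines to uracil (read as thymine after PCR amplification), while 5-methylcytosines resist conversion and are still read as cytosine. The toy function below, written only for this illustration and not part of the study's pipeline, mimics that readout for a sequence plus a set of methylated positions.

```python
def bisulfite_read(seq, methylated_positions):
    """Simulate the top-strand sequence read after bisulfite conversion and PCR.

    seq: genomic sequence (string of A/C/G/T).
    methylated_positions: 0-based indices of cytosines (typically the C of a
    CpG) carrying 5-methylcytosine, which therefore resist conversion.
    Unmethylated C -> U -> read as T; methylated C stays C.
    """
    out = []
    for i, base in enumerate(seq.upper()):
        if base == "C" and i not in methylated_positions:
            out.append("T")      # converted (unmethylated) cytosine
        else:
            out.append(base)     # methylated C, or A/G/T, unchanged
    return "".join(out)

# CpGs at positions 1 and 5; only the first is methylated in this toy example.
print(bisulfite_read("ACGTACGTC", {1}))   # -> "ACGTATGTT"
```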
The 6 colon cancer cell lines were classified into two groups (high or low expression cells) based on the basal level of PXR/CYP3A4 mRNA. DNA methylation of the CpG-rich sequence of the PXR promoter was more densely detected in the low expression cells (Caco-2, HT29, HCT116, and SW48) than in the high expression cells (LS180 and LoVo). This methylation was reversed by treatment with 5-aza-dC, in association with re-expression of PXR and CYP3A4 mRNA, but not VDR mRNA. Therefore, PXR transcription was silenced by promoter methylation in the low expression cells, which most likely led to downregulation of CYP3A4 transactivation. Moreover, a lower level of PXR promoter methylation was observed in colorectal cancer tissues compared with adjacent normal mucosa, suggesting upregulation of the PXR/CYP3A4 mRNAs during carcinogenesis.
RESULTS
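The results above describe promoter methylation being "more densely detected" in the low-expression cell lines, based on bisulfite sequencing of at least seven subcloned DNA strands per sample. One simple way to quantify such clone-by-CpG patterns, shown here purely as an illustration with invented data rather than the study's actual scoring, is the fraction of methylated calls per CpG site and overall.

```python
# Each string is one sequenced clone; 'M' = methylated CpG, 'U' = unmethylated.
# These patterns are invented for illustration, not taken from the study.
clones = [
    "MMUMMMUM",
    "MMMMUMUM",
    "UMMMMMMM",
    "MMUMMMMM",
    "MMMMMMUM",
    "MUMMMMMM",
    "MMMMMUMM",
]

n_sites = len(clones[0])
per_site = [
    sum(clone[i] == "M" for clone in clones) / len(clones)
    for i in range(n_sites)
]
overall = sum(per_site) / n_sites

print("per-CpG methylation fractions:", [round(f, 2) for f in per_site])
print("overall methylation: %.1f%%" % (100 * overall))
```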
PXR promoter methylation is involved in the regulation of intestinal PXR and CYP3A4 mRNA expression and might be associated with the inter-individual variability of the drug responses of colon cancer cells.
CONCLUSIONS
[ "Azacitidine", "Caco-2 Cells", "Carcinoma", "Cell Line, Tumor", "Colonic Neoplasms", "Cytochrome P-450 CYP3A", "DNA Methylation", "Decitabine", "Gene Expression Regulation, Enzymologic", "Gene Expression Regulation, Neoplastic", "HCT116 Cells", "HT29 Cells", "Humans", "Pregnane X Receptor", "Promoter Regions, Genetic", "Receptors, Calcitriol", "Receptors, Steroid" ]
3053268
null
null
Methods
[SUBTITLE] Cell lines and tissue samples [SUBSECTION] Human colorectal cancer cell lines LS180, Caco-2, HT29, HCT116, DLD-1, LoVo, SW48 and SW620 were purchased from DS Pharma Biomedical Co., Ltd. (Osaka, Japan). LS180 cells were cultured in E-MEM medium (Invitrogen Corp., Carlsbad, CA) at 37°C under an atmosphere of 5% CO2. The other cells were cultured under conditions described elsewhere [10]. Eighteen pairs of cancerous and adjacent normal mucosa were excised from surgical specimens of colorectal cancers. The cancerous and normal epithelia were separated from stroma using crypt isolation [11]. All samples were selected from the same series of cancers as we used in a previous study [10]. None of the 18 patients with colorectal cancer received chemotherapy before surgical resection. The study protocol was approved by the ethics committee of Iwate Medical University (molecular analysis of gastrointestinal tumors and the surrounding mucosa; reference number, H21-140).
[SUBTITLE] Treatment with 5-aza-2'-deoxycytidine [SUBSECTION] LS180, LoVo, Caco-2, HCT116, HT29 and SW48 cells were seeded at a concentration of 1 × 10^5 cells on a 100 mm dish. The next day, treatment of the cells with 0, 0.5 or 5 μM 5-aza-2'-deoxycytidine (5-aza-dC) (Sigma Chemical, St. Louis, MO) was started, and the 5-aza-dC was removed by changing the medium 24 h later. The cells were harvested 4 days after removal of 5-aza-dC for DNA and RNA extraction.
[SUBTITLE] Quantitative real-time PCR analysis of basal CYP3A4, PXR and VDR mRNA levels [SUBSECTION] Total RNA extraction, cDNA synthesis and real-time PCR were carried out on the cells prepared above, by the same methods as described previously [10]. mRNA levels of CYP3A4 (exon 3-4, Hs01546612_m1), PXR (exon 5-6, Hs00243666_m1) and the vitamin D receptor (VDR) (exon 10-11, Hs01045840_m1) were evaluated by TaqMan Gene Expression Assays (Applied Biosystems, Foster City, CA). In addition, the mRNA level of the PXR splicing variants (exon 1a-2) was also examined by SYBR Green assays using the following primers: 5'-GATTGTTCAAAGTGGACCCC-3' (forward) and 5'-TCCAGGAACAGACTCTGTGT-3' (reverse). The mRNA levels of the above target genes were normalized to β-actin mRNA [10]. All samples were analyzed in duplicate and the average quantities of the gene transcripts were used for calculation. The deviation of the mRNA level of each sample was within 7% of the average.
[SUBTITLE] DNA methylation analysis of the 6 colon cancer cell lines and 18 colon cancer samples [SUBSECTION] We found CpG islands within the PXR (around the exon 3 region), VDR (promoter region) and protein arginine methyltransferase 1 (PRMT1) (promoter region) genes using the CpG Island Searcher program [12,13]. A CpG island was also detected in the 5' untranslated region (UTR) of the CYP3A4 gene (approximately 25 kb distal to the transcription start site). We also found a CpG-rich sequence in the promoter region of the PXR gene, although this sequence did not strictly satisfy the criteria for a CpG island [13,14]. The locations of all CpG sequences examined in this study are shown in Figure 1. Genomic DNA extracted from the 6 cell lines and the 18 pairs of normal and colon cancer tissue samples was modified by sodium bisulfite, and then each segment including a CpG island or CpG-rich sequence was amplified by PCR and subjected to direct sequencing. The methylation status of the PXR promoter sequence was also estimated by bisulfite sequencing of at least 7 individual DNA strands after subcloning of the PCR products into the pCR4-TOPO vector using the TOPO TA Cloning Kit for Sequencing (Invitrogen). The relative methylation level of the PXR promoter was determined visually from the density of each HpyCH4IV-digested band using combined bisulfite restriction analysis (COBRA) [15]. The methylation status of the PXR exon 3 region was also examined by the COBRA assay using HhaI digestion. All primer sequences are listed in Table 1.
Figure 1. Location of the CpG sequences examined. CpG sites (vertical bars), CpG islands identified by the CpG Island Searcher (horizontal thick lines) and transcription start sites within the 5' region (curved arrows) are shown for the (a) PXR, (b) CYP3A4, (c) VDR and (d) PRMT1 genes. The segments indicated by double-pointed arrows were examined for DNA methylation by bisulfite direct sequencing.
Table 1. Primers used for DNA methylation analysis. F, forward primer; R, reverse primer. *Dimethyl sulfoxide (5%) was added to the PCR mixture.
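The methods above state that target mRNA levels were normalized to β-actin and expressed relative to a calibrator (for example, LS180 cells or untreated cells). The exact calculation is not spelled out in this text, so the sketch below assumes the widely used comparative Ct (2^-ΔΔCt) method; treat it as an illustration rather than the authors' exact procedure. The Ct values are invented.

```python
def relative_expression(ct_target, ct_actin, ct_target_cal, ct_actin_cal):
    """Comparative Ct (2^-ddCt) estimate of expression relative to a calibrator.

    dCt  = Ct(target) - Ct(beta-actin) within each sample
    ddCt = dCt(sample) - dCt(calibrator)
    Assumes roughly 100% amplification efficiency for both assays.
    """
    d_ct_sample = ct_target - ct_actin
    d_ct_calibrator = ct_target_cal - ct_actin_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: PXR in a 5-aza-dC-treated line versus the same line untreated.
fold = relative_expression(ct_target=27.5, ct_actin=17.0,
                           ct_target_cal=30.6, ct_actin_cal=17.1)
print("PXR expression relative to untreated cells: %.1f-fold" % fold)
```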
null
null
null
null
[ "Background", "Cell lines and tissue samples", "Treatment with 5-aza-2'-deoxycytidine", "Quantitative real-time PCR analysis of basal CYP3A4, PXR and VDR mRNA levels", "DNA methylation analysis of the 6 colon cancer cell lines and 18 colon cancer samples", "Results", "Basal mRNA levels of the CYP3A4, PXR and VDR genes in the 6 colon cancer cell lines", "Increased mRNA expression by 5-aza-dC treatment", "DNA methylation status of the colon cancer cell lines", "DNA methylation status of colon cancer tissue samples", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Nuclear receptor families play a pivotal role in regulating genes involved in drug metabolism and disposition. Pregnane X receptor (PXR, also termed SXR, PAR, and NR1I2 as its gene name) is a crucial regulator of various phase I and phase II drug metabolizing enzymes and drug transporters. PXR is expressed in liver, small intestine and other organs. PXR, with a number of therapeutic drugs and other xenobiotics as its ligands, dimerizes with retinoid X receptor α (RXRα). The ligand-PXR-RXRα complex binds to promoter and enhancer elements located upstream of cytochrome P450s (CYPs) 3A and 2C family members, UDP-glucuronosyltransferases (UGTs), sulfotransferases (SULTs), glutathione S-transferases (GSTs), and ATP binding cassette (ABC) drug transporters (reviewed in [1-3]).\nWide inter-individual variability has been documented in the expression of hepatic CYP3A4 with respect to basal and PXR-inducible activities. The genetic variability of PXR and CYP3A4 is not sufficiently frequent to explain the apparent inter-individual variability [4]. The inter-individual variability in basal hepatic CYP3A4 expression may include variability in PXR expression, as PXR is activated by endogenous steroid hormones and bile acids. Inter-individual variability of CYP3A4 expression is also observed in human intestine. Interestingly, a report using paired tissue samples of livers and small intestines indicated no observed correlation between the hepatic and small intestine CYP3A4 expression levels [5]. Another research group reported that a majority of CYP3A4 resided in the proximal region of the small intestine, and that the CYP3A4 protein levels decreased dramatically in the distal small intestine [6]. These observations suggest the CYP3A4 is regulated through tissue specific epigenetic regulation in normal tissue. The aim of the present study is to find possible and not yet fully elucidated mechanisms which regulate heterogeneous basal PXR and CYP3A4 expression and activity in cancerous tissues as well. Indeed, several studies previously found a fraction of genes that exhibited inter-individual differences in transcript levels associated with DNA methylation status [7,8].\nDNA methylation of the CpG-rich sequence around exon 3 of the PXR gene is involved in the epigenetic regulation of PXR in human neuroblastoma [9]. However, epigenetic regulation of PXR and CYP3A4 in human gut is poorly understood. In order to determine whether epigenetic mechanisms function in PXR and CYP3A4 regulation and intestinal metabolism, we examined DNA methylation and mRNA expression of several candidate genes on the PXR/CYP3A4 regulatory pathway in human colon cancer cell lines and tissues.", "Human colorectal cancer cell lines LS180, Caco-2, HT29, HCT116, DLD-1, LoVo, SW48 and SW620 were purchased from DS Pharma Biomedical Co., Ltd. (Osaka, Japan). LS180 cells were cultured in E-MEM medium (Invitrogen Corp., Carlsbad, CA) at 37°C under an atmosphere of 5% CO2. The other cells were cultured under conditions described elsewhere [10].\nEighteen pairs of cancerous and adjacent normal mucosa were excised from surgical specimens of colorectal cancers. The cancerous and normal epithelia were separated from stroma using crypt isolation [11]. All samples were selected from the same series of cancers as we used in a previous study [10]. All 18 patients with colorectal cancers did not receive chemotherapy before surgical resection. 
The study protocol was approved by ethics committee of Iwate Medical University (molecular analysis of gastrointestinal tumors and the surrounding mucosa; reference number, H21-140).", "LS180, LoVo, Caco-2, HCT116, HT29 and SW48 cells were seeded at a concentration of 1 × 105 cells on a 100 mm dish. The next day, treatment of cells with 0, 0.5 or 5 μM 5-aza-2'-deoxycytidine (5-aza-dC) (Sigma Chemical, St. Louis, MO) was started, and 5-aza-dC was removed by changing the medium 24 h later. The cells were harvested 4 days after removal of 5-aza-dC for DNA and RNA extraction.", "Total RNA extraction, cDNA synthesis and real-time PCR were carried out on the cells prepared above, by the same methods as described previously [10]. mRNA levels of the CYP3A4 (exon 3-4, Hs01546612_m1), PXR (exon 5-6, Hs00243666_m1) and vitamin D receptor (VDR) (exon 10-11, Hs01045840_m1) were evaluated by TaqMan Gene Expression Assays (Applied Biosystems, Foster City, CA). In addition, the mRNA level of the PXR splicing variants (exon 1a-2) was also examined by SYBR Green assays using the following primers: 5'-GATTGTTCAAAGTGGACCCC-3'(forward) and 5'-TCCAGGAACAGACTCTGTGT-3'. The mRNA level of the above target genes was normalized to β-actin mRNA [10]. All samples were analyzed in duplicate and average quantities of the gene transcripts were used for calculation. Deviation of the mRNA level of each sample was within 7% of the average.", "We found CpG islands within the PXR (around exon 3 region), VDR (promoter region) and protein arginine methyltrasferase 1(PRMT1) (promoter region) genes using the CpG Island Searcher program [12,13]. A CpG island was also detected in the 5' untranslated region (UTR) of the CYP3A4 gene (approximately 25 kb distal to the transcription start site). We also found a CpG-rich sequence in the promoter region of the PXR gene, although this sequence did not strictly satisfy the criteria for a CpG island [13,14]. The location of all CpG sequences examined in this study are shown in Figure 1. Genomic DNA extracted from the 6 cell lines and the 18 pairs of normal and colon cancer tissue samples was modified by sodium bisulfite, and then each segment including a CpG island or CpG-rich sequence was amplified by PCR and subjected to direct sequencing. The methylation status of the PXR promoter sequence was also estimated by bisulfite sequencing on at least 7 individual DNA strands after subcloning of PCR products into the pCR4-TOPO vector using the TOPO TA Cloning Kit for Sequencing (Invitrogen). A relative methylation level of the PXR promoter was visually determined by the density of each HpyCH4IV-digested band using combined bisulfite restriction analysis (COBRA) [15]. The methylation status of the PXR exon 3 region was also examined by the COBRA assay using an HhaI digestion. All primer sequences are listed in Table 1.\nLocation of the CpG sequences examined. CpG sites (vertical bars), CpG islands identified by CpG island searcher (horizontal thick lines) and transcription start sites within the 5' prime region (curved arrows) are shown in the (a), PXR; (b), CYP3A4; (c), VDR; and (d), PRMT1 genes. 
The segments indicated by double-pointed arrows were examined for DNA methylation by bisulfite direct sequencing.\nPrimers used for DNA methylation analysis.\nF, forward primer; R, reverse primer\n*Dimethyl sulfoxide (5%) was added to the PCR mixture.", "[SUBTITLE] Basal mRNA levels of the CYP3A4, PXR and VDR genes in the 6 colon cancer cell lines [SUBSECTION] Real-time PCR analyses revealed that the basal levels of CYP3A4, PXR (exon 5-6), PXR (exon 1a-2) and VDR mRNA were heterogeneous among the 6 cell lines examined (Figure 2). The 6 cell lines were then classified into two groups (high or low expression cells) based on the basal level of PXR and CYP3A4 mRNA, because the 6 cell lines always showed either high PXR and CYP3A4 expression or low PXR and CYP3A4 expression. The levels of CYP3A4 and PXR transcripts on high expression cells (LS180 and LoVo) were 7- to 35-fold and 40- to 5,000-fold higher than those on lower expression cells (Caco-2, HT29, HCT116 and SW48), respectively. These two groups also exhibited a difference in the basal level of the VDR transcript, although this difference was not marked (3.5- to 6-fold). There was strong correlation of the levels of the transcripts between PXR (exon 5-6) and PXR (exon 1a-2) throughout all analyses.\nBasal expression profile of colon cancer cell lines. Basal levels of (a), PXR exon 5-6; (b), PXR exon 1a-2; (c), CYP3A4; and (d), VDR transcripts in LS180, LoVo, Caco-2, HT29, HCT116 and SW48 cells. The vertical axis indicates a relative transcript level (ratio to LS180 cells). The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples (0.00257 and 0.0399, respectively).\nReal-time PCR analyses revealed that the basal levels of CYP3A4, PXR (exon 5-6), PXR (exon 1a-2) and VDR mRNA were heterogeneous among the 6 cell lines examined (Figure 2). The 6 cell lines were then classified into two groups (high or low expression cells) based on the basal level of PXR and CYP3A4 mRNA, because the 6 cell lines always showed either high PXR and CYP3A4 expression or low PXR and CYP3A4 expression. The levels of CYP3A4 and PXR transcripts on high expression cells (LS180 and LoVo) were 7- to 35-fold and 40- to 5,000-fold higher than those on lower expression cells (Caco-2, HT29, HCT116 and SW48), respectively. These two groups also exhibited a difference in the basal level of the VDR transcript, although this difference was not marked (3.5- to 6-fold). There was strong correlation of the levels of the transcripts between PXR (exon 5-6) and PXR (exon 1a-2) throughout all analyses.\nBasal expression profile of colon cancer cell lines. Basal levels of (a), PXR exon 5-6; (b), PXR exon 1a-2; (c), CYP3A4; and (d), VDR transcripts in LS180, LoVo, Caco-2, HT29, HCT116 and SW48 cells. The vertical axis indicates a relative transcript level (ratio to LS180 cells). The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples (0.00257 and 0.0399, respectively).\n[SUBTITLE] Increased mRNA expression by 5-aza-dC treatment [SUBSECTION] In order to determine whether DNA methylation is involved in the transcriptional regulation of these genes, the 6 cell lines were treated with DNA demethylating agent (5-aza-dC). 
The treatment with 5-aza-dC induced a clear increase in CYP3A4 (28- to 116-fold) and PXR (3- to 10-fold) transcripts in a dose-dependent manner in the low expression cells, but not in the high expression cells (Figure 3). In particular, the CYP3A4 transcript of the low expression cells eventually reached the levels seen in the high expression cells by 5-aza-dC treatment. In contrast, 5-aza-dC had no marked effect on VDR expression in any of the cell lines (0.5- to 1.2-fold increase). These results suggested that DNA methylation is involved in the regulation of CYP3A4 and PXR, but not VDR, in the low expression cell lines.\nExpression profile of colon cancer cell lines after 5-aza-dC treatment. Levels of PXR, VDR and CYP3A4 transcripts in (a), LS180; (b), Caco-2; (c), HCT116; (d), LoVo; (e), HT29; and (f), SW48 cells. Cells were treated with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). The vertical axis indicates a relative transcript level (ratio to cells without 5-aza-dC treatment). Numbers in italics indicate relative transcript levels that were over the maximum scale on the vertical axis. The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples.\nIn order to determine whether DNA methylation is involved in the transcriptional regulation of these genes, the 6 cell lines were treated with DNA demethylating agent (5-aza-dC). The treatment with 5-aza-dC induced a clear increase in CYP3A4 (28- to 116-fold) and PXR (3- to 10-fold) transcripts in a dose-dependent manner in the low expression cells, but not in the high expression cells (Figure 3). In particular, the CYP3A4 transcript of the low expression cells eventually reached the levels seen in the high expression cells by 5-aza-dC treatment. In contrast, 5-aza-dC had no marked effect on VDR expression in any of the cell lines (0.5- to 1.2-fold increase). These results suggested that DNA methylation is involved in the regulation of CYP3A4 and PXR, but not VDR, in the low expression cell lines.\nExpression profile of colon cancer cell lines after 5-aza-dC treatment. Levels of PXR, VDR and CYP3A4 transcripts in (a), LS180; (b), Caco-2; (c), HCT116; (d), LoVo; (e), HT29; and (f), SW48 cells. Cells were treated with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). The vertical axis indicates a relative transcript level (ratio to cells without 5-aza-dC treatment). Numbers in italics indicate relative transcript levels that were over the maximum scale on the vertical axis. The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples.\n[SUBTITLE] DNA methylation status of the colon cancer cell lines [SUBSECTION] The bisulfite direct sequencing detected no DNA methylation in the VDR or PRMT1 promoter sequences in the 6 cell lines. Partial methylation of the CYP3A4 5'-distal region and full methylation of the PXR exon 3 region were equally observed among the 6 cell lines. Therefore, the different expression profiles of the two groups cannot be explained by methylation of these sequences. On the other hand, the CpG-rich sequence of the PXR promoter showed a different methylation status among the 6 cell lines. 
Interestingly, the degree of methylation of the PXR promoter (segments 1 and 2) in the high expression cells (LS180 and LoVo) was lower than that in the low expression cells (Caco-2, HT29, HCT116 and SW48) (Figure 4). Therefore, the details of the methylation status of the PXR promoter were estimated on individual DNA strands after subcloning (Figure 5). We found that an HpyCH4IV site within segment 2 was a suitable marker for the COBRA assay to assess PXR promoter methylation, because the degree of methylation of this site showed inverse correlation with the levels of CYP3A4 and PXR expression.\nDNA methylation profile of PXR as detected by direct sequencing. Methylation status of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N) detected by bisulfite direct sequencing. Open and closed circles represent unmethylated and fully methylated CpG sites, respectively. Half-closed circles represent partially methylated CpG sites. The methylation status of the CpG island (around exon 3) and CpG-rich promoter sequence (segments 1, 2, 3 and 4) are shown in the upper and lower panels, respectively.\nDetailed DNA methylation profile of PXR segment 2. Detailed methylation profile of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N). Methylation of each CpG site was estimated by bisulfite sequencing on 7 or 8 individual DNA strands after subcloning. Open and closed circles represent unmethylated and methylated CpG sites, respectively. A 'TATA' indicates a putative TATA box. A curved arrow indicates a transcription start site. 'HpyCH4IV' indicates restriction site using the COBRA assay.\nThe COBRA assay demonstrated that the treatment with 5-aza-dC resulted in decreased amounts of methylation of the PXR promoter in a dose-dependent manner in the low expression cells (Figure 6a). Therefore, the magnitude of methylation was likely to be associated with the decreased levels of CYP3A4 and PXR gene expression in all 6 cell lines. These results suggest that the PXR gene was transcriptionally silenced by methylation of the promoter CpG sites and that the downregulation of the PXR protein resulted in decreased expression of the CYP3A4 gene.\nMethylation status of the PXR promoter and exon 3 regions, as detected by the COBRA assay. Unmethylated and methylated DNAs are shown as U and M, respectively. PCR products that were not cut by the restriction enzymes are shown as UC. (a) PXR promoter methylation was examined in the 6 cell lines after treatment with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). (b) Methylation status of the CpG island of the PXR exon 3 region in 6 of the 18 primary colorectal cancers (numbers are those for particular cases). DNA samples from normal and cancerous tissue are shown as N and C, respectively. Note that a high degree of methylation was detected in cancerous, but not normal, tissue. (c) Methylation status of the CpG-rich sequence of the PXR promoter region in 8 primary colorectal cancers. Note that a lower degree of methylation was detected in cancerous tissue compared to normal tissue.\nThe PXR promoter methylation was not associated with the profile of microsatellite instability (MSI) or other methylated genes (Table 2). 
This suggested that altered PXR methylation was accumulated during colorectal tumorigenesis, independent of these genetic and epigenetic events.\nProfile of Microsatellite instability (MSI), mismatch repair (MMR) deficiency and promoter methylation in 6 colon cancer cell lines.\nSummary of previous studies [27,28] and the present study (PXR methylation).\n*Mismatch repair deficiency was associated with mutations or defects in mRNA transcripts.\n**Methylation status was defined as unmethylated (U) or methylated (M).\nThe bisulfite direct sequencing detected no DNA methylation in the VDR or PRMT1 promoter sequences in the 6 cell lines. Partial methylation of the CYP3A4 5'-distal region and full methylation of the PXR exon 3 region were equally observed among the 6 cell lines. Therefore, the different expression profiles of the two groups cannot be explained by methylation of these sequences. On the other hand, the CpG-rich sequence of the PXR promoter showed a different methylation status among the 6 cell lines. Interestingly, the degree of methylation of the PXR promoter (segments 1 and 2) in the high expression cells (LS180 and LoVo) was lower than that in the low expression cells (Caco-2, HT29, HCT116 and SW48) (Figure 4). Therefore, the details of the methylation status of the PXR promoter were estimated on individual DNA strands after subcloning (Figure 5). We found that an HpyCH4IV site within segment 2 was a suitable marker for the COBRA assay to assess PXR promoter methylation, because the degree of methylation of this site showed inverse correlation with the levels of CYP3A4 and PXR expression.\nDNA methylation profile of PXR as detected by direct sequencing. Methylation status of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N) detected by bisulfite direct sequencing. Open and closed circles represent unmethylated and fully methylated CpG sites, respectively. Half-closed circles represent partially methylated CpG sites. The methylation status of the CpG island (around exon 3) and CpG-rich promoter sequence (segments 1, 2, 3 and 4) are shown in the upper and lower panels, respectively.\nDetailed DNA methylation profile of PXR segment 2. Detailed methylation profile of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N). Methylation of each CpG site was estimated by bisulfite sequencing on 7 or 8 individual DNA strands after subcloning. Open and closed circles represent unmethylated and methylated CpG sites, respectively. A 'TATA' indicates a putative TATA box. A curved arrow indicates a transcription start site. 'HpyCH4IV' indicates restriction site using the COBRA assay.\nThe COBRA assay demonstrated that the treatment with 5-aza-dC resulted in decreased amounts of methylation of the PXR promoter in a dose-dependent manner in the low expression cells (Figure 6a). Therefore, the magnitude of methylation was likely to be associated with the decreased levels of CYP3A4 and PXR gene expression in all 6 cell lines. These results suggest that the PXR gene was transcriptionally silenced by methylation of the promoter CpG sites and that the downregulation of the PXR protein resulted in decreased expression of the CYP3A4 gene.\nMethylation status of the PXR promoter and exon 3 regions, as detected by the COBRA assay. Unmethylated and methylated DNAs are shown as U and M, respectively. 
PCR products that were not cut by the restriction enzymes are shown as UC. (a) PXR promoter methylation was examined in the 6 cell lines after treatment with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). (b) Methylation status of the CpG island of the PXR exon 3 region in 6 of the 18 primary colorectal cancers (numbers are those for particular cases). DNA samples from normal and cancerous tissue are shown as N and C, respectively. Note that a high degree of methylation was detected in cancerous, but not normal, tissue. (c) Methylation status of the CpG-rich sequence of the PXR promoter region in 8 primary colorectal cancers. Note that a lower degree of methylation was detected in cancerous tissue compared to normal tissue.\nThe PXR promoter methylation was not associated with the profile of microsatellite instability (MSI) or other methylated genes (Table 2). This suggested that altered PXR methylation was accumulated during colorectal tumorigenesis, independent of these genetic and epigenetic events.\nProfile of Microsatellite instability (MSI), mismatch repair (MMR) deficiency and promoter methylation in 6 colon cancer cell lines.\nSummary of previous studies [27,28] and the present study (PXR methylation).\n*Mismatch repair deficiency was associated with mutations or defects in mRNA transcripts.\n**Methylation status was defined as unmethylated (U) or methylated (M).\n[SUBTITLE] DNA methylation status of colon cancer tissue samples [SUBSECTION] No or slight methylation of the PXR exon 3 region was detected in the normal colon tissue samples by direct sequencing and the COBRA assay. The levels of methylation in the cancer tissues were mostly higher than those in the paired adjacent normal tissues (Figure 6b). By contrast, the CpG-rich sequence of the PXR promoter was partially methylated in normal tissues, and the degree of methylation was decreased in the paired cancer tissues (Figure 6c). The decreased level of the PXR promoter methylation suggested increased expression of the PXR gene during colorectal carcinogenesis. There were no differences in the clinicopathological findings between the colorectal cancers with PXR methylation and those without methylation.\nNo or slight methylation of the PXR exon 3 region was detected in the normal colon tissue samples by direct sequencing and the COBRA assay. The levels of methylation in the cancer tissues were mostly higher than those in the paired adjacent normal tissues (Figure 6b). By contrast, the CpG-rich sequence of the PXR promoter was partially methylated in normal tissues, and the degree of methylation was decreased in the paired cancer tissues (Figure 6c). The decreased level of the PXR promoter methylation suggested increased expression of the PXR gene during colorectal carcinogenesis. There were no differences in the clinicopathological findings between the colorectal cancers with PXR methylation and those without methylation.", "Real-time PCR analyses revealed that the basal levels of CYP3A4, PXR (exon 5-6), PXR (exon 1a-2) and VDR mRNA were heterogeneous among the 6 cell lines examined (Figure 2). The 6 cell lines were then classified into two groups (high or low expression cells) based on the basal level of PXR and CYP3A4 mRNA, because the 6 cell lines always showed either high PXR and CYP3A4 expression or low PXR and CYP3A4 expression. 
The levels of CYP3A4 and PXR transcripts on high expression cells (LS180 and LoVo) were 7- to 35-fold and 40- to 5,000-fold higher than those on lower expression cells (Caco-2, HT29, HCT116 and SW48), respectively. These two groups also exhibited a difference in the basal level of the VDR transcript, although this difference was not marked (3.5- to 6-fold). There was strong correlation of the levels of the transcripts between PXR (exon 5-6) and PXR (exon 1a-2) throughout all analyses.\nBasal expression profile of colon cancer cell lines. Basal levels of (a), PXR exon 5-6; (b), PXR exon 1a-2; (c), CYP3A4; and (d), VDR transcripts in LS180, LoVo, Caco-2, HT29, HCT116 and SW48 cells. The vertical axis indicates a relative transcript level (ratio to LS180 cells). The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples (0.00257 and 0.0399, respectively).", "In order to determine whether DNA methylation is involved in the transcriptional regulation of these genes, the 6 cell lines were treated with DNA demethylating agent (5-aza-dC). The treatment with 5-aza-dC induced a clear increase in CYP3A4 (28- to 116-fold) and PXR (3- to 10-fold) transcripts in a dose-dependent manner in the low expression cells, but not in the high expression cells (Figure 3). In particular, the CYP3A4 transcript of the low expression cells eventually reached the levels seen in the high expression cells by 5-aza-dC treatment. In contrast, 5-aza-dC had no marked effect on VDR expression in any of the cell lines (0.5- to 1.2-fold increase). These results suggested that DNA methylation is involved in the regulation of CYP3A4 and PXR, but not VDR, in the low expression cell lines.\nExpression profile of colon cancer cell lines after 5-aza-dC treatment. Levels of PXR, VDR and CYP3A4 transcripts in (a), LS180; (b), Caco-2; (c), HCT116; (d), LoVo; (e), HT29; and (f), SW48 cells. Cells were treated with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). The vertical axis indicates a relative transcript level (ratio to cells without 5-aza-dC treatment). Numbers in italics indicate relative transcript levels that were over the maximum scale on the vertical axis. The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples.", "The bisulfite direct sequencing detected no DNA methylation in the VDR or PRMT1 promoter sequences in the 6 cell lines. Partial methylation of the CYP3A4 5'-distal region and full methylation of the PXR exon 3 region were equally observed among the 6 cell lines. Therefore, the different expression profiles of the two groups cannot be explained by methylation of these sequences. On the other hand, the CpG-rich sequence of the PXR promoter showed a different methylation status among the 6 cell lines. Interestingly, the degree of methylation of the PXR promoter (segments 1 and 2) in the high expression cells (LS180 and LoVo) was lower than that in the low expression cells (Caco-2, HT29, HCT116 and SW48) (Figure 4). Therefore, the details of the methylation status of the PXR promoter were estimated on individual DNA strands after subcloning (Figure 5). 
We found that an HpyCH4IV site within segment 2 was a suitable marker for the COBRA assay to assess PXR promoter methylation, because the degree of methylation of this site showed inverse correlation with the levels of CYP3A4 and PXR expression.\nDNA methylation profile of PXR as detected by direct sequencing. Methylation status of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N) detected by bisulfite direct sequencing. Open and closed circles represent unmethylated and fully methylated CpG sites, respectively. Half-closed circles represent partially methylated CpG sites. The methylation status of the CpG island (around exon 3) and CpG-rich promoter sequence (segments 1, 2, 3 and 4) are shown in the upper and lower panels, respectively.\nDetailed DNA methylation profile of PXR segment 2. Detailed methylation profile of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N). Methylation of each CpG site was estimated by bisulfite sequencing on 7 or 8 individual DNA strands after subcloning. Open and closed circles represent unmethylated and methylated CpG sites, respectively. A 'TATA' indicates a putative TATA box. A curved arrow indicates a transcription start site. 'HpyCH4IV' indicates restriction site using the COBRA assay.\nThe COBRA assay demonstrated that the treatment with 5-aza-dC resulted in decreased amounts of methylation of the PXR promoter in a dose-dependent manner in the low expression cells (Figure 6a). Therefore, the magnitude of methylation was likely to be associated with the decreased levels of CYP3A4 and PXR gene expression in all 6 cell lines. These results suggest that the PXR gene was transcriptionally silenced by methylation of the promoter CpG sites and that the downregulation of the PXR protein resulted in decreased expression of the CYP3A4 gene.\nMethylation status of the PXR promoter and exon 3 regions, as detected by the COBRA assay. Unmethylated and methylated DNAs are shown as U and M, respectively. PCR products that were not cut by the restriction enzymes are shown as UC. (a) PXR promoter methylation was examined in the 6 cell lines after treatment with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). (b) Methylation status of the CpG island of the PXR exon 3 region in 6 of the 18 primary colorectal cancers (numbers are those for particular cases). DNA samples from normal and cancerous tissue are shown as N and C, respectively. Note that a high degree of methylation was detected in cancerous, but not normal, tissue. (c) Methylation status of the CpG-rich sequence of the PXR promoter region in 8 primary colorectal cancers. Note that a lower degree of methylation was detected in cancerous tissue compared to normal tissue.\nThe PXR promoter methylation was not associated with the profile of microsatellite instability (MSI) or other methylated genes (Table 2). 
This suggested that altered PXR methylation was accumulated during colorectal tumorigenesis, independent of these genetic and epigenetic events.\nProfile of Microsatellite instability (MSI), mismatch repair (MMR) deficiency and promoter methylation in 6 colon cancer cell lines.\nSummary of previous studies [27,28] and the present study (PXR methylation).\n*Mismatch repair deficiency was associated with mutations or defects in mRNA transcripts.\n**Methylation status was defined as unmethylated (U) or methylated (M).", "No or slight methylation of the PXR exon 3 region was detected in the normal colon tissue samples by direct sequencing and the COBRA assay. The levels of methylation in the cancer tissues were mostly higher than those in the paired adjacent normal tissues (Figure 6b). By contrast, the CpG-rich sequence of the PXR promoter was partially methylated in normal tissues, and the degree of methylation was decreased in the paired cancer tissues (Figure 6c). The decreased level of the PXR promoter methylation suggested increased expression of the PXR gene during colorectal carcinogenesis. There were no differences in the clinicopathological findings between the colorectal cancers with PXR methylation and those without methylation.", "In the present study, 6 colon cancer cell lines showed heterogeneous mRNA expression profiles and were able to be classified into two groups with respect to their basal levels of the PXR/CYP3A4 transcripts (high expression cells, LS180 and LoVo; low expression cells Caco-2, HT29, HCT116, and SW48). These results are consistent with previous studies, in which LS180 and Caco-2 cells were characterized as PXR-sufficient and PXR-deficient cells, respectively [16,17].\nGenetic polymorphisms in the regions that regulate transcription are often a major cause of inter-individual variability in the levels of transcripts. However, such polymorphisms have not been frequently observed in the human PXR or CYP3A4 genes, implying that certain epigenetic mechanisms are involved in the regulation of PXR and CYP3A4 expression. We found that the CpG-rich sequence within the PXR promoter region is methylated to different levels in high and low expression cells. Importantly, the magnitude of this promoter methylation was inversely associated with the levels of PXR and CYP3A4 expression. Furthermore, the levels of the PXR and CYP3A4 transcripts in low expression cells were mostly restored when DNA methylation was reversed by treatment with 5-aza-dC. Although this CpG-rich sequence did not strictly satisfy the criteria for a CpG island, the most affected CpG sites were located in a highly restricted region (segments 1 and 2) and these CpG sites were proximal to several putative transcription factor binding sites (such as Sp1 and hepatocyte nuclear factor 4 alpha) [18-20]. Therefore, PXR gene expression is most likely transcriptionally regulated by methylation of these promoter CpG sites.\nCYP3A4 is transactivated by functional interplays with VDR-RXRα or PXR-PRMT1 [20-22]. CpG-island methylation of the VDR or PRMT1 promoter was not detected in these cell lines and the mRNA expression of VDR was not affected by 5-aza-dC treatment. These observations imply that DNA methylation of PXR, but not VDR or PRMT1, resulted in downregulation of the CYP3A4 mRNA in these colon cancer cells.\nIt is still uncertain whether re-expression of the CYP3A4 by 5-aza-dC treatment was due to the re-expression of some other genes than PXR. 
However, PXR must be a candidate for methylation and reduced expression of the PXR by promoter methylation, even if partially, contributes to downregulation of the CYP3A4. Indeed, several studies demonstrated that selective downregulation of the PXR by siRNA reduces the basal level of the CYP3A4 transcripts in a dose-dependent manner [23].\nCpG islands in the exon 3 region were fully methylated throughout the cancer cell lines and most cancer tissues. Even after treatment of the cell lines with 5-aza-dC, no increase in the PXR mRNA levels was observed in the high-PXR expressing cell lines, LS180 and LoVo. This strongly suggests that in human colon cancer cells, the methylated CpG islands in the exon 3 play a much less role in the epigenetic regulation of PXR, instead, promoter methylation plays a pivotal role in its regulation. In contrast, Misawa et al. previously demonstrated a distinct methylation profile of neuroblastoma cells, in which mRNA expression of the PXR splicing variant (exon 1a-2) was specifically regulated by the methylation of the exon 3 region rather than promoter methylation [9]. We found no marked difference in the levels of the PXR (exon 5-6) and PXR (exon 1a-2) transcripts in the colon cancer cells. Therefore, a tissue-specific DNA methylation profile is most likely involved in the transcriptional regulation of the PXR gene.\nDNA methylation of the PXR promoter was detected in only 1 of the 18 colorectal cancer tissue samples. The results reflect the genuine DNA methylation status, because we examined pure cancerous and normal epithelia using crypt isolation and directly compared the DNA methylation status between paired epithelia. Therefore, a low level of PXR promoter methylation, which was observed in the high expression cells, appears to be a common feature of colorectal cancers. We also demonstrated that the level of PXR promoter methylation is decreased during carcinogenesis, since paired adjacent normal tissues mostly showed higher levels of PXR promoter methylation. We could not directly compare DNA methylation status with the PXR mRNA expression, because crypt isolation provided ethanol-fixed epithelia and it was difficult to obtain fresh mRNA samples. However, most cancer tissues exhibited a pattern of promoter methylation quite similar to that observed in cultured cells with high expression (LS180 and LoVo) (Figures 5 and 6c). Therefore, the association between promoter methylation and transcriptional silencing of the PXR gene is most likely applicable to primary colorectal cancers. As observed in the colon cancer cell lines, the decreased level of PXR promoter methylation most likely led to increased expression of PXR mRNA in the colorectal cancer tissues. These results are consistent with a recent study that showed strong expression of PXR mRNA in colon cancers, with great variability [24]. In contrast, Ouyang et al. found that PXR expression was lost or greatly diminished in many colon cancers using histochemical analysis [25]. Although the role of the altered PXR expression in colorectal carcinogenesis remains to be clarified, Zhou et al. demonstrated that PXR plays an antiapoptotic role in colon carcinogenesis by induction of multiple antiapoptotic genes [26].\nWe cannot rule out the possibility that alterations of the PXR methylation levels play direct roles in tumorigenesis, because certain oncogenes or tumor suppressor genes may be trascriptionally regulated by PXR. 
Partial methylation of the PXR observed in adjacent normal mucosa may be associated with \"field defect\" for carcinogenesis. However, numerous studies have demonstrated that ligand-binding activation or siRNA-mediated silencing of the PXR can affect the activity of metabolic enzymes including CYP3A4, without changes in the cell proliferation capacity. Therefore, we think that altered level of the PXR methylation does not provide a selective growth advantage during colorectal cancer progression.\nInterestingly, overexpression of PXR in the colorectal cancer tissue samples was correlated with an increase in UDP glucuronosyl transferases UGT1A1, UGT1A9 and UGT1A10, and led to a marked chemoresistance to the active metabolite of irinotecan (CPT-11) [24]. In addition, CYP3A4 and p-glycoprotein, which are transcriptionally activated by PXR, play important roles in intestinal first-pass metabolism and determine a drug's bioavailability. We hypothesized that PXR may play a key role in the colon cancer cell response to anticancer drugs by modulating expression of drug metabolizing enzymes and transporters including UGT1A, CYP3A4 and p-glycoprotein. Therefore, DNA methylation of the PXR promoter might be a good predictor of chemotherapy outcome and toxicity in colorectal cancers.", "PXR promoter methylation is involved in the regulation of intestinal PXR and CYP3A4 expression. This methylation might be associated with the inter-individual variability of the drug response of colon cancer cells.", "PXR: pregnane X receptor; CYPs: cytochrome P450s; VDR: vitamin D receptor; PRMT1: protein arginine methyltrasferase 1; 5-aza-dC: 5-aza-2'-deoxycytidine; COBRA: combined bisulfite restriction analysis", "The authors declare that they have no competing interests.", "WH designed this study and carried out the cell culture, molecular genetic studies and drafted the manuscript. GT and JT participated in the data analysis. TS performed crypt isolation and pathological diagnosis. KO and GW performed the surgeries and obtained informed consent from the patients. SO participated in the design of this study and helped to draft the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/81/prepub\n" ]
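The methods in this record note that the PXR promoter sequence is CpG-rich but "did not strictly satisfy the criteria for a CpG island". Such criteria are conventionally expressed in terms of sequence length, GC content, and the observed/expected CpG ratio; for example, the Takai–Jones thresholds used by the CpG Island Searcher are roughly length ≥500 bp, GC ≥55% and Obs/Exp CpG ≥0.65. The sketch below computes the two ratios for an arbitrary sequence; the thresholds are quoted from the general literature rather than from this article, and the example sequence is synthetic.

```python
def cpg_island_stats(seq):
    """Return (GC fraction, observed/expected CpG ratio) for a DNA sequence.

    Obs/Exp CpG = (#CpG * length) / (#C * #G), the ratio conventionally used
    in CpG-island definitions (Gardiner-Garden & Frommer; Takai & Jones).
    """
    seq = seq.upper()
    n = len(seq)
    c = seq.count("C")
    g = seq.count("G")
    cpg = seq.count("CG")
    gc_fraction = (c + g) / n
    obs_exp = (cpg * n) / (c * g) if c and g else 0.0
    return gc_fraction, obs_exp

# Synthetic 40-bp example only; a real scan would slide a window of >=500 bp.
gc, ratio = cpg_island_stats("CGCGATCGGGCCGCGTACGCGGCGATCGCGTTCGCGGCCG")
print("GC fraction = %.2f, Obs/Exp CpG = %.2f" % (gc, ratio))
```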
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Cell lines and tissue samples", "Treatment with 5-aza-2'-deoxycytidine", "Quantitative real-time PCR analysis of basal CYP3A4, PXR and VDR mRNA levels", "DNA methylation analysis of the 6 colon cancer cell lines and 18 colon cancer samples", "Results", "Basal mRNA levels of the CYP3A4, PXR and VDR genes in the 6 colon cancer cell lines", "Increased mRNA expression by 5-aza-dC treatment", "DNA methylation status of the colon cancer cell lines", "DNA methylation status of colon cancer tissue samples", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Nuclear receptor families play a pivotal role in regulating genes involved in drug metabolism and disposition. Pregnane X receptor (PXR, also termed SXR, PAR, and NR1I2 as its gene name) is a crucial regulator of various phase I and phase II drug metabolizing enzymes and drug transporters. PXR is expressed in liver, small intestine and other organs. PXR, with a number of therapeutic drugs and other xenobiotics as its ligands, dimerizes with retinoid X receptor α (RXRα). The ligand-PXR-RXRα complex binds to promoter and enhancer elements located upstream of cytochrome P450s (CYPs) 3A and 2C family members, UDP-glucuronosyltransferases (UGTs), sulfotransferases (SULTs), glutathione S-transferases (GSTs), and ATP binding cassette (ABC) drug transporters (reviewed in [1-3]).\nWide inter-individual variability has been documented in the expression of hepatic CYP3A4 with respect to basal and PXR-inducible activities. The genetic variability of PXR and CYP3A4 is not sufficiently frequent to explain the apparent inter-individual variability [4]. The inter-individual variability in basal hepatic CYP3A4 expression may include variability in PXR expression, as PXR is activated by endogenous steroid hormones and bile acids. Inter-individual variability of CYP3A4 expression is also observed in human intestine. Interestingly, a report using paired tissue samples of livers and small intestines indicated no observed correlation between the hepatic and small intestine CYP3A4 expression levels [5]. Another research group reported that a majority of CYP3A4 resided in the proximal region of the small intestine, and that the CYP3A4 protein levels decreased dramatically in the distal small intestine [6]. These observations suggest the CYP3A4 is regulated through tissue specific epigenetic regulation in normal tissue. The aim of the present study is to find possible and not yet fully elucidated mechanisms which regulate heterogeneous basal PXR and CYP3A4 expression and activity in cancerous tissues as well. Indeed, several studies previously found a fraction of genes that exhibited inter-individual differences in transcript levels associated with DNA methylation status [7,8].\nDNA methylation of the CpG-rich sequence around exon 3 of the PXR gene is involved in the epigenetic regulation of PXR in human neuroblastoma [9]. However, epigenetic regulation of PXR and CYP3A4 in human gut is poorly understood. In order to determine whether epigenetic mechanisms function in PXR and CYP3A4 regulation and intestinal metabolism, we examined DNA methylation and mRNA expression of several candidate genes on the PXR/CYP3A4 regulatory pathway in human colon cancer cell lines and tissues.", "[SUBTITLE] Cell lines and tissue samples [SUBSECTION] Human colorectal cancer cell lines LS180, Caco-2, HT29, HCT116, DLD-1, LoVo, SW48 and SW620 were purchased from DS Pharma Biomedical Co., Ltd. (Osaka, Japan). LS180 cells were cultured in E-MEM medium (Invitrogen Corp., Carlsbad, CA) at 37°C under an atmosphere of 5% CO2. The other cells were cultured under conditions described elsewhere [10].\nEighteen pairs of cancerous and adjacent normal mucosa were excised from surgical specimens of colorectal cancers. The cancerous and normal epithelia were separated from stroma using crypt isolation [11]. All samples were selected from the same series of cancers as we used in a previous study [10]. All 18 patients with colorectal cancers did not receive chemotherapy before surgical resection. 
The study protocol was approved by ethics committee of Iwate Medical University (molecular analysis of gastrointestinal tumors and the surrounding mucosa; reference number, H21-140).\nHuman colorectal cancer cell lines LS180, Caco-2, HT29, HCT116, DLD-1, LoVo, SW48 and SW620 were purchased from DS Pharma Biomedical Co., Ltd. (Osaka, Japan). LS180 cells were cultured in E-MEM medium (Invitrogen Corp., Carlsbad, CA) at 37°C under an atmosphere of 5% CO2. The other cells were cultured under conditions described elsewhere [10].\nEighteen pairs of cancerous and adjacent normal mucosa were excised from surgical specimens of colorectal cancers. The cancerous and normal epithelia were separated from stroma using crypt isolation [11]. All samples were selected from the same series of cancers as we used in a previous study [10]. All 18 patients with colorectal cancers did not receive chemotherapy before surgical resection. The study protocol was approved by ethics committee of Iwate Medical University (molecular analysis of gastrointestinal tumors and the surrounding mucosa; reference number, H21-140).\n[SUBTITLE] Treatment with 5-aza-2'-deoxycytidine [SUBSECTION] LS180, LoVo, Caco-2, HCT116, HT29 and SW48 cells were seeded at a concentration of 1 × 105 cells on a 100 mm dish. The next day, treatment of cells with 0, 0.5 or 5 μM 5-aza-2'-deoxycytidine (5-aza-dC) (Sigma Chemical, St. Louis, MO) was started, and 5-aza-dC was removed by changing the medium 24 h later. The cells were harvested 4 days after removal of 5-aza-dC for DNA and RNA extraction.\nLS180, LoVo, Caco-2, HCT116, HT29 and SW48 cells were seeded at a concentration of 1 × 105 cells on a 100 mm dish. The next day, treatment of cells with 0, 0.5 or 5 μM 5-aza-2'-deoxycytidine (5-aza-dC) (Sigma Chemical, St. Louis, MO) was started, and 5-aza-dC was removed by changing the medium 24 h later. The cells were harvested 4 days after removal of 5-aza-dC for DNA and RNA extraction.\n[SUBTITLE] Quantitative real-time PCR analysis of basal CYP3A4, PXR and VDR mRNA levels [SUBSECTION] Total RNA extraction, cDNA synthesis and real-time PCR were carried out on the cells prepared above, by the same methods as described previously [10]. mRNA levels of the CYP3A4 (exon 3-4, Hs01546612_m1), PXR (exon 5-6, Hs00243666_m1) and vitamin D receptor (VDR) (exon 10-11, Hs01045840_m1) were evaluated by TaqMan Gene Expression Assays (Applied Biosystems, Foster City, CA). In addition, the mRNA level of the PXR splicing variants (exon 1a-2) was also examined by SYBR Green assays using the following primers: 5'-GATTGTTCAAAGTGGACCCC-3'(forward) and 5'-TCCAGGAACAGACTCTGTGT-3'. The mRNA level of the above target genes was normalized to β-actin mRNA [10]. All samples were analyzed in duplicate and average quantities of the gene transcripts were used for calculation. Deviation of the mRNA level of each sample was within 7% of the average.\nTotal RNA extraction, cDNA synthesis and real-time PCR were carried out on the cells prepared above, by the same methods as described previously [10]. mRNA levels of the CYP3A4 (exon 3-4, Hs01546612_m1), PXR (exon 5-6, Hs00243666_m1) and vitamin D receptor (VDR) (exon 10-11, Hs01045840_m1) were evaluated by TaqMan Gene Expression Assays (Applied Biosystems, Foster City, CA). In addition, the mRNA level of the PXR splicing variants (exon 1a-2) was also examined by SYBR Green assays using the following primers: 5'-GATTGTTCAAAGTGGACCCC-3'(forward) and 5'-TCCAGGAACAGACTCTGTGT-3'. 
The mRNA level of the above target genes was normalized to β-actin mRNA [10]. All samples were analyzed in duplicate and average quantities of the gene transcripts were used for calculation. Deviation of the mRNA level of each sample was within 7% of the average.\n[SUBTITLE] DNA methylation analysis of the 6 colon cancer cell lines and 18 colon cancer samples [SUBSECTION] We found CpG islands within the PXR (around exon 3 region), VDR (promoter region) and protein arginine methyltrasferase 1(PRMT1) (promoter region) genes using the CpG Island Searcher program [12,13]. A CpG island was also detected in the 5' untranslated region (UTR) of the CYP3A4 gene (approximately 25 kb distal to the transcription start site). We also found a CpG-rich sequence in the promoter region of the PXR gene, although this sequence did not strictly satisfy the criteria for a CpG island [13,14]. The location of all CpG sequences examined in this study are shown in Figure 1. Genomic DNA extracted from the 6 cell lines and the 18 pairs of normal and colon cancer tissue samples was modified by sodium bisulfite, and then each segment including a CpG island or CpG-rich sequence was amplified by PCR and subjected to direct sequencing. The methylation status of the PXR promoter sequence was also estimated by bisulfite sequencing on at least 7 individual DNA strands after subcloning of PCR products into the pCR4-TOPO vector using the TOPO TA Cloning Kit for Sequencing (Invitrogen). A relative methylation level of the PXR promoter was visually determined by the density of each HpyCH4IV-digested band using combined bisulfite restriction analysis (COBRA) [15]. The methylation status of the PXR exon 3 region was also examined by the COBRA assay using an HhaI digestion. All primer sequences are listed in Table 1.\nLocation of the CpG sequences examined. CpG sites (vertical bars), CpG islands identified by CpG island searcher (horizontal thick lines) and transcription start sites within the 5' prime region (curved arrows) are shown in the (a), PXR; (b), CYP3A4; (c), VDR; and (d), PRMT1 genes. The segments indicated by double-pointed arrows were examined for DNA methylation by bisulfite direct sequencing.\nPrimers used for DNA methylation analysis.\nF, forward primer; R, reverse primer\n*Dimethyl sulfoxide (5%) was added to the PCR mixture.\nWe found CpG islands within the PXR (around exon 3 region), VDR (promoter region) and protein arginine methyltrasferase 1(PRMT1) (promoter region) genes using the CpG Island Searcher program [12,13]. A CpG island was also detected in the 5' untranslated region (UTR) of the CYP3A4 gene (approximately 25 kb distal to the transcription start site). We also found a CpG-rich sequence in the promoter region of the PXR gene, although this sequence did not strictly satisfy the criteria for a CpG island [13,14]. The location of all CpG sequences examined in this study are shown in Figure 1. Genomic DNA extracted from the 6 cell lines and the 18 pairs of normal and colon cancer tissue samples was modified by sodium bisulfite, and then each segment including a CpG island or CpG-rich sequence was amplified by PCR and subjected to direct sequencing. The methylation status of the PXR promoter sequence was also estimated by bisulfite sequencing on at least 7 individual DNA strands after subcloning of PCR products into the pCR4-TOPO vector using the TOPO TA Cloning Kit for Sequencing (Invitrogen). 
A relative methylation level of the PXR promoter was visually determined by the density of each HpyCH4IV-digested band using combined bisulfite restriction analysis (COBRA) [15]. The methylation status of the PXR exon 3 region was also examined by the COBRA assay using an HhaI digestion. All primer sequences are listed in Table 1.\nLocation of the CpG sequences examined. CpG sites (vertical bars), CpG islands identified by CpG island searcher (horizontal thick lines) and transcription start sites within the 5' prime region (curved arrows) are shown in the (a), PXR; (b), CYP3A4; (c), VDR; and (d), PRMT1 genes. The segments indicated by double-pointed arrows were examined for DNA methylation by bisulfite direct sequencing.\nPrimers used for DNA methylation analysis.\nF, forward primer; R, reverse primer\n*Dimethyl sulfoxide (5%) was added to the PCR mixture.", "Human colorectal cancer cell lines LS180, Caco-2, HT29, HCT116, DLD-1, LoVo, SW48 and SW620 were purchased from DS Pharma Biomedical Co., Ltd. (Osaka, Japan). LS180 cells were cultured in E-MEM medium (Invitrogen Corp., Carlsbad, CA) at 37°C under an atmosphere of 5% CO2. The other cells were cultured under conditions described elsewhere [10].\nEighteen pairs of cancerous and adjacent normal mucosa were excised from surgical specimens of colorectal cancers. The cancerous and normal epithelia were separated from stroma using crypt isolation [11]. All samples were selected from the same series of cancers as we used in a previous study [10]. All 18 patients with colorectal cancers did not receive chemotherapy before surgical resection. The study protocol was approved by ethics committee of Iwate Medical University (molecular analysis of gastrointestinal tumors and the surrounding mucosa; reference number, H21-140).", "LS180, LoVo, Caco-2, HCT116, HT29 and SW48 cells were seeded at a concentration of 1 × 105 cells on a 100 mm dish. The next day, treatment of cells with 0, 0.5 or 5 μM 5-aza-2'-deoxycytidine (5-aza-dC) (Sigma Chemical, St. Louis, MO) was started, and 5-aza-dC was removed by changing the medium 24 h later. The cells were harvested 4 days after removal of 5-aza-dC for DNA and RNA extraction.", "Total RNA extraction, cDNA synthesis and real-time PCR were carried out on the cells prepared above, by the same methods as described previously [10]. mRNA levels of the CYP3A4 (exon 3-4, Hs01546612_m1), PXR (exon 5-6, Hs00243666_m1) and vitamin D receptor (VDR) (exon 10-11, Hs01045840_m1) were evaluated by TaqMan Gene Expression Assays (Applied Biosystems, Foster City, CA). In addition, the mRNA level of the PXR splicing variants (exon 1a-2) was also examined by SYBR Green assays using the following primers: 5'-GATTGTTCAAAGTGGACCCC-3'(forward) and 5'-TCCAGGAACAGACTCTGTGT-3'. The mRNA level of the above target genes was normalized to β-actin mRNA [10]. All samples were analyzed in duplicate and average quantities of the gene transcripts were used for calculation. Deviation of the mRNA level of each sample was within 7% of the average.", "We found CpG islands within the PXR (around exon 3 region), VDR (promoter region) and protein arginine methyltrasferase 1(PRMT1) (promoter region) genes using the CpG Island Searcher program [12,13]. A CpG island was also detected in the 5' untranslated region (UTR) of the CYP3A4 gene (approximately 25 kb distal to the transcription start site). 
We also found a CpG-rich sequence in the promoter region of the PXR gene, although this sequence did not strictly satisfy the criteria for a CpG island [13,14]. The location of all CpG sequences examined in this study are shown in Figure 1. Genomic DNA extracted from the 6 cell lines and the 18 pairs of normal and colon cancer tissue samples was modified by sodium bisulfite, and then each segment including a CpG island or CpG-rich sequence was amplified by PCR and subjected to direct sequencing. The methylation status of the PXR promoter sequence was also estimated by bisulfite sequencing on at least 7 individual DNA strands after subcloning of PCR products into the pCR4-TOPO vector using the TOPO TA Cloning Kit for Sequencing (Invitrogen). A relative methylation level of the PXR promoter was visually determined by the density of each HpyCH4IV-digested band using combined bisulfite restriction analysis (COBRA) [15]. The methylation status of the PXR exon 3 region was also examined by the COBRA assay using an HhaI digestion. All primer sequences are listed in Table 1.\nLocation of the CpG sequences examined. CpG sites (vertical bars), CpG islands identified by CpG island searcher (horizontal thick lines) and transcription start sites within the 5' prime region (curved arrows) are shown in the (a), PXR; (b), CYP3A4; (c), VDR; and (d), PRMT1 genes. The segments indicated by double-pointed arrows were examined for DNA methylation by bisulfite direct sequencing.\nPrimers used for DNA methylation analysis.\nF, forward primer; R, reverse primer\n*Dimethyl sulfoxide (5%) was added to the PCR mixture.", "[SUBTITLE] Basal mRNA levels of the CYP3A4, PXR and VDR genes in the 6 colon cancer cell lines [SUBSECTION] Real-time PCR analyses revealed that the basal levels of CYP3A4, PXR (exon 5-6), PXR (exon 1a-2) and VDR mRNA were heterogeneous among the 6 cell lines examined (Figure 2). The 6 cell lines were then classified into two groups (high or low expression cells) based on the basal level of PXR and CYP3A4 mRNA, because the 6 cell lines always showed either high PXR and CYP3A4 expression or low PXR and CYP3A4 expression. The levels of CYP3A4 and PXR transcripts on high expression cells (LS180 and LoVo) were 7- to 35-fold and 40- to 5,000-fold higher than those on lower expression cells (Caco-2, HT29, HCT116 and SW48), respectively. These two groups also exhibited a difference in the basal level of the VDR transcript, although this difference was not marked (3.5- to 6-fold). There was strong correlation of the levels of the transcripts between PXR (exon 5-6) and PXR (exon 1a-2) throughout all analyses.\nBasal expression profile of colon cancer cell lines. Basal levels of (a), PXR exon 5-6; (b), PXR exon 1a-2; (c), CYP3A4; and (d), VDR transcripts in LS180, LoVo, Caco-2, HT29, HCT116 and SW48 cells. The vertical axis indicates a relative transcript level (ratio to LS180 cells). The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples (0.00257 and 0.0399, respectively).\nReal-time PCR analyses revealed that the basal levels of CYP3A4, PXR (exon 5-6), PXR (exon 1a-2) and VDR mRNA were heterogeneous among the 6 cell lines examined (Figure 2). 
The 6 cell lines were then classified into two groups (high or low expression cells) based on the basal level of PXR and CYP3A4 mRNA, because the 6 cell lines always showed either high PXR and CYP3A4 expression or low PXR and CYP3A4 expression. The levels of CYP3A4 and PXR transcripts on high expression cells (LS180 and LoVo) were 7- to 35-fold and 40- to 5,000-fold higher than those on lower expression cells (Caco-2, HT29, HCT116 and SW48), respectively. These two groups also exhibited a difference in the basal level of the VDR transcript, although this difference was not marked (3.5- to 6-fold). There was strong correlation of the levels of the transcripts between PXR (exon 5-6) and PXR (exon 1a-2) throughout all analyses.\nBasal expression profile of colon cancer cell lines. Basal levels of (a), PXR exon 5-6; (b), PXR exon 1a-2; (c), CYP3A4; and (d), VDR transcripts in LS180, LoVo, Caco-2, HT29, HCT116 and SW48 cells. The vertical axis indicates a relative transcript level (ratio to LS180 cells). The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples (0.00257 and 0.0399, respectively).\n[SUBTITLE] Increased mRNA expression by 5-aza-dC treatment [SUBSECTION] In order to determine whether DNA methylation is involved in the transcriptional regulation of these genes, the 6 cell lines were treated with DNA demethylating agent (5-aza-dC). The treatment with 5-aza-dC induced a clear increase in CYP3A4 (28- to 116-fold) and PXR (3- to 10-fold) transcripts in a dose-dependent manner in the low expression cells, but not in the high expression cells (Figure 3). In particular, the CYP3A4 transcript of the low expression cells eventually reached the levels seen in the high expression cells by 5-aza-dC treatment. In contrast, 5-aza-dC had no marked effect on VDR expression in any of the cell lines (0.5- to 1.2-fold increase). These results suggested that DNA methylation is involved in the regulation of CYP3A4 and PXR, but not VDR, in the low expression cell lines.\nExpression profile of colon cancer cell lines after 5-aza-dC treatment. Levels of PXR, VDR and CYP3A4 transcripts in (a), LS180; (b), Caco-2; (c), HCT116; (d), LoVo; (e), HT29; and (f), SW48 cells. Cells were treated with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). The vertical axis indicates a relative transcript level (ratio to cells without 5-aza-dC treatment). Numbers in italics indicate relative transcript levels that were over the maximum scale on the vertical axis. The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples.\nIn order to determine whether DNA methylation is involved in the transcriptional regulation of these genes, the 6 cell lines were treated with DNA demethylating agent (5-aza-dC). The treatment with 5-aza-dC induced a clear increase in CYP3A4 (28- to 116-fold) and PXR (3- to 10-fold) transcripts in a dose-dependent manner in the low expression cells, but not in the high expression cells (Figure 3). In particular, the CYP3A4 transcript of the low expression cells eventually reached the levels seen in the high expression cells by 5-aza-dC treatment. In contrast, 5-aza-dC had no marked effect on VDR expression in any of the cell lines (0.5- to 1.2-fold increase). 
These results suggested that DNA methylation is involved in the regulation of CYP3A4 and PXR, but not VDR, in the low expression cell lines.\nExpression profile of colon cancer cell lines after 5-aza-dC treatment. Levels of PXR, VDR and CYP3A4 transcripts in (a), LS180; (b), Caco-2; (c), HCT116; (d), LoVo; (e), HT29; and (f), SW48 cells. Cells were treated with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). The vertical axis indicates a relative transcript level (ratio to cells without 5-aza-dC treatment). Numbers in italics indicate relative transcript levels that were over the maximum scale on the vertical axis. The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples.\n[SUBTITLE] DNA methylation status of the colon cancer cell lines [SUBSECTION] The bisulfite direct sequencing detected no DNA methylation in the VDR or PRMT1 promoter sequences in the 6 cell lines. Partial methylation of the CYP3A4 5'-distal region and full methylation of the PXR exon 3 region were equally observed among the 6 cell lines. Therefore, the different expression profiles of the two groups cannot be explained by methylation of these sequences. On the other hand, the CpG-rich sequence of the PXR promoter showed a different methylation status among the 6 cell lines. Interestingly, the degree of methylation of the PXR promoter (segments 1 and 2) in the high expression cells (LS180 and LoVo) was lower than that in the low expression cells (Caco-2, HT29, HCT116 and SW48) (Figure 4). Therefore, the details of the methylation status of the PXR promoter were estimated on individual DNA strands after subcloning (Figure 5). We found that an HpyCH4IV site within segment 2 was a suitable marker for the COBRA assay to assess PXR promoter methylation, because the degree of methylation of this site showed inverse correlation with the levels of CYP3A4 and PXR expression.\nDNA methylation profile of PXR as detected by direct sequencing. Methylation status of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N) detected by bisulfite direct sequencing. Open and closed circles represent unmethylated and fully methylated CpG sites, respectively. Half-closed circles represent partially methylated CpG sites. The methylation status of the CpG island (around exon 3) and CpG-rich promoter sequence (segments 1, 2, 3 and 4) are shown in the upper and lower panels, respectively.\nDetailed DNA methylation profile of PXR segment 2. Detailed methylation profile of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N). Methylation of each CpG site was estimated by bisulfite sequencing on 7 or 8 individual DNA strands after subcloning. Open and closed circles represent unmethylated and methylated CpG sites, respectively. A 'TATA' indicates a putative TATA box. A curved arrow indicates a transcription start site. 'HpyCH4IV' indicates restriction site using the COBRA assay.\nThe COBRA assay demonstrated that the treatment with 5-aza-dC resulted in decreased amounts of methylation of the PXR promoter in a dose-dependent manner in the low expression cells (Figure 6a). Therefore, the magnitude of methylation was likely to be associated with the decreased levels of CYP3A4 and PXR gene expression in all 6 cell lines. 
These results suggest that the PXR gene was transcriptionally silenced by methylation of the promoter CpG sites and that the downregulation of the PXR protein resulted in decreased expression of the CYP3A4 gene.\nMethylation status of the PXR promoter and exon 3 regions, as detected by the COBRA assay. Unmethylated and methylated DNAs are shown as U and M, respectively. PCR products that were not cut by the restriction enzymes are shown as UC. (a) PXR promoter methylation was examined in the 6 cell lines after treatment with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). (b) Methylation status of the CpG island of the PXR exon 3 region in 6 of the 18 primary colorectal cancers (numbers are those for particular cases). DNA samples from normal and cancerous tissue are shown as N and C, respectively. Note that a high degree of methylation was detected in cancerous, but not normal, tissue. (c) Methylation status of the CpG-rich sequence of the PXR promoter region in 8 primary colorectal cancers. Note that a lower degree of methylation was detected in cancerous tissue compared to normal tissue.\nThe PXR promoter methylation was not associated with the profile of microsatellite instability (MSI) or other methylated genes (Table 2). This suggested that altered PXR methylation was accumulated during colorectal tumorigenesis, independent of these genetic and epigenetic events.\nProfile of Microsatellite instability (MSI), mismatch repair (MMR) deficiency and promoter methylation in 6 colon cancer cell lines.\nSummary of previous studies [27,28] and the present study (PXR methylation).\n*Mismatch repair deficiency was associated with mutations or defects in mRNA transcripts.\n**Methylation status was defined as unmethylated (U) or methylated (M).\nThe bisulfite direct sequencing detected no DNA methylation in the VDR or PRMT1 promoter sequences in the 6 cell lines. Partial methylation of the CYP3A4 5'-distal region and full methylation of the PXR exon 3 region were equally observed among the 6 cell lines. Therefore, the different expression profiles of the two groups cannot be explained by methylation of these sequences. On the other hand, the CpG-rich sequence of the PXR promoter showed a different methylation status among the 6 cell lines. Interestingly, the degree of methylation of the PXR promoter (segments 1 and 2) in the high expression cells (LS180 and LoVo) was lower than that in the low expression cells (Caco-2, HT29, HCT116 and SW48) (Figure 4). Therefore, the details of the methylation status of the PXR promoter were estimated on individual DNA strands after subcloning (Figure 5). We found that an HpyCH4IV site within segment 2 was a suitable marker for the COBRA assay to assess PXR promoter methylation, because the degree of methylation of this site showed inverse correlation with the levels of CYP3A4 and PXR expression.\nDNA methylation profile of PXR as detected by direct sequencing. Methylation status of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N) detected by bisulfite direct sequencing. Open and closed circles represent unmethylated and fully methylated CpG sites, respectively. Half-closed circles represent partially methylated CpG sites. The methylation status of the CpG island (around exon 3) and CpG-rich promoter sequence (segments 1, 2, 3 and 4) are shown in the upper and lower panels, respectively.\nDetailed DNA methylation profile of PXR segment 2. 
Detailed methylation profile of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N). Methylation of each CpG site was estimated by bisulfite sequencing on 7 or 8 individual DNA strands after subcloning. Open and closed circles represent unmethylated and methylated CpG sites, respectively. A 'TATA' indicates a putative TATA box. A curved arrow indicates a transcription start site. 'HpyCH4IV' indicates restriction site using the COBRA assay.\nThe COBRA assay demonstrated that the treatment with 5-aza-dC resulted in decreased amounts of methylation of the PXR promoter in a dose-dependent manner in the low expression cells (Figure 6a). Therefore, the magnitude of methylation was likely to be associated with the decreased levels of CYP3A4 and PXR gene expression in all 6 cell lines. These results suggest that the PXR gene was transcriptionally silenced by methylation of the promoter CpG sites and that the downregulation of the PXR protein resulted in decreased expression of the CYP3A4 gene.\nMethylation status of the PXR promoter and exon 3 regions, as detected by the COBRA assay. Unmethylated and methylated DNAs are shown as U and M, respectively. PCR products that were not cut by the restriction enzymes are shown as UC. (a) PXR promoter methylation was examined in the 6 cell lines after treatment with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). (b) Methylation status of the CpG island of the PXR exon 3 region in 6 of the 18 primary colorectal cancers (numbers are those for particular cases). DNA samples from normal and cancerous tissue are shown as N and C, respectively. Note that a high degree of methylation was detected in cancerous, but not normal, tissue. (c) Methylation status of the CpG-rich sequence of the PXR promoter region in 8 primary colorectal cancers. Note that a lower degree of methylation was detected in cancerous tissue compared to normal tissue.\nThe PXR promoter methylation was not associated with the profile of microsatellite instability (MSI) or other methylated genes (Table 2). This suggested that altered PXR methylation was accumulated during colorectal tumorigenesis, independent of these genetic and epigenetic events.\nProfile of Microsatellite instability (MSI), mismatch repair (MMR) deficiency and promoter methylation in 6 colon cancer cell lines.\nSummary of previous studies [27,28] and the present study (PXR methylation).\n*Mismatch repair deficiency was associated with mutations or defects in mRNA transcripts.\n**Methylation status was defined as unmethylated (U) or methylated (M).\n[SUBTITLE] DNA methylation status of colon cancer tissue samples [SUBSECTION] No or slight methylation of the PXR exon 3 region was detected in the normal colon tissue samples by direct sequencing and the COBRA assay. The levels of methylation in the cancer tissues were mostly higher than those in the paired adjacent normal tissues (Figure 6b). By contrast, the CpG-rich sequence of the PXR promoter was partially methylated in normal tissues, and the degree of methylation was decreased in the paired cancer tissues (Figure 6c). The decreased level of the PXR promoter methylation suggested increased expression of the PXR gene during colorectal carcinogenesis. 
There were no differences in the clinicopathological findings between the colorectal cancers with PXR methylation and those without methylation.\nNo or slight methylation of the PXR exon 3 region was detected in the normal colon tissue samples by direct sequencing and the COBRA assay. The levels of methylation in the cancer tissues were mostly higher than those in the paired adjacent normal tissues (Figure 6b). By contrast, the CpG-rich sequence of the PXR promoter was partially methylated in normal tissues, and the degree of methylation was decreased in the paired cancer tissues (Figure 6c). The decreased level of the PXR promoter methylation suggested increased expression of the PXR gene during colorectal carcinogenesis. There were no differences in the clinicopathological findings between the colorectal cancers with PXR methylation and those without methylation.", "Real-time PCR analyses revealed that the basal levels of CYP3A4, PXR (exon 5-6), PXR (exon 1a-2) and VDR mRNA were heterogeneous among the 6 cell lines examined (Figure 2). The 6 cell lines were then classified into two groups (high or low expression cells) based on the basal level of PXR and CYP3A4 mRNA, because the 6 cell lines always showed either high PXR and CYP3A4 expression or low PXR and CYP3A4 expression. The levels of CYP3A4 and PXR transcripts on high expression cells (LS180 and LoVo) were 7- to 35-fold and 40- to 5,000-fold higher than those on lower expression cells (Caco-2, HT29, HCT116 and SW48), respectively. These two groups also exhibited a difference in the basal level of the VDR transcript, although this difference was not marked (3.5- to 6-fold). There was strong correlation of the levels of the transcripts between PXR (exon 5-6) and PXR (exon 1a-2) throughout all analyses.\nBasal expression profile of colon cancer cell lines. Basal levels of (a), PXR exon 5-6; (b), PXR exon 1a-2; (c), CYP3A4; and (d), VDR transcripts in LS180, LoVo, Caco-2, HT29, HCT116 and SW48 cells. The vertical axis indicates a relative transcript level (ratio to LS180 cells). The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples (0.00257 and 0.0399, respectively).", "In order to determine whether DNA methylation is involved in the transcriptional regulation of these genes, the 6 cell lines were treated with DNA demethylating agent (5-aza-dC). The treatment with 5-aza-dC induced a clear increase in CYP3A4 (28- to 116-fold) and PXR (3- to 10-fold) transcripts in a dose-dependent manner in the low expression cells, but not in the high expression cells (Figure 3). In particular, the CYP3A4 transcript of the low expression cells eventually reached the levels seen in the high expression cells by 5-aza-dC treatment. In contrast, 5-aza-dC had no marked effect on VDR expression in any of the cell lines (0.5- to 1.2-fold increase). These results suggested that DNA methylation is involved in the regulation of CYP3A4 and PXR, but not VDR, in the low expression cell lines.\nExpression profile of colon cancer cell lines after 5-aza-dC treatment. Levels of PXR, VDR and CYP3A4 transcripts in (a), LS180; (b), Caco-2; (c), HCT116; (d), LoVo; (e), HT29; and (f), SW48 cells. Cells were treated with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). The vertical axis indicates a relative transcript level (ratio to cells without 5-aza-dC treatment). 
Numbers in italics indicate relative transcript levels that were over the maximum scale on the vertical axis. The transcripts indicated by * (PXR exon 5-6 transcript in SW48 cells and CYP3A4 transcript in HCT116 cells) were not detected and we estimated the minimum detectable levels among gradually diluted calibration samples.", "The bisulfite direct sequencing detected no DNA methylation in the VDR or PRMT1 promoter sequences in the 6 cell lines. Partial methylation of the CYP3A4 5'-distal region and full methylation of the PXR exon 3 region were equally observed among the 6 cell lines. Therefore, the different expression profiles of the two groups cannot be explained by methylation of these sequences. On the other hand, the CpG-rich sequence of the PXR promoter showed a different methylation status among the 6 cell lines. Interestingly, the degree of methylation of the PXR promoter (segments 1 and 2) in the high expression cells (LS180 and LoVo) was lower than that in the low expression cells (Caco-2, HT29, HCT116 and SW48) (Figure 4). Therefore, the details of the methylation status of the PXR promoter were estimated on individual DNA strands after subcloning (Figure 5). We found that an HpyCH4IV site within segment 2 was a suitable marker for the COBRA assay to assess PXR promoter methylation, because the degree of methylation of this site showed inverse correlation with the levels of CYP3A4 and PXR expression.\nDNA methylation profile of PXR as detected by direct sequencing. Methylation status of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N) detected by bisulfite direct sequencing. Open and closed circles represent unmethylated and fully methylated CpG sites, respectively. Half-closed circles represent partially methylated CpG sites. The methylation status of the CpG island (around exon 3) and CpG-rich promoter sequence (segments 1, 2, 3 and 4) are shown in the upper and lower panels, respectively.\nDetailed DNA methylation profile of PXR segment 2. Detailed methylation profile of the PXR gene in the 6 colon cancer cell lines and a cancerous tissue sample (255C) and its paired adjacent normal tissue sample (255N). Methylation of each CpG site was estimated by bisulfite sequencing on 7 or 8 individual DNA strands after subcloning. Open and closed circles represent unmethylated and methylated CpG sites, respectively. A 'TATA' indicates a putative TATA box. A curved arrow indicates a transcription start site. 'HpyCH4IV' indicates restriction site using the COBRA assay.\nThe COBRA assay demonstrated that the treatment with 5-aza-dC resulted in decreased amounts of methylation of the PXR promoter in a dose-dependent manner in the low expression cells (Figure 6a). Therefore, the magnitude of methylation was likely to be associated with the decreased levels of CYP3A4 and PXR gene expression in all 6 cell lines. These results suggest that the PXR gene was transcriptionally silenced by methylation of the promoter CpG sites and that the downregulation of the PXR protein resulted in decreased expression of the CYP3A4 gene.\nMethylation status of the PXR promoter and exon 3 regions, as detected by the COBRA assay. Unmethylated and methylated DNAs are shown as U and M, respectively. PCR products that were not cut by the restriction enzymes are shown as UC. (a) PXR promoter methylation was examined in the 6 cell lines after treatment with 5-aza-2'-deoxycytidine (0, 0.5 or 5 μM). 
(b) Methylation status of the CpG island of the PXR exon 3 region in 6 of the 18 primary colorectal cancers (numbers are those for particular cases). DNA samples from normal and cancerous tissue are shown as N and C, respectively. Note that a high degree of methylation was detected in cancerous, but not normal, tissue. (c) Methylation status of the CpG-rich sequence of the PXR promoter region in 8 primary colorectal cancers. Note that a lower degree of methylation was detected in cancerous tissue compared to normal tissue.\nThe PXR promoter methylation was not associated with the profile of microsatellite instability (MSI) or other methylated genes (Table 2). This suggested that altered PXR methylation was accumulated during colorectal tumorigenesis, independent of these genetic and epigenetic events.\nProfile of Microsatellite instability (MSI), mismatch repair (MMR) deficiency and promoter methylation in 6 colon cancer cell lines.\nSummary of previous studies [27,28] and the present study (PXR methylation).\n*Mismatch repair deficiency was associated with mutations or defects in mRNA transcripts.\n**Methylation status was defined as unmethylated (U) or methylated (M).", "No or slight methylation of the PXR exon 3 region was detected in the normal colon tissue samples by direct sequencing and the COBRA assay. The levels of methylation in the cancer tissues were mostly higher than those in the paired adjacent normal tissues (Figure 6b). By contrast, the CpG-rich sequence of the PXR promoter was partially methylated in normal tissues, and the degree of methylation was decreased in the paired cancer tissues (Figure 6c). The decreased level of the PXR promoter methylation suggested increased expression of the PXR gene during colorectal carcinogenesis. There were no differences in the clinicopathological findings between the colorectal cancers with PXR methylation and those without methylation.", "In the present study, 6 colon cancer cell lines showed heterogeneous mRNA expression profiles and were able to be classified into two groups with respect to their basal levels of the PXR/CYP3A4 transcripts (high expression cells, LS180 and LoVo; low expression cells Caco-2, HT29, HCT116, and SW48). These results are consistent with previous studies, in which LS180 and Caco-2 cells were characterized as PXR-sufficient and PXR-deficient cells, respectively [16,17].\nGenetic polymorphisms in the regions that regulate transcription are often a major cause of inter-individual variability in the levels of transcripts. However, such polymorphisms have not been frequently observed in the human PXR or CYP3A4 genes, implying that certain epigenetic mechanisms are involved in the regulation of PXR and CYP3A4 expression. We found that the CpG-rich sequence within the PXR promoter region is methylated to different levels in high and low expression cells. Importantly, the magnitude of this promoter methylation was inversely associated with the levels of PXR and CYP3A4 expression. Furthermore, the levels of the PXR and CYP3A4 transcripts in low expression cells were mostly restored when DNA methylation was reversed by treatment with 5-aza-dC. Although this CpG-rich sequence did not strictly satisfy the criteria for a CpG island, the most affected CpG sites were located in a highly restricted region (segments 1 and 2) and these CpG sites were proximal to several putative transcription factor binding sites (such as Sp1 and hepatocyte nuclear factor 4 alpha) [18-20]. 
Therefore, PXR gene expression is most likely transcriptionally regulated by methylation of these promoter CpG sites.\nCYP3A4 is transactivated by functional interplays with VDR-RXRα or PXR-PRMT1 [20-22]. CpG-island methylation of the VDR or PRMT1 promoter was not detected in these cell lines and the mRNA expression of VDR was not affected by 5-aza-dC treatment. These observations imply that DNA methylation of PXR, but not VDR or PRMT1, resulted in downregulation of the CYP3A4 mRNA in these colon cancer cells.\nIt is still uncertain whether re-expression of the CYP3A4 by 5-aza-dC treatment was due to the re-expression of some other genes than PXR. However, PXR must be a candidate for methylation and reduced expression of the PXR by promoter methylation, even if partially, contributes to downregulation of the CYP3A4. Indeed, several studies demonstrated that selective downregulation of the PXR by siRNA reduces the basal level of the CYP3A4 transcripts in a dose-dependent manner [23].\nCpG islands in the exon 3 region were fully methylated throughout the cancer cell lines and most cancer tissues. Even after treatment of the cell lines with 5-aza-dC, no increase in the PXR mRNA levels was observed in the high-PXR expressing cell lines, LS180 and LoVo. This strongly suggests that in human colon cancer cells, the methylated CpG islands in the exon 3 play a much less role in the epigenetic regulation of PXR, instead, promoter methylation plays a pivotal role in its regulation. In contrast, Misawa et al. previously demonstrated a distinct methylation profile of neuroblastoma cells, in which mRNA expression of the PXR splicing variant (exon 1a-2) was specifically regulated by the methylation of the exon 3 region rather than promoter methylation [9]. We found no marked difference in the levels of the PXR (exon 5-6) and PXR (exon 1a-2) transcripts in the colon cancer cells. Therefore, a tissue-specific DNA methylation profile is most likely involved in the transcriptional regulation of the PXR gene.\nDNA methylation of the PXR promoter was detected in only 1 of the 18 colorectal cancer tissue samples. The results reflect the genuine DNA methylation status, because we examined pure cancerous and normal epithelia using crypt isolation and directly compared the DNA methylation status between paired epithelia. Therefore, a low level of PXR promoter methylation, which was observed in the high expression cells, appears to be a common feature of colorectal cancers. We also demonstrated that the level of PXR promoter methylation is decreased during carcinogenesis, since paired adjacent normal tissues mostly showed higher levels of PXR promoter methylation. We could not directly compare DNA methylation status with the PXR mRNA expression, because crypt isolation provided ethanol-fixed epithelia and it was difficult to obtain fresh mRNA samples. However, most cancer tissues exhibited a pattern of promoter methylation quite similar to that observed in cultured cells with high expression (LS180 and LoVo) (Figures 5 and 6c). Therefore, the association between promoter methylation and transcriptional silencing of the PXR gene is most likely applicable to primary colorectal cancers. As observed in the colon cancer cell lines, the decreased level of PXR promoter methylation most likely led to increased expression of PXR mRNA in the colorectal cancer tissues. These results are consistent with a recent study that showed strong expression of PXR mRNA in colon cancers, with great variability [24]. 
In contrast, Ouyang et al. found that PXR expression was lost or greatly diminished in many colon cancers using histochemical analysis [25]. Although the role of the altered PXR expression in colorectal carcinogenesis remains to be clarified, Zhou et al. demonstrated that PXR plays an antiapoptotic role in colon carcinogenesis by induction of multiple antiapoptotic genes [26].\nWe cannot rule out the possibility that alterations of the PXR methylation levels play direct roles in tumorigenesis, because certain oncogenes or tumor suppressor genes may be trascriptionally regulated by PXR. Partial methylation of the PXR observed in adjacent normal mucosa may be associated with \"field defect\" for carcinogenesis. However, numerous studies have demonstrated that ligand-binding activation or siRNA-mediated silencing of the PXR can affect the activity of metabolic enzymes including CYP3A4, without changes in the cell proliferation capacity. Therefore, we think that altered level of the PXR methylation does not provide a selective growth advantage during colorectal cancer progression.\nInterestingly, overexpression of PXR in the colorectal cancer tissue samples was correlated with an increase in UDP glucuronosyl transferases UGT1A1, UGT1A9 and UGT1A10, and led to a marked chemoresistance to the active metabolite of irinotecan (CPT-11) [24]. In addition, CYP3A4 and p-glycoprotein, which are transcriptionally activated by PXR, play important roles in intestinal first-pass metabolism and determine a drug's bioavailability. We hypothesized that PXR may play a key role in the colon cancer cell response to anticancer drugs by modulating expression of drug metabolizing enzymes and transporters including UGT1A, CYP3A4 and p-glycoprotein. Therefore, DNA methylation of the PXR promoter might be a good predictor of chemotherapy outcome and toxicity in colorectal cancers.", "PXR promoter methylation is involved in the regulation of intestinal PXR and CYP3A4 expression. This methylation might be associated with the inter-individual variability of the drug response of colon cancer cells.", "PXR: pregnane X receptor; CYPs: cytochrome P450s; VDR: vitamin D receptor; PRMT1: protein arginine methyltrasferase 1; 5-aza-dC: 5-aza-2'-deoxycytidine; COBRA: combined bisulfite restriction analysis", "The authors declare that they have no competing interests.", "WH designed this study and carried out the cell culture, molecular genetic studies and drafted the manuscript. GT and JT participated in the data analysis. TS performed crypt isolation and pathological diagnosis. KO and GW performed the surgeries and obtained informed consent from the patients. SO participated in the design of this study and helped to draft the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/81/prepub\n" ]
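The real-time PCR read-out described in the Methods above is reported as target mRNA normalized to beta-actin and expressed relative to a calibrator (LS180 cells, or untreated cells in the 5-aza-dC experiments). As a rough illustration of how such relative levels are commonly derived from raw Ct values, the following minimal Python sketch applies the comparative (2^-ddCt) method to made-up duplicate Ct measurements; the method choice and every number here are assumptions for illustration, not the study's actual calculation or data.

# Minimal sketch: relative mRNA quantification by the comparative (2^-ddCt) method.
# All Ct values are hypothetical; targets are normalized to beta-actin and expressed
# relative to a calibrator sample, mirroring the normalization described above.

def mean(values):
    return sum(values) / len(values)

def relative_expression(target_cts, reference_cts, calibrator_target_cts, calibrator_reference_cts):
    """Fold change of a target gene versus a calibrator sample, reference-gene normalized."""
    d_ct_sample = mean(target_cts) - mean(reference_cts)
    d_ct_calibrator = mean(calibrator_target_cts) - mean(calibrator_reference_cts)
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical duplicate Ct measurements (duplicates are averaged, as in the Methods).
caco2_pxr, caco2_actin = [31.2, 31.4], [17.0, 17.1]
ls180_pxr, ls180_actin = [24.8, 24.9], [17.2, 17.3]

fold_vs_ls180 = relative_expression(caco2_pxr, caco2_actin, ls180_pxr, ls180_actin)
print(f"PXR in Caco-2 relative to LS180: {fold_vs_ls180:.4f}")  # well below 1 = low-expression line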
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
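The COBRA assays above (HpyCH4IV for the PXR promoter, HhaI for exon 3) read methylation out as the density of digested versus undigested bands, which the authors assessed visually. One common way to turn that read-out into a number is the fraction of cut product over total product; the short sketch below assumes that convention and uses hypothetical band intensities, since no densitometry values are reported in the study.

# Minimal sketch: express a COBRA result as a percent-methylation estimate.
# Assumed convention: digested bands come from methylated molecules (the restriction
# site survives bisulfite conversion), the undigested band from unmethylated ones.
# Band intensities below are hypothetical densitometry values in arbitrary units.

def cobra_percent_methylation(cut_band_intensities, uncut_band_intensity):
    cut_total = sum(cut_band_intensities)
    total = cut_total + uncut_band_intensity
    if total <= 0:
        raise ValueError("no signal detected in any band")
    return 100.0 * cut_total / total

percent = cobra_percent_methylation([120.0, 85.0], 310.0)
print(f"estimated methylation at the HpyCH4IV site: {percent:.1f}%")  # ~39.8%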
Discrepancy between radiological and pathological size of renal masses.
21342488
Tumor size is a critical variable in staging for renal cell carcinoma. Clinicians rely on radiological estimates of pathological tumor size to guide patient counseling regarding prognosis, choice of treatment strategy and entry into clinical trials. If there is a discrepancy between radiological and pathological measurements of renal tumor size, this could have implications for clinical practice. Our study aimed to compare the radiological size of solid renal tumors on computed tomography (CT) to the pathological size in an Australian population.
BACKGROUND
We identified 157 patients in the Westmead Renal Tumor Database, for whom data was available for both radiological tumor size on CT and pathological tumor size. The paired Student's t-test was used to compare the mean radiological tumor size and the mean pathological tumor size. Statistical significance was defined as P < 0.05. We also identified all cases in which post-operative down-staging or up-staging occurred due to discrepancy between radiological and pathological tumor sizes. Additionally, we examined the relationship between Fuhrman grade and radiological tumor size and pathological T stage.
METHODS
Overall, the mean radiological tumor size on CT was 58.3 mm and the mean pathological size was 55.2 mm. On average, CT overestimated pathological size by 3.1 mm (P = 0.012). CT overestimated pathological tumor size in 92 (58.6%) patients, underestimated in 44 (28.0%) patients and equaled pathological size in 21 (13.4%) patients. Among the 122 patients with pT1 or pT2 tumors, there was a discrepancy between clinical and pathological staging in 35 (29%) patients. Of these, 21 (17%) patients were down-staged post-operatively and 14 (11.5%) were up-staged. Fuhrman grade correlated positively with radiological tumor size (P = 0.039) and pathological tumor stage (P = 0.003).
RESULTS
There was a statistically significant but small difference (3.1 mm) between mean radiological and mean pathological tumor size; its clinical relevance is uncertain. For some patients, this difference leads to a discrepancy between clinical and pathological staging, which may have implications for pre-operative patient counseling regarding prognosis and management.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Biopsy", "Carcinoma, Renal Cell", "Female", "Humans", "Kidney Neoplasms", "Male", "Middle Aged", "Observer Variation", "Reproducibility of Results", "Sensitivity and Specificity", "Tomography, X-Ray Computed" ]
3056852
null
null
Methods
The Westmead Renal Tumor Database contains 547 patients whose tumors were removed by radical or partial nephrectomy from 1994 to 2007. Data collection and analysis was approved by the hospital ethics committee and complies with the Declaration of Helsinki. We retrospectively reviewed the database and identified 157 patients for whom accurate data was available for both radiological and pathological tumor size. Radiological tumor size was defined as the largest transverse diameter in the axial plane on CT scan, as measured by the reporting radiologist. The CT protocol entailed pre-contrast images and images in the arterial, corticomedullary (venous) and excretory phases. Tumor size was measured in the phase in which the tumor margins were most obvious. Coronal and sagittal reconstruction images were available, but the radiological tumor size was always measured in the axial plane. Pathological tumor size was defined as the largest transverse diameter, as measured by the pathologist at examination of the surgical specimen prior to formalin fixation. There were 4 patients with multifocal tumors. For these patients, we included the data for their largest tumor in our analysis. According to radiological size, tumors were grouped by 1 cm size intervals and by clinically relevant size intervals (≤4 cm; >4 cm but ≤7 cm; >7 cm but ≤10 cm; >10 cm). We extracted demographic data for all patients from the database, including age, sex, year of operation, type of procedure (open or laparoscopic, radical or partial nephrectomy), tumor histology (conventional, papillary, chromophobe, other), Fuhrman grade, and clinical and pathological tumor stage (according to 2009 TNM staging system). The paired Student's t-test was used to compare the mean radiological tumor size and the mean pathological tumor size. Statistical significance was defined as P < 0.05. Data analysis was performed using SPSS, version 15.0. We also compared mean radiological and mean pathological size for tumors grouped by histological subtype, by type of procedure, by 1 cm size intervals and by clinically relevant size intervals (≤4 cm; >4 cm but ≤7 cm; >7 cm but ≤10 cm; >10 cm). For patients with pT1 and pT2 tumors, the radiological and pathological tumor sizes were compared to identify all cases of post-operative down-staging or up-staging. We calculated the number and percentage of patients for whom a difference between radiological and pathological tumor sizes accounted for discrepancy between clinical and pathological tumor stage. We also examined the relationship between Fuhrman grade and CT tumor size (grouped into clinically relevant size intervals) and pathological T stage using a chi-square test. Of our cohort of 157 patients, 7 were excluded from this analysis because they did not have a Fuhrman grade recorded in the database. For the analysis we grouped tumors into low-grade (Fuhrman 1 or 2) and high-grade (Fuhrman 3 or 4).
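The analysis plan in this Methods section combines a paired Student's t-test on radiological versus pathological size, grouping of tumors into the clinically relevant size intervals, and a chi-square test of Fuhrman grade against size group. A minimal Python/SciPy sketch of that workflow is given below; the measurements and contingency counts are invented for illustration and do not come from the Westmead database.

# Minimal sketch of the statistical workflow described above, on invented data.
# Requires SciPy; all sizes (millimetres) and counts are hypothetical.
from scipy import stats

radiological = [38.0, 52.0, 71.0, 44.0, 95.0, 30.0, 62.0, 80.0]
pathological = [35.0, 50.0, 74.0, 40.0, 90.0, 31.0, 58.0, 77.0]

# Paired Student's t-test: does CT systematically over- or underestimate tumor size?
t_stat, p_value = stats.ttest_rel(radiological, pathological)
mean_diff = sum(r - p for r, p in zip(radiological, pathological)) / len(radiological)
print(f"mean CT minus pathology difference: {mean_diff:.1f} mm, P = {p_value:.3f}")

def size_group(size_mm):
    """Clinically relevant intervals used in the study: <=4, >4-7, >7-10, >10 cm."""
    if size_mm <= 40:
        return "<=4 cm"
    if size_mm <= 70:
        return ">4-7 cm"
    if size_mm <= 100:
        return ">7-10 cm"
    return ">10 cm"

print([size_group(s) for s in radiological])

# Chi-square test of Fuhrman grade (low = 1/2, high = 3/4) against size group,
# laid out as a hypothetical 2 x 3 contingency table of patient counts.
contingency = [[40, 30, 10],   # low grade:  <=4 cm, >4-7 cm, >7 cm
               [12, 15, 10]]   # high grade: <=4 cm, >4-7 cm, >7 cm
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"Fuhrman grade vs size group: chi-square P = {p:.3f}")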
null
null
null
null
[ "Background", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Tumor size is an important prognostic indicator for renal cell carcinoma (RCC), and is thus a critical variable in staging systems and a key factor when deciding upon treatment strategy.\nThe 2009 TNM staging system for RCC stratifies tumors limited to the kidney by their size alone (T1a ≤4 cm; T1b > 4 cm but ≤7 cm; T2a > 7 cm but ≤10 cm; T2b > 10 cm)[1]. Available prognostic nomograms also incorporate tumor size[2-5].\nRenal tumor size also guides clinicians in recommending radical nephrectomy (RN), partial nephrectomy (PN), ablative techniques or active surveillance as the management of choice. PN is the standard approach for T1a (≤4 cm) renal tumors, achieving equivalent oncological efficacy to RN[6], while preserving renal function[7] and protecting from non-cancer related mortality[8,9]. Several studies support PN for all amenable T1b tumors (> 4 cm but ≤7 cm) [10-14]. The growing acceptance of PN as an option for T1b tumors is reflected in current American and European guidelines[15,16]. RN remains the therapy of choice for T2 tumors (> 7 cm) [16,17]. Although recent studies have demonstrated the feasibility of PN for carefully selected patients with T2 tumors in experienced centers[18,19], it is uncertain whether these results can be extrapolated to all institutions. For high-risk surgical candidates with small renal tumors, there is intermediate-term data to support minimally invasive ablative techniques such as cryoablation and radiofrequency ablation (RFA) [20]. There is a relationship between tumor size and local recurrence after ablation[20], and a tumor size threshold of 3.5 cm has been proposed for such techniques[17]. In patients with limited life expectancy, active surveillance of small renal masses has been advocated as a viable option, provided that tumor size is less than 3 cm[21].\nMost studies report patient outcomes following surgical intervention for RCC according to the pathological size of the tumor, rather than the radiological size on CT[2-5,22,23]. Indeed, the studies that have defined a tumor size threshold for partial nephrectomy are all based on pathological size[6,7,10-14].\nPreoperatively, clinicians must rely on radiological estimates of pathological tumor size to guide patient counseling regarding prognosis and management. For example, at institutions employing a size threshold for PN, patients will be offered or denied PN based on tumor size on CT. If there is a discrepancy between radiological size on CT and pathological size, this may have implications for clinical practice.\nFor patients undergoing ablative techniques, pathological tumor size cannot be determined. Therefore, studies report the outcome of ablative techniques according to radiological tumor size[20]. If a discrepancy between radiological and pathological tumor size exists, it may be difficult to meaningfully compare these studies with the established evidence for nephrectomy, which is reported according to pathological size.\nA number of studies have examined the relationship between CT size and pathological size of renal tumors[24-36]. Most of these studies found that, on average, CT overestimated pathological tumor size, although this reached statistical significance in only three studies[28,33,35]. Authors have disagreed on the clinical significance of these findings. Only two of these studies comprehensively reported instances in which disagreement between CT and pathological size led to discordance between clinical and pathological stage[26,30]. 
To our knowledge, there has been no such study performed on an Australian population. A recent study has demonstrated different trends in stage migration in an Australian RCC cohort compared with populations in the USA[37]. Therefore, international findings are not necessarily applicable to the Australian population and there is a need for Australian data to be reported.\nThe aim of our study was to compare the radiological size of RCC on CT to the pathological size in a contemporary Australian population. We also aimed to identify patients who were up-staged or down-staged due to discrepancy between CT and pathological size.", "A total of 157 patients were identified, among whom there were 51 (32.5%) women and 106 men (67.5%). The mean (range) patient age was 63.3 (34-100) years. The patients underwent surgery between 1998 and 2007. There were 18 (11.5%) patients treated with partial nephrectomy (10 laparoscopic and 8 open procedures), and 139 (88.5%) treated with radical nephrectomy (100 laparoscopic and 39 open). The histological tumor subtype was conventional in 126 (80.3%) patients, papillary in 16 (10.2%), chromophobe in 11 (7.0%) and other in 4 (2.5%). The pathological tumor stage (according to the 2009 TNM staging system) was T1a in 58 (36.9%) patients, T1b in 41 (26.1%), T2a in 18 (11.5%), T2b in 5 (3.2%), T3a in 30 (19.1%), T3b in 2 (1.3%), T3c in 2 (1.3%) and T4 in 1 (0.6%). Demographic data for our study population is summarized in Table 1.\nDemographic data for 157 patients\n† PN - Partial nephrectomy\n‡ RN - Radical nephrectomy\nA scatter plot of pathological tumor size against radiological tumor size is shown in Figure 1. Overall, the mean radiological tumor size on CT was 58.3 mm (SD 29.2 mm) and the mean pathological size was 55.2 mm (SD 30.5 mm). On average, CT overestimated pathological size by 3.1 mm (95% CI: 0.7 to 5.5 mm, P = 0.012). CT overestimated pathological tumor size in 92 (58.6%) patients, underestimated in 44 (28.0%) patients and equaled pathological size in 21 (13.4%) patients.\nScatter plot of pathological tumor size against radiological tumor size.\nAmong the 122 patients with pT1 or pT2 tumors, there was a discrepancy between clinical and pathological staging in 35 (29%) patients. Of these, 21 (17%) patients were down-staged post-operatively and 14 (11.5%) were up-staged. This data is summarized in Table 2.\nDiscrepancy between clinical and pathological stage in 122 pT1 and pT2 tumors.\nTable 3 shows the mean radiological and pathological tumor sizes divided into 10 mm size intervals by radiological size. Mean radiological size was greater than mean pathological size for all size intervals, except for the 50 - 59 mm and 70 - 79 mm categories. This only reached statistical significance for tumors in the 80 - 89 mm category, for which mean radiological size was 13 mm larger than mean pathological size (95% CI: 1.26 to 24.74 mm, P = 0.034).\nMean radiological and pathological tumor size (mm) divided into 10 mm size intervals by radiological size.\n† mm = millimetres\n‡ SD = standard deviation\n§ CI = confidence interval\nTable 4 shows the mean radiological and pathological tumor sizes separated into clinically relevant size intervals, corresponding to T1a (≤4 cm), T1b (>4 cm but ≤7 cm), T2a (>7 cm but ≤10 cm) and T2b (>10 cm) stages. 
For all three groups, mean radiological size was greater than mean pathological size but the difference did not achieve statistical significance.\nMean radiological and pathological tumor size (mm) divided into clinically relevant size intervals by radiological size.\n† SD = standard deviation\n‡ CI = confidence interval\nTable 5 shows the mean radiological and pathological tumor sizes for the different histological sub-types. For conventional RCC, CT overestimated pathological size by an average of 3.8 mm (95% CI 1.25 to 6.39 mm, P = 0.004). There was no statistically significant difference for the other histological subtypes.\nMean radiological and pathological tumor size (mm) by histological subtype\n† SD = standard deviation\n‡ CI = confidence interval\nTable 6 shows the mean radiological and pathological tumor sizes stratified by type of procedure. For tumors removed by radical nephrectomy, the mean radiological size was 3.4 mm larger than the mean pathological size (95% CI: 0.71 to 6.02 mm, P = 0.013). There was no statistically significant difference detected for tumors removed by partial nephrectomy.\nMean radiological and pathological tumor size (mm) stratified according to type of procedure\n† SD = standard deviation\n‡ CI = confidence interval\nTable 7 shows radiological tumor size (grouped into clinically relevant size intervals) distributed according to Fuhrman grade. High-grade disease (Fuhrman 3 or 4) was more common in larger tumors (≤4 cm vs >4 cm but ≤7 cm vs >7 cm; P = 0.039). The prevalence of high-grade disease was 24.5%, 31.1% and 50.0% for tumors ≤4 cm, >4 cm but ≤7 cm, >7 cm respectively. Table 8 shows pathological T stage (grouped into T1a, T1b and ≥ T2) distributed according to Fuhrman grade. There was a statistically significant positive correlation between Fuhrman grade and tumor stage (P = 0.003).\nRadiological tumor size (mm) distributed according to Fuhrman grade.\n† N/A = Not available.\nPathological T stage distributed according to Fuhrman grade.\n† N/A = Not available.", "Tumor size is an important prognostic indicator for RCC. Outcome of nephrectomy has been studied according to pathological tumor size. Pre-operatively, we must rely upon CT estimates of pathological tumor size to guide counseling regarding prognosis and choice of treatment modality. Furthermore, ablative techniques for renal tumors do not provide specimens for pathological assessment of tumor size. When comparing emerging ablative techniques to the benchmark of nephrectomy, we are comparing data based on pathological tumor size to data based on CT size. Therefore, it is important to understand the relationship between radiological tumor size and pathological tumor size, and to understand how any difference between the two measurements affects the accuracy of clinical staging.\nOur study of a contemporary Australian cohort found that overall CT overestimated pathological tumor size by a statistically significant but small amount (3.1 mm). This observation is consistent with the findings of previous studies. The findings of recent papers comparing mean radiological and mean pathological renal tumors sizes are summarized in Table 9. Kurta et al[28] reported on the largest series (N = 521), and found that mean radiological tumor size was larger than mean pathological tumor size by 1 mm. Similarly, CT was found to overestimate pathological tumor size overall by 6.3 mm in a study by Herr[35], and by 10.0 mm in a paper by Irani et al[33]. 
Schlomer et al[31] found no statistically significant difference overall, but found that CT overestimated pathological size for pT1a tumors by 3.9 mm and for lesions 40 to 50 mm by 8.7 mm. Similarly, Lee et al[24] found a statistically significant overestimation of pathological tumor size by CT for tumors in the 40 to 50 mm range only, by an average of 2 mm. Choi et al[25] found that CT tumor size was on average larger than pathological size for smaller tumors only (<6 cm or T1). In several other series, mean radiological tumor size was greater than mean pathological size, but the difference did not reach statistical significance[27,29,30,32,34,36]. Only one study reported an underestimation of pathological tumor size by CT overall, and this achieved statistical significance for T1a tumors only[26].\nSummary of previous studies comparing mean radiological and mean pathological renal tumor sizes.\n† Used median CT size and median pathological size.\n‡ mm = millimetres\nAnalysis by histological subtype in our series showed a statistically significant difference for conventional RCC only, with CT overestimating pathological size by an average of 3.8 mm. The small number of papillary (N = 18) and chromophobe (N = 11) tumors included in our study meant we were unlikely to detect a statistically significant difference. Several studies have shown that CT size is greater than pathological size on average for conventional RCC, and smaller than pathological size on average for papillary RCC[24,28,32]. Kurta et al[28] found that CT overestimated pathological tumor size by 2.3 mm for conventional RCC and underestimated pathological tumor size by 5.4 mm for papillary RCC. Similarly, Lee et al[24] found that CT size was 1.4 mm greater than pathological size on average for conventional RCC, and 5.3 mm smaller for papillary RCC. In contrast, Herr[34] found that pathological size was overestimated on CT for all histological subtypes, and that the overestimation was significantly greater for conventional RCC compared to other subtypes (9.7 mm versus 3.9 mm). Similarly, Choi et al[25] demonstrated that mean radiological tumor size was larger than mean pathological tumor size for all histological subtypes, but there was no significant difference between groups.\nThe discrepancy between clinical and pathological tumor size has been attributed to decreased tumor vascularity after excision, leading to a diminished size post-operatively[34]. This effect is probably more pronounced for clear cell carcinomas because they typically have a richer vascular network than other histological subtypes. Yaycioglu et al[32] postulated that certain radiological and pathological features might influence the accuracy of tumor size measurement by CT. These features included: concomitant pyelonephritis, presence of hemorrhage or hematoma, cystic tumor or adjacent cysts, dilatation of adjacent renal calyces and invasion of the collecting system. The same study found that tumor invasion of perinephric tissues impacted upon the accuracy of CT. For these tumors, CT more frequently underestimated pathological size when compared to tumors confined to the kidney. Ates et al[26] demonstrated less accurate CT measurement of tumor size for locally invasive tumors. It may be more difficult to delineate the radiographic margin of invasive tumors on CT, leading to disagreement between radiological and pathological tumor sizes. Ates et al[26] also found more accurate measurement of tumors size on CT for exophytic lesions. 
Herr[35] found that CT more closely approximated pathological tumor size for upper pole tumors, but other studies have failed to confirm this finding[24,32,33]. Additionally, in our study the radiological and pathological tumor sizes were not necessarily measured in the same geometric plane and this could contribute to the discrepancy between the two measurements. The largest tumor diameter on CT was measured in the axial plane, and this did not always correspond to the plane in which the largest diameter was measured at pathological exam. Formalin fixation is known to cause tumor shrinkage[38], but in our series the pathological specimens were examined prior to fixation.\nInaccurate CT estimation of pathological tumor size led to discordance between clinical and pathological stage in over one quarter of tumors limited to the kidney in our study (pT1, pT2). Of these, 21 (17%) patients were down-staged and 14 (11.5%) up-staged post-operatively. There is limited published data on the impact that disagreement between radiological and pathological tumor sizes may have on staging discrepancies. Kanofsky et al[30] reported on a series of 198 renal cell carcinomas and identified 21 patients for whom disagreement between CT and pathological tumor size led to discrepancy between clinical and pathological tumor stage. Of these, 15 patients were down-staged and 6 up-staged post-operatively. Ates et al[26] found that differences between radiological and pathological measurements led to staging discrepancies in 19 of 86 patients, with 6 patients being down-staged and 13 patients being up-staged post-operatively. Kurta et al[28] and Lee et al[24] only reported cases of post-operative down-staging. Kurta et al demonstrated that among 258 patients with CT tumor size greater than 4 cm, 30 (11.6%) had a pathological size of less than 4 cm. Among 92 patients with CT tumor size greater than 7 cm, 7 (7.6%) had a pathological size of less than 7 cm. Lee et al demonstrated similar results. Of the 141 patients with CT tumor size between 4 cm and 7 cm, 17 (12.1%) had a pathological size less than 4 cm. Of the 87 patients with CT tumor size greater than 7 cm, 8 (9.2%) had a pathological size of less than 7 cm.\nFor these patients, pre-operative counseling regarding prognosis and management would have been based on a clinical tumor stage that was ultimately down-staged or up-staged based on pathological tumor size. Thus, although the magnitude of the mean difference between radiological and pathological tumor sizes is only 3.1 mm, there are cases where the discrepancy may impact upon clinical management.\nAuthors disagree about the clinical implications of the small but statistically significant difference between CT and pathological tumor size. Some studies conclude that CT adequately approximates pathological tumor size[24,26,32,34], and that any discrepancy between the two measurements has minimal impact on patient management[28]. Other authors point out that overestimation of pathological size on CT could affect selection of patients for elective PN[29,31,33,34]. PN is the standard of care for T1a tumors (≤4 cm) [17]. Mistry et al[29] report that 5 (5%) of their patients who were not offered elective PN based on a CT tumor size > 40 mm, had a pathological size ≤4 cm. Likewise, 3 patients out of 100 included in the study by Irani et al[33] were ineligible for elective PN based on CT size > 40 mm, but had a pathological size ≤4 cm. 
However, with the growing impetus to use PN for all amenable T1 tumors[15,16], tumor size is becoming less important for determining patient eligibility for PN. Several authors argue that the decision to perform elective PN should be based on technical feasibility and patient preference rather than a rigid tumor size cut-off[12,13,18,19,39].\nThe discrepancy between radiological and pathological tumor size could have implications for the use of ablative techniques and active surveillance for RCC. These approaches produce no specimen for pathological assessment, and so we must rely upon CT estimates of tumor size to guide management. Decision-making under these circumstances is aided by the small number of studies that report tumor prognosis according to radiological tumor size. Kanao et al[40] have recently developed a preoperative prognostic nomogram based on clinical staging to predict survival after nephrectomy. Raj et al[41] have also developed a preoperative nomogram to predict the development of metastases after nephrectomy. Such prognostic data based on clinical information can be used as a benchmark against which the oncological outcome of ablative techniques can be compared.\nOur finding of a positive correlation between Fuhrman grade and tumor size supports previous observations. Thompson et al[42] (N = 1523) and Frank et al[43] (N = 2559) both demonstrated that larger tumors were more likely to harbor high-grade disease, with each 1 cm increase in tumor size carrying a 25 - 32% increased risk of high-grade disease (Fuhrman 3 or 4). Analysis of tumors grouped according to various tumor size breakpoints (3 cm[44], 4 cm[45], 5 cm[46]) has also shown a higher prevalence of high-grade disease in the larger size groups. In contrast, Klatte et al classified tumors by an 11 cm breakpoint and found that Fuhrman grade was similar in the two groups[47]. Our finding that Fuhrman grade correlated with tumor stage is also consistent with findings from other studies[48,49]. The relationship between tumor size and Fuhrman grade has implications for patient counseling and management, particularly if electing active surveillance.\nOur study has several shortcomings. It is a retrospective single institution analysis. The small numbers of papillary and chromophobe histological subtypes, and the small number of patients treated with partial nephrectomy were inadequately powered to detect a difference. Likewise, when categorized into 1 cm size intervals, several groups had insufficient numbers to detect a difference. There was no record of when the pre-operative CT was performed, and so we could not standardize the interval between imaging and surgery. Furthermore, there was no uniform protocol for measurement of CT tumor size and pathological tumor size. There was no centralized review of measurements by a single radiologist or pathologist.\nA follow-up prospective multi-centre study with larger numbers and a uniform protocol for tumor measurement should be performed to further elucidate the relationship between CT and pathological tumor size. There is also a need for studies examining the correlation between clinical and pathological staging for RCC. Studies that report prognosis according to radiological rather than pathological tumor size would guide us in making treatment decisions based on clinical tumor size. 
The development and validation of pre-operative prognostic nomograms would also aid decision-making.", "There was a statistically significant but small overestimation (3.1 mm) of pathological size by CT overall, but this is of uncertain clinical significance. For some patients, the difference leads to a discrepancy between clinical and pathological staging, which may have implications for pre-operative patient counseling regarding prognosis and choice of treatment strategy.", "The authors declare that they have no competing interests.", "NJ drafted the manuscript. ND and DG were responsible for creating and maintaining the Westmead Renal Tumor database. MP conceived the idea of the study and revised the manuscript. All authors have read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2490/11/2/prepub\n" ]
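The size-based staging logic running through the record above (kidney-confined tumors assigned to T1a, T1b, T2a or T2b by largest diameter, then the category implied by CT size compared with the one implied by pathological size) can be made concrete in a short sketch. This is an illustration only: the function names and example sizes are invented, it covers the size criterion alone and ignores invasion-based T3/T4 staging, and it is not code from the study.

```python
def t_category(size_cm: float) -> str:
    """Map the largest diameter (cm) of a kidney-confined tumor to its
    2009 TNM T category, using the size cut-offs quoted in the text."""
    if size_cm <= 4:
        return "T1a"
    if size_cm <= 7:
        return "T1b"
    if size_cm <= 10:
        return "T2a"
    return "T2b"


def staging_shift(ct_size_cm: float, path_size_cm: float) -> str:
    """Report whether the pathological size moves a case to a lower or
    higher size category than the pre-operative CT size suggested."""
    order = ["T1a", "T1b", "T2a", "T2b"]
    clinical = order.index(t_category(ct_size_cm))
    pathological = order.index(t_category(path_size_cm))
    if pathological < clinical:
        return "down-staged"
    if pathological > clinical:
        return "up-staged"
    return "unchanged"


# Illustrative values only, not patients from this series:
print(t_category(3.8))           # T1a
print(staging_shift(4.2, 3.9))   # down-staged (CT suggested T1b, pathology T1a)
print(staging_shift(6.8, 7.3))   # up-staged (CT suggested T1b, pathology T2a)
```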
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Tumor size is an important prognostic indicator for renal cell carcinoma (RCC), and is thus a critical variable in staging systems and a key factor when deciding upon treatment strategy.\nThe 2009 TNM staging system for RCC stratifies tumors limited to the kidney by their size alone (T1a ≤4 cm; T1b > 4 cm but ≤7 cm; T2a > 7 cm but ≤10 cm; T2b > 10 cm)[1]. Available prognostic nomograms also incorporate tumor size[2-5].\nRenal tumor size also guides clinicians in recommending radical nephrectomy (RN), partial nephrectomy (PN), ablative techniques or active surveillance as the management of choice. PN is the standard approach for T1a (≤4 cm) renal tumors, achieving equivalent oncological efficacy to RN[6], while preserving renal function[7] and protecting from non-cancer related mortality[8,9]. Several studies support PN for all amenable T1b tumors (> 4 cm but ≤7 cm) [10-14]. The growing acceptance of PN as an option for T1b tumors is reflected in current American and European guidelines[15,16]. RN remains the therapy of choice for T2 tumors (> 7 cm) [16,17]. Although recent studies have demonstrated the feasibility of PN for carefully selected patients with T2 tumors in experienced centers[18,19], it is uncertain whether these results can be extrapolated to all institutions. For high-risk surgical candidates with small renal tumors, there is intermediate-term data to support minimally invasive ablative techniques such as cryoablation and radiofrequency ablation (RFA) [20]. There is a relationship between tumor size and local recurrence after ablation[20], and a tumor size threshold of 3.5 cm has been proposed for such techniques[17]. In patients with limited life expectancy, active surveillance of small renal masses has been advocated as a viable option, provided that tumor size is less than 3 cm[21].\nMost studies report patient outcomes following surgical intervention for RCC according to the pathological size of the tumor, rather than the radiological size on CT[2-5,22,23]. Indeed, the studies that have defined a tumor size threshold for partial nephrectomy are all based on pathological size[6,7,10-14].\nPreoperatively, clinicians must rely on radiological estimates of pathological tumor size to guide patient counseling regarding prognosis and management. For example, at institutions employing a size threshold for PN, patients will be offered or denied PN based on tumor size on CT. If there is a discrepancy between radiological size on CT and pathological size, this may have implications for clinical practice.\nFor patients undergoing ablative techniques, pathological tumor size cannot be determined. Therefore, studies report the outcome of ablative techniques according to radiological tumor size[20]. If a discrepancy between radiological and pathological tumor size exists, it may be difficult to meaningfully compare these studies with the established evidence for nephrectomy, which is reported according to pathological size.\nA number of studies have examined the relationship between CT size and pathological size of renal tumors[24-36]. Most of these studies found that, on average, CT overestimated pathological tumor size, although this reached statistical significance in only three studies[28,33,35]. Authors have disagreed on the clinical significance of these findings. Only two of these studies comprehensively reported instances in which disagreement between CT and pathological size led to discordance between clinical and pathological stage[26,30]. 
To our knowledge, there has been no such study performed on an Australian population. A recent study has demonstrated different trends in stage migration in an Australian RCC cohort compared with populations in the USA[37]. Therefore, international findings are not necessarily applicable to the Australian population and there is a need for Australian data to be reported.\nThe aim of our study was to compare the radiological size of RCC on CT to the pathological size in a contemporary Australian population. We also aimed to identify patients who were up-staged or down-staged due to discrepancy between CT and pathological size.", "The Westmead Renal Tumor Database contains 547 patients whose tumors were removed by radical or partial nephrectomy from 1994 to 2007. Data collection and analysis was approved by the hospital ethics committee and complies with the Declaration of Helsinki.\nWe retrospectively reviewed the database and identified 157 patients for whom accurate data was available for both radiological and pathological tumor size. Radiological tumor size was defined as the largest transverse diameter in the axial plane on CT scan, as measured by the reporting radiologist. The CT protocol entailed pre-contrast images and images in the arterial, corticomedullary (venous) and excretory phases. Tumor size was measured in the phase in which the tumor margins were most obvious. Coronal and sagittal reconstruction images were available, but the radiological tumor size was always measured in the axial plane. Pathological tumor size was defined as the largest transverse diameter, as measured by the pathologist at examination of the surgical specimen prior to formalin fixation. There were 4 patients with multifocal tumors. For these patients, we included the data for their largest tumor in our analysis. According to radiological size, tumors were grouped by 1 cm size intervals and by clinically relevant size intervals (≤4 cm; >4 cm but ≤7 cm; >7 cm but ≤10 cm; >10 cm).\nWe extracted demographic data for all patients from the database, including age, sex, year of operation, type of procedure (open or laparoscopic, radical or partial nephrectomy), tumor histology (conventional, papillary, chromophobe, other), Fuhrman grade, and clinical and pathological tumor stage (according to 2009 TNM staging system).\nThe paired Student's t-test was used to compare the mean radiological tumor size and the mean pathological tumor size. Statistical significance was defined as P < 0.05. Data analysis was performed using SPSS, version 15.0. We also compared mean radiological and mean pathological size for tumors grouped by histological subtype, by type of procedure, by 1 cm size intervals and by clinically relevant size intervals (≤4 cm; >4 cm but ≤7 cm; >7 cm but ≤10 cm; >10 cm).\nFor patients with pT1 and pT2 tumors, the radiological and pathological tumor sizes were compared to identify all cases of post-operative down-staging or up-staging. We calculated the number and percentage of patients for whom a difference between radiological and pathological tumor sizes accounted for discrepancy between clinical and pathological tumor stage.\nWe also examined the relationship between Fuhrman grade and CT tumor size (grouped into clinically relevant size intervals) and pathological T stage using a chi-square test. Of our cohort of 157 patients, 7 were excluded from this analysis because they did not have a Fuhrman grade recorded in the database. 
For the analysis we grouped tumors into low-grade (Fuhrman 1 or 2) and high-grade (Fuhrman 3 or 4).", "A total of 157 patients were identified, among whom there were 51 (32.5%) women and 106 men (67.5%). The mean (range) patient age was 63.3 (34-100) years. The patients underwent surgery between 1998 and 2007. There were 18 (11.5%) patients treated with partial nephrectomy (10 laparoscopic and 8 open procedures), and 139 (88.5%) treated with radical nephrectomy (100 laparoscopic and 39 open). The histological tumor subtype was conventional in 126 (80.3%) patients, papillary in 16 (10.2%), chromophobe in 11 (7.0%) and other in 4 (2.5%). The pathological tumor stage (according to the 2009 TNM staging system) was T1a in 58 (36.9%) patients, T1b in 41 (26.1%), T2a in 18 (11.5%), T2b in 5 (3.2%), T3a in 30 (19.1%), T3b in 2 (1.3%), T3c in 2 (1.3%) and T4 in 1 (0.6%). Demographic data for our study population is summarized in Table 1.\nDemographic data for 157 patients\n† PN - Partial nephrectomy\n‡ RN - Radical nephrectomy\nA scatter plot of pathological tumor size against radiological tumor size is shown in Figure 1. Overall, the mean radiological tumor size on CT was 58.3 mm (SD 29.2 mm) and the mean pathological size was 55.2 mm (SD 30.5 mm). On average, CT overestimated pathological size by 3.1 mm (95% CI: 0.7 to 5.5 mm, P = 0.012). CT overestimated pathological tumor size in 92 (58.6%) patients, underestimated in 44 (28.0%) patients and equaled pathological size in 21 (13.4%) patients.\nScatter plot of pathological tumor size against radiological tumor size.\nAmong the 122 patients with pT1 or pT2 tumors, there was a discrepancy between clinical and pathological staging in 35 (29%) patients. Of these, 21 (17%) patients were down-staged post-operatively and 14 (11.5%) were up-staged. This data is summarized in Table 2.\nDiscrepancy between clinical and pathological stage in 122 pT1 and pT2 tumors.\nTable 3 shows the mean radiological and pathological tumor sizes divided into 10 mm size intervals by radiological size. Mean radiological size was greater than mean pathological size for all size intervals, except for the 50 - 59 mm and 70 - 79 mm categories. This only reached statistical significance for tumors in the 80 - 89 mm category, for which mean radiological size was 13 mm larger than mean pathological size (95% CI: 1.26 to 24.74 mm, P = 0.034).\nMean radiological and pathological tumor size (mm) divided into 10 mm size intervals by radiological size.\n† mm = millimetres\n‡ SD = standard deviation\n§ CI = confidence interval\nTable 4 shows the mean radiological and pathological tumor sizes separated into clinically relevant size intervals, corresponding to T1a (≤4 cm), T1b (>4 cm but ≤7 cm), T2a (>7 cm but ≤10 cm) and T2b (>10 cm) stages. For all three groups, mean radiological size was greater than mean pathological size but the difference did not achieve statistical significance.\nMean radiological and pathological tumor size (mm) divided into clinically relevant size intervals by radiological size.\n† SD = standard deviation\n‡ CI = confidence interval\nTable 5 shows the mean radiological and pathological tumor sizes for the different histological sub-types. For conventional RCC, CT overestimated pathological size by an average of 3.8 mm (95% CI 1.25 to 6.39 mm, P = 0.004). 
There was no statistically significant difference for the other histological subtypes.\nMean radiological and pathological tumor size (mm) by histological subtype\n† SD = standard deviation\n‡ CI = confidence interval\nTable 6 shows the mean radiological and pathological tumor sizes stratified by type of procedure. For tumors removed by radical nephrectomy, the mean radiological size was 3.4 mm larger than the mean pathological size (95% CI: 0.71 to 6.02 mm, P = 0.013). There was no statistically significant difference detected for tumors removed by partial nephrectomy.\nMean radiological and pathological tumor size (mm) stratified according to type of procedure\n† SD = standard deviation\n‡ CI = confidence interval\nTable 7 shows radiological tumor size (grouped into clinically relevant size intervals) distributed according to Fuhrman grade. High-grade disease (Fuhrman 3 or 4) was more common in larger tumors (≤4 cm vs >4 cm but ≤7 cm vs >7 cm; P = 0.039). The prevalence of high-grade disease was 24.5%, 31.1% and 50.0% for tumors ≤4 cm, >4 cm but ≤7 cm, >7 cm respectively. Table 8 shows pathological T stage (grouped into T1a, T1b and ≥ T2) distributed according to Fuhrman grade. There was a statistically significant positive correlation between Fuhrman grade and tumor stage (P = 0.003).\nRadiological tumor size (mm) distributed according to Fuhrman grade.\n† N/A = Not available.\nPathological T stage distributed according to Fuhrman grade.\n† N/A = Not available.", "Tumor size is an important prognostic indicator for RCC. Outcome of nephrectomy has been studied according to pathological tumor size. Pre-operatively, we must rely upon CT estimates of pathological tumor size to guide counseling regarding prognosis and choice of treatment modality. Furthermore, ablative techniques for renal tumors do not provide specimens for pathological assessment of tumor size. When comparing emerging ablative techniques to the benchmark of nephrectomy, we are comparing data based on pathological tumor size to data based on CT size. Therefore, it is important to understand the relationship between radiological tumor size and pathological tumor size, and to understand how any difference between the two measurements affects the accuracy of clinical staging.\nOur study of a contemporary Australian cohort found that overall CT overestimated pathological tumor size by a statistically significant but small amount (3.1 mm). This observation is consistent with the findings of previous studies. The findings of recent papers comparing mean radiological and mean pathological renal tumors sizes are summarized in Table 9. Kurta et al[28] reported on the largest series (N = 521), and found that mean radiological tumor size was larger than mean pathological tumor size by 1 mm. Similarly, CT was found to overestimate pathological tumor size overall by 6.3 mm in a study by Herr[35], and by 10.0 mm in a paper by Irani et al[33]. Schlomer et al[31] found no statistically significant difference overall, but found that CT overestimated pathological size for pT1a tumors by 3.9 mm and for lesions 40 to 50 mm by 8.7 mm. Similarly, Lee et al[24] found a statistically significant overestimation of pathological tumor size by CT for tumors in the 40 to 50 mm range only, by an average of 2 mm. Choi et al[25] found that CT tumor size was on average larger than pathological size for smaller tumors only (<6 cm or T1). 
In several other series, mean radiological tumor size was greater than mean pathological size, but the difference did not reach statistical significance[27,29,30,32,34,36]. Only one study reported an underestimation of pathological tumor size by CT overall, and this achieved statistical significance for T1a tumors only[26].\nSummary of previous studies comparing mean radiological and mean pathological renal tumor sizes.\n† Used median CT size and median pathological size.\n‡ mm = millimetres\nAnalysis by histological subtype in our series showed a statistically significant difference for conventional RCC only, with CT overestimating pathological size by an average of 3.8 mm. The small number of papillary (N = 18) and chromophobe (N = 11) tumors included in our study meant we were unlikely to detect a statistically significant difference. Several studies have shown that CT size is greater than pathological size on average for conventional RCC, and smaller than pathological size on average for papillary RCC[24,28,32]. Kurta et al[28] found that CT overestimated pathological tumor size by 2.3 mm for conventional RCC and underestimated pathological tumor size by 5.4 mm for papillary RCC. Similarly, Lee et al[24] found that CT size was 1.4 mm greater than pathological size on average for conventional RCC, and 5.3 mm smaller for papillary RCC. In contrast, Herr[34] found that pathological size was overestimated on CT for all histological subtypes, and that the overestimation was significantly greater for conventional RCC compared to other subtypes (9.7 mm versus 3.9 mm). Similarly, Choi et al[25] demonstrated that mean radiological tumor size was larger than mean pathological tumor size for all histological subtypes, but there was no significant difference between groups.\nThe discrepancy between clinical and pathological tumor size has been attributed to decreased tumor vascularity after excision, leading to a diminished size post-operatively[34]. This effect is probably more pronounced for clear cell carcinomas because they typically have a richer vascular network than other histological subtypes. Yaycioglu et al[32] postulated that certain radiological and pathological features might influence the accuracy of tumor size measurement by CT. These features included: concomitant pyelonephritis, presence of hemorrhage or hematoma, cystic tumor or adjacent cysts, dilatation of adjacent renal calyces and invasion of the collecting system. The same study found that tumor invasion of perinephric tissues impacted upon the accuracy of CT. For these tumors, CT more frequently underestimated pathological size when compared to tumors confined to the kidney. Ates et al[26] demonstrated less accurate CT measurement of tumor size for locally invasive tumors. It may be more difficult to delineate the radiographic margin of invasive tumors on CT, leading to disagreement between radiological and pathological tumor sizes. Ates et al[26] also found more accurate measurement of tumors size on CT for exophytic lesions. Herr[35] found that CT more closely approximated pathological tumor size for upper pole tumors, but other studies have failed to confirm this finding[24,32,33]. Additionally, in our study the radiological and pathological tumor sizes were not necessarily measured in the same geometric plane and this could contribute to the discrepancy between the two measurements. 
The largest tumor diameter on CT was measured in the axial plane, and this did not always correspond to the plane in which the largest diameter was measured at pathological exam. Formalin fixation is known to cause tumor shrinkage[38], but in our series the pathological specimens were examined prior to fixation.\nInaccurate CT estimation of pathological tumor size led to discordance between clinical and pathological stage in over one quarter of tumors limited to the kidney in our study (pT1, pT2). Of these, 21 (17%) patients were down-staged and 14 (11.5%) up-staged post-operatively. There is limited published data on the impact that disagreement between radiological and pathological tumor sizes may have on staging discrepancies. Kanofsky et al[30] reported on a series of 198 renal cell carcinomas and identified 21 patients for whom disagreement between CT and pathological tumor size led to discrepancy between clinical and pathological tumor stage. Of these, 15 patients were down-staged and 6 up-staged post-operatively. Ates et al[26] found that differences between radiological and pathological measurements led to staging discrepancies in 19 of 86 patients, with 6 patients being down-staged and 13 patients being up-staged post-operatively. Kurta et al[28] and Lee et al[24] only reported cases of post-operative down-staging. Kurta et al demonstrated that among 258 patients with CT tumor size greater than 4 cm, 30 (11.6%) had a pathological size of less than 4 cm. Among 92 patients with CT tumor size greater than 7 cm, 7 (7.6%) had a pathological size of less than 7 cm. Lee et al demonstrated similar results. Of the 141 patients with CT tumor size between 4 cm and 7 cm, 17 (12.1%) had a pathological size less than 4 cm. Of the 87 patients with CT tumor size greater than 7 cm, 8 (9.2%) had a pathological size of less than 7 cm.\nFor these patients, pre-operative counseling regarding prognosis and management would have been based on a clinical tumor stage that was ultimately down-staged or up-staged based on pathological tumor size. Thus, although the magnitude of the mean difference between radiological and pathological tumor sizes is only 3.1 mm, there are cases where the discrepancy may impact upon clinical management.\nAuthors disagree about the clinical implications of the small but statistically significant difference between CT and pathological tumor size. Some studies conclude that CT adequately approximates pathological tumor size[24,26,32,34], and that any discrepancy between the two measurements has minimal impact on patient management[28]. Other authors point out that overestimation of pathological size on CT could affect selection of patients for elective PN[29,31,33,34]. PN is the standard of care for T1a tumors (≤4 cm) [17]. Mistry et al[29] report that 5 (5%) of their patients who were not offered elective PN based on a CT tumor size > 40 mm, had a pathological size ≤4 cm. Likewise, 3 patients out of 100 included in the study by Irani et al[33] were ineligible for elective PN based on CT size > 40 mm, but had a pathological size ≤4 cm. However, with the growing impetus to use PN for all amenable T1 tumors[15,16], tumor size is becoming less important for determining patient eligibility for PN. 
Several authors argue that the decision to perform elective PN should be based on technical feasibility and patient preference rather than a rigid tumor size cut-off[12,13,18,19,39].\nThe discrepancy between radiological and pathological tumor size could have implications for the use of ablative techniques and active surveillance for RCC. These approaches produce no specimen for pathological assessment, and so we must rely upon CT estimates of tumor size to guide management. Decision-making under these circumstances is aided by the small number of studies that report tumor prognosis according to radiological tumor size. Kanao et al[40] have recently developed a preoperative prognostic nomogram based on clinical staging to predict survival after nephrectomy. Raj et al[41] have also developed a preoperative nomogram to predict the development of metastases after nephrectomy. Such prognostic data based on clinical information can be used as a benchmark against which the oncological outcome of ablative techniques can be compared.\nOur finding of a positive correlation between Fuhrman grade and tumor size supports previous observations. Thompson et al[42] (N = 1523) and Frank et al[43] (N = 2559) both demonstrated that larger tumors were more likely to harbor high-grade disease, with each 1 cm increase in tumor size carrying a 25 - 32% increased risk of high-grade disease (Fuhrman 3 or 4). Analysis of tumors grouped according to various tumor size breakpoints (3 cm[44], 4 cm[45], 5 cm[46]) has also shown a higher prevalence of high-grade disease in the larger size groups. In contrast, Klatte et al classified tumors by an 11 cm breakpoint and found that Fuhrman grade was similar in the two groups[47]. Our finding that Fuhrman grade correlated with tumor stage is also consistent with findings from other studies[48,49]. The relationship between tumor size and Fuhrman grade has implications for patient counseling and management, particularly if electing active surveillance.\nOur study has several shortcomings. It is a retrospective single institution analysis. The small numbers of papillary and chromophobe histological subtypes, and the small number of patients treated with partial nephrectomy were inadequately powered to detect a difference. Likewise, when categorized into 1 cm size intervals, several groups had insufficient numbers to detect a difference. There was no record of when the pre-operative CT was performed, and so we could not standardize the interval between imaging and surgery. Furthermore, there was no uniform protocol for measurement of CT tumor size and pathological tumor size. There was no centralized review of measurements by a single radiologist or pathologist.\nA follow-up prospective multi-centre study with larger numbers and a uniform protocol for tumor measurement should be performed to further elucidate the relationship between CT and pathological tumor size. There is also a need for studies examining the correlation between clinical and pathological staging for RCC. Studies that report prognosis according to radiological rather than pathological tumor size would guide us in making treatment decisions based on clinical tumor size. The development and validation of pre-operative prognostic nomograms would also aid decision-making.", "There was a statistically significant but small overestimation (3.1 mm) of pathological size by CT overall, but this is of uncertain clinical significance. 
For some patients, the difference leads to a discrepancy between clinical and pathological staging, which may have implications for pre-operative patient counseling regarding prognosis and choice of treatment strategy.", "The authors declare that they have no competing interests.", "NJ drafted the manuscript. ND and DG were responsible for creating and maintaining the Westmead Renal Tumor database. MP conceived the idea of the study and revised the manuscript. All authors have read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2490/11/2/prepub\n" ]
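The statistical comparisons described in the Methods above, a paired Student's t-test on each patient's radiological and pathological sizes and a chi-square test relating Fuhrman grade to size group, were run in SPSS 15.0. The following is a rough Python/SciPy sketch of the same calculations; the paired measurements and contingency counts are invented for illustration and are not the study data.

```python
import numpy as np
from scipy import stats

# Illustrative paired measurements in mm (one pair per patient);
# the real analysis used SPSS 15.0 on 157 pairs, these numbers are made up.
radiological = np.array([42.0, 55.0, 63.0, 88.0, 31.0, 70.0])
pathological = np.array([40.0, 51.0, 65.0, 75.0, 30.0, 68.0])

# Paired Student's t-test on the per-patient differences, mirroring the
# comparison of mean radiological versus mean pathological tumor size.
t_stat, p_value = stats.ttest_rel(radiological, pathological)
mean_diff = np.mean(radiological - pathological)
print(f"mean difference = {mean_diff:.1f} mm, t = {t_stat:.2f}, P = {p_value:.3f}")

# Chi-square test of independence for Fuhrman grade (low vs high)
# across the clinically relevant size groups, again with made-up counts.
#                        <=4 cm  4-7 cm  >7 cm
contingency = np.array([[40,     30,     20],   # low grade (Fuhrman 1-2)
                        [13,     14,     20]])  # high grade (Fuhrman 3-4)
chi2, p, dof, expected = stats.chi2_contingency(contingency)
print(f"chi-square = {chi2:.2f}, dof = {dof}, P = {p:.3f}")
```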
[ null, "methods", null, null, null, null, null, null ]
[]
Development of Metarhizium anisopliae and Beauveria bassiana formulations for control of malaria mosquito larvae.
21342492
The entomopathogenic fungi Metarhizium anisopliae and Beauveria bassiana have demonstrated effectiveness against anopheline larvae in the laboratory. However, utilising these fungi for the control of anopheline larvae under field conditions relies on the development of effective means of application, as well as on reducing their sensitivity to UV radiation, high temperatures and the inevitable contact with water. This study was conducted to develop formulations that facilitate the application of Metarhizium anisopliae and Beauveria bassiana spores for the control of anopheline larvae, and also to improve their persistence under field conditions.
BACKGROUND
Laboratory bioassays were conducted to test the ability of aqueous (0.1% Tween 80), dry (organic and inorganic) and oil (mineral and synthetic) formulations to facilitate the spread of fungal spores over the water surface and improve the efficacy of formulated spores against anopheline larvae as well as improve spore survival after application. Field bioassays were then carried out to test the efficacy of the most promising formulation under field conditions in western Kenya.
METHODS
When formulated in a synthetic oil (ShellSol T), fungal spores of both Metarhizium anisopliae and Beauveria bassiana were easy to mix and apply to the water surface. This formulation was more effective against anopheline larvae than 0.1% Tween 80, dry powders or mineral oil formulations. ShellSol T also improved the persistence of fungal spores after application to the water. Under field conditions in Kenya, the percentage pupation of An. gambiae was significantly reduced by 39 - 50% by the ShellSol T-formulated Metarhizium anisopliae and Beauveria bassiana spores as compared to the effects of the application of unformulated spores.
RESULTS
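The 39-50% figure quoted above is a relative reduction in the proportion of larvae pupating in treated containers compared with their controls. A minimal sketch of that arithmetic, using invented counts rather than the field data:

```python
def percent_reduction(pupation_control: float, pupation_treated: float) -> float:
    """Relative reduction in pupation (%) of a treated group versus its control."""
    return 100.0 * (pupation_control - pupation_treated) / pupation_control

# Hypothetical example: 30 of 40 larvae pupate in a control container
# and 16 of 40 in a treated container.
print(round(percent_reduction(30 / 40, 16 / 40)))  # about 47, within the 39-50% range reported
```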
ShellSol T is an effective carrier for fungal spores when targeting anopheline larvae under both laboratory and field conditions. Entomopathogenic fungi formulated with a suitable carrier are a promising tool for control of larval populations of malaria mosquitoes. Additional studies are required to identify the best delivery method (where, when and how) to make use of the entomopathogenic potential of these fungi against anopheline larvae.
CONCLUSIONS
[ "Animals", "Beauveria", "Chemistry, Pharmaceutical", "Culicidae", "Disease Vectors", "Drug Carriers", "Female", "Kenya", "Larva", "Metarhizium", "Mosquito Control", "Pest Control, Biological" ]
3051916
null
null
Methods
[SUBTITLE] Mosquitoes [SUBSECTION] Anopheles gambiae s.s. (Suakoko strain, courtesy of Prof. M. Coluzzi, reared in laboratory for 23 years) and An. stephensi (Strain STE 2, MR4 no. 128, origin India, reared in laboratory for 2 years after obtaining the eggs from MR4) were reared separately, but under similar conditions, in climate-controlled rooms at Wageningen University, The Netherlands. The temperature was maintained at 27 ± 1°C. Relative humidity was set at 70 ± 5% and the rooms had a 12L:12D photoperiod. Larvae were kept in plastic trays filled with tap water. First instar larvae were fed on Liquifry No. 1 (Interpet Ltd., Surrey, UK) while older instar stages were fed on Tetramin® (Tetra, Melle, Germany). The resulting pupae were transferred to holding cages (30 × 30 × 30 cm) in small cups, where they emerged as adults with ad libitum access to 6% glucose water. The female mosquitoes were blood-fed with the Hemotek membrane feeding system. Human blood (Sanquin®, Nijmegen, The Netherlands) was used for this purpose and mosquitoes could feed on it through a Parafilm M® membrane. Eggs were laid on moist filter paper, and were subsequently transferred to the larval trays. For the field bioassays, An. gambiae s.s. eggs (Kisumu strain, reared in laboratory for 8 years) were obtained from the Kenya Medical Research Institute (KEMRI) and reared at the Ahero Multipurpose Development Training Institute (AMDTI), Kenya. Rearing was carried out under local climate conditions (described below) and larvae were fed on Tetramin®. 
[SUBTITLE] Fungal spores [SUBSECTION] Metarhizium anisopliae (ICIPE-30) and Beauveria bassiana (IMI-391510) spores were obtained from the Department of Bioprocess Engineering, Wageningen University, and stored as dry spores in Falcon™ tubes at 4°C. Metarhizium anisopliae spores are olivaceous green, cylindrical and 2.5-3.5 μm long while B. bassiana spores are hyaline, spherical or sub-spherical and have a diameter of 2-3 μm [27]. 
[SUBTITLE] Carrier materials [SUBSECTION] Wheat flour, white pepper, WaterSavr (WaterSavr™, Sodium bicarbonate version, Flexible Solutions International Ltd., Victoria BC, Canada), 0.1% Tween 80 aqueous solution, Ondina oil 917 (Shell Ondina® Oil 917, Shell, The Netherlands) and ShellSol T (Shellsol T®, Shell, The Netherlands) were tested for their potential as carriers of fungal spores. Wheat flour and white pepper served as organic dry carriers. These were tested because anopheline larvae are known to aggregate around and feed on powdered organic materials (wheat flour, alfalfa flour, blood meal and liver powder) even when a choice of inorganic materials (chalk, charcoal and kaolin) is also available [28]. One inorganic dry powder, known as WaterSavr, was also tested. WaterSavr consists of fine bicarbonate granules that self-spread over the water surface forming a thin layer which has been shown to reduce evaporation [29]. Its biodegradability, safety and surface-spreading features made it a suitable candidate for inclusion in our tests. Surfactants, such as Tween 80, can be used to overcome the hydrophobic nature of fungal spores and form a homogeneous aqueous solution. Fungal spores formulated in Tween 80 have been used in bioassays to test the efficacy of fungal spores against mosquito larvae [13,16,30-34]. ShellSol T is a synthetic isoparaffinic hydrocarbon solvent. Ondina oil 917, slightly denser than ShellSol T, is a highly refined mineral oil. Both ShellSol T and Ondina oil 917 have been successfully used as carriers for fungal spores to target the adult stage of mosquitoes [1,35]. 
[SUBTITLE] Formulations [SUBSECTION] The first selection of carriers suitable for formulating entomopathogenic fungal spores consisted of a test in which the carrier material was evaluated for its ability to spread over the water surface. For this purpose, plastic trays (25 × 25 × 8 cm) were filled with 1 L of tap water and the carriers applied on the water surface (441 cm2). The least amount of each carrier required to cover the entire surface was recorded. Once that amount was determined, M. anisopliae spores (10 mg, ~ 4.7 × 10^8 spores) were added to the carriers. The quantity of the carriers was increased to make a consistent suspension or mixture of fungal spores and carriers. The resulting formulations were applied to select the carriers that spread the spores evenly over the water surface. Metarhizium anisopliae spores were used because of their colour (olivaceous green) which made it easy to visualize them whilst spreading. 
[SUBTITLE] Efficacy of formulations against Anopheles gambiae larvae [SUBSECTION] The next step consisted of testing selected formulations against An. gambiae larvae in laboratory bioassays. Bioassays were performed under climatic conditions similar to the mosquito rearing. Plastic trays (25 × 25 × 8 cm) were filled with 1 L of tap water and allowed to acclimatise overnight. Fifty second-instar larvae were added to each tray. Unformulated or formulated spores were applied to the water surface of each tray. The number of larvae that died or pupated was recorded daily for the next eight days. For each treatment, the carrier alone (in the same quantity as in the formulation) served as the control. In the case of unformulated spores, the control was untreated tap water. The larvae were provided with Tetramin® as food at the rate of 0.2 - 0.3 mg/larva per day. The experiments were replicated three times. 
[SUBTITLE] Pathogenicity of floating unformulated spores over time [SUBSECTION] A third experiment was performed to evaluate how the pathogenicity of fungal spores is affected by being in contact with water over a time period of seven days. At the start, 15 plastic trays (same size as above) were each filled with one liter of water. These trays were kept overnight in a climate-controlled room to acclimatise. Metarhizium anisopliae spores were applied to the water surface in five trays (10 mg per tray). Similarly, 10 mg of B. bassiana spores (~ 2 × 10^9 spores) were applied on the water surface in five other trays. The remaining five trays served as the control. After one day, 50 second-instar An. stephensi larvae were added to one of the trays treated with M. anisopliae spores, B. bassiana spores and one untreated control tray. Similarly, larvae were added to the remaining trays either 2, 3, 5 or 7 days after fungal treatment. The mortality and/or pupation was followed for 9 days. The larvae were fed at the same rate as mentioned before. This experiment was replicated three times. 
[SUBTITLE] Effect of formulation on persistence of pathogenicity [SUBSECTION] Based on the results of the formulation experiments, the carriers WaterSavr and ShellSol T were selected and tested further for their ability to increase the persistence of pathogenicity in fungal spores in contact with water. Unformulated and formulated (either with WaterSavr or ShellSol T) M. anisopliae and B. bassiana spores were applied to plastic trays containing 1 L of acclimatized water. One replicate consisted of 18 trays. Each pair of trays received one of the following nine treatments: (1) 10 mg of dry M. anisopliae spores, (2) 10 mg of dry B. bassiana spores, (3) M. anisopliae spores mixed with WaterSavr (10 mg/130 mg), (4) B. bassiana spores mixed with WaterSavr (10 mg/130 mg), (5) M. anisopliae spores mixed with ShellSol T (10 mg/200 μl), (6) B. bassiana spores mixed with ShellSol T (10 mg/200 μl), (7) WaterSavr (130 mg) only, (8) ShellSol T (200 μl) only or (9) no treatment. Trays treated with WaterSavr or ShellSol T without fungal spores and the untreated trays served as control for their respective treatments. Fifty second-instar An. stephensi larvae were added to one tray of each pair on the same day the fungal spores were applied. The same number of larvae was added to the other tray of the pair on the seventh day (based on the results of the previous experiment). The larvae were checked for mortality or pupation for the following 10 days after being added to the trays. The experiment was replicated three times. The trays were topped up with acclimatised tap water, every other day, to compensate for evaporation. 
[SUBTITLE] Field bioassays [SUBSECTION] To evaluate the efficacy of unformulated and formulated fungal spores in the field, experiments were carried out in Kenya in May and June, 2010. The experiments were conducted in a restricted part of the Ahero Multipurpose Development and Training Institute (AMDTI) compound. This institute is located 24 km southeast of Kisumu, in western Kenya (0°10'S, 34°55'E). Malaria is highly endemic in this region and transmission occurs throughout the year. A mean annual Plasmodium falciparum sporozoite inoculation rate (EIR) of 0.4-17 infective bites per year has been shown by recent studies for this region [36]. The region has an annual mean temperature range of 17°C to 32°C, average annual rainfall of 1,000 - 1,800 mm and average relative humidity of 65% [37]. Bioassays were conducted outdoors in 33 plastic containers (0.30 m diameter). The plastic containers had two nylon-screened holes (3 cm2), close to the brim, allowing excess rain water to flow out while retaining the larvae. Dry soil from a rice paddy at the Ahero irrigation scheme (4 km from AMDTI) was softened up by adding water. The softened soil was placed at the bottom of each plastic container to form a 2 cm thick layer. One L of tap water was then added to each plastic container. The water level was 3 cm above soil level and exposed a surface area of 450 cm2. Each plastic container was placed in a larger tub that also had a bottom layer of soil but was filled with water to the top. 
[SUBTITLE] Field bioassays [SUBSECTION] To evaluate the efficacy of unformulated and formulated fungal spores in the field, experiments were carried out in Kenya in May and June 2010. The experiments were conducted in a restricted part of the Ahero Multipurpose Development and Training Institute (AMDTI) compound. This institute is located 24 km southeast of Kisumu, in western Kenya (0°10'S, 34°55'E). Malaria is highly endemic in this region and transmission occurs throughout the year. Recent studies have reported a mean annual Plasmodium falciparum sporozoite inoculation rate (EIR) of 0.4-17 infective bites per year for this region [36]. The region has an annual mean temperature range of 17°C to 32°C, average annual rainfall of 1,000-1,800 mm and average relative humidity of 65% [37].

Bioassays were conducted outdoors in 33 plastic containers (0.30 m diameter). The plastic containers had two nylon-screened holes (3 cm²) close to the brim, allowing excess rain water to flow out while retaining the larvae. Dry soil from a rice paddy at the Ahero irrigation scheme (4 km from AMDTI) was softened by adding water. The softened soil was placed at the bottom of each plastic container to form a 2 cm thick layer. One litre of tap water was then added to each plastic container. The water level was 3 cm above soil level and exposed a surface area of 450 cm². Each plastic container was placed in a larger tub that also had a bottom layer of soil but was filled with water to the top. The larger tubs were employed to prevent ants from accessing the plastic container inside. Forty second-instar An. gambiae s.s. larvae were added to each container. The large tubs, with the containers inside, were arranged in three rows 0.5 m apart from each other (Figure 1a).

Figure 1. Field bioassays. (a) Forty An. gambiae larvae were placed in plastic containers (with nylon-screened holes, indicated by an arrow) with a soil layer (2 cm) at the bottom and a 3 cm layer of water. The screened holes were a precautionary measure to retain larvae in the containers in case of overflow due to heavy rain. The plastic containers were placed in larger tubs, also filled with soil and water, to prevent ants from accessing the bioassays. (b) Unformulated (dry) Metarhizium anisopliae (10 mg) spores applied on the water surface. Note the two large clumps just outside the centre of the containers. (c) ShellSol T-formulated Metarhizium anisopliae (10 mg) spores applied on the water surface. Note that spores are spread more evenly over the surface by ShellSol T than dry spores (Figure 1b).

Dry and ShellSol T-formulated spores of both fungal species were tested; ShellSol T was the only formulation that met all the criteria investigated in the laboratory studies. Two different concentrations (10 mg spores/200 μl ShellSol T and 20 mg spores/230 μl ShellSol T) of both M. anisopliae and B. bassiana spores were tested. For the larger amount of spores, 230 μl ShellSol T was required to make a consistent suspension. Each treatment was randomly applied to three plastic containers. The 11 treatments consisted of dry M. anisopliae spores (10 mg and 20 mg), dry B. bassiana spores (10 mg and 20 mg), ShellSol T-formulated M. anisopliae spores (10 mg/200 μl and 20 mg/230 μl), ShellSol T-formulated B. bassiana spores (10 mg/200 μl and 20 mg/230 μl) and ShellSol T only (200 μl and 230 μl); the remaining treatment was an untreated control. The ShellSol T-only (200 μl and 230 μl) and the untreated containers served as controls for their respective treatments. The number of larvae that died in the containers could not be recorded because it was difficult to recover them in the turbid water and/or bottom soil; therefore, larval survival was assessed as the number of pupae produced. No food was provided to the larvae after being placed in the containers. The plastic containers were checked twice daily for the following 15 days and pupae were removed with a dipper. To prevent oviposition or emergence of local mosquitoes in the water of the larger tubs in which the treated plastic containers were placed, Aquatain (a silicone-based oil) was applied to the water surface [38]. Water (0.5 L, kept outdoors in jerry cans) was added to every plastic container when the water level had been reduced by evaporation to less than 1 cm. Meteorological data were obtained from the National Irrigation Board (NIB) research station located approximately 4 km from the experimental site. Water surface (5 mm top layer) temperature was measured daily at the same time, in each container, with a digital thermometer (GTH 175/Pt, Greisinger electronics, Germany).
[SUBTITLE] Statistical analysis [SUBSECTION] Differences in larval survival were analysed using Cox regression [39]. The survival of larvae treated with formulated or unformulated fungal spores was compared with that of the respective control larvae, and the resulting hazard ratio (HR) values were used to evaluate differences in mortality rates. The proportional hazards assumption of the Cox regression was tested by plotting the cumulative hazard rates against time for the treated and control groups and confirming that the resulting curves did not cross [40]. To test the pathogenicity of fungal spores over time, HRs were computed for larvae exposed to spores that had been floating on water for different time periods. In addition, the proportions of dead larvae were corrected for their respective controls using Abbott's formula, arcsine-square-root transformed, and compared directly by one-way ANOVA with LSD post-hoc tests [41]. The persistence of pathogenicity of formulated and unformulated spores was compared in the same way. The arcsine-square-root transformed proportions of larvae that pupated in the field trial were compared by one-way ANOVA and LSD post-hoc tests. All analyses were performed using SPSS version 15 (SPSS Inc., Chicago, IL, USA).
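As an illustration of the survival analysis described above, the sketch below shows how the hazard ratios and the proportional-hazards check could be reproduced in Python with the lifelines package (the original analysis was run in SPSS 15). It assumes a per-larva table with hypothetical column names: 'day' (day of death or end of follow-up), 'died' (1 = died, 0 = censored, e.g. pupated or alive at the end) and 'treated' (1 = fungus-treated, 0 = control).

```python
# Illustrative re-analysis sketch; the study's own analysis used SPSS 15.
import pandas as pd
from lifelines import CoxPHFitter, NelsonAalenFitter

def hazard_ratio(df: pd.DataFrame) -> pd.Series:
    """Fit a Cox proportional hazards model and return the HR for 'treated'."""
    cph = CoxPHFitter()
    cph.fit(df[["day", "died", "treated"]], duration_col="day", event_col="died")
    return cph.hazard_ratios_          # exp(coef); confidence limits are in cph.summary

def plot_cumulative_hazards(df: pd.DataFrame, ax=None):
    """Mirror the paper's proportional-hazards check: plot the cumulative hazard
    of treated and control larvae and verify the curves do not cross."""
    naf = NelsonAalenFitter()
    for label, group in df.groupby("treated"):
        naf.fit(group["day"], event_observed=group["died"],
                label="treated" if label else "control")
        ax = naf.plot(ax=ax)           # plots the Nelson-Aalen cumulative hazard
    return ax
```

A hazard ratio above 1, as reported throughout the Results, indicates a higher daily risk of death in the fungus-treated group than in its control.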
[ "Recently, theoretical and experimental studies have shown the potential of entomopathogenic fungi as next generation agents for the control of malaria mosquitoes [1-5] However, most of this work has focused on targeting adult mosquitoes. Larval control has a convincing history of malaria eradication and recent studies have also shown this approach to be highly effective [6-11]. It is, therefore, worthwhile to investigate the ability of entomopathogenic fungi to control mosquito larvae and the feasibility of their operational use.\nOur previous work showed the efficacy of Metarhizium anisopliae (ICIPE-30) and Beauveria bassiana (IMI- 391510) spores in infecting and killing larvae of Anopheles stephensi and An. gambiae under laboratory conditions [12]. Other isolates of M. anisopliae and B. bassiana have also been shown to affect culicine and anopheline larvae [13-17]. The main infection sites were the feeding and respiratory apparatus [16]. Most of these studies had been carried out in the laboratory and proved the application of dry fungal spores to be more effective than the application of formulated spores [13,14,18]. Applying dry spores in the field, however, has certain limitations. Fungal spores are hydrophobic by nature so when applied in an aquatic environment, they clump together, reducing the area that is effectively covered. As a result, massive amounts of fungal spores are required. Contact with water also disrupts the infection process. Attachment of spores to the host is an important step of the infection process. The outer layer of spores has highly organised surface proteins known as rodlets, which are mainly responsible for attachment to the host [19]. For successful infection, germination should follow spore attachment to the host. When dry fungal spores are applied to an aquatic habitat, typical for mosquito larvae, the nutrients in the water are usually sufficient to stimulate germination in the spores following water intake [20,21]. Once a spore germinates, the outer layer is ruptured reducing the chance of attachment to the host. Water contact, thus, reduces the pathogenicity of the floating spores. In addition, dry unformulated fungal spores are more exposed to UV radiation and high temperatures, which are known to negatively affect spore persistence and germination rate [22,23].\nIn addition to strain selection and genetic modification, formulation can have a considerable impact on improving the efficacy of biopesticides. An ideal formulation aids the handling and application of the biopesticides, as well as increases its efficacy by improving contact with the host and protecting the active agent from environmental factors [24]. Considering the surface feeding behaviour of anopheline larvae, any formulation intended to infect them should spread the fungal spores over the water surface [25,26]. The larvae are then most likely to come in contact with spores. The spores should spread uniformly, providing equal coverage, over the entire treated area. In addition, spores should be prevented from germinating before host attachment, and at least to some extent be protected from environmental factors. In this context we developed and tested dry (organic and inorganic), oil (mineral and synthetic) and water-based formulations of M. anisopliae and B. bassiana for their efficacy against anopheline larvae.\nThe objectives of this study were to (a) develop formulations suitable for the positioning (water surface or bottom) and uniform spread of M. anisopliae and B. 
bassiana spores, (b) assess the efficacy of selected spore formulations in killing anopheline larvae, (c) assess the selected formulations for their potential to increase spore persistence, and (d) assess the potential of formulations to suppress populations of mosquito larvae in a field situation.", "Anopheles gambiae s.s. (Suakoko strain, courtesy of Prof. M. Coluzzi, reared in laboratory for 23 years) and An. stephensi (Strain STE 2, MR4 no. 128, origin India, reared in laboratory for 2 years after obtaining the eggs from MR4) were reared separately, but under similar conditions, in climate-controlled rooms at Wageningen University, The Netherlands. The temperature was maintained at 27 ± 1°C. Relative humidity was set at 70 ± 5% and the rooms had a 12L:12D photoperiod. Larvae were kept in plastic trays filled with tap water. First instar larvae were fed on Liquifry No. 1 (Interpet Ltd., Surrey, UK) while older instar stages were fed on Tetramin® (Tetra, Melle, Germany). The resulting pupae were transferred to holding cages (30 × 30 × 30 cm) in small cups, where they emerged as adults with ad libitum access to 6% glucose water. The female mosquitoes were blood-fed with the Hemotek membrane feeding system. Human blood (Sanquin®, Nijmegen, The Netherlands) was used for this purpose and mosquitoes could feed on it through a Parafilm M® membrane. Eggs were laid on moist filter paper, and were subsequently transferred to the larval trays. For the field bioassays An. gambiae s.s. eggs (Kisumu, strain, reared in laboratory for 8 years) were obtained from the Kenya Medical Research Institute (KEMRI) and reared at the Ahero Multipurpose Development Training Institute (AMDTI), Kenya. Rearing was carried out under local climate conditions (described below) and larvae were fed on Tetramin®.", "Metarhizium anisopliae (ICIPE-30) and Beauveria bassiana (IMI- 391510) spores were obtained from the Department of Bioprocess Engineering, Wageningen University, and stored as dry spores in Falcon™ tubes at 4°C. Metarhizium anisopliae spores are olivaceous green, cylindrical and 2.5-3.5 μm long while B. bassiana spores are hyaline, spherical or sub-spherical and have a diameter of 2-3 μm [27].", "Wheat flour, white pepper, WaterSavr (WaterSavr™, Sodium bicarbonate version, Flexible Solutions International Ltd., Victoria BC, Canada), 0.1% Tween 80 aqueous solution, Ondina oil 917 (Shell Ondina® Oil 917, Shell, The Netherlands) and ShellSol T (Shellsol T®, Shell, The Netherlands) were tested for their potential as carrier of fungal spores. Wheat flour and white pepper served as organic dry carriers. These were tested because anopheline larvae are known to aggregate around and feed on powdered organic materials (wheat flour, alfalfa flour, blood meal and liver powder) even when a choice of inorganic materials (chalk, charcoal and kaolin) is also available [28]. One inorganic dry powder, known as WaterSavr, was also tested. WaterSavr consists of fine bicarbonate granules that self-spread over the water surface forming a thin layer which has been shown to reduce evaporation [29]. Its biodegradability, safety and surface-spreading features made it a suitable candidate for inclusion in our tests. Surfactants, such as Tween 80, can be used to overcome the hydrophobic nature of fungal spores and form a homogeneous aqueous solution. Fungal spores formulated in Tween 80 have been used in bioassays to test the efficacy of fungal spores against mosquito larvae [13,16,30-34]. 
ShellSol T is a synthetic isoparaffinic hydrocarbon solvent. Ondina oil 917, slightly denser than ShellSol T, is a highly refined mineral oil. Both ShellSol T and Ondina oil 917 have been successfully used as carrier for fungal spores to target the adult stage of mosquitoes [1,35].", "The first selection of carriers suitable for formulating entomopathogenic fungal spores consisted of a test in which the carrier material was evaluated for its ability to spread over the water surface. For this purpose, plastic trays (25 × 25 × 8 cm) were filled with 1 L of tap water and the carriers applied on the water surface (441 cm2). The least amount of each carrier required to cover the entire surface was recorded. Once that amount was determined, M. anisopliae spores (10 mg, ~ 4.7 × 108 spores) were added to the carriers. The quantity of the carriers was increased to make a consistent suspension or mixture of fungal spores and carriers. The resulting formulations were applied to select the carriers that spread the spores evenly over the water surface evenly. Metarhizium anisopliae spores were used because of their colour (olivaceous green) which made it easy to visualize them whilst spreading.", "The next step consisted of testing selected formulations against An. gambiae larvae in laboratory bioassays. Bioassays were performed under climatic conditions similar to the mosquito rearing. Plastic trays (25 × 25 × 8 cm) were filled with 1 L of tap water and allowed to acclimatise overnight. Fifty second-instar larvae were added to each tray. Unformulated or formulated spores were applied to the water surface of each tray. The number of larvae that died or pupated was recorded daily for the next eight days. For each treatment, the carrier alone (in the same quantity as in the formulation) served as the control. In the case of unformulated spores, the control was untreated tap water. The larvae were provided with Tetramin® as food at the rate of 0.2 - 0.3 mg/larva per day. The experiments were replicated three times.", "A third experiment was performed to evaluate how the pathogenicity of fungal spores is affected by being in contact with water over a time period of seven days. At the start, 15 plastic trays (same size as above) were each filled with one liter of water. These trays were kept overnight in a climate-controlled room to acclimitise. Metarhizium anisopliae spores were applied to the water surface in five trays (10 mg per tray). Similarly, 10 mg of B. bassiana spores (~ 2 × 109 spores) were applied on the water surface in five other trays. The remaining five trays served as the control. After one day, 50 second-instar An. stephensi larvae were added to one of the trays treated with M. anisopliae spores, B. bassiana spores and one untreated control tray. Similarly larvae were added to remaining trays after either 2, 3, 5 or 7 days after fungal treatment. The mortality and/or pupation was followed for 9 days. The larvae were fed at the same rate as mentioned before. This experiment was replicated three times.", "Based on the results of the formulation experiments, the carriers WaterSavr and ShellSol T were selected and tested further for their ability to increase the persistence of pathogenicity in fungal spores in contact with water. Unformulated and formulated (either with WaterSavr or ShellSol T) M. anisopliae and B. bassiana spores were applied to plastic trays containing 1 L of acclimatized water. One replicate consisted of 18 trays. 
A pair of trays was applied with one of the following nine treatments: (1) 10 mg of dry M. anisopliae spores, (2) 10 mg of dry B. bassiana spores, (3) M. anisopliae spores mixed with WaterSavr (10 mg/130 mg), (4) B. bassiana spores mixed with WaterSavr (10 mg/130 mg), (5) M. anisopliae spores mixed with ShellSol T (10 mg/200 μl), (6) B. bassiana spores mixed with ShellSol T (10 mg/200 μl), (7) WaterSavr (130 mg) only, (8) ShellSol (200 μl) only or (9) no treatment. Trays treated with WaterSavr or ShellSol T without fungal spores and the untreated trays served as control for their respective treatments. Fifty second-instar An. stephensi larvae were added to one tray of each pair on the same day the fungal spores were applied. The same number of larvae was added to the other tray of the pair on the seventh day (based on the results of the previous experiment). The larvae were checked for mortality or pupation for the following 10 days after being added to the trays. The experiment was replicated three times. The trays were topped up with acclimatised tap water, every other day, to compensate for evaporation.", "To evaluate the efficacy of unformulated and formulated fungal spores in the field, experiments were carried out in Kenya in May and June, 2010. The experiments were conducted in a restricted part of the Ahero Multipurpose Development and Training Institute (AMDTI) compound. This institute is located 24 km southeast of Kisumu, in western Kenya (0°10'S, 34°55'E). Malaria is highly endemic in this region and transmission occurs throughout the year. A mean annual Plasmodium falciparum sporozoite inoculation rates (EIR) of 0.4-17 infective bites per year has been shown by recent studies for this region [36]. The region has an annual mean temperature range of 17°C to 32°C, average annual rainfall of 1,000 - 1,800 mm and average relative humidity of 65% [37].\nBioassays were conducted outdoors in 33 plastic containers (0.30 m diameter). The plastic containers had two nylon-screened holes (3 cm2), close to the brim, allowing excess rain water to flow out while retaining the larvae. Dry soil from a rice paddy at the Ahero irrigation scheme (4 km from AMDTI) was softened up by adding water. The softened soil was placed at the bottom of each plastic container to form a 2 cm thick layer. One L of tap water was then added to each plastic container. The water level was 3 cm above soil level and exposed a surface area of 450 cm2. Each plastic container was placed in a larger tub that also had a bottom layer of soil but was filled with water to the top. The larger tubs were employed to prevent ants from accessing the plastic container inside. Forty second-instar An. gambiae s.s. larvae, were added to each container. The large tubs, with the containers inside, were arranged in three rows 0.5 m apart from each other (Figure 1a).\nField bioassays. (a) Forty An. gambiae larvae were placed in plastic containers (with nylon screened holes, indicated by an arrow) with a soil layer (2 cm) at the bottom and a 3 cm layer of water. The screened holes were a precautionary measure to retain larvae in the tubs in case of overflow due to heavy rain. The plastic containers were placed in larger tubs, also filled with soil and water, to prevent ants from access to the bioassays. (b) Unformulated (dry) Metarhizium anisopliae (10 mg) spores applied on the water surface. Note the two large clumps just outside the centre of the containers. 
(c) Shellsol T-formulated Metarhizium anisopliae (10 mg) spores applied on the water surface. Note that spores are spread more evenly over the surface by ShellSol T than dry spores (Figure b).\nDry and ShellSol T formulated spores of both fungal species were tested. ShellSol T was the only formulation that successfully met the criteria investigated in the laboratory studies. Two different concentrations (10 mg spores/200 μl ShellSol T and 20 mg spores/230 μl ShellSol T) of both M. anisopliae and B. bassiana spores were tested. For the larger amount of spores, 230 μl ShellSol T was required to make a consistent suspension. Each treatment was randomly applied to three plastic containers. The 11 treatments consisted of dry M. anisopliae spores (10 mg and 20 mg), dry B. bassiana spores (10 mg and 20 mg), ShellSol T formulated M. anisopliae spores (10 mg/200 μl and 20 mg/230 μl), ShellSol T formulated B. bassiana spores (10 mg/200 μl and 20 mg/230 μl) and only ShellSol T (200 μl and 230 μl) while the one remaining tub was untreated. The ShellSol T (200 μl and 230 μl) and the untreated container served as control for their respective treatments. The number of larvae that died in the containers could not be recorded because it was difficult to recover them in the turbid water and/or bottom soil. Therefore, larval survival was assessed as the number of pupae produced. No food was provided to the larvae after being placed in the container. The plastic containers were checked twice daily (for the following 15 days) and pupae were removed with a dipper. To prevent oviposition or emergence of local mosquitoes in the water of larger tubs in which treated plastic containers were placed, Aquatain (a silicone-based oil) was applied to the water surface [38]. Water (0.5 L, kept outdoors in Jerry cans) was added to every plastic container when the water level had been reduced by evaporation to less than 1 cm. Meteorological data was obtained from the National Irrigation Board (NIB) research station located approximately 4 km from the experimental site. Water surface (5 mm top layer) temperature was measured daily at the same time, in each container, with a digital thermometer (GTH 175/Pt, Greisinger electronics, Germany).", "Differences in larval survival were analysed using Cox regression [39]. The survival of larvae treated with formulated or unformulated fungal spores were compared with their respective control larvae and the resulting Hazard Ratio (HR) values were used to evaluate differences in mortality rates. The proportional hazard assumption of Cox regression was tested by plotting the cumulative hazard rates against time for the treated and control groups to confirm that the resulting curves did not cross [40].\nTo test the pathogenicity of fungal spores over time, HR's were computed for larvae exposed to spores floating on water for different time periods. In addition, the arcsine-square root transformed proportions of dead larvae were compared directly, after being corrected for their respective controls using the Abbott's formula, by a one-way ANOVA and LSD post-hoc test of the arcsine transformed mortality proportion [41]. Similarly, the persistence of pathogenicity in formulated and unformulated spores was also compared. The arcsine-square root transformed proportions of larvae that pupated in the field trial were compared by one-way ANOVA and LSD post-hoc tests. All the analyses were performed using SPSS version 15 software (SPSS Inc. 
Results

[SUBTITLE] Formulations [SUBSECTION] In the case of both ShellSol T and Ondina oil 917, 100 μl of the oil was required to cover a water surface of 441 cm². The amounts could not be determined for 0.1% Tween 80 and wheat flour: the Tween 80 solution could not be visualised as it is colourless, and the wheat flour formed clumps rather than spreading. White pepper spread evenly across the water surface, and 30 mg of it was sufficient to cover the entire surface area. Similarly, 130 mg of WaterSavr spread over and covered the water surface of 441 cm² (Table 1). After determining these amounts, 10 mg of Metarhizium anisopliae spores was added to each of the carriers. The quantity of ShellSol T and Ondina oil 917 had to be doubled (200 μl) to form a homogeneous suspension. In the case of the 0.1% Tween 80 solution, 4 ml was required to form a consistent suspension. Wheat flour was not tested further because of clumping. The quantities of white pepper and WaterSavr (30 mg and 130 mg, respectively) required to cover the water surface (441 cm²) were also enough to form a consistent mixture with 10 mg of fungal spores (Table 1). All formulations, apart from the 0.1% Tween 80 solution, which caused the spores to sink, resulted in a fairly uniform spread of fungal spores on the water surface (Table 1). Therefore the 0.1% Tween 80 solution was not tested further.

Table 1. Carriers tested for their ability to spread spores and the composition of the formulations tested. The table gives the amount of each carrier required to cover a water surface area of 441 cm², the amount required to form a consistent mixture with 10 mg of Metarhizium anisopliae spores, the ability of the carriers to spread the spores over the water surface and the composition of formulations with suitable carriers. '--' = not tested or could not be determined.

[SUBTITLE] Efficacy of formulations against Anopheles gambiae larvae [SUBSECTION] Bioassays were conducted against An. gambiae larvae with unformulated M. anisopliae spores (10 mg) and M. anisopliae spores formulated in pepper (10 mg/30 mg), WaterSavr (10 mg/130 mg), ShellSol T (10 mg/200 μl) or Ondina oil 917 (10 mg/200 μl). Only 2.7 ± 1.8% of the larvae treated with unformulated M. anisopliae spores pupated, while 47.6 ± 3.9% pupated in the relevant control. The treated larvae had a nearly two times higher daily risk of mortality than the untreated control larvae (HR (95% CI) = 1.8 (1.4-2.4), Table 2, Figure 2a). The WaterSavr formulation reduced pupation of the larvae from 67.2 ± 10.6% to 1.3 ± 0.6%, exposing the formulation-treated larvae to a nearly three times higher daily risk of mortality than the control (Table 2, Figure 2c). With the ShellSol T formulation, 1.3 ± 0.6% of the treated larvae pupated, while larvae treated with ShellSol T alone (without fungal spores) showed 85.4 ± 14.5% pupation. Larvae exposed to ShellSol T-formulated spores of M. anisopliae had a mortality risk nearly four times higher than larvae treated with ShellSol T only (HR (95% CI) = 3.7 (2.5-5.4), Table 2, Figure 2e). With white pepper and Ondina oil, however, there was no significant difference in mortality between larvae treated with the carrier alone and larvae treated with the carrier plus fungal spores: both pepper and Ondina oil 917 killed 100% of the larvae even without fungal spores (Table 2, Figure 2b and 2d). These two carriers were not tested further, as the objective was to develop a formulation that enhances the spreading and efficacy of the fungal spores to infect and kill larvae.

Table 2. Percentage pupation and hazard ratios of larvae exposed to the tested formulations. Average percentage pupation (± S.E.) of An. gambiae larvae exposed to unformulated and formulated Metarhizium anisopliae spores (n = 3). The carrier in each formulation (white pepper, WaterSavr, Ondina oil 917 or ShellSol T) served as the control; in the case of unformulated spores the control was completely untreated. Carrier and Metarhizium anisopliae spores together formed the treatment. Hazard ratios (HR) indicate the mortality risk in the treatments as compared to their respective controls.

Figure 2. Laboratory bioassays to test the efficacy of unformulated and formulated Metarhizium anisopliae spores. The average percentage cumulative survival (± S.E.) of An. gambiae larvae (n = 3) exposed to (a) unformulated Metarhizium anisopliae spores (control (C) and unformulated spores (Ma spores)), (b) pepper (control (P)) and pepper-formulated spores (Ma spores + P), (c) WaterSavr (control (WS)) and WaterSavr-formulated spores (Ma spores + WS), (d) Ondina oil (control (OO)) and Ondina oil-formulated spores (Ma spores + OO), and (e) ShellSol T (control (SS)) and ShellSol T-formulated spores (Ma spores + SS) over 8 days post-treatment. Larvae that pupated are included as surviving.
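The hazard ratios reported above can be translated into expected survival under the proportional hazards assumption used in the analysis: if a fraction of control larvae survives to a given day, the expected surviving fraction in the treated group is that fraction raised to the power HR. The short sketch below illustrates this relationship; the control survival value is an example only, not a value from the study.

```python
# Under proportional hazards, S_treated(t) = S_control(t) ** HR.
# The example control survival value is illustrative only.

def treated_survival(control_survival: float, hazard_ratio: float) -> float:
    """Expected surviving fraction in the treated group, given the surviving
    fraction in the control group and the hazard ratio of treatment vs control."""
    return control_survival ** hazard_ratio

# e.g. if 60% of control larvae were still alive on a given day, a hazard ratio
# of 3.7 (ShellSol T-formulated M. anisopliae vs ShellSol T alone) would imply
# roughly 0.6 ** 3.7, i.e. about 15% survival in the treated group.
print(treated_survival(0.60, 3.7))
```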
[SUBTITLE] Pathogenicity of floating unformulated spores over time [SUBSECTION] The pathogenicity of dry M. anisopliae and B. bassiana spores was substantially reduced over a period of five days (Figure 3). Anopheles stephensi larvae exposed to M. anisopliae spores that had been applied to the water seven days earlier showed a pupation proportion similar to that of their control (Table 3). Beauveria bassiana spores lost their effectiveness after being in contact with water for three days; Metarhizium anisopliae spores lost their effectiveness after five days (Table 3). After seven days, the control mortality was significantly higher than the mortality of larvae exposed to the M. anisopliae treatment.

Figure 3. Laboratory bioassays to test the persistence of floating unformulated fungal spores. The average percentage corrected mortality (± S.E.) of An. stephensi larvae (n = 3) exposed to spores of Metarhizium anisopliae and Beauveria bassiana that had been floating on the water surface for 1, 2, 3, 5 or 7 days. Bars with a letter in common show no significant difference (LSD post-hoc test, α = 0.05).

Table 3. Percentage pupation and hazard ratios of larvae exposed to unformulated floating fungal spores. Average percentage pupation (± S.E.) in control and treated An. stephensi larvae exposed to Metarhizium anisopliae and Beauveria bassiana spores floating on the water surface for 1, 2, 3, 5 and 7 days (n = 3). The controls consisted of untreated trays filled with water at the same time as the treated trays. Hazard ratios (HR) indicate the mortality risk of larvae as compared to the controls for both Metarhizium anisopliae and Beauveria bassiana spores. a. An HR lower than 1 represents higher mortality in the control group.
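The "corrected mortality" plotted in Figure 3 and compared by ANOVA follows the procedure described under Statistical analysis: treatment mortality is corrected for control mortality with Abbott's formula, arcsine-square-root transformed, and then compared across exposure days. The sketch below shows one way this could be reproduced in Python (the original analysis was run in SPSS); the LSD post-hoc step is approximated here by unadjusted pairwise t-tests on the transformed values.

```python
# Abbott-corrected mortality, arcsine-square-root transform and one-way ANOVA.
# Illustrative sketch only; the study's analysis was run in SPSS 15.
from itertools import combinations
import numpy as np
from scipy import stats

def abbott_corrected(p_treated: float, p_control: float) -> float:
    """Abbott's formula: treatment mortality corrected for control mortality."""
    return (p_treated - p_control) / (1.0 - p_control)

def asin_sqrt(p):
    """Arcsine-square-root transform of a proportion (stabilises the variance)."""
    return np.arcsin(np.sqrt(np.clip(p, 0.0, 1.0)))

def compare_groups(groups: dict):
    """One-way ANOVA on transformed proportions, followed by LSD-style
    (unadjusted) pairwise t-tests between exposure-day groups."""
    transformed = {k: asin_sqrt(np.asarray(v)) for k, v in groups.items()}
    f, p = stats.f_oneway(*transformed.values())
    pairwise = {
        (a, b): stats.ttest_ind(transformed[a], transformed[b]).pvalue
        for a, b in combinations(transformed, 2)
    }
    return f, p, pairwise
```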
[SUBTITLE] Effect of formulation on persistence of pathogenicity [SUBSECTION] Fungal spores formulated with ShellSol T were more persistent than unformulated spores or spores formulated in WaterSavr. Seven days after application, only ShellSol T-formulated fungal spores (of both M. anisopliae and B. bassiana) still caused significant mortality in An. stephensi larvae (Table 4). Formulation in WaterSavr appeared to reduce the efficacy of the fungal spores. When An. stephensi larvae were exposed to WaterSavr-formulated M. anisopliae and B. bassiana spores on the same day the spores were applied, the corrected proportion of larval mortality was significantly lower than in larvae exposed to unformulated M. anisopliae and B. bassiana spores. Larvae exposed to M. anisopliae spores formulated with WaterSavr, applied that same day, had a lower mortality risk (HR (95% CI) = 8.9 (4.4-18.1)) than those exposed to unformulated spores (HR (95% CI) = 44.6 (10.9-181.7)). There was no significant difference in the corrected proportion of mortality between larvae exposed to unformulated and WaterSavr-formulated M. anisopliae spores seven days after their application to the water (Figure 4). Similar results were observed for B. bassiana spores: there was no significant difference between the corrected larval mortality proportions caused by unformulated and WaterSavr-formulated B. bassiana spores applied to water seven days before the larvae were exposed. In addition, the proportion of larval mortality caused by WaterSavr-formulated B. bassiana spores was significantly lower than that caused by ShellSol T-formulated B. bassiana spores (Figure 4).

Table 4. Hazard ratios of larvae exposed to (un)formulated fungal spores, 0 and 7 days post-application. Hazard ratios (HR) indicate the mortality risk of An. stephensi larvae exposed to unformulated, WaterSavr-formulated and ShellSol T-formulated Metarhizium anisopliae and Beauveria bassiana spores, 0 and 7 days after application (n = 3).

Figure 4. Laboratory bioassays to test the persistence of formulated fungal spores. The average percentage corrected mortality (± S.E.) of An. stephensi larvae (n = 3) exposed to unformulated, WaterSavr-formulated and ShellSol T-formulated Metarhizium anisopliae (Ma) and Beauveria bassiana (Bb) spores, immediately (Day 0) or seven days (Day 7) after application. Letters in common (upper case for Ma, lower case for Bb) show no significant difference (LSD post-hoc test, α = 0.05).
[SUBTITLE] Field bioassays [SUBSECTION] During the experimental period (15 days), the mean minimum and maximum temperatures were 15.7°C and 30.9°C, respectively, with a mean relative humidity of 54% and total rainfall of 19.4 mm. Water surface temperature ranged from 21°C to 38.8°C. As in the laboratory observations, unformulated spores clumped together on the water surface (Figure 1b) while ShellSol T-formulated fungal spores were uniformly spread (Figure 1c).

The efficacy of unformulated fungal spores was low under field conditions compared with laboratory conditions. At dose rates of both 10 mg and 20 mg, the same (p > 0.05) level of pupation was observed in An. gambiae larvae treated with unformulated M. anisopliae and B. bassiana spores as in untreated An. gambiae larvae (Figure 5). As observed in the laboratory bioassays, ShellSol T on its own had no harmful effect on larval development and pupation: a similar proportion (p > 0.05) of larvae pupated in the containers treated with ShellSol T (200 μl and 230 μl) and in the untreated containers (Figure 5).

Figure 5. Field bioassays testing the efficacy of fungal spores formulated in ShellSol T. The average percentage pupation of An. gambiae larvae (n = 3) exposed to unformulated and ShellSol T-formulated Metarhizium anisopliae (Ma) or Beauveria bassiana (Bb) spores at two doses, 10 mg/200 μl and 20 mg/230 μl. Controls included no treatment at all or treatment with only ShellSol T (200 μl or 230 μl). Letters in common show no significant difference (LSD post-hoc test, α = 0.05).

The percentage pupation observed in An. gambiae larvae treated with ShellSol T-formulated M. anisopliae spores was 43% (low dose, 10 mg) and 49% (high dose, 20 mg) lower than that of the corresponding unformulated treatments; however, for the lower dose (10 mg) the proportion of larvae that pupated was not significantly different (p = 0.08, Figure 5). The percentage pupation observed in An. gambiae larvae treated with ShellSol T-formulated B. bassiana spores was 39% (low dose, 10 mg) and 50% (high dose, 20 mg) lower than that of the corresponding unformulated treatments; at both the lower and the higher dose the proportion of larvae that pupated was significantly different (p < 0.05, Figure 5).
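The 39-50% figures above are relative reductions in pupation of the formulated treatments compared with the corresponding unformulated treatments. A minimal sketch of that calculation is given below; the input percentages are hypothetical and only illustrate how such a reduction is derived from the pupation values shown in Figure 5.

```python
# Relative reduction in pupation of a formulated treatment compared with the
# corresponding unformulated treatment. The input percentages are hypothetical.

def relative_reduction(unformulated_pupation: float, formulated_pupation: float) -> float:
    """Percentage by which pupation in the formulated treatment is lower than
    in the unformulated treatment."""
    return 100.0 * (unformulated_pupation - formulated_pupation) / unformulated_pupation

# e.g. hypothetical pupation of 70% with dry spores vs 40% with formulated spores
# corresponds to a relative reduction of about 43%.
print(round(relative_reduction(70.0, 40.0), 1))
```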
Field bioassays
During the experimental period (15 days), the mean minimum and maximum temperatures were 15.7°C and 30.9°C, respectively, with a mean relative humidity of 54% and total rainfall of 19.4 mm. Water surface temperature ranged from 21°C to 38.8°C. As in the laboratory observations, unformulated spores clumped together on the water surface (Figure 1b), whereas ShellSol T-formulated fungal spores were uniformly spread (Figure 1c).

The efficacy of unformulated fungal spores was lower under field conditions than under laboratory conditions. At dose rates of both 10 mg and 20 mg, the same (p > 0.05) level of pupation was observed in An. gambiae larvae treated with unformulated M. anisopliae and B. bassiana spores as in untreated An. gambiae larvae (Figure 5). As observed in the laboratory bioassays, ShellSol T on its own had no harmful effect on larval development and pupation: a similar proportion (p > 0.05) of larvae pupated in the containers treated with ShellSol T (200 μl and 230 μl) and in the untreated containers (Figure 5).

Figure 5. Field bioassays testing the efficacy of fungal spores formulated in ShellSol T. The average percentage pupation of An. gambiae larvae (n = 3) exposed to unformulated and ShellSol T-formulated Metarhizium anisopliae (Ma) or Beauveria bassiana (Bb) spores at two doses, 10 mg/200 μl and 20 mg/230 μl. Controls included no treatment at all or treatment with ShellSol T only (200 μl or 230 μl). Letters in common show no significant difference (LSD post hoc test, α = 0.05).

The percentage pupation observed in An. gambiae larvae treated with ShellSol T-formulated M. anisopliae spores was 43% (low dose, 10 mg) and 49% (high dose, 20 mg) lower than in the corresponding unformulated treatments, although for the lower dose (10 mg) the proportion of larvae that pupated was not significantly different (p = 0.08, Figure 5). The percentage pupation observed in An. gambiae larvae treated with ShellSol T-formulated B. bassiana spores was 39% (low dose, 10 mg) and 50% (high dose, 20 mg) lower than in the corresponding unformulated treatments; at both doses the proportion of larvae that pupated was significantly different (p < 0.05, Figure 5).

Discussion
The results of this study show how certain formulations can improve the ability of entomopathogenic fungal spores to spread over a water surface as well as increase their persistence, and that better spreading and persistence lead to enhanced efficacy of the spores. The study also demonstrates that both M. anisopliae and B. bassiana, when formulated in ShellSol T, had a strong impact on the survival of An. gambiae s.s. larvae under field conditions.

Anopheles stephensi and An. gambiae larvae were found to be equally susceptible to unformulated M. anisopliae and B. bassiana spores [12]. This suggests that these fungi are likely to affect other anopheline vector species as well.

Formulating fungal spores with Tween 80 or wheat flour was found to be unsuitable. Spores formulated with Tween 80 did not spread over the water surface, the primary feeding site of anopheline larvae, but sank to the bottom [25,28]. Surfactants are also known to impair attachment of the spore to the host, so even if the spores had spread on the water surface they would not have been effective against anopheline larvae [20,42]. Wheat flour, although its organic nature could have allowed it to serve as a bait, did not spread the fungal spores over the water surface [28]; it clumped together and sank.

Powdered pepper and Ondina oil caused 100% mortality in anopheline larvae even without fungal spores.
Extracts of fruits of the Piperaceae family have been shown to be toxic to Aedes aegypti L. larvae [43], but the exact toxicity mechanism remains unclear. Although fungal spores were effectively spread with white pepper, pepper was considered an unsuitable carrier because of its own toxic effect on the anopheline larvae. Ondina oil, in the amount tested (200 μl), formed an oily layer over the water surface that caused the larvae to suffocate. Compared with ShellSol T, Ondina oil is denser and evaporates less, which may explain the difference in mortality observed between the Ondina oil and ShellSol T controls. The amount of Ondina oil tested could not be reduced because a smaller volume did not produce a homogeneous suspension with the fungal spores.

Dry unformulated M. anisopliae and B. bassiana spores lost their pathogenicity five days after being applied to the water surface, as the survival of larvae exposed to the fungal spores five days after application was similar to that of the controls. Similar results were reported by Alves et al. (2002), where M. anisopliae caused no mortality in Cx. quinquefasciatus Say larvae introduced four days after the spores were applied [13]. This contrasts with Pereira et al. (2009), who found M. anisopliae spores to cause 50% mortality in Ae. aegypti larvae exposed to fungal spores that had been applied ten days previously [34]. The studies mentioned here were carried out under controlled climate conditions (25-27°C) in the laboratory. Under field conditions the spores are likely to lose their pathogenicity more quickly because of exposure to high temperatures and UV radiation. This may explain why unformulated fungal spores did not cause any significant reduction in pupation in the field bioassays, where water surface temperatures as high as 38.8°C were measured. The measured water surface temperatures agree with those reported by Paaijmans et al. (2008) for similarly sized water bodies and are known to exhibit high daily fluctuations [44].

When the larvae were exposed to fungal spores on the same day the spores were applied, unformulated spores and spores formulated in WaterSavr or ShellSol T all caused larval mortality over the next few days. However, only fungal spores formulated in ShellSol T caused significantly higher mortality in larvae introduced seven days after the fungal spores had been applied. Fungal spores formulated in ShellSol T possibly remained pathogenic because ShellSol T prevented the spores from absorbing the amount of moisture required to stimulate germination [21,31]. ShellSol T was also considered a good carrier of fungal spores in other studies [31,45]. WaterSavr, on the other hand, did not protect the fungal spores.

ShellSol T was the only formulation tested in the field, as the laboratory results showed high persistence of pathogenicity only in fungal spores formulated with this product. Unformulated M. anisopliae and B. bassiana did not suppress the larval population effectively in the field. In contrast to the situation in the laboratory, the spores in the field were exposed to sunlight, rain and fluctuating temperatures, which might have reduced spore survival. By contrast, only 10-20% of the larvae treated with spores formulated in ShellSol T developed into pupae. Both M. anisopliae and B. bassiana spores were found to be equally effective when formulated in ShellSol T.
Oil formulations are known to improve spore survival, improve fungal efficacy against insects and reduce spore sensitivity to UV radiation [31,45].

In the field, the residual effect of formulated spores could not be tested beyond a certain number of days because the plastic containers began to harbour Culex larvae and had to be drained. The presence of Culex larvae indicates that ovipositing female Culex mosquitoes were not repelled by the fungus treatment. An oviposition-repellent effect is a disadvantage for a larval control agent, because repelled females are forced to seek out and deposit their eggs at alternative, untreated sites; the agent then only targets the existing larval population and needs to be reapplied after the site has been colonised again. Studies specifically designed to establish the response of ovipositing anopheline females to fungal spores, and the residual effect of fungal spore treatment, are required for a better understanding. Oil-formulated M. anisopliae spores have been shown to have increased ovicidal activity against Ae. aegypti eggs [46]; this could be an added advantage if anopheline eggs are affected by M. anisopliae spores in a similar way.

The pathogenicity of control agents in the field is generally lower than in laboratory settings [47]. In the field bioassays, therefore, a higher dose (20 mg/450 cm2) of fungal spores was tested alongside the dose used in the laboratory (10 mg/441 cm2). The laboratory dose, however, performed similarly in the field, reducing pupation to a level comparable to the higher dose. Doses lower than those used in the current study should therefore be evaluated to establish the lowest effective amount of fungal spores required to treat a given area.

ShellSol T was a candidate carrier that not only facilitated the application of spores but also improved their efficacy, by maximising the chance of contact with the larvae (spreading the spores over the water surface) and by increasing spore persistence. The fungal spores readily suspend in ShellSol T with slight agitation. This is advantageous because the spores can be conveniently mixed with ShellSol T on the spot, which means that during transport and storage only the bio-active agent has to be kept at low temperatures rather than the whole mixture. This can reduce the cooling space required, as ShellSol T itself is a stable product with no particular storage demands. It has been shown that the percentage germination of dry spores is generally higher than that of oil-formulated spores stored at the same temperature for the same number of days [[23]; unpublished data]. Spores of Metarhizium flavoviride stored in oil at 30°C for 90 days had a germination rate of 80%, compared with 90% when stored dry under similar environmental conditions [23]. In this context, it seems more efficient to store fungal spores separately and mix them with the oil component only shortly before application.

The results of this study show the necessity of a good formulation for fungal spores when these are to be used in the field. The efficacy of unformulated (dry) spores was so low in the field situation that their application, as such, is not justified.
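To relate the dose rates discussed above to the treatment of larger habitats, they can be converted into approximate per-square-metre application rates. The sketch below is only a back-of-the-envelope illustration: the spore count per milligram is derived from the roughly 4.7 × 10^8 M. anisopliae spores contained in the 10 mg doses used in this study, and will differ between spore batches and fungal species.

```python
# Back-of-the-envelope conversion of the tested dose rates to per-square-metre
# application rates. Spore density per mg assumes ~4.7e8 M. anisopliae spores
# per 10 mg, as reported for the doses used here; treat all figures as rough.
LAB_DOSE_MG, LAB_AREA_CM2 = 10.0, 441.0      # laboratory trays
FIELD_DOSE_MG, FIELD_AREA_CM2 = 20.0, 450.0  # higher field dose
SPORES_PER_MG = 4.7e8 / 10.0                 # ~4.7e7 spores per mg

def rate_per_m2(dose_mg, area_cm2):
    scale = 10_000.0 / area_cm2              # cm^2 -> m^2
    dose_g_per_m2 = dose_mg * scale / 1000.0
    spores_per_m2 = dose_mg * SPORES_PER_MG * scale
    return dose_g_per_m2, spores_per_m2

print(rate_per_m2(LAB_DOSE_MG, LAB_AREA_CM2))      # ~0.23 g and ~1.1e10 spores per m^2
print(rate_per_m2(FIELD_DOSE_MG, FIELD_AREA_CM2))  # ~0.44 g and ~2.1e10 spores per m^2
```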
While ShellSol T-formulated spores were highly effective in killing anopheline larvae in the field, an important point to consider is the potential increased risk to non-target organisms due to the improved persistence of the spores and/or undesirable properties of the solvent [33,48-50]. ShellSol T has low toxicity to fish, aquatic invertebrates and microorganisms, with adverse effects reported only at concentrations above 1 g/litre [51]. Given the volume of ShellSol T tested here (200-230 μl per litre of water), the ShellSol T concentration was about 0.15 g/L, nearly seven times lower than the lowest lethal concentration. ShellSol T also evaporates and is therefore less likely to remain in aquatic habitats. Detailed safety studies are nevertheless necessary for a better understanding of any adverse effects ShellSol T might have on the environment and non-target organisms at the required doses.

Besides formulation, it is very important to identify the best delivery method (where, when and how) to fully utilise the entomopathogenic potential of M. anisopliae and B. bassiana spores. The frequency of re-application has to be determined from the residual effect of formulated spores in the field. The feasibility of applying formulated spores at artificial breeding sites baited to attract ovipositing females is also worth testing [52]. A good delivery system will reduce the chances of non-target organisms coming into contact with fungal spores.

Conclusions
From the candidate products tested for the formulation of entomopathogenic fungi, ShellSol T emerged as a promising carrier of fungal spores for targeting anopheline larvae. Spores of B. bassiana and M. anisopliae formulated in ShellSol T had increased efficacy against larvae of An. gambiae s.s. compared with unformulated spores and were also more persistent under field conditions in Kenya. Other oils with physical properties similar to ShellSol T may also serve as good carriers. Together with a sound delivery system, these formulated fungi can be used in the field, providing additional tools for the biological control of malaria vectors.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
TB designed the study, carried out the experimental work, performed the statistical analysis and drafted the manuscript. CJMK helped with the study design, the statistical analyses and drafting of the manuscript. WT provided scientific guidance in the interpretation of the findings and in drafting the manuscript. All authors read and approved the final manuscript.
gambiae larvae (n = 3) exposed to (a) Unformulated Metarhizium anisopliae spores (control (C) and unformulated spores (Ma spores) (b) Pepper (control (P)) and Pepper formulated spores (Ma spores + P) (c) WaterSavr (control (WS)) and WaterSavr formulated spores (Ma spores + WS) (d) Ondina Oil (control (OO)) and Ondina oil formulated spores (Ma spores + OO) (e) ShellSol T (control (SS)) and ShellSol T formulated spores (Ma spores + SS) over 8 days post-treatment. Larvae that pupated are included as surviving.\nBioassays were conducted with unformulated M. anisopliae spores (10 mg) and M. anisopliae spores formulated in pepper (10 mg/30 mg), WaterSavr (10 mg/130 mg), ShellSol T (10 mg/200 μl )) or Ondina oil 917 (10 mg/200 μl) against An. gambiae larvae. Only 2.7 ± 1.8% of the larvae treated with unformulated M. anisopliae spores pupated while 47.6 ± 3.9% pupated in the relevant control. The treated larvae had a nearly two times higher daily risk of mortality as compared to the untreated control larvae (HR (95%CI) = 1.8 (1.4-2.4), Table 2, Figure 2a). WaterSavr formulation reduced the pupation of the larvae from 67.2 ± 10.6% to 1.3 ± 0.6%, exposing the formulation-treated larvae to nearly three times higher daily risk of mortality as compared to the control (Table 2, Figure 2c). With the ShellSol T formulation 1.3 ± 0.6% of the treated larvae pupated while the larvae treated with ShellSol T (without fungal spores) showed 85.4 ± 14.5% pupation. Larvae exposed to ShellSol T formulated spores of M. anisopliae had a mortality risk four times higher compared to larvae treated with ShellSol T only (HR (95%CI) = 3.7 (2.5-5.4), Table 2, Figure 2e). However, with white pepper and Ondina oil there was no significant difference in the mortality of larvae treated with the formulation or the carrier alone, or the formulations and fungal spores together. Both pepper and Ondina oil 917 killed 100% larvae even without fungal spores (Table 2, Figure 2b and 2d). These two carriers were not tested further as the objective was to develop a formulation that enhances the spreading and efficacy of the fungal spores to infect and kill larvae.\nPercentage pupation and Hazard ratios of larvae exposed to tested formulations\nAverage percentage pupation (±S.E.) of An. gambiae larvae exposed to unformulated spores and formulated Metarhizium anisopliae spores (n = 3). The carrier in each formulation (White pepper, WaterSavr, Ondina oil 917 or ShellSol T) served as the control. In case of unformulated spores the control was completely untreated. Carrier and Metarhizium anisopliae spores together formed the treatment. Hazard ratio's (HR) indicate the mortality risk in the treatments as compared to their respective controls\nLaboratory bioassays to test the efficacy of unformulated and formulated Metarhizium anisopliae spores. The average percentage cumulative survival (±S.E.) of An. gambiae larvae (n = 3) exposed to (a) Unformulated Metarhizium anisopliae spores (control (C) and unformulated spores (Ma spores) (b) Pepper (control (P)) and Pepper formulated spores (Ma spores + P) (c) WaterSavr (control (WS)) and WaterSavr formulated spores (Ma spores + WS) (d) Ondina Oil (control (OO)) and Ondina oil formulated spores (Ma spores + OO) (e) ShellSol T (control (SS)) and ShellSol T formulated spores (Ma spores + SS) over 8 days post-treatment. Larvae that pupated are included as surviving.\n[SUBTITLE] Pathogenicity of floating unformulated spores over time [SUBSECTION] The pathogenicity of dry M. anisopliae and B. 
bassiana spores was substantially reduced over a period of five days (Figure 3). Anopheles stephensi larvae exposed to M. anisopliae spores, applied to water seven days earlier, showed a similar pupation proportion as their control (Table 3). Beauveria bassiana spores lost their effectiveness after being in contact with water for three days. Metarhizium anisopliae spores lost their effectiveness after five days (Table 3). After seven days the control mortality was significantly higher than the mortality of larvae exposed to M. anisopliae treatment.\nLaboratory bioassays to test the persistence of floating unformulated fungal spores. The average percentage corrected mortality (±S.E.) of An. stephensi larvae (n = 3) exposed to spores of Metarhizium anisopliae and Beauveria bassiana that had been floating on the water surface for 1, 2, 3, 5 or 7 days. Bars with letter in common show no significant difference (LSD post hoc test, α = 0.05).\nPercentage pupation and Hazard ratio's of larvae exposed to unformulated floating fungal spores\nAverage percentage pupation (±S.E.) in the control and treated An. stephensi larvae exposed to Metarhizium anisopliae and Beauveria bassiana spores floating on the water surface for 1, 2, 3, 5 and 7 days (n = 3). The controls consisted of untreated trays filled with water at the same time as the treated trays. Hazard ratio's (HR) indicate the mortality risk of larvae as compared to the controls for both Metarhizium anisopliae and Beauveria bassiana spores\na. HR lower than 1 represents higher mortality in the control group.\nThe pathogenicity of dry M. anisopliae and B. bassiana spores was substantially reduced over a period of five days (Figure 3). Anopheles stephensi larvae exposed to M. anisopliae spores, applied to water seven days earlier, showed a similar pupation proportion as their control (Table 3). Beauveria bassiana spores lost their effectiveness after being in contact with water for three days. Metarhizium anisopliae spores lost their effectiveness after five days (Table 3). After seven days the control mortality was significantly higher than the mortality of larvae exposed to M. anisopliae treatment.\nLaboratory bioassays to test the persistence of floating unformulated fungal spores. The average percentage corrected mortality (±S.E.) of An. stephensi larvae (n = 3) exposed to spores of Metarhizium anisopliae and Beauveria bassiana that had been floating on the water surface for 1, 2, 3, 5 or 7 days. Bars with letter in common show no significant difference (LSD post hoc test, α = 0.05).\nPercentage pupation and Hazard ratio's of larvae exposed to unformulated floating fungal spores\nAverage percentage pupation (±S.E.) in the control and treated An. stephensi larvae exposed to Metarhizium anisopliae and Beauveria bassiana spores floating on the water surface for 1, 2, 3, 5 and 7 days (n = 3). The controls consisted of untreated trays filled with water at the same time as the treated trays. Hazard ratio's (HR) indicate the mortality risk of larvae as compared to the controls for both Metarhizium anisopliae and Beauveria bassiana spores\na. HR lower than 1 represents higher mortality in the control group.\n[SUBTITLE] Effect of formulation on persistence of pathogenicity [SUBSECTION] Fungal spores formulated with ShellSol T were more persistent compared to the unformulated spores or spores formulated in WaterSavr. Seven days after application only ShellSol T formulated fungal spores (both M. anisopliae and B. 
bassiana) still caused significant mortality in the An. stephensi larvae (Table 4). Formulation in WaterSavr seemed to reduce the efficacy of fungal spores. When the An. stephensi larvae were exposed to WaterSavr-formulated M. anisopliae and B. bassiana spores, on the same day the fungal spores were applied, the corrected proportion larval-mortality was significantly lower as compared to larvae exposed to unformulated M. anisopliae and B. bassiana spores. Larvae exposed to M. anisopliae spores formulated with WaterSavr, applied that same day, had a lower mortality risk (HR (95% CI), 8.9 (4.4-18.1)) than those exposed to the unformulated spores (HR (95% CI), 44.6 (10.9-181.7)). There was no significant difference in the corrected proportion mortality of larvae exposed to unformulated and WaterSavr-formulated M. anisopliae spores, seven days after their application on water (Figure 4). Similar results were observed for B. bassiana spores. There was no significant difference between the corrected larval-mortality proportion due to unformulated and WaterSavr formulated B. bassiana spores, applied on water seven days before exposing the larvae. Also, the proportion larval mortality caused by WaterSavr-formulated B. bassiana spores was significantly lower than with ShellSol T-formulated B. bassiana spores (Figure 4).\nHazard ratios of larvae exposed to (un)formulated fungal spores, 0 and 7 days post-application\nHazard ratio's (HR) indicate the mortality risk of An. stephensi larvae exposed to unformulated, WaterSavr-formulated and ShellSol T-formulated Metarhizium anisopliae and Beauveria bassiana spores, 0 and 7 days after application (n = 3)\nLaboratory bioassays to test the persistence of formulated fungal spores. The average percentage corrected mortality (±S.E.) of An. stephensi larvae (n = 3) exposed to unformulated, WaterSavr-formulated and ShellSol T-formulated Metarhizium anisopliae (Ma) and Beauveria bassiana (Bb) spores, immediately (Day 0) or seven days (Day 7) after application. Letters in common (upper case for Ma and lower case for Bb) show no significant difference (LSD post hoc test, α = 0.05).\nFungal spores formulated with ShellSol T were more persistent compared to the unformulated spores or spores formulated in WaterSavr. Seven days after application only ShellSol T formulated fungal spores (both M. anisopliae and B. bassiana) still caused significant mortality in the An. stephensi larvae (Table 4). Formulation in WaterSavr seemed to reduce the efficacy of fungal spores. When the An. stephensi larvae were exposed to WaterSavr-formulated M. anisopliae and B. bassiana spores, on the same day the fungal spores were applied, the corrected proportion larval-mortality was significantly lower as compared to larvae exposed to unformulated M. anisopliae and B. bassiana spores. Larvae exposed to M. anisopliae spores formulated with WaterSavr, applied that same day, had a lower mortality risk (HR (95% CI), 8.9 (4.4-18.1)) than those exposed to the unformulated spores (HR (95% CI), 44.6 (10.9-181.7)). There was no significant difference in the corrected proportion mortality of larvae exposed to unformulated and WaterSavr-formulated M. anisopliae spores, seven days after their application on water (Figure 4). Similar results were observed for B. bassiana spores. There was no significant difference between the corrected larval-mortality proportion due to unformulated and WaterSavr formulated B. bassiana spores, applied on water seven days before exposing the larvae. 
Also, the proportion larval mortality caused by WaterSavr-formulated B. bassiana spores was significantly lower than with ShellSol T-formulated B. bassiana spores (Figure 4).\nHazard ratios of larvae exposed to (un)formulated fungal spores, 0 and 7 days post-application\nHazard ratio's (HR) indicate the mortality risk of An. stephensi larvae exposed to unformulated, WaterSavr-formulated and ShellSol T-formulated Metarhizium anisopliae and Beauveria bassiana spores, 0 and 7 days after application (n = 3)\nLaboratory bioassays to test the persistence of formulated fungal spores. The average percentage corrected mortality (±S.E.) of An. stephensi larvae (n = 3) exposed to unformulated, WaterSavr-formulated and ShellSol T-formulated Metarhizium anisopliae (Ma) and Beauveria bassiana (Bb) spores, immediately (Day 0) or seven days (Day 7) after application. Letters in common (upper case for Ma and lower case for Bb) show no significant difference (LSD post hoc test, α = 0.05).\n[SUBTITLE] Field bioassays [SUBSECTION] During the experimental period (15 days), the mean minimum and maximum temperatures were 15.7°C and 30.9°C, respectively, with a mean relative humidity of 54% and total rainfall of 19.4 mm. Water surface temperature ranged from 21°C to 38.8°C. Similar to the laboratory observations, unformulated spores clumped together on the water surface (Figure 1b) while ShellSol T-formulated fungal spores were uniformly spread (Figure 1c).\nThe efficacy of unformulated fungal spores was found to be low under field conditions as compared to laboratory conditions. At dose rates of both 10 mg and 20 mg, the same (p > 0.05) level of pupation was observed in the An. gambiae larvae treated with unformulated M. anisopliae and B. bassiana spores as in the untreated An. gambiae larvae (Figure 5). As observed in the laboratory bioassays, ShellSol T on its own had no harmful effect on larval development and pupation. A similar proportion (p > 0.05) of larvae pupated in the containers treated with ShellSol T (200 μl and 230 μl) and the untreated containers (Figure 5).\nField bioassays testing the efficacy of fungal spores formulated in ShellSol T. The average percentage pupation of An. gambiae larvae (n = 3) exposed to unformulated and ShellSol T formulated Metarhizium anisopliae (Ma) or Beauveria bassiana (Bb) spores at two doses, 10 mg/200 μl and 20 mg/230 μl. Controls included no treatment at all or treatment with only ShellSol T (200 μl or 230 μl). Letters in common show no significant difference (LSD post hoc test, α = 0.05).\nThe percentage pupation observed in An. gambiae larvae treated with ShellSol T-formulated M. anisopliae spores was 43% (low dose, 10 mg) and 49% (high dose, 20 mg) lower than that of the corresponding unformulated treatments. However for the lower dose (10 mg) the proportion of larvae that pupated was not significantly different (p = 0.08, Figure 5).\nThe percentage pupation observed in An. gambiae larvae treated with ShellSol T-formulated B. bassiana spores was 39% (low dose, 10 mg) and 50% (high dose, 20 mg) lower than that in the corresponding unformulated treatments. At both lower and higher dose the proportion of larvae that pupated was significantly different (p < 0.05, Figure 5).\nDuring the experimental period (15 days), the mean minimum and maximum temperatures were 15.7°C and 30.9°C, respectively, with a mean relative humidity of 54% and total rainfall of 19.4 mm. Water surface temperature ranged from 21°C to 38.8°C. 
Similar to the laboratory observations, unformulated spores clumped together on the water surface (Figure 1b) while ShellSol T-formulated fungal spores were uniformly spread (Figure 1c).\nThe efficacy of unformulated fungal spores was found to be low under field conditions as compared to laboratory conditions. At dose rates of both 10 mg and 20 mg, the same (p > 0.05) level of pupation was observed in the An. gambiae larvae treated with unformulated M. anisopliae and B. bassiana spores as in the untreated An. gambiae larvae (Figure 5). As observed in the laboratory bioassays, ShellSol T on its own had no harmful effect on larval development and pupation. A similar proportion (p > 0.05) of larvae pupated in the containers treated with ShellSol T (200 μl and 230 μl) and the untreated containers (Figure 5).\nField bioassays testing the efficacy of fungal spores formulated in ShellSol T. The average percentage pupation of An. gambiae larvae (n = 3) exposed to unformulated and ShellSol T formulated Metarhizium anisopliae (Ma) or Beauveria bassiana (Bb) spores at two doses, 10 mg/200 μl and 20 mg/230 μl. Controls included no treatment at all or treatment with only ShellSol T (200 μl or 230 μl). Letters in common show no significant difference (LSD post hoc test, α = 0.05).\nThe percentage pupation observed in An. gambiae larvae treated with ShellSol T-formulated M. anisopliae spores was 43% (low dose, 10 mg) and 49% (high dose, 20 mg) lower than that of the corresponding unformulated treatments. However for the lower dose (10 mg) the proportion of larvae that pupated was not significantly different (p = 0.08, Figure 5).\nThe percentage pupation observed in An. gambiae larvae treated with ShellSol T-formulated B. bassiana spores was 39% (low dose, 10 mg) and 50% (high dose, 20 mg) lower than that in the corresponding unformulated treatments. At both lower and higher dose the proportion of larvae that pupated was significantly different (p < 0.05, Figure 5).", "In the case of both ShellSol T and Ondina oil 917, 100 μl of the oil was required to cover a water surface of 441 cm2. The amounts could not be determined for 0.1% Tween 80 and wheat flour. Tween 80 solution could not be visualised as it is colourless. The wheat flour formed clumps rather than spreading. White pepper spread across the water surface evenly and 30 mg of it was sufficient to cover the entire surface area. Similarly 130 mg of Watersavr spread and covered the water surface of 441 cm2 (Table 1). After determining these amounts, 10 mg of Metarhizium anisopliae spores was added to each of the carriers. The quantity of ShellSol T and Ondina oil 917 had to be doubled (200 μl) to form a homogenous suspension. In case of the 0.1% Tween 80 solution, 4 ml was required to form a consistent suspension. Wheat flour was not tested further because of clumping. The quantity of white pepper and WaterSavr (30 mg and 130 mg respectively) required for covering the water surface (441 cm2) was also enough to form a consistent mixture with 10 mg of fungal spores (Table 1). Formulations, apart from the 0.1% Tween 80 solution which caused the spores to sink, resulted in a fairly uniform spread of fungal spores on the water surface (Table 1). 
Therefore 0.1% Tween 80 solution was not tested further.\nCarriers tested for their ability to spread spores and the composition of formulations tested\nThe amount of each carrier required to cover a water surface area of 441 cm2, the amount required to form a consistent mixture with 10 mg of Metarhizium anisopliae spores, the ability of the carriers to spread the spores over the water surface and the composition of formulations with suitable carriers\n-- Not Tested or could not be determined", "Bioassays were conducted with unformulated M. anisopliae spores (10 mg) and M. anisopliae spores formulated in pepper (10 mg/30 mg), WaterSavr (10 mg/130 mg), ShellSol T (10 mg/200 μl )) or Ondina oil 917 (10 mg/200 μl) against An. gambiae larvae. Only 2.7 ± 1.8% of the larvae treated with unformulated M. anisopliae spores pupated while 47.6 ± 3.9% pupated in the relevant control. The treated larvae had a nearly two times higher daily risk of mortality as compared to the untreated control larvae (HR (95%CI) = 1.8 (1.4-2.4), Table 2, Figure 2a). WaterSavr formulation reduced the pupation of the larvae from 67.2 ± 10.6% to 1.3 ± 0.6%, exposing the formulation-treated larvae to nearly three times higher daily risk of mortality as compared to the control (Table 2, Figure 2c). With the ShellSol T formulation 1.3 ± 0.6% of the treated larvae pupated while the larvae treated with ShellSol T (without fungal spores) showed 85.4 ± 14.5% pupation. Larvae exposed to ShellSol T formulated spores of M. anisopliae had a mortality risk four times higher compared to larvae treated with ShellSol T only (HR (95%CI) = 3.7 (2.5-5.4), Table 2, Figure 2e). However, with white pepper and Ondina oil there was no significant difference in the mortality of larvae treated with the formulation or the carrier alone, or the formulations and fungal spores together. Both pepper and Ondina oil 917 killed 100% larvae even without fungal spores (Table 2, Figure 2b and 2d). These two carriers were not tested further as the objective was to develop a formulation that enhances the spreading and efficacy of the fungal spores to infect and kill larvae.\nPercentage pupation and Hazard ratios of larvae exposed to tested formulations\nAverage percentage pupation (±S.E.) of An. gambiae larvae exposed to unformulated spores and formulated Metarhizium anisopliae spores (n = 3). The carrier in each formulation (White pepper, WaterSavr, Ondina oil 917 or ShellSol T) served as the control. In case of unformulated spores the control was completely untreated. Carrier and Metarhizium anisopliae spores together formed the treatment. Hazard ratio's (HR) indicate the mortality risk in the treatments as compared to their respective controls\nLaboratory bioassays to test the efficacy of unformulated and formulated Metarhizium anisopliae spores. The average percentage cumulative survival (±S.E.) of An. gambiae larvae (n = 3) exposed to (a) Unformulated Metarhizium anisopliae spores (control (C) and unformulated spores (Ma spores) (b) Pepper (control (P)) and Pepper formulated spores (Ma spores + P) (c) WaterSavr (control (WS)) and WaterSavr formulated spores (Ma spores + WS) (d) Ondina Oil (control (OO)) and Ondina oil formulated spores (Ma spores + OO) (e) ShellSol T (control (SS)) and ShellSol T formulated spores (Ma spores + SS) over 8 days post-treatment. Larvae that pupated are included as surviving.", "The pathogenicity of dry M. anisopliae and B. bassiana spores was substantially reduced over a period of five days (Figure 3). 
Anopheles stephensi larvae exposed to M. anisopliae spores, applied to water seven days earlier, showed a similar pupation proportion as their control (Table 3). Beauveria bassiana spores lost their effectiveness after being in contact with water for three days. Metarhizium anisopliae spores lost their effectiveness after five days (Table 3). After seven days the control mortality was significantly higher than the mortality of larvae exposed to M. anisopliae treatment.\nLaboratory bioassays to test the persistence of floating unformulated fungal spores. The average percentage corrected mortality (±S.E.) of An. stephensi larvae (n = 3) exposed to spores of Metarhizium anisopliae and Beauveria bassiana that had been floating on the water surface for 1, 2, 3, 5 or 7 days. Bars with letter in common show no significant difference (LSD post hoc test, α = 0.05).\nPercentage pupation and Hazard ratio's of larvae exposed to unformulated floating fungal spores\nAverage percentage pupation (±S.E.) in the control and treated An. stephensi larvae exposed to Metarhizium anisopliae and Beauveria bassiana spores floating on the water surface for 1, 2, 3, 5 and 7 days (n = 3). The controls consisted of untreated trays filled with water at the same time as the treated trays. Hazard ratio's (HR) indicate the mortality risk of larvae as compared to the controls for both Metarhizium anisopliae and Beauveria bassiana spores\na. HR lower than 1 represents higher mortality in the control group.", "Fungal spores formulated with ShellSol T were more persistent compared to the unformulated spores or spores formulated in WaterSavr. Seven days after application only ShellSol T formulated fungal spores (both M. anisopliae and B. bassiana) still caused significant mortality in the An. stephensi larvae (Table 4). Formulation in WaterSavr seemed to reduce the efficacy of fungal spores. When the An. stephensi larvae were exposed to WaterSavr-formulated M. anisopliae and B. bassiana spores, on the same day the fungal spores were applied, the corrected proportion larval-mortality was significantly lower as compared to larvae exposed to unformulated M. anisopliae and B. bassiana spores. Larvae exposed to M. anisopliae spores formulated with WaterSavr, applied that same day, had a lower mortality risk (HR (95% CI), 8.9 (4.4-18.1)) than those exposed to the unformulated spores (HR (95% CI), 44.6 (10.9-181.7)). There was no significant difference in the corrected proportion mortality of larvae exposed to unformulated and WaterSavr-formulated M. anisopliae spores, seven days after their application on water (Figure 4). Similar results were observed for B. bassiana spores. There was no significant difference between the corrected larval-mortality proportion due to unformulated and WaterSavr formulated B. bassiana spores, applied on water seven days before exposing the larvae. Also, the proportion larval mortality caused by WaterSavr-formulated B. bassiana spores was significantly lower than with ShellSol T-formulated B. bassiana spores (Figure 4).\nHazard ratios of larvae exposed to (un)formulated fungal spores, 0 and 7 days post-application\nHazard ratio's (HR) indicate the mortality risk of An. stephensi larvae exposed to unformulated, WaterSavr-formulated and ShellSol T-formulated Metarhizium anisopliae and Beauveria bassiana spores, 0 and 7 days after application (n = 3)\nLaboratory bioassays to test the persistence of formulated fungal spores. The average percentage corrected mortality (±S.E.) of An. 
stephensi larvae (n = 3) exposed to unformulated, WaterSavr-formulated and ShellSol T-formulated Metarhizium anisopliae (Ma) and Beauveria bassiana (Bb) spores, immediately (Day 0) or seven days (Day 7) after application. Letters in common (upper case for Ma and lower case for Bb) show no significant difference (LSD post hoc test, α = 0.05).", "During the experimental period (15 days), the mean minimum and maximum temperatures were 15.7°C and 30.9°C, respectively, with a mean relative humidity of 54% and total rainfall of 19.4 mm. Water surface temperature ranged from 21°C to 38.8°C. Similar to the laboratory observations, unformulated spores clumped together on the water surface (Figure 1b) while ShellSol T-formulated fungal spores were uniformly spread (Figure 1c).\nThe efficacy of unformulated fungal spores was found to be low under field conditions as compared to laboratory conditions. At dose rates of both 10 mg and 20 mg, the same (p > 0.05) level of pupation was observed in the An. gambiae larvae treated with unformulated M. anisopliae and B. bassiana spores as in the untreated An. gambiae larvae (Figure 5). As observed in the laboratory bioassays, ShellSol T on its own had no harmful effect on larval development and pupation. A similar proportion (p > 0.05) of larvae pupated in the containers treated with ShellSol T (200 μl and 230 μl) and the untreated containers (Figure 5).\nField bioassays testing the efficacy of fungal spores formulated in ShellSol T. The average percentage pupation of An. gambiae larvae (n = 3) exposed to unformulated and ShellSol T formulated Metarhizium anisopliae (Ma) or Beauveria bassiana (Bb) spores at two doses, 10 mg/200 μl and 20 mg/230 μl. Controls included no treatment at all or treatment with only ShellSol T (200 μl or 230 μl). Letters in common show no significant difference (LSD post hoc test, α = 0.05).\nThe percentage pupation observed in An. gambiae larvae treated with ShellSol T-formulated M. anisopliae spores was 43% (low dose, 10 mg) and 49% (high dose, 20 mg) lower than that of the corresponding unformulated treatments. However for the lower dose (10 mg) the proportion of larvae that pupated was not significantly different (p = 0.08, Figure 5).\nThe percentage pupation observed in An. gambiae larvae treated with ShellSol T-formulated B. bassiana spores was 39% (low dose, 10 mg) and 50% (high dose, 20 mg) lower than that in the corresponding unformulated treatments. At both lower and higher dose the proportion of larvae that pupated was significantly different (p < 0.05, Figure 5).", "The results of this study show how certain formulations can improve the ability of entomopathogenic fungus spores to spread over a water surface as well as increase their persistence. The results also show that better spreading and persistence leads to an enhanced efficacy of fungal spores. The study also demonstrates that both M. anisopliae and B. bassiana caused a high impact on the survival of An. gambiae s.s. larvae under field conditions, when formulated in Shellsol T.\nAnopheles stephensi and An. gambiae larvae were found to be equally susceptible to unformulated M. anisopliae and B. bassiana spores [12]. This suggests that these fungi are likely to also affect other anopheline vector species.\nFormulating fungal spores with Tween 80 and wheat flour was found to be unsuitable. Spores formulated with Tween 80 did not spread over the water surface, the primary feeding site of anopheline larvae, but sunk to the bottom [25,28]. 
Surfactants are known to impair attachment of the spore to the host, so even if the spores had been spread on the water surface they would not have been effective against anopheline larvae [20,42]. Wheat flour, although it could have served as a bait due to its organic nature, did not spread the fungal spores over the water surface [28]. The wheat flour clumped together and sank.\nPowdered pepper and Ondina oil caused 100% mortality in anopheline larvae even without the fungal spores. Extracts of fruits of the Piperaceae family have been shown to be toxic to Aedes aegypti L. larvae [43], but the exact toxicity mechanism remains unclear. Although fungal spores were effectively spread with white pepper, pepper was considered an unsuitable carrier due to its own toxic effect on the anopheline larvae. Ondina oil, in the amount tested (200 μl), formed an oily layer over the water surface, causing the larvae to suffocate. Compared to ShellSol T, Ondina oil is denser and evaporates less. This may explain the difference in the mortality observed with the Ondina oil and ShellSol T controls. The amount of Ondina oil tested could not be reduced because, in that case, it was not possible to make a homogeneous suspension with the fungal spores.\nDry unformulated M. anisopliae and B. bassiana spores lost their pathogenicity five days after being applied to the water surface, as the survival of larvae exposed to the fungal spores five days after application was similar to that of the controls. Similar results were shown in a study by Alves et al. (2002), where M. anisopliae caused no mortality in Cx. quinquefasciatus Say larvae introduced four days after the spores were applied [13]. This is in contrast to Pereira et al. (2009), who found M. anisopliae spores to cause 50% mortality in Ae. aegypti larvae exposed to fungal spores that had been applied ten days previously [34]. The studies mentioned here were carried out under controlled climate conditions (25-27°C) in the laboratory. In field conditions the spores are more likely to lose their pathogenicity in less time due to exposure to high temperatures and UV radiation. This may explain why unformulated fungal spores did not cause any significant reduction in pupation in the field bioassays, where water surface temperatures were measured to be as high as 38.8°C. The measured (water surface) temperatures agree with those reported by Paaijmans et al. (2008) for similar-sized water bodies and are known to exhibit high daily fluctuations [44].\nWhen the larvae were exposed to fungal spores on the same day as the spores were applied, unformulated spores and spores formulated in WaterSavr or ShellSol T caused larval mortality over the next few days. However, only fungal spores formulated in ShellSol T caused significantly higher mortality in larvae introduced seven days after the fungal spores had been applied. Fungal spores formulated in ShellSol T remained pathogenic possibly because ShellSol T prevented spores from absorbing the amount of moisture required to stimulate germination [21,31]. ShellSol T was also considered a good carrier of fungal spores in other studies [31,45]. WaterSavr, on the other hand, did not protect fungal spores.\nShellSol T was the only formulation that we tested in the field, as the laboratory results showed high persistence of pathogenicity only in the fungal spores formulated with this product. Unformulated M. anisopliae and B. bassiana did not suppress the larval population effectively in the field.
In contrast to the situation in the laboratory, the spores were exposed to sunlight, rain and fluctuating temperatures in the field which might have reduced spore survival. By contrast, only 10-20% of the larvae treated with spores formulated in ShellSol T, developed into pupae. Both M. anisopliae and B. bassiana spores were found to be equally effective when formulated in ShellSol T. Oil formulations are known to improve spore survival, improve fungal efficacy against insects and reduce spore sensitivity to UV radiation [31,45].\nIn the field residual effect of formulated spores could not be tested after a certain number of days because the plastic containers began to harbour Culex larvae and thus had to be drained. The presence of Culex larvae is an indication that ovipositing female Culex mosquitoes were not repelled by the fungus treatment. It is disadvantageous for a larval control agent to have an oviposition-repellent effect because in that case ovipositing mosquito females are forced to seek and deposit their eggs at alternative untreated sites. This means that the control agent only targets the existing larval population and needs to be reapplied after the site has been inhabited again. Studies specifically designed to establish the response of ovipositing anopheline female mosquitoes to fungal spores and the residual effect of fungal spore treatment are required for a better understanding. Oil-formulated M. anisopliae spores have been shown to have an increased ovicidal activity in case of Ae. aegypti eggs [46]. This might be an added advantage if anopheline eggs are also affected by M. anisopliae spores similar to the Ae. aegypti eggs.\nPathogenicity of control agents in the field is generally lower than that in the laboratory settings [47]. In the field bioassays, therefore, a higher dose (20 mg/450 cm2) of fungal spores was also tested together with the dose tested in the laboratory (10 mg/441 cm2). The laboratory dose, however, showed similar efficacy in the field by reducing pupation similar to the higher dose. Therefore doses lower than used in the current study should be evaluated to establish the lowest effective amount of fungal spores required to treat a certain area.\nShellSol T was a candidate carrier that not only facilitated the application of spores but also improved their efficacy by providing maximum chance for contact (spreading the spores on the water surface) with the larvae and increasing spore persistence. The fungal spores readily suspend in ShellSol T with a slight agitation. This is advantageous as the spores can be conveniently mixed in ShellSol T, on the spot, which means that during transport and storage only the bio-active agent would have to be kept at low temperatures rather than the whole mixture. This can reduce the cooling space requirement as ShellSol T itself is a stable product and has no particular storage demands. It has been shown that the percentage germination of dry spores is generally higher than that of oil-formulated spores when stored at the same temperature for the same number of days [[23]; unpublished data]. The fungal spores Metarhizium flavoviride had a germination rate of 80% when stored at 30°C for 90 days as compared to 90% when stored dry under similar environmental conditions [23]. 
In this context, it seems more efficient to store fungal spores separately and only mix them with the oil-component shortly before application.\nThe results of this study show the necessity of a good formulation for fungal spores when these are to be utilised in the field. The efficacy of unformulated (dry) spores was so low in the field situation that their application, as such, is not justified. While ShellSol T-formulated spores were highly effective in killing anopheline larvae in the field an important point to consider is the potential increased risk to the non-target organisms due to their improved persistence and/or undesirable properties of the solvent [33,48-50]. ShellSol T has a low toxicity effect on fish, aquatic invertebrates and microorganisms at concentration higher than 1 g/liter [51]. Considering the volume of ShellSol T that we tested (200-230 μl on 1 L of water), the concentration of ShellSol T was 0.15 g/L which is nearly seven times lower than the lowest lethal concentration. ShellSol T evaporates and therefore is less likely to remain in the aquatic habitats. Detailed safety studies, however, are necessary to have a better understanding of any adverse effect ShellSol T might have on the environment and non-target organisms, at the required doses.\nBesides formulation, it is very important to identify the best delivery method (where, when and how) to fully utilize the entomopathogenic potential of M. anisopliae and B. bassiana spores. Frequency of re-application has to be determined based on the residual effect of formulated spores in the field. The feasibility of applying formulated spores at artificial breeding sites, baited to attract ovipositing females, is also worth testing [52]. A good delivery system will reduce the chances of non-target organisms coming into contact with fungal spores.", "From a number of candidate products tested for the formulation of entomopathogenic fungi, ShellSol T emerged as a promising carrier of fungal spores when targeting anopheline larvae. Spores of B. bassiana and M. anisopliae formulated in ShellSol T had an increased efficacy against larvae of An. gambiae s.s. as compared to unformulated spores and were also more persistent under field conditions in Kenya. Other oils with physical properties similar to ShellSol T may also serve as good carriers. Together with a sound delivery system, these formulated fungi can be utilised in the field, providing additional tools for biological control of malaria vectors.", "The authors declare that they have no competing interests.", "TB designed the study, carried out the experimental work, performed the statistical analysis and drafted the manuscript. CJMK helped with the study design, statistical analyses, and drafting the manuscript. WT provided scientific guidance in interpretation of the findings and drafting the manuscript. All authors read and approved the final manuscript." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Missed opportunities in TB diagnosis: a TB process-based performance review tool to evaluate and improve clinical care.
21342493
Traditional tuberculosis (TB) treatment outcome measures, such as cure rate, do not provide insight into the underlying reasons for missing clinical targets. We evaluated a TB Process-Based Performance Review (TB-PBPR) tool, developed to identify "missed opportunities" for timely and accurate diagnosis of TB. The tool enables performance assessment at the level of process and quality of care.
BACKGROUND
The TB-PBPR tool is a single-page structured flow-sheet that identifies 14 clinical actions (grouped into elicited symptoms, clinical examination and investigations). Medical records from selected deceased patients were reviewed at two South African mine hospitals (A = 56 cases; B = 26 cases), a South African teaching hospital (C = 20 cases) and a UK teaching hospital (D = 13 cases).
METHODS
In hospital A, where autopsy was routine, TB was missed in life in 52% (23/44) of cases and was wrongly attributed as the cause of death in 16% (18/110). Clinical omissions were identified at each hospital and at every stage of clinical management. For example, recording of chest symptoms was omitted in up to 39% of cases, sputum smear examination in up to 85%, and chest radiograph in up to 38% of cases.
RESULTS
This study introduces the TB-PBPR tool as a novel method to review and evaluate clinical performance in TB management. We found that simple clinical actions were omitted in many cases. The tool, in conjunction with a manual describing best practice, is adaptable to a range of settings, is educational and enables detailed feedback within a TB programme. The TB-PBPR tool and manual are both freely available for general use.
CONCLUSIONS
[ "Adult", "Aged", "Female", "Hospitals, Teaching", "Humans", "Male", "Medical Audit", "Middle Aged", "South Africa", "Tuberculin Test", "Tuberculosis", "United Kingdom", "Young Adult" ]
3051909
null
null
Methods
The TB-PBPR tool consists of a single-page structured flow sheet (see additional file 1). Each element is derived from clinical evidence and, as a whole, is in accordance with the 2006 International Standards for TB Care (ISTC) [22]. A manual containing concise, evidence-based clinical summaries was developed for use in conjunction with the TB-PBPR tool to provide guidance on best practice [23]. Recorded data include demographics, clinical and autopsy diagnoses, important clinical actions, missed opportunities and response to therapy. The tool evaluates the integrated process of care for a number of essential clinical actions; first through identification of whether each clinical action was performed, and second through assessment of whether the result of that clinical action was recorded and then acted on appropriately. In total, the TB-PBPR tool identifies 14 clinical actions which, if carried out, should minimize the number of missed diagnoses: eliciting TB symptoms constitutes 1, clinical examination 6 and clinical investigations 7. "Missed opportunities" are identified as errors causing potential failure to make timely and accurate clinical diagnoses. For example, with regard to CXR, a missed opportunity would be identified if a CXR were omitted, if a CXR were performed but the result not obtained, or if the result were obtained but not acted upon. Where no documentation is found in the clinical notes, the action is recorded as omitted. The tool takes account of the circumstances in which different investigations are indicated because certain investigations may not be required in every patient. For example, lymph node aspiration is not applicable in the absence of lymphadenopathy. When sputum examination identifies TB, further investigations to identify TB are recorded 'not applicable'. The TB-PBPR tool evaluates a group of 'other' investigations, which should be considered in accordance with ISTC guideline 3 (to investigate extrapulmonary TB), particularly in HIV positive cases with suspected TB and three negative sputum smears [22]. We evaluated the TB-PBPR tool in four hospitals (two South African platinum mine hospitals and two tertiary-care teaching hospitals (one in South Africa and one in the UK)). Cases were selected using different criteria (outlined below) to assess the tool's use in a range of healthcare settings, patient groups and populations. The TB rate is close to 1000 per 100,000 in the general South African population [24], and estimated adult HIV prevalence is 18% [25]. Although TB is less common in the UK than South Africa, the UK-based hospital serves the London community where TB rates (43 per 100,000) are much higher than elsewhere in the country [26]. A medical doctor completed each TB-PBPR flow sheet using the accompanying manual in ~40 minutes. [SUBTITLE] Hospital A (South African platinum mine hospital 1) [SUBSECTION] In this setting, autopsies are conducted for compensation purposes in terms of the Occupational Diseases in Mines and Works Act (ODMWA). Provided next of kin give consent, autopsy is performed in all men dying in employment regardless of the clinical cause of death. Cases not submitted are generally those who die off mine premises. Deceased miners' heart and lungs are removed at their place of employment, placed in formalin and dispatched to the South African National Institute for Occupational Health (NIOH), where histopathology is performed according to a standardized protocol [27,28]. 
Pathological TB is diagnosed in the presence of necrotizing granulomatous inflammation and/or presence of acid-fast bacilli, other causes having been excluded. Results are stored in the PATHAUT computerised database [29,30]. All patients from hospital A who died and had an autopsy of cardio-respiratory organs at the NIOH between October 2006 and December 2007 (n = 110) were considered for this study. The subset with a clinical and/or autopsy diagnosis of pulmonary TB (n = 62) were selected for review using the TB-PBPR tool. Clinical notes were available for 56 cases. In this setting, autopsies are conducted for compensation purposes in terms of the Occupational Diseases in Mines and Works Act (ODMWA). Provided next of kin give consent, autopsy is performed in all men dying in employment regardless of the clinical cause of death. Cases not submitted are generally those who die off mine premises. Deceased miners' heart and lungs are removed at their place of employment, placed in formalin and dispatched to the South African National Institute for Occupational Health (NIOH), where histopathology is performed according to a standardized protocol [27,28]. Pathological TB is diagnosed in the presence of necrotizing granulomatous inflammation and/or presence of acid-fast bacilli, other causes having been excluded. Results are stored in the PATHAUT computerised database [29,30]. All patients from hospital A who died and had an autopsy of cardio-respiratory organs at the NIOH between October 2006 and December 2007 (n = 110) were considered for this study. The subset with a clinical and/or autopsy diagnosis of pulmonary TB (n = 62) were selected for review using the TB-PBPR tool. Clinical notes were available for 56 cases. [SUBTITLE] Hospital B (South African platinum mine hospital 2) [SUBSECTION] No autopsies were performed for hospital B. Therefore, all patients who died during 2007 (n = 60) were considered for this study. The subset of those with a natural cause of death were selected for review (n = 35). Clinical notes were available for 26 cases. TB diagnosis was taken from clinical records and made on clinical or microbiological grounds. A health care service, comprising primary care clinics, specialised clinics and hospital facilities, is provided free of charge to all mine employees at hospitals A and B, approximately 18,000 workers in each case. The healthcare services run their own TB control programmes. No autopsies were performed for hospital B. Therefore, all patients who died during 2007 (n = 60) were considered for this study. The subset of those with a natural cause of death were selected for review (n = 35). Clinical notes were available for 26 cases. TB diagnosis was taken from clinical records and made on clinical or microbiological grounds. A health care service, comprising primary care clinics, specialised clinics and hospital facilities, is provided free of charge to all mine employees at hospitals A and B, approximately 18,000 workers in each case. The healthcare services run their own TB control programmes. [SUBTITLE] Hospital C (2700-bed public sector South African tertiary-care teaching hospital) [SUBSECTION] A convenience sample of 20 deceased individuals with a pre-mortem clinical diagnosis of TB and undergoing autopsy during the period December 2003 to March 2005 at Chris Hani Baragwanath Hospital (CHBH), Soweto as described by Martinson et al [10] was selected for review. 
Eligibility criteria for the original study included: >18 years of age, survival in hospital for >24 h and next of kin consenting to a full autopsy. A convenience sample of 20 deceased individuals with a pre-mortem clinical diagnosis of TB and undergoing autopsy during the period December 2003 to March 2005 at Chris Hani Baragwanath Hospital (CHBH), Soweto as described by Martinson et al [10] was selected for review. Eligibility criteria for the original study included: >18 years of age, survival in hospital for >24 h and next of kin consenting to a full autopsy. [SUBTITLE] Hospital D (1000-bed public sector UK tertiary-care teaching hospital) [SUBSECTION] No autopsy data were available for hospital D. Therefore all patients who were registered with the TB programme at the Royal Free Hospital (RFH), London, and who died between January 2004 and December 2007 (n = 22) were considered for this study. Clinical notes were available for 13 cases, who did not differ significantly in age, sex or ethnicity from cases for whom notes were unavailable. Ethical approval was obtained from the University of the Witwatersrand. Work in the UK hospital was undertaken as part of a clinical audit of TB services and, therefore, no local ethical approval was required. No autopsy data were available for hospital D. Therefore all patients who were registered with the TB programme at the Royal Free Hospital (RFH), London, and who died between January 2004 and December 2007 (n = 22) were considered for this study. Clinical notes were available for 13 cases, who did not differ significantly in age, sex or ethnicity from cases for whom notes were unavailable. Ethical approval was obtained from the University of the Witwatersrand. Work in the UK hospital was undertaken as part of a clinical audit of TB services and, therefore, no local ethical approval was required.
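A minimal sketch of the missed-opportunity rule described in the Methods above, expressed as a small Python data structure. The real TB-PBPR tool is a single-page paper flow sheet, so the class, status names and example values here are purely illustrative; the only logic taken from the text is that an action is a missed opportunity if it was omitted, performed without the result being obtained, or performed with a result that was not acted on, while actions judged not applicable are excluded.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Status(Enum):
    NOT_APPLICABLE = auto()   # e.g. lymph node aspiration in the absence of lymphadenopathy
    OMITTED = auto()          # no documentation found in the clinical notes
    PERFORMED = auto()

@dataclass
class ClinicalAction:
    name: str                      # one of the 14 clinical actions, e.g. "chest radiograph"
    status: Status
    result_obtained: bool = False  # result documented in the notes
    acted_on: bool = False         # result followed by an appropriate clinical decision

    def missed_opportunity(self) -> bool:
        """Missed opportunity: omitted, or performed but result not obtained or not acted on."""
        if self.status is Status.NOT_APPLICABLE:
            return False
        if self.status is Status.OMITTED:
            return True
        return not (self.result_obtained and self.acted_on)

# Example from the text: a chest radiograph that was taken but whose result was never obtained.
cxr = ClinicalAction("chest radiograph", Status.PERFORMED, result_obtained=False)
print(cxr.missed_opportunity())  # True
```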
null
null
null
null
[ "Background", "Hospital A (South African platinum mine hospital 1)", "Hospital B (South African platinum mine hospital 2)", "Hospital C (2700-bed public sector South African tertiary-care teaching hospital)", "Hospital D (1000-bed public sector UK tertiary-care teaching hospital)", "Results", "Patients", "Clinico-Pathological Comparison", "Symptoms and Clinical Examination", "Investigations", "Missed Opportunities", "Discussion and Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The primary aims of tuberculosis (TB) control programmes are early diagnosis and prompt treatment of infectious cases to limit transmission [1]. To this end, the World Health Organisation (WHO) has developed specific outcome measures to evaluate TB control. Hence, treatment outcomes are recorded internationally and targets of 70% case detection and 85% cure in smear positive pulmonary TB have been set [2]. However, these broad outcome measures do not provide detailed insight into the pathways of clinical care or identify reasons for missing the targets.\nMethods of TB diagnosis have not changed significantly for many decades; resting primarily on clinical history, clinical examination, chest radiograph (CXR), and sputum smear and culture. Despite this long experience, there is overwhelming evidence from studies published over the last 50 years that TB diagnosis is prone to significant error [3-9]. Misdiagnosis occurs both if TB is missed and if TB is over-diagnosed. For example, a recent South African study found 21% of adults dying in hospital with a pre-mortem diagnosis of \"TB\" had no TB at autopsy [10], while in Italy in 1996, 36% of deceased AIDS patients with clinical diagnoses of TB had no evidence of TB at autopsy [11]. On the other hand, and of more concern to public health, are studies from the USA that suggest 5% of notified TB cases are diagnosed only after death [12,13], plus several large autopsy studies showing that TB is missed in life in 18-54% of cases with pathological evidence of active TB [9,14-16]. Avoidable clinical errors can contribute to delays or error in TB diagnosis [17].\nThis study describes a novel method for evaluating TB control at the point of care using a Process-Based Performance Review tool (TB-PBPR) to identify missed opportunities for early and accurate TB diagnosis. PBPR is a teaching strategy where clinicians retrospectively review patient records to evaluate crucial clinical actions, and has been shown to improve clinical performance [18,19]. Following initial development and piloting in the South African mining industry [20,21], we evaluated the tool by applying it to identify missed opportunities in deceased patients in four different hospital settings.", "In this setting, autopsies are conducted for compensation purposes in terms of the Occupational Diseases in Mines and Works Act (ODMWA). Provided next of kin give consent, autopsy is performed in all men dying in employment regardless of the clinical cause of death. Cases not submitted are generally those who die off mine premises. Deceased miners' heart and lungs are removed at their place of employment, placed in formalin and dispatched to the South African National Institute for Occupational Health (NIOH), where histopathology is performed according to a standardized protocol [27,28]. Pathological TB is diagnosed in the presence of necrotizing granulomatous inflammation and/or presence of acid-fast bacilli, other causes having been excluded. Results are stored in the PATHAUT computerised database [29,30].\nAll patients from hospital A who died and had an autopsy of cardio-respiratory organs at the NIOH between October 2006 and December 2007 (n = 110) were considered for this study. The subset with a clinical and/or autopsy diagnosis of pulmonary TB (n = 62) were selected for review using the TB-PBPR tool. Clinical notes were available for 56 cases.", "No autopsies were performed for hospital B. Therefore, all patients who died during 2007 (n = 60) were considered for this study. 
The subset of those with a natural cause of death were selected for review (n = 35). Clinical notes were available for 26 cases. TB diagnosis was taken from clinical records and made on clinical or microbiological grounds.\nA health care service, comprising primary care clinics, specialised clinics and hospital facilities, is provided free of charge to all mine employees at hospitals A and B, approximately 18,000 workers in each case. The healthcare services run their own TB control programmes.", "A convenience sample of 20 deceased individuals with a pre-mortem clinical diagnosis of TB and undergoing autopsy during the period December 2003 to March 2005 at Chris Hani Baragwanath Hospital (CHBH), Soweto as described by Martinson et al [10] was selected for review. Eligibility criteria for the original study included: >18 years of age, survival in hospital for >24 h and next of kin consenting to a full autopsy.", "No autopsy data were available for hospital D. Therefore all patients who were registered with the TB programme at the Royal Free Hospital (RFH), London, and who died between January 2004 and December 2007 (n = 22) were considered for this study. Clinical notes were available for 13 cases, who did not differ significantly in age, sex or ethnicity from cases for whom notes were unavailable.\nEthical approval was obtained from the University of the Witwatersrand. Work in the UK hospital was undertaken as part of a clinical audit of TB services and, therefore, no local ethical approval was required.", "[SUBTITLE] Patients [SUBSECTION] We reviewed medical records from 115 patients who died at the four hospitals using the TB-PBPR tool (Table 1). The duration of hospital admission was >24 h in 96% (110/115) of patients. Most patients at the South African hospitals (80-96%) were known to have HIV infection and in the mining hospitals, many had previously been treated for TB. The proportion treated for TB at final admission varied according to hospital setting (57-92%), with treatment started empirically (without microbiological evidence) in 25-53% of cases.\nCharacteristics of cases by hospital\n† HIV status was unknown in 2, 1, 3 and 1 cases in Hospitals A-D respectively\nWe reviewed medical records from 115 patients who died at the four hospitals using the TB-PBPR tool (Table 1). The duration of hospital admission was >24 h in 96% (110/115) of patients. Most patients at the South African hospitals (80-96%) were known to have HIV infection and in the mining hospitals, many had previously been treated for TB. The proportion treated for TB at final admission varied according to hospital setting (57-92%), with treatment started empirically (without microbiological evidence) in 25-53% of cases.\nCharacteristics of cases by hospital\n† HIV status was unknown in 2, 1, 3 and 1 cases in Hospitals A-D respectively\n[SUBTITLE] Clinico-Pathological Comparison [SUBSECTION] During the study period, 214 deaths were recorded at Hospital A. Autopsy was performed on 110 cases, in whom active TB was found in 40% (44/110). TB was missed in life in 52% (23/44) of cases and was wrongly attributed as the cause of death in 16% (18/110). The sensitivity of clinical diagnosis for TB was 48% (21/44) and specificity 73% (48/66) when measured against autopsy. Data on clinico-pathological comparison for Hospital C cases are published elsewhere [10].\nDuring the study period, 214 deaths were recorded at Hospital A. Autopsy was performed on 110 cases, in whom active TB was found in 40% (44/110). 
TB was missed in life in 52% (23/44) of cases and was wrongly attributed as the cause of death in 16% (18/110). The sensitivity of clinical diagnosis for TB was 48% (21/44) and specificity 73% (48/66) when measured against autopsy. Data on clinico-pathological comparison for Hospital C cases are published elsewhere [10].\n[SUBTITLE] Symptoms and Clinical Examination [SUBSECTION] Eliciting of TB symptoms is evaluated by the TB-PBPR tool on the basis of whether 3 symptoms are recorded. Importantly, the tool cannot distinguish between omissions of clinical action and omissions of clinical record. Depending on the hospital, symptoms of chest complaint (cough/haemoptysis/pain) were omitted in up to 39% of patients, weight loss between 15% and 73%, and fever/night sweats in up to 81% (Table 2). The TB-PBPR tool evaluates clinical examination on the basis of 6 clinical actions (Table 2). Chest auscultation was omitted in 32% and 50% of patients from mine hospital A and B respectively, but always performed at both teaching hospitals. Examination for weight loss and lymphadenopathy were performed poorly at all four hospitals, being omitted in 31% to 85% of patients. Notwithstanding clinical omissions, at least one positive clinical finding was documented for most patients: 93% (52/56) at hospital A, 88% (23/26) at hospital B and 100% at both hospital C and D.\nClinical actions: eliciting of clinical symptoms and examination (%)\nThe TB-PBPR tool was used to evaluate medical records to determine whether clinical actions were omitted or performed. Where an action was performed, we recorded whether symptoms and examination findings were absent or present. For example, in hospital A, clinicians omitted eliciting symptoms of chest complaint (cough/haemoptysis/pain) in 38% of patients. Chest symptoms were elicited in the remaining patients (Action performed); these symptoms were absent in 7% and present in 55% of patients.\nEliciting of TB symptoms is evaluated by the TB-PBPR tool on the basis of whether 3 symptoms are recorded. Importantly, the tool cannot distinguish between omissions of clinical action and omissions of clinical record. Depending on the hospital, symptoms of chest complaint (cough/haemoptysis/pain) were omitted in up to 39% of patients, weight loss between 15% and 73%, and fever/night sweats in up to 81% (Table 2). The TB-PBPR tool evaluates clinical examination on the basis of 6 clinical actions (Table 2). Chest auscultation was omitted in 32% and 50% of patients from mine hospital A and B respectively, but always performed at both teaching hospitals. Examination for weight loss and lymphadenopathy were performed poorly at all four hospitals, being omitted in 31% to 85% of patients. Notwithstanding clinical omissions, at least one positive clinical finding was documented for most patients: 93% (52/56) at hospital A, 88% (23/26) at hospital B and 100% at both hospital C and D.\nClinical actions: eliciting of clinical symptoms and examination (%)\nThe TB-PBPR tool was used to evaluate medical records to determine whether clinical actions were omitted or performed. Where an action was performed, we recorded whether symptoms and examination findings were absent or present. For example, in hospital A, clinicians omitted eliciting symptoms of chest complaint (cough/haemoptysis/pain) in 38% of patients. 
Chest symptoms were elicited in the remaining patients (Action performed); these symptoms were absent in 7% and present in 55% of patients.\n[SUBTITLE] Investigations [SUBSECTION] Following symptoms and clinical examination, the tool assesses whether clinical investigations were performed appropriately (Table 3). Investigations were omitted in hospitals A to C as follows; CXR in up to 38%; sputum smear in up to 85%; sputum culture in up to 90% (Table 3). However, in hospital D, these investigations were performed in every case. Assessment of other investigations such as lymph node aspiration, pleural tap and lumbar puncture depends upon recording of relevant clinical details. For example, in the absence of examination for lymphadenopathy it was impossible to evaluate the use of lymph node aspiration (marked 'no exam'). Nonetheless, we did identify cases in hospitals A, B and D where lymphadenopathy was present but lymph node biopsy was omitted (Table 3).\nClinical actions: use of appropriate investigations (%)\nThe TB-PBPR tool was used to assess whether appropriate investigations were performed. We recorded 'NA' where the investigation was not applicable (for example, where 'Sputum Smear' had identified acid fast bacilli), we recorded 'no exam' where the relevant clinical examination was not documented, we recorded 'omitted' where an investigation was applicable but not performed, and we recorded 'done' where the investigation was performed. For example, in hospital C, lymph node (LN) aspiration was not applicable in 45% of patients and was not documented in 35% of patients. LN aspiration was omitted in 15% and done in 5% of the patients.\nFollowing symptoms and clinical examination, the tool assesses whether clinical investigations were performed appropriately (Table 3). Investigations were omitted in hospitals A to C as follows; CXR in up to 38%; sputum smear in up to 85%; sputum culture in up to 90% (Table 3). However, in hospital D, these investigations were performed in every case. Assessment of other investigations such as lymph node aspiration, pleural tap and lumbar puncture depends upon recording of relevant clinical details. For example, in the absence of examination for lymphadenopathy it was impossible to evaluate the use of lymph node aspiration (marked 'no exam'). Nonetheless, we did identify cases in hospitals A, B and D where lymphadenopathy was present but lymph node biopsy was omitted (Table 3).\nClinical actions: use of appropriate investigations (%)\nThe TB-PBPR tool was used to assess whether appropriate investigations were performed. We recorded 'NA' where the investigation was not applicable (for example, where 'Sputum Smear' had identified acid fast bacilli), we recorded 'no exam' where the relevant clinical examination was not documented, we recorded 'omitted' where an investigation was applicable but not performed, and we recorded 'done' where the investigation was performed. For example, in hospital C, lymph node (LN) aspiration was not applicable in 45% of patients and was not documented in 35% of patients. LN aspiration was omitted in 15% and done in 5% of the patients.\n[SUBTITLE] Missed Opportunities [SUBSECTION] For each of the 14 clinical actions, the tool collapses the chain of clinical process to identify a missed opportunity where the process failed. For eliciting of TB symptoms, the clinical process was considered complete provided one or more symptoms were recorded and appropriate investigations had been performed. 
We found a mean of 8.8, 9.8, 7.2 and 2.4 missed opportunities per patient at hospitals A-D respectively (Figure 1). For both mine hospitals, we were able to review attendances to the hospital or outpatient clinics in the 3 months before final admission as a surrogate measure of opportunities for earlier intervention. In hospital A, 86% (47/56) of patients had attended at least once while the proportion in hospital B was 92% (24/26). The median number of attendances was 3 and 7 respectively.\nHistograms showing the distribution of total missed opportunities per case (maximum 14) for each hospital.\nFor each of the 14 clinical actions, the tool collapses the chain of clinical process to identify a missed opportunity where the process failed. For eliciting of TB symptoms, the clinical process was considered complete provided one or more symptoms were recorded and appropriate investigations had been performed. We found a mean of 8.8, 9.8, 7.2 and 2.4 missed opportunities per patient at hospitals A-D respectively (Figure 1). For both mine hospitals, we were able to review attendances to the hospital or outpatient clinics in the 3 months before final admission as a surrogate measure of opportunities for earlier intervention. In hospital A, 86% (47/56) of patients had attended at least once while the proportion in hospital B was 92% (24/26). The median number of attendances was 3 and 7 respectively.\nHistograms showing the distribution of total missed opportunities per case (maximum 14) for each hospital.", "We reviewed medical records from 115 patients who died at the four hospitals using the TB-PBPR tool (Table 1). The duration of hospital admission was >24 h in 96% (110/115) of patients. Most patients at the South African hospitals (80-96%) were known to have HIV infection and in the mining hospitals, many had previously been treated for TB. The proportion treated for TB at final admission varied according to hospital setting (57-92%), with treatment started empirically (without microbiological evidence) in 25-53% of cases.\nCharacteristics of cases by hospital\n† HIV status was unknown in 2, 1, 3 and 1 cases in Hospitals A-D respectively", "During the study period, 214 deaths were recorded at Hospital A. Autopsy was performed on 110 cases, in whom active TB was found in 40% (44/110). TB was missed in life in 52% (23/44) of cases and was wrongly attributed as the cause of death in 16% (18/110). The sensitivity of clinical diagnosis for TB was 48% (21/44) and specificity 73% (48/66) when measured against autopsy. Data on clinico-pathological comparison for Hospital C cases are published elsewhere [10].", "Eliciting of TB symptoms is evaluated by the TB-PBPR tool on the basis of whether 3 symptoms are recorded. Importantly, the tool cannot distinguish between omissions of clinical action and omissions of clinical record. Depending on the hospital, symptoms of chest complaint (cough/haemoptysis/pain) were omitted in up to 39% of patients, weight loss between 15% and 73%, and fever/night sweats in up to 81% (Table 2). The TB-PBPR tool evaluates clinical examination on the basis of 6 clinical actions (Table 2). Chest auscultation was omitted in 32% and 50% of patients from mine hospital A and B respectively, but always performed at both teaching hospitals. Examination for weight loss and lymphadenopathy were performed poorly at all four hospitals, being omitted in 31% to 85% of patients. 
Notwithstanding clinical omissions, at least one positive clinical finding was documented for most patients: 93% (52/56) at hospital A, 88% (23/26) at hospital B and 100% at both hospital C and D.\nClinical actions: eliciting of clinical symptoms and examination (%)\nThe TB-PBPR tool was used to evaluate medical records to determine whether clinical actions were omitted or performed. Where an action was performed, we recorded whether symptoms and examination findings were absent or present. For example, in hospital A, clinicians omitted eliciting symptoms of chest complaint (cough/haemoptysis/pain) in 38% of patients. Chest symptoms were elicited in the remaining patients (Action performed); these symptoms were absent in 7% and present in 55% of patients.", "Following symptoms and clinical examination, the tool assesses whether clinical investigations were performed appropriately (Table 3). Investigations were omitted in hospitals A to C as follows; CXR in up to 38%; sputum smear in up to 85%; sputum culture in up to 90% (Table 3). However, in hospital D, these investigations were performed in every case. Assessment of other investigations such as lymph node aspiration, pleural tap and lumbar puncture depends upon recording of relevant clinical details. For example, in the absence of examination for lymphadenopathy it was impossible to evaluate the use of lymph node aspiration (marked 'no exam'). Nonetheless, we did identify cases in hospitals A, B and D where lymphadenopathy was present but lymph node biopsy was omitted (Table 3).\nClinical actions: use of appropriate investigations (%)\nThe TB-PBPR tool was used to assess whether appropriate investigations were performed. We recorded 'NA' where the investigation was not applicable (for example, where 'Sputum Smear' had identified acid fast bacilli), we recorded 'no exam' where the relevant clinical examination was not documented, we recorded 'omitted' where an investigation was applicable but not performed, and we recorded 'done' where the investigation was performed. For example, in hospital C, lymph node (LN) aspiration was not applicable in 45% of patients and was not documented in 35% of patients. LN aspiration was omitted in 15% and done in 5% of the patients.", "For each of the 14 clinical actions, the tool collapses the chain of clinical process to identify a missed opportunity where the process failed. For eliciting of TB symptoms, the clinical process was considered complete provided one or more symptoms were recorded and appropriate investigations had been performed. We found a mean of 8.8, 9.8, 7.2 and 2.4 missed opportunities per patient at hospitals A-D respectively (Figure 1). For both mine hospitals, we were able to review attendances to the hospital or outpatient clinics in the 3 months before final admission as a surrogate measure of opportunities for earlier intervention. In hospital A, 86% (47/56) of patients had attended at least once while the proportion in hospital B was 92% (24/26). The median number of attendances was 3 and 7 respectively.\nHistograms showing the distribution of total missed opportunities per case (maximum 14) for each hospital.", "We applied a TB-Process Based Performance Review tool as a novel method to evaluate accurate and timely diagnosis of TB. We evaluated the tool in deceased patients, where clinical management may have failed. We therefore expected to find omissions in clinical process, and this may not be representative of the TB programmes in their entirety. 
Used in conjunction with autopsy results, as for hospitals A and C, the tool deconstructs a patient's clinical care to capture diagnostic errors. In the absence of autopsy data, as for hospitals B and D, the tool still provides valuable insight into patient care and overall TB programme performance. Where documentation was missing, we recorded an omission because clear documentation is essential for clinical governance as well as communication reasons. We found that simple but fundamental diagnostic clinical actions such as chest auscultation, CXR and sputum smear were not recorded in many cases. The tool highlights specific areas for improvement within each setting. Similar system failures have been reported in South African urban state clinics [31]. In the absence of a systematic clinical history, examination, and appropriate investigation, the evidence base for logical diagnostic decision-making is lost. This is important in TB (particularly with HIV co-infection) where cases can present diagnostic challenges even where all appropriate investigations seem to have been explored [8]. Similarly, implementation of a 19-point checklist of simple clinical actions was associated with significant improvements in surgical outcome [32].\nMany of these 115 patients presented with symptoms that should have prompted clinicians to consider TB (Table 2); overall 62% presented with chest symptoms, 38% with symptoms of weight loss and 37% with fever or night sweats. Other indications of the possibility of TB existed in these patients, 34% had previously been treated for TB and 85% were HIV-infected. All admitted patients survived for at least 24 h (many survived for longer). Clinicians therefore had opportunity to initiate investigations and treatment, and in many cases to monitor the response to treatment. Furthermore, in hospitals A and B, we found most cases made contact with medical services in the three months before final admission. We would expect to find further missed opportunities in these earlier patient/clinician interactions.\nWe highlight that sputum culture results were missing for so many of the South African patients, although this investigation was available in all hospitals. The dangers of multidrug resistance have been well described [33], and a high index of suspicion must be maintained, particularly where patients fail to improve on TB treatment or have recurrent TB (ISTC, standard 14 [22]). On the other hand, HIV testing was performed well, acknowledging the importance of ascertaining HIV status where TB is suspected.\nDifferent criteria were used for selection at each hospital, with the important distinction that (in this study) patients at Hospitals C and D were diagnosed with TB pre-mortem. However, a differential diagnosis that included TB was common to all. Although the selection criteria varied, and this limits comparison, the four hospitals do allow some useful broad observations to be made. We demonstrate that the tool identifies missed opportunities in a variety of settings, both in public and private hospitals, and in developed and developing countries. We recognise that not all missed opportunities carry equal weight but found the total number of missed opportunities to be a useful assessment of overall performance. 
The findings in hospital D, although the number of patients was small, suggest that it may be possible to achieve low numbers of missed opportunities for some patient groups, and this supports previous data showing that, despite low rates of error, some deaths are unavoidable [17]. Omissions in clinical care occur throughout the world and are not limited to TB; one study found that 45% of US adults do not receive care that is consistent with current recommendations [34]. If simple clinical actions are omitted in these settings, where TB prevalence is high, this raises concerns about settings where clinicians have far less experience of treating TB.\nThe TB-PBPR tool was designed and used here for in-patients and may require adaptation before use in some low income outpatient settings where clinical records may be less detailed. However, many of the key missed opportunities such as basic history taking and sputum smear are common to nearly all settings. Further evaluation will assess whether the TB-PBPR tool can be used to improve local clinical and programme management.\nSuccessful TB control requires basic clinical and public health management to be performed efficiently and consistently. Clinical omissions and misdiagnoses have implications for both the individual and the community, delaying treatment and increasing the period of infectivity leading to increased transmission, treatment failure, medical costs, and deaths.\nThe tool was designed with an educational objective in mind, to be used by clinicians to reflect on clinical practice and monitor missed opportunities. The TB-PBPR tool may be particularly useful to improve clinical care in patients with a range of poor outcomes, (e.g, development of drug resistance and recurrence) and its use is not limited to deceased patients. We suggest that the tool may augment broader WHO measures for TB programmes because it allows detailed evaluation to feedback into a TB programme and improve clinical care.", "The authors declare that they have no competing interests.", "JM, PS and NF designed the study. NF, JM, MW and ND completed the TB-PBPR tool flow sheets, and the data were collated by NF. All other authors participated in the design and coordination of the study. All authors contributed to the manuscript, read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/127/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Hospital A (South African platinum mine hospital 1)", "Hospital B (South African platinum mine hospital 2)", "Hospital C (2700-bed public sector South African tertiary-care teaching hospital)", "Hospital D (1000-bed public sector UK tertiary-care teaching hospital)", "Results", "Patients", "Clinico-Pathological Comparison", "Symptoms and Clinical Examination", "Investigations", "Missed Opportunities", "Discussion and Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history", "Supplementary Material" ]
[ "The primary aims of tuberculosis (TB) control programmes are early diagnosis and prompt treatment of infectious cases to limit transmission [1]. To this end, the World Health Organisation (WHO) has developed specific outcome measures to evaluate TB control. Hence, treatment outcomes are recorded internationally and targets of 70% case detection and 85% cure in smear positive pulmonary TB have been set [2]. However, these broad outcome measures do not provide detailed insight into the pathways of clinical care or identify reasons for missing the targets.\nMethods of TB diagnosis have not changed significantly for many decades; resting primarily on clinical history, clinical examination, chest radiograph (CXR), and sputum smear and culture. Despite this long experience, there is overwhelming evidence from studies published over the last 50 years that TB diagnosis is prone to significant error [3-9]. Misdiagnosis occurs both if TB is missed and if TB is over-diagnosed. For example, a recent South African study found 21% of adults dying in hospital with a pre-mortem diagnosis of \"TB\" had no TB at autopsy [10], while in Italy in 1996, 36% of deceased AIDS patients with clinical diagnoses of TB had no evidence of TB at autopsy [11]. On the other hand, and of more concern to public health, are studies from the USA that suggest 5% of notified TB cases are diagnosed only after death [12,13], plus several large autopsy studies showing that TB is missed in life in 18-54% of cases with pathological evidence of active TB [9,14-16]. Avoidable clinical errors can contribute to delays or error in TB diagnosis [17].\nThis study describes a novel method for evaluating TB control at the point of care using a Process-Based Performance Review tool (TB-PBPR) to identify missed opportunities for early and accurate TB diagnosis. PBPR is a teaching strategy where clinicians retrospectively review patient records to evaluate crucial clinical actions, and has been shown to improve clinical performance [18,19]. Following initial development and piloting in the South African mining industry [20,21], we evaluated the tool by applying it to identify missed opportunities in deceased patients in four different hospital settings.", "The TB-PBPR tool consists of a single-page structured flow sheet (see additional file 1). Each element is derived from clinical evidence and, as a whole, is in accordance with the 2006 International Standards for TB Care (ISTC) [22]. A manual containing concise, evidence-based clinical summaries was developed for use in conjunction with the TB-PBPR tool to provide guidance on best practice [23].\nRecorded data include demographics, clinical and autopsy diagnoses, important clinical actions, missed opportunities and response to therapy. The tool evaluates the integrated process of care for a number of essential clinical actions; first through identification of whether each clinical action was performed, and second through assessment of whether the result of that clinical action was recorded and then acted on appropriately.\nIn total, the TB-PBPR tool identifies 14 clinical actions which, if carried out, should minimize the number of missed diagnoses: eliciting TB symptoms constitutes 1, clinical examination 6 and clinical investigations 7. \"Missed opportunities\" are identified as errors causing potential failure to make timely and accurate clinical diagnoses. 
For example, with regard to CXR, a missed opportunity would be identified if a CXR were omitted, if a CXR were performed but the result not obtained, or if the result were obtained but not acted upon. Where no documentation is found in the clinical notes, the action is recorded as omitted. The tool takes account of the circumstances in which different investigations are indicated because certain investigations may not be required in every patient. For example, lymph node aspiration is not applicable in the absence of lymphadenopathy. When sputum examination identifies TB, further investigations to identify TB are recorded 'not applicable'.\nThe TB-PBPR tool evaluates a group of 'other' investigations, which should be considered in accordance with ISTC guideline 3 (to investigate extrapulmonary TB), particularly in HIV positive cases with suspected TB and three negative sputum smears [22].\nWe evaluated the TB-PBPR tool in four hospitals (two South African platinum mine hospitals and two tertiary-care teaching hospitals (one in South Africa and one in the UK)). Cases were selected using different criteria (outlined below) to assess the tool's use in a range of healthcare settings, patient groups and populations. The TB rate is close to 1000 per 100,000 in the general South African population [24], and estimated adult HIV prevalence is 18% [25]. Although TB is less common in the UK than South Africa, the UK-based hospital serves the London community where TB rates (43 per 100,000) are much higher than elsewhere in the country [26]. A medical doctor completed each TB-PBPR flow sheet using the accompanying manual in ~40 minutes.\n[SUBTITLE] Hospital A (South African platinum mine hospital 1) [SUBSECTION] In this setting, autopsies are conducted for compensation purposes in terms of the Occupational Diseases in Mines and Works Act (ODMWA). Provided next of kin give consent, autopsy is performed in all men dying in employment regardless of the clinical cause of death. Cases not submitted are generally those who die off mine premises. Deceased miners' heart and lungs are removed at their place of employment, placed in formalin and dispatched to the South African National Institute for Occupational Health (NIOH), where histopathology is performed according to a standardized protocol [27,28]. Pathological TB is diagnosed in the presence of necrotizing granulomatous inflammation and/or presence of acid-fast bacilli, other causes having been excluded. Results are stored in the PATHAUT computerised database [29,30].\nAll patients from hospital A who died and had an autopsy of cardio-respiratory organs at the NIOH between October 2006 and December 2007 (n = 110) were considered for this study. The subset with a clinical and/or autopsy diagnosis of pulmonary TB (n = 62) were selected for review using the TB-PBPR tool. Clinical notes were available for 56 cases.\nIn this setting, autopsies are conducted for compensation purposes in terms of the Occupational Diseases in Mines and Works Act (ODMWA). Provided next of kin give consent, autopsy is performed in all men dying in employment regardless of the clinical cause of death. Cases not submitted are generally those who die off mine premises. Deceased miners' heart and lungs are removed at their place of employment, placed in formalin and dispatched to the South African National Institute for Occupational Health (NIOH), where histopathology is performed according to a standardized protocol [27,28]. 
Pathological TB is diagnosed in the presence of necrotizing granulomatous inflammation and/or presence of acid-fast bacilli, other causes having been excluded. Results are stored in the PATHAUT computerised database [29,30].\nAll patients from hospital A who died and had an autopsy of cardio-respiratory organs at the NIOH between October 2006 and December 2007 (n = 110) were considered for this study. The subset with a clinical and/or autopsy diagnosis of pulmonary TB (n = 62) were selected for review using the TB-PBPR tool. Clinical notes were available for 56 cases.\n[SUBTITLE] Hospital B (South African platinum mine hospital 2) [SUBSECTION] No autopsies were performed for hospital B. Therefore, all patients who died during 2007 (n = 60) were considered for this study. The subset of those with a natural cause of death were selected for review (n = 35). Clinical notes were available for 26 cases. TB diagnosis was taken from clinical records and made on clinical or microbiological grounds.\nA health care service, comprising primary care clinics, specialised clinics and hospital facilities, is provided free of charge to all mine employees at hospitals A and B, approximately 18,000 workers in each case. The healthcare services run their own TB control programmes.\nNo autopsies were performed for hospital B. Therefore, all patients who died during 2007 (n = 60) were considered for this study. The subset of those with a natural cause of death were selected for review (n = 35). Clinical notes were available for 26 cases. TB diagnosis was taken from clinical records and made on clinical or microbiological grounds.\nA health care service, comprising primary care clinics, specialised clinics and hospital facilities, is provided free of charge to all mine employees at hospitals A and B, approximately 18,000 workers in each case. The healthcare services run their own TB control programmes.\n[SUBTITLE] Hospital C (2700-bed public sector South African tertiary-care teaching hospital) [SUBSECTION] A convenience sample of 20 deceased individuals with a pre-mortem clinical diagnosis of TB and undergoing autopsy during the period December 2003 to March 2005 at Chris Hani Baragwanath Hospital (CHBH), Soweto as described by Martinson et al [10] was selected for review. Eligibility criteria for the original study included: >18 years of age, survival in hospital for >24 h and next of kin consenting to a full autopsy.\nA convenience sample of 20 deceased individuals with a pre-mortem clinical diagnosis of TB and undergoing autopsy during the period December 2003 to March 2005 at Chris Hani Baragwanath Hospital (CHBH), Soweto as described by Martinson et al [10] was selected for review. Eligibility criteria for the original study included: >18 years of age, survival in hospital for >24 h and next of kin consenting to a full autopsy.\n[SUBTITLE] Hospital D (1000-bed public sector UK tertiary-care teaching hospital) [SUBSECTION] No autopsy data were available for hospital D. Therefore all patients who were registered with the TB programme at the Royal Free Hospital (RFH), London, and who died between January 2004 and December 2007 (n = 22) were considered for this study. Clinical notes were available for 13 cases, who did not differ significantly in age, sex or ethnicity from cases for whom notes were unavailable.\nEthical approval was obtained from the University of the Witwatersrand. 
Work in the UK hospital was undertaken as part of a clinical audit of TB services and, therefore, no local ethical approval was required.\nNo autopsy data were available for hospital D. Therefore all patients who were registered with the TB programme at the Royal Free Hospital (RFH), London, and who died between January 2004 and December 2007 (n = 22) were considered for this study. Clinical notes were available for 13 cases, who did not differ significantly in age, sex or ethnicity from cases for whom notes were unavailable.\nEthical approval was obtained from the University of the Witwatersrand. Work in the UK hospital was undertaken as part of a clinical audit of TB services and, therefore, no local ethical approval was required.", "In this setting, autopsies are conducted for compensation purposes in terms of the Occupational Diseases in Mines and Works Act (ODMWA). Provided next of kin give consent, autopsy is performed in all men dying in employment regardless of the clinical cause of death. Cases not submitted are generally those who die off mine premises. Deceased miners' heart and lungs are removed at their place of employment, placed in formalin and dispatched to the South African National Institute for Occupational Health (NIOH), where histopathology is performed according to a standardized protocol [27,28]. Pathological TB is diagnosed in the presence of necrotizing granulomatous inflammation and/or presence of acid-fast bacilli, other causes having been excluded. Results are stored in the PATHAUT computerised database [29,30].\nAll patients from hospital A who died and had an autopsy of cardio-respiratory organs at the NIOH between October 2006 and December 2007 (n = 110) were considered for this study. The subset with a clinical and/or autopsy diagnosis of pulmonary TB (n = 62) were selected for review using the TB-PBPR tool. Clinical notes were available for 56 cases.", "No autopsies were performed for hospital B. Therefore, all patients who died during 2007 (n = 60) were considered for this study. The subset of those with a natural cause of death were selected for review (n = 35). Clinical notes were available for 26 cases. TB diagnosis was taken from clinical records and made on clinical or microbiological grounds.\nA health care service, comprising primary care clinics, specialised clinics and hospital facilities, is provided free of charge to all mine employees at hospitals A and B, approximately 18,000 workers in each case. The healthcare services run their own TB control programmes.", "A convenience sample of 20 deceased individuals with a pre-mortem clinical diagnosis of TB and undergoing autopsy during the period December 2003 to March 2005 at Chris Hani Baragwanath Hospital (CHBH), Soweto as described by Martinson et al [10] was selected for review. Eligibility criteria for the original study included: >18 years of age, survival in hospital for >24 h and next of kin consenting to a full autopsy.", "No autopsy data were available for hospital D. Therefore all patients who were registered with the TB programme at the Royal Free Hospital (RFH), London, and who died between January 2004 and December 2007 (n = 22) were considered for this study. Clinical notes were available for 13 cases, who did not differ significantly in age, sex or ethnicity from cases for whom notes were unavailable.\nEthical approval was obtained from the University of the Witwatersrand. 
Work in the UK hospital was undertaken as part of a clinical audit of TB services and, therefore, no local ethical approval was required.", "[SUBTITLE] Patients [SUBSECTION] We reviewed medical records from 115 patients who died at the four hospitals using the TB-PBPR tool (Table 1). The duration of hospital admission was >24 h in 96% (110/115) of patients. Most patients at the South African hospitals (80-96%) were known to have HIV infection and in the mining hospitals, many had previously been treated for TB. The proportion treated for TB at final admission varied according to hospital setting (57-92%), with treatment started empirically (without microbiological evidence) in 25-53% of cases.\nCharacteristics of cases by hospital\n† HIV status was unknown in 2, 1, 3 and 1 cases in Hospitals A-D respectively\nWe reviewed medical records from 115 patients who died at the four hospitals using the TB-PBPR tool (Table 1). The duration of hospital admission was >24 h in 96% (110/115) of patients. Most patients at the South African hospitals (80-96%) were known to have HIV infection and in the mining hospitals, many had previously been treated for TB. The proportion treated for TB at final admission varied according to hospital setting (57-92%), with treatment started empirically (without microbiological evidence) in 25-53% of cases.\nCharacteristics of cases by hospital\n† HIV status was unknown in 2, 1, 3 and 1 cases in Hospitals A-D respectively\n[SUBTITLE] Clinico-Pathological Comparison [SUBSECTION] During the study period, 214 deaths were recorded at Hospital A. Autopsy was performed on 110 cases, in whom active TB was found in 40% (44/110). TB was missed in life in 52% (23/44) of cases and was wrongly attributed as the cause of death in 16% (18/110). The sensitivity of clinical diagnosis for TB was 48% (21/44) and specificity 73% (48/66) when measured against autopsy. Data on clinico-pathological comparison for Hospital C cases are published elsewhere [10].\nDuring the study period, 214 deaths were recorded at Hospital A. Autopsy was performed on 110 cases, in whom active TB was found in 40% (44/110). TB was missed in life in 52% (23/44) of cases and was wrongly attributed as the cause of death in 16% (18/110). The sensitivity of clinical diagnosis for TB was 48% (21/44) and specificity 73% (48/66) when measured against autopsy. Data on clinico-pathological comparison for Hospital C cases are published elsewhere [10].\n[SUBTITLE] Symptoms and Clinical Examination [SUBSECTION] Eliciting of TB symptoms is evaluated by the TB-PBPR tool on the basis of whether 3 symptoms are recorded. Importantly, the tool cannot distinguish between omissions of clinical action and omissions of clinical record. Depending on the hospital, symptoms of chest complaint (cough/haemoptysis/pain) were omitted in up to 39% of patients, weight loss between 15% and 73%, and fever/night sweats in up to 81% (Table 2). The TB-PBPR tool evaluates clinical examination on the basis of 6 clinical actions (Table 2). Chest auscultation was omitted in 32% and 50% of patients from mine hospital A and B respectively, but always performed at both teaching hospitals. Examination for weight loss and lymphadenopathy were performed poorly at all four hospitals, being omitted in 31% to 85% of patients. 
Notwithstanding clinical omissions, at least one positive clinical finding was documented for most patients: 93% (52/56) at hospital A, 88% (23/26) at hospital B and 100% at both hospital C and D.\nClinical actions: eliciting of clinical symptoms and examination (%)\nThe TB-PBPR tool was used to evaluate medical records to determine whether clinical actions were omitted or performed. Where an action was performed, we recorded whether symptoms and examination findings were absent or present. For example, in hospital A, clinicians omitted eliciting symptoms of chest complaint (cough/haemoptysis/pain) in 38% of patients. Chest symptoms were elicited in the remaining patients (Action performed); these symptoms were absent in 7% and present in 55% of patients.\nEliciting of TB symptoms is evaluated by the TB-PBPR tool on the basis of whether 3 symptoms are recorded. Importantly, the tool cannot distinguish between omissions of clinical action and omissions of clinical record. Depending on the hospital, symptoms of chest complaint (cough/haemoptysis/pain) were omitted in up to 39% of patients, weight loss between 15% and 73%, and fever/night sweats in up to 81% (Table 2). The TB-PBPR tool evaluates clinical examination on the basis of 6 clinical actions (Table 2). Chest auscultation was omitted in 32% and 50% of patients from mine hospital A and B respectively, but always performed at both teaching hospitals. Examination for weight loss and lymphadenopathy were performed poorly at all four hospitals, being omitted in 31% to 85% of patients. Notwithstanding clinical omissions, at least one positive clinical finding was documented for most patients: 93% (52/56) at hospital A, 88% (23/26) at hospital B and 100% at both hospital C and D.\nClinical actions: eliciting of clinical symptoms and examination (%)\nThe TB-PBPR tool was used to evaluate medical records to determine whether clinical actions were omitted or performed. Where an action was performed, we recorded whether symptoms and examination findings were absent or present. For example, in hospital A, clinicians omitted eliciting symptoms of chest complaint (cough/haemoptysis/pain) in 38% of patients. Chest symptoms were elicited in the remaining patients (Action performed); these symptoms were absent in 7% and present in 55% of patients.\n[SUBTITLE] Investigations [SUBSECTION] Following symptoms and clinical examination, the tool assesses whether clinical investigations were performed appropriately (Table 3). Investigations were omitted in hospitals A to C as follows; CXR in up to 38%; sputum smear in up to 85%; sputum culture in up to 90% (Table 3). However, in hospital D, these investigations were performed in every case. Assessment of other investigations such as lymph node aspiration, pleural tap and lumbar puncture depends upon recording of relevant clinical details. For example, in the absence of examination for lymphadenopathy it was impossible to evaluate the use of lymph node aspiration (marked 'no exam'). Nonetheless, we did identify cases in hospitals A, B and D where lymphadenopathy was present but lymph node biopsy was omitted (Table 3).\nClinical actions: use of appropriate investigations (%)\nThe TB-PBPR tool was used to assess whether appropriate investigations were performed. 
We recorded 'NA' where the investigation was not applicable (for example, where 'Sputum Smear' had identified acid fast bacilli), we recorded 'no exam' where the relevant clinical examination was not documented, we recorded 'omitted' where an investigation was applicable but not performed, and we recorded 'done' where the investigation was performed. For example, in hospital C, lymph node (LN) aspiration was not applicable in 45% of patients and was not documented in 35% of patients. LN aspiration was omitted in 15% and done in 5% of the patients.\nFollowing symptoms and clinical examination, the tool assesses whether clinical investigations were performed appropriately (Table 3). Investigations were omitted in hospitals A to C as follows; CXR in up to 38%; sputum smear in up to 85%; sputum culture in up to 90% (Table 3). However, in hospital D, these investigations were performed in every case. Assessment of other investigations such as lymph node aspiration, pleural tap and lumbar puncture depends upon recording of relevant clinical details. For example, in the absence of examination for lymphadenopathy it was impossible to evaluate the use of lymph node aspiration (marked 'no exam'). Nonetheless, we did identify cases in hospitals A, B and D where lymphadenopathy was present but lymph node biopsy was omitted (Table 3).\nClinical actions: use of appropriate investigations (%)\nThe TB-PBPR tool was used to assess whether appropriate investigations were performed. We recorded 'NA' where the investigation was not applicable (for example, where 'Sputum Smear' had identified acid fast bacilli), we recorded 'no exam' where the relevant clinical examination was not documented, we recorded 'omitted' where an investigation was applicable but not performed, and we recorded 'done' where the investigation was performed. For example, in hospital C, lymph node (LN) aspiration was not applicable in 45% of patients and was not documented in 35% of patients. LN aspiration was omitted in 15% and done in 5% of the patients.\n[SUBTITLE] Missed Opportunities [SUBSECTION] For each of the 14 clinical actions, the tool collapses the chain of clinical process to identify a missed opportunity where the process failed. For eliciting of TB symptoms, the clinical process was considered complete provided one or more symptoms were recorded and appropriate investigations had been performed. We found a mean of 8.8, 9.8, 7.2 and 2.4 missed opportunities per patient at hospitals A-D respectively (Figure 1). For both mine hospitals, we were able to review attendances to the hospital or outpatient clinics in the 3 months before final admission as a surrogate measure of opportunities for earlier intervention. In hospital A, 86% (47/56) of patients had attended at least once while the proportion in hospital B was 92% (24/26). The median number of attendances was 3 and 7 respectively.\nHistograms showing the distribution of total missed opportunities per case (maximum 14) for each hospital.\nFor each of the 14 clinical actions, the tool collapses the chain of clinical process to identify a missed opportunity where the process failed. For eliciting of TB symptoms, the clinical process was considered complete provided one or more symptoms were recorded and appropriate investigations had been performed. We found a mean of 8.8, 9.8, 7.2 and 2.4 missed opportunities per patient at hospitals A-D respectively (Figure 1). 
For both mine hospitals, we were able to review attendances to the hospital or outpatient clinics in the 3 months before final admission as a surrogate measure of opportunities for earlier intervention. In hospital A, 86% (47/56) of patients had attended at least once while the proportion in hospital B was 92% (24/26). The median number of attendances was 3 and 7 respectively.\nHistograms showing the distribution of total missed opportunities per case (maximum 14) for each hospital.", "We reviewed medical records from 115 patients who died at the four hospitals using the TB-PBPR tool (Table 1). The duration of hospital admission was >24 h in 96% (110/115) of patients. Most patients at the South African hospitals (80-96%) were known to have HIV infection and in the mining hospitals, many had previously been treated for TB. The proportion treated for TB at final admission varied according to hospital setting (57-92%), with treatment started empirically (without microbiological evidence) in 25-53% of cases.\nCharacteristics of cases by hospital\n† HIV status was unknown in 2, 1, 3 and 1 cases in Hospitals A-D respectively", "During the study period, 214 deaths were recorded at Hospital A. Autopsy was performed on 110 cases, in whom active TB was found in 40% (44/110). TB was missed in life in 52% (23/44) of cases and was wrongly attributed as the cause of death in 16% (18/110). The sensitivity of clinical diagnosis for TB was 48% (21/44) and specificity 73% (48/66) when measured against autopsy. Data on clinico-pathological comparison for Hospital C cases are published elsewhere [10].", "Eliciting of TB symptoms is evaluated by the TB-PBPR tool on the basis of whether 3 symptoms are recorded. Importantly, the tool cannot distinguish between omissions of clinical action and omissions of clinical record. Depending on the hospital, symptoms of chest complaint (cough/haemoptysis/pain) were omitted in up to 39% of patients, weight loss between 15% and 73%, and fever/night sweats in up to 81% (Table 2). The TB-PBPR tool evaluates clinical examination on the basis of 6 clinical actions (Table 2). Chest auscultation was omitted in 32% and 50% of patients from mine hospital A and B respectively, but always performed at both teaching hospitals. Examination for weight loss and lymphadenopathy were performed poorly at all four hospitals, being omitted in 31% to 85% of patients. Notwithstanding clinical omissions, at least one positive clinical finding was documented for most patients: 93% (52/56) at hospital A, 88% (23/26) at hospital B and 100% at both hospital C and D.\nClinical actions: eliciting of clinical symptoms and examination (%)\nThe TB-PBPR tool was used to evaluate medical records to determine whether clinical actions were omitted or performed. Where an action was performed, we recorded whether symptoms and examination findings were absent or present. For example, in hospital A, clinicians omitted eliciting symptoms of chest complaint (cough/haemoptysis/pain) in 38% of patients. Chest symptoms were elicited in the remaining patients (Action performed); these symptoms were absent in 7% and present in 55% of patients.", "Following symptoms and clinical examination, the tool assesses whether clinical investigations were performed appropriately (Table 3). Investigations were omitted in hospitals A to C as follows; CXR in up to 38%; sputum smear in up to 85%; sputum culture in up to 90% (Table 3). However, in hospital D, these investigations were performed in every case. 
Assessment of other investigations such as lymph node aspiration, pleural tap and lumbar puncture depends upon recording of relevant clinical details. For example, in the absence of examination for lymphadenopathy it was impossible to evaluate the use of lymph node aspiration (marked 'no exam'). Nonetheless, we did identify cases in hospitals A, B and D where lymphadenopathy was present but lymph node biopsy was omitted (Table 3).\nClinical actions: use of appropriate investigations (%)\nThe TB-PBPR tool was used to assess whether appropriate investigations were performed. We recorded 'NA' where the investigation was not applicable (for example, where 'Sputum Smear' had identified acid fast bacilli), we recorded 'no exam' where the relevant clinical examination was not documented, we recorded 'omitted' where an investigation was applicable but not performed, and we recorded 'done' where the investigation was performed. For example, in hospital C, lymph node (LN) aspiration was not applicable in 45% of patients and was not documented in 35% of patients. LN aspiration was omitted in 15% and done in 5% of the patients.", "For each of the 14 clinical actions, the tool collapses the chain of clinical process to identify a missed opportunity where the process failed. For eliciting of TB symptoms, the clinical process was considered complete provided one or more symptoms were recorded and appropriate investigations had been performed. We found a mean of 8.8, 9.8, 7.2 and 2.4 missed opportunities per patient at hospitals A-D respectively (Figure 1). For both mine hospitals, we were able to review attendances to the hospital or outpatient clinics in the 3 months before final admission as a surrogate measure of opportunities for earlier intervention. In hospital A, 86% (47/56) of patients had attended at least once while the proportion in hospital B was 92% (24/26). The median number of attendances was 3 and 7 respectively.\nHistograms showing the distribution of total missed opportunities per case (maximum 14) for each hospital.", "We applied a TB-Process Based Performance Review tool as a novel method to evaluate accurate and timely diagnosis of TB. We evaluated the tool in deceased patients, where clinical management may have failed. We therefore expected to find omissions in clinical process, and this may not be representative of the TB programmes in their entirety. Used in conjunction with autopsy results, as for hospitals A and C, the tool deconstructs a patient's clinical care to capture diagnostic errors. In the absence of autopsy data, as for hospitals B and D, the tool still provides valuable insight into patient care and overall TB programme performance. Where documentation was missing, we recorded an omission because clear documentation is essential for clinical governance as well as communication reasons. We found that simple but fundamental diagnostic clinical actions such as chest auscultation, CXR and sputum smear were not recorded in many cases. The tool highlights specific areas for improvement within each setting. Similar system failures have been reported in South African urban state clinics [31]. In the absence of a systematic clinical history, examination, and appropriate investigation, the evidence base for logical diagnostic decision-making is lost. This is important in TB (particularly with HIV co-infection) where cases can present diagnostic challenges even where all appropriate investigations seem to have been explored [8]. 
Similarly, implementation of a 19-point checklist of simple clinical actions was associated with significant improvements in surgical outcome [32].\nMany of these 115 patients presented with symptoms that should have prompted clinicians to consider TB (Table 2); overall 62% presented with chest symptoms, 38% with symptoms of weight loss and 37% with fever or night sweats. Other indications of the possibility of TB existed in these patients, 34% had previously been treated for TB and 85% were HIV-infected. All admitted patients survived for at least 24 h (many survived for longer). Clinicians therefore had opportunity to initiate investigations and treatment, and in many cases to monitor the response to treatment. Furthermore, in hospitals A and B, we found most cases made contact with medical services in the three months before final admission. We would expect to find further missed opportunities in these earlier patient/clinician interactions.\nWe highlight that sputum culture results were missing for so many of the South African patients, although this investigation was available in all hospitals. The dangers of multidrug resistance have been well described [33], and a high index of suspicion must be maintained, particularly where patients fail to improve on TB treatment or have recurrent TB (ISTC, standard 14 [22]). On the other hand, HIV testing was performed well, acknowledging the importance of ascertaining HIV status where TB is suspected.\nDifferent criteria were used for selection at each hospital, with the important distinction that (in this study) patients at Hospitals C and D were diagnosed with TB pre-mortem. However, a differential diagnosis that included TB was common to all. Although the selection criteria varied, and this limits comparison, the four hospitals do allow some useful broad observations to be made. We demonstrate that the tool identifies missed opportunities in a variety of settings, both in public and private hospitals, and in developed and developing countries. We recognise that not all missed opportunities carry equal weight but found the total number of missed opportunities to be a useful assessment of overall performance. The findings in hospital D, although the number of patients was small, suggest that it may be possible to achieve low numbers of missed opportunities for some patient groups, and this supports previous data showing that, despite low rates of error, some deaths are unavoidable [17]. Omissions in clinical care occur throughout the world and are not limited to TB; one study found that 45% of US adults do not receive care that is consistent with current recommendations [34]. If simple clinical actions are omitted in these settings, where TB prevalence is high, this raises concerns about settings where clinicians have far less experience of treating TB.\nThe TB-PBPR tool was designed and used here for in-patients and may require adaptation before use in some low income outpatient settings where clinical records may be less detailed. However, many of the key missed opportunities such as basic history taking and sputum smear are common to nearly all settings. Further evaluation will assess whether the TB-PBPR tool can be used to improve local clinical and programme management.\nSuccessful TB control requires basic clinical and public health management to be performed efficiently and consistently. 
Clinical omissions and misdiagnoses have implications for both the individual and the community, delaying treatment and increasing the period of infectivity, which leads to increased transmission, treatment failure, medical costs, and deaths.\nThe tool was designed with an educational objective in mind, to be used by clinicians to reflect on clinical practice and monitor missed opportunities. The TB-PBPR tool may be particularly useful for improving clinical care in patients with a range of poor outcomes (e.g., development of drug resistance and recurrence), and its use is not limited to deceased patients. We suggest that the tool may augment broader WHO measures for TB programmes because it allows detailed evaluation to feed back into a TB programme and improve clinical care.", "The authors declare that they have no competing interests.", "JM, PS and NF designed the study. NF, JM, MW and ND completed the TB-PBPR tool flow sheets, and the data were collated by NF. All other authors participated in the design and coordination of the study. All authors contributed to the manuscript, read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/127/prepub\n", "The TB-PBPR tool.\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Methylenetetrahydrofolate reductase C677T polymorphism in patients with lung cancer in a Korean population.
21342495
This study was designed to investigate an association between methylenetetrahydrofolate reductase (MTHFR) C677T polymorphism and the risk of lung cancer in a Korean population.
BACKGROUND
We conducted a large-scale, case-control study involving 3938 patients with newly diagnosed lung cancer and 1700 healthy controls. Genotyping was performed with peripheral blood DNA for MTHFR C677T polymorphisms. Statistical significance was estimated by logistic regression analysis.
METHODS
The MTHFR C677T frequencies of CC, CT, and TT genotypes were 34.5%, 48.5%, and 17.0% among lung cancer patients, and 31.8%, 50.7%, and 17.5% in the controls, respectively. The MTHFR 677CT and TT genotypes showed weak protection against lung cancer compared with the homozygous CC genotype, although the results did not reach statistical significance. The age- and gender-adjusted odds ratio (OR) of overall lung cancer was 0.90 (95% confidence interval (CI), 0.77-1.04) for MTHFR 677 CT and 0.88 (95% CI, 0.71-1.07) for MTHFR 677TT. However, after stratification analysis by histological type, the MTHFR 677CT genotype showed a significantly decreased risk for squamous cell carcinoma (age- and gender-adjusted OR, 0.78; 95% CI, 0.64-0.96). The combination of the 677 TT homozygous and 677 CT heterozygous genotypes also appeared to have a protective effect on the risk of squamous cell carcinoma. We observed no significant interaction between the MTHFR C677T polymorphism and age, gender, or smoking habit.
RESULTS
This is the first reported study focusing on the association between the MTHFR C677T polymorphism and the risk of lung cancer in a Korean population. The T allele showed a weak protective association with lung squamous cell carcinoma.
CONCLUSIONS
[ "Adenocarcinoma", "Aged", "Alleles", "Base Sequence", "Carcinoma, Large Cell", "Carcinoma, Squamous Cell", "Case-Control Studies", "Confidence Intervals", "DNA Primers", "DNA, Neoplasm", "Female", "Gene Frequency", "Genetic Predisposition to Disease", "Humans", "Lung Neoplasms", "Male", "Methylenetetrahydrofolate Reductase (NADPH2)", "Middle Aged", "Odds Ratio", "Polymorphism, Single Nucleotide", "Republic of Korea", "Risk Factors", "Small Cell Lung Carcinoma" ]
3048494
null
null
Methods
[SUBTITLE] Subjects [SUBSECTION] The study population consisted of 3938 patients with newly diagnosed lung cancer and 1700 population-based controls. All enrolled patients were pathologically confirmed at Chonnam National University Hwasun Hospital between January 2000 and August 2010. Cases with secondary or recurrent tumors were excluded. The control group (n = 1700) consisted of participants in the Thyroid Disease Prevalence Study [19], conducted from July 2004 to January 2006 in the Yeonggwang and Muan Counties of Jeollanam-do Province and in Namwon City of Jeollabuk-do, Korea. A total of 4018 subjects were randomly selected by 5-year age strata and sex. Of the total number, 3486 were eligible subjects. Of those eligible, 1699 (48.8% of the eligible subjects; 820 men and 879 women), underwent clinical examinations. At the time of their peripheral blood collections, all control subjects provided their informed consent to participate in this study. This study was approved by the Institutional Review Board of the Chonnam National University Hwasun Hospital in Hwasun, South Korea. At the time of their peripheral blood collections, all case and control subjects provided their informed consent to participate in this study. The study population consisted of 3938 patients with newly diagnosed lung cancer and 1700 population-based controls. All enrolled patients were pathologically confirmed at Chonnam National University Hwasun Hospital between January 2000 and August 2010. Cases with secondary or recurrent tumors were excluded. The control group (n = 1700) consisted of participants in the Thyroid Disease Prevalence Study [19], conducted from July 2004 to January 2006 in the Yeonggwang and Muan Counties of Jeollanam-do Province and in Namwon City of Jeollabuk-do, Korea. A total of 4018 subjects were randomly selected by 5-year age strata and sex. Of the total number, 3486 were eligible subjects. Of those eligible, 1699 (48.8% of the eligible subjects; 820 men and 879 women), underwent clinical examinations. At the time of their peripheral blood collections, all control subjects provided their informed consent to participate in this study. This study was approved by the Institutional Review Board of the Chonnam National University Hwasun Hospital in Hwasun, South Korea. At the time of their peripheral blood collections, all case and control subjects provided their informed consent to participate in this study. [SUBTITLE] Genotyping [SUBSECTION] Genomic DNA was extracted from peripheral blood using a QIAamp DNA Blood Mini Kit (Qiagen, Valencia, CA, USA) according to the manufacturer's protocol. Genotyping was performed by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) or real-time PCR. The genotyping protocol for PCR-RFLP was adapted from Frosst et al. [8]. After HinfI (Takara, Tokyo, Japan) restriction enzyme digestion, samples were run on a 10% polyacrylamide gel (19:1) using Microtitre Array Diagonal Gel Electrophoresis (MADGE; MadgeBio, Grantham and Southampton, UK). Genotyping by real-time PCR was performed by allelic discrimination using dual-labeled probes containing locked nucleic acids (LNA) in a real-time PCR assay. PCR primers and LNA probes were designed and synthesized by Integrated DNA Technologies (IDT, Coralville, IA, USA). Primers producing a 104-bp amplicon were as follows: forward, 5'-CTTTGAGGCTGACCTGAAGC-3' and reverse, 5'-TCACAAAGCGGAAGAA TGTG-3'. 
Dual-labeled LNA hybridization probes were 5'-FAM-ATG GcT ccc-BHQ1-3' for the C allele and 5'-cy5-cgA CTc cCg C-BHQ2-3' for the T allele (LNA bases are denoted in upper case, and single nucleotide polymorphisms are underlined). Real-time PCR was performed using a Rotor-Gene 3000 multiplex system (Corbett Research, Sydney, Australia) in a 10-μL reaction volume containing 200 nM PCR primer, 10-10 nM each probe, 0.5 U f-taq polymerase (Solgent, Daejeon, Korea), and 40 ng of genomic DNA. In 24 subjects, the results of PCR-RFLP were compared with those from real-time PCR, and the resulting concordance rate was 100%. Genomic DNA was extracted from peripheral blood using a QIAamp DNA Blood Mini Kit (Qiagen, Valencia, CA, USA) according to the manufacturer's protocol. Genotyping was performed by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) or real-time PCR. The genotyping protocol for PCR-RFLP was adapted from Frosst et al. [8]. After HinfI (Takara, Tokyo, Japan) restriction enzyme digestion, samples were run on a 10% polyacrylamide gel (19:1) using Microtitre Array Diagonal Gel Electrophoresis (MADGE; MadgeBio, Grantham and Southampton, UK). Genotyping by real-time PCR was performed by allelic discrimination using dual-labeled probes containing locked nucleic acids (LNA) in a real-time PCR assay. PCR primers and LNA probes were designed and synthesized by Integrated DNA Technologies (IDT, Coralville, IA, USA). Primers producing a 104-bp amplicon were as follows: forward, 5'-CTTTGAGGCTGACCTGAAGC-3' and reverse, 5'-TCACAAAGCGGAAGAA TGTG-3'. Dual-labeled LNA hybridization probes were 5'-FAM-ATG GcT ccc-BHQ1-3' for the C allele and 5'-cy5-cgA CTc cCg C-BHQ2-3' for the T allele (LNA bases are denoted in upper case, and single nucleotide polymorphisms are underlined). Real-time PCR was performed using a Rotor-Gene 3000 multiplex system (Corbett Research, Sydney, Australia) in a 10-μL reaction volume containing 200 nM PCR primer, 10-10 nM each probe, 0.5 U f-taq polymerase (Solgent, Daejeon, Korea), and 40 ng of genomic DNA. In 24 subjects, the results of PCR-RFLP were compared with those from real-time PCR, and the resulting concordance rate was 100%. [SUBTITLE] Statistical analyses [SUBSECTION] The statistical significance of differences between the patient and control groups was estimated by logistic regression analysis. Adjusted odds ratios (OR) were calculated with a logistic regression model that controlled for gender and age and are given with 95% confidence intervals (CI). Subjects with the wild-type genotypes (MTHFR 677CC) were considered to be at baseline risk. The expected frequency of control genotypes was checked by the Hardy-Weinberg equilibrium test. The heterogeneity was tested by multivariate logistic regression model. Subjects for whom there were missing data for smoking or histological type were excluded in interaction and subgroup analyses related to these variables. All analyses were performed using the Statistical Package for the Social Sciences software (ver. 13.0; SPSS, Chicago, IL, USA). The statistical significance of differences between the patient and control groups was estimated by logistic regression analysis. Adjusted odds ratios (OR) were calculated with a logistic regression model that controlled for gender and age and are given with 95% confidence intervals (CI). Subjects with the wild-type genotypes (MTHFR 677CC) were considered to be at baseline risk. 
The expected frequency of control genotypes was checked by the Hardy-Weinberg equilibrium test. The heterogeneity was tested by multivariate logistic regression model. Subjects for whom there were missing data for smoking or histological type were excluded in interaction and subgroup analyses related to these variables. All analyses were performed using the Statistical Package for the Social Sciences software (ver. 13.0; SPSS, Chicago, IL, USA).
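The short sketch below illustrates the type of analysis described above: an age- and gender-adjusted odds ratio with a 95% confidence interval from a logistic regression model, plus a Hardy-Weinberg equilibrium check in controls. It is an illustrative reconstruction only; the simulated data, variable names, and the use of the Python statsmodels/scipy libraries are assumptions, not the authors' SPSS analysis.

    # Illustrative sketch (not the authors' SPSS code): adjusted odds ratios for the
    # MTHFR 677 CT and TT genotypes versus the CC reference, controlling for age and
    # gender, plus a Hardy-Weinberg equilibrium check in controls. Data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from scipy.stats import chisquare

    rng = np.random.default_rng(0)
    n = 2000
    age = rng.normal(62, 9, n)
    male = rng.integers(0, 2, n)
    genotype = rng.choice(["CC", "CT", "TT"], size=n, p=[0.33, 0.50, 0.17])
    # Toy outcome with a weak protective effect of the T allele.
    lin = -1.0 + 0.02 * (age - 62) + 0.4 * male - 0.15 * (genotype != "CC")
    case = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))
    df = pd.DataFrame({"case": case, "age": age, "male": male, "genotype": genotype})

    # Logistic regression with CC (wild type) as the reference category.
    X = pd.get_dummies(df["genotype"])[["CT", "TT"]].astype(float)
    X["age"] = df["age"]
    X["male"] = df["male"]
    X = sm.add_constant(X)
    fit = sm.Logit(df["case"], X).fit(disp=0)
    print(np.exp(fit.params))      # age- and gender-adjusted odds ratios
    print(np.exp(fit.conf_int()))  # 95% confidence intervals

    # Hardy-Weinberg equilibrium in controls: chi-square test with 1 degree of freedom.
    ctrl = df.loc[df["case"] == 0, "genotype"].value_counts()
    p_c = (2 * ctrl.get("CC", 0) + ctrl.get("CT", 0)) / (2.0 * ctrl.sum())
    expected = ctrl.sum() * np.array([p_c**2, 2 * p_c * (1 - p_c), (1 - p_c)**2])
    observed = np.array([ctrl.get(g, 0) for g in ("CC", "CT", "TT")])
    print(chisquare(observed, expected, ddof=1))

Exponentiating the fitted coefficients and their confidence limits gives the adjusted ORs and 95% CIs reported in this type of study; subjects with the wild-type CC genotype form the baseline because their dummy variables are both zero.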
null
null
null
null
[ "Background", "Subjects", "Genotyping", "Statistical analyses", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Lung cancer is the leading cause of cancer-related death worldwide. The incidence and mortality of lung cancer have been significantly and constantly increasing over the past two decades in Korea [1-3]. According to the Korean National Cancer Registry, the age-standardized incidence rate for lung cancer of Korean population was 47.5/100,000 for men and 13.3/100,000 for women in 2007 [2], it has become the second most common malignant tumor following gastric cancer. The reason for this increase in lung cancer has not been clearly explained. Although it is well known that cigarette smoking is the major cause of lung cancer, only 10-20% of lifetime smokers are known to develop lung cancer. Additionally, lung cancer is a multicellular and multistage process involving a number of genetic changes in oncogenes, suggesting that genetic factors may play an important role in its development [4-6].\nMethylenetetrahydrofolate reductase (MTHFR) is an important enzyme in folate metabolism. A common mutation of the MTHFR gene is the C to T transition at nucleotide 677, which converts alanine to valine, results in a thermo-labile enzyme with decreased activity [7]. The heterozygote and homozygous variant of C677T were shown to have 65 and 30% of the enzyme activity, respectively [8]. The low enzymatic activity of the MTHFR C677T genotypic variant is associated with DNA hypomethylation, which may induce genomic instability or the derepression of proto-oncogenes.\nTo date, several studies have shown that the MTHFR C677T polymorphism are associated with either increased [9-12] or decreased [13-15] risk of lung cancer, whereas others observed no association between the MTHFR C677T genotype and genetic susceptibility to lung cancer [16-18]. Small sample size, various ethnic groups, diet, environment, and methodologies may be responsible for the discrepancy. Therefore, a larger single study is required to evaluate MTHFR C677T polymorphisms and the lung cancer risk in a specific population. Additionally, to our knowledge, no previous report has examined the effect of MTHFR C677T polymorphisms on the risk of lung cancer in a Korean population. In the present study, we performed a large population based case-control study involving 3938 lung cancer patients and 1700 healthy controls to evaluate whether MTHFR C677T polymorphism was associated with lung cancer risk in a Korean population. Additionally, we investigated whether MTHFR C677T plays an interactive role in the lung cancer risk in relation to histological subtypes and smoking status.", "The study population consisted of 3938 patients with newly diagnosed lung cancer and 1700 population-based controls. All enrolled patients were pathologically confirmed at Chonnam National University Hwasun Hospital between January 2000 and August 2010. Cases with secondary or recurrent tumors were excluded.\nThe control group (n = 1700) consisted of participants in the Thyroid Disease Prevalence Study [19], conducted from July 2004 to January 2006 in the Yeonggwang and Muan Counties of Jeollanam-do Province and in Namwon City of Jeollabuk-do, Korea. A total of 4018 subjects were randomly selected by 5-year age strata and sex. Of the total number, 3486 were eligible subjects. Of those eligible, 1699 (48.8% of the eligible subjects; 820 men and 879 women), underwent clinical examinations. 
At the time of their peripheral blood collections, all control subjects provided their informed consent to participate in this study.\nThis study was approved by the Institutional Review Board of the Chonnam National University Hwasun Hospital in Hwasun, South Korea. At the time of their peripheral blood collections, all case and control subjects provided their informed consent to participate in this study.", "Genomic DNA was extracted from peripheral blood using a QIAamp DNA Blood Mini Kit (Qiagen, Valencia, CA, USA) according to the manufacturer's protocol. Genotyping was performed by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) or real-time PCR. The genotyping protocol for PCR-RFLP was adapted from Frosst et al. [8]. After HinfI (Takara, Tokyo, Japan) restriction enzyme digestion, samples were run on a 10% polyacrylamide gel (19:1) using Microtitre Array Diagonal Gel Electrophoresis (MADGE; MadgeBio, Grantham and Southampton, UK). Genotyping by real-time PCR was performed by allelic discrimination using dual-labeled probes containing locked nucleic acids (LNA) in a real-time PCR assay. PCR primers and LNA probes were designed and synthesized by Integrated DNA Technologies (IDT, Coralville, IA, USA). Primers producing a 104-bp amplicon were as follows: forward, 5'-CTTTGAGGCTGACCTGAAGC-3' and reverse, 5'-TCACAAAGCGGAAGAA TGTG-3'. Dual-labeled LNA hybridization probes were 5'-FAM-ATG GcT ccc-BHQ1-3' for the C allele and 5'-cy5-cgA CTc cCg C-BHQ2-3' for the T allele (LNA bases are denoted in upper case, and single nucleotide polymorphisms are underlined). Real-time PCR was performed using a Rotor-Gene 3000 multiplex system (Corbett Research, Sydney, Australia) in a 10-μL reaction volume containing 200 nM PCR primer, 10-10 nM each probe, 0.5 U f-taq polymerase (Solgent, Daejeon, Korea), and 40 ng of genomic DNA. In 24 subjects, the results of PCR-RFLP were compared with those from real-time PCR, and the resulting concordance rate was 100%.", "The statistical significance of differences between the patient and control groups was estimated by logistic regression analysis. Adjusted odds ratios (OR) were calculated with a logistic regression model that controlled for gender and age and are given with 95% confidence intervals (CI). Subjects with the wild-type genotypes (MTHFR 677CC) were considered to be at baseline risk. The expected frequency of control genotypes was checked by the Hardy-Weinberg equilibrium test. The heterogeneity was tested by multivariate logistic regression model. Subjects for whom there were missing data for smoking or histological type were excluded in interaction and subgroup analyses related to these variables. All analyses were performed using the Statistical Package for the Social Sciences software (ver. 13.0; SPSS, Chicago, IL, USA).", "The characteristics of the study population are presented in Table 1. In total, 3938 cases and 1700 controls were included in these analyses. The 3938 lung cancer cases consisted of 1523 adenocarcinomas, 1519 squamous cell carcinomas, 574 small cell carcinomas, and 322 other types, including 75 large cell cancers and 247 mixed types. The mean age of patients with lung cancer was significantly higher than the control group. A statistically significant gender difference was also found between patients with lung cancer and healthy controls; the control group had more females. 
The proportion of smokers in lung cancer cases was higher than in the controls.\nGeneral characteristics of subjects\nSD, standard deviation; ADC, adenocarcinoma; SQC, squamous cell carcinoma;\nSCLC, small cell lung cancer; others, large cell carcinoma and mixed types; *, p < 0.01.\nTable 2 shows the genotype distributions for MTHFR C677T and their ORs and 95% CIs in lung cancer. The distribution of the MTHFR C677T genotypes in the controls was assessed for Hardy-Weinberg equilibrium. The MTHFR C677T frequencies of CC, CT, and TT genotypes were 34.5%, 48.5%, and 17.0% in lung cancer, and 31.8%, 50.7%, and 17.5% in the controls, respectively. The combined frequency of the 677 CT heterozygous and 677 TT homozygous genotypes was 65.4% in lung cancer and 68.2% in the controls. Compared with the MTHFR 677 CC genotype, the TT and CT genotypes showed a protective effect on the risk of lung cancer after adjustment for age and gender: overall TT versus CC (OR = 0.88; 95% CI = 0.71-1.07) and overall CT versus CC (OR = 0.90; 95% CI = 0.77-1.04); however, the results did not reach statistical significance.\nDistribution of MTHFR C677T genotypes and their association with lung cancer risk\naAdjusted for age and gender; OR, odds ratio; CI, confidence interval.\nTable 3 shows the subgroup analysis by gender, age and histological type for the MTHFR C677T polymorphism. When the MTHFR 677CC genotype was used as the reference group, the MTHFR 677 CT genotype was associated with a significantly reduced risk of squamous cell carcinoma (OR = 0.78; 95% CI = 0.64-0.96), and the combined variant genotypes (677 CT + TT) also showed a protective effect on the risk of squamous cell carcinoma (OR = 0.79; 95% CI = 0.65-0.95), whereas there was no significant association in other histological types of lung cancer. There was no heterogeneity among subgroups defined by gender (male, female), age (≤65, >65 years), smoking (never smoker, ever smoker) or histological type (adenocarcinoma, squamous cell carcinoma, small cell carcinoma, other types). Nor did we find evidence of an interaction between the MTHFR C677T polymorphism and age, gender or smoking habit.\nSubgroup analysis for the MTHFR C677T polymorphism\nSCLC, small cell lung cancer; SQC, squamous cell carcinoma; ADC, adenocarcinoma; others, large cell carcinoma and mixed types.\nORa: odds ratio adjusted for age and gender; CI, confidence interval;\nPb: p values for heterogeneity.", "The current study represents the largest single-population sample (3938 lung cancer patients and 1700 controls) reported to date for evaluating a possible association between the MTHFR C677T polymorphism and susceptibility to lung cancer. To our knowledge, this is also the first report to examine the association between the MTHFR C677T polymorphism and susceptibility to lung cancer in a Korean population. We found that the MTHFR 677 CT and TT genotypes showed weak protection against overall lung cancer, although the results were not statistically significant. However, by histological subtype, we found significant protection by the MTHFR 677 CT genotype against squamous cell carcinoma risk.\nThe combination of the 677 TT homozygous and 677 CT heterozygous genotypes also appeared to have a protective effect on the risk of squamous cell carcinoma. We observed no significant interactions between the MTHFR C677T polymorphism and smoking, gender, or age.\nResults of several studies examining the role of the MTHFR C677T polymorphism in lung cancer susceptibility have been inconsistent. Liu et al. [14] and Jeng et al.
[13] in Taiwan and Suzuki et al. [15] in Japan showed that the MTHFR 677 TT genotype was associated with a decreased risk of lung cancer. However, Siemianowicz et al. [11] in Poland, Hung et al. [9] in Central Europe, and Shen et al. [10] in China showed that individuals with the MTHFR TT genotype had an increased risk of lung cancer versus those with the wild-type homozygous variant, while a recent meta-analysis by Mao et al. [20] based on eight case-control studies found no evidence for a major role of the MTHFR C677T polymorphism in lung carcinogenesis. Small sample sizes and differences in ethnicity, diet, environment, and methodology might be responsible for the discrepancy.\nThe pathogenesis of adenocarcinoma is considered to be somewhat different from that of squamous cell carcinoma, and whether the effect of the MTHFR C677T polymorphism differs by lung cancer histology remains unclear. We performed a stratification analysis by histological type, which is lacking in most previous studies, and found that the MTHFR 677 CT genotype was associated with a significantly decreased risk of lung squamous cell carcinoma (OR = 0.78, 95% CI = 0.64-0.96), supporting a potential effect of the MTHFR C677T polymorphism on lung squamous cell carcinoma. In other histological types of lung cancer, such as adenocarcinoma and small cell lung cancer, we found no association between the MTHFR C677T genotype and lung cancer risk. A similar result was seen in a Japanese study in which no effect of the MTHFR C677T polymorphism on the risk of overall lung cancer was evident, but in a histology-based analysis the MTHFR 677T allele was associated with a reduced risk of squamous/small cell carcinoma [15]. In contrast, Siemianowicz et al. [11] reported that the 677TT genotype was associated with a significantly higher risk of non-small cell lung cancer.\nThe role of MTHFR polymorphisms in modulating cancer risk depends on folate status: the protective effect of the 677TT genotype seen under adequate folate conditions turns into an elevated risk of lung cancer among 677TT carriers with low folate intake. A recent meta-analysis by Boccia et al. [21], which included a stratified analysis according to dietary folate intake, showed an increased risk for individuals with low folate intake (OR = 1.28, 95% CI = 0.97-1.68 for lung) versus high folate intake (OR = 0.94, 95% CI = 0.79-1.12 for lung). Plasma folate levels may be relatively high among Korean adults; the median plasma folate was 22.7 nmol/L in our population-based controls, which is higher than that reported in a Chinese population [22] and in populations from 15 European countries (folate status ranged from 6.3 to 20.1 nmol/L) [23]. In addition, folate intake also appears fairly high in the Korean population; the average folate intake in Korea is about 347 μg/day [24]. This level is higher than the average intake in most European countries, except for the United Kingdom [23]. This might provide a partial explanation of why the MTHFR 677 mutation was found to protect against lung cancer, especially lung squamous cell carcinoma, in our study. Moreover, our recent study found a protective effect of the MTHFR 677 T allele on the risk of gastric and colorectal cancer in a Korean population [25].\nIn our study, there was no significant gender difference in the effect of the MTHFR C677T polymorphism on lung cancer risk. Our results seem to differ from those of the Shi et al.
[26] study in Houston, TX, USA, which reported that the MTHFR 677TT genotype in women was associated with a decreased lung cancer risk compared with carriers of the MTHFR 677CC genotype.\nCigarette smoking is a known risk factor for lung cancer, and smokers may tend to have lower serum folate levels, producing a localized deficiency of folic acid. We further examined the effects of MTHFR C677T in subgroups according to smoking status and found no interaction between the MTHFR C677T polymorphism and smoking. Our results are somewhat similar to those of Vineis et al. [18], which showed that the MTHFR C677T polymorphism had no association in either smokers or nonsmokers. However, a beneficial effect of the MTHFR TT genotype on the risk of lung cancer has been observed in heavy smokers; Suzuki et al. [15] in Japan found that the MTHFR 677T allele was associated with a reduced risk of squamous/small cell carcinoma, especially among heavy smokers, and Liu et al. [14] in Taiwan observed that smokers carrying the MTHFR 677 T allele showed a significantly decreased risk of lung cancer.\nIt is well known that familial aggregation of lung cancer can increase the risk of lung cancer, and a high consumption of vegetables and fruits is associated with a reduced risk of lung cancer. However, we had no information on the accuracy of the reported family history of cancer or on dietary folate intake, and no detailed data on environmental tobacco exposure as a risk factor for lung cancer. Thus, we could not evaluate gene-environment interactions. Another limitation of the present study is that the case group was composed of lung cancer patients enrolled from a hospital, and thus may not be representative of the general population.", "Our present large case-control study in Korea found a protective effect of the MTHFR C677T variant genotype against lung squamous cell carcinoma and suggests that the MTHFR C677T polymorphism may be involved in the development of lung cancer in the Korean population.", "The authors declare that they have no competing interests.", "MHS planned the analysis. CLH participated in the study design and drafted the manuscript. HNK and HRS participated in the experiments. JMP performed the data analysis. YCK, IJO and KSK provided clinical material. SSK, JSC, and WJY participated in its design and coordination. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2350/12/28/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Subjects", "Genotyping", "Statistical analyses", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Lung cancer is the leading cause of cancer-related death worldwide. The incidence and mortality of lung cancer have been significantly and constantly increasing over the past two decades in Korea [1-3]. According to the Korean National Cancer Registry, the age-standardized incidence rate for lung cancer of Korean population was 47.5/100,000 for men and 13.3/100,000 for women in 2007 [2], it has become the second most common malignant tumor following gastric cancer. The reason for this increase in lung cancer has not been clearly explained. Although it is well known that cigarette smoking is the major cause of lung cancer, only 10-20% of lifetime smokers are known to develop lung cancer. Additionally, lung cancer is a multicellular and multistage process involving a number of genetic changes in oncogenes, suggesting that genetic factors may play an important role in its development [4-6].\nMethylenetetrahydrofolate reductase (MTHFR) is an important enzyme in folate metabolism. A common mutation of the MTHFR gene is the C to T transition at nucleotide 677, which converts alanine to valine, results in a thermo-labile enzyme with decreased activity [7]. The heterozygote and homozygous variant of C677T were shown to have 65 and 30% of the enzyme activity, respectively [8]. The low enzymatic activity of the MTHFR C677T genotypic variant is associated with DNA hypomethylation, which may induce genomic instability or the derepression of proto-oncogenes.\nTo date, several studies have shown that the MTHFR C677T polymorphism are associated with either increased [9-12] or decreased [13-15] risk of lung cancer, whereas others observed no association between the MTHFR C677T genotype and genetic susceptibility to lung cancer [16-18]. Small sample size, various ethnic groups, diet, environment, and methodologies may be responsible for the discrepancy. Therefore, a larger single study is required to evaluate MTHFR C677T polymorphisms and the lung cancer risk in a specific population. Additionally, to our knowledge, no previous report has examined the effect of MTHFR C677T polymorphisms on the risk of lung cancer in a Korean population. In the present study, we performed a large population based case-control study involving 3938 lung cancer patients and 1700 healthy controls to evaluate whether MTHFR C677T polymorphism was associated with lung cancer risk in a Korean population. Additionally, we investigated whether MTHFR C677T plays an interactive role in the lung cancer risk in relation to histological subtypes and smoking status.", "[SUBTITLE] Subjects [SUBSECTION] The study population consisted of 3938 patients with newly diagnosed lung cancer and 1700 population-based controls. All enrolled patients were pathologically confirmed at Chonnam National University Hwasun Hospital between January 2000 and August 2010. Cases with secondary or recurrent tumors were excluded.\nThe control group (n = 1700) consisted of participants in the Thyroid Disease Prevalence Study [19], conducted from July 2004 to January 2006 in the Yeonggwang and Muan Counties of Jeollanam-do Province and in Namwon City of Jeollabuk-do, Korea. A total of 4018 subjects were randomly selected by 5-year age strata and sex. Of the total number, 3486 were eligible subjects. Of those eligible, 1699 (48.8% of the eligible subjects; 820 men and 879 women), underwent clinical examinations. 
At the time of their peripheral blood collections, all control subjects provided their informed consent to participate in this study.\nThis study was approved by the Institutional Review Board of the Chonnam National University Hwasun Hospital in Hwasun, South Korea. At the time of their peripheral blood collections, all case and control subjects provided their informed consent to participate in this study.\nThe study population consisted of 3938 patients with newly diagnosed lung cancer and 1700 population-based controls. All enrolled patients were pathologically confirmed at Chonnam National University Hwasun Hospital between January 2000 and August 2010. Cases with secondary or recurrent tumors were excluded.\nThe control group (n = 1700) consisted of participants in the Thyroid Disease Prevalence Study [19], conducted from July 2004 to January 2006 in the Yeonggwang and Muan Counties of Jeollanam-do Province and in Namwon City of Jeollabuk-do, Korea. A total of 4018 subjects were randomly selected by 5-year age strata and sex. Of the total number, 3486 were eligible subjects. Of those eligible, 1699 (48.8% of the eligible subjects; 820 men and 879 women), underwent clinical examinations. At the time of their peripheral blood collections, all control subjects provided their informed consent to participate in this study.\nThis study was approved by the Institutional Review Board of the Chonnam National University Hwasun Hospital in Hwasun, South Korea. At the time of their peripheral blood collections, all case and control subjects provided their informed consent to participate in this study.\n[SUBTITLE] Genotyping [SUBSECTION] Genomic DNA was extracted from peripheral blood using a QIAamp DNA Blood Mini Kit (Qiagen, Valencia, CA, USA) according to the manufacturer's protocol. Genotyping was performed by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) or real-time PCR. The genotyping protocol for PCR-RFLP was adapted from Frosst et al. [8]. After HinfI (Takara, Tokyo, Japan) restriction enzyme digestion, samples were run on a 10% polyacrylamide gel (19:1) using Microtitre Array Diagonal Gel Electrophoresis (MADGE; MadgeBio, Grantham and Southampton, UK). Genotyping by real-time PCR was performed by allelic discrimination using dual-labeled probes containing locked nucleic acids (LNA) in a real-time PCR assay. PCR primers and LNA probes were designed and synthesized by Integrated DNA Technologies (IDT, Coralville, IA, USA). Primers producing a 104-bp amplicon were as follows: forward, 5'-CTTTGAGGCTGACCTGAAGC-3' and reverse, 5'-TCACAAAGCGGAAGAA TGTG-3'. Dual-labeled LNA hybridization probes were 5'-FAM-ATG GcT ccc-BHQ1-3' for the C allele and 5'-cy5-cgA CTc cCg C-BHQ2-3' for the T allele (LNA bases are denoted in upper case, and single nucleotide polymorphisms are underlined). Real-time PCR was performed using a Rotor-Gene 3000 multiplex system (Corbett Research, Sydney, Australia) in a 10-μL reaction volume containing 200 nM PCR primer, 10-10 nM each probe, 0.5 U f-taq polymerase (Solgent, Daejeon, Korea), and 40 ng of genomic DNA. In 24 subjects, the results of PCR-RFLP were compared with those from real-time PCR, and the resulting concordance rate was 100%.\nGenomic DNA was extracted from peripheral blood using a QIAamp DNA Blood Mini Kit (Qiagen, Valencia, CA, USA) according to the manufacturer's protocol. Genotyping was performed by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) or real-time PCR. 
The genotyping protocol for PCR-RFLP was adapted from Frosst et al. [8]. After HinfI (Takara, Tokyo, Japan) restriction enzyme digestion, samples were run on a 10% polyacrylamide gel (19:1) using Microtitre Array Diagonal Gel Electrophoresis (MADGE; MadgeBio, Grantham and Southampton, UK). Genotyping by real-time PCR was performed by allelic discrimination using dual-labeled probes containing locked nucleic acids (LNA) in a real-time PCR assay. PCR primers and LNA probes were designed and synthesized by Integrated DNA Technologies (IDT, Coralville, IA, USA). Primers producing a 104-bp amplicon were as follows: forward, 5'-CTTTGAGGCTGACCTGAAGC-3' and reverse, 5'-TCACAAAGCGGAAGAA TGTG-3'. Dual-labeled LNA hybridization probes were 5'-FAM-ATG GcT ccc-BHQ1-3' for the C allele and 5'-cy5-cgA CTc cCg C-BHQ2-3' for the T allele (LNA bases are denoted in upper case, and single nucleotide polymorphisms are underlined). Real-time PCR was performed using a Rotor-Gene 3000 multiplex system (Corbett Research, Sydney, Australia) in a 10-μL reaction volume containing 200 nM PCR primer, 10-10 nM each probe, 0.5 U f-taq polymerase (Solgent, Daejeon, Korea), and 40 ng of genomic DNA. In 24 subjects, the results of PCR-RFLP were compared with those from real-time PCR, and the resulting concordance rate was 100%.\n[SUBTITLE] Statistical analyses [SUBSECTION] The statistical significance of differences between the patient and control groups was estimated by logistic regression analysis. Adjusted odds ratios (OR) were calculated with a logistic regression model that controlled for gender and age and are given with 95% confidence intervals (CI). Subjects with the wild-type genotypes (MTHFR 677CC) were considered to be at baseline risk. The expected frequency of control genotypes was checked by the Hardy-Weinberg equilibrium test. The heterogeneity was tested by multivariate logistic regression model. Subjects for whom there were missing data for smoking or histological type were excluded in interaction and subgroup analyses related to these variables. All analyses were performed using the Statistical Package for the Social Sciences software (ver. 13.0; SPSS, Chicago, IL, USA).\nThe statistical significance of differences between the patient and control groups was estimated by logistic regression analysis. Adjusted odds ratios (OR) were calculated with a logistic regression model that controlled for gender and age and are given with 95% confidence intervals (CI). Subjects with the wild-type genotypes (MTHFR 677CC) were considered to be at baseline risk. The expected frequency of control genotypes was checked by the Hardy-Weinberg equilibrium test. The heterogeneity was tested by multivariate logistic regression model. Subjects for whom there were missing data for smoking or histological type were excluded in interaction and subgroup analyses related to these variables. All analyses were performed using the Statistical Package for the Social Sciences software (ver. 13.0; SPSS, Chicago, IL, USA).", "The study population consisted of 3938 patients with newly diagnosed lung cancer and 1700 population-based controls. All enrolled patients were pathologically confirmed at Chonnam National University Hwasun Hospital between January 2000 and August 2010. 
Cases with secondary or recurrent tumors were excluded.\nThe control group (n = 1700) consisted of participants in the Thyroid Disease Prevalence Study [19], conducted from July 2004 to January 2006 in the Yeonggwang and Muan Counties of Jeollanam-do Province and in Namwon City of Jeollabuk-do, Korea. A total of 4018 subjects were randomly selected by 5-year age strata and sex. Of the total number, 3486 were eligible subjects. Of those eligible, 1699 (48.8% of the eligible subjects; 820 men and 879 women), underwent clinical examinations. At the time of their peripheral blood collections, all control subjects provided their informed consent to participate in this study.\nThis study was approved by the Institutional Review Board of the Chonnam National University Hwasun Hospital in Hwasun, South Korea. At the time of their peripheral blood collections, all case and control subjects provided their informed consent to participate in this study.", "Genomic DNA was extracted from peripheral blood using a QIAamp DNA Blood Mini Kit (Qiagen, Valencia, CA, USA) according to the manufacturer's protocol. Genotyping was performed by polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) or real-time PCR. The genotyping protocol for PCR-RFLP was adapted from Frosst et al. [8]. After HinfI (Takara, Tokyo, Japan) restriction enzyme digestion, samples were run on a 10% polyacrylamide gel (19:1) using Microtitre Array Diagonal Gel Electrophoresis (MADGE; MadgeBio, Grantham and Southampton, UK). Genotyping by real-time PCR was performed by allelic discrimination using dual-labeled probes containing locked nucleic acids (LNA) in a real-time PCR assay. PCR primers and LNA probes were designed and synthesized by Integrated DNA Technologies (IDT, Coralville, IA, USA). Primers producing a 104-bp amplicon were as follows: forward, 5'-CTTTGAGGCTGACCTGAAGC-3' and reverse, 5'-TCACAAAGCGGAAGAA TGTG-3'. Dual-labeled LNA hybridization probes were 5'-FAM-ATG GcT ccc-BHQ1-3' for the C allele and 5'-cy5-cgA CTc cCg C-BHQ2-3' for the T allele (LNA bases are denoted in upper case, and single nucleotide polymorphisms are underlined). Real-time PCR was performed using a Rotor-Gene 3000 multiplex system (Corbett Research, Sydney, Australia) in a 10-μL reaction volume containing 200 nM PCR primer, 10-10 nM each probe, 0.5 U f-taq polymerase (Solgent, Daejeon, Korea), and 40 ng of genomic DNA. In 24 subjects, the results of PCR-RFLP were compared with those from real-time PCR, and the resulting concordance rate was 100%.", "The statistical significance of differences between the patient and control groups was estimated by logistic regression analysis. Adjusted odds ratios (OR) were calculated with a logistic regression model that controlled for gender and age and are given with 95% confidence intervals (CI). Subjects with the wild-type genotypes (MTHFR 677CC) were considered to be at baseline risk. The expected frequency of control genotypes was checked by the Hardy-Weinberg equilibrium test. The heterogeneity was tested by multivariate logistic regression model. Subjects for whom there were missing data for smoking or histological type were excluded in interaction and subgroup analyses related to these variables. All analyses were performed using the Statistical Package for the Social Sciences software (ver. 13.0; SPSS, Chicago, IL, USA).", "The characteristics of the study population are presented in Table 1. In total, 3938 cases and 1700 controls were included in these analyses. 
The 3938 lung cancer cases consisted of 1523 adenocarcinomas, 1519 squamous cell carcinomas, 574 small cell carcinomas, and 322 other types, including 75 large cell cancers and 247 mixed types. The mean age of patients with lung cancer was significantly higher than that of the control group. A statistically significant gender difference was also found between patients with lung cancer and healthy controls; the control group had more females. The proportion of smokers in lung cancer cases was higher than in the controls.\nGeneral characteristics of subjects\nSD, standard deviation; ADC, adenocarcinoma; SQC, squamous cell carcinoma;\nSCLC, small cell lung cancer; others, large cell carcinoma and mixed types; *, p < 0.01.\nTable 2 shows the genotype distributions for MTHFR C677T and their ORs and 95% CIs in lung cancer. The distribution of the MTHFR C677T genotypes in the controls was assessed for Hardy-Weinberg equilibrium. The MTHFR C677T frequencies of CC, CT, and TT genotypes were 34.5%, 48.5%, and 17.0% in lung cancer, and 31.8%, 50.7%, and 17.5% in the controls, respectively. The combined frequency of the 677 CT heterozygous and 677 TT homozygous genotypes was 65.4% in lung cancer and 68.2% in the controls. Compared with the MTHFR 677 CC genotype, the TT and CT genotypes showed a protective effect on the risk of lung cancer after adjustment for age and gender: overall TT versus CC (OR = 0.88; 95% CI = 0.71-1.07) and overall CT versus CC (OR = 0.90; 95% CI = 0.77-1.04); however, the results did not reach statistical significance.\nDistribution of MTHFR C677T genotypes and their association with lung cancer risk\naAdjusted for age and gender; OR, odds ratio; CI, confidence interval.\nTable 3 shows the subgroup analysis by gender, age and histological type for the MTHFR C677T polymorphism. When the MTHFR 677CC genotype was used as the reference group, the MTHFR 677 CT genotype was associated with a significantly reduced risk of squamous cell carcinoma (OR = 0.78; 95% CI = 0.64-0.96), and the combined variant genotypes (677 CT + TT) also showed a protective effect on the risk of squamous cell carcinoma (OR = 0.79; 95% CI = 0.65-0.95), whereas there was no significant association in other histological types of lung cancer. There was no heterogeneity among subgroups defined by gender (male, female), age (≤65, >65 years), smoking (never smoker, ever smoker) or histological type (adenocarcinoma, squamous cell carcinoma, small cell carcinoma, other types). Nor did we find evidence of an interaction between the MTHFR C677T polymorphism and age, gender or smoking habit.\nSubgroup analysis for the MTHFR C677T polymorphism\nSCLC, small cell lung cancer; SQC, squamous cell carcinoma; ADC, adenocarcinoma; others, large cell carcinoma and mixed types.\nORa: odds ratio adjusted for age and gender; CI, confidence interval;\nPb: p values for heterogeneity.", "The current study represents the largest single-population sample (3938 lung cancer patients and 1700 controls) reported to date for evaluating a possible association between the MTHFR C677T polymorphism and susceptibility to lung cancer. To our knowledge, this is also the first report to examine the association between the MTHFR C677T polymorphism and susceptibility to lung cancer in a Korean population. We found that the MTHFR 677 CT and TT genotypes showed weak protection against overall lung cancer, although the results were not statistically significant.
However, by histological subtype, we found significant protection by the MTHFR 677 CT genotype against squamous cell carcinoma risk.\nThe combination of the 677 TT homozygous and 677 CT heterozygous genotypes also appeared to have a protective effect on the risk of squamous cell carcinoma. We observed no significant interactions between the MTHFR C677T polymorphism and smoking, gender, or age.\nResults of several studies examining the role of the MTHFR C677T polymorphism in lung cancer susceptibility have been inconsistent. Liu et al. [14] and Jeng et al. [13] in Taiwan and Suzuki et al. [15] in Japan showed that the MTHFR 677 TT genotype was associated with a decreased risk of lung cancer. However, Siemianowicz et al. [11] in Poland, Hung et al. [9] in Central Europe, and Shen et al. [10] in China showed that individuals with the MTHFR TT genotype had an increased risk of lung cancer versus those with the wild-type homozygous variant, while a recent meta-analysis by Mao et al. [20] based on eight case-control studies found no evidence for a major role of the MTHFR C677T polymorphism in lung carcinogenesis. Small sample sizes and differences in ethnicity, diet, environment, and methodology might be responsible for the discrepancy.\nThe pathogenesis of adenocarcinoma is considered to be somewhat different from that of squamous cell carcinoma, and whether the effect of the MTHFR C677T polymorphism differs by lung cancer histology remains unclear. We performed a stratification analysis by histological type, which is lacking in most previous studies, and found that the MTHFR 677 CT genotype was associated with a significantly decreased risk of lung squamous cell carcinoma (OR = 0.78, 95% CI = 0.64-0.96), supporting a potential effect of the MTHFR C677T polymorphism on lung squamous cell carcinoma. In other histological types of lung cancer, such as adenocarcinoma and small cell lung cancer, we found no association between the MTHFR C677T genotype and lung cancer risk. A similar result was seen in a Japanese study in which no effect of the MTHFR C677T polymorphism on the risk of overall lung cancer was evident, but in a histology-based analysis the MTHFR 677T allele was associated with a reduced risk of squamous/small cell carcinoma [15]. In contrast, Siemianowicz et al. [11] reported that the 677TT genotype was associated with a significantly higher risk of non-small cell lung cancer.\nThe role of MTHFR polymorphisms in modulating cancer risk depends on folate status: the protective effect of the 677TT genotype seen under adequate folate conditions turns into an elevated risk of lung cancer among 677TT carriers with low folate intake. A recent meta-analysis by Boccia et al. [21], which included a stratified analysis according to dietary folate intake, showed an increased risk for individuals with low folate intake (OR = 1.28, 95% CI = 0.97-1.68 for lung) versus high folate intake (OR = 0.94, 95% CI = 0.79-1.12 for lung). Plasma folate levels may be relatively high among Korean adults; the median plasma folate was 22.7 nmol/L in our population-based controls, which is higher than that reported in a Chinese population [22] and in populations from 15 European countries (folate status ranged from 6.3 to 20.1 nmol/L) [23]. In addition, folate intake also appears fairly high in the Korean population; the average folate intake in Korea is about 347 μg/day [24]. This level is higher than the average intake in most European countries, except for the United Kingdom [23].
This might provide a partial explanation of why the MTHFR 677 mutation was found to protect against lung cancer, especially lung squamous cell carcinoma, in our study. Moreover, our recent study found a protective effect of the MTHFR 677 T allele on the risk of gastric and colorectal cancer in a Korean population [25].\nIn our study, there was no significant gender difference in the effect of the MTHFR C677T polymorphism on lung cancer risk. Our results seem to differ from those of the Shi et al. [26] study in Houston, TX, USA, which reported that the MTHFR 677TT genotype in women was associated with a decreased lung cancer risk compared with carriers of the MTHFR 677CC genotype.\nCigarette smoking is a known risk factor for lung cancer, and smokers may tend to have lower serum folate levels, producing a localized deficiency of folic acid. We further examined the effects of MTHFR C677T in subgroups according to smoking status and found no interaction between the MTHFR C677T polymorphism and smoking. Our results are somewhat similar to those of Vineis et al. [18], which showed that the MTHFR C677T polymorphism had no association in either smokers or nonsmokers. However, a beneficial effect of the MTHFR TT genotype on the risk of lung cancer has been observed in heavy smokers; Suzuki et al. [15] in Japan found that the MTHFR 677T allele was associated with a reduced risk of squamous/small cell carcinoma, especially among heavy smokers, and Liu et al. [14] in Taiwan observed that smokers carrying the MTHFR 677 T allele showed a significantly decreased risk of lung cancer.\nIt is well known that familial aggregation of lung cancer can increase the risk of lung cancer, and a high consumption of vegetables and fruits is associated with a reduced risk of lung cancer. However, we had no information on the accuracy of the reported family history of cancer or on dietary folate intake, and no detailed data on environmental tobacco exposure as a risk factor for lung cancer. Thus, we could not evaluate gene-environment interactions. Another limitation of the present study is that the case group was composed of lung cancer patients enrolled from a hospital, and thus may not be representative of the general population.", "Our present large case-control study in Korea found a protective effect of the MTHFR C677T variant genotype against lung squamous cell carcinoma and suggests that the MTHFR C677T polymorphism may be involved in the development of lung cancer in the Korean population.", "The authors declare that they have no competing interests.", "MHS planned the analysis. CLH participated in the study design and drafted the manuscript. HNK and HRS participated in the experiments. JMP performed the data analysis. YCK, IJO and KSK provided clinical material. SSK, JSC, and WJY participated in its design and coordination. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2350/12/28/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[]
EEG complexity as a biomarker for autism spectrum disorder risk.
21342500
Complex neurodevelopmental disorders may be characterized by subtle brain function signatures early in life before behavioral symptoms are apparent. Such endophenotypes may be measurable biomarkers for later cognitive impairments. The nonlinear complexity of electroencephalography (EEG) signals is believed to contain information about the architecture of the neural networks in the brain on many scales. Early detection of abnormalities in EEG signals may be an early biomarker for developmental cognitive disorders. The goal of this paper is to demonstrate that the modified multiscale entropy (mMSE) computed on the basis of resting state EEG data can be used as a biomarker of normal brain development and distinguish typically developing children from a group of infants at high risk for autism spectrum disorder (ASD), defined on the basis of an older sibling with ASD.
BACKGROUND
Using mMSE as a feature vector, a multiclass support vector machine algorithm was used to classify typically developing and high-risk groups. Classification was computed separately within each age group from 6 to 24 months.
METHODS
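As a rough illustration of the classification approach summarised above (mMSE values used as a feature vector for a support vector machine, evaluated within age groups), the sketch below classifies simulated control and high-risk feature vectors with cross-validation. The feature dimensions, kernel choice and scikit-learn settings are assumptions for illustration only, not the authors' pipeline.

    # Illustrative sketch only (not the authors' pipeline): classify control vs.
    # high-risk infants from mMSE feature vectors with an SVM and cross-validation.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(1)
    n_per_group = 20
    n_features = 64 * 20  # e.g. 64 channels x 20 entropy scales per infant

    X = np.vstack([
        rng.normal(1.2, 0.3, (n_per_group, n_features)),  # simulated control mMSE
        rng.normal(1.0, 0.3, (n_per_group, n_features)),  # simulated high-risk mMSE
    ])
    y = np.array([0] * n_per_group + [1] * n_per_group)   # 0 = control, 1 = HRA

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv)
    print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))

In a per-age-group analysis like the one described above, the same procedure would simply be repeated on the subset of recordings belonging to each age.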
Multiscale entropy appears to go through a different developmental trajectory in infants at high risk for autism (HRA) than it does in typically developing controls. Differences appear to be greatest at ages 9 to 12 months. Using several machine learning algorithms with mMSE as a feature vector, infants were classified with over 80% accuracy into control and HRA groups at age 9 months. Classification accuracy for boys was close to 100% at age 9 months and remains high (70% to 90%) at ages 12 and 18 months. For girls, classification accuracy was highest at age 6 months, but declines thereafter.
RESULTS
This proof-of-principle study suggests that mMSE computed from resting state EEG signals may be a useful biomarker for early detection of risk for ASD and abnormalities in cognitive development in infants. To our knowledge, this is the first demonstration of an information theoretic analysis of EEG data for biomarkers in infants at risk for a complex neurodevelopmental disorder.
CONCLUSIONS
[ "Biomarkers", "Brain", "Child Development Disorders, Pervasive", "Child, Preschool", "Electroencephalography", "Female", "Humans", "Infant", "Male", "Risk Assessment" ]
3050760
null
null
Methods
[SUBTITLE] Participants [SUBSECTION] Data were collected from 79 different infants: 46 who were at high risk for ASD (hereafter referred to as HRA), defined on the basis of having an older sibling with a confirmed diagnosis of ASD, and 33 controls, defined on the basis of a typically developing older sibling and no family history of neurodevelopmental disorders. Testing sessions included infants from ages 6 to 24 months, with some participants tested at more than one age. The study participants were part of an ongoing longitudinal study, and for this analysis visits were evaluated at regular intervals. However, at the time this study was done, most infants had been tested at only one or two visits. Data collected at each session were therefore treated as an independent data set. Thus, the data gathered from an infant who was tested during five different sessions, at ages 6, 9, 12, 18 and 24 months, were treated as unique data sets. Data were collected from a total of 143 sessions and from 79 different individuals. The distribution at different ages and risk groups is shown in Table 1. The number of infants who were tested at only one age at the time of this study is shown in Table 2, as well as the number of infants tested two, three, four and five times. Only one infant thus far has been tested at all five ages from 6 to 24 months. For the purposes of this study, all visits were treated as independent measurements. No comparison of different ages or of growth trajectories between individuals was done. Other characteristics recorded include height and head circumference as shown in Table 1. Distribution of participants by age and risk groupa aA total of 79 different infants (46 HRA and 33 CON) participated in this study. Some infants participated in multiple sessions at different ages, raising the total to 143 recording sessions. Also shown are measured demographic variables (age, height and head circumference) and mean multiscale entropy (mMSE) values over three regions: whole head, frontal and left frontal. Statistically significant differences between HRA and CON groups are highlighted in boldface. HRA, high risk for autism, CON, controls; SD, standard deviation. Distribution of participants with number of visits and/or measurements of the same child at different agesa aOverall, 79 infants participated in the study, and 143 measurement sessions were conducted. HRA, high risk for autism; CON, controls. The larger Infant Sibling Project study, from which data for this project were taken, was approved by the Committee on Clinical Investigations at Children's Hospital Boston (X06-08-0374) and the Boston University School of Medicine (H-29049). Parental written informed consent was obtained after the experimental procedures had been fully explained. Data were collected from 79 different infants: 46 who were at high risk for ASD (hereafter referred to as HRA), defined on the basis of having an older sibling with a confirmed diagnosis of ASD, and 33 controls, defined on the basis of a typically developing older sibling and no family history of neurodevelopmental disorders. Testing sessions included infants from ages 6 to 24 months, with some participants tested at more than one age. The study participants were part of an ongoing longitudinal study, and for this analysis visits were evaluated at regular intervals. However, at the time this study was done, most infants had been tested at only one or two visits. Data collected at each session were therefore treated as an independent data set. 
[SUBTITLE] EEG data collection [SUBSECTION] Infants were seated on their mothers' laps in a dimly lit room while a research assistant engaged their attention by blowing bubbles; this limited the head movement that would otherwise interfere with the recording. Continuous EEG was recorded with a 64-channel Sensor Net System (EGI, Inc., Eugene, OR, USA). The sensor net comprises an elastic tension structure forming a geodesic tessellation of the head surface, with carbon fiber electrodes embedded in pedestal sponges; each vertex houses an Ag/AgCl-coated, carbon-filled plastic electrode and a sponge containing saline electrolyte. Before the net is fitted over the scalp, the sponges are soaked in electrolyte solution (6 mL of KCl per 1 L of distilled water) to facilitate electrical contact between the scalp and the electrode. For the safety and comfort of the infant, the salinity of the solution matches that of tears, so contact with the eyes causes no damage or discomfort.

Prior to recording, channel gains and zeros were measured to provide an accurate scaling factor for the display of waveform data. The infant's head was measured and marked with a washable wax pencil to ensure accurate placement of the net, which was then placed over the scalp. Scalp impedances were checked online using NetStation (EGI, Inc.), the recording software for this system. EEG data were collected and recorded online using NetAmps amplifiers (EGI, Inc.) and NetStation software; the data were amplified, band-pass filtered at 0.1 to 100.0 Hz, sampled at 250 Hz and digitized with a 12-bit National Instruments board (National Instruments Corp., Woburn, MA, USA). Typically, 2 minutes of baseline activity were recorded, although recordings were shorter when an infant would not tolerate more. For this study, continuous 20-second segments were selected from the processed resting-state data and used to compute multiscale entropy values.
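As a concrete illustration, selecting a 20-second resting-state segment at the 250 Hz sampling rate amounts to taking 5,000 consecutive samples per channel. A minimal Python sketch is given below; the array layout and the start offset are assumptions for illustration, not part of the original acquisition pipeline.

```python
import numpy as np

FS = 250               # sampling rate in Hz, as recorded
SEGMENT_SECONDS = 20   # segment length used for the entropy analysis

def extract_segment(eeg, start_sample=0):
    """Return one resting-state segment.

    eeg is assumed to be a (64, n_samples) array of processed EEG;
    the result has shape (64, 5000) for a 20 s segment at 250 Hz.
    """
    n = FS * SEGMENT_SECONDS
    return eeg[:, start_sample:start_sample + n]

# Example with synthetic data standing in for one recording session.
recording = np.random.randn(64, 30000)   # roughly 2 minutes of 64-channel EEG
segment = extract_segment(recording)
```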
[SUBTITLE] Modified Multiscale Sample Entropy [SUBSECTION] A multiscale method for computing the entropy of biological signals was developed by Costa et al. [23]. The approach computes the sample entropy of the original time series (or "signal") and of coarse-grained series derived from it. Because biological systems must adapt across multiple time scales, measurements of biological signals are likely to carry information across multiple scales, and a multiscale estimate of the information content of EEG signals may reveal more than the entropy of the original signal alone.

Multiple time scales are produced from the original signal by a coarse-graining procedure. The scale 1 series is the original time series; the scale 2 series is obtained by averaging successive non-overlapping pairs of values, the scale 3 series by averaging successive triplets, and so on, as shown in equation (1):

(1)
$$
\begin{aligned}
s_1 &: x_1,\; x_2,\; x_3,\; \ldots,\; x_N \\
s_2 &: (x_1 + x_2)/2,\; (x_3 + x_4)/2,\; \ldots,\; (x_{N-1} + x_N)/2 \\
&\;\;\vdots \\
s_{20} &: (x_1 + \cdots + x_{20})/20,\; \ldots,\; (x_{N-19} + \cdots + x_N)/20
\end{aligned}
$$

Coarse-grained series up to scale 20 are computed for each of the 64 EEG channels.
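A minimal Python sketch of this coarse-graining step is shown below; the original analysis code is not published, so the NumPy layout here is only illustrative.

```python
import numpy as np

def coarse_grain(x, scale):
    """Costa-style coarse-graining: average non-overlapping windows of length `scale`."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale                      # number of complete windows
    return x[:n * scale].reshape(n, scale).mean(axis=1)

# Build the scale 1..20 series for one (synthetic) EEG channel of 5,000 samples.
channel = np.random.randn(5000)
coarse_series = {s: coarse_grain(channel, s) for s in range(1, 21)}
```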
The modified sample entropy (mSE) defined by Xie et al. [41] was used to compute the entropy of each coarse-grained series. The mSE algorithm compares vector similarity with a sigmoidal function rather than the strict Heaviside cutoff used in the sample entropy applied to biological and ECG signals by Costa et al. [23,31]; in practice, this makes the entropy values more robust to noise and more consistent for short time series. In brief, the similarity functions $A_r^m$ and $B_r^m$ defined by equations (7) and (9) of Xie et al. [41] are computed with m = 2 and r = 0.15 for each coarse-grained series defined in equation (1). The modified multiscale entropy (mMSE) is then the series of mSE values at each of the coarse-grained scales from 1 to 20; for scale s and a finite-length series it is approximated as

(2)
$$
\mathrm{mMSE}(s, m, r) = -\ln\!\left(\frac{A_r^m(s)}{B_r^m(s)}\right).
$$

The multiscale entropy of several linear, stochastic and nonlinear time series is shown in Figure 1, together with a representative mMSE curve for an EEG signal from the data used in this study. Purely random white noise and the completely deterministic logistic equation have similar mMSE curves and are visually indistinguishable; as discussed by Costa et al. [23], both are quite distinct from normal physiological signals. The EEG signal is the only series in Figure 1 whose mMSE increases with scale, indicating longer-range correlations in time. Decreasing entropy generally indicates that a signal contains information only on the smallest time scales. If the entropy values of one time series are higher than those of another across all scales, the former is considered the more complex; and although the mean mMSE value can be used to compare the overall complexity of physiological signals, the shape of the curve itself may also be important for distinguishing two signals.

Figure 1. Characteristics of five different time series. Column 1 shows the time series amplitudes. Column 2 shows the multiscale entropy, with the horizontal axis giving the coarse-grained scale from 1 to 20. Column 3 shows the multiscale time asymmetry; the value of a in the lower right corner of each time asymmetry plot is the time asymmetry index summed over scales 1 to 5. A nonzero time asymmetry value is a sufficient condition for nonlinearity of a time series.
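For readers who want to experiment, a self-contained sketch of a sigmoid-based sample entropy and the resulting mMSE curve is given below. It does not reproduce the exact similarity functions of Xie et al. [41]; the sigmoid is a common stand-in, r is taken as a fraction of the series' standard deviation, and the memory-hungry pairwise distance computation is purely didactic.

```python
import numpy as np

def coarse_grain(x, scale):
    # Same non-overlapping averaging as in the previous sketch.
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def modified_sample_entropy(x, m=2, r=0.15):
    """Sample entropy with a sigmoidal similarity in place of the Heaviside cutoff.

    The sigmoid below is an illustrative choice standing in for equations (7)
    and (9) of Xie et al.; it grades near-matches instead of counting them
    with a hard threshold.
    """
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def mean_similarity(dim):
        # Embed the series into overlapping vectors of length `dim`.
        vecs = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        # Chebyshev distance between every pair of vectors (O(n^2) memory).
        d = np.abs(vecs[:, None, :] - vecs[None, :, :]).max(axis=2)
        z = np.clip((d - 0.5 * tol) / tol, -50.0, 50.0)
        sim = 1.0 / (1.0 + np.exp(z))            # graded "match" in [0, 1]
        n = len(vecs)
        return (sim.sum() - np.trace(sim)) / (n * (n - 1))   # exclude self-matches

    b = mean_similarity(m)       # analogue of B_r^m
    a = mean_similarity(m + 1)   # analogue of A_r^m
    return -np.log(a / b)

def mmse_curve(x, max_scale=20, m=2, r=0.15):
    """Modified multiscale entropy: mSE of each coarse-grained series, scales 1..20."""
    return [modified_sample_entropy(coarse_grain(x, s), m, r)
            for s in range(1, max_scale + 1)]
```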
[SUBTITLE] Time asymmetry and nonlinearity [SUBSECTION] The time irreversibility index (trev) was computed for different resolutions of the EEG time series using the algorithm of Costa et al. [31]. The third column of Figure 1 shows trev values for several linear and nonlinear time series. Of particular note, only the sine wave and the two random series have irreversibility indices near zero, whereas the indices of the nonlinear logistic series and of the representative EEG signal are nonzero on all scales shown.

After computing multiple resolutions of the EEG time series as described above, the time irreversibility of each resolution was estimated by noting that a symmetric (time-reversible) series has approximately as many increments as decrements: the number of times x_{i+1} - x_i > 0 is approximately equal to the number of times x_{i+1} - x_i < 0. An estimate of the series' symmetry (or reversibility) was therefore obtained by summing increments and decrements and dividing by the length of the series; a reversible time series yields a value of zero. For a series of 5,000 points, as used in this study, trev > 0.1 is a significant indicator of irreversibility and hence of nonlinearity [42]. This measure is used here only to show that the EEG time series contain nonlinear information that linear analysis methods do not exploit, which suggests that the mMSE may carry more diagnostically useful information than power spectral analysis alone.
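A literal Python rendering of this estimate is sketched below; it follows the sign-counting description above, and the exact estimator of Costa et al. [31] differs in detail.

```python
import numpy as np

def coarse_grain(x, scale):
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def time_asymmetry_index(x):
    """Net fraction of increments versus decrements; ~0 for a time-reversible series."""
    d = np.diff(x)
    return (np.sum(d > 0) - np.sum(d < 0)) / len(x)

def multiscale_trev(x, max_scale=20):
    """trev at each coarse-grained resolution, as plotted in column 3 of Figure 1."""
    return [time_asymmetry_index(coarse_grain(x, s)) for s in range(1, max_scale + 1)]

def trev_summary(x):
    # The summary value reported in Figure 1 sums the index over scales 1 to 5.
    return sum(multiscale_trev(x, max_scale=5))
```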
[SUBTITLE] Classification and endophenotypes [SUBSECTION] The Orange machine learning software package (orange.biolab.si/) was used for the classification calculations [43]. Several learning algorithms were compared (support vector machine, k-nearest neighbors and naïve Bayes) to exclude possible overfitting by any single method, and the significance of the classification results for each method was estimated empirically with the permutation approach described by Golland and Fischl [44].

To keep the feature set small while still capturing the overall shape of the mMSE curve, the low, high and mean values of each curve were extracted for each of the 64 channels, giving a feature set of 192 values; each sample from the population is represented by these 192 values. Although some data points came from the same infant at different ages, the analysis should be considered cross-sectional, in that no relationship between data from two different ages was used for classification: the infants in the 6-month EEG data set were treated as independent of those studied at 9 months, 12 months and so on.
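The study used the Orange toolkit; a rough scikit-learn equivalent of the feature extraction and a permutation-based significance check (standing in for the Golland and Fischl procedure) might look as follows. The data, classifier settings and permutation count here are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, permutation_test_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

def mmse_features(mmse_curves):
    """Reduce mMSE curves to low/high/mean per channel.

    mmse_curves: array of shape (n_sessions, 64, 20) -> features of shape (n_sessions, 192).
    """
    lo = mmse_curves.min(axis=2)
    hi = mmse_curves.max(axis=2)
    mean = mmse_curves.mean(axis=2)
    return np.concatenate([lo, hi, mean], axis=1)

# Synthetic stand-in data: one row per recording session, y = 0 (CON) or 1 (HRA).
rng = np.random.default_rng(0)
X = mmse_features(rng.random((40, 64, 20)))
y = np.tile([0, 1], 20)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, clf in [("SVM", SVC(kernel="linear")),
                  ("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("naive Bayes", GaussianNB())]:
    score, _, p_value = permutation_test_score(
        clf, X, y, cv=cv, n_permutations=200, scoring="accuracy")
    print(f"{name}: accuracy={score:.2f}, permutation p={p_value:.3f}")
```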
[ "Background", "Participants", "EEG data collection", "Modified Multiscale Sample Entropy", "Time asymmetry and nonlinearity", "Classification and endophenotypes", "Results", "Machine learning classification of risk", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The human brain exhibits a remarkable network organization. Although sparsely connected, each neuron is within a few synaptic connections of any other neuron [1]. This remarkable connectivity is achieved by a kind of hierarchical organization that is not fully understood in the brain, but is ubiquitous in nature and is called a scale-free network [2-4] that changes with development. Complex networks are characterized by dense local connectivity and sparser long-range connectivity [2] that are fractal or self-similar at all scales. Modules or clusters can be identified on multiple scales. A comparison of network properties using functional magnetic resonance imaging showed that children and young adults' brains had similar \"small-world\" or scale-free organization at the global level, but differed significantly in hierarchical organization and interregional connectivity [5]. White matter fiber tracking has revealed that brain development in children involves changes in both short-range and long-range wiring, with synaptogenesis and pruning occurring at both the local (neuronal) level and the systems level [5]. Abnormal network connectivity may be a key to understanding developmental disabilities.\nAutism is a complex and heterogeneous developmental disorder that affects the developmental trajectory in several key behavioral domains, including social, cognitive and language abilities. The underlying brain dysfunction that results in the behavioral characteristics is not well understood. Complex mental disorders such as autism cannot easily be described as being associated with underconnectivity or overconnectivity, but may involve some form of abnormal connectivity that varies between different brain regions [6]. Normal and abnormal connectivity may also change during development, so that, for example, a condition may not exist at age 3 months but may emerge by age 24 months. A key to understanding neurodevelopmental disorders is the relationship between functional brain connectivity and cognitive development [7]. Measuring functional brain development is difficult both because the brain is a complex, hierarchical system and because few methods are available for noninvasive measurements of brain function in infants. New nonlinear methods for analyzing brain electrical activity measured using scalp electrodes may enable differences in infant brain connectivity to be detected. For example, coarse-grained entropy synchronization between electroencephalography (EEG) electrodes revealed that synchronization was significantly lower in children with autism than in a group of typically developing children [8], supporting the theory that autistic brains exhibit low functional connectivity. In the autistic brain, high local connectivity and low long-range connectivity may develop concurrently because of problems with synapse pruning or formation [9,10]. Estimation of changes in neural connectivity might be an effective diagnostic marker for atypical connectivity development.\nEEG signals are believed to derive from pyramidal cells aligned in parallel in the cerebral cortex and the hippocampus [11], which act as many interacting nonlinear oscillators [12]. As a consequence of the scale-free network organization of neurons, EEG signals carry nonlinear, complex system information reflecting the underlying network topology, including transient synchronization between frequencies, short- and long-range correlations and cross-modulation of amplitudes and frequencies [13]. 
The mathematical relationship between network structure and time series is a subject of current research and may eventually shed further light on the relationship between neural networks and EEG signals.\nA great deal of information about interrelationships in the nervous system likely remains undiscovered because the linear analysis techniques currently in use fail even to detect them [14]. If brain function and behavior are mirrors of each other as is commonly accepted [15-18], then biomarkers of complex developmental disorders may be hidden in complex, nonlinear patterns of EEG data. The dynamics of the brain are inherently nonlinear, exhibiting emergent dynamics such as chaotic and transiently synchronized behavior that may be central to understanding the mind-brain relationship [19] or the \"dynamic core\" [20]. Methods for chaotic signal analysis originally arose from a need to rigorously describe physical phenomena that exhibited what was formerly thought to be purely stochastic behavior, but was then discovered to represent complex, aperiodic yet organized behavior, referred to as self-organized dynamics [21]. The analysis of signal complexity on multiple scales may reveal information about neural connectivity that is diagnostically useful [1,19,22].\nOne interpretation of biological complexity is that it reflects a system's ability to adapt quickly and function in a changing environment [23]. The complexity of EEG signals was found in one study to be associated with the ability to attend to a task and adapt to new cognitive tasks; a significant difference in complexity was found between controls and patients diagnosed with schizophrenia [24]. Patients with schizophrenia were found to have lower complexity than controls in some EEG channels and significantly higher interhemispheric and intrahemispheric cross-mutual information values than controls [25]. A study of the correlation dimension (another measure of signal complexity) of EEG signals in healthy individuals showed an increase with aging, interpreted as an increase in the number of independent synchronous networks in the brain [22].\nSeveral different methods for computing complex or nonlinear time series features have been defined and used successfully to analyze biological signals [26,27]. Sample entropy, a measure of time series complexity, was significantly higher in certain regions of the right hemisphere in preterm neonates who received skin-to-skin contact than in those who did not, indicating faster brain maturation [28]. Sample entropy has also been used as a marker of brain maturation in neonates [29] and was found to increase prenatally until maturation at about 42 weeks, then decreased after newborns reached full term [30].\nLiving systems exhibit a fundamental propensity to move forward in time. This property also describes physical systems that are far from an equilibrium state. For example, heat moves in only one direction, from hot to cold areas. In thermodynamics, this property is related to the requirement that all systems must move in the direction of higher entropy. Time irreversibility is a common characteristic of living biosignals. It was found to be a characteristic of healthy human heart electrocardiographic (ECG) recordings and was shown to be a reliable way to distinguish between actual ECG recordings and model ECG simulations [31]. ECG signals from patients with congestive heart disease were found to have lower time irreversibility indices than healthy patients [32]. 
Interestingly, the time irreversibility of EEG signals has been associated with epileptic regions of the brain, and this measure has been proposed as a biomarker for seizure foci [33]. Time irreversibility may be used as a practical test for nonlinearity in a time series.\nThis study is a preliminary investigation of the difference in multiscale entropy between two groups of infants between 6 and 24 months of age. The groups include typically developing infants and infants who have an older sibling with a confirmed diagnosis of autism spectrum disorder (ASD) and who are thus at higher risk for developing autism. ASD is a developmental disorder in which symptoms emerge during the second year of life. Behavioral indicators are not evident at 6 months of age [34-36]; however, on the basis of the use of a novel observational scale to assess ASD characteristics in infants, distinguishing characteristics were seen at 12 months [35]. Another study compared behavioral measures such as frequency of gaze at faces and shared smiles in infants. Again, group differences between those who later developed an ASD and typically developing controls were apparent at age 12 months, but not at age 6 months [34]. Only one study has investigated behavioral differences at age 9 months: infants at risk for ASD showed distinct differences in visual orientation from those with no family history of autism [37]. These behavioral observations suggest that important developmental differences are occurring in the brains of typically developing infants and those who will later develop an ASD. Although there have been no other published studies on brain development during the first year of life, one of the most replicated findings, based on a retrospective review of medical records, is accelerated growth in head circumference (a valid and reliable proxy for brain growth), which begins at around 6 to 9 months of age [38-40]. If multiscale entropy is a measure of functional brain complexity, then it may be a useful marker for distinguishing differences in brain activity between at-risk and typical infants.", "Data were collected from 79 different infants: 46 who were at high risk for ASD (hereafter referred to as HRA), defined on the basis of having an older sibling with a confirmed diagnosis of ASD, and 33 controls, defined on the basis of a typically developing older sibling and no family history of neurodevelopmental disorders. Testing sessions included infants from ages 6 to 24 months, with some participants tested at more than one age. The study participants were part of an ongoing longitudinal study, and for this analysis visits were evaluated at regular intervals. However, at the time this study was done, most infants had been tested at only one or two visits. Data collected at each session were therefore treated as an independent data set. Thus, the data gathered from an infant who was tested during five different sessions, at ages 6, 9, 12, 18 and 24 months, were treated as unique data sets. Data were collected from a total of 143 sessions and from 79 different individuals. The distribution at different ages and risk groups is shown in Table 1. The number of infants who were tested at only one age at the time of this study is shown in Table 2, as well as the number of infants tested two, three, four and five times. Only one infant thus far has been tested at all five ages from 6 to 24 months. For the purposes of this study, all visits were treated as independent measurements. 
No comparison of different ages or of growth trajectories between individuals was done. Other characteristics recorded include height and head circumference as shown in Table 1.\nDistribution of participants by age and risk groupa\naA total of 79 different infants (46 HRA and 33 CON) participated in this study. Some infants participated in multiple sessions at different ages, raising the total to 143 recording sessions. Also shown are measured demographic variables (age, height and head circumference) and mean multiscale entropy (mMSE) values over three regions: whole head, frontal and left frontal. Statistically significant differences between HRA and CON groups are highlighted in boldface. HRA, high risk for autism, CON, controls; SD, standard deviation.\nDistribution of participants with number of visits and/or measurements of the same child at different agesa\naOverall, 79 infants participated in the study, and 143 measurement sessions were conducted. HRA, high risk for autism; CON, controls.\nThe larger Infant Sibling Project study, from which data for this project were taken, was approved by the Committee on Clinical Investigations at Children's Hospital Boston (X06-08-0374) and the Boston University School of Medicine (H-29049). Parental written informed consent was obtained after the experimental procedures had been fully explained.", "Infants were seated on their mothers' laps in a dimly lit room while a research assistant engaged the infants' attention by blowing bubbles. This procedure was followed to limit the amount of head movement by the infant that would interfere with the recording process. Continuous EEG recordings were taken with a 64-channel Sensor Net System (EGI, Inc., Eugene, OR, USA). This sensor net device comprises an elastic tension structure forming a geodesic tessellation of the head surface and containing carbon fiber electrodes embedded in pedestal sponges. At each vertex is a sensor pedestal housing an Ag/AgCl-coated, carbon-filled plastic electrode and a sponge containing a saline electrolyte solution. Prior to fitting the sensor net over the scalp, the sponges are soaked in electrolyte solution (6 mL of KCl per 1 L of distilled water) to facilitate electrical contact between the scalp and the relevant electrode. To ensure the safety and comfort of the infant, the salinity of the electrolyte solution is the same as tears. In the event that the solution comes into contact with the eyes, no damage or discomfort to the infant will occur.\nPrior to recording, measurements of channel gains and zeros were taken to provide an accurate scaling factor for the display of waveform data. The baby's head was measured and marked with a washable wax pencil to ensure accurate placement of the net, which was then placed over the scalp. Scalp impedances were checked online using NetStation (EGI, Inc.), the recording software package that runs this system. EEG data were collected and recorded online using NetAmps Amplifiers (EGI, Inc.) and NetStation software. The data were amplified, band-pass filtered at 0.1 to 100.0 Hz and sampled at a frequency of 250 Hz. They were digitized with a 12-bit National Instruments Board (National Instruments Corp., Woburn, MA, USA). Typically, 2 minutes of baseline activity were recorded, but depending on the willingness of the infant, recorded periods may have been shorter. 
For this study, continuous sample segments of 20 seconds were selected from the processed resting state data and used to compute multiscale entropy values.\n[SUBTITLE] Modified Multiscale Sample Entropy [SUBSECTION] A multiscale method for computing the entropy of biological signals was developed by Costa et al. [23]. This approach computes the sample entropy on the original time series (or \"signal\") and on coarse-scaled series that are derived from the original signal. Because biological systems must be adaptable across multiple time scales, measurements of biological signals are likely to carry information across multiple scales. A multiscale estimation of the information content of EEG signals may reveal more information than the entropy of only the original signal.\nMultiple scale time series are produced from the original signal using a coarse-graining procedure. The scale 1 series is the original time series. The scale 2 time series was obtained by averaging two successive values from the original series. Scale 3 was obtained by averaging every three original values and so on as shown in equation (1):\n\n\n(1)\n\n\n\n\n\ns\n1\n\n:\n\nx\n1\n\n,\n\nx\n2\n\n,\n\nx\n3\n\n…\n\nx\nN\n\n\n\n\n\n\ns\n2\n\n:\n\n(\n\n\nx\n1\n\n+\n\nx\n2\n\n\n)\n\n/\n2\n,\n\n(\n\n\nx\n3\n\n+\n\nx\n4\n\n\n)\n\n/\n2\n,\n…\n,\n\n(\n\n\nx\n\nN\n−\n1\n\n\n+\n\nx\nN\n\n\n)\n\n/\n2\n\n\n\n\n⋮\n\n\n\n\n\ns\n\n20\n\n\n:\n\n(\n\n\nx\n1\n\n+\n⋯\n+\n\nx\n\n20\n\n\n\n)\n\n/\n20\n,\n…\n,\n\n(\n\n\nx\n\nN\n−\n20\n\n\n+\n⋯\n+\n\nx\nN\n\n\n)\n\n/\n20\n\n\n\n\n\n\nCoarse-grained series up to scale 20 are computed for each of the 64 EEG channels. The modified sample entropy (mSE) defined by Xie et al. [41] was used to compute the entropy of each coarse-grained time series. The mSE algorithm uses a sigmoidal function to compare vector similarity rather than a Heaviside function with a strict cutoff as with the sample entropy used for analysis of biological and ECG signals by Costa et al. [23,31]. The practical effect of using the mSE is that the computed entropy values are more robust to noise and the results are more consistent with short time series. In brief, the similarity functions Arm and Brm defined by equations (7) and (9) in the paper by Xie et al. [41] are computed with m = 2 and r = 0.15 for each coarse-grained time series defined in equation (1). The modified multiscale entropy (mMSE) is defined as the series of mSE values at each of the coarse-grained scales from 1 to 20. The mMSE for scale s with a finite length time series is then approximated by calculating the following:\n\n\n(2)\n\n\nm\nM\nS\nE\n\n(\n\ns\n,\nm\n,\nr\n\n)\n\n=\n−\nln\n⁡\n\n(\n\n\n\n\nA\nr\nm\n\n\n(\ns\n)\n\n\n\n\nB\nr\nm\n\n\n(\ns\n)\n\n\n\n\n)\n\n.\n\n\n\n\nThe multiscale entropy for several linear, stochastic and nonlinear time series is shown in Figure 1, along with representative mMSE for EEG signals from the EEG data used in this study. The purely random white noise and the completely deterministic logistic equation have similar mMSE curves and visually appear indistinguishable. As discussed by Costa et al. [23], these are quite distinct from normal physiological signals. The EEG signal is the only one of the series in Figure 1 that has an mMSE that increases with scale, indicating longer-range correlations in time. Decreasing entropy in general indicates that a signal contains information only on the smallest time scales. 
If entropy values across all scales for one time series are higher than for another, then the former is considered to be more complex than the latter. Although the mean mMSE value can be computed and used for comparing the overall complexity of physiological signals, the shape of the curve itself may be important for distinguishing two signals.\nCharacteristics of five different time series are shown. Column 1 shows the time series amplitudes. Column 2 represents the multiscale entropy, where the horizontal axis is the coarse-grained scale from 1 to 20. Column 3 is the multiscale time asymmetry value. The value of a in the lower right corner of the time asymmetry plot is the value of the time asymmetry index summed over scales 1 to 5. A nonzero time asymmetry value is a sufficient condition for nonlinearity of a time series.\nA multiscale method for computing the entropy of biological signals was developed by Costa et al. [23]. This approach computes the sample entropy on the original time series (or \"signal\") and on coarse-scaled series that are derived from the original signal. Because biological systems must be adaptable across multiple time scales, measurements of biological signals are likely to carry information across multiple scales. A multiscale estimation of the information content of EEG signals may reveal more information than the entropy of only the original signal.\nMultiple scale time series are produced from the original signal using a coarse-graining procedure. The scale 1 series is the original time series. The scale 2 time series was obtained by averaging two successive values from the original series. Scale 3 was obtained by averaging every three original values and so on as shown in equation (1):\n\n\n(1)\n\n\n\n\n\ns\n1\n\n:\n\nx\n1\n\n,\n\nx\n2\n\n,\n\nx\n3\n\n…\n\nx\nN\n\n\n\n\n\n\ns\n2\n\n:\n\n(\n\n\nx\n1\n\n+\n\nx\n2\n\n\n)\n\n/\n2\n,\n\n(\n\n\nx\n3\n\n+\n\nx\n4\n\n\n)\n\n/\n2\n,\n…\n,\n\n(\n\n\nx\n\nN\n−\n1\n\n\n+\n\nx\nN\n\n\n)\n\n/\n2\n\n\n\n\n⋮\n\n\n\n\n\ns\n\n20\n\n\n:\n\n(\n\n\nx\n1\n\n+\n⋯\n+\n\nx\n\n20\n\n\n\n)\n\n/\n20\n,\n…\n,\n\n(\n\n\nx\n\nN\n−\n20\n\n\n+\n⋯\n+\n\nx\nN\n\n\n)\n\n/\n20\n\n\n\n\n\n\nCoarse-grained series up to scale 20 are computed for each of the 64 EEG channels. The modified sample entropy (mSE) defined by Xie et al. [41] was used to compute the entropy of each coarse-grained time series. The mSE algorithm uses a sigmoidal function to compare vector similarity rather than a Heaviside function with a strict cutoff as with the sample entropy used for analysis of biological and ECG signals by Costa et al. [23,31]. The practical effect of using the mSE is that the computed entropy values are more robust to noise and the results are more consistent with short time series. In brief, the similarity functions Arm and Brm defined by equations (7) and (9) in the paper by Xie et al. [41] are computed with m = 2 and r = 0.15 for each coarse-grained time series defined in equation (1). The modified multiscale entropy (mMSE) is defined as the series of mSE values at each of the coarse-grained scales from 1 to 20. 
The mMSE for scale s with a finite length time series is then approximated by calculating the following:\n\n\n(2)\n\n\nm\nM\nS\nE\n\n(\n\ns\n,\nm\n,\nr\n\n)\n\n=\n−\nln\n⁡\n\n(\n\n\n\n\nA\nr\nm\n\n\n(\ns\n)\n\n\n\n\nB\nr\nm\n\n\n(\ns\n)\n\n\n\n\n)\n\n.\n\n\n\n\nThe multiscale entropy for several linear, stochastic and nonlinear time series is shown in Figure 1, along with representative mMSE for EEG signals from the EEG data used in this study. The purely random white noise and the completely deterministic logistic equation have similar mMSE curves and visually appear indistinguishable. As discussed by Costa et al. [23], these are quite distinct from normal physiological signals. The EEG signal is the only one of the series in Figure 1 that has an mMSE that increases with scale, indicating longer-range correlations in time. Decreasing entropy in general indicates that a signal contains information only on the smallest time scales. If entropy values across all scales for one time series are higher than for another, then the former is considered to be more complex than the latter. Although the mean mMSE value can be computed and used for comparing the overall complexity of physiological signals, the shape of the curve itself may be important for distinguishing two signals.\nCharacteristics of five different time series are shown. Column 1 shows the time series amplitudes. Column 2 represents the multiscale entropy, where the horizontal axis is the coarse-grained scale from 1 to 20. Column 3 is the multiscale time asymmetry value. The value of a in the lower right corner of the time asymmetry plot is the value of the time asymmetry index summed over scales 1 to 5. A nonzero time asymmetry value is a sufficient condition for nonlinearity of a time series.", "A multiscale method for computing the entropy of biological signals was developed by Costa et al. [23]. This approach computes the sample entropy on the original time series (or \"signal\") and on coarse-scaled series that are derived from the original signal. Because biological systems must be adaptable across multiple time scales, measurements of biological signals are likely to carry information across multiple scales. A multiscale estimation of the information content of EEG signals may reveal more information than the entropy of only the original signal.\nMultiple scale time series are produced from the original signal using a coarse-graining procedure. The scale 1 series is the original time series. The scale 2 time series was obtained by averaging two successive values from the original series. Scale 3 was obtained by averaging every three original values and so on as shown in equation (1):\n\n\n(1)\n\n\n\n\n\ns\n1\n\n:\n\nx\n1\n\n,\n\nx\n2\n\n,\n\nx\n3\n\n…\n\nx\nN\n\n\n\n\n\n\ns\n2\n\n:\n\n(\n\n\nx\n1\n\n+\n\nx\n2\n\n\n)\n\n/\n2\n,\n\n(\n\n\nx\n3\n\n+\n\nx\n4\n\n\n)\n\n/\n2\n,\n…\n,\n\n(\n\n\nx\n\nN\n−\n1\n\n\n+\n\nx\nN\n\n\n)\n\n/\n2\n\n\n\n\n⋮\n\n\n\n\n\ns\n\n20\n\n\n:\n\n(\n\n\nx\n1\n\n+\n⋯\n+\n\nx\n\n20\n\n\n\n)\n\n/\n20\n,\n…\n,\n\n(\n\n\nx\n\nN\n−\n20\n\n\n+\n⋯\n+\n\nx\nN\n\n\n)\n\n/\n20\n\n\n\n\n\n\nCoarse-grained series up to scale 20 are computed for each of the 64 EEG channels. The modified sample entropy (mSE) defined by Xie et al. [41] was used to compute the entropy of each coarse-grained time series. The mSE algorithm uses a sigmoidal function to compare vector similarity rather than a Heaviside function with a strict cutoff as with the sample entropy used for analysis of biological and ECG signals by Costa et al. 
[23,31]. The practical effect of using the mSE is that the computed entropy values are more robust to noise and the results are more consistent with short time series. In brief, the similarity functions Arm and Brm defined by equations (7) and (9) in the paper by Xie et al. [41] are computed with m = 2 and r = 0.15 for each coarse-grained time series defined in equation (1). The modified multiscale entropy (mMSE) is defined as the series of mSE values at each of the coarse-grained scales from 1 to 20. The mMSE for scale s with a finite length time series is then approximated by calculating the following:\n\n\n(2)\n\n\nm\nM\nS\nE\n\n(\n\ns\n,\nm\n,\nr\n\n)\n\n=\n−\nln\n⁡\n\n(\n\n\n\n\nA\nr\nm\n\n\n(\ns\n)\n\n\n\n\nB\nr\nm\n\n\n(\ns\n)\n\n\n\n\n)\n\n.\n\n\n\n\nThe multiscale entropy for several linear, stochastic and nonlinear time series is shown in Figure 1, along with representative mMSE for EEG signals from the EEG data used in this study. The purely random white noise and the completely deterministic logistic equation have similar mMSE curves and visually appear indistinguishable. As discussed by Costa et al. [23], these are quite distinct from normal physiological signals. The EEG signal is the only one of the series in Figure 1 that has an mMSE that increases with scale, indicating longer-range correlations in time. Decreasing entropy in general indicates that a signal contains information only on the smallest time scales. If entropy values across all scales for one time series are higher than for another, then the former is considered to be more complex than the latter. Although the mean mMSE value can be computed and used for comparing the overall complexity of physiological signals, the shape of the curve itself may be important for distinguishing two signals.\nCharacteristics of five different time series are shown. Column 1 shows the time series amplitudes. Column 2 represents the multiscale entropy, where the horizontal axis is the coarse-grained scale from 1 to 20. Column 3 is the multiscale time asymmetry value. The value of a in the lower right corner of the time asymmetry plot is the value of the time asymmetry index summed over scales 1 to 5. A nonzero time asymmetry value is a sufficient condition for nonlinearity of a time series.", "The time irreversibility index (trev) was computed for different resolutions of the EEG time series using the algorithm of Costa et al. [31]. The third column of Figure 1 shows trev values for several different linear and nonlinear time series. Of particular note is that only the sine wave time series and both random time series have nearly zero irreversibility indices, while the index for the nonlinear logistic series and the representative EEG signal are both nonzero on all scales shown.\nAfter computing multiple resolutions of the EEG time series as described above, an estimate of the time irreversibility for each resolution was computed by noting that a symmetric function or time series will have the same number of increments as decrements. That is, the number of times |xi+1 - xi| > 0 will be approximately the same as the number of times |xi+1 - xi| < 0. Thus, an estimate of the time series symmetry (or reversibility) was found by summing increments and decrements and dividing by the length of the series. A reversible time series will have a value of zero. For a series of 5,000 points, as used in this study, trev > 0.1 is a significant indicator of irreversibility and thus of nonlinearity [42]. 
Results

The multiscale entropy and time irreversibility characteristics of five different time series are shown in Figure 1. The example time series amplitudes are shown in the first column. The second column displays plots of the multiscale entropy, where the horizontal axis is the coarse-grained scale from 1 to 20. White noise shows a characteristic decline in entropy with temporal scale, indicating loss of correlation between longer time intervals. Note that the deterministic but chaotic logistic equation has an entropy profile similar to white noise, suggesting that signal characteristics that appear as noise may in fact contain significant dynamic information about the system. The physiological (EEG) time series has a unique entropy curve that increases with temporal scale, similar to the cardiac signals observed in ECG readings [31,45].

The third column of Figure 1 is the multiscale time asymmetry value. The value of a in the lower right corner of each time asymmetry plot is the time asymmetry index summed over scales 1 to 5; a nonzero value is a sufficient condition for nonlinearity of a time series. Although white noise and the logistic curve have similar entropy profiles, the time asymmetry index distinguishes the nonlinear chaotic signal from noise, and the EEG signal shown here clearly contains nonlinear characteristics on this basis.

Using all of the EEG data, we first calculated time asymmetry to determine the degree of nonlinearity present in the signals. Figure 2 shows the time asymmetry index for all 64 channels of the resting-state EEG for the control and high-risk groups by age. The value of the time asymmetry index in the scalp plot was determined by averaging the index over all members of that age and risk group. Since the index may take positive or negative values and will be near zero for a time-reversible signal, the persistence of nonzero values in this plot indicates signal nonlinearity. The multiscale entropy and trev values have independent physiological meanings [31].
Since apparent differences exist between controls and the high-risk group at all ages for both mMSE and trev, these two quantities together may provide a more sensitive biomarker for developmental age and atypical development. However, in this study, only the multiscale complexity was used to classify the high-risk group.

Figure 2. Time asymmetry index for the typical control group and the group of infants at high risk for autism is shown. The index was averaged over all infants in the group and age categories. If time asymmetry varied randomly at channel locations, the fluctuations would average out. The persistence of time asymmetry values different from zero indicates nonlinearity in the signal.

To make some general comparisons of EEG complexity between risk groups and different ages, mMSE curves were averaged over all members of subgroups by both age and risk group. Figure 3 shows that the HRA group had a consistently lower mean complexity over all channels, across all scales and at all ages. Figure 4 shows the group average mMSE value versus age for infants in each of the two risk groups. The bold black line in Figure 4 represents the mean mMSE value averaged over all 64 EEG channels. Left and right laterality were determined by averaging all left-side and all right-side channels separately. Similarly, mMSE values for four left frontal and four right frontal channels were averaged and plotted versus age. Note that the data in Figure 4 are treated as if drawn from a cross-sectional study, as described previously. Mean values, standard deviations and statistical significance (P values from t-tests) for the channel averages are given in Table 1. Differences between group averages are significant at age 18 months for the overall mean mMSE, and the differences are significant for the left frontal region at all ages except 9 months. Of note is that significant differences were not found at age 9 months for any of the three mMSE averages in Table 1, although head circumference was significantly different only at age 9 months. As discussed below, when all mMSE data were considered without averaging (that is, mMSE curves at each channel), machine learning algorithms found the greatest classification accuracy at age 9 months. Although it appears in Figure 4 that the most prominent difference between the control and HRA groups was the change in mMSE between ages 9 and 12 months, significance levels were not computed for changes in this study because measurements at each age were taken from different populations of infants.

Figure 3. Modified multiscale entropy (mMSE) is computed for each of 64 channels and for each of the risk groups and averaged over the sample population to produce the mMSE plots for infants ages 6 to 24 months.

Figure 4. The change in mean modified multiscale entropy (mMSE) over all channels is shown for each age. Averaging over all channels reveals that, in general, mMSE is higher in the typical control group than in the group of infants at high risk for autism, but regional differences cannot be seen. Numerical data, including the statistical significance of group differences, are contained in Table 1.

Several features are immediately apparent. A general asymmetry in mMSE is seen in both the control and high-risk groups, although this asymmetry appears to decline from ages 12 to 18 months as the left and right hemisphere and frontal curves come closer together at age 18 months. EEG complexity changes with age, but not uniformly.
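The regional averaging and group comparison behind Table 1 and Figure 4 amount to only a few lines of code. The sketch below is illustrative only: the channel index groupings are hypothetical placeholders (the study's actual left/right and frontal assignments on the 64-channel net are not listed in the text), and an ordinary two-sample t-test is assumed.

```python
import numpy as np
from scipy import stats

# Hypothetical channel groupings; the real assignments used in the study are not
# specified here and would need to come from the sensor-net layout.
REGIONS = {
    "whole head":   list(range(64)),
    "left frontal": [0, 1, 2, 3],
    "frontal":      [0, 1, 2, 3, 60, 61, 62, 63],
}

def regional_mean_mmse(mmse, channels):
    """mmse: (n_infants, 64, 20) array of mMSE curves for one age/risk group.
    Returns one value per infant: the mean over the listed channels and all scales."""
    return mmse[:, channels, :].mean(axis=(1, 2))

def compare_groups(mmse_con, mmse_hra):
    """Group means and two-sample t-test P values for each region, as in Table 1."""
    out = {}
    for name, chans in REGIONS.items():
        con = regional_mean_mmse(mmse_con, chans)
        hra = regional_mean_mmse(mmse_hra, chans)
        t_stat, p_value = stats.ttest_ind(con, hra)
        out[name] = {"CON mean": con.mean(), "HRA mean": hra.mean(), "P": p_value}
    return out
```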
In the controls, the overall EEG complexity, shown by the solid black line in Figure 4, increases from ages 6 to 9 months then decreases slightly from ages 9 to 12 months before increasing again from ages 12 to 18 months. Left and right channels and the right frontal channels all follow this same pattern, though left and right hemisphere complexity is not symmetric. The left frontal channels follow a different pattern, increasing strongly until age 12 months and then declining after that. The complexity curves for the high-risk group follow a similar pattern, but the overall complexity is lower and the increases and decreases are much more exaggerated. Perhaps even more distinct is the left frontal curve in the high-risk group. It follows the same pattern as all other regions, unlike the left frontal curve in the controls.

Since the complexity changes seem to vary with EEG channel, a better picture of complexity development with age and between risk groups can be seen in a scalp plot. Figure 5 shows the mean mMSE value for all EEG channels by risk group and age. The complexity values here were computed by averaging the mMSE over all coarse-grained scales for that channel as in Figure 2. Complexity variation with age and between risk groups is immediately apparent. One or two channels of the left frontal region appear to increase in complexity continuously with age in the controls, as does the right parieto-occipital region. The overall complexity in the high-risk group was lower than in the control group. Although the pattern of complexity change from ages 6 to 9 months appears similar in both groups, the high-risk group shows a marked decline in overall complexity from ages 9 to 12 months.

Figure 5. Mean modified multiscale entropy in each electroencephalography channel averaged over all infants at each age in (a) the typical control group or (b) the group of infants at high risk for autism.

Height, head circumference and exact age in days at the time of testing, as well as group means, standard deviations and significance levels, are included in Table 1. The only significant group difference among these variables was in head circumference at age 9 months: the infants in the HRA group had a larger mean head circumference than the typically developing controls.

Machine learning classification of risk

Statistical averages can sometimes obscure meaningful information in complex and highly varying time series. The scalp plots shown in Figure 5 reveal differences between risk groups and ages, but may not use all the information available in the mMSE calculations. For example, the complete mMSE curves on 20 resolutions or scales are shown in Figures 6 and 7 for individual 9-month-old infants. Figure 6 is derived from an infant from the control group, and Figure 7 is derived from an infant from the high-risk group. Curves are grouped by brain region, with 64 curves in all. The purpose of these graphs is simply to illustrate that the shape of the mMSE curves can vary between channels and individuals in distinct ways and that these differences will not be seen in average values. We note that the low spatial scale entropy in the frontal region of the infant from the control group is especially high, while this feature is lacking in the infant from the high-risk group. Although differences between these two examples are apparent, it may be quite difficult to compare 64 mMSE curves for a large number of infants in each group and determine the differences.
To use all 64 × 20, or a total of 1,280, multiscale entropy values for each participant, a multiclass support vector machine (SVM) algorithm was used to perform supervised classification of the control and HRA groups.

Figure 6. Mean modified multiscale entropy curves for all 64 channels grouped by brain region for a single 9-month-old infant from the typical control group. Higher low spatial region (corresponding to high frequency) entropy in the frontal region is one distinct difference in this control example compared to the high-risk example in Figure 7.

Figure 7. This figure is analogous to Figure 6, but for a single 9-month-old infant from the high-risk group. Figures 6 and 7 illustrate that the shape of the modified multiscale entropy curve may contain information not seen when using averages alone, as in the previous scalp plots.

Using 10-fold cross-validation, infants were classified into either control or high-risk groups using three different learning algorithms as described previously. Since the complexity of all channels is changing rapidly from ages 6 to 24 months, classification within age groups was done rather than comparing the two groups using infants across the entire age spectrum. Machine classification calculations were done for boys and girls together at each age as well as separately. The results of these simulations are shown in Table 3. Classification by age and sex is shown with accuracy and significance estimates for three different machine learning algorithms: the k-nearest neighbors (k-NN), SVM and naïve Bayesian classification (Bayes) algorithms.

Table 3. Supervised learning classification using three different algorithms: k-nearest neighbors, support vector machine and naïve Bayes classification^a

^a Tenfold cross-validation was run using the computed mean mMSE values on 64 channels for each infant within each age group. P values were estimated empirically using a permutation of class labels approach as described in the Methods section under 'Classification and endophenotypes'. Identical cross-validation calculations with 100 permutations were performed to determine empirical P values with three different populations: all infants, boys only and girls only. Too few 24-month-old boys were available for cross-validation. k-NN, k-nearest neighbors algorithm; SVM, support vector machine algorithm; Bayes, naïve Bayes classification algorithm. Boldface entries highlight values with statistical significance of P < 0.05.

The significance of classification accuracy was assessed empirically using the permutation strategy described by Golland and Fischl [44]. This approach is common for estimating the significance of learning algorithms when the number of features greatly exceeds the number of training examples. If the class labels are randomly permuted, a new classification accuracy can be computed using 10-fold cross-validation to serve as a baseline. For this study, 100 random permutations were run with 10-fold cross-validation for each machine classification calculation. The P value was determined by counting the number of random classifications for which the accuracy was equal to or higher than the accuracy for the true labels.

Using P = 0.05 as a significance cutoff value, the HRA and control groups can be classified at age 9 months for boys and girls together and for boys separately, with accuracies of nearly 80% and well over 90%, respectively.
For boys considered alone, the classification accuracy remained relatively high at ages 9, 12 and 18 months, though the result at age 12 months was not statistically significant. For girls, separation of the two groups was most accurate and significant at age 6 months, possibly indicating a sex difference in developmental trajectories. These results suggest that a familial endophenotype may be present at around age 9 months that enables HRA infants to be distinguished from low-risk controls. The differences seem to decline after 9 months of age, especially in girls, with some evidence that they may persist in boys until age 18 months (Table 3). Since approximately 60% of the HRA infants are expected not to be diagnosed with an ASD (20% will likely be diagnosed with another disorder, although not an ASD) [36], this is not surprising. Increasing heterogeneity with age regarding rates of development and behavioral characteristics of the high-risk group may be partly responsible for the drop in accuracy. Further study and subclassification with future data are needed to explore sex differences in brain development using entropy calculations.

To determine whether the significant group differences in mean head circumference were predictors of individual class status, two additional calculations were done. First, head circumference was added as one more feature to the mMSE values and the prediction calculations were repeated. The predictive accuracy of the classifiers was unchanged from the results obtained with mMSE alone. Because this might have been because the mMSE values themselves reflected head size differences in some way, classification was then done with head circumference alone. Somewhat surprisingly, classification accuracy was not significant and nearly random. When examining the group values, it appears that the rather large individual variability within each group accounts for this finding. We conclude that head circumference does not contribute to classification accuracy at any of the ages tested.
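To make the classification procedure concrete, the following is a minimal sketch of the feature extraction and the cross-validated, permutation-tested classification described above. The study itself used the Orange package; this sketch uses scikit-learn purely for illustration. Note that the text variously describes 192 summary features or the full 1,280 mMSE values per infant; the sketch uses the 192-value summary, interprets 'low' and 'high' as the minimum and maximum of each channel's curve, and assumes classifier hyperparameters (number of neighbours, SVM kernel) that are not specified in the text.

```python
import numpy as np
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     permutation_test_score)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

def mmse_features(curves):
    """curves: (64, 20) array of mMSE values for one recording session.
    Returns the 192-value feature vector: low, high and mean of each channel's curve."""
    return np.concatenate([curves.min(axis=1), curves.max(axis=1), curves.mean(axis=1)])

def classify_age_group(X, y, n_permutations=100, seed=0):
    """10-fold cross-validated accuracy and permutation P value for three classifiers.

    X: (n_infants, 192) feature matrix for a single age group;
    y: labels (0 = control, 1 = high risk for autism).
    """
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    classifiers = {
        "k-NN":  KNeighborsClassifier(n_neighbors=5),
        "SVM":   SVC(kernel="linear"),
        "Bayes": GaussianNB(),
    }
    results = {}
    for name, clf in classifiers.items():
        accuracy = cross_val_score(clf, X, y, cv=cv).mean()
        # Empirical P value: fraction of label permutations whose cross-validated
        # accuracy is at least as high as the accuracy with the true labels.
        _, _, p_value = permutation_test_score(
            clf, X, y, cv=cv, n_permutations=n_permutations, random_state=seed)
        results[name] = {"accuracy": accuracy, "P": p_value}
    return results
```

Running classify_age_group separately for each age band, and for boys and girls separately, would reproduce the structure (though not necessarily the exact numbers) of Table 3.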
The primary goal of this study was to explore whether measures of EEG complexity might reveal functional endophenotypes of ASD and thus identify them as potential biomarkers for risk of ASD at very early ages, before the onset of clear behavioral symptoms. Our findings show significant promise for the specific measure of multiscale entropy that was used to compare high- and low-risk infants between the ages of 6 and 24 months. Differences in mean mMSE over the entire scalp, and especially in the left frontal region, were significant at most ages measured, except at age 9 months. The trajectory of the curves between ages 6 and 12 months in Figure 4 appears to be as informative as information at any specific age. This result makes the relatively high accuracy at age 9 months of the machine classification using all of the mMSE curves as feature vectors particularly notable. This early period of life is one of important changes in brain function that are foundational for the emergence of higher-level social and communicative skills that are at the heart of the difficulties associated with ASD. A number of major cognitive milestones typically occur beginning at around age 9 months, and perhaps earlier in girls.
These milestones include, for example, the development of the ability to perceive intentional actions by others [46], as well as loss of the ability to perceive speech sound distinctions in non-native languages [47] and loss of the ability to discriminate certain categories of faces [48]. These latter developments are especially significant because they reveal how socially grounded experiences influence changes in the neurocognitive mechanisms that underlie speech and face recognition processing. Thus, Marcus and Nelson [49] argued that infants mold their face-processing system on the basis of the visual experiences they encounter, just as their speech-processing skills are molded to their native language [50,51]. This model assumes a narrowing of the social-perceptual window through which language and faces are processed, which in turn results in an increase in cortical specialization. In a prospective study, Ozonoff et al. [34] found that social communicative behaviors in infants who later developed ASD declined dramatically between ages 6 and 18 months compared to typically developing infants.

We hypothesize that the following developmental sequence may explain the data in Table 3. At age 6 months, no significant behavioral differences have been noted in prospective studies between typically developing infants and those who develop autism [34,35]. Thus, few differences in electrophysiological data are expected at age 6 months, as shown in Figure 4 and Table 3. However, if girls are considered separately, differences in mMSE appear to be significant at age 6 months. If the multiscale entropy calculations from the EEG signals are indeed a biomarker for endophenotypes of autism familial traits, then by 9 months of age many infants in the high-risk group will display unique characteristics in their mMSE profiles that enable them to be distinguished from the controls. Those infants in the high-risk group who do not have multiple risk factors and later develop normally would not be expected to exhibit abnormalities in their mMSE profiles throughout the developmental period. These hypotheses might account for the HRA infants in our study who were classified similarly to our typical controls. This hypothesis will be tested when sufficient numbers of infants in the HRA group have reached 2 to 3 years of age and a diagnosis of ASD or typical development can be made.

Developmental abnormalities from ages 6 to 12 months are particularly distinct in the two groups (low and high risk for ASD), allowing the groups to be classified quite accurately, although some overlap between the HRA and control groups should be expected at all ages. From 12 to 24 months of age, the distinction between the two groups declines. This likely reflects the trend for some fraction of high-risk infants to develop more typical cognitive and behavioral function, even though they may carry endophenotypes that share common complexity profiles at an earlier age with other high-risk infants who will later be diagnosed with ASD.

Rather than analyzing entropy at single age points, using a trajectory of entropy values from ages 6 to 24 months might be more informative. Although EEG complexity has been shown in several studies to increase with age [30,52,53], the increase is neither monotonic nor uniform across different brain regions.
The abnormalities in brain development that lead to autistic characteristics may not be immediately apparent by inspecting relevant brain activity, even if the data contain diagnostically significant information. For example, a recent study of the relationship between cortical thickness and intelligence found no correlation between absolute cortical thickness at any particular age and intelligence. However, a specific pattern of developmental changes in cortical thickness was highly correlated with intelligence [54].

One of the characteristics of the high-risk group is heterogeneity: this group includes infants who will go on to develop an ASD and those who are within the normal range genetically, developmentally and behaviorally, as well as those in between who exhibit mild autism-like traits. Further study of this cohort as they grow and develop will enable this hypothesis to be tested. Rather than binary classification into typical controls and heterogeneous high-risk groups, classification on the basis of actual behavioral assessments will allow a more accurate test of the efficacy of using the mMSE to measure brain function.

Abnormal brain connectivity, whether locally, regionally or both, may be a cause of a number of behavioral disorders, including ASD [9], and changes in local complexity are believed to be related to brain connectivity [55]. Local neural network connectivity undergoes rapid change during early development, and this may be reflected in the multiscale entropy of EEG signals, which is one measure of signal complexity that has been associated with health and disease [23]. A number of recent studies have demonstrated a link between brain connectivity and complexity, and EEG signal complexity may provide valuable information about the neural correlates of cognitive processes [56]. Early markers for neurological or mental disorders, particularly those with developmental etiologies, may be the growth trajectories of complexity as measured by multiscale entropy curves. The results described in this paper suggest that infants in families with a history of ASD have quite different EEG complexity patterns from 6 to 24 months of age that may be indicators of a functional endophenotype associated with ASD risk. Differences between mean mMSE averaged over all channels or in frontal regions in the two groups are significant at all ages except 9 months. Machine classification on the basis of mMSE curves in each channel as a feature set is able to determine group membership, particularly at 9 months of age. The classification accuracy decreases after age 12 months, possibly because of the influence of normal brain development and the development of normal characteristics in many of the high-risk infants. Classification accuracy for boys alone still appears to be significant and relatively high at age 18 months. More data about the future outcomes of the HRA infants and the computation of additional features, such as laterality of entropy, together with behavioral and cognitive assessments as the cohort of participants in this study grows, may enable the high-risk population to be subclassified more accurately. Future longitudinal analysis of data from this cohort will allow growth trajectories, as well as the future outcomes of the high-risk children, to be compared.
Deeper understanding of the relationship between these neurophysiological processes and cognitive function may yield a new window to the mind and provide a clinically useful psychiatric biomarker using complexity analysis of EEG data.

Competing interests: WJB is named on a provisional patent application submitted by the Children's Hospital Boston Technology Development Office that includes parts of the signal analysis methods discussed in this article. The authors declare that they have no other competing financial or nonfinancial interests.

Authors' contributions: WJB conceived of the analytical methods used in this paper, wrote the needed computer codes, performed the calculations and statistical analysis and drafted the manuscript. AT carried out the initial processing of the raw data, participated in discussion of the analysis results and contributed to drafting the Methods section. CAN and HTF are co-principal investigators on the larger Infant Siblings Project study upon which this paper was based, contributed to the study design and interpretation of the developmental implications of the results, and were responsible for coordinating recruitment and testing of all patient data. All authors read and approved the final manuscript.

Pre-publication history: The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1741-7015/9/18/prepub
[ "Background", "Methods", "Participants", "EEG data collection", "Modified Multiscale Sample Entropy", "Time asymmetry and nonlinearity", "Classification and endophenotypes", "Results", "Machine learning classification of risk", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The human brain exhibits a remarkable network organization. Although sparsely connected, each neuron is within a few synaptic connections of any other neuron [1]. This remarkable connectivity is achieved by a kind of hierarchical organization that is not fully understood in the brain, but is ubiquitous in nature and is called a scale-free network [2-4] that changes with development. Complex networks are characterized by dense local connectivity and sparser long-range connectivity [2] that are fractal or self-similar at all scales. Modules or clusters can be identified on multiple scales. A comparison of network properties using functional magnetic resonance imaging showed that children and young adults' brains had similar \"small-world\" or scale-free organization at the global level, but differed significantly in hierarchical organization and interregional connectivity [5]. White matter fiber tracking has revealed that brain development in children involves changes in both short-range and long-range wiring, with synaptogenesis and pruning occurring at both the local (neuronal) level and the systems level [5]. Abnormal network connectivity may be a key to understanding developmental disabilities.\nAutism is a complex and heterogeneous developmental disorder that affects the developmental trajectory in several key behavioral domains, including social, cognitive and language abilities. The underlying brain dysfunction that results in the behavioral characteristics is not well understood. Complex mental disorders such as autism cannot easily be described as being associated with underconnectivity or overconnectivity, but may involve some form of abnormal connectivity that varies between different brain regions [6]. Normal and abnormal connectivity may also change during development, so that, for example, a condition may not exist at age 3 months but may emerge by age 24 months. A key to understanding neurodevelopmental disorders is the relationship between functional brain connectivity and cognitive development [7]. Measuring functional brain development is difficult both because the brain is a complex, hierarchical system and because few methods are available for noninvasive measurements of brain function in infants. New nonlinear methods for analyzing brain electrical activity measured using scalp electrodes may enable differences in infant brain connectivity to be detected. For example, coarse-grained entropy synchronization between electroencephalography (EEG) electrodes revealed that synchronization was significantly lower in children with autism than in a group of typically developing children [8], supporting the theory that autistic brains exhibit low functional connectivity. In the autistic brain, high local connectivity and low long-range connectivity may develop concurrently because of problems with synapse pruning or formation [9,10]. Estimation of changes in neural connectivity might be an effective diagnostic marker for atypical connectivity development.\nEEG signals are believed to derive from pyramidal cells aligned in parallel in the cerebral cortex and the hippocampus [11], which act as many interacting nonlinear oscillators [12]. As a consequence of the scale-free network organization of neurons, EEG signals carry nonlinear, complex system information reflecting the underlying network topology, including transient synchronization between frequencies, short- and long-range correlations and cross-modulation of amplitudes and frequencies [13]. 
The mathematical relationship between network structure and time series is a subject of current research and may eventually shed further light on the relationship between neural networks and EEG signals.\nA great deal of information about interrelationships in the nervous system likely remains undiscovered because the linear analysis techniques currently in use fail even to detect them [14]. If brain function and behavior are mirrors of each other as is commonly accepted [15-18], then biomarkers of complex developmental disorders may be hidden in complex, nonlinear patterns of EEG data. The dynamics of the brain are inherently nonlinear, exhibiting emergent dynamics such as chaotic and transiently synchronized behavior that may be central to understanding the mind-brain relationship [19] or the \"dynamic core\" [20]. Methods for chaotic signal analysis originally arose from a need to rigorously describe physical phenomena that exhibited what was formerly thought to be purely stochastic behavior, but was then discovered to represent complex, aperiodic yet organized behavior, referred to as self-organized dynamics [21]. The analysis of signal complexity on multiple scales may reveal information about neural connectivity that is diagnostically useful [1,19,22].\nOne interpretation of biological complexity is that it reflects a system's ability to adapt quickly and function in a changing environment [23]. The complexity of EEG signals was found in one study to be associated with the ability to attend to a task and adapt to new cognitive tasks; a significant difference in complexity was found between controls and patients diagnosed with schizophrenia [24]. Patients with schizophrenia were found to have lower complexity than controls in some EEG channels and significantly higher interhemispheric and intrahemispheric cross-mutual information values than controls [25]. A study of the correlation dimension (another measure of signal complexity) of EEG signals in healthy individuals showed an increase with aging, interpreted as an increase in the number of independent synchronous networks in the brain [22].\nSeveral different methods for computing complex or nonlinear time series features have been defined and used successfully to analyze biological signals [26,27]. Sample entropy, a measure of time series complexity, was significantly higher in certain regions of the right hemisphere in preterm neonates who received skin-to-skin contact than in those who did not, indicating faster brain maturation [28]. Sample entropy has also been used as a marker of brain maturation in neonates [29] and was found to increase prenatally until maturation at about 42 weeks, then decreased after newborns reached full term [30].\nLiving systems exhibit a fundamental propensity to move forward in time. This property also describes physical systems that are far from an equilibrium state. For example, heat moves in only one direction, from hot to cold areas. In thermodynamics, this property is related to the requirement that all systems must move in the direction of higher entropy. Time irreversibility is a common characteristic of living biosignals. It was found to be a characteristic of healthy human heart electrocardiographic (ECG) recordings and was shown to be a reliable way to distinguish between actual ECG recordings and model ECG simulations [31]. ECG signals from patients with congestive heart disease were found to have lower time irreversibility indices than healthy patients [32]. 
Interestingly, the time irreversibility of EEG signals has been associated with epileptic regions of the brain, and this measure has been proposed as a biomarker for seizure foci [33]. Time irreversibility may be used as a practical test for nonlinearity in a time series.\nThis study is a preliminary investigation of the difference in multiscale entropy between two groups of infants between 6 and 24 months of age. The groups include typically developing infants and infants who have an older sibling with a confirmed diagnosis of autism spectrum disorder (ASD) and who are thus at higher risk for developing autism. ASD is a developmental disorder in which symptoms emerge during the second year of life. Behavioral indicators are not evident at 6 months of age [34-36]; however, on the basis of the use of a novel observational scale to assess ASD characteristics in infants, distinguishing characteristics were seen at 12 months [35]. Another study compared behavioral measures such as frequency of gaze at faces and shared smiles in infants. Again, group differences between those who later developed an ASD and typically developing controls were apparent at age 12 months, but not at age 6 months [34]. Only one study has investigated behavioral differences at age 9 months: infants at risk for ASD showed distinct differences in visual orientation from those with no family history of autism [37]. These behavioral observations suggest that important developmental differences are occurring in the brains of typically developing infants and those who will later develop an ASD. Although there have been no other published studies on brain development during the first year of life, one of the most replicated findings, based on a retrospective review of medical records, is accelerated growth in head circumference (a valid and reliable proxy for brain growth), which begins at around 6 to 9 months of age [38-40]. If multiscale entropy is a measure of functional brain complexity, then it may be a useful marker for distinguishing differences in brain activity between at-risk and typical infants.", "[SUBTITLE] Participants [SUBSECTION] Data were collected from 79 different infants: 46 who were at high risk for ASD (hereafter referred to as HRA), defined on the basis of having an older sibling with a confirmed diagnosis of ASD, and 33 controls, defined on the basis of a typically developing older sibling and no family history of neurodevelopmental disorders. Testing sessions included infants from ages 6 to 24 months, with some participants tested at more than one age. The study participants were part of an ongoing longitudinal study, and for this analysis visits were evaluated at regular intervals. However, at the time this study was done, most infants had been tested at only one or two visits. Data collected at each session were therefore treated as an independent data set. Thus, the data gathered from an infant who was tested during five different sessions, at ages 6, 9, 12, 18 and 24 months, were treated as unique data sets. Data were collected from a total of 143 sessions and from 79 different individuals. The distribution at different ages and risk groups is shown in Table 1. The number of infants who were tested at only one age at the time of this study is shown in Table 2, as well as the number of infants tested two, three, four and five times. Only one infant thus far has been tested at all five ages from 6 to 24 months. For the purposes of this study, all visits were treated as independent measurements. 
No comparison of different ages or of growth trajectories between individuals was done. Other characteristics recorded include height and head circumference as shown in Table 1.\nDistribution of participants by age and risk groupa\naA total of 79 different infants (46 HRA and 33 CON) participated in this study. Some infants participated in multiple sessions at different ages, raising the total to 143 recording sessions. Also shown are measured demographic variables (age, height and head circumference) and mean multiscale entropy (mMSE) values over three regions: whole head, frontal and left frontal. Statistically significant differences between HRA and CON groups are highlighted in boldface. HRA, high risk for autism, CON, controls; SD, standard deviation.\nDistribution of participants with number of visits and/or measurements of the same child at different agesa\naOverall, 79 infants participated in the study, and 143 measurement sessions were conducted. HRA, high risk for autism; CON, controls.\nThe larger Infant Sibling Project study, from which data for this project were taken, was approved by the Committee on Clinical Investigations at Children's Hospital Boston (X06-08-0374) and the Boston University School of Medicine (H-29049). Parental written informed consent was obtained after the experimental procedures had been fully explained.\nData were collected from 79 different infants: 46 who were at high risk for ASD (hereafter referred to as HRA), defined on the basis of having an older sibling with a confirmed diagnosis of ASD, and 33 controls, defined on the basis of a typically developing older sibling and no family history of neurodevelopmental disorders. Testing sessions included infants from ages 6 to 24 months, with some participants tested at more than one age. The study participants were part of an ongoing longitudinal study, and for this analysis visits were evaluated at regular intervals. However, at the time this study was done, most infants had been tested at only one or two visits. Data collected at each session were therefore treated as an independent data set. Thus, the data gathered from an infant who was tested during five different sessions, at ages 6, 9, 12, 18 and 24 months, were treated as unique data sets. Data were collected from a total of 143 sessions and from 79 different individuals. The distribution at different ages and risk groups is shown in Table 1. The number of infants who were tested at only one age at the time of this study is shown in Table 2, as well as the number of infants tested two, three, four and five times. Only one infant thus far has been tested at all five ages from 6 to 24 months. For the purposes of this study, all visits were treated as independent measurements. No comparison of different ages or of growth trajectories between individuals was done. Other characteristics recorded include height and head circumference as shown in Table 1.\nDistribution of participants by age and risk groupa\naA total of 79 different infants (46 HRA and 33 CON) participated in this study. Some infants participated in multiple sessions at different ages, raising the total to 143 recording sessions. Also shown are measured demographic variables (age, height and head circumference) and mean multiscale entropy (mMSE) values over three regions: whole head, frontal and left frontal. Statistically significant differences between HRA and CON groups are highlighted in boldface. 
HRA, high risk for autism, CON, controls; SD, standard deviation.\nDistribution of participants with number of visits and/or measurements of the same child at different agesa\naOverall, 79 infants participated in the study, and 143 measurement sessions were conducted. HRA, high risk for autism; CON, controls.\nThe larger Infant Sibling Project study, from which data for this project were taken, was approved by the Committee on Clinical Investigations at Children's Hospital Boston (X06-08-0374) and the Boston University School of Medicine (H-29049). Parental written informed consent was obtained after the experimental procedures had been fully explained.\n[SUBTITLE] EEG data collection [SUBSECTION] Infants were seated on their mothers' laps in a dimly lit room while a research assistant engaged the infants' attention by blowing bubbles. This procedure was followed to limit the amount of head movement by the infant that would interfere with the recording process. Continuous EEG recordings were taken with a 64-channel Sensor Net System (EGI, Inc., Eugene, OR, USA). This sensor net device comprises an elastic tension structure forming a geodesic tessellation of the head surface and containing carbon fiber electrodes embedded in pedestal sponges. At each vertex is a sensor pedestal housing an Ag/AgCl-coated, carbon-filled plastic electrode and a sponge containing a saline electrolyte solution. Prior to fitting the sensor net over the scalp, the sponges are soaked in electrolyte solution (6 mL of KCl per 1 L of distilled water) to facilitate electrical contact between the scalp and the relevant electrode. To ensure the safety and comfort of the infant, the salinity of the electrolyte solution is the same as tears. In the event that the solution comes into contact with the eyes, no damage or discomfort to the infant will occur.\nPrior to recording, measurements of channel gains and zeros were taken to provide an accurate scaling factor for the display of waveform data. The baby's head was measured and marked with a washable wax pencil to ensure accurate placement of the net, which was then placed over the scalp. Scalp impedances were checked online using NetStation (EGI, Inc.), the recording software package that runs this system. EEG data were collected and recorded online using NetAmps Amplifiers (EGI, Inc.) and NetStation software. The data were amplified, band-pass filtered at 0.1 to 100.0 Hz and sampled at a frequency of 250 Hz. They were digitized with a 12-bit National Instruments Board (National Instruments Corp., Woburn, MA, USA). Typically, 2 minutes of baseline activity were recorded, but depending on the willingness of the infant, recorded periods may have been shorter. For this study, continuous sample segments of 20 seconds were selected from the processed resting state data and used to compute multiscale entropy values.\n[SUBTITLE] Modified Multiscale Sample Entropy [SUBSECTION] A multiscale method for computing the entropy of biological signals was developed by Costa et al. [23]. This approach computes the sample entropy on the original time series (or \"signal\") and on coarse-scaled series that are derived from the original signal. Because biological systems must be adaptable across multiple time scales, measurements of biological signals are likely to carry information across multiple scales. 
A multiscale estimation of the information content of EEG signals may reveal more information than the entropy of only the original signal.\nMultiple scale time series are produced from the original signal using a coarse-graining procedure. The scale 1 series is the original time series. The scale 2 time series was obtained by averaging two successive values from the original series. Scale 3 was obtained by averaging every three original values and so on as shown in equation (1):\n\n\n(1)\n\n\n\n\n\ns\n1\n\n:\n\nx\n1\n\n,\n\nx\n2\n\n,\n\nx\n3\n\n…\n\nx\nN\n\n\n\n\n\n\ns\n2\n\n:\n\n(\n\n\nx\n1\n\n+\n\nx\n2\n\n\n)\n\n/\n2\n,\n\n(\n\n\nx\n3\n\n+\n\nx\n4\n\n\n)\n\n/\n2\n,\n…\n,\n\n(\n\n\nx\n\nN\n−\n1\n\n\n+\n\nx\nN\n\n\n)\n\n/\n2\n\n\n\n\n⋮\n\n\n\n\n\ns\n\n20\n\n\n:\n\n(\n\n\nx\n1\n\n+\n⋯\n+\n\nx\n\n20\n\n\n\n)\n\n/\n20\n,\n…\n,\n\n(\n\n\nx\n\nN\n−\n20\n\n\n+\n⋯\n+\n\nx\nN\n\n\n)\n\n/\n20\n\n\n\n\n\n\nCoarse-grained series up to scale 20 are computed for each of the 64 EEG channels. The modified sample entropy (mSE) defined by Xie et al. [41] was used to compute the entropy of each coarse-grained time series. The mSE algorithm uses a sigmoidal function to compare vector similarity rather than a Heaviside function with a strict cutoff as with the sample entropy used for analysis of biological and ECG signals by Costa et al. [23,31]. The practical effect of using the mSE is that the computed entropy values are more robust to noise and the results are more consistent with short time series. In brief, the similarity functions Arm and Brm defined by equations (7) and (9) in the paper by Xie et al. [41] are computed with m = 2 and r = 0.15 for each coarse-grained time series defined in equation (1). The modified multiscale entropy (mMSE) is defined as the series of mSE values at each of the coarse-grained scales from 1 to 20. The mMSE for scale s with a finite length time series is then approximated by calculating the following:\n\n\n(2)\n\n\nm\nM\nS\nE\n\n(\n\ns\n,\nm\n,\nr\n\n)\n\n=\n−\nln\n⁡\n\n(\n\n\n\n\nA\nr\nm\n\n\n(\ns\n)\n\n\n\n\nB\nr\nm\n\n\n(\ns\n)\n\n\n\n\n)\n\n.\n\n\n\n\nThe multiscale entropy for several linear, stochastic and nonlinear time series is shown in Figure 1, along with representative mMSE for EEG signals from the EEG data used in this study. The purely random white noise and the completely deterministic logistic equation have similar mMSE curves and visually appear indistinguishable. As discussed by Costa et al. [23], these are quite distinct from normal physiological signals. The EEG signal is the only one of the series in Figure 1 that has an mMSE that increases with scale, indicating longer-range correlations in time. Decreasing entropy in general indicates that a signal contains information only on the smallest time scales. If entropy values across all scales for one time series are higher than for another, then the former is considered to be more complex than the latter. Although the mean mMSE value can be computed and used for comparing the overall complexity of physiological signals, the shape of the curve itself may be important for distinguishing two signals.\nCharacteristics of five different time series are shown. Column 1 shows the time series amplitudes. Column 2 represents the multiscale entropy, where the horizontal axis is the coarse-grained scale from 1 to 20. Column 3 is the multiscale time asymmetry value. The value of a in the lower right corner of the time asymmetry plot is the value of the time asymmetry index summed over scales 1 to 5. 
A nonzero time asymmetry value is a sufficient condition for nonlinearity of a time series.\nA multiscale method for computing the entropy of biological signals was developed by Costa et al. [23]. This approach computes the sample entropy on the original time series (or \"signal\") and on coarse-scaled series that are derived from the original signal. Because biological systems must be adaptable across multiple time scales, measurements of biological signals are likely to carry information across multiple scales. A multiscale estimation of the information content of EEG signals may reveal more information than the entropy of only the original signal.\nMultiple scale time series are produced from the original signal using a coarse-graining procedure. The scale 1 series is the original time series. The scale 2 time series was obtained by averaging two successive values from the original series. Scale 3 was obtained by averaging every three original values and so on as shown in equation (1):\n\n\n(1)\n\n\n\n\n\ns\n1\n\n:\n\nx\n1\n\n,\n\nx\n2\n\n,\n\nx\n3\n\n…\n\nx\nN\n\n\n\n\n\n\ns\n2\n\n:\n\n(\n\n\nx\n1\n\n+\n\nx\n2\n\n\n)\n\n/\n2\n,\n\n(\n\n\nx\n3\n\n+\n\nx\n4\n\n\n)\n\n/\n2\n,\n…\n,\n\n(\n\n\nx\n\nN\n−\n1\n\n\n+\n\nx\nN\n\n\n)\n\n/\n2\n\n\n\n\n⋮\n\n\n\n\n\ns\n\n20\n\n\n:\n\n(\n\n\nx\n1\n\n+\n⋯\n+\n\nx\n\n20\n\n\n\n)\n\n/\n20\n,\n…\n,\n\n(\n\n\nx\n\nN\n−\n20\n\n\n+\n⋯\n+\n\nx\nN\n\n\n)\n\n/\n20\n\n\n\n\n\n\nCoarse-grained series up to scale 20 are computed for each of the 64 EEG channels. The modified sample entropy (mSE) defined by Xie et al. [41] was used to compute the entropy of each coarse-grained time series. The mSE algorithm uses a sigmoidal function to compare vector similarity rather than a Heaviside function with a strict cutoff as with the sample entropy used for analysis of biological and ECG signals by Costa et al. [23,31]. The practical effect of using the mSE is that the computed entropy values are more robust to noise and the results are more consistent with short time series. In brief, the similarity functions Arm and Brm defined by equations (7) and (9) in the paper by Xie et al. [41] are computed with m = 2 and r = 0.15 for each coarse-grained time series defined in equation (1). The modified multiscale entropy (mMSE) is defined as the series of mSE values at each of the coarse-grained scales from 1 to 20. The mMSE for scale s with a finite length time series is then approximated by calculating the following:\n\n\n(2)\n\n\nm\nM\nS\nE\n\n(\n\ns\n,\nm\n,\nr\n\n)\n\n=\n−\nln\n⁡\n\n(\n\n\n\n\nA\nr\nm\n\n\n(\ns\n)\n\n\n\n\nB\nr\nm\n\n\n(\ns\n)\n\n\n\n\n)\n\n.\n\n\n\n\nThe multiscale entropy for several linear, stochastic and nonlinear time series is shown in Figure 1, along with representative mMSE for EEG signals from the EEG data used in this study. The purely random white noise and the completely deterministic logistic equation have similar mMSE curves and visually appear indistinguishable. As discussed by Costa et al. [23], these are quite distinct from normal physiological signals. The EEG signal is the only one of the series in Figure 1 that has an mMSE that increases with scale, indicating longer-range correlations in time. Decreasing entropy in general indicates that a signal contains information only on the smallest time scales. If entropy values across all scales for one time series are higher than for another, then the former is considered to be more complex than the latter. 
[SUBTITLE] EEG data acquisition [SUBSECTION] Infants were seated on their mothers' laps in a dimly lit room while a research assistant engaged the infants' attention by blowing bubbles. This procedure was followed to limit the amount of head movement by the infant that would interfere with the recording process. Continuous EEG recordings were taken with a 64-channel Sensor Net System (EGI, Inc., Eugene, OR, USA). This sensor net device comprises an elastic tension structure forming a geodesic tessellation of the head surface and containing carbon fiber electrodes embedded in pedestal sponges. At each vertex is a sensor pedestal housing an Ag/AgCl-coated, carbon-filled plastic electrode and a sponge containing a saline electrolyte solution. Prior to fitting the sensor net over the scalp, the sponges are soaked in electrolyte solution (6 mL of KCl per 1 L of distilled water) to facilitate electrical contact between the scalp and the relevant electrode. To ensure the safety and comfort of the infant, the salinity of the electrolyte solution is the same as that of tears; in the event that the solution comes into contact with the eyes, no damage or discomfort to the infant will occur.

Prior to recording, measurements of channel gains and zeros were taken to provide an accurate scaling factor for the display of waveform data. The baby's head was measured and marked with a washable wax pencil to ensure accurate placement of the net, which was then placed over the scalp. Scalp impedances were checked online using NetStation (EGI, Inc.), the recording software package that runs this system. EEG data were collected and recorded online using NetAmps amplifiers (EGI, Inc.) and NetStation software. The data were amplified, band-pass filtered at 0.1 to 100.0 Hz and sampled at a frequency of 250 Hz. They were digitized with a 12-bit National Instruments board (National Instruments Corp., Woburn, MA, USA). Typically, 2 minutes of baseline activity were recorded, but depending on the willingness of the infant, recorded periods may have been shorter. For this study, continuous sample segments of 20 seconds were selected from the processed resting-state data and used to compute multiscale entropy values.
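As a rough offline counterpart to the filtering and segment selection just described, here is a small sketch; the Butterworth order, the zero-phase filtering and the function names are assumptions and do not reproduce the NetStation processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # sampling rate of the recordings (Hz)

def bandpass(eeg, low=0.1, high=100.0, fs=FS, order=4):
    """Zero-phase Butterworth band-pass mirroring the 0.1-100.0 Hz filter setting."""
    b, a = butter(order, [low / (fs / 2.0), high / (fs / 2.0)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def continuous_segment(eeg, start_s=0.0, length_s=20.0, fs=FS):
    """Cut a continuous 20-second segment (5,000 samples per channel at 250 Hz)."""
    i0 = int(round(start_s * fs))
    return eeg[..., i0:i0 + int(round(length_s * fs))]
```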
[SUBTITLE] Time asymmetry and nonlinearity [SUBSECTION] The time irreversibility index (trev) was computed for different resolutions of the EEG time series using the algorithm of Costa et al. [31]. The third column of Figure 1 shows trev values for several different linear and nonlinear time series. Of particular note is that only the sine wave time series and both random time series have nearly zero irreversibility indices, while the indices for the nonlinear logistic series and the representative EEG signal are nonzero on all scales shown.

After computing multiple resolutions of the EEG time series as described above, an estimate of the time irreversibility of each resolution was computed by noting that a time-reversible (symmetric) series has approximately the same number of increments as decrements. That is, the number of times $x_{i+1} - x_i > 0$ will be approximately the same as the number of times $x_{i+1} - x_i < 0$. Thus, an estimate of the time series symmetry (or reversibility) was found by summing increments and decrements with sign and dividing by the length of the series; a reversible time series will have a value of zero. For a series of 5,000 points, as used in this study, trev > 0.1 is a significant indicator of irreversibility and thus of nonlinearity [42]. This information is used only to indicate that the EEG time series contain nonlinear information that is not captured by linear analysis methods, suggesting that the mMSE may contain more diagnostically useful information than power spectral analysis alone.
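A minimal sketch of the increment-counting estimate described above; it reuses the coarse_grain helper from the earlier entropy sketch and is not the full trev algorithm of Costa et al. [31].

```python
import numpy as np

def time_asymmetry(x):
    """(number of increments - number of decrements) / series length.

    Near zero for a time-reversible series; the text treats values above 0.1
    as a significant indicator of irreversibility for a ~5,000-point series.
    """
    d = np.diff(np.asarray(x, dtype=float))
    return (np.sum(d > 0) - np.sum(d < 0)) / len(x)

def multiscale_time_asymmetry(x, max_scale=20):
    """Asymmetry index of each coarse-grained resolution (Figure 1, column 3)."""
    return np.array([time_asymmetry(coarse_grain(x, s))
                     for s in range(1, max_scale + 1)])
```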
[SUBTITLE] Classification and endophenotypes [SUBSECTION] The Orange machine learning software package (orange.biolab.si/) was used for classification calculations [43]. Several different learning algorithms were compared (support vector machine, k-nearest neighbors and naïve Bayesian algorithms) to exclude possible overfitting by one method. The significance of the classification results for each method was estimated empirically using the permutation approach described by Golland and Fischl [44].

To keep the feature set smaller while still capturing the overall shape of the mMSE curve, the low, high and mean values for each curve were extracted for each of 64 channels, creating a feature set of 192 values. A single sample from the population is represented by these 192 values. Although some data points were from the same infant at different ages, this study should be considered a cross-sectional study in that any relationship between data at two different ages was not used for classification. That is, the infants in the age 6 months EEG data set were considered to be independent of the set of infants studied at age 9 months, age 12 months and so on.
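As an illustration of the feature construction and the permutation test, here is a sketch that substitutes scikit-learn for the Orange package used in the study; the linear-kernel SVM default and the function names are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mmse_features(curves):
    """Collapse a (64 channels x 20 scales) mMSE array into the 192-value
    feature vector described above: per-channel minimum, maximum and mean."""
    return np.concatenate([curves.min(axis=1), curves.max(axis=1),
                           curves.mean(axis=1)])

def permutation_p_value(X, y, clf=None, n_perm=100, cv=10, seed=0):
    """Empirical P value for cross-validated accuracy, following the
    label-permutation approach of Golland and Fischl [44]: count how often
    a random relabelling scores at least as well as the true labels."""
    rng = np.random.default_rng(seed)
    clf = clf if clf is not None else SVC(kernel="linear")
    true_acc = cross_val_score(clf, X, y, cv=cv).mean()
    null_acc = np.array([cross_val_score(clf, X, rng.permutation(y), cv=cv).mean()
                         for _ in range(n_perm)])
    return true_acc, float(np.mean(null_acc >= true_acc))
```

scikit-learn's built-in permutation_test_score wraps essentially the same procedure and is used in the later classification sketch.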
[SUBTITLE] Participants [SUBSECTION] Data were collected from 79 different infants: 46 who were at high risk for ASD (hereafter referred to as HRA), defined on the basis of having an older sibling with a confirmed diagnosis of ASD, and 33 controls (CON), defined on the basis of a typically developing older sibling and no family history of neurodevelopmental disorders. Testing sessions included infants from ages 6 to 24 months, with some participants tested at more than one age. The study participants were part of an ongoing longitudinal study, and for this analysis visits were evaluated at regular intervals. However, at the time this study was done, most infants had been tested at only one or two visits. Data collected at each session were therefore treated as an independent data set; thus, the data gathered from an infant who was tested during five different sessions, at ages 6, 9, 12, 18 and 24 months, were treated as five unique data sets. In total, data were collected from 143 sessions and from 79 different individuals.

The distribution across ages and risk groups is shown in Table 1. The number of infants who were tested at only one age at the time of this study is shown in Table 2, as well as the number of infants tested two, three, four and five times. Only one infant thus far has been tested at all five ages from 6 to 24 months. For the purposes of this study, all visits were treated as independent measurements; no comparison of different ages or of growth trajectories between individuals was done. Other characteristics recorded include height and head circumference, as shown in Table 1.

Table 1. Distribution of participants by age and risk group.ᵃ
ᵃA total of 79 different infants (46 HRA and 33 CON) participated in this study. Some infants participated in multiple sessions at different ages, raising the total to 143 recording sessions. Also shown are measured demographic variables (age, height and head circumference) and mean multiscale entropy (mMSE) values over three regions: whole head, frontal and left frontal. Statistically significant differences between HRA and CON groups are highlighted in boldface. HRA, high risk for autism; CON, controls; SD, standard deviation.

Table 2. Distribution of participants by number of visits and/or measurements of the same child at different ages.ᵃ
ᵃOverall, 79 infants participated in the study, and 143 measurement sessions were conducted. HRA, high risk for autism; CON, controls.

The larger Infant Sibling Project study, from which data for this project were taken, was approved by the Committee on Clinical Investigations at Children's Hospital Boston (X06-08-0374) and the Boston University School of Medicine (H-29049). Parental written informed consent was obtained after the experimental procedures had been fully explained.
The multiscale entropy and time irreversibility characteristics of five different time series are shown in Figure 1. The example time series amplitudes are shown in the first column. The second column displays plots of the multiscale entropy, where the horizontal axis is the coarse-grained scale from 1 to 20. White noise shows a characteristic decline in entropy with temporal scale, indicating loss of correlation between longer time intervals. Note that the deterministic but chaotic logistic equation has an entropy profile similar to white noise, suggesting that signal characteristics that appear as noise may in fact contain significant dynamic information about the system. The physiological (EEG) time series has a unique entropy curve that increases with temporal scale, similar to the cardiac signals observed in ECG readings [31,45].

The third column of Figure 1 is the multiscale time asymmetry value. The value of a in the lower right corner of each time asymmetry plot is the time asymmetry index summed over scales 1 to 5. A nonzero time asymmetry value is a sufficient condition for nonlinearity of a time series. Although white noise and the logistic curve have similar entropy profiles, the time asymmetry index distinguishes the nonlinear chaotic signal from noise. The EEG signal shown here clearly contains nonlinear characteristics on the basis of the nonzero time asymmetry index.

Using all of the EEG data, we first calculated time asymmetry to determine the degree of nonlinearity present in the signals. Figure 2 shows the time asymmetry index for all 64 channels of the resting state EEG for control and high-risk groups by age. The value of the time asymmetry index in the scalp plot was determined by averaging the index value over all members of that age and risk group. Since the value may take on positive or negative values and will be near zero for a time-reversible signal, the persistence of the nonzero values in this plot is an indicator of signal nonlinearity. The multiscale entropy and trev values have independent physiological meanings [31].
Since apparent differences exist between controls and the high-risk group at all ages for both mMSE and trev, these two quantities together may provide a more sensitive biomarker for developmental age and atypical development. However, in this study, only the multiscale complexity was used to classify the high-risk group.

Figure 2. Time asymmetry index for the typical control group and the group of infants at high risk for autism. The index was averaged over all infants in each group and age category. If time asymmetry varied randomly at channel locations, the fluctuations would average out; the persistence of time asymmetry values different from zero indicates nonlinearity in the signal.

To make some general comparisons of EEG complexity between risk groups and different ages, mMSE curves were averaged over all members of subgroups by both age and risk group. Figure 3 shows that the HRA group had a consistently lower mean complexity over all channels, across all scales and at all ages. Figure 4 shows the group average mMSE value versus age for infants in each of the two risk groups. The bold black line in Figure 4 represents the mean mMSE value averaged over all 64 EEG channels. Left and right laterality were determined by averaging all left-side and all right-side channels separately. Similarly, mMSE values for four left frontal and four right frontal channels were averaged and plotted versus age. Note that the data in Figure 4 are treated as if drawn from a cross-sectional study, as described previously. Mean values, standard deviations and statistical significance (P values from t-tests) for the channel averages are given in Table 1. Differences between group averages are significant at age 18 months for overall mean mMSE, and the differences are significant for the left frontal region at all ages except 9 months. Of note is that significant differences were not found at age 9 months for any of the three mMSE averages in Table 1, although head circumference was significantly different only at age 9 months. As discussed below, when all mMSE data were considered without averaging (that is, mMSE curves at each channel), machine learning algorithms found the greatest classification accuracy at age 9 months. Although it appears in Figure 4 that the most prominent difference between the control and HRA groups was the change in mMSE between ages 9 and 12 months, significance levels were not computed for changes in this study because measurements at each age were taken from different populations of infants.

Figure 3. Modified multiscale entropy (mMSE) computed for each of 64 channels and for each of the risk groups, averaged over the sample population, producing the mMSE plots for infants ages 6 to 24 months.

Figure 4. The change in mean modified multiscale entropy (mMSE) over all channels is shown for each age. Averaging over all channels reveals that, in general, mMSE is higher in the typical control group than in the group of infants at high risk for autism, but regional differences cannot be seen. Numerical data, including the statistical significance of group differences, are given in Table 1.

Several features are immediately apparent. A general asymmetry in mMSE is seen in both control and high-risk groups, although this asymmetry appears to decline from ages 12 to 18 months as the left and right hemisphere and frontal curves come closer together at age 18 months. EEG complexity changes with age, but not uniformly.
In the controls, the overall EEG complexity, shown by the solid black line in Figure 4, increases from ages 6 to 9 months, decreases slightly from ages 9 to 12 months and then increases again from ages 12 to 18 months. The left and right channels and the right frontal channels all follow this same pattern, though left and right hemisphere complexity is not symmetric. The left frontal channels follow a different pattern, increasing strongly until age 12 months and then declining. The complexity curves for the high-risk group follow a similar pattern, but the overall complexity is lower and the increases and decreases are much more exaggerated. Perhaps even more distinct is the left frontal curve in the high-risk group: it follows the same pattern as all other regions, unlike the left frontal curve in the controls.

Since the complexity changes seem to vary with EEG channel, a better picture of complexity development with age and between risk groups can be seen in a scalp plot. Figure 5 shows the mean mMSE value for all EEG channels by risk group and age. The complexity values here were computed by averaging the mMSE over all coarse-grained scales for that channel, as in Figure 2. Complexity variation with age and between risk groups is immediately apparent. One or two channels of the left frontal region appear to increase in complexity continuously with age in the controls, as does the right parieto-occipital region. The overall complexity in the high-risk group was lower than in the control group. Although the pattern of complexity change from ages 6 to 9 months appears similar in both groups, the high-risk group shows a marked decline in overall complexity from ages 9 to 12 months.

Figure 5. Mean modified multiscale entropy in each electroencephalography channel averaged over all infants at each age in (a) the typical control group or (b) the group of infants at high risk for autism.

Height, head circumference and exact age in days at the time of testing, as well as group means, standard deviations and significance levels, are included in Table 1. The only significant group difference among these variables was in head circumference at age 9 months: the infants in the HRA group had a larger mean head circumference than the typically developing controls.

[SUBTITLE] Machine learning classification of risk [SUBSECTION] Statistical averages can sometimes obscure meaningful information in complex and highly varying time series. The scalp plots shown in Figure 5 reveal differences between risk groups and ages, but may not use all the information available in the mMSE calculations. For example, the complete mMSE curves on 20 resolutions or scales are shown in Figures 6 and 7 for individual 9-month-old infants. Figure 6 is derived from an infant from the control group, and Figure 7 is derived from an infant from the high-risk group. Curves are grouped by brain region, with 64 curves in all. The purpose of these graphs is simply to illustrate that the shape of the mMSE curves can vary between channels and individuals in distinct ways and that these differences will not be seen in average values. We note that the entropy at small scales (corresponding to high frequencies) in the frontal region of the infant from the control group is especially high, while this feature is lacking in the infant from the high-risk group. Although differences between these two examples are apparent, it may be quite difficult to compare 64 mMSE curves for a large number of infants in each group and determine the differences.
To use all 64 × 20, or a total of 1,280, multiscale entropy values for each participant, a multiclass support vector machine (SVM) algorithm was used to perform supervised classification of the control and HRA groups.

Figure 6. Mean modified multiscale entropy curves for all 64 channels, grouped by brain region, for a single 9-month-old infant from the typical control group. Higher entropy at small (high-frequency) scales in the frontal region is one distinct difference in this control example compared with the high-risk example in Figure 7.

Figure 7. Analogous to Figure 6, but for a single 9-month-old infant from the high-risk group. Figures 6 and 7 illustrate that the shape of the modified multiscale entropy curve may contain information not seen when using averages alone, as in the previous scalp plots.

Using 10-fold cross-validation, infants were classified into either control or high-risk groups using three different learning algorithms as described previously. Since the complexity of all channels is changing rapidly from ages 6 to 24 months, classification was done within age groups rather than comparing the two groups using infants across the entire age spectrum. Machine classification calculations were done for boys and girls together at each age as well as separately. The results of these simulations are shown in Table 3. Classification by age and sex is shown with accuracy and significance estimates for three different machine learning algorithms: the k-nearest neighbors (k-NN), SVM and naïve Bayesian classification (Bayes) algorithms.

Table 3. Supervised learning classification using three different algorithms: k-nearest neighbors, support vector machine and naïve Bayes classification.ᵃ
ᵃTenfold cross-validation was run using the computed mean mMSE values on 64 channels for each infant within each age group. P values were estimated empirically using a permutation of class labels approach as described in the Methods section under 'Classification and endophenotypes'. Identical cross-validation calculations with 100 permutations were performed to determine empirical P values for three different populations: all infants, boys only and girls only. Too few 24-month-old boys were available for cross-validation. k-NN, k-nearest neighbors algorithm; SVM, support vector machine algorithm; Bayes, naïve Bayes classification algorithm. Boldface entries highlight values with statistical significance of P < 0.05.

The significance of classification accuracy was assessed empirically using the permutation strategy described by Golland and Fischl [44]. This approach is common for estimating the significance of learning algorithms when the number of features greatly exceeds the number of training examples. If the class labels are randomly permuted, a new classification accuracy can be computed using 10-fold cross-validation to serve as a baseline. For this study, 100 random permutations were run with 10-fold cross-validation for each machine classification calculation. The P value was determined by counting the number of random classifications for which the accuracy was equal to or higher than the accuracy for the true labels.

Using P = 0.05 as a significance cutoff value, the HRA and control groups can be classified at age 9 months for boys and girls together and for boys separately, with accuracies of nearly 80% and well over 90%, respectively.
For boys considered alone, the classification accuracy remained relatively high at ages 9, 12 and 18 months, though the result at age 12 months was not statistically significant. For girls, separation of the two groups was most accurate and significant at age 6 months, possibly indicating a sex difference in developmental trajectories. These results suggest that a familial endophenotype may be present at around age 9 months that enables HRA infants to be distinguished from low-risk controls. The differences seem to decline after 9 months of age, especially in girls, with some evidence that they may persist in boys until age 18 months (Table 3). Since approximately 60% of the HRA infants are expected not to be diagnosed with an ASD (20% will likely be diagnosed with another disorder, although not an ASD) [36], this is not surprising. Increasing heterogeneity with age in rates of development and behavioral characteristics of the high-risk group may be partly responsible for the drop in accuracy. Further study and subclassification with future data are needed to explore sex differences in brain development using entropy calculations.

To determine whether the significant group difference in mean head circumference was a predictor of individual class status, two additional calculations were done. First, head circumference was added as one more feature to the mMSE values and the prediction calculations were repeated. The predictive accuracy of the classifiers was unchanged from the results obtained with mMSE alone. Because this could simply mean that the mMSE values already reflected head size differences in some way, classification was then done with head circumference alone. Somewhat surprisingly, classification accuracy was not significant and nearly random. Examination of the group values suggests that the rather large individual variability within each group accounts for this finding. We conclude that head circumference does not contribute to classification accuracy at any of the ages tested.
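A sketch of how the within-age, three-classifier comparison behind Table 3 could be reproduced, again with scikit-learn standing in for Orange; flattening the full 64 × 20 mMSE array into 1,280 features follows the description above, while the specific classifier settings (k = 5 for k-NN, a linear SVM kernel, Gaussian naïve Bayes) are assumptions.

```python
import numpy as np
from sklearn.model_selection import permutation_test_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

CLASSIFIERS = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="linear"),
    "Bayes": GaussianNB(),
}

def classify_within_ages(curves_by_age, labels_by_age, n_perm=100, cv=10):
    """10-fold cross-validated accuracy and permutation P value per age group.

    curves_by_age[age] is an array of shape (n_infants, 64, 20); labels are
    0 = control and 1 = high risk. Each infant is represented by the
    flattened 1,280-value mMSE feature vector.
    """
    results = {}
    for age, curves in curves_by_age.items():
        X = np.asarray(curves).reshape(len(curves), -1)   # (n_infants, 1280)
        y = np.asarray(labels_by_age[age])
        for name, clf in CLASSIFIERS.items():
            acc, _, p = permutation_test_score(clf, X, y, cv=cv,
                                               n_permutations=n_perm)
            results[(age, name)] = (acc, p)
    return results
```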
To use all 64 × 20, or a total of 1,280, multiscale entropy values for each participant, a multiclass support vector machine (SVM) algorithm was used to perform supervised classification of the control and HRA groups.\nMean modified multiscale entropy curves for all 64 channels grouped by brain region for a single 9-month-old infant from the typical control group. Higher low spatial region (corresponding to high frequency) entropy in the frontal region is one distinct difference in the control example compared to the infants at high risk for autism example in Figure 7.\nThis figure is analogous to Figure 6, but for a single 9-month-old infant from the high risk group. Figures 6 and 7 illustrate that the shape of the modified multiscale entropy curve may contain information not seen when using averages alone as in previous scalp plots.\nUsing 10-fold cross-validation, infants were classified into either control or high-risk groups using three different learning algorithms as described previously. Since the complexity of all channels is changing rapidly from ages 6 to 24 months, classification within age groups was done rather than comparing the two groups using infants across the entire age spectrum. Machine classification calculations were done for boys and girls together at each age as well as separately. The results of these simulations are shown in Table 3. Classification by age and sex are shown with accuracy and significance estimates for three different machine learning algorithms: the k-nearest neighbors (k-NN), SVM and naïve Bayesian classification (Bayes) algorithms.\nSupervised learning classification using three different algorithms: k-nearest neighbors, support vector machine, and naïve Bayes classificationa\naTenfold cross-validation was run using the computed mean mMSE values on 64 channels for each infant within each age group. P values were estimated empirically using a permutation of class labels approach as described in the methods section under 'classification and endophenotypes. Identical cross-validation calculations with 100 permutations were performed to determine empirical P values with three different populations: all infants, boys only and girls only. Too few 24-month-old boys were available for cross-validation. k-NN, k-nearest neighbors algorithm; SVM, support vector machine algorithm; Bayes, naïve Bayes classification algorithm. Boldface entries highlight values with statistical significance of p < 0.05.\nThe significance of classification accuracy was assessed empirically using the permutation strategy described by Golland and Fischl [44]. This approach is common for estimating the significance of learning algorithms when the number of features greatly exceeds the number of training examples. If the class labels are randomly permutated, new classification accuracy can be computed using 10-fold cross-validation to serve as a baseline. For this study, 100 random permutations were run with 10-fold cross-validation for each machine classification calculation. The P value was determined by counting the number of random classifications for which the accuracy was equal to or higher than the accuracy for the true labels.\nUsing P = 0.05 as a significance cutoff value, the HRA and control groups can be classified at age 9 months for boys and girls together and for boys separately with accuracies of nearly 80% and well over 90%, respectively. 
For boys considered alone, the classification accuracy remained relatively high at ages 9, 12 and 18 months, though the result at age 12 months was not statistically significant. For girls, separation of the two groups was most accurate and significant at age 6 months, possibly indicating a sex difference in developmental trajectories. These results suggest that a familial endophenotype may be present at around age 9 months that enables HRA infants to be distinguished from low-risk controls. The differences seem to decline after 9 months of age, especially in girls, with some evidence that it may persist in boys until age 18 months (Table 3). Since approximately 60% of the HRA infants are expected not to be diagnosed with an ASD (20% will likely be diagnosed with another disorder, although not an ASD) [36], this is not surprising. Increasing heterogeneity with age regarding rates of development and behavioral characteristics of the high-risk group may be partly responsible for the drop in accuracy. Further study and subclassification with future data are needed to explore sex differences in brain development using entropy calculations.\nTo determine whether the significant group differences in mean head circumference were predictors of individual class status, two additional calculations were done. First, head circumference was added as one more feature to the mMSE values. The prediction calculations were repeated. The predictive accuracy of the classifiers was unchanged from the results obtained with mMSE alone. This might have been because the changed mMSE values were a direct reflection of head size differences in some way, so classification was done with head circumference alone. Somewhat surprisingly, classification accuracy was not significant and nearly random. When examining the group values, it appears that the rather large individual variability within each group accounts for this finding. We conclude that head circumference does not contribute to classification accuracy at any of the ages tested.", "Statistical averages can sometimes obscure meaningful information in complex and highly varying time series. The scalp plots shown in Figure 5 reveal differences between risk groups and ages, but may not use all the information available in the mMSE calculations. For example, the complete mMSE curves on 20 resolutions or scales are shown in Figures 6 and 7 for individual 9-month-old infants. Figure 6 is derived from an infant from the control group, and Figure 7 is derived from an infant from the high-risk group. Curves are grouped by brain region, with 64 curves in all. The purpose of these graphs is simply to illustrate that the shape of the mMSE curves can vary between channels and individuals in distinct ways and that these differences will not be seen in average values. We note that the low spatial scale entropy in the frontal region of the infant from the control group is especially high, while this feature is lacking in the infant from the high-risk group. Although differences between these two examples are apparent, it may be quite difficult to compare 64 mMSE curves for a large number of infants in each group and determine the differences. 
", "The primary goal of this study was to explore whether measures of EEG complexity might reveal functional endophenotypes of ASD and thus identify them as potential biomarkers for risk of ASD at very early ages before the onset of clear behavioral symptoms. Our findings show significant promise for the specific measure of multiscale entropy that was used to compare high- and low-risk infants between the ages of 6 and 24 months. Differences in mean mMSE over the entire scalp and especially in the left frontal region were significant at most ages measured, except at age 9 months. The trajectory of the curves between ages 6 and 12 months in Figure 4 appears to be as informative as information at any specific age. This result makes the relatively high accuracy at age 9 months of the machine classification using all of the mMSE curves as feature vectors particularly notable. This early period of life is one of important changes in brain function that are foundational for the emergence of higher-level social and communicative skills that are at the heart of the difficulties associated with ASD. A number of major cognitive milestones typically occur beginning at around age 9 months and perhaps earlier in girls.
These milestones include, for example, the development of the ability to perceive intentional actions by others [46], as well as loss of the ability to perceive speech sound distinctions in non-native languages [47] and loss of the ability to discriminate certain categories of faces [48]. These latter developments are especially significant because they reveal how socially grounded experiences influence changes in the neurocognitive mechanisms that underlie speech and face recognition processing. Thus, Marcus and Nelson [49] argued that infants mold their face-processing system on the basis of the visual experiences they encounter, just as their speech-processing skills are molded to their native language [50,51]. This model assumes a narrowing of the social-perceptual window through which language and faces are processed, which in turn results in an increase in cortical specialization. In a prospective study, Ozonoff et al. [34] found that social communicative behaviors in infants who later developed ASD declined dramatically between ages 6 and 18 months compared to typically developing infants.\nWe hypothesize that the following developmental sequence may explain the data in Table 3. At age 6 months, no significant behavioral differences have been noted in prospective studies between typically developing infants and those who develop autism [34,35]. Thus, few differences in electrophysiological data are expected at age 6 months, as shown in Figure 4 and Table 3. However, if girls are considered separately, differences in mMSE appear to be significant at age 6 months. If the multiscale entropy calculations from the EEG signals are indeed a biomarker for endophenotypes of autism familial traits, then by 9 months of age many infants in the high-risk group will display unique characteristics in their mMSE profiles that enable them to be distinguished from the controls. Those infants in the high-risk group who do not have multiple risk factors and later develop normally would not be expected to exhibit abnormalities in their mMSE profiles throughout the developmental period. These hypotheses might account for the HRA infants in our study who were classified similarly to our typical controls. This hypothesis will be tested when sufficient numbers of infants in the HRA group have reached 2 to 3 years of age and a diagnosis of ASD or typical development can be made.\nDevelopmental abnormalities from ages 6 to 12 months are particularly distinct in the two groups (low and high risk for ASD), allowing the groups to be classified quite accurately, although some overlap between the HRA and control groups should be expected at all ages. From 12 to 24 months of age, the distinction between the two groups declines. This likely reflects the trend for some fraction of high-risk infants to develop more typical cognitive and behavioral function, even though they may carry endophenotypes that share common complexity profiles at an earlier age with other high-risk infants who will later be diagnosed with ASD.\nRather than analyzing entropy at single age points, using a trajectory of entropy values from ages 6 to 24 months might be more informative. Although EEG complexity has been shown in several studies to increase with age [30,52,53], the increase is neither monotonic nor uniform across different brain regions. 
The abnormalities in brain development that lead to autistic characteristics may not be immediately apparent by inspecting relevant brain activity, even if the data contain diagnostically significant information. For example, a recent study of the relationship between cortical thickness and intelligence found no correlation between absolute cortical thickness at any particular age and intelligence. However, a specific pattern of developmental changes in cortical thickness was highly correlated with intelligence [54].\nOne of the characteristics of the high-risk group is heterogeneity: This group includes infants who will go on to develop an ASD and those who are within the normal range genetically, developmentally and behaviorally, as well as those in between who exhibit mild autism-like traits. Further study of this cohort as they grow and develop will enable this hypothesis to be tested. Rather than binary classification into typical controls and heterogeneous high-risk groups, classification on the basis of actual behavioral assessments will allow a more accurate test of the efficacy of using the mMSE to measure brain function.", "Abnormal brain connectivity, whether locally, regionally or both, may be a cause of a number of behavioral disorders, including ASD [9], and changes in local complexity are believed to be related to brain connectivity [55]. Local neural network connectivity undergoes rapid change during early development, and this may be reflected in the multiscale entropy of EEG signals, which is one measure of signal complexity that has been associated with health and disease [23]. A number of recent studies have demonstrated a link between brain connectivity and complexity, and EEG signal complexity may provide valuable information about the neural correlates of cognitive processes [56]. Early markers for neurological or mental disorders, particularly those with developmental etiologies, may be the growth trajectories of complexity as measured by multiscale entropy curves. The results described in this paper suggest that infants in families with a history of ASD have quite different EEG complexity patterns from 6 to 24 months of age that may be indicators of a functional endophenotype associated with ASD risk. Differences between mean mMSE averaged over all channels or in frontal regions in the two groups are significant at all ages except 9 months. Machine classification on the basis of mMSE curves in each channel as a feature set is able to determine group membership, particularly at 9 months of age. The classification accuracy decreases after age 12 months, possibly because of the influence of normal brain development and the development of normal characteristics in many of the high-risk infants. Classification accuracy for boys alone still appears to be significant and relatively high at age 18 months. More data about the future outcomes of the HRA infants and the computation of additional features, such as laterality of entropy, together with behavioral and cognitive assessments as the cohort of participants in this study grows, may enable the high-risk population to be subclassified more accurately. Future longitudinal analysis of data from this cohort will allow growth trajectories, as well as the future outcomes of the high-risk children, to be compared. 
Deeper understanding of the relationship between these neurophysiological processes and cognitive function may yield a new window to the mind and provide a clinically useful psychiatric biomarker using complexity analysis of EEG data.", "WJB is named on a provisional patent application submitted by the Children's Hospital Boston Technology Development Office that includes parts of the signal analysis methods discussed in this article. The authors declare that they have no other competing financial or nonfinancial interests.", "WJB conceived of the analytical methods used in this paper, wrote needed computer codes, performed calculations and statistical analysis and drafted the manuscript. AT carried out the initial processing of the raw data, participated in discussion of analysis results and contributed to drafting the methods section. CAN and HTF are co-Principal Investigators on the larger Infant Siblings Project study upon which this paper was based, contributed to the study design, interpretation of developmental implications of the results and were responsible for coordinating recruitment and testing of all patient data. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1741-7015/9/18/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Autoantibodies predate the onset of systemic lupus erythematosus in northern Sweden.
21342502
Autoantibodies have a central role in systemic lupus erythematosus (SLE). The presence of autoantibodies preceding disease onset by years has been reported both in patients with SLE and in those with rheumatoid arthritis, suggesting a gradual development of these diseases. Therefore, we sought to identify autoantibodies in a northern European population predating the onset of symptoms of SLE and their relationship to presenting symptoms.
INTRODUCTION
The register of patients fulfilling the American College of Rheumatology criteria for SLE and with a given date of the onset of symptoms was coanalysed with the register of the Medical Biobank, Umeå, Sweden. Thirty-eight patients were identified as having donated blood samples prior to symptom onset. A nested case-control study (1:4) was performed with 152 age- and sex-matched controls identified from within the Medical Biobank register (Umeå, Sweden). Antibodies against anti-Sjögren's syndrome antigen A (Ro/SSA; 52 and 60 kDa), anti-Sjögren's syndrome antigen B, anti-Smith antibody, ribonucleoprotein, scleroderma, anti-histidyl-tRNA synthetase antibody, double-stranded DNA (dsDNA), centromere protein B and histones were analysed using the AtheNA Multi-Lyte ANA II Plus Test System on a Bio-Plex Array Reader (Luminex200). Antinuclear antibodies test II (ANA II) results were analysed using indirect immunofluorescence on human epidermal 2 cells at a sample dilution of 1:100.
METHODS
Autoantibodies against nuclear antigens were detected a mean (±SD) of 5.6 ± 4.7 years before the onset of symptoms and 8.7 ± 5.6 years before diagnosis in 63% of the individuals who subsequently developed SLE. The sensitivity (45.7%) was highest for ANA II, with a specificity of 95%, followed by anti-dsDNA and anti-Ro/SSA antibodies, both with sensitivities of 20.0% at specificities of 98.7% and 97.4%, respectively. The odds ratios (ORs) for predicting disease were 18.13 for anti-dsDNA (95% confidence interval (95% CI), 3.58 to 91.84) and 11.5 (95% CI, 4.54 to 28.87) for ANA. Anti-Ro/SSA antibodies appeared first at a mean of 6.6 ± 2.5 years prior to symptom onset. The mean number of autoantibodies in prediseased individuals was 1.4, and after disease onset it was 3.1 (P < 0.0005). The time predating disease was shorter and the number of autoantibodies was greater in those individuals with serositis as a presenting symptom in comparison to those with arthritis and skin manifestations as the presenting symptoms.
RESULTS
Autoantibodies against nuclear antigens were detected in individuals who developed SLE several years before the onset of symptoms and diagnosis. The most sensitive autoantibodies were ANA, Ro/SSA and dsDNA, with the highest predictive OR being for anti-dsDNA antibodies. The first autoantibodies detected were anti-Ro/SSA.
CONCLUSIONS
[ "Adolescent", "Adult", "Antibodies, Antinuclear", "Autoantigens", "Case-Control Studies", "Early Diagnosis", "Female", "Humans", "Lupus Erythematosus, Systemic", "Male", "Middle Aged", "Sensitivity and Specificity", "Sweden", "Young Adult" ]
3241374
null
null
null
null
Results
[SUBTITLE] Analyses in presymptomatic individuals and controls [SUBSECTION] Of the 35 presymptomatic individuals whose blood samples were available, 22 (63%) had at least one detectable autoantibody in their blood before the onset of symptoms, that is, predating disease by a median of 4.2 years (range, 2.1 to 7.9 years). Ten of these patients expressed one autoantibody, whilst 12 others had two or more autoantibodies (range, from two to seven). The sensitivity was highest for ANAs at 45.7% with a specificity of 95%, followed by anti-dsDNA and anti-Ro/SSA antibodies, both with a sensitivity of 20% but with specificities of 98.7% and 97.4%, respectively (Table 2). The sensitivities for the other autoantibodies were between 14.3% and 2.9% at 98% to 100% specificity levels (Table 2). The odds ratios (ORs) for predicting the development of SLE were highest for anti-dsDNA at 18.13 (95% confidence interval (95% CI), 3.58 to 91.84), followed by ANAs at 11.5 (95% CI, 4.54 to 28.87) and anti-Ro/SSA antibodies at 8.94 (95% CI, 2.45 to 32.58). The ORs for the other antibodies were between 9.36 and 4.29, although the number of positive individuals was low, that is, up to five. The likelihood ratio (LR) was highest for anti-dsDNA antibodies at 15.38, followed by ANAs with an LR of 9.14. Sensitivity and specificity of autoantibodies before onset of disease symptoms in individuals who later developed SLEa a95% CI, 95% confidence interval; bP values were determined by using χ2 test or Fisher's exact test as appropriate. * = p < 0.05, **= p < 0.01, ***= p < 0.001 ANA, antinuclear antibody; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; LR, positive likelihood value; ns, not significant; OR, odds ratio; RNP, ribonucleoprotein; Ro/SSA, anti-Sjögren's syndrome antigen A; Scl-70, scleroderma 70; Sm, Smith. The autoantibody type to appear first before the onset of symptoms was anti-Ro/SSA antibody at a mean (±SD) of 6.6 ± 2.5 years. Anti-RNP and antihistones also appeared early at means (±SD) of 5.9 ± 2.5 years and 5.0 ± 1.5 years, respectively, although the number of positive individuals with each antibody was small, that is, four and five, respectively. The autoantibodies first detectable closest to disease onset were anti-centromere protein B at 0.2 years, anti-Sm at 0.7 years and anti-Scl-70 at a mean (±SD) of 1.4 ± 0.6 years (Table 3). The number of individuals expressing autoantibodies increased the closer they got to the onset of symptoms, that is, 12 (63%) of 19 individuals had autoantibodies present <5 years before disease onset compared with 8 (50%) of 16 individuals who had autoantibodies present ≥5 years before disease onset. The number of autoantibodies per individual also increased the closer the individual got to the onset of symptoms, particularly during the last 3 years before disease onset; however, this change did not reach statistical significance. The accumulated number of individuals who were positive for each antibody before any symptoms of disease and after disease onset is illustrated in Figure 1. In the maternity cohort, 37.5% had autoantibodies predating disease, compared with 94% in females and 100% in males from the Medical Biobank cohort.
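As a worked illustration of the quantities reported in Table 2, the sketch below derives sensitivity, specificity, the positive likelihood ratio and a crude odds ratio with a Woolf (logit) 95% confidence interval from a 2 × 2 table of autoantibody positivity in prepatients versus controls. The counts are placeholders chosen only for illustration, and the study itself estimated ORs with conditional logistic regression on the matched sets, so a crude 2 × 2 OR is only an approximation.

```python
# Illustrative only (placeholder counts, not the study's data): diagnostic
# metrics of the kind reported in Table 2 from a single 2x2 table.
import math

def two_by_two_metrics(tp, fn, fp, tn):
    """tp/fn: antibody-positive/-negative prepatients; fp/tn: positive/negative controls."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (fp + tn)
    positive_lr = sensitivity / (1.0 - specificity)
    odds_ratio = (tp * tn) / (fn * fp)
    # Woolf (logit) 95% confidence interval for the crude odds ratio
    se = math.sqrt(1 / tp + 1 / fn + 1 / fp + 1 / tn)
    ci = (math.exp(math.log(odds_ratio) - 1.96 * se),
          math.exp(math.log(odds_ratio) + 1.96 * se))
    return sensitivity, specificity, positive_lr, odds_ratio, ci

if __name__ == "__main__":
    # Hypothetical example: 7 of 35 prepatients and 2 of 150 controls positive
    sens, spec, lr, oratio, ci = two_by_two_metrics(tp=7, fn=28, fp=2, tn=148)
    print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}, LR+ = {lr:.1f}, "
          f"OR = {oratio:.1f} (95% CI {ci[0]:.1f} to {ci[1]:.1f})")
```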
Duration in years of the various antibodies preceding the onset of symptoms and diseasea ANA, antinuclear antibody; aRo/SSA, anti-Sjögren's syndrome antigen A; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; RNP, ribonucleoprotein; Scl-70, scleroderma 70; SD, standard deviation; Sm, Smith. Graph showing the accumulated number of positive individuals for each antibody. Shown as the percentage predating disease onset in years and after diagnosis of the disease. ANA, antinuclear antibody; SSA, Sjögren's syndrome antigen A; SSB, Sjögren's syndrome antigen B; dsDNA, double-stranded DNA; RNP, ribonucleoprotein; histon, histone. The number of positive autoantibodies increased with age at the time of blood sampling (P = 0.001, Pc < 0.01). Those individuals who had autoantibodies predating disease onset were older both at the time of blood sampling and at the onset of symptoms (42.8 versus 28.3 years and 49.3 versus 36.0 years; P = 0.002, Pc < 0.05, and P = 0.005, Pc < 0.05, respectively). The interval between blood sampling and the onset of clinical symptoms was shorter than it was for those who had no autoantibodies in their presymptom sample; however, this finding was not statistically significantly different (mean 5.2 years versus 6.3 years before symptom onset).
[SUBTITLE] Analyses in presymptomatic individuals and at diagnosis of SLE [SUBSECTION] The mean number of autoantibodies present in predisease individuals was 1.4 and increased after disease onset to 3.1 (P < 0.0005). In the autoantibody positive presymptomatic individuals (n = 22), the mean number of autoantibodies was 2.2 before and 3.3 after a diagnosis of SLE (P < 0.016, Pc < 0.1), whilst among the antibody-negative prepatients (n = 13), the mean number of autoantibodies after diagnosis was 2.8 (P < 0.002, Pc < 0.05). The autoantibodies present in relation to symptoms at the onset of disease are presented in Table 4. The patients with serositis (n = 6; four females and two of three males) at the onset of symptoms had higher frequencies of autoantibodies than did those with arthritis (n = 20; one of three males) and skin manifestations (n = 11; one male), with the mean number of autoantibodies among these patients being 2.5, 1.7 and 0.9, respectively. However, the time interval predating disease was shorter for those with primary symptoms such as serositis (median, 1.9 years) in comparison with those with arthritis (6.7 years) and skin manifestations (4.2 years). In one individual, the symptom preceding the onset of disease was nephritis without any autoantibodies detectable when analysed 3.7 years before disease onset, although at onset the patient was ANA- and anti-dsDNA-antibody-positive. There was no association between smoking and autoantibody formation in either the number of autoantibody-positive individuals or the number of autoantibodies present. Autoantibodies predating onset of SLE and presenting symptoms at disease onseta aANA, antinuclear antibody; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; RNP, ribonucleoprotein; Ro/SSA, anti-Sjögren's syndrome antigen A; Scl-70, scleroderma 70; Sm, Smith. In samples analysed after disease onset but during development of the disease, the concentrations of six of the autoantibodies that were positive in presymptomatic patients, namely, the autoantibodies anti-Jo-1 (n = 3), anti-Scl-70 (n = 2), anti-RNP (n = 2), antihistone (n = 2), anti-Ro/SSA (n = 1) and anti-centromere protein B (n = 1), decreased to below the cutoff values on the basis of either the multiplex detection kit or routine laboratory protocols.
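A small sketch of the paired before/after comparison reported in this subsection (mean number of autoantibodies per individual before symptom onset versus after diagnosis), using the Wilcoxon signed-rank test for matched pairs named in the Statistics section. The counts generated below are placeholders, not the study's data.

```python
# Placeholder data; illustrates the paired Wilcoxon signed-rank comparison of
# autoantibody counts per individual before symptom onset vs. after diagnosis.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_before = rng.poisson(1.4, size=35)             # hypothetical presymptomatic counts
n_after = n_before + rng.poisson(1.7, size=35)   # hypothetical counts after diagnosis

stat, p_value = wilcoxon(n_after, n_before)      # two-sided, matched pairs
print(f"mean before = {n_before.mean():.1f}, mean after = {n_after.mean():.1f}, "
      f"Wilcoxon statistic = {stat:.1f}, P = {p_value:.4g}")
```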
Conclusions
On the basis of this study, we conclude that autoantibodies against nuclear antigens can be detected several years before the onset of symptoms and SLE diagnosis in individuals who subsequently develop SLE. The highest sensitivities were for ANA, Ro/SSA and dsDNA, and anti-dsDNA antibodies had the highest predictive value for SLE. Antibodies against Ro/SSA were the first autoantibodies detected. Individuals who had serositis as the first symptom had more autoantibodies and a shorter time interval between the positive blood sample and disease onset than individuals with other presenting symptoms, suggesting that a more serious disease manifestation at the beginning of the disease is associated with faster disease development and more pronounced epitope spreading.
[ "Introduction", "Patients and controls", "Analysis of autoantibodies", "Statistics", "Analyses in presymptomatic individuals and controls", "Analyses in presymptomatic individuals and at diagnosis of SLE", "Abbreviations", "Competing interests", "Authors' contributions" ]
[ "Systemic lupus erythematosus (SLE) is a heterogeneous disease with diverse clinical manifestations and variable severity in individual patients and between different patient populations [1,2]. A typical pathophysiological sign in SLE patients is the production of autoantibodies directed against nuclear antigens, which precede the development of clinical manifestations [3,4]. In particular, antibodies against double-stranded DNA (anti-dsDNA) have been shown to increase just prior to a diagnosis of SLE [5]. Individuals who develop SLE have also been found to gradually fulfill the clinical classification criteria that are preceded by the appearance of associated autoantibodies before diagnosis [6]. Furthermore, in patients defined as having undifferentiated connective tissue disease, a diagnosis of SLE was predicted in a 5-year follow-up study on the basis of the presence of anti-dsDNA antibodies [7].\nThere are several autoimmune diseases that are recognised by exhibiting a long preclinical phase during which susceptible individuals who later develop disease can be identified by the presence of autoantibodies [8-11]. The development of a rheumatic disease in asymptomatic mothers expressing anti-Sjögren's syndrome antigen A (Ro/SSA) and/or anti-Sjögren's syndrome antigen B (La/SSB) antibodies, and identified by the birth of a child with a congenital heart block, was found to be relatively common at 48% [12]. In another study, the detection of anti-La/SSB antibodies predated clinical evidence of Sjögren's syndrome by months and in some cases by years [13]. Furthermore, in an animal model of SLE, mice immunized with human Ro/SSA developed autoimmunity not only towards this molecule but also towards other immunologically similar molecules in a process equivalent to epitope spreading [14].\nThe presence of antinuclear antibodies (ANAs) was shown to predate the development of SLE in a small study conducted in Finland [15]. In the study by Arbuckle et al. [3], the frequency of producing at least one SLE-related autoantibody years before diagnosis was high at 88%. ANAs were present in 78% of the cases, anti-dsDNA antibodies were present in 55% and anti-Ro/SSA antibodies were present in 47%. Furthermore, the appearance of these antibodies appeared to follow a predictable course [3]. Anticardiolipin antibodies have been found to precede both the diagnosis of SLE and the development of clinical manifestations of thrombosis by a number of years [16].\nThe aim of this study was to analyse, using multiplex technology, the autoantibodies predating the onset of symptoms of SLE in individuals in a patient population in northern Europe and to relate these autoantibodies to the first recorded symptom of disease.", "The register of patients with SLE attending the Department of Rheumatology, University Hospital, Umeå, Sweden, with a known date of the onset of symptoms was coanalysed with the registers of the Medical Biobank (Umeå, Sweden) and of the maternity cohort (that is, the record of samples obtained for rubella screening of pregnant women) from northern Sweden. All SLE patients had been evaluated clinically. A total of 38 patients (3 male and 35 female, of whom 37 fulfilled four and one fulfilled only three of the American College of Rheumatology (ACR) criteria for SLE [17,18]) were identified as having donated blood before the onset of any symptoms of disease. One of the patients also fulfilled the criteria for mixed connective tissue disease [19]. 
Nineteen of the patients were identified from the Medical Biobank (on the basis of plasma withdrawal), and the other 19 were identified from among the maternity cohort collection (on the basis of sera withdrawal). All individuals in the county of Västerbotten are continuously invited to donate blood samples to the Medical Biobank, the plasma from which is stored at -80°C in a biorepository, and blood samples are drawn from all pregnant women with the sera stored at -20°C. Full details of the conditions for recruitment and the collection and storage of blood samples have been described previously [10].\nA nested 1:4 case-control study was undertaken with the 38 identified individuals, referred to hereinafter as \"presymptomatic\" individuals, and randomly selected controls (n = 152) from the same population-based cohorts matched for sex, age and date of blood sampling as well as area of residence. The mean age at the time blood sampling of the individuals who subsequently developed SLE was 36.9 years (age range, 16.8 to 60.2 years) and that of the matched controls was 36.7 years (age range, 17.8 to 62.3 years). The patients' ages at the time of sampling, the time predating the onset of symptoms and diagnosis and the time after sampling until the onset of symptoms are presented in Table 1 for both the Medical Biobank (stratified by sex) and maternity cohorts.\nAge at sampling, at onset of symptoms and disease onset and predating time presented as median values (Q1, Q3)\nSamples from three prepatients and six controls, all from the maternity cohort, were no longer available; that is, insufficient sera were available for analysis. The frequencies of nonsmokers, ex-smokers and current smokers among the presymptomatic patients were 47.2%, 26.3% and 26.3%, respectively. The equivalent data are not available for the controls.\nThe study was approved by the Regional Ethics Committee of the University Hospital in Umeå, and all participants gave their written informed consent.", "Autoantibodies against Ro/SSA (52 and 60 kDa), La/SSB, dsDNA, ribonucleoprotein (RNP), Smith (Sm), histidyl-tRNA synthetase (Jo-1), scleroderma (Scl-70), centromere protein B and histones in plasma from 19 presymptomatic individuals and matched controls (n = 76) (Medical Biobank), in sera from 16 presymptomatic individuals and matched controls (n = 76) (maternity cohort) and in sera from SLE patients (n = 38) were collected during the disease. All autoantibodies were detected using the multiplex AtheNA Multi-Lyte ANA II Plus Test (Zeuss Scientific, Raritan, NJ, USA) and analysed on a Bio-Plex Array Reader (Luminex200 Labmap™ system; Luminex Corp., Austin, TX, USA). The cutoff level for a positive value for each autoantibody recommended by the manufacturer was used, that is, 120 AU/ml for all analytes. Analyses of ANAs were performed by indirect immunofluorescence on human epidermal cell 2 (HEp-2 cells) slides (Immunoconcept, Sacramento, CA, USA) using 1:100 diluted samples. Analyses of the autoantibodies (ANAs, anti-dsDNA, anti-Ro/SSA, anti-La/SSB, anti-Sm, anti-RNP, anti-Jo-1, anti-Scl-70, anti-centromere protein B and antihistone) in the sera of the patients at diagnosis were also undertaken by the routine clinical immunology laboratory at the University Hospital. 
ANAs were analysed by indirect immunofluorescence with HEp-2 cells or rat tissue (in house), anti-dsDNA was analysed on Crithidia luciliae-coated slides (Immunoconcept) and the other autoantibodies were analysed either by enzyme-linked immunosorbent assay or by immunoblot assay.", "Statistical calculations were performed using SPSS for Windows version 17.0 software (SPSS, Inc., Chicago, IL, USA). Continuous data were compared by nonparametric analyses with the Wilcoxon signed-rank test for matched pairs (prepatients versus SLE patients) and conditional logistic regression analyses (prepatients versus controls). The relationships between categorical data (positive versus negative) were compared using χ2 analysis or Fisher's exact test as appropriate. All P values are two-sided, and P ≤ 0.05 was considered statistical significant. P values corrected for the number of comparisons made outside the hypothesis are presented as P corrected (Pc).", "Of the 35 presymptomatic individuals whose blood samples were available, 22 (63%) had any detectable autoantibodies in their blood before the onset of symptoms, that is, predating disease by a median of 4.2 years (range, 2.1 to 7.9 years). Ten of these patients expressed one autoantibody, whilst 12 others had two or more autoantibodies (range, from two to seven). The sensitivity was highest for ANAs at 45.7% with a specificity of 95%, followed by anti-dsDNA and anti-Ro/SSA antibodies, both with a sensitivity of 20% but with specificities of 98.7% and 97.4%, respectively (Table 2). The sensitivities for the other autoantibodies were between 14.3% and 2.9% at 98% to 100% specificity levels (Table 2). The odds ratio (ORs) for predicting the development of SLE were highest for anti-dsDNA at 18.13 (95% confidence interval (95% CI), 3.58 to 91.84), followed by ANAs at 11.5 (95% CI, 4.54 to 28.87) and anti-Ro/SSA antibodies at 8.94 (95% CI, 2.45 to 32.58). The ORs for the other antibodies were between 9.36 and 4.29, although the number of positive individuals was low, that is, up to five. The likelihood ratio (LR) was highest for anti-dsDNA antibodies at 15.38, followed by ANAs with a LR of 9.14.\nSensitivity and specificity of autoantibodies before onset of disease symptoms in individuals who later developed SLEa\na95% CI, 95% confidence interval; bP values were determined by using χ2 test or Fisher's exact test as appropriate. . * = p < 0.05, **= p < 0.01, ***= p < 0.001\nANA, antinuclear antibody; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; LR, positive likelihood value; ns, not significant; OR, odds ratio; RNP, ribonucleoprotein; Ro/SSA, anti-Sjögren's syndrome antigen A; Scl-70, scleroderma 70; Sm, Smith.\nThe autoantibody type to appear first before the onset of symptoms was anti-Ro/SSA antibody at a mean (±SD) of 6.6 ± 2.5 years. Anti-RNP and antihistones also appeared early at means (±SD) of 5.9 ± 2.5 years and 5.0 ± 1.5 years, respectively, although the number of positive individuals with each antibody was small, that is, four and five, respectively. The autoantibodies first detectable closest to disease onset were anti-centromere protein B at 0.2 years, anti-Sm at 0.7 years and anti-Scl-70 at a mean (±SD) of 1.4 ± 0.6 years (Table 3). 
The number of individuals expressing autoantibodies increased the closer they got to the onset of symptoms, that is, 12 (63%) of 19 individuals had autoantibodies present <5 years before disease onset compared with 8 (50%) of 16 individuals who had autoantibodies present ≥5 years before disease onset. The number of autoantibodies per individual also increased the closer the individual got to the onset of symptoms, particularly during the last 3 years before disease onset; however, this change did not reach statistical significance. The accumulated number of individuals who were positive for each antibody before any symptoms of disease and after disease onset is illustrated in Figure 1. In the maternity cohort, 37.5% had autoantibodies predating disease, compared with 94% in females and 100% in males from the Medical Biobank cohort.\nDuration in years of the various antibodies preceding the onset of symptoms and diseasea\nANA, antinuclear antibody; aRo/SSA, anti-Sjögren's syndrome antigen A; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; RNP, ribonucleoprotein; Scl-70, scleroderma 70; SD, standard deviation; Sm, Smith.\nGraph showing the accumulated number of positive individuals for each antibody. Shown as the percentage predating disease onset in years and after diagnosis of the disease. ANA, antinuclear antibody; SSA, Sjögren's syndrome antigen A; SSB, Sjögren's syndrome antigen B; dsDNA, double-stranded DNA; RNP, ribonucleoprotein; histon, histone.\nThe number of positive autoantibodies increased with age at the time of blood sampling (P = 0.001, Pc < 0.01). Those individuals who had autoantibodies predating disease onset were older both at the time of blood sampling and at the onset of symptoms (42.8 versus 28.3 years and 49.3 versus 36.0 years; P = 0.002, Pc < 0.05, and P = 0.005, Pc < 0.05, respectively). The interval between blood sampling and the onset of clinical symptoms was shorter than it was for those who had no autoantibodies in their presymptom sample; however, this finding was not statistically significantly different (mean 5.2 years versus 6.3 years before symptom onset).", "The mean number of autoantibodies present in predisease individuals was 1.4 and increased after disease onset to 3.1 (P < 0.0005). In the autoantibody positive presymptomatic individuals (n = 22), the mean number of autoantibodies was 2.2 before and 3.3 after a diagnosis of SLE (P < 0.016, Pc < 0.1), whilst among the antibody-negative prepatients (n = 13), the mean number of autoantibodies after diagnosis was 2.8 (P < 0.002, Pc < 0.05).\nThe autoantibodies present in relation to symptoms at the onset of disease are presented in Table 4. The patients with serositis (n = 6; four females and two of three males) at the onset of symptoms had higher frequencies of autoantibodies than did those with arthritis (n = 20; one of three males) and skin manifestations (n = 11; one male), with the mean number of autoantibodies among these patients being 2.5, 1.7 and 0.9, respectively. However, the time interval predating disease was shorter for those with primary symptoms such as serositis (median, 1.9 years) in comparison with those with arthritis (6.7 years) and skin manifestations (4.2 years). In one individual, the symptom preceding the onset of disease was nephritis without any autoantibodies detectable when analysed 3.7 years before disease onset, although at onset the patient was ANA- and anti-dsDNA-antibody-positive. 
There was no association between smoking and autoantibody formation in either the number of autoantibody-positive individuals or the number of autoantibodies present.\nAutoantibodies predating onset of SLE and presenting symptoms at disease onseta\naANA, antinuclear antibody; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; RNP, ribonucleoprotein; Ro/SSA, anti-Sjögren's syndrome antigen A; Scl-70, scleroderma 70; Sm, Smith.\nIn samples analysed after disease onset but during development of the disease, the concentrations of six of the autoantibodies that were positive in presymptomatic patients, namely, the autoantibodies anti-Jo-1 (n = 3), anti-Scl-70 (n = 2), anti-RNP (n = 2), antihistone (n = 2), anti-Ro/SSA (n = 1) and anti-centromere protein B (n = 1), decreased to below the cutoff values on the basis of either the multiplex detection kit or routine laboratory protocols.", "ACR: American College of Rheumatology; ANA II: antinuclear antibody test II; anti-Sm: anti-Smith antibody; dsDNA: double-stranded DNA; HEp-2: human epidermal cell 2; Jo-1: anti-histidyl-tRNA synthetase antibody; La/SSB: anti-Sjögren's syndrome antigen B; LR: likelihood ratio; OR: odds ratio; RNP: ribonucleoprotein; Ro/SSA: anti-Sjögren's syndrome antigen A; Scl-70: scleroderma 70; SLE: systemic lupus erythematosus.", "The authors declare that they have no competing interests.", "CE analysed and interpreted the data and was involved in drafting the manuscript. HK analysed and interpreted the data and was to some extent involved in drafting the manuscript. MJ contributed to the study design and analysed and interpreted the data. GH and GW contributed to the design of the study and were involved with the supply of the blood samples. SRD designed the study, analysed and interpreted the data and was involved in drafting the manuscript. All authors have given their final approval of the version of the manuscript to be published." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Patients and controls", "Analysis of autoantibodies", "Statistics", "Results", "Analyses in presymptomatic individuals and controls", "Analyses in presymptomatic individuals and at diagnosis of SLE", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors' contributions" ]
[ "Systemic lupus erythematosus (SLE) is a heterogeneous disease with diverse clinical manifestations and variable severity in individual patients and between different patient populations [1,2]. A typical pathophysiological sign in SLE patients is the production of autoantibodies directed against nuclear antigens, which precede the development of clinical manifestations [3,4]. In particular, antibodies against double-stranded DNA (anti-dsDNA) have been shown to increase just prior to a diagnosis of SLE [5]. Individuals who develop SLE have also been found to gradually fulfill the clinical classification criteria that are preceded by the appearance of associated autoantibodies before diagnosis [6]. Furthermore, in patients defined as having undifferentiated connective tissue disease, a diagnosis of SLE was predicted in a 5-year follow-up study on the basis of the presence of anti-dsDNA antibodies [7].\nThere are several autoimmune diseases that are recognised by exhibiting a long preclinical phase during which susceptible individuals who later develop disease can be identified by the presence of autoantibodies [8-11]. The development of a rheumatic disease in asymptomatic mothers expressing anti-Sjögren's syndrome antigen A (Ro/SSA) and/or anti-Sjögren's syndrome antigen B (La/SSB) antibodies, and identified by the birth of a child with a congenital heart block, was found to be relatively common at 48% [12]. In another study, the detection of anti-La/SSB antibodies predated clinical evidence of Sjögren's syndrome by months and in some cases by years [13]. Furthermore, in an animal model of SLE, mice immunized with human Ro/SSA developed autoimmunity not only towards this molecule but also towards other immunologically similar molecules in a process equivalent to epitope spreading [14].\nThe presence of antinuclear antibodies (ANAs) was shown to predate the development of SLE in a small study conducted in Finland [15]. In the study by Arbuckle et al. [3], the frequency of producing at least one SLE-related autoantibody years before diagnosis was high at 88%. ANAs were present in 78% of the cases, anti-dsDNA antibodies were present in 55% and anti-Ro/SSA antibodies were present in 47%. Furthermore, the appearance of these antibodies appeared to follow a predictable course [3]. Anticardiolipin antibodies have been found to precede both the diagnosis of SLE and the development of clinical manifestations of thrombosis by a number of years [16].\nThe aim of this study was to analyse, using multiplex technology, the autoantibodies predating the onset of symptoms of SLE in individuals in a patient population in northern Europe and to relate these autoantibodies to the first recorded symptom of disease.", "[SUBTITLE] Patients and controls [SUBSECTION] The register of patients with SLE attending the Department of Rheumatology, University Hospital, Umeå, Sweden, with a known date of the onset of symptoms was coanalysed with the registers of the Medical Biobank (Umeå, Sweden) and of the maternity cohort (that is, the record of samples obtained for rubella screening of pregnant women) from northern Sweden. All SLE patients had been evaluated clinically. A total of 38 patients (3 male and 35 female, of whom 37 fulfilled four and one fulfilled only three of the American College of Rheumatology (ACR) criteria for SLE [17,18]) were identified as having donated blood before the onset of any symptoms of disease. One of the patients also fulfilled the criteria for mixed connective tissue disease [19]. 
Nineteen of the patients were identified from the Medical Biobank (on the basis of plasma withdrawal), and the other 19 were identified from among the maternity cohort collection (on the basis of sera withdrawal). All individuals in the county of Västerbotten are continuously invited to donate blood samples to the Medical Biobank, the plasma from which is stored at -80°C in a biorepository, and blood samples are drawn from all pregnant women with the sera stored at -20°C. Full details of the conditions for recruitment and the collection and storage of blood samples have been described previously [10].\nA nested 1:4 case-control study was undertaken with the 38 identified individuals, referred to hereinafter as \"presymptomatic\" individuals, and randomly selected controls (n = 152) from the same population-based cohorts matched for sex, age and date of blood sampling as well as area of residence. The mean age at the time blood sampling of the individuals who subsequently developed SLE was 36.9 years (age range, 16.8 to 60.2 years) and that of the matched controls was 36.7 years (age range, 17.8 to 62.3 years). The patients' ages at the time of sampling, the time predating the onset of symptoms and diagnosis and the time after sampling until the onset of symptoms are presented in Table 1 for both the Medical Biobank (stratified by sex) and maternity cohorts.\nAge at sampling, at onset of symptoms and disease onset and predating time presented as median values (Q1, Q3)\nSamples from three prepatients and six controls, all from the maternity cohort, were no longer available; that is, insufficient sera were available for analysis. The frequencies of nonsmokers, ex-smokers and current smokers among the presymptomatic patients were 47.2%, 26.3% and 26.3%, respectively. The equivalent data are not available for the controls.\nThe study was approved by the Regional Ethics Committee of the University Hospital in Umeå, and all participants gave their written informed consent.\nThe register of patients with SLE attending the Department of Rheumatology, University Hospital, Umeå, Sweden, with a known date of the onset of symptoms was coanalysed with the registers of the Medical Biobank (Umeå, Sweden) and of the maternity cohort (that is, the record of samples obtained for rubella screening of pregnant women) from northern Sweden. All SLE patients had been evaluated clinically. A total of 38 patients (3 male and 35 female, of whom 37 fulfilled four and one fulfilled only three of the American College of Rheumatology (ACR) criteria for SLE [17,18]) were identified as having donated blood before the onset of any symptoms of disease. One of the patients also fulfilled the criteria for mixed connective tissue disease [19]. Nineteen of the patients were identified from the Medical Biobank (on the basis of plasma withdrawal), and the other 19 were identified from among the maternity cohort collection (on the basis of sera withdrawal). All individuals in the county of Västerbotten are continuously invited to donate blood samples to the Medical Biobank, the plasma from which is stored at -80°C in a biorepository, and blood samples are drawn from all pregnant women with the sera stored at -20°C. 
Full details of the conditions for recruitment and the collection and storage of blood samples have been described previously [10].\nA nested 1:4 case-control study was undertaken with the 38 identified individuals, referred to hereinafter as \"presymptomatic\" individuals, and randomly selected controls (n = 152) from the same population-based cohorts matched for sex, age and date of blood sampling as well as area of residence. The mean age at the time blood sampling of the individuals who subsequently developed SLE was 36.9 years (age range, 16.8 to 60.2 years) and that of the matched controls was 36.7 years (age range, 17.8 to 62.3 years). The patients' ages at the time of sampling, the time predating the onset of symptoms and diagnosis and the time after sampling until the onset of symptoms are presented in Table 1 for both the Medical Biobank (stratified by sex) and maternity cohorts.\nAge at sampling, at onset of symptoms and disease onset and predating time presented as median values (Q1, Q3)\nSamples from three prepatients and six controls, all from the maternity cohort, were no longer available; that is, insufficient sera were available for analysis. The frequencies of nonsmokers, ex-smokers and current smokers among the presymptomatic patients were 47.2%, 26.3% and 26.3%, respectively. The equivalent data are not available for the controls.\nThe study was approved by the Regional Ethics Committee of the University Hospital in Umeå, and all participants gave their written informed consent.\n[SUBTITLE] Analysis of autoantibodies [SUBSECTION] Autoantibodies against Ro/SSA (52 and 60 kDa), La/SSB, dsDNA, ribonucleoprotein (RNP), Smith (Sm), histidyl-tRNA synthetase (Jo-1), scleroderma (Scl-70), centromere protein B and histones in plasma from 19 presymptomatic individuals and matched controls (n = 76) (Medical Biobank), in sera from 16 presymptomatic individuals and matched controls (n = 76) (maternity cohort) and in sera from SLE patients (n = 38) were collected during the disease. All autoantibodies were detected using the multiplex AtheNA Multi-Lyte ANA II Plus Test (Zeuss Scientific, Raritan, NJ, USA) and analysed on a Bio-Plex Array Reader (Luminex200 Labmap™ system; Luminex Corp., Austin, TX, USA). The cutoff level for a positive value for each autoantibody recommended by the manufacturer was used, that is, 120 AU/ml for all analytes. Analyses of ANAs were performed by indirect immunofluorescence on human epidermal cell 2 (HEp-2 cells) slides (Immunoconcept, Sacramento, CA, USA) using 1:100 diluted samples. Analyses of the autoantibodies (ANAs, anti-dsDNA, anti-Ro/SSA, anti-La/SSB, anti-Sm, anti-RNP, anti-Jo-1, anti-Scl-70, anti-centromere protein B and antihistone) in the sera of the patients at diagnosis were also undertaken by the routine clinical immunology laboratory at the University Hospital. 
ANAs were analysed by indirect immunofluorescence with HEp-2 cells or rat tissue (in house), anti-dsDNA was analysed on Crithidia luciliae-coated slides (Immunoconcept) and the other autoantibodies were analysed either by enzyme-linked immunosorbent assay or by immunoblot assay.\nAutoantibodies against Ro/SSA (52 and 60 kDa), La/SSB, dsDNA, ribonucleoprotein (RNP), Smith (Sm), histidyl-tRNA synthetase (Jo-1), scleroderma (Scl-70), centromere protein B and histones in plasma from 19 presymptomatic individuals and matched controls (n = 76) (Medical Biobank), in sera from 16 presymptomatic individuals and matched controls (n = 76) (maternity cohort) and in sera from SLE patients (n = 38) were collected during the disease. All autoantibodies were detected using the multiplex AtheNA Multi-Lyte ANA II Plus Test (Zeuss Scientific, Raritan, NJ, USA) and analysed on a Bio-Plex Array Reader (Luminex200 Labmap™ system; Luminex Corp., Austin, TX, USA). The cutoff level for a positive value for each autoantibody recommended by the manufacturer was used, that is, 120 AU/ml for all analytes. Analyses of ANAs were performed by indirect immunofluorescence on human epidermal cell 2 (HEp-2 cells) slides (Immunoconcept, Sacramento, CA, USA) using 1:100 diluted samples. Analyses of the autoantibodies (ANAs, anti-dsDNA, anti-Ro/SSA, anti-La/SSB, anti-Sm, anti-RNP, anti-Jo-1, anti-Scl-70, anti-centromere protein B and antihistone) in the sera of the patients at diagnosis were also undertaken by the routine clinical immunology laboratory at the University Hospital. ANAs were analysed by indirect immunofluorescence with HEp-2 cells or rat tissue (in house), anti-dsDNA was analysed on Crithidia luciliae-coated slides (Immunoconcept) and the other autoantibodies were analysed either by enzyme-linked immunosorbent assay or by immunoblot assay.\n[SUBTITLE] Statistics [SUBSECTION] Statistical calculations were performed using SPSS for Windows version 17.0 software (SPSS, Inc., Chicago, IL, USA). Continuous data were compared by nonparametric analyses with the Wilcoxon signed-rank test for matched pairs (prepatients versus SLE patients) and conditional logistic regression analyses (prepatients versus controls). The relationships between categorical data (positive versus negative) were compared using χ2 analysis or Fisher's exact test as appropriate. All P values are two-sided, and P ≤ 0.05 was considered statistical significant. P values corrected for the number of comparisons made outside the hypothesis are presented as P corrected (Pc).\nStatistical calculations were performed using SPSS for Windows version 17.0 software (SPSS, Inc., Chicago, IL, USA). Continuous data were compared by nonparametric analyses with the Wilcoxon signed-rank test for matched pairs (prepatients versus SLE patients) and conditional logistic regression analyses (prepatients versus controls). The relationships between categorical data (positive versus negative) were compared using χ2 analysis or Fisher's exact test as appropriate. All P values are two-sided, and P ≤ 0.05 was considered statistical significant. 
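The analyses were run in SPSS; the same comparisons can be reproduced in outline with SciPy. The numbers below are placeholders, and the conditional logistic regression used for the matched prepatient-versus-control comparison is only noted in a comment because it requires the full matched data set rather than a toy example.

```python
from scipy import stats

# Placeholder data: paired autoantibody counts per patient before symptom
# onset and after diagnosis (values invented for illustration).
before = [1, 2, 0, 3, 1, 2, 0, 1]
after  = [3, 3, 2, 4, 2, 3, 1, 2]

# Wilcoxon signed-rank test for matched pairs (prepatients vs SLE patients).
w_stat, w_p = stats.wilcoxon(before, after)

# Categorical comparison (positive vs negative): chi-square, or Fisher's
# exact test when expected counts are small. Counts below are invented.
table = [[16, 19],    # presymptomatic individuals: positive, negative
         [8, 144]]    # controls:                   positive, negative
odds_ratio, fisher_p = stats.fisher_exact(table)

print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.3f}")
print(f"Fisher:   OR={odds_ratio:.2f}, p={fisher_p:.4f}")
# The paper's prepatient-vs-control odds ratios came from conditional
# logistic regression respecting the 1:4 matching; that step is not shown.
```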
P values corrected for the number of comparisons made outside the hypothesis are presented as P corrected (Pc).", "The register of patients with SLE attending the Department of Rheumatology, University Hospital, Umeå, Sweden, with a known date of the onset of symptoms was coanalysed with the registers of the Medical Biobank (Umeå, Sweden) and of the maternity cohort (that is, the record of samples obtained for rubella screening of pregnant women) from northern Sweden. All SLE patients had been evaluated clinically. A total of 38 patients (3 male and 35 female, of whom 37 fulfilled four and one fulfilled only three of the American College of Rheumatology (ACR) criteria for SLE [17,18]) were identified as having donated blood before the onset of any symptoms of disease. One of the patients also fulfilled the criteria for mixed connective tissue disease [19]. Nineteen of the patients were identified from the Medical Biobank (on the basis of plasma withdrawal), and the other 19 were identified from among the maternity cohort collection (on the basis of sera withdrawal). All individuals in the county of Västerbotten are continuously invited to donate blood samples to the Medical Biobank, the plasma from which is stored at -80°C in a biorepository, and blood samples are drawn from all pregnant women with the sera stored at -20°C. Full details of the conditions for recruitment and the collection and storage of blood samples have been described previously [10].\nA nested 1:4 case-control study was undertaken with the 38 identified individuals, referred to hereinafter as \"presymptomatic\" individuals, and randomly selected controls (n = 152) from the same population-based cohorts matched for sex, age and date of blood sampling as well as area of residence. The mean age at the time blood sampling of the individuals who subsequently developed SLE was 36.9 years (age range, 16.8 to 60.2 years) and that of the matched controls was 36.7 years (age range, 17.8 to 62.3 years). The patients' ages at the time of sampling, the time predating the onset of symptoms and diagnosis and the time after sampling until the onset of symptoms are presented in Table 1 for both the Medical Biobank (stratified by sex) and maternity cohorts.\nAge at sampling, at onset of symptoms and disease onset and predating time presented as median values (Q1, Q3)\nSamples from three prepatients and six controls, all from the maternity cohort, were no longer available; that is, insufficient sera were available for analysis. The frequencies of nonsmokers, ex-smokers and current smokers among the presymptomatic patients were 47.2%, 26.3% and 26.3%, respectively. The equivalent data are not available for the controls.\nThe study was approved by the Regional Ethics Committee of the University Hospital in Umeå, and all participants gave their written informed consent.", "Autoantibodies against Ro/SSA (52 and 60 kDa), La/SSB, dsDNA, ribonucleoprotein (RNP), Smith (Sm), histidyl-tRNA synthetase (Jo-1), scleroderma (Scl-70), centromere protein B and histones in plasma from 19 presymptomatic individuals and matched controls (n = 76) (Medical Biobank), in sera from 16 presymptomatic individuals and matched controls (n = 76) (maternity cohort) and in sera from SLE patients (n = 38) were collected during the disease. All autoantibodies were detected using the multiplex AtheNA Multi-Lyte ANA II Plus Test (Zeuss Scientific, Raritan, NJ, USA) and analysed on a Bio-Plex Array Reader (Luminex200 Labmap™ system; Luminex Corp., Austin, TX, USA). 
The cutoff level for a positive value for each autoantibody recommended by the manufacturer was used, that is, 120 AU/ml for all analytes. Analyses of ANAs were performed by indirect immunofluorescence on human epidermal cell 2 (HEp-2 cells) slides (Immunoconcept, Sacramento, CA, USA) using 1:100 diluted samples. Analyses of the autoantibodies (ANAs, anti-dsDNA, anti-Ro/SSA, anti-La/SSB, anti-Sm, anti-RNP, anti-Jo-1, anti-Scl-70, anti-centromere protein B and antihistone) in the sera of the patients at diagnosis were also undertaken by the routine clinical immunology laboratory at the University Hospital. ANAs were analysed by indirect immunofluorescence with HEp-2 cells or rat tissue (in house), anti-dsDNA was analysed on Crithidia luciliae-coated slides (Immunoconcept) and the other autoantibodies were analysed either by enzyme-linked immunosorbent assay or by immunoblot assay.", "Statistical calculations were performed using SPSS for Windows version 17.0 software (SPSS, Inc., Chicago, IL, USA). Continuous data were compared by nonparametric analyses with the Wilcoxon signed-rank test for matched pairs (prepatients versus SLE patients) and conditional logistic regression analyses (prepatients versus controls). The relationships between categorical data (positive versus negative) were compared using χ2 analysis or Fisher's exact test as appropriate. All P values are two-sided, and P ≤ 0.05 was considered statistical significant. P values corrected for the number of comparisons made outside the hypothesis are presented as P corrected (Pc).", "[SUBTITLE] Analyses in presymptomatic individuals and controls [SUBSECTION] Of the 35 presymptomatic individuals whose blood samples were available, 22 (63%) had any detectable autoantibodies in their blood before the onset of symptoms, that is, predating disease by a median of 4.2 years (range, 2.1 to 7.9 years). Ten of these patients expressed one autoantibody, whilst 12 others had two or more autoantibodies (range, from two to seven). The sensitivity was highest for ANAs at 45.7% with a specificity of 95%, followed by anti-dsDNA and anti-Ro/SSA antibodies, both with a sensitivity of 20% but with specificities of 98.7% and 97.4%, respectively (Table 2). The sensitivities for the other autoantibodies were between 14.3% and 2.9% at 98% to 100% specificity levels (Table 2). The odds ratio (ORs) for predicting the development of SLE were highest for anti-dsDNA at 18.13 (95% confidence interval (95% CI), 3.58 to 91.84), followed by ANAs at 11.5 (95% CI, 4.54 to 28.87) and anti-Ro/SSA antibodies at 8.94 (95% CI, 2.45 to 32.58). The ORs for the other antibodies were between 9.36 and 4.29, although the number of positive individuals was low, that is, up to five. The likelihood ratio (LR) was highest for anti-dsDNA antibodies at 15.38, followed by ANAs with a LR of 9.14.\nSensitivity and specificity of autoantibodies before onset of disease symptoms in individuals who later developed SLEa\na95% CI, 95% confidence interval; bP values were determined by using χ2 test or Fisher's exact test as appropriate. . 
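The diagnostic indices reported in Table 2 (sensitivity, specificity, odds ratio with its 95% CI, and the positive likelihood ratio) follow from simple 2×2 counts. The sketch below uses invented counts chosen only to show the arithmetic; it is not a re-analysis of the study data, and the published ORs were estimated with conditional logistic regression rather than from a crude table.

```python
import math

# Invented 2x2 counts (rows: presymptomatic individuals, controls;
# columns: antibody-positive, antibody-negative). Illustration only.
a, b = 16, 19
c, d = 7, 139

sensitivity = a / (a + b)
specificity = d / (c + d)
lr_positive = sensitivity / (1 - specificity)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)            # Woolf's method
ci_low  = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
print(f"LR+ {lr_positive:.1f}, OR {odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
```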
* = p < 0.05, **= p < 0.01, ***= p < 0.001\nANA, antinuclear antibody; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; LR, positive likelihood value; ns, not significant; OR, odds ratio; RNP, ribonucleoprotein; Ro/SSA, anti-Sjögren's syndrome antigen A; Scl-70, scleroderma 70; Sm, Smith.\nThe autoantibody type to appear first before the onset of symptoms was anti-Ro/SSA antibody at a mean (±SD) of 6.6 ± 2.5 years. Anti-RNP and antihistones also appeared early at means (±SD) of 5.9 ± 2.5 years and 5.0 ± 1.5 years, respectively, although the number of positive individuals with each antibody was small, that is, four and five, respectively. The autoantibodies first detectable closest to disease onset were anti-centromere protein B at 0.2 years, anti-Sm at 0.7 years and anti-Scl-70 at a mean (±SD) of 1.4 ± 0.6 years (Table 3). The number of individuals expressing autoantibodies increased the closer they got to the onset of symptoms, that is, 12 (63%) of 19 individuals had autoantibodies present <5 years before disease onset compared with 8 (50%) of 16 individuals who had autoantibodies present ≥5 years before disease onset. The number of autoantibodies per individual also increased the closer the individual got to the onset of symptoms, particularly during the last 3 years before disease onset; however, this change did not reach statistical significance. The accumulated number of individuals who were positive for each antibody before any symptoms of disease and after disease onset is illustrated in Figure 1. In the maternity cohort, 37.5% had autoantibodies predating disease, compared with 94% in females and 100% in males from the Medical Biobank cohort.\nDuration in years of the various antibodies preceding the onset of symptoms and diseasea\nANA, antinuclear antibody; aRo/SSA, anti-Sjögren's syndrome antigen A; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; RNP, ribonucleoprotein; Scl-70, scleroderma 70; SD, standard deviation; Sm, Smith.\nGraph showing the accumulated number of positive individuals for each antibody. Shown as the percentage predating disease onset in years and after diagnosis of the disease. ANA, antinuclear antibody; SSA, Sjögren's syndrome antigen A; SSB, Sjögren's syndrome antigen B; dsDNA, double-stranded DNA; RNP, ribonucleoprotein; histon, histone.\nThe number of positive autoantibodies increased with age at the time of blood sampling (P = 0.001, Pc < 0.01). Those individuals who had autoantibodies predating disease onset were older both at the time of blood sampling and at the onset of symptoms (42.8 versus 28.3 years and 49.3 versus 36.0 years; P = 0.002, Pc < 0.05, and P = 0.005, Pc < 0.05, respectively). The interval between blood sampling and the onset of clinical symptoms was shorter than it was for those who had no autoantibodies in their presymptom sample; however, this finding was not statistically significantly different (mean 5.2 years versus 6.3 years before symptom onset).\nOf the 35 presymptomatic individuals whose blood samples were available, 22 (63%) had any detectable autoantibodies in their blood before the onset of symptoms, that is, predating disease by a median of 4.2 years (range, 2.1 to 7.9 years). Ten of these patients expressed one autoantibody, whilst 12 others had two or more autoantibodies (range, from two to seven). 
The sensitivity was highest for ANAs at 45.7% with a specificity of 95%, followed by anti-dsDNA and anti-Ro/SSA antibodies, both with a sensitivity of 20% but with specificities of 98.7% and 97.4%, respectively (Table 2). The sensitivities for the other autoantibodies were between 14.3% and 2.9% at 98% to 100% specificity levels (Table 2). The odds ratio (ORs) for predicting the development of SLE were highest for anti-dsDNA at 18.13 (95% confidence interval (95% CI), 3.58 to 91.84), followed by ANAs at 11.5 (95% CI, 4.54 to 28.87) and anti-Ro/SSA antibodies at 8.94 (95% CI, 2.45 to 32.58). The ORs for the other antibodies were between 9.36 and 4.29, although the number of positive individuals was low, that is, up to five. The likelihood ratio (LR) was highest for anti-dsDNA antibodies at 15.38, followed by ANAs with a LR of 9.14.\nSensitivity and specificity of autoantibodies before onset of disease symptoms in individuals who later developed SLEa\na95% CI, 95% confidence interval; bP values were determined by using χ2 test or Fisher's exact test as appropriate. . * = p < 0.05, **= p < 0.01, ***= p < 0.001\nANA, antinuclear antibody; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; LR, positive likelihood value; ns, not significant; OR, odds ratio; RNP, ribonucleoprotein; Ro/SSA, anti-Sjögren's syndrome antigen A; Scl-70, scleroderma 70; Sm, Smith.\nThe autoantibody type to appear first before the onset of symptoms was anti-Ro/SSA antibody at a mean (±SD) of 6.6 ± 2.5 years. Anti-RNP and antihistones also appeared early at means (±SD) of 5.9 ± 2.5 years and 5.0 ± 1.5 years, respectively, although the number of positive individuals with each antibody was small, that is, four and five, respectively. The autoantibodies first detectable closest to disease onset were anti-centromere protein B at 0.2 years, anti-Sm at 0.7 years and anti-Scl-70 at a mean (±SD) of 1.4 ± 0.6 years (Table 3). The number of individuals expressing autoantibodies increased the closer they got to the onset of symptoms, that is, 12 (63%) of 19 individuals had autoantibodies present <5 years before disease onset compared with 8 (50%) of 16 individuals who had autoantibodies present ≥5 years before disease onset. The number of autoantibodies per individual also increased the closer the individual got to the onset of symptoms, particularly during the last 3 years before disease onset; however, this change did not reach statistical significance. The accumulated number of individuals who were positive for each antibody before any symptoms of disease and after disease onset is illustrated in Figure 1. In the maternity cohort, 37.5% had autoantibodies predating disease, compared with 94% in females and 100% in males from the Medical Biobank cohort.\nDuration in years of the various antibodies preceding the onset of symptoms and diseasea\nANA, antinuclear antibody; aRo/SSA, anti-Sjögren's syndrome antigen A; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; RNP, ribonucleoprotein; Scl-70, scleroderma 70; SD, standard deviation; Sm, Smith.\nGraph showing the accumulated number of positive individuals for each antibody. Shown as the percentage predating disease onset in years and after diagnosis of the disease. 
ANA, antinuclear antibody; SSA, Sjögren's syndrome antigen A; SSB, Sjögren's syndrome antigen B; dsDNA, double-stranded DNA; RNP, ribonucleoprotein; histon, histone.\nThe number of positive autoantibodies increased with age at the time of blood sampling (P = 0.001, Pc < 0.01). Those individuals who had autoantibodies predating disease onset were older both at the time of blood sampling and at the onset of symptoms (42.8 versus 28.3 years and 49.3 versus 36.0 years; P = 0.002, Pc < 0.05, and P = 0.005, Pc < 0.05, respectively). The interval between blood sampling and the onset of clinical symptoms was shorter than it was for those who had no autoantibodies in their presymptom sample; however, this finding was not statistically significantly different (mean 5.2 years versus 6.3 years before symptom onset).\n[SUBTITLE] Analyses in presymptomatic individuals and at diagnosis of SLE [SUBSECTION] The mean number of autoantibodies present in predisease individuals was 1.4 and increased after disease onset to 3.1 (P < 0.0005). In the autoantibody positive presymptomatic individuals (n = 22), the mean number of autoantibodies was 2.2 before and 3.3 after a diagnosis of SLE (P < 0.016, Pc < 0.1), whilst among the antibody-negative prepatients (n = 13), the mean number of autoantibodies after diagnosis was 2.8 (P < 0.002, Pc < 0.05).\nThe autoantibodies present in relation to symptoms at the onset of disease are presented in Table 4. The patients with serositis (n = 6; four females and two of three males) at the onset of symptoms had higher frequencies of autoantibodies than did those with arthritis (n = 20; one of three males) and skin manifestations (n = 11; one male), with the mean number of autoantibodies among these patients being 2.5, 1.7 and 0.9, respectively. However, the time interval predating disease was shorter for those with primary symptoms such as serositis (median, 1.9 years) in comparison with those with arthritis (6.7 years) and skin manifestations (4.2 years). In one individual, the symptom preceding the onset of disease was nephritis without any autoantibodies detectable when analysed 3.7 years before disease onset, although at onset the patient was ANA- and anti-dsDNA-antibody-positive. There was no association between smoking and autoantibody formation in either the number of autoantibody-positive individuals or the number of autoantibodies present.\nAutoantibodies predating onset of SLE and presenting symptoms at disease onseta\naANA, antinuclear antibody; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; RNP, ribonucleoprotein; Ro/SSA, anti-Sjögren's syndrome antigen A; Scl-70, scleroderma 70; Sm, Smith.\nIn samples analysed after disease onset but during development of the disease, the concentrations of six of the autoantibodies that were positive in presymptomatic patients, namely, the autoantibodies anti-Jo-1 (n = 3), anti-Scl-70 (n = 2), anti-RNP (n = 2), antihistone (n = 2), anti-Ro/SSA (n = 1) and anti-centromere protein B (n = 1), decreased to below the cutoff values on the basis of either the multiplex detection kit or routine laboratory protocols.\nThe mean number of autoantibodies present in predisease individuals was 1.4 and increased after disease onset to 3.1 (P < 0.0005). 
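Figure 1 and Table 3 summarise, per antibody, when it first became detectable relative to symptom onset and how the share of positive individuals accumulates towards onset. Given per-individual first-detection times (in years before onset), that accumulation can be tabulated as sketched below; the times are invented for illustration.

```python
# Invented first-detection times (years before symptom onset) for one antibody.
first_detection_years_before_onset = [7.9, 6.6, 5.1, 4.2, 3.0, 2.4, 1.1, 0.5]
n_individuals = 35  # individuals with an available presymptomatic sample

def cumulative_positive(times, n_total, year_bins=range(8, -1, -1)):
    """Percentage of individuals already positive by each year before onset."""
    rows = []
    for y in year_bins:
        already_positive = sum(1 for t in times if t >= y)
        rows.append((y, 100.0 * already_positive / n_total))
    return rows

for years_before, pct in cumulative_positive(first_detection_years_before_onset,
                                             n_individuals):
    print(f">= {years_before} years before onset: {pct:.1f}% positive")
```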
In the autoantibody positive presymptomatic individuals (n = 22), the mean number of autoantibodies was 2.2 before and 3.3 after a diagnosis of SLE (P < 0.016, Pc < 0.1), whilst among the antibody-negative prepatients (n = 13), the mean number of autoantibodies after diagnosis was 2.8 (P < 0.002, Pc < 0.05).\nThe autoantibodies present in relation to symptoms at the onset of disease are presented in Table 4. The patients with serositis (n = 6; four females and two of three males) at the onset of symptoms had higher frequencies of autoantibodies than did those with arthritis (n = 20; one of three males) and skin manifestations (n = 11; one male), with the mean number of autoantibodies among these patients being 2.5, 1.7 and 0.9, respectively. However, the time interval predating disease was shorter for those with primary symptoms such as serositis (median, 1.9 years) in comparison with those with arthritis (6.7 years) and skin manifestations (4.2 years). In one individual, the symptom preceding the onset of disease was nephritis without any autoantibodies detectable when analysed 3.7 years before disease onset, although at onset the patient was ANA- and anti-dsDNA-antibody-positive. There was no association between smoking and autoantibody formation in either the number of autoantibody-positive individuals or the number of autoantibodies present.\nAutoantibodies predating onset of SLE and presenting symptoms at disease onseta\naANA, antinuclear antibody; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; RNP, ribonucleoprotein; Ro/SSA, anti-Sjögren's syndrome antigen A; Scl-70, scleroderma 70; Sm, Smith.\nIn samples analysed after disease onset but during development of the disease, the concentrations of six of the autoantibodies that were positive in presymptomatic patients, namely, the autoantibodies anti-Jo-1 (n = 3), anti-Scl-70 (n = 2), anti-RNP (n = 2), antihistone (n = 2), anti-Ro/SSA (n = 1) and anti-centromere protein B (n = 1), decreased to below the cutoff values on the basis of either the multiplex detection kit or routine laboratory protocols.", "Of the 35 presymptomatic individuals whose blood samples were available, 22 (63%) had any detectable autoantibodies in their blood before the onset of symptoms, that is, predating disease by a median of 4.2 years (range, 2.1 to 7.9 years). Ten of these patients expressed one autoantibody, whilst 12 others had two or more autoantibodies (range, from two to seven). The sensitivity was highest for ANAs at 45.7% with a specificity of 95%, followed by anti-dsDNA and anti-Ro/SSA antibodies, both with a sensitivity of 20% but with specificities of 98.7% and 97.4%, respectively (Table 2). The sensitivities for the other autoantibodies were between 14.3% and 2.9% at 98% to 100% specificity levels (Table 2). The odds ratio (ORs) for predicting the development of SLE were highest for anti-dsDNA at 18.13 (95% confidence interval (95% CI), 3.58 to 91.84), followed by ANAs at 11.5 (95% CI, 4.54 to 28.87) and anti-Ro/SSA antibodies at 8.94 (95% CI, 2.45 to 32.58). The ORs for the other antibodies were between 9.36 and 4.29, although the number of positive individuals was low, that is, up to five. 
The likelihood ratio (LR) was highest for anti-dsDNA antibodies at 15.38, followed by ANAs with a LR of 9.14.\nSensitivity and specificity of autoantibodies before onset of disease symptoms in individuals who later developed SLEa\na95% CI, 95% confidence interval; bP values were determined by using χ2 test or Fisher's exact test as appropriate. . * = p < 0.05, **= p < 0.01, ***= p < 0.001\nANA, antinuclear antibody; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; LR, positive likelihood value; ns, not significant; OR, odds ratio; RNP, ribonucleoprotein; Ro/SSA, anti-Sjögren's syndrome antigen A; Scl-70, scleroderma 70; Sm, Smith.\nThe autoantibody type to appear first before the onset of symptoms was anti-Ro/SSA antibody at a mean (±SD) of 6.6 ± 2.5 years. Anti-RNP and antihistones also appeared early at means (±SD) of 5.9 ± 2.5 years and 5.0 ± 1.5 years, respectively, although the number of positive individuals with each antibody was small, that is, four and five, respectively. The autoantibodies first detectable closest to disease onset were anti-centromere protein B at 0.2 years, anti-Sm at 0.7 years and anti-Scl-70 at a mean (±SD) of 1.4 ± 0.6 years (Table 3). The number of individuals expressing autoantibodies increased the closer they got to the onset of symptoms, that is, 12 (63%) of 19 individuals had autoantibodies present <5 years before disease onset compared with 8 (50%) of 16 individuals who had autoantibodies present ≥5 years before disease onset. The number of autoantibodies per individual also increased the closer the individual got to the onset of symptoms, particularly during the last 3 years before disease onset; however, this change did not reach statistical significance. The accumulated number of individuals who were positive for each antibody before any symptoms of disease and after disease onset is illustrated in Figure 1. In the maternity cohort, 37.5% had autoantibodies predating disease, compared with 94% in females and 100% in males from the Medical Biobank cohort.\nDuration in years of the various antibodies preceding the onset of symptoms and diseasea\nANA, antinuclear antibody; aRo/SSA, anti-Sjögren's syndrome antigen A; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; RNP, ribonucleoprotein; Scl-70, scleroderma 70; SD, standard deviation; Sm, Smith.\nGraph showing the accumulated number of positive individuals for each antibody. Shown as the percentage predating disease onset in years and after diagnosis of the disease. ANA, antinuclear antibody; SSA, Sjögren's syndrome antigen A; SSB, Sjögren's syndrome antigen B; dsDNA, double-stranded DNA; RNP, ribonucleoprotein; histon, histone.\nThe number of positive autoantibodies increased with age at the time of blood sampling (P = 0.001, Pc < 0.01). Those individuals who had autoantibodies predating disease onset were older both at the time of blood sampling and at the onset of symptoms (42.8 versus 28.3 years and 49.3 versus 36.0 years; P = 0.002, Pc < 0.05, and P = 0.005, Pc < 0.05, respectively). 
The interval between blood sampling and the onset of clinical symptoms was shorter than it was for those who had no autoantibodies in their presymptom sample; however, this finding was not statistically significantly different (mean 5.2 years versus 6.3 years before symptom onset).", "The mean number of autoantibodies present in predisease individuals was 1.4 and increased after disease onset to 3.1 (P < 0.0005). In the autoantibody positive presymptomatic individuals (n = 22), the mean number of autoantibodies was 2.2 before and 3.3 after a diagnosis of SLE (P < 0.016, Pc < 0.1), whilst among the antibody-negative prepatients (n = 13), the mean number of autoantibodies after diagnosis was 2.8 (P < 0.002, Pc < 0.05).\nThe autoantibodies present in relation to symptoms at the onset of disease are presented in Table 4. The patients with serositis (n = 6; four females and two of three males) at the onset of symptoms had higher frequencies of autoantibodies than did those with arthritis (n = 20; one of three males) and skin manifestations (n = 11; one male), with the mean number of autoantibodies among these patients being 2.5, 1.7 and 0.9, respectively. However, the time interval predating disease was shorter for those with primary symptoms such as serositis (median, 1.9 years) in comparison with those with arthritis (6.7 years) and skin manifestations (4.2 years). In one individual, the symptom preceding the onset of disease was nephritis without any autoantibodies detectable when analysed 3.7 years before disease onset, although at onset the patient was ANA- and anti-dsDNA-antibody-positive. There was no association between smoking and autoantibody formation in either the number of autoantibody-positive individuals or the number of autoantibodies present.\nAutoantibodies predating onset of SLE and presenting symptoms at disease onseta\naANA, antinuclear antibody; dsDNA, double-stranded DNA; Jo-1, anti-histidyl-tRNA synthetase antibody; La/SSB, anti-Sjögren's syndrome antigen B; RNP, ribonucleoprotein; Ro/SSA, anti-Sjögren's syndrome antigen A; Scl-70, scleroderma 70; Sm, Smith.\nIn samples analysed after disease onset but during development of the disease, the concentrations of six of the autoantibodies that were positive in presymptomatic patients, namely, the autoantibodies anti-Jo-1 (n = 3), anti-Scl-70 (n = 2), anti-RNP (n = 2), antihistone (n = 2), anti-Ro/SSA (n = 1) and anti-centromere protein B (n = 1), decreased to below the cutoff values on the basis of either the multiplex detection kit or routine laboratory protocols.", "In this study, we have shown that autoantibody seropositivity preceded the onset of SLE, as defined by ACR criteria, by years. In those individuals who subsequently developed SLE, the number of autoantibodies increased gradually. This could suggest a gradual pathogenic process over a long period. Our results are consistent with data reported in other prospective studies of asymptomatic individuals who later developed SLE [3], rheumatoid arthritis (RA) [10,11] or other autoimmune diseases [9]. ANAs were in line with the results presented by Arbuckle et al. [3] in that the most prevalent autoantibodies were found in individuals before the onset of symptoms. However, the frequency of the different autoantibodies predating SLE was lower in our study than the frequencies reported by others [3,20]. 
This could be explained by the longer time predating the onset of disease relative to the lower number of samples.\nFurthermore, one must consider the ethnic background of the different patient cohorts. All of the individuals included in the present study were from northern Sweden, whilst in the two other studies cited [3,20], 62% were black in both studies, with only 29% and 26%, respectively, being of European background. Anti-extractable nuclear antigen (anti-ENA) antibodies have been found to be more common in Afro-Caribbean and African-American populations than in Caucasians [21-23]. Conversely, the importance of ethnic differences in relation to autoantibodies was not confirmed in another study [24].\nAnother possible explanation for the lower frequency of detectable autoantibodies in the individuals studied here is that one-half of the samples were sera from pregnant women, in whom the frequency of autoantibodies is known to generally be lower. Also, these donors were younger at the time of blood sampling, and consequently the time interval before disease onset for most of the individuals was longer. The samples from the maternity cohort were taken early in pregnancy, which can be of importance when considering that these presymptomatic individuals had a lower prevalence of autoantibodies than the remainder of the patients and also that pregnancy is, partially at least, an immunosuppressive state. These individuals were also younger at the time of the collection of blood samples, when the symptoms started and when the diagnosis of SLE was confirmed. Their samples had also been stored frozen for a longer time, which should be considered as a factor that could interfere with the analyses. After the diagnosis was established, these patients had marginally fewer autoantibodies than the other patients, although not significantly so. It has long been suggested that autoantibody formation increases with age [25,26] as was found in the present study.\nIn line with the other studies [3,20], anti-Ro/SSA antibodies were the first to be detected and preceded the onset of SLE by several years, whilst anti-Sm and anti-centromere protein B antibodies appeared closer to the onset of clinical symptoms. Also, as described by Arbuckle et al. [3], anti-dsDNA antibodies appeared at an intermediate time point. Our results differ from those of Arbuckle et al. in the way that ANAs appeared at an intermediate point relative to the onset of clinical symptoms and that anti-La/SSB antibodies appeared closer to the onset of symptoms. This finding is consistent with the hypothesis of a progression due to epitope spreading as previously described both in animal models and in SLE patients [27-29].\nThe individuals who had serositis as the first symptom had more autoantibodies and a shorter time interval between the positive blood sample and disease onset than other onset symptoms, suggesting that a more serious manifestation in the beginning of the disease is associated with faster disease development and more pronounced epitope spreading. However, we were unable to show a significant increase in the number of autoantibodies preceding symptom or disease onset, but after the onset of disease the number of antibodies increased significantly.\nThe OR for predicting SLE was highest for anti-dsDNA antibodies, followed by ANAs and the other autoantibodies with lower ORs, but all were within the 95% CI for the OR of anti-dsDNA antibodies. 
The number of individuals positive for most of the other antibodies was small: between two and five.\nIn this study, 6.7% of the population based controls were positive for ANAs at a preset specificity of 95%. However, ANA positivity alone in healthy individuals was not regarded as a good predictor of developing connective tissue disease [30,31]. Two controls were positive for anti-Jo-1 antibodies and one was positive for anti-Scl-70 antibodies, which are rare autoantibodies. However, because of the limited amounts of sera and plasma available from the Medical Biobank, we were not able to undertake any confirmatory analyses for anti-ENA or anti-dsDNA antibodies using alternative techniques, which would have been desirable.\nThe ENA and chromatin antigens are a part of all autoantigens present in the cell nuclei visualised by ANA analysis using immunofluorescence. In the nucleus, there are many antigens other than ENA or chromatin that cannot be detected by specific methods today. Comparison between multiple assays for autoantibody detection in SLE has shown variable frequency of, for example, Scl-70, with higher frequencies published using the same assay as we used in this study, suggesting a too low cutoff value, at least for Scl-70 [32,33].\nIn this study, we could not find any difference in autoantibody formation between smokers and nonsmokers. A significantly higher risk of dsDNA seropositivity was found in current smokers compared with those who had never smoked in a previous study of SLE patients [34]. Smoking has been suggested as an environmental factor involved in the pathogenic development of autoantibodies to citrullinated proteins and rheumatoid factor in patients with RA [35].\nThis study is limited by the availability of stored samples and by not having several samples collected from the same individual before the onset of symptoms. However, these individuals were patients attending one clinic, where they are followed regularly. The controls used in this study were sampled at the same time as the patients, and their samples were collected, stored and analysed in the same way.\nWe have also used a newly introduced multiplex technique, which is similar to that used by Heinlen et al. [20], thereby making comparison with the previous publication by Arbuckle et al. [3] more difficult. The multiplex technology is very suitable, since the amount of serum or plasma required is very small relative to the number of analytes it is possible to detect in any given sample. This is of special benefit when analysing stored serum samples from biobanks, where the volumes stored are limited.", "On the basis of this study, we conclude that autoantibodies against nuclear antigens can be detected several years before the onset of symptoms and SLE diagnosis in individuals who subsequently develop SLE. The highest sensitivities were for ANA, Ro/SSA and dsDNA, and anti-dsDNA antibodies had the highest predictive value for SLE. Antibodies against Ro/SSA were the first autoantibodies detected. 
Individuals who had serositis as the first symptom had more autoantibodies and a shorter time interval between the positive blood sample and disease onset than other onset symptoms, suggesting that more serious disease manifestation in the beginning of the disease is associated with faster disease development and more pronounced epitope spreading.", "ACR: American College of Rheumatology; ANA II: antinuclear antibody test II; anti-Sm: anti-Smith antibody; dsDNA: double-stranded DNA; HEp-2: human epidermal cell 2; Jo-1: anti-histidyl-tRNA synthetase antibody; La/SSB: anti-Sjögren's syndrome antigen B; LR: likelihood ratio; OR: odds ratio; RNP: ribonucleoprotein; Ro/SSA: anti-Sjögren's syndrome antigen A; Scl-70: scleroderma 70; SLE: systemic lupus erythematosus.", "The authors declare that they have no competing interests.", "CE analysed and interpreted the data and was involved in drafting the manuscript. HK analysed and interpreted the data and was to some extent involved in drafting the manuscript. MJ contributed to the study design and analysed and interpreted the data. GH and GW contributed to the design of the study and were involved with the supply of the blood samples. SRD designed the study, analysed and interpreted the data and was involved in drafting the manuscript. All authors have given their final approval of the version of the manuscript to be published." ]
[ null, "materials|methods", null, null, null, "results", null, null, "discussion", "conclusions", null, null, null ]
[]
Notch signaling contributes to the maintenance of both normal neural stem cells and patient-derived glioma stem cells.
21342503
Cancer stem cells (CSCs) play an important role in the development and recurrence of malignant tumors including glioma. Notch signaling, an evolutionarily conserved pathway mediating direct cell-cell interaction, has been shown to regulate neural stem cells (NSCs) and glioma stem cells (GSCs) in normal neurogenesis and pathological carcinogenesis, respectively. However, how Notch signaling regulates the proliferation and differentiation of GSCs has not been well elucidated.
BACKGROUND
We isolated and cultivated human GSCs from glioma patient specimens. Then, in a parallel comparison with NSCs, we inhibited Notch signaling using γ-secretase inhibitors (GSI) and assessed the potential functions of Notch signaling in human GSCs.
METHODS
As with GSI-treated NSCs, the numbers of primary and secondary tumor spheres from GSI-treated GSCs decreased significantly, suggesting that the proliferation and self-renewal ability of GSI-treated GSCs were attenuated. GSI-treated GSCs showed increased differentiation into mature neural cell types in differentiation medium, similar to GSI-treated NSCs. We also found that GSI-treated tumor spheres were composed of more intermediate progenitors than CSCs, compared with the controls. Interestingly, although inhibition of Notch signaling decreased the proportion of proliferating NSCs in long-term culture, the proportion of G2+M-phase GSCs was almost undisturbed by GSI treatment within 72 h.
RESULTS
These data indicate that, as in NSCs, Notch signaling maintains patient-derived GSCs by promoting their self-renewal and inhibiting their differentiation. They also suggest that the Notch signal inhibitor GSI might be a promising candidate for treatments targeting CSCs in gliomas, although GSI resistance was observed at the early stage of the GSC cell cycle.
CONCLUSIONS
[ "Adult", "Animals", "Brain Neoplasms", "Cell Differentiation", "Cell Proliferation", "Cells, Cultured", "Embryo, Mammalian", "Glioma", "Humans", "Mice", "Mice, Inbred C57BL", "Neoplastic Stem Cells", "Neural Stem Cells", "Neurogenesis", "Receptors, Notch", "Signal Transduction" ]
3052197
null
null
Methods
[SUBTITLE] Glioma samples [SUBSECTION] Glioma tissues were obtained from 9 adult patients with pathologically diagnosed grade 2 to grade 4 gliomas, at the Department of Neurosurgery in Xijing Hospital, Fourth Military Medical University, under the guidance from the Medical Ethnic Committee of the Fourth Military Medical University. The summary of the patient population is outlined in Additional file 1: Table S1. Glioma tissues were obtained from 9 adult patients with pathologically diagnosed grade 2 to grade 4 gliomas, at the Department of Neurosurgery in Xijing Hospital, Fourth Military Medical University, under the guidance from the Medical Ethnic Committee of the Fourth Military Medical University. The summary of the patient population is outlined in Additional file 1: Table S1. [SUBTITLE] Neurosphere culture [SUBSECTION] Neurosphere cultures were performed as described previously with some modifications [21]. Briefly, for the culture of NSCs, the brains from embryonic (E) day 12.5 C57BL/6 mice were dissected under a stereomicroscope. And for the culture of GSCs, tissues from patient specimen were acutely minced after sampling. The tissues were then washed, mechanically dissociated by repetitive pipette. Single cells were primarily plated in serum-free Dulbecco's modified Eagle's medium (DMEDM)/F12 medium containing 20 ng/ml basic fibroblast growth factor (bFGF, human recombinant, Sigma), 20 ng/ml epidermal growth factor (EGF, mouse submaxillary), the B-27 (1:50, GIBCO), penicillin (100 U/ml) and streptomycin (0.1 mg/ml). Cells were cultured at a density of 1 × 105 cell/ml in 24-well plates (0.5 ml/well), and were fed every 3 days by adding fresh medium supplemented with GSI or DMSO with indicated concentrations. Animal experiments were reviewed and approved by the Animal Experiment Administration Committee of the Fourth Military Medical University. Neurosphere cultures were performed as described previously with some modifications [21]. Briefly, for the culture of NSCs, the brains from embryonic (E) day 12.5 C57BL/6 mice were dissected under a stereomicroscope. And for the culture of GSCs, tissues from patient specimen were acutely minced after sampling. The tissues were then washed, mechanically dissociated by repetitive pipette. Single cells were primarily plated in serum-free Dulbecco's modified Eagle's medium (DMEDM)/F12 medium containing 20 ng/ml basic fibroblast growth factor (bFGF, human recombinant, Sigma), 20 ng/ml epidermal growth factor (EGF, mouse submaxillary), the B-27 (1:50, GIBCO), penicillin (100 U/ml) and streptomycin (0.1 mg/ml). Cells were cultured at a density of 1 × 105 cell/ml in 24-well plates (0.5 ml/well), and were fed every 3 days by adding fresh medium supplemented with GSI or DMSO with indicated concentrations. Animal experiments were reviewed and approved by the Animal Experiment Administration Committee of the Fourth Military Medical University. [SUBTITLE] Neurosphere assays [SUBSECTION] After 7 days from primary culture the numbers of primary spheres were counted under a microscope (Additional file 1: Figure S2) [21]. And for the expression of target genes, neurospheres were harvested on the 5th day of culture for RNA extraction, cDNA synthesis, and real-time reverse transcription-polymerase chain reaction (RT-PCR). Primary neurospheres were harvested and dissociated mechanically into single cell suspensions, and were replated at 1 × 105 cells/ml in 24-well plates. 
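At the stated plating density (1 × 10^5 cells/ml, 0.5 ml per well), each well receives 5 × 10^4 cells, and a sphere-forming frequency follows directly from the sphere count. A minimal sketch, with an invented sphere count:

```python
cells_per_ml = 1e5
volume_per_well_ml = 0.5
cells_per_well = cells_per_ml * volume_per_well_ml   # 5e4 cells seeded per well

spheres_counted = 420   # invented count for one well after 7 days of culture
sphere_forming_frequency = spheres_counted / cells_per_well

print(f"{cells_per_well:.0f} cells seeded per well")
print(f"sphere-forming frequency: {sphere_forming_frequency:.2%}")
```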
Cells were then cultured for another 7 days until secondary spheres formed [31], which were quantified by counting. On the 7th day of primary culture, neurospheres were plated onto poly-D-lysine (Sigma) coated glass cover slips in DMEM/F12 containing 10% fetal bovine serum (FBS) for another 7 days. On the third day of differentiation, neurospheres were photomicrographed and their neurites were counted and measured, then on the 7th day of differentiation culture, immunofluorescence staining was performed as described below. After 7 days from primary culture the numbers of primary spheres were counted under a microscope (Additional file 1: Figure S2) [21]. And for the expression of target genes, neurospheres were harvested on the 5th day of culture for RNA extraction, cDNA synthesis, and real-time reverse transcription-polymerase chain reaction (RT-PCR). Primary neurospheres were harvested and dissociated mechanically into single cell suspensions, and were replated at 1 × 105 cells/ml in 24-well plates. Cells were then cultured for another 7 days until secondary spheres formed [31], which were quantified by counting. On the 7th day of primary culture, neurospheres were plated onto poly-D-lysine (Sigma) coated glass cover slips in DMEM/F12 containing 10% fetal bovine serum (FBS) for another 7 days. On the third day of differentiation, neurospheres were photomicrographed and their neurites were counted and measured, then on the 7th day of differentiation culture, immunofluorescence staining was performed as described below. [SUBTITLE] Immunofluorescence [SUBSECTION] Undifferentiated neurospheres were plated onto poly-D-lysine coated glass cover slips in serum-free medium for 4 h. Then cells were directly fixed in 4% paraformaldehyde at 4°C for 10 min, and incubated with primary antibodies overnight at 4°C, followed by species-specific secondary antibodies. Samples were visualized under fluorescence microscope (FV-1000, Olympus, Japan). Immunofluorescence for differentiated neurospheres was performed in a similar way. Cells were additionally counterstained with Hoechst. Primary antibodies used included rabbit anti-Nestin serum (1:200, Sigma), rabbit anti-glial fibrillary acidic protein (GFAP, 1:200, Sigma), mouse anti-mitogen-activated protein 2 (MAP2, 1:200, Sigma). FITC-conjugated goat anti-mouse IgG and Cy3-conjugated goat anti-rabbit IgG (1:400, Jackson ImmunoResearch) were used as the secondary antibodies. Undifferentiated neurospheres were plated onto poly-D-lysine coated glass cover slips in serum-free medium for 4 h. Then cells were directly fixed in 4% paraformaldehyde at 4°C for 10 min, and incubated with primary antibodies overnight at 4°C, followed by species-specific secondary antibodies. Samples were visualized under fluorescence microscope (FV-1000, Olympus, Japan). Immunofluorescence for differentiated neurospheres was performed in a similar way. Cells were additionally counterstained with Hoechst. Primary antibodies used included rabbit anti-Nestin serum (1:200, Sigma), rabbit anti-glial fibrillary acidic protein (GFAP, 1:200, Sigma), mouse anti-mitogen-activated protein 2 (MAP2, 1:200, Sigma). FITC-conjugated goat anti-mouse IgG and Cy3-conjugated goat anti-rabbit IgG (1:400, Jackson ImmunoResearch) were used as the secondary antibodies. [SUBTITLE] Quantitative RT-PCR [SUBSECTION] Total RNA of neurospheres was isolated using the Trizol reagent (Invitrogen). 
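Real-time PCR data normalised to reference controls such as GAPDH or β-actin are usually summarised as relative expression. The article does not state which quantification model was used, so the 2^-ΔΔCt calculation below is only a generic sketch with invented Ct values.

```python
# Generic 2^-delta-delta-Ct relative-expression sketch; Ct values are invented
# and the quantification model is an assumption, not the paper's stated method.
def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    delta_ct_treated = ct_target - ct_reference              # GSI-treated sphere
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl    # DMSO control sphere
    delta_delta_ct = delta_ct_treated - delta_ct_control
    return 2 ** (-delta_delta_ct)

# Example: a hypothetical Notch target gene measured against GAPDH.
fold_change = relative_expression(ct_target=26.8, ct_reference=18.2,
                                  ct_target_ctrl=24.9, ct_reference_ctrl=18.0)
print(f"expression relative to DMSO control: {fold_change:.2f}-fold")
```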
cDNA was synthesized and was used for real-time PCR with a kit (SYBR Premix EX Taq, Takara, Kyoto, Japan) and the ABI PRISM 7300 real-time PCR system, with human GAPDH and mouse β-actin as the reference controls. Primers used for real-time PCR were summarized in Additional file 1: Table S3. Total RNA of neurospheres was isolated using the Trizol reagent (Invitrogen). cDNA was synthesized and was used for real-time PCR with a kit (SYBR Premix EX Taq, Takara, Kyoto, Japan) and the ABI PRISM 7300 real-time PCR system, with human GAPDH and mouse β-actin as the reference controls. Primers used for real-time PCR were summarized in Additional file 1: Table S3. [SUBTITLE] DNA content analysis [SUBSECTION] Spheres were dissociated mechanically into single cell suspensions in the culture medium. Cells were then washed and resuspended in PBS, and were fixed with ethanol at room temperature for 20 min. Cells were resuspended in PBS containing 50 μg/ml of propidium iodide and 0.1 mg/ml RNase A for 10 min, and were analyzed for ploidy using a flow cytometry (BD Biosciences). Data analysis was performed using the CellQuest software (BD Biosciences). Spheres were dissociated mechanically into single cell suspensions in the culture medium. Cells were then washed and resuspended in PBS, and were fixed with ethanol at room temperature for 20 min. Cells were resuspended in PBS containing 50 μg/ml of propidium iodide and 0.1 mg/ml RNase A for 10 min, and were analyzed for ploidy using a flow cytometry (BD Biosciences). Data analysis was performed using the CellQuest software (BD Biosciences). [SUBTITLE] Statistics [SUBSECTION] Independent cultures from at least three samples were used for each experiment (Additional file 1: Table S2). For immunofluorescence, cells were counted by Image-ProPlus 6.0, and only cell bodies that were labeled with immunoreactivity were included. Proportions of immunoreactive cells in the total population of cultured cells revealed by Hoechst staining were calculated, and at least 5 microscopic fields per specimen were selected. For neurite analysis, neurites of 30 neurospheres from each culture in the presence of GSI or DMSO were measured. The total numbers of neurites per tumor spheres were counted via photomicrographs taken by a phase contrast microscopy, and the average of the length of neuritis per tumor spheres were measured by Image-ProPlus 6.0. Each experiment was repeated for at least three times. Data were expressed as mean ± s.e.m, and the difference between the two groups was analyzed with the Student's t-test, with P < 0.05 as statistically significant. Independent cultures from at least three samples were used for each experiment (Additional file 1: Table S2). For immunofluorescence, cells were counted by Image-ProPlus 6.0, and only cell bodies that were labeled with immunoreactivity were included. Proportions of immunoreactive cells in the total population of cultured cells revealed by Hoechst staining were calculated, and at least 5 microscopic fields per specimen were selected. For neurite analysis, neurites of 30 neurospheres from each culture in the presence of GSI or DMSO were measured. The total numbers of neurites per tumor spheres were counted via photomicrographs taken by a phase contrast microscopy, and the average of the length of neuritis per tumor spheres were measured by Image-ProPlus 6.0. Each experiment was repeated for at least three times. 
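The cell-cycle readout from the propidium iodide DNA-content histograms (the G2+M fraction referred to in the Results) can be approximated by simple gating on fluorescence intensity. The intensities and thresholds below are invented; the actual analysis was performed in CellQuest, typically with dedicated cell-cycle modelling rather than hard gates.

```python
import numpy as np

rng = np.random.default_rng(0)
# Invented PI fluorescence intensities: a 2N peak near 200 and a 4N peak near 400.
pi_intensity = np.concatenate([
    rng.normal(200, 12, 850),   # G0/G1 (2N DNA content)
    rng.normal(300, 30, 60),    # S phase, spread between the peaks
    rng.normal(400, 15, 90),    # G2+M (4N DNA content)
])

# Hard gates on DNA content; placeholder thresholds, not a fitted model.
g1_fraction  = np.mean(pi_intensity < 250)
g2m_fraction = np.mean(pi_intensity > 350)
s_fraction   = 1.0 - g1_fraction - g2m_fraction

print(f"G0/G1 {g1_fraction:.1%}, S {s_fraction:.1%}, G2+M {g2m_fraction:.1%}")
```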
Data were expressed as mean ± s.e.m., and differences between the two groups were analyzed with Student's t-test, with P < 0.05 considered statistically significant.
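Consistent with the statistical approach just described (mean ± s.e.m., Student's t-test, P < 0.05), a minimal comparison of sphere counts between GSI- and DMSO-treated cultures could look like the sketch below; the counts are invented for illustration.

```python
import numpy as np
from scipy import stats

# Invented primary-sphere counts from replicate cultures.
dmso_counts = np.array([118, 126, 131, 122])
gsi_counts  = np.array([74, 81, 69, 77])

def mean_sem(values):
    """Mean and standard error of the mean for one group."""
    return values.mean(), values.std(ddof=1) / np.sqrt(len(values))

t_stat, p_value = stats.ttest_ind(dmso_counts, gsi_counts)

for label, values in [("DMSO", dmso_counts), ("GSI", gsi_counts)]:
    m, sem = mean_sem(values)
    print(f"{label}: {m:.1f} ± {sem:.1f} spheres")
print(f"Student's t-test: t={t_stat:.2f}, p={p_value:.4f}")
```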
null
null
null
null
[ "Background", "Glioma samples", "Neurosphere culture", "Neurosphere assays", "Immunofluorescence", "Quantitative RT-PCR", "DNA content analysis", "Statistics", "Results", "Formation of neurosphere-like colonies from primary glioma specimens", "Blockade of Notch signaling attenuates the proliferation and self-renewal ability and promotes differentiation of normal NSCs", "Decreased proliferation and self-renewal ability of GSCs upon GSI treatment", "Blockade of Notch signaling promotes the differentiation of GSCs", "Blockade of Notch signaling promotes the conversion of GSCs to INP-like cells", "GSCs show resistance to GSI treatment compared with NSCs", "Discussion", "The frequency of GSCs in tumor tissue", "INP-like cells in GSCs population", "Double positive cell types in the derivatives of GSCs", "GSI-resistance of GSCs at the early stage of GSI treatment", "The mechanisms of Notch signaling in regulating the proliferation and differentiation of GSCs", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Glioma, the most common tumor of the central nervous system (CNS), frequently leads to death. Glioma is derived from brain glial tissue and comprises several diverse tumor forms and grades. Treatment of malignant gliomas is often palliative due to their infiltrating nature and high recurrence. Despite advances in surgery, chemotherapy and radiation gradually result in therapy-resistance. However, genetic events that lead to gliomas are mostly unknown.\nRecent researches highlight the importance of cancer-initiating cells in the malignancy of gliomas [1-3]. These cells have been referred to as glioma stem cells (GSC), as they share similarities to normal neural stem cells (NSCs) in the brain. There is increasing evidence that malignant gliomas arise from and contain these minority tumor cells with stem cell-like properties. This subpopulation of tumor cells with the potential for self-renewal and multi-lineage differentiation that recapitulates the phenotype of the original glioma [4-8], plays an important role in glioma initiation, growth, and recurrence. Eliminating GSCs from the bulk tumor mass seems to be a prosperous therapeutic strategy [9,10]. Therefore, it is extremely important to understand the signal pathways that contribute to the formation and maintenance of GSCs.\nA number of signal pathways are involved in the formation and maintenance of stem cells, many of which are closely conserved across species. Notch signaling, an evolutionarily conserved pathway mediating direct cell-cell interaction and signaling, plays a pivotal role in the maintenance of NSCs [11]. The functions of the Notch pathway in cancer formation have been gradually established, and recent data have also implicated a role for Notch signaling in GSCs [12]. Notch is a family of hetero-dimeric transmembrane receptors composed of an extracellular domain responsible for ligand recognition, a transmembrane domain, and an intracellular domain involved in transcriptional regulation. When Notch receptor is triggered by the ligands on the neighboring cells, the intracellular domain of the Notch receptor (NICD) is released from the membrane, after successive proteolytic cleavages by the γ-secretase complex [13,14]. NICD then translocates into the nucleus and associates with the transcription factor RBP-J, the DNA recombination signal binding protein-Jκ. The NICD-RBP-J complex further recruits other co-activators, and activates the expression of downstream genes associated with cell proliferation, differentiation and apoptosis [15]. It is believed that γ-secretase inhibitors (GSI) decrease the activity of Notch signaling and slow the growth of Notch-dependent tumors such as medulloblastoma [12].\nRapid proliferation, self-renewal ability and multipotential differentiation are the hallmarks of both normal NSCs and GSCs. Similarities in the growth characteristics and gene expression patterns of normal NSCs and brain tumor CSCs suggest that pathways important for NSCs are probable targets for eliminating brain tumor CSCs. The RBP-J-mediated canonical Notch pathway plays several significant roles in the maintenance and differentiation of NSCs [16-18]. During embryogenesis, Notch signaling is required to maintain all NSC populations, and to repress the differentiation of NSCs into intermediate neural progenitors (INPs) in vivo [19-21]. Along with later development, Notch signal commits NSCs to an astroglia fate, while repressing neuronal differentiation [22]. 
In adult, Notch signaling modulates cell cycle in order to ensure brain-derived NSCs retain their self-renewal property [23].\nIncreasing evidence has shown that there is a link between tumorigenesis and aberrantly activated Notch signaling [24,25]. Notch1 and its ligands, Dll1 and Jagged1, were overexpressed in many glioma cell lines and primary human gliomas. When the expression of Notch1, Dll1 or Jagged1 was down-regulated by RNA interference, apoptosis and proliferation inhibition in multiple glioma cell lines were induced [26]. Depletion of Hey1, a member of Hes-related family downstream effectors of Notch signaling, by RNA interference also reduces proliferation of glioblastoma cells in tissue culture [27]. Moreover, the blockade of Notch signaling directly caused cell cycle exit, apoptosis, differentiation, and reduced the CD133-positive cells in medulloblastoma and glioblastoma cell lines while Notch activation enhances the expression of Nestin, promotes cell proliferation and the formation of NSC-like colonies and plays a contributing role in the brain tumor stem cells [28-30]. However, the exact roles of Notch signaling in the proliferation and differentiation of patient-derived GSCs have not been clearly elucidated.\nIn this study, we explore the roles of Notch signaling in patient-derived GSCs with parallel analysis of normal NSCs by using GSI-mediated inhibition of Notch signaling in vitro. The results showed that when Notch signaling was inhibited, the proliferation and self-renewal ability of GSCs from human primary gliomas were attenuated. In addition, the blockade of Notch signaling in GSCs increased their differentiation into the downstream neural cell types, and promoted their conversion from stem cells into INP-like cells. Interestingly, although inhibition of Notch signaling definitely decreased the proliferating GSCs in long term culture, we found that the percentage of G2+M phase-GSCs were almost undisturbed at the initial stage of GSI treatment. To summarize, our results suggested that Notch signaling maintained GSCs by promoting their self-renewal and inhibiting their differentiation into INP-like cells, and supported that Notch signal inhibitors might be prosperous candidates of the treatments targeting CSCs for gliomas.", "Glioma tissues were obtained from 9 adult patients with pathologically diagnosed grade 2 to grade 4 gliomas, at the Department of Neurosurgery in Xijing Hospital, Fourth Military Medical University, under the guidance from the Medical Ethnic Committee of the Fourth Military Medical University. The summary of the patient population is outlined in Additional file 1: Table S1.", "Neurosphere cultures were performed as described previously with some modifications [21]. Briefly, for the culture of NSCs, the brains from embryonic (E) day 12.5 C57BL/6 mice were dissected under a stereomicroscope. And for the culture of GSCs, tissues from patient specimen were acutely minced after sampling. The tissues were then washed, mechanically dissociated by repetitive pipette. Single cells were primarily plated in serum-free Dulbecco's modified Eagle's medium (DMEDM)/F12 medium containing 20 ng/ml basic fibroblast growth factor (bFGF, human recombinant, Sigma), 20 ng/ml epidermal growth factor (EGF, mouse submaxillary), the B-27 (1:50, GIBCO), penicillin (100 U/ml) and streptomycin (0.1 mg/ml). 
Cells were cultured at a density of 1 × 105 cell/ml in 24-well plates (0.5 ml/well), and were fed every 3 days by adding fresh medium supplemented with GSI or DMSO with indicated concentrations. Animal experiments were reviewed and approved by the Animal Experiment Administration Committee of the Fourth Military Medical University.", "After 7 days from primary culture the numbers of primary spheres were counted under a microscope (Additional file 1: Figure S2) [21]. And for the expression of target genes, neurospheres were harvested on the 5th day of culture for RNA extraction, cDNA synthesis, and real-time reverse transcription-polymerase chain reaction (RT-PCR). Primary neurospheres were harvested and dissociated mechanically into single cell suspensions, and were replated at 1 × 105 cells/ml in 24-well plates. Cells were then cultured for another 7 days until secondary spheres formed [31], which were quantified by counting. On the 7th day of primary culture, neurospheres were plated onto poly-D-lysine (Sigma) coated glass cover slips in DMEM/F12 containing 10% fetal bovine serum (FBS) for another 7 days. On the third day of differentiation, neurospheres were photomicrographed and their neurites were counted and measured, then on the 7th day of differentiation culture, immunofluorescence staining was performed as described below.", "Undifferentiated neurospheres were plated onto poly-D-lysine coated glass cover slips in serum-free medium for 4 h. Then cells were directly fixed in 4% paraformaldehyde at 4°C for 10 min, and incubated with primary antibodies overnight at 4°C, followed by species-specific secondary antibodies. Samples were visualized under fluorescence microscope (FV-1000, Olympus, Japan). Immunofluorescence for differentiated neurospheres was performed in a similar way. Cells were additionally counterstained with Hoechst. Primary antibodies used included rabbit anti-Nestin serum (1:200, Sigma), rabbit anti-glial fibrillary acidic protein (GFAP, 1:200, Sigma), mouse anti-mitogen-activated protein 2 (MAP2, 1:200, Sigma). FITC-conjugated goat anti-mouse IgG and Cy3-conjugated goat anti-rabbit IgG (1:400, Jackson ImmunoResearch) were used as the secondary antibodies.", "Total RNA of neurospheres was isolated using the Trizol reagent (Invitrogen). cDNA was synthesized and was used for real-time PCR with a kit (SYBR Premix EX Taq, Takara, Kyoto, Japan) and the ABI PRISM 7300 real-time PCR system, with human GAPDH and mouse β-actin as the reference controls. Primers used for real-time PCR were summarized in Additional file 1: Table S3.", "Spheres were dissociated mechanically into single cell suspensions in the culture medium. Cells were then washed and resuspended in PBS, and were fixed with ethanol at room temperature for 20 min. Cells were resuspended in PBS containing 50 μg/ml of propidium iodide and 0.1 mg/ml RNase A for 10 min, and were analyzed for ploidy using a flow cytometry (BD Biosciences). Data analysis was performed using the CellQuest software (BD Biosciences).", "Independent cultures from at least three samples were used for each experiment (Additional file 1: Table S2). For immunofluorescence, cells were counted by Image-ProPlus 6.0, and only cell bodies that were labeled with immunoreactivity were included. Proportions of immunoreactive cells in the total population of cultured cells revealed by Hoechst staining were calculated, and at least 5 microscopic fields per specimen were selected. 
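For concreteness, the quantification described here (proportions of marker-positive cells among Hoechst-stained nuclei in at least five fields per specimen, summarized as mean ± s.e.m. and compared between groups by Student's t-test at P < 0.05) amounts to the computation sketched below. The field counts are invented for illustration and are not data from the study.

```python
# Sketch of the per-field quantification and two-group comparison described
# above: fraction of immunoreactive cells among Hoechst-stained nuclei in
# >= 5 fields per specimen, mean +/- s.e.m., Student's t-test (P < 0.05).
# Counts below are invented placeholders.
import numpy as np
from scipy import stats

# (marker-positive cells, Hoechst-stained nuclei) per microscopic field
gsi_fields  = [(42, 120), (55, 140), (38, 105), (61, 150), (47, 118)]
ctrl_fields = [(21, 130), (18, 115), (25, 142), (19, 108), (23, 125)]

def percentages(fields):
    return np.array([pos / total * 100.0 for pos, total in fields])

gsi_pct, ctrl_pct = percentages(gsi_fields), percentages(ctrl_fields)

for name, pct in [("GSI", gsi_pct), ("DMSO", ctrl_pct)]:
    print(f"{name}: {pct.mean():.1f} +/- {stats.sem(pct):.1f} % positive cells")

t, p = stats.ttest_ind(gsi_pct, ctrl_pct)  # two-sample Student's t-test
print(f"t = {t:.2f}, P = {p:.4f}, significant: {p < 0.05}")
```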
For neurite analysis, neurites of 30 neurospheres from each culture in the presence of GSI or DMSO were measured. The total numbers of neurites per tumor spheres were counted via photomicrographs taken by a phase contrast microscopy, and the average of the length of neuritis per tumor spheres were measured by Image-ProPlus 6.0. Each experiment was repeated for at least three times. Data were expressed as mean ± s.e.m, and the difference between the two groups was analyzed with the Student's t-test, with P < 0.05 as statistically significant.", "[SUBTITLE] Formation of neurosphere-like colonies from primary glioma specimens [SUBSECTION] Nine specimens of gliomas were used in the current studies, including 3 oligoastrocytomas, 3 oligodendrogliomas, 2 astrocytomas, and 1 glioblastoma, and the specimens were graded according to the WHO grading scheme (Additional file 1: Table S1).\nTumor tissues were dissociated mechanically into a single cell suspension and were cultured in serum-free DMEM/F12 medium supplemented with EGF and bFGF. Seven primary gliomas of the nine gave rise to proliferating tumor spheres. Regardless of pathological subtype and grade, neurosphere-like clusters, or tumor spheres, first appeared within 72 h of primary culture and increased their numbers and diameters quickly during 7 days after the onset of the culture (Figure 1A). In order to estimate whether these tumor spheres showed NSC properties, we stained the tumor spheres from patients with anti-Nestin antibody. The result showed that these tumor spheres expressed Nestin, a marker of NSCs (Figure 1B). The multipotency of these human glioma cell-derived tumor spheres was confirmed by differentiation assay in vitro. We estimated the differentiation capacity of tumor spheres in differentiating conditions by examining the types of molecular markers expressed by neurons and glial cells. We observed that these cells could differentiate into GFAP-positive astrocyte- and MAP2-positive neuron-like cells (Figure 1C, 1D). In addition, a local recurrence tumor also could produce tumor spheres in growth medium (data not shown). Tumor spheres could be passed at least for five generations by mechanical dissociation and their stemness and multipotency could be maintained in serum-free medium supplemented with growth factors for at least one month.\nPatient glioma-derived stem cells have the ability to form neurosphere-like colonies and gave rise to the downstream neural cell types of NSCs. (A) Photomicrographs of typical primary tumor spheres from one glioma tissue at 72 h after plating. (B) Undifferentiated primary tumor spheres expressed high levels of Nestin (red), a marker of NSCs. (C, D) The tumor spheres-derived from human glioma were cultured in differentiation conditional medium for 7 days, and differentiated into neural cells expressing specific molecular markers of GFAP (C, red) and MAP2 (D, green). Scale bar, 100 μm in A, and 50 μm in BCD.\nNine specimens of gliomas were used in the current studies, including 3 oligoastrocytomas, 3 oligodendrogliomas, 2 astrocytomas, and 1 glioblastoma, and the specimens were graded according to the WHO grading scheme (Additional file 1: Table S1).\nTumor tissues were dissociated mechanically into a single cell suspension and were cultured in serum-free DMEM/F12 medium supplemented with EGF and bFGF. Seven primary gliomas of the nine gave rise to proliferating tumor spheres. 
Regardless of pathological subtype and grade, neurosphere-like clusters, or tumor spheres, first appeared within 72 h of primary culture and increased their numbers and diameters quickly during 7 days after the onset of the culture (Figure 1A). In order to estimate whether these tumor spheres showed NSC properties, we stained the tumor spheres from patients with anti-Nestin antibody. The result showed that these tumor spheres expressed Nestin, a marker of NSCs (Figure 1B). The multipotency of these human glioma cell-derived tumor spheres was confirmed by differentiation assay in vitro. We estimated the differentiation capacity of tumor spheres in differentiating conditions by examining the types of molecular markers expressed by neurons and glial cells. We observed that these cells could differentiate into GFAP-positive astrocyte- and MAP2-positive neuron-like cells (Figure 1C, 1D). In addition, a local recurrence tumor also could produce tumor spheres in growth medium (data not shown). Tumor spheres could be passed at least for five generations by mechanical dissociation and their stemness and multipotency could be maintained in serum-free medium supplemented with growth factors for at least one month.\nPatient glioma-derived stem cells have the ability to form neurosphere-like colonies and gave rise to the downstream neural cell types of NSCs. (A) Photomicrographs of typical primary tumor spheres from one glioma tissue at 72 h after plating. (B) Undifferentiated primary tumor spheres expressed high levels of Nestin (red), a marker of NSCs. (C, D) The tumor spheres-derived from human glioma were cultured in differentiation conditional medium for 7 days, and differentiated into neural cells expressing specific molecular markers of GFAP (C, red) and MAP2 (D, green). Scale bar, 100 μm in A, and 50 μm in BCD.\n[SUBTITLE] Blockade of Notch signaling attenuates the proliferation and self-renewal ability and promotes differentiation of normal NSCs [SUBSECTION] Stem cell-like cells in brain tumors share many similarities with normal neural stem/progenitor cells and may require Notch signal for their survival and growth. In vitro, NSCs proliferate and form clonal spheres referred to as neurospheres. GSI reduced the proliferation of mouse embryonic brain-derived NSCs in a dose-dependent fashion (Additional file 1: Figure S1). The number of neurospheres was decreased in the presence of GSI, compared with the control treated with DMSO (Figure 2A). In order to confirm that GSI effectively blocked Notch signaling in NSCs in our culture system, we test the expression of Hes1 and Hes5, both of which are downstream molecules of the Notch signaling [12]. Total RNA was prepared from neurospheres on the fifth day of 25 μmol/L GSI treatment and was used for RT-PCR. The expression of Hes1 and Hes5 decreased remarkably in NSCs, suggesting that GSI at this concentration could inhibit Notch signaling effectively (Figure 2B, 2C). We quantitatively analyzed the number of primary neurospheres in the presence of GSI, and found that there was a significant decrease in the number of neurospheres upon GSI treatment at 25 μmol/L (Figure 2D). In order to determine the possible effect of GSI on the NSCs self-renewal ability, we harvested the spheres and dissociated them into a single cell suspension by soft pipeting. When replated in the presence of GSI, the number of secondary neurospheres significantly decreased after 7 days culture (Figure 2E). 
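The Hes1/Hes5 expression changes reported in this subsection are reference-normalized real-time PCR values. The paper does not spell out the quantification model, but a standard 2^-ΔΔCt calculation, which is a common choice for this kind of data, would look like the sketch below; the Ct numbers are invented for illustration.

```python
# Hedged sketch of relative qPCR quantification by the common 2^-ddCt method
# (the study reports reference-normalised expression but does not state the
# exact model). Ct values below are illustrative only.

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of the target gene vs. the control sample, normalised to a reference gene."""
    d_ct_sample  = ct_target - ct_ref             # delta-Ct, treated sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # delta-Ct, control sample
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g. Hes5 in GSI-treated vs. DMSO-treated neurospheres, beta-actin reference
fold = relative_expression(ct_target=27.9, ct_ref=18.2,
                           ct_target_ctrl=25.6, ct_ref_ctrl=18.0)
print(f"Hes5 expression relative to control: {fold:.2f}-fold")  # < 1 means reduced
```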
These results suggested that the proliferation of NSCs was slowed by inhibiting Notch signaling and the self-renewal ability, a key NSC behavior, was at least partially depleted.\nBlockade of Notch signaling attenuates proliferation and self-renewal ability of normal mouse NSCs. (A) Photomicrographs of neurospheres derived from E12.5 mouse brain at 72 h after primary culture, with GSI or DMSO supplemented. (B, C) Total RNA was prepared from neurospheres on the 5th day of GSI or DMSO treatment. And the expressions of Hes1 and Hes5 were measured by RT-PCR (B) and Real-time PCR (C), with β-actin as the reference control (n = 3, Hes5, P = 0.006, Hes1, P = 0.006). (D, E) Equal number of cells (1 × 105/ml) were plated in the growth medium, and the number of primary (n = 3, P = 0.010) (D) and secondary (n = 3, P = 0.043) (E) neurospheres were counted 7 days after plating. *, P < 0.05, **, P < 0.01.\nNotch signaling has been shown to inhibit the differentiation of NSCs to INPs [21]. In our study, we tested the expression of molecular markers of INPs in primary neurospheres treated with GSI or DMSO. Quantitative RT-PCR showed that the mRNA levels of Glast, which is indicative of the frequency of NSCs, were decreased, while that of Mash1 and Tubulin α1, both of which are markers of INPs, were increased (Figure 3A, 3B). These results indicated an augmented differentiation from NSCs into INPs upon the blockade of Notch signaling by GSI.\nBlockade of Notch signaling promotes the differentiation of normal mouse NSCs into INPs and downstream neural cell types. (A, B) Total RNA was prepared from GSI or DMSO treated neurospheres derived from E12.5 mouse brain on the 5th day of culture. And the expressions of Glast, Mash1 and Tubulin α1 were measured by RT-PCR (A) and Real-time PCR (B), with β-actin as the reference control (n = 3, GLAST, P = 0.003, Mash1, P = 0.043, Tubulin α1, P = 0.046). (C, D) Immunofluorescence. Differentiated NSCs were stained with anti-GFAP, or anti-MAP2 antibodies after cultured on cover slips in differentiation conditional medium for 7 days. Stained samples were examined under a fluorescence microscope. (E, F) Quantification and comparison of neurons (MAP2+) or astrocytes (GFAP+) in GSI-treated and control NSCs. Cells were counterstained with Hoechst, to permit counting of cell nuclei in at least 5 microscopic fields per specimen (n = 3, E, P = 0.021, F, P = 0.031). *, P < 0.05, **, P < 0.01. Scale bar, 50 μm for C and D.\nTo further study the effect of inhibiting Notch signaling on NSC differentiation, we used the neurosphere differentiation assay in vitro. When spheres were cultured adherently on poly-D-lysine coated glass cover slips without growth factors, they began to differentiate into cells bearing specific markers of neurons and astrocytes. We quantitatively compared the cell types produced by neurospheres in the GSI-containing medium with that of the control. All of the neurospheres gave rise to cells with the molecular markers of neurons or astrocytes (Figure 3C, 3D). However, the percentage of MAP2+ cells increased significantly in the presence of GSI, from 29.0 ± 10.4% to 66.5 ± 8.4%, and the percentage of GFAP+ cells in GSI-treated neurospheres was elevated from 8.7 ± 3.0% to 26.9 ± 6.6% (Figure 3E, 3F). 
These results suggested that inhibiting Notch signaling in NSCs leads to an increase in the number of differentiated cells.\nStem cell-like cells in brain tumors share many similarities with normal neural stem/progenitor cells and may require Notch signal for their survival and growth. In vitro, NSCs proliferate and form clonal spheres referred to as neurospheres. GSI reduced the proliferation of mouse embryonic brain-derived NSCs in a dose-dependent fashion (Additional file 1: Figure S1). The number of neurospheres was decreased in the presence of GSI, compared with the control treated with DMSO (Figure 2A). In order to confirm that GSI effectively blocked Notch signaling in NSCs in our culture system, we test the expression of Hes1 and Hes5, both of which are downstream molecules of the Notch signaling [12]. Total RNA was prepared from neurospheres on the fifth day of 25 μmol/L GSI treatment and was used for RT-PCR. The expression of Hes1 and Hes5 decreased remarkably in NSCs, suggesting that GSI at this concentration could inhibit Notch signaling effectively (Figure 2B, 2C). We quantitatively analyzed the number of primary neurospheres in the presence of GSI, and found that there was a significant decrease in the number of neurospheres upon GSI treatment at 25 μmol/L (Figure 2D). In order to determine the possible effect of GSI on the NSCs self-renewal ability, we harvested the spheres and dissociated them into a single cell suspension by soft pipeting. When replated in the presence of GSI, the number of secondary neurospheres significantly decreased after 7 days culture (Figure 2E). These results suggested that the proliferation of NSCs was slowed by inhibiting Notch signaling and the self-renewal ability, a key NSC behavior, was at least partially depleted.\nBlockade of Notch signaling attenuates proliferation and self-renewal ability of normal mouse NSCs. (A) Photomicrographs of neurospheres derived from E12.5 mouse brain at 72 h after primary culture, with GSI or DMSO supplemented. (B, C) Total RNA was prepared from neurospheres on the 5th day of GSI or DMSO treatment. And the expressions of Hes1 and Hes5 were measured by RT-PCR (B) and Real-time PCR (C), with β-actin as the reference control (n = 3, Hes5, P = 0.006, Hes1, P = 0.006). (D, E) Equal number of cells (1 × 105/ml) were plated in the growth medium, and the number of primary (n = 3, P = 0.010) (D) and secondary (n = 3, P = 0.043) (E) neurospheres were counted 7 days after plating. *, P < 0.05, **, P < 0.01.\nNotch signaling has been shown to inhibit the differentiation of NSCs to INPs [21]. In our study, we tested the expression of molecular markers of INPs in primary neurospheres treated with GSI or DMSO. Quantitative RT-PCR showed that the mRNA levels of Glast, which is indicative of the frequency of NSCs, were decreased, while that of Mash1 and Tubulin α1, both of which are markers of INPs, were increased (Figure 3A, 3B). These results indicated an augmented differentiation from NSCs into INPs upon the blockade of Notch signaling by GSI.\nBlockade of Notch signaling promotes the differentiation of normal mouse NSCs into INPs and downstream neural cell types. (A, B) Total RNA was prepared from GSI or DMSO treated neurospheres derived from E12.5 mouse brain on the 5th day of culture. And the expressions of Glast, Mash1 and Tubulin α1 were measured by RT-PCR (A) and Real-time PCR (B), with β-actin as the reference control (n = 3, GLAST, P = 0.003, Mash1, P = 0.043, Tubulin α1, P = 0.046). (C, D) Immunofluorescence. 
Differentiated NSCs were stained with anti-GFAP, or anti-MAP2 antibodies after cultured on cover slips in differentiation conditional medium for 7 days. Stained samples were examined under a fluorescence microscope. (E, F) Quantification and comparison of neurons (MAP2+) or astrocytes (GFAP+) in GSI-treated and control NSCs. Cells were counterstained with Hoechst, to permit counting of cell nuclei in at least 5 microscopic fields per specimen (n = 3, E, P = 0.021, F, P = 0.031). *, P < 0.05, **, P < 0.01. Scale bar, 50 μm for C and D.\nTo further study the effect of inhibiting Notch signaling on NSC differentiation, we used the neurosphere differentiation assay in vitro. When spheres were cultured adherently on poly-D-lysine coated glass cover slips without growth factors, they began to differentiate into cells bearing specific markers of neurons and astrocytes. We quantitatively compared the cell types produced by neurospheres in the GSI-containing medium with that of the control. All of the neurospheres gave rise to cells with the molecular markers of neurons or astrocytes (Figure 3C, 3D). However, the percentage of MAP2+ cells increased significantly in the presence of GSI, from 29.0 ± 10.4% to 66.5 ± 8.4%, and the percentage of GFAP+ cells in GSI-treated neurospheres was elevated from 8.7 ± 3.0% to 26.9 ± 6.6% (Figure 3E, 3F). These results suggested that inhibiting Notch signaling in NSCs leads to an increase in the number of differentiated cells.\n[SUBTITLE] Decreased proliferation and self-renewal ability of GSCs upon GSI treatment [SUBSECTION] Although Notch signaling has been shown to play critical roles in the maintenance of normal NSCs, whether this signaling might be involved in tumor stem cells is not fully clear. To determine whether Notch signaling activity was required during growth of GSCs, we investigate the effect of GSI on proliferation and self-renewal of GSCs. After Notch signaling was inhibited in GSCs by GSI treatment at 25 μmol/L, the expressions of Hes5 and Hes1, the specific and direct downstream targets of the Notch/RBP-J transcription complex were identified by RT-PCR and real-time PCR as described previously. After 5 days of GSI treatment, Hes5 and Hes1 expression markedly decreased (Figure 4A, 4B), and no obvious cell death was observed, indicating no effect on cell viability (data not shown). These results indicated that Notch signaling was efficiently blocked by GSI treatment in GSCs.\nAttenuated proliferation and self-renewal ability of patient-derived GSCs on the blockade of Notch signaling. (A, B) Total RNA was prepared from primary tumor spheres on the 5th day in the presence of GSI or DMSO. And the expressions of Hes5 and Hes1 were measured by RT-PCR (A) and Real-time PCR (B), with human GAPDH as the reference control (n = 5, Hes5, P = 0.046, Hes1, P = 0.002). (C, D) Equal number of cells (1 × 105/ml) form brain tumor tissues were plated in the growth medium, the number of primary (n = 4, P = 0.008) (C) and secondary (n = 4, P = 0.041) (D) tumor spheres were counted 7 days after plating. *, P < 0.05, **, P < 0.01.\nNext, we quantitatively compared the proliferation and self-renewal ability of GSI-treated GSCs with that of the controls. The number of the primary tumor spheres in the presence of GSI decreased significantly, from 51.5 ± 2.8 to 34.8 ± 3.3 (Figure 4C). Self-renewal ability of the tumor spheres was assayed by dissociating and replating the primary tumor spheres. 
Our results showed that GSI-treated GSCs generated a decreased number of secondary tumor spheres (17.5 ± 2.3), than the number of controls (31.7 ± 5.6) (Figure 4D). These results showed that the proliferation and self-renewal ability of GSCs also could be attenuated by inhibiting Notch signaling.\nAlthough Notch signaling has been shown to play critical roles in the maintenance of normal NSCs, whether this signaling might be involved in tumor stem cells is not fully clear. To determine whether Notch signaling activity was required during growth of GSCs, we investigate the effect of GSI on proliferation and self-renewal of GSCs. After Notch signaling was inhibited in GSCs by GSI treatment at 25 μmol/L, the expressions of Hes5 and Hes1, the specific and direct downstream targets of the Notch/RBP-J transcription complex were identified by RT-PCR and real-time PCR as described previously. After 5 days of GSI treatment, Hes5 and Hes1 expression markedly decreased (Figure 4A, 4B), and no obvious cell death was observed, indicating no effect on cell viability (data not shown). These results indicated that Notch signaling was efficiently blocked by GSI treatment in GSCs.\nAttenuated proliferation and self-renewal ability of patient-derived GSCs on the blockade of Notch signaling. (A, B) Total RNA was prepared from primary tumor spheres on the 5th day in the presence of GSI or DMSO. And the expressions of Hes5 and Hes1 were measured by RT-PCR (A) and Real-time PCR (B), with human GAPDH as the reference control (n = 5, Hes5, P = 0.046, Hes1, P = 0.002). (C, D) Equal number of cells (1 × 105/ml) form brain tumor tissues were plated in the growth medium, the number of primary (n = 4, P = 0.008) (C) and secondary (n = 4, P = 0.041) (D) tumor spheres were counted 7 days after plating. *, P < 0.05, **, P < 0.01.\nNext, we quantitatively compared the proliferation and self-renewal ability of GSI-treated GSCs with that of the controls. The number of the primary tumor spheres in the presence of GSI decreased significantly, from 51.5 ± 2.8 to 34.8 ± 3.3 (Figure 4C). Self-renewal ability of the tumor spheres was assayed by dissociating and replating the primary tumor spheres. Our results showed that GSI-treated GSCs generated a decreased number of secondary tumor spheres (17.5 ± 2.3), than the number of controls (31.7 ± 5.6) (Figure 4D). These results showed that the proliferation and self-renewal ability of GSCs also could be attenuated by inhibiting Notch signaling.\n[SUBTITLE] Blockade of Notch signaling promotes the differentiation of GSCs [SUBSECTION] The previous result indicated that inhibiting Notch signaling promotes the normal NSCs to differentiate into neurons and astrocytes, both of which are the downstream neural cell types of NSCs. Therefore, we investigated whether the GSI treatment promoted GSCs differentiation. Interestingly, after 3 days, approximately 18.7 ± 0.9 neurites grew out from each tumor spheres cultured in the medium with GSI, compared to only 6.7 ± 0.9 from that cultured with DMSO. Meanwhile, the average length of neurites increased from 206.0 ± 13.1 μm in tumor spheres culture with DMSO to 269.7 ± 28.4 μm in GSI-treated tumor spheres (Figure 5B, 5C). In order to further confirm whether these cells are the downstream neural cell types, immunofluorescence was performed on differentiated primary GSCs using the specific markers of neurons and astrocytes on the 7th day in differentiating conditional medium (Figure 5D, 5E). 
We quantitatively compared the cell types produced by neurospheres in the GSI-treated group with that of the control. The percentages of MAP2+ cells and GFAP+ cells increased significantly, as high as 51.6 ± 6.1% and 44.0 ± 1.7%, respectively (Figure 5F, 5G). These results suggest that inhibiting Notch signaling also promotes the differentiation of GSCs.\nAugmented neurite outgrowth and enhanced differentiation of patient-derived tumor spheres on the blockade of Notch signaling. (A) Photomicrographs of differentiated tumor spheres at 72 h after plated in differentiation conditional medium supplemented with GSI or DMSO. (B, C) Comparison of neurites number (n = 3, P < 0.001) (B) and length (n = 3, P = 0.041) (C) between tumor spheres in the presence of GSI and DMSO. (D, E) Immunofluorescence. Differentiated tumor spheres were stained with anti-GFAP, or anti-MAP2 antibodies after cultured on cover slips in differentiation conditional medium for 7 days. Stained samples were examined under a fluorescence microscope. (F, G) Quantification and comparison of the percentages of neurons (MAP2+) (n = 3, P < 0.001) (F) or astrocytes (GFAP+) (n = 3, P < 0.001) (G) in the total cell number revealed by Hoechst counterstaining, between GSI-treated and control GSCs. Scale bar, 100 μm for A, and 50 μm for D and E. *, P < 0.05, **, P < 0.01.\nThe previous result indicated that inhibiting Notch signaling promotes the normal NSCs to differentiate into neurons and astrocytes, both of which are the downstream neural cell types of NSCs. Therefore, we investigated whether the GSI treatment promoted GSCs differentiation. Interestingly, after 3 days, approximately 18.7 ± 0.9 neurites grew out from each tumor spheres cultured in the medium with GSI, compared to only 6.7 ± 0.9 from that cultured with DMSO. Meanwhile, the average length of neurites increased from 206.0 ± 13.1 μm in tumor spheres culture with DMSO to 269.7 ± 28.4 μm in GSI-treated tumor spheres (Figure 5B, 5C). In order to further confirm whether these cells are the downstream neural cell types, immunofluorescence was performed on differentiated primary GSCs using the specific markers of neurons and astrocytes on the 7th day in differentiating conditional medium (Figure 5D, 5E). We quantitatively compared the cell types produced by neurospheres in the GSI-treated group with that of the control. The percentages of MAP2+ cells and GFAP+ cells increased significantly, as high as 51.6 ± 6.1% and 44.0 ± 1.7%, respectively (Figure 5F, 5G). These results suggest that inhibiting Notch signaling also promotes the differentiation of GSCs.\nAugmented neurite outgrowth and enhanced differentiation of patient-derived tumor spheres on the blockade of Notch signaling. (A) Photomicrographs of differentiated tumor spheres at 72 h after plated in differentiation conditional medium supplemented with GSI or DMSO. (B, C) Comparison of neurites number (n = 3, P < 0.001) (B) and length (n = 3, P = 0.041) (C) between tumor spheres in the presence of GSI and DMSO. (D, E) Immunofluorescence. Differentiated tumor spheres were stained with anti-GFAP, or anti-MAP2 antibodies after cultured on cover slips in differentiation conditional medium for 7 days. Stained samples were examined under a fluorescence microscope. (F, G) Quantification and comparison of the percentages of neurons (MAP2+) (n = 3, P < 0.001) (F) or astrocytes (GFAP+) (n = 3, P < 0.001) (G) in the total cell number revealed by Hoechst counterstaining, between GSI-treated and control GSCs. 
Scale bar, 100 μm for A, and 50 μm for D and E. *, P < 0.05, **, P < 0.01.\n[SUBTITLE] Blockade of Notch signaling promotes the conversion of GSCs to INP-like cells [SUBSECTION] The previous report indicated that blockade of Notch signaling in the CNS increased the frequency of INPs in vivo [21]. Precocious differentiation of NSCs into INPs might exhaust the NSC pool. Therefore, we investigated the effect of inhibiting Notch signaling on the frequency of GSCs and INP-like cells in glioma specimen. In an attempt to distinguish GSCs and INP-like tumor cells, we examined the expression of several markers that could distinguish NSCs from INPs by quantitive RT-PCR [20,21]. Compared with the controls, the primary tumor spheres in the presence of GSI expressed lower Glast and CD133, which are indicative of the frequency of NSCs and GSCs. In contrast, Mash1 was highly expressed in GSI-treated tumor spheres (Figure 6A, 6B), although the expression level of another INP marker, Tubulin α1 was comparable between the GSI-treated tumor spheres and that of control. Altogether, these results suggested that blockade of Notch signaling may promote the conversion of GSCs to INP-like tumor cells.\nGSI-treated primary tumor spheres show similar gene expression profile of INPs. (A, B) cDNA was prepared from total RNA isolated from primary tumor spheres, treated with GSI or DMSO for 5 days respectively, and the expressions of GLAST (P = 0.002), CD133 (P = 0.015), Mash1 (P = 0.050) and Tubulin α1 (P = 0.116), were measured by RT-PCR (A) and Real-time PCR (n = 5) (B), with GAPDH as a reference control. *, P < 0.05, **, P < 0.01.\nThe previous report indicated that blockade of Notch signaling in the CNS increased the frequency of INPs in vivo [21]. Precocious differentiation of NSCs into INPs might exhaust the NSC pool. Therefore, we investigated the effect of inhibiting Notch signaling on the frequency of GSCs and INP-like cells in glioma specimen. In an attempt to distinguish GSCs and INP-like tumor cells, we examined the expression of several markers that could distinguish NSCs from INPs by quantitive RT-PCR [20,21]. Compared with the controls, the primary tumor spheres in the presence of GSI expressed lower Glast and CD133, which are indicative of the frequency of NSCs and GSCs. In contrast, Mash1 was highly expressed in GSI-treated tumor spheres (Figure 6A, 6B), although the expression level of another INP marker, Tubulin α1 was comparable between the GSI-treated tumor spheres and that of control. Altogether, these results suggested that blockade of Notch signaling may promote the conversion of GSCs to INP-like tumor cells.\nGSI-treated primary tumor spheres show similar gene expression profile of INPs. (A, B) cDNA was prepared from total RNA isolated from primary tumor spheres, treated with GSI or DMSO for 5 days respectively, and the expressions of GLAST (P = 0.002), CD133 (P = 0.015), Mash1 (P = 0.050) and Tubulin α1 (P = 0.116), were measured by RT-PCR (A) and Real-time PCR (n = 5) (B), with GAPDH as a reference control. *, P < 0.05, **, P < 0.01.\n[SUBTITLE] GSCs show resistance to GSI treatment compared with NSCs [SUBSECTION] To gain further perspective on the dynamics of cellular proliferation accompanying differentiation, we treated NSCs and tumor spheres at a series of time points following GSI treatment with propidium iodide and examined cell cycle via FACS analysis. 
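The cell-cycle comparison that follows boils down to testing, at each time point, whether the G2+M fraction differs between GSI- and DMSO-treated cultures across the three independent experiments. A minimal sketch of that per-time-point comparison is given below; the percentages are invented placeholders standing in for the phase fractions exported from the flow-cytometry software.

```python
# Minimal sketch of the per-time-point comparison of G2+M fractions
# (GSI vs. DMSO) from three independent experiments. The percentages are
# invented placeholders for phase fractions obtained after PI staining.
from scipy import stats

g2m_pct = {  # % of cells in G2+M, three replicates per group
    "24 h": {"GSI": [15.0, 15.5, 16.0], "DMSO": [21.0, 22.5, 21.8]},
    "48 h": {"GSI": [11.2, 12.0, 11.6], "DMSO": [20.4, 21.1, 19.8]},
    "72 h": {"GSI": [7.9, 8.2, 8.6],   "DMSO": [19.5, 20.2, 20.9]},
}

for timepoint, groups in g2m_pct.items():
    t, p = stats.ttest_ind(groups["GSI"], groups["DMSO"])
    flag = "*" if p < 0.05 else "n.s."
    print(f"{timepoint}: GSI vs DMSO, P = {p:.3f} {flag}")
```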
Compared with the controls, nearly 15.5 ± 0.5% of the NSCs treated with GSI for 24 h are in the G2+M phase, and then sharply decreased to less than 8.2 ± 1.7% at 72 h (Figure 7A). In contrast, the ratio of GSCs in the G2+M phase were slightly elevated at 48 h, and then declined insignificantly at 72 h (Figure 7B). The result showed that GSI treatment significantly reduced the ratio of the G2+M phase NSCs, but there is no obvious effect on the cell cycle of GSCs. Therefore, NSCs are more sensitive to GSI, while GSCs display a certain degree of resistance to GSI at the early stage of the treatment.\nDifferent effects of GSI-treatment on the cell cycle of NSCs or GSCs. (A, B) Comparisons of cell cycle between NSCs (A) and GSCs (B) in the presence of GSI or DMSO at 24 h, 48 h, 72 h using flow cytometry. Data represent as mean ± SD from three independent experiments. (n = 3, 24 h, P = 0.006, 48 h, P = 0.013). *, P < 0.05, **, P < 0.01.", "Tumor stem cells such as GSCs have been considered as a novel target for the therapy of the malignant tumors, because these cells are supposed to play an important role in tumor initiation, growth, and recurrence [4]. Similarities in the growth characteristics and gene expression patterns of normal NSCs and GSCs suggest that pathways important for NSCs are probable targets for oncogenic brain tumor stem cells. 
In the present study, 1) we isolated GSCs from human glioma tissues; 2) like NSCs, these cells had the ability to form spheres in serum-free medium supplemented with growth factors and to differentiate into downstream neural cell-like cells; 3) upon GSI treatment, the numbers of GSC-derived primary and secondary neurospheres were markedly reduced compared with DMSO-treated cultures, indicating that in long-term culture (7-14 days) the proliferation and self-renewal ability of GSCs was ultimately reduced when Notch signaling was blocked; 4) however, within 72 h of culture, GSCs showed a certain degree of GSI resistance, with undisturbed proliferation upon GSI treatment; 5) in addition, we showed that on blocking Notch signaling, GSCs were strongly biased to differentiate into INP-like cells, and ultimately into neurons and glial cells in vitro. All these results suggest a promising preclinical application of Notch signaling antagonist (e.g., GSI)-based CSC-targeting therapy in malignant glioma patients.\n[SUBTITLE] The frequency of GSCs in tumor tissue [SUBSECTION] Although CSCs have been identified as an important factor in tumor initiation and growth, their characteristics remain obscure with respect to their heterogeneity. In our experiments, although seven of the nine human gliomas gave rise to proliferating tumor spheres, different numbers of spheres arose from equal numbers of primary glioma cells among tumor samples. It should be noted that the specimens which did not give rise to proliferating neurospheres came from patient #1 (oligoastrocytoma, grade II) and patient #7 (anaplastic astrocytoma, grade III), whose tumor grades were comparable with those of the other specimens (Additional file 1: Table S1). Because samples are usually drawn from the periphery of the ablated tumor bulk, these two specimens might have contained a certain amount of normal tissue. Overall, equal numbers of cells from high-grade and recurrent tumors, such as giant cell glioblastoma (WHO grade IV) and oligoastrocytoma (WHO grade III, recurrent tumor), often generated more primary tumor spheres. Owing to the limited number of samples, our accumulated results cannot at present statistically support the conclusion that high-grade tumors contain more GSCs. However, the tendency described above indicates that the original frequency of GSCs might differ among samples according to tumor grade, or that GSCs from high-grade tumor tissues might show more typical stem cell properties, with higher proliferation and self-renewal ability.\n[SUBTITLE] INP-like cells in GSCs population [SUBSECTION] In normal development of the brain, neurons and glia are generated from both NSCs and the more limited INPs, and blockade of Notch signaling in NSCs has been shown to promote their conversion into INPs [20,21]. GSCs can differentiate into neurons and astrocytes in culture medium with serum, as shown by our results and previous studies [4]. Like INPs, it is possible that intermediate glioma progenitor cells (IGPs) also exist, linking a GSC-IGP-neuron/glia hierarchy in the tumor microenvironment [32]. Our results show that blocking Notch signaling in the primary tumor spheres leads to a down-regulated mRNA level of CD133, currently a well-accepted marker of GSCs, indicating a decrease of GSCs. Simultaneously, the mRNA levels of Hes5 and Glast, two markers highly expressed in NSCs, were also decreased, while that of Mash1, a marker up-regulated in INPs, was increased in primary tumor spheres after GSI treatment [20]. In addition, Tubulin α1, an INP marker, does not seem to distinguish IGPs from GSCs. Since GSI-treated primary tumor spheres could still give rise to secondary spheres, albeit far fewer than those derived from control primary spheres, CD133low/GLASTlow/Hes5low/Mash1high IGPs might exist, with their number increased and their proliferating ability decreased after the blockade of Notch signaling. Therefore, inhibiting Notch signaling might have therapeutic potential for human gliomas by exhausting GSCs and instructing them toward less proliferative IGPs and differentiated neural cell types.\n[SUBTITLE] Double positive cell types in the derivatives of GSCs [SUBSECTION] Although tumor-derived stem cells have many similarities to normal NSCs, it is important to note that differences might exist between them. A sphere differentiation assay on the specimen from patient #4 demonstrated that GSCs could give rise not only to neurons and glia but also to a few cells that expressed both Map2 and GFAP, the molecular markers of neurons and astrocytes, respectively (Figure S3). Previous studies have reported similar abnormal cells in cultures derived from pediatric and adult brain tumors [4,33], indicating that such dual-fate cells might represent a significant fraction of GSC-derived progeny. These Map2+/GFAP+ cells sometimes appeared larger than other cells derived from the same sphere (Figure S3). In addition, the GFAP-positive glial cells derived from GSCs showed abnormal morphology, with slim cell bodies and neurites, compared with those derived from NSCs (Figure 3D, Figure 5E). Although morphological differences might exist between mouse and human glial cells, previous research on normal human tissue demonstrated that GFAP staining of human glial cells shows a morphology similar to that of mouse glial cells [34]. Therefore, the morphological difference of the GFAP-positive glial cells might be attributed to whether they are NSC-derived or GSC-derived. Genetically, the generation of the double-positive cells and dysmorphic glial cells may be accompanied by gene mutations or abnormal activation of certain signaling pathways, leading to an aberrant reprogramming procedure in GSCs compared with the normal differentiation of NSCs.\n[SUBTITLE] GSI-resistance of GSCs at the early stage of GSI treatment [SUBSECTION] In our study, we found that the numbers of both primary and secondary tumor spheres were decreased in the long run (7-day culture) after GSI treatment compared with the controls. However, cell cycle analysis showed that although Notch blockade significantly reduced the ratio of the G2+M phase in NSCs, there was no obvious effect on the percentage of proliferating GSCs within 72 h after GSI treatment. These results indicate a further distinction between the two cell types: NSCs are more sensitive to GSI, while GSCs display a certain degree of resistance at the early stage of the treatment. Owing to the limited amount of primary glioma specimens, the cell cycle analyses were performed on primary tumor spheres from three independent tumor samples. Therefore, the resistance of GSCs to GSI at the early stage of the cell cycle might be a general characteristic of gliomas, or it might only represent a few cases of glioma patients who would display resistance in preclinical trials of GSI treatment. Previous research showed that treatment with a dipeptide GSI resulted in a marked reduction in medulloblastoma growth [35]. 
More recently, a clinical trial for a Notch inhibitor, MK0752 (developed by Merck, Whitehouse Station, NJ), has been launched for T-cell acute lymphatic leukemia and breast cancer patients (http://www.clinicaltrials.gov/ct/show/NCT00100152). Although GSI seems to be a promising reagent targeting GSCs by interfering Notch signaling, our results suggested that its effect might be limited to some glioma patients. Therefore, drug combination should be used at the early stage of therapy. However, since our results are based on in vitro culture system of patient-derived samples, more accurate conclusion could be drawn from animal models or preclinical trials in future study.\n[SUBTITLE] The mechanisms of Notch signaling in regulating the proliferation and differentiation of GSCs [SUBSECTION] The mechanistic links between Notch signaling and the proliferation and differentiation of GSCs were presumably governed by more than one mechanism. In our study, the decreased proliferation and increased differentiation of GSCs upon GSI treatment are accompanied with down-regulation of Hes1 and Hes5, the canonical Notch downstream effectors. In addition, the expression level of Mash1, a proneural gene antagonized by the Hes genes was up-regulated in GSI-treated primary tumor spheres. Therefore, the canonical Notch-CBF1-Hes axis seems also play critical roles in the proliferation and differentiation of GSCs, as its function in NSCs [11].\nOn the other hand, Notch signaling has been shown to have both negative and positive influences on cell cycle progression [11,36]. In the present study, we observed that the proliferation of GSCs decreased significantly in the long term culture, although GSI resistance of three glioma samples was present (see above). Mutations of p53, pTEN and H-Ras, have been identified in tumor tissues of giloma patients. And Notch signaling has been shown to crosstalk with p53 and pTEN signaling pathway, two major regulators of cell cycle [37,38]. In addition, down-regulation of Notch signaling in H-Ras-transformed human breast cells led to a significant decrease in their proliferation [39]. Therefore, how Notch signaling promotes the cell cycle of GSCs is yet to be explored, on the scenery of the complex signal crosstalk and genetic circuitry.\nThe mechanistic links between Notch signaling and the proliferation and differentiation of GSCs were presumably governed by more than one mechanism. In our study, the decreased proliferation and increased differentiation of GSCs upon GSI treatment are accompanied with down-regulation of Hes1 and Hes5, the canonical Notch downstream effectors. In addition, the expression level of Mash1, a proneural gene antagonized by the Hes genes was up-regulated in GSI-treated primary tumor spheres. Therefore, the canonical Notch-CBF1-Hes axis seems also play critical roles in the proliferation and differentiation of GSCs, as its function in NSCs [11].\nOn the other hand, Notch signaling has been shown to have both negative and positive influences on cell cycle progression [11,36]. In the present study, we observed that the proliferation of GSCs decreased significantly in the long term culture, although GSI resistance of three glioma samples was present (see above). Mutations of p53, pTEN and H-Ras, have been identified in tumor tissues of giloma patients. And Notch signaling has been shown to crosstalk with p53 and pTEN signaling pathway, two major regulators of cell cycle [37,38]. 
In addition, down-regulation of Notch signaling in H-Ras-transformed human breast cells led to a significant decrease in their proliferation [39]. Therefore, how Notch signaling promotes the cell cycle of GSCs is yet to be explored, on the scenery of the complex signal crosstalk and genetic circuitry.", "Although CSCs have been identified as an important factor in tumor initiation and growth, their characteristics remain obscure concerning their heterogeneity. Here we found in our experiments that although seven of the nine human gliomas gave rise to proliferating tumor spheres, different numbers of spheres arisen from equal primary glioma cells among tumor samples. It should be noted that the specimens which did not give rise to proliferating neurospheres were patient #1 (oligoastrocytoma, gradeII) and #7 (anaplastic astrocytoma, grade III), with comparable tumor grades with the other specimens (Additional file 1: Table S1). Because samples are usually drawn from the periphery of the ablated tumor bulk, these two specimens might contain a certain amount of normal tissues. Overall, equal number of cells from high-grade and recurrent tumors, such as giant cell glioblastoma (WHO grade IV) and oligoastrocytoma (WHO grade III, recurrent tumor) often generate more primary tumor spheres. Due to limited number of samples, our accumulated results could not statistically lead to the conclusion that high-grade tumors contain more GSCs at present. However, the tendency described above indicated that the original frequency of GSCs might be different among samples according to tumor grades, or the GSCs from high-grade tumor tissues might show more typical properties of stem cells with higher proliferation and self-renewal ability.", "In normal development of the brain, neurons and glia are generated from both NSCs and more limited INPs. And blockade of Notch signaling in NSCs have been shown to promote their conversion into INPs [20,21]. GSCs can differentiate into neurons and astrocytes in culture medium with serum, as shown by our results and previous studies [4]. Like INPs, it is possible that an intermediate glioma progenitor cells (IGPs) also exist, linking the GSCs-IGPs-Neuron/glia hierarchy in tumor microenvironment [32]. Our results show that blocking Notch signaling in the primary tumor spheres leads to down-regulated mRNA level of CD133, a well accepted marker of GSCs at present, indicating a decrease of GSCs. Simultaneously, the mRNA level of Hes5 and Glast, two markers highly expressed in NSCs were also decreased, while that of Mash1, a marker up-regulated in INPs was increased in primary tumor spheres after being treated with GSI [20]. In addition, Tubulin α1, an INP marker, seems can not distinguish IGPs from GSCs. Since GSI-treated primary tumor spheres could still gave rise to secondary spheres, unless much fewer than that derived from control primary spheres, the CD133low/GLASTlow/Hes5low/Mash1high IGPs might exist, with its number increased and proliferating ability decreased after the blockade of Notch signaling. Therefore, inhibiting Notch signaling might have therapeutic potential for human gliomas by exhausting GSCs and instruct them into less proliferative IGPs and differentiated neural cell types.", "Although tumor-derived stem cells had many similarities to normal NSCs, it is important to note that differences might exist between them. 
Sphere differentiation assay on the specimen of 4# patient demonstrated that GSCs could give rise not only to neurons and glia but also to a few cells that expressed both Map2 and GFAP, the molecular markers of astrocyte and neuron, respectively (Figure S3). Previous studies have reported similar abnormal cells in culture derived from pediatric and adult brain tumors [4,33], indicating that such dual-fate cells might represent a significant fraction of GSCs derived progeny. These Map2+/GFAP+ cells sometimes appeared larger than other cells derived from the same sphere (Figure S3). In addition, the GFAP positive glial cells derived from GSCs showed abnormal morphology, with slim cell bodies and neurites, compared with that derived from NSCs (Figure 3D, Figure 5E). Although morphological differences might exist between mouse and human glial cells, previous research on normal human tissue demonstrated that GFAP staining of human glial cells showed similar morphology with that of mouse glial cells [34]. Therefore, the morphological difference of GFAP positive glial cells might be attributed to whether they are NSC-derived or GSC-derived. Genetically, the generation of the double-positive cells and dysmorphic glial cells may accompany with gene mutation or abnormal activation of some signal pathways, leading to aberrant reprogramming procedure of GSCs, compared with normal differentiation of NSCs.", "In our study, we found that the numbers of both primary and secondary tumor spheres were decreased in the long run (7-day culture) after GSI treatment compared with the controls. However, cell cycle analysis results showed that although Notch blockade significantly reduced the ratio of the G2+M phase in NSCs, there is no obvious effect on the percentage of proliferating GSCs within 72 h after GSI treatment. These results indicate that, compared with NSCs, another distinctive feature of GSCs was that the former are more sensitive to GSI, while the latter displays a certain degree of resistance to GSI treatment at the early stage of the treatment. Due to the limited amount of primary glioma specimens, the cell cycle analyses were executed on primary tumor spheres from three independent tumor samples. Therefore, the resistance to GSI in GSCs at the early stage of the cell cycle might be a general characteristic in gliomas, or it only represents a few cases of glioma patients which might display resistance in the preclinical trial of GSI treatment. Previous research show that treatment with dipeptide GSI resulted in a marked reduction in medulloblastoma growth [35]. More recently, a clinical trial for a Notch inhibitor, MK0752 (developed by Merck, Whitehouse Station, NJ), has been launched for T-cell acute lymphatic leukemia and breast cancer patients (http://www.clinicaltrials.gov/ct/show/NCT00100152). Although GSI seems to be a promising reagent targeting GSCs by interfering Notch signaling, our results suggested that its effect might be limited to some glioma patients. Therefore, drug combination should be used at the early stage of therapy. However, since our results are based on in vitro culture system of patient-derived samples, more accurate conclusion could be drawn from animal models or preclinical trials in future study.", "The mechanistic links between Notch signaling and the proliferation and differentiation of GSCs were presumably governed by more than one mechanism. 
In our study, the decreased proliferation and increased differentiation of GSCs upon GSI treatment are accompanied with down-regulation of Hes1 and Hes5, the canonical Notch downstream effectors. In addition, the expression level of Mash1, a proneural gene antagonized by the Hes genes was up-regulated in GSI-treated primary tumor spheres. Therefore, the canonical Notch-CBF1-Hes axis seems also play critical roles in the proliferation and differentiation of GSCs, as its function in NSCs [11].\nOn the other hand, Notch signaling has been shown to have both negative and positive influences on cell cycle progression [11,36]. In the present study, we observed that the proliferation of GSCs decreased significantly in the long term culture, although GSI resistance of three glioma samples was present (see above). Mutations of p53, pTEN and H-Ras, have been identified in tumor tissues of giloma patients. And Notch signaling has been shown to crosstalk with p53 and pTEN signaling pathway, two major regulators of cell cycle [37,38]. In addition, down-regulation of Notch signaling in H-Ras-transformed human breast cells led to a significant decrease in their proliferation [39]. Therefore, how Notch signaling promotes the cell cycle of GSCs is yet to be explored, on the scenery of the complex signal crosstalk and genetic circuitry.", "Our data indicate that like NSCs, Notch signaling maintains the patient-derived GSCs by promoting their self-renewal and inhibiting their differentiation, and support that Notch signal inhibitor might be a prosperous candidate of the drug treatment targeting CSCs for gliomas, however, with GSI-resistance at the early stage of treatment.", "The authors declare that they have no competing interests.", "YYH and MHZ carried out tissue culture, animal experiments and gene expression analyses, participated in study design and manuscript preparation. GC and LLi carried out specimens collection. LLiang, FG and YNW helped histological examination and immunohistochemistry staining. LAF and HH designed the study and prepared the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/82/prepub\n" ]
[ "Background", "Methods", "Glioma samples", "Neurosphere culture", "Neurosphere assays", "Immunofluorescence", "Quantitative RT-PCR", "DNA content analysis", "Statistics", "Results", "Formation of neurosphere-like colonies from primary glioma specimens", "Blockade of Notch signaling attenuates the proliferation and self-renewal ability and promotes differentiation of normal NSCs", "Decreased proliferation and self-renewal ability of GSCs upon GSI treatment", "Blockade of Notch signaling promotes the differentiation of GSCs", "Blockade of Notch signaling promotes the conversion of GSCs to INP-like cells", "GSCs show resistance to GSI treatment compared with NSCs", "Discussion", "The frequency of GSCs in tumor tissue", "INP-like cells in GSCs population", "Double positive cell types in the derivatives of GSCs", "GSI-resistance of GSCs at the early stage of GSI treatment", "The mechanisms of Notch signaling in regulating the proliferation and differentiation of GSCs", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history", "Supplementary Material" ]
[ "Glioma, the most common tumor of the central nervous system (CNS), frequently leads to death. Glioma is derived from brain glial tissue and comprises several diverse tumor forms and grades. Treatment of malignant gliomas is often palliative due to their infiltrating nature and high recurrence. Despite advances in surgery, chemotherapy and radiation gradually result in therapy-resistance. However, genetic events that lead to gliomas are mostly unknown.\nRecent researches highlight the importance of cancer-initiating cells in the malignancy of gliomas [1-3]. These cells have been referred to as glioma stem cells (GSC), as they share similarities to normal neural stem cells (NSCs) in the brain. There is increasing evidence that malignant gliomas arise from and contain these minority tumor cells with stem cell-like properties. This subpopulation of tumor cells with the potential for self-renewal and multi-lineage differentiation that recapitulates the phenotype of the original glioma [4-8], plays an important role in glioma initiation, growth, and recurrence. Eliminating GSCs from the bulk tumor mass seems to be a prosperous therapeutic strategy [9,10]. Therefore, it is extremely important to understand the signal pathways that contribute to the formation and maintenance of GSCs.\nA number of signal pathways are involved in the formation and maintenance of stem cells, many of which are closely conserved across species. Notch signaling, an evolutionarily conserved pathway mediating direct cell-cell interaction and signaling, plays a pivotal role in the maintenance of NSCs [11]. The functions of the Notch pathway in cancer formation have been gradually established, and recent data have also implicated a role for Notch signaling in GSCs [12]. Notch is a family of hetero-dimeric transmembrane receptors composed of an extracellular domain responsible for ligand recognition, a transmembrane domain, and an intracellular domain involved in transcriptional regulation. When Notch receptor is triggered by the ligands on the neighboring cells, the intracellular domain of the Notch receptor (NICD) is released from the membrane, after successive proteolytic cleavages by the γ-secretase complex [13,14]. NICD then translocates into the nucleus and associates with the transcription factor RBP-J, the DNA recombination signal binding protein-Jκ. The NICD-RBP-J complex further recruits other co-activators, and activates the expression of downstream genes associated with cell proliferation, differentiation and apoptosis [15]. It is believed that γ-secretase inhibitors (GSI) decrease the activity of Notch signaling and slow the growth of Notch-dependent tumors such as medulloblastoma [12].\nRapid proliferation, self-renewal ability and multipotential differentiation are the hallmarks of both normal NSCs and GSCs. Similarities in the growth characteristics and gene expression patterns of normal NSCs and brain tumor CSCs suggest that pathways important for NSCs are probable targets for eliminating brain tumor CSCs. The RBP-J-mediated canonical Notch pathway plays several significant roles in the maintenance and differentiation of NSCs [16-18]. During embryogenesis, Notch signaling is required to maintain all NSC populations, and to repress the differentiation of NSCs into intermediate neural progenitors (INPs) in vivo [19-21]. Along with later development, Notch signal commits NSCs to an astroglia fate, while repressing neuronal differentiation [22]. 
In adult, Notch signaling modulates cell cycle in order to ensure brain-derived NSCs retain their self-renewal property [23].\nIncreasing evidence has shown that there is a link between tumorigenesis and aberrantly activated Notch signaling [24,25]. Notch1 and its ligands, Dll1 and Jagged1, were overexpressed in many glioma cell lines and primary human gliomas. When the expression of Notch1, Dll1 or Jagged1 was down-regulated by RNA interference, apoptosis and proliferation inhibition in multiple glioma cell lines were induced [26]. Depletion of Hey1, a member of Hes-related family downstream effectors of Notch signaling, by RNA interference also reduces proliferation of glioblastoma cells in tissue culture [27]. Moreover, the blockade of Notch signaling directly caused cell cycle exit, apoptosis, differentiation, and reduced the CD133-positive cells in medulloblastoma and glioblastoma cell lines while Notch activation enhances the expression of Nestin, promotes cell proliferation and the formation of NSC-like colonies and plays a contributing role in the brain tumor stem cells [28-30]. However, the exact roles of Notch signaling in the proliferation and differentiation of patient-derived GSCs have not been clearly elucidated.\nIn this study, we explore the roles of Notch signaling in patient-derived GSCs with parallel analysis of normal NSCs by using GSI-mediated inhibition of Notch signaling in vitro. The results showed that when Notch signaling was inhibited, the proliferation and self-renewal ability of GSCs from human primary gliomas were attenuated. In addition, the blockade of Notch signaling in GSCs increased their differentiation into the downstream neural cell types, and promoted their conversion from stem cells into INP-like cells. Interestingly, although inhibition of Notch signaling definitely decreased the proliferating GSCs in long term culture, we found that the percentage of G2+M phase-GSCs were almost undisturbed at the initial stage of GSI treatment. To summarize, our results suggested that Notch signaling maintained GSCs by promoting their self-renewal and inhibiting their differentiation into INP-like cells, and supported that Notch signal inhibitors might be prosperous candidates of the treatments targeting CSCs for gliomas.", "[SUBTITLE] Glioma samples [SUBSECTION] Glioma tissues were obtained from 9 adult patients with pathologically diagnosed grade 2 to grade 4 gliomas, at the Department of Neurosurgery in Xijing Hospital, Fourth Military Medical University, under the guidance from the Medical Ethnic Committee of the Fourth Military Medical University. The summary of the patient population is outlined in Additional file 1: Table S1.\nGlioma tissues were obtained from 9 adult patients with pathologically diagnosed grade 2 to grade 4 gliomas, at the Department of Neurosurgery in Xijing Hospital, Fourth Military Medical University, under the guidance from the Medical Ethnic Committee of the Fourth Military Medical University. The summary of the patient population is outlined in Additional file 1: Table S1.\n[SUBTITLE] Neurosphere culture [SUBSECTION] Neurosphere cultures were performed as described previously with some modifications [21]. Briefly, for the culture of NSCs, the brains from embryonic (E) day 12.5 C57BL/6 mice were dissected under a stereomicroscope. And for the culture of GSCs, tissues from patient specimen were acutely minced after sampling. The tissues were then washed, mechanically dissociated by repetitive pipette. 
Single cells were primarily plated in serum-free Dulbecco's modified Eagle's medium (DMEDM)/F12 medium containing 20 ng/ml basic fibroblast growth factor (bFGF, human recombinant, Sigma), 20 ng/ml epidermal growth factor (EGF, mouse submaxillary), the B-27 (1:50, GIBCO), penicillin (100 U/ml) and streptomycin (0.1 mg/ml). Cells were cultured at a density of 1 × 105 cell/ml in 24-well plates (0.5 ml/well), and were fed every 3 days by adding fresh medium supplemented with GSI or DMSO with indicated concentrations. Animal experiments were reviewed and approved by the Animal Experiment Administration Committee of the Fourth Military Medical University.\nNeurosphere cultures were performed as described previously with some modifications [21]. Briefly, for the culture of NSCs, the brains from embryonic (E) day 12.5 C57BL/6 mice were dissected under a stereomicroscope. And for the culture of GSCs, tissues from patient specimen were acutely minced after sampling. The tissues were then washed, mechanically dissociated by repetitive pipette. Single cells were primarily plated in serum-free Dulbecco's modified Eagle's medium (DMEDM)/F12 medium containing 20 ng/ml basic fibroblast growth factor (bFGF, human recombinant, Sigma), 20 ng/ml epidermal growth factor (EGF, mouse submaxillary), the B-27 (1:50, GIBCO), penicillin (100 U/ml) and streptomycin (0.1 mg/ml). Cells were cultured at a density of 1 × 105 cell/ml in 24-well plates (0.5 ml/well), and were fed every 3 days by adding fresh medium supplemented with GSI or DMSO with indicated concentrations. Animal experiments were reviewed and approved by the Animal Experiment Administration Committee of the Fourth Military Medical University.\n[SUBTITLE] Neurosphere assays [SUBSECTION] After 7 days from primary culture the numbers of primary spheres were counted under a microscope (Additional file 1: Figure S2) [21]. And for the expression of target genes, neurospheres were harvested on the 5th day of culture for RNA extraction, cDNA synthesis, and real-time reverse transcription-polymerase chain reaction (RT-PCR). Primary neurospheres were harvested and dissociated mechanically into single cell suspensions, and were replated at 1 × 105 cells/ml in 24-well plates. Cells were then cultured for another 7 days until secondary spheres formed [31], which were quantified by counting. On the 7th day of primary culture, neurospheres were plated onto poly-D-lysine (Sigma) coated glass cover slips in DMEM/F12 containing 10% fetal bovine serum (FBS) for another 7 days. On the third day of differentiation, neurospheres were photomicrographed and their neurites were counted and measured, then on the 7th day of differentiation culture, immunofluorescence staining was performed as described below.\nAfter 7 days from primary culture the numbers of primary spheres were counted under a microscope (Additional file 1: Figure S2) [21]. And for the expression of target genes, neurospheres were harvested on the 5th day of culture for RNA extraction, cDNA synthesis, and real-time reverse transcription-polymerase chain reaction (RT-PCR). Primary neurospheres were harvested and dissociated mechanically into single cell suspensions, and were replated at 1 × 105 cells/ml in 24-well plates. Cells were then cultured for another 7 days until secondary spheres formed [31], which were quantified by counting. 
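For orientation, the seeding density and sphere counts used throughout these assays follow a simple per-well arithmetic. The sketch below is a minimal illustration of that calculation in Python; the sphere count used in the example is a hypothetical placeholder, not data from this study.

```python
# Minimal sketch of the seeding arithmetic behind the neurosphere assays.
# Assumptions: cells plated at 1e5 cells/ml, 0.5 ml per well (as in Methods);
# the sphere count below is a hypothetical placeholder, not data from this study.

SEEDING_DENSITY_CELLS_PER_ML = 1e5
VOLUME_ML_PER_WELL = 0.5

# Cells seeded per well
cells_per_well = SEEDING_DENSITY_CELLS_PER_ML * VOLUME_ML_PER_WELL  # 5e4 cells

def sphere_forming_frequency(spheres_counted: float, cells_seeded: float) -> float:
    """Fraction of seeded cells that gave rise to a primary sphere."""
    return spheres_counted / cells_seeded

example_spheres = 50  # hypothetical primary spheres counted in one well
freq = sphere_forming_frequency(example_spheres, cells_per_well)
print(f"cells per well: {cells_per_well:.0f}")
print(f"sphere-forming frequency: {freq:.4%}")
```

With 5 × 10^4 cells seeded per well, a count of roughly 50 primary spheres corresponds to a sphere-forming frequency of about 0.1%.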
[SUBTITLE] Immunofluorescence [SUBSECTION]
Undifferentiated neurospheres were plated onto poly-D-lysine-coated glass cover slips in serum-free medium for 4 h. Cells were then fixed in 4% paraformaldehyde at 4°C for 10 min and incubated with primary antibodies overnight at 4°C, followed by species-specific secondary antibodies. Samples were visualised under a fluorescence microscope (FV-1000, Olympus, Japan). Immunofluorescence for differentiated neurospheres was performed in a similar way, with cells additionally counterstained with Hoechst. Primary antibodies included rabbit anti-Nestin serum (1:200, Sigma), rabbit anti-glial fibrillary acidic protein (GFAP, 1:200, Sigma) and mouse anti-microtubule-associated protein 2 (MAP2, 1:200, Sigma). FITC-conjugated goat anti-mouse IgG and Cy3-conjugated goat anti-rabbit IgG (1:400, Jackson ImmunoResearch) were used as secondary antibodies.

[SUBTITLE] Quantitative RT-PCR [SUBSECTION]
Total RNA of neurospheres was isolated using the Trizol reagent (Invitrogen). cDNA was synthesised and used for real-time PCR with a commercial kit (SYBR Premix Ex Taq, Takara, Kyoto, Japan) on the ABI PRISM 7300 real-time PCR system, with human GAPDH and mouse β-actin as reference controls. Primers used for real-time PCR are summarised in Additional file 1: Table S3.
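Real-time PCR readouts of this kind are typically summarised as relative expression of each target gene against the reference gene (GAPDH or β-actin). The exact analysis pipeline is not specified in the paper, so the snippet below is only a generic 2^-ΔΔCt sketch with invented Ct values for illustration.

```python
# Generic 2^-(ddCt) relative-expression sketch for a qRT-PCR readout.
# Assumption: target genes are normalised to a reference gene (GAPDH/beta-actin);
# the study's exact pipeline is not stated, and all Ct values below are invented.

def relative_expression(ct_target_treated, ct_ref_treated,
                        ct_target_control, ct_ref_control):
    """Fold change of a target gene (treated vs control) by the 2^-ddCt method."""
    d_ct_treated = ct_target_treated - ct_ref_treated    # normalise treated sample
    d_ct_control = ct_target_control - ct_ref_control    # normalise control sample
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical example: Hes5 in GSI-treated vs DMSO-treated spheres
fold = relative_expression(ct_target_treated=27.5, ct_ref_treated=18.0,
                           ct_target_control=25.0, ct_ref_control=18.1)
print(f"Hes5 fold change (GSI vs control): {fold:.2f}")  # < 1 means down-regulated
```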
[SUBTITLE] DNA content analysis [SUBSECTION]
Spheres were dissociated mechanically into single-cell suspensions in the culture medium. Cells were washed, resuspended in PBS, and fixed with ethanol at room temperature for 20 min. Cells were then resuspended in PBS containing 50 μg/ml propidium iodide and 0.1 mg/ml RNase A for 10 min and analysed for ploidy by flow cytometry (BD Biosciences). Data analysis was performed using the CellQuest software (BD Biosciences).
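Cell cycle distributions such as the G2+M fractions reported in the Results are derived from these propidium iodide (DNA content) histograms; the authors used CellQuest for this step. The sketch below shows, under simple assumed gates with the 2N peak normalised to 1.0, how phase fractions could be estimated from a DNA-content array in Python. The gate boundaries and simulated data are illustrative assumptions only, not the study's gating strategy.

```python
# Illustrative estimation of cell cycle phase fractions from a PI/DNA-content
# histogram. The study used CellQuest; the fixed gates below are a simplifying
# assumption (2N peak normalised to 1.0), and the data are simulated.
import numpy as np

def phase_fractions(dna_content, g1_max=1.25, g2m_min=1.75):
    """Return fractions of events in G0/G1, S and G2+M using fixed DNA-content gates."""
    dna = np.asarray(dna_content, dtype=float)
    g0g1 = np.mean(dna <= g1_max)   # events at ~2N DNA content
    g2m = np.mean(dna >= g2m_min)   # events at ~4N DNA content
    s = 1.0 - g0g1 - g2m            # everything in between
    return {"G0/G1": g0g1, "S": s, "G2+M": g2m}

# Simulated example: ~70% of cells at 2N, ~15% in S phase, ~15% at 4N
rng = np.random.default_rng(0)
sim = np.concatenate([rng.normal(1.0, 0.05, 7000),
                      rng.uniform(1.3, 1.7, 1500),
                      rng.normal(2.0, 0.08, 1500)])
print(phase_fractions(sim))
```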
[SUBTITLE] Statistics [SUBSECTION]
Independent cultures from at least three samples were used for each experiment (Additional file 1: Table S2). For immunofluorescence, cells were counted with Image-Pro Plus 6.0, and only cell bodies labelled with immunoreactivity were included. Proportions of immunoreactive cells in the total population of cultured cells, revealed by Hoechst staining, were calculated from at least 5 microscopic fields per specimen. For neurite analysis, neurites of 30 neurospheres from each culture in the presence of GSI or DMSO were measured: the total number of neurites per tumor sphere was counted from photomicrographs taken by phase-contrast microscopy, and the average neurite length per tumor sphere was measured with Image-Pro Plus 6.0. Each experiment was repeated at least three times. Data are expressed as mean ± s.e.m., and differences between two groups were analysed with Student's t-test, with P < 0.05 considered statistically significant.
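The group comparisons used throughout (mean ± s.e.m., two-sided Student's t-test, P < 0.05) map directly onto standard routines. The snippet below is a minimal sketch using scipy, with invented per-culture sphere counts standing in for real data.

```python
# Minimal sketch of the reported statistics: mean +/- s.e.m. and Student's t-test.
# The per-culture sphere counts below are invented placeholders, not study data.
import numpy as np
from scipy import stats

control = np.array([48.0, 53.0, 52.0, 54.0])   # e.g. spheres per well, DMSO
treated = np.array([36.0, 31.0, 37.0, 35.0])   # e.g. spheres per well, GSI

def mean_sem(x):
    """Mean and standard error of the mean (sd / sqrt(n))."""
    return np.mean(x), stats.sem(x)

t_stat, p_value = stats.ttest_ind(control, treated)  # two-sided Student's t-test
print("control: %.1f +/- %.1f" % mean_sem(control))
print("treated: %.1f +/- %.1f" % mean_sem(treated))
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {p_value < 0.05}")
```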
[SUBTITLE] Results [SUBSECTION]

[SUBTITLE] Formation of neurosphere-like colonies from primary glioma specimens [SUBSECTION]
Nine glioma specimens were used in the current study, including 3 oligoastrocytomas, 3 oligodendrogliomas, 2 astrocytomas and 1 glioblastoma, graded according to the WHO grading scheme (Additional file 1: Table S1).
Tumor tissues were dissociated mechanically into single-cell suspensions and cultured in serum-free DMEM/F12 medium supplemented with EGF and bFGF. Seven of the nine primary gliomas gave rise to proliferating tumor spheres. Regardless of pathological subtype and grade, neurosphere-like clusters, or tumor spheres, first appeared within 72 h of primary culture and increased rapidly in number and diameter during the first 7 days of culture (Figure 1A). To assess whether these tumor spheres showed NSC properties, we stained them with an anti-Nestin antibody; the spheres expressed Nestin, a marker of NSCs (Figure 1B). The multipotency of these human glioma-derived tumor spheres was confirmed by an in vitro differentiation assay, in which we examined the molecular markers expressed by neurons and glial cells under differentiating conditions. These cells differentiated into GFAP-positive astrocyte-like and MAP2-positive neuron-like cells (Figure 1C, 1D). In addition, a locally recurrent tumor also produced tumor spheres in growth medium (data not shown). Tumor spheres could be passaged for at least five generations by mechanical dissociation, and their stemness and multipotency were maintained in serum-free medium supplemented with growth factors for at least one month.
Figure 1. Patient glioma-derived stem cells form neurosphere-like colonies and give rise to the downstream neural cell types of NSCs. (A) Photomicrographs of typical primary tumor spheres from one glioma tissue at 72 h after plating. (B) Undifferentiated primary tumor spheres expressed high levels of Nestin (red), a marker of NSCs. (C, D) Tumor spheres derived from human glioma were cultured in differentiation medium for 7 days and differentiated into neural cells expressing the specific molecular markers GFAP (C, red) and MAP2 (D, green). Scale bar, 100 μm in A, 50 μm in B-D.
[SUBTITLE] Blockade of Notch signaling attenuates the proliferation and self-renewal ability and promotes differentiation of normal NSCs [SUBSECTION]
Stem cell-like cells in brain tumors share many similarities with normal neural stem/progenitor cells and may require Notch signaling for their survival and growth. In vitro, NSCs proliferate and form clonal spheres referred to as neurospheres. GSI reduced the proliferation of mouse embryonic brain-derived NSCs in a dose-dependent fashion (Additional file 1: Figure S1), and the number of neurospheres was decreased in the presence of GSI compared with the DMSO-treated control (Figure 2A). To confirm that GSI effectively blocked Notch signaling in NSCs in our culture system, we tested the expression of Hes1 and Hes5, both downstream targets of Notch signaling [12]. Total RNA was prepared from neurospheres on the fifth day of 25 μmol/L GSI treatment and used for RT-PCR. The expression of Hes1 and Hes5 decreased markedly in NSCs, indicating that GSI at this concentration inhibits Notch signaling effectively (Figure 2B, 2C). We quantified the number of primary neurospheres in the presence of GSI and found a significant decrease upon GSI treatment at 25 μmol/L (Figure 2D). To determine the possible effect of GSI on NSC self-renewal, we harvested the spheres and dissociated them into single-cell suspensions by gentle pipetting; when replated in the presence of GSI, the number of secondary neurospheres was significantly decreased after 7 days of culture (Figure 2E). These results suggest that inhibiting Notch signaling slows the proliferation of NSCs and at least partially impairs their self-renewal ability, a key NSC behavior.
Figure 2. Blockade of Notch signaling attenuates proliferation and self-renewal ability of normal mouse NSCs. (A) Photomicrographs of neurospheres derived from E12.5 mouse brain at 72 h after primary culture, supplemented with GSI or DMSO. (B, C) Total RNA was prepared from neurospheres on the 5th day of GSI or DMSO treatment, and the expression of Hes1 and Hes5 was measured by RT-PCR (B) and real-time PCR (C), with β-actin as the reference control (n = 3, Hes5, P = 0.006, Hes1, P = 0.006). (D, E) Equal numbers of cells (1 × 10^5/ml) were plated in growth medium, and the numbers of primary (n = 3, P = 0.010) (D) and secondary (n = 3, P = 0.043) (E) neurospheres were counted 7 days after plating. *, P < 0.05, **, P < 0.01.
Notch signaling has been shown to inhibit the differentiation of NSCs into INPs [21]. We therefore tested the expression of INP markers in primary neurospheres treated with GSI or DMSO. Quantitative RT-PCR showed that the mRNA level of Glast, which is indicative of the frequency of NSCs, was decreased, whereas those of Mash1 and Tubulin α1, both markers of INPs, were increased (Figure 3A, 3B). These results indicate augmented differentiation from NSCs into INPs upon blockade of Notch signaling by GSI.
Figure 3. Blockade of Notch signaling promotes the differentiation of normal mouse NSCs into INPs and downstream neural cell types. (A, B) Total RNA was prepared from GSI- or DMSO-treated neurospheres derived from E12.5 mouse brain on the 5th day of culture, and the expression of Glast, Mash1 and Tubulin α1 was measured by RT-PCR (A) and real-time PCR (B), with β-actin as the reference control (n = 3, GLAST, P = 0.003, Mash1, P = 0.043, Tubulin α1, P = 0.046). (C, D) Immunofluorescence. Differentiated NSCs were stained with anti-GFAP or anti-MAP2 antibodies after culture on cover slips in differentiation medium for 7 days; stained samples were examined under a fluorescence microscope. (E, F) Quantification and comparison of neurons (MAP2+) or astrocytes (GFAP+) in GSI-treated and control NSCs. Cells were counterstained with Hoechst to permit counting of cell nuclei in at least 5 microscopic fields per specimen (n = 3, E, P = 0.021, F, P = 0.031). *, P < 0.05, **, P < 0.01. Scale bar, 50 μm for C and D.
To further study the effect of inhibiting Notch signaling on NSC differentiation, we used the in vitro neurosphere differentiation assay. When spheres were cultured adherently on poly-D-lysine-coated glass cover slips without growth factors, they began to differentiate into cells bearing specific markers of neurons and astrocytes. We quantitatively compared the cell types produced by neurospheres in GSI-containing medium with those of the control. All of the neurospheres gave rise to cells with the molecular markers of neurons or astrocytes (Figure 3C, 3D); however, in the presence of GSI the percentage of MAP2+ cells increased significantly from 29.0 ± 10.4% to 66.5 ± 8.4%, and the percentage of GFAP+ cells rose from 8.7 ± 3.0% to 26.9 ± 6.6% (Figure 3E, 3F). These results suggest that inhibiting Notch signaling in NSCs leads to an increase in the number of differentiated cells.
[SUBTITLE] Decreased proliferation and self-renewal ability of GSCs upon GSI treatment [SUBSECTION]
Although Notch signaling has been shown to play critical roles in the maintenance of normal NSCs, whether it is also involved in tumor stem cells is not fully clear. To determine whether Notch signaling activity is required for the growth of GSCs, we investigated the effect of GSI on the proliferation and self-renewal of GSCs. After Notch signaling was inhibited in GSCs by GSI treatment at 25 μmol/L, the expression of Hes5 and Hes1, the specific and direct downstream targets of the Notch/RBP-J transcription complex, was examined by RT-PCR and real-time PCR as described above. After 5 days of GSI treatment, Hes5 and Hes1 expression had markedly decreased (Figure 4A, 4B), and no obvious cell death was observed, indicating no effect on cell viability (data not shown). These results indicated that Notch signaling was efficiently blocked by GSI treatment in GSCs.
Figure 4. Attenuated proliferation and self-renewal ability of patient-derived GSCs upon blockade of Notch signaling. (A, B) Total RNA was prepared from primary tumor spheres on the 5th day in the presence of GSI or DMSO, and the expression of Hes5 and Hes1 was measured by RT-PCR (A) and real-time PCR (B), with human GAPDH as the reference control (n = 5, Hes5, P = 0.046, Hes1, P = 0.002). (C, D) Equal numbers of cells (1 × 10^5/ml) from brain tumor tissues were plated in growth medium, and the numbers of primary (n = 4, P = 0.008) (C) and secondary (n = 4, P = 0.041) (D) tumor spheres were counted 7 days after plating. *, P < 0.05, **, P < 0.01.
Next, we quantitatively compared the proliferation and self-renewal ability of GSI-treated GSCs with that of controls. The number of primary tumor spheres in the presence of GSI decreased significantly, from 51.5 ± 2.8 to 34.8 ± 3.3 (Figure 4C). Self-renewal ability was assayed by dissociating and replating the primary tumor spheres: GSI-treated GSCs generated fewer secondary tumor spheres (17.5 ± 2.3) than controls (31.7 ± 5.6) (Figure 4D). These results show that the proliferation and self-renewal ability of GSCs can also be attenuated by inhibiting Notch signaling.
We quantitatively compared the cell types produced by neurospheres in the GSI-treated group with those of the control. The percentages of MAP2+ cells and GFAP+ cells increased significantly, reaching 51.6 ± 6.1% and 44.0 ± 1.7%, respectively (Figure 5F, 5G). These results suggest that inhibiting Notch signaling also promotes the differentiation of GSCs.\nAugmented neurite outgrowth and enhanced differentiation of patient-derived tumor spheres upon blockade of Notch signaling. (A) Photomicrographs of differentiated tumor spheres at 72 h after plating in differentiation conditional medium supplemented with GSI or DMSO. (B, C) Comparison of neurite number (n = 3, P < 0.001) (B) and length (n = 3, P = 0.041) (C) between tumor spheres in the presence of GSI and DMSO. (D, E) Immunofluorescence. Differentiated tumor spheres were stained with anti-GFAP or anti-MAP2 antibodies after being cultured on cover slips in differentiation conditional medium for 7 days. Stained samples were examined under a fluorescence microscope. (F, G) Quantification and comparison of the percentages of neurons (MAP2+) (n = 3, P < 0.001) (F) or astrocytes (GFAP+) (n = 3, P < 0.001) (G) among the total cell number revealed by Hoechst counterstaining, between GSI-treated and control GSCs. Scale bar, 100 μm for A, and 50 μm for D and E. *, P < 0.05, **, P < 0.01.\n[SUBTITLE] Blockade of Notch signaling promotes the conversion of GSCs to INP-like cells [SUBSECTION] A previous report indicated that blockade of Notch signaling in the CNS increased the frequency of INPs in vivo [21]. Precocious differentiation of NSCs into INPs might exhaust the NSC pool. Therefore, we investigated the effect of inhibiting Notch signaling on the frequency of GSCs and INP-like cells in glioma specimens. In an attempt to distinguish GSCs from INP-like tumor cells, we examined the expression of several markers that distinguish NSCs from INPs by quantitative RT-PCR [20,21]. Compared with the controls, the primary tumor spheres in the presence of GSI expressed lower Glast and CD133, which are indicative of the frequency of NSCs and GSCs. In contrast, Mash1 was highly expressed in GSI-treated tumor spheres (Figure 6A, 6B), although the expression level of another INP marker, Tubulin α1, was comparable between the GSI-treated tumor spheres and the controls. Altogether, these results suggested that blockade of Notch signaling may promote the conversion of GSCs to INP-like tumor cells.\nGSI-treated primary tumor spheres show a gene expression profile similar to that of INPs. (A, B) cDNA was prepared from total RNA isolated from primary tumor spheres treated with GSI or DMSO for 5 days, and the expression of GLAST (P = 0.002), CD133 (P = 0.015), Mash1 (P = 0.050) and Tubulin α1 (P = 0.116) was measured by RT-PCR (A) and real-time PCR (n = 5) (B), with GAPDH as the reference control. *, P < 0.05, **, P < 0.01.\n[SUBTITLE] GSCs show resistance to GSI treatment compared with NSCs [SUBSECTION] To gain further perspective on the dynamics of cellular proliferation accompanying differentiation, we stained NSCs and tumor spheres with propidium iodide at a series of time points after GSI treatment and examined the cell cycle by FACS analysis.
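Before turning to the measured values, a minimal sketch of how the resulting per-time-point G2+M fractions could be compared between GSI-treated and control cultures is shown below; all percentages are placeholders, and the per-time-point two-sample t-test is an assumption of the sketch rather than a method stated in this passage.

# Hypothetical G2+M fractions (%) from three independent runs per condition and time point.
cc <- data.frame(
  time  = rep(c(24, 48, 72), each = 6),
  group = rep(rep(c("GSI", "DMSO"), each = 3), times = 3),
  g2m   = c(15.0, 15.5, 16.0, 21.0, 20.4, 21.6,   # 24 h (illustrative)
            12.1, 12.8, 12.4, 14.0, 13.5, 14.4,   # 48 h (illustrative)
             8.0,  8.5,  8.1,  9.0,  8.8,  9.2))  # 72 h (illustrative)
# One two-sample comparison per time point, returning the P values.
sapply(split(cc, cc$time), function(d) t.test(g2m ~ group, data = d)$p.value)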
Compared with the controls, nearly 15.5 ± 0.5% of the NSCs treated with GSI for 24 h were in the G2+M phase, and this fraction then sharply decreased to less than 8.2 ± 1.7% at 72 h (Figure 7A). In contrast, the ratio of GSCs in the G2+M phase was slightly elevated at 48 h and then declined insignificantly at 72 h (Figure 7B). These results showed that GSI treatment significantly reduced the ratio of G2+M phase NSCs but had no obvious effect on the cell cycle of GSCs. Therefore, NSCs are more sensitive to GSI, whereas GSCs display a certain degree of resistance to GSI at the early stage of treatment.\nDifferent effects of GSI treatment on the cell cycle of NSCs or GSCs. (A, B) Comparisons of the cell cycle between NSCs (A) and GSCs (B) in the presence of GSI or DMSO at 24 h, 48 h and 72 h using flow cytometry. Data are presented as mean ± SD from three independent experiments (n = 3, 24 h, P = 0.006, 48 h, P = 0.013). *, P < 0.05, **, P < 0.01.", "Nine specimens of gliomas were used in the current studies, including 3 oligoastrocytomas, 3 oligodendrogliomas, 2 astrocytomas, and 1 glioblastoma, and the specimens were graded according to the WHO grading scheme (Additional file 1: Table S1).\nTumor tissues were dissociated mechanically into a single cell suspension and were cultured in serum-free DMEM/F12 medium supplemented with EGF and bFGF. Seven of the nine primary gliomas gave rise to proliferating tumor spheres. Regardless of pathological subtype and grade, neurosphere-like clusters, or tumor spheres, first appeared within 72 h of primary culture and increased rapidly in number and diameter during the 7 days after the onset of culture (Figure 1A). To assess whether these tumor spheres showed NSC properties, we stained the patient-derived tumor spheres with an anti-Nestin antibody. The result showed that these tumor spheres expressed Nestin, a marker of NSCs (Figure 1B). The multipotency of these human glioma cell-derived tumor spheres was confirmed by a differentiation assay in vitro. We assessed the differentiation capacity of tumor spheres under differentiating conditions by examining the molecular markers specific to neurons and glial cells. We observed that these cells could differentiate into GFAP-positive astrocyte- and MAP2-positive neuron-like cells (Figure 1C, 1D). 
In addition, a local recurrence tumor also could produce tumor spheres in growth medium (data not shown). Tumor spheres could be passed at least for five generations by mechanical dissociation and their stemness and multipotency could be maintained in serum-free medium supplemented with growth factors for at least one month.\nPatient glioma-derived stem cells have the ability to form neurosphere-like colonies and gave rise to the downstream neural cell types of NSCs. (A) Photomicrographs of typical primary tumor spheres from one glioma tissue at 72 h after plating. (B) Undifferentiated primary tumor spheres expressed high levels of Nestin (red), a marker of NSCs. (C, D) The tumor spheres-derived from human glioma were cultured in differentiation conditional medium for 7 days, and differentiated into neural cells expressing specific molecular markers of GFAP (C, red) and MAP2 (D, green). Scale bar, 100 μm in A, and 50 μm in BCD.", "Stem cell-like cells in brain tumors share many similarities with normal neural stem/progenitor cells and may require Notch signal for their survival and growth. In vitro, NSCs proliferate and form clonal spheres referred to as neurospheres. GSI reduced the proliferation of mouse embryonic brain-derived NSCs in a dose-dependent fashion (Additional file 1: Figure S1). The number of neurospheres was decreased in the presence of GSI, compared with the control treated with DMSO (Figure 2A). In order to confirm that GSI effectively blocked Notch signaling in NSCs in our culture system, we test the expression of Hes1 and Hes5, both of which are downstream molecules of the Notch signaling [12]. Total RNA was prepared from neurospheres on the fifth day of 25 μmol/L GSI treatment and was used for RT-PCR. The expression of Hes1 and Hes5 decreased remarkably in NSCs, suggesting that GSI at this concentration could inhibit Notch signaling effectively (Figure 2B, 2C). We quantitatively analyzed the number of primary neurospheres in the presence of GSI, and found that there was a significant decrease in the number of neurospheres upon GSI treatment at 25 μmol/L (Figure 2D). In order to determine the possible effect of GSI on the NSCs self-renewal ability, we harvested the spheres and dissociated them into a single cell suspension by soft pipeting. When replated in the presence of GSI, the number of secondary neurospheres significantly decreased after 7 days culture (Figure 2E). These results suggested that the proliferation of NSCs was slowed by inhibiting Notch signaling and the self-renewal ability, a key NSC behavior, was at least partially depleted.\nBlockade of Notch signaling attenuates proliferation and self-renewal ability of normal mouse NSCs. (A) Photomicrographs of neurospheres derived from E12.5 mouse brain at 72 h after primary culture, with GSI or DMSO supplemented. (B, C) Total RNA was prepared from neurospheres on the 5th day of GSI or DMSO treatment. And the expressions of Hes1 and Hes5 were measured by RT-PCR (B) and Real-time PCR (C), with β-actin as the reference control (n = 3, Hes5, P = 0.006, Hes1, P = 0.006). (D, E) Equal number of cells (1 × 105/ml) were plated in the growth medium, and the number of primary (n = 3, P = 0.010) (D) and secondary (n = 3, P = 0.043) (E) neurospheres were counted 7 days after plating. *, P < 0.05, **, P < 0.01.\nNotch signaling has been shown to inhibit the differentiation of NSCs to INPs [21]. In our study, we tested the expression of molecular markers of INPs in primary neurospheres treated with GSI or DMSO. 
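Such real-time PCR data are commonly summarised relative to the β-actin reference using the 2^(-ΔΔCt) convention; a generic sketch is shown below, where the Ct values are placeholders and the ΔΔCt method itself is an assumption for illustration rather than the exact quantification procedure reported here.

# Hypothetical mean Ct values for one target gene (e.g. Mash1) and the beta-actin reference.
ct_target_gsi  <- 24.1; ct_actin_gsi  <- 17.0   # GSI-treated neurospheres (illustrative)
ct_target_ctrl <- 26.0; ct_actin_ctrl <- 17.1   # DMSO controls (illustrative)
d_ct_gsi  <- ct_target_gsi  - ct_actin_gsi      # normalise to the reference gene
d_ct_ctrl <- ct_target_ctrl - ct_actin_ctrl
fold_change <- 2^-(d_ct_gsi - d_ct_ctrl)        # 2^-(delta delta Ct): about 3.5-fold up in this toy example
fold_change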
Quantitative RT-PCR showed that the mRNA levels of Glast, which is indicative of the frequency of NSCs, were decreased, while that of Mash1 and Tubulin α1, both of which are markers of INPs, were increased (Figure 3A, 3B). These results indicated an augmented differentiation from NSCs into INPs upon the blockade of Notch signaling by GSI.\nBlockade of Notch signaling promotes the differentiation of normal mouse NSCs into INPs and downstream neural cell types. (A, B) Total RNA was prepared from GSI or DMSO treated neurospheres derived from E12.5 mouse brain on the 5th day of culture. And the expressions of Glast, Mash1 and Tubulin α1 were measured by RT-PCR (A) and Real-time PCR (B), with β-actin as the reference control (n = 3, GLAST, P = 0.003, Mash1, P = 0.043, Tubulin α1, P = 0.046). (C, D) Immunofluorescence. Differentiated NSCs were stained with anti-GFAP, or anti-MAP2 antibodies after cultured on cover slips in differentiation conditional medium for 7 days. Stained samples were examined under a fluorescence microscope. (E, F) Quantification and comparison of neurons (MAP2+) or astrocytes (GFAP+) in GSI-treated and control NSCs. Cells were counterstained with Hoechst, to permit counting of cell nuclei in at least 5 microscopic fields per specimen (n = 3, E, P = 0.021, F, P = 0.031). *, P < 0.05, **, P < 0.01. Scale bar, 50 μm for C and D.\nTo further study the effect of inhibiting Notch signaling on NSC differentiation, we used the neurosphere differentiation assay in vitro. When spheres were cultured adherently on poly-D-lysine coated glass cover slips without growth factors, they began to differentiate into cells bearing specific markers of neurons and astrocytes. We quantitatively compared the cell types produced by neurospheres in the GSI-containing medium with that of the control. All of the neurospheres gave rise to cells with the molecular markers of neurons or astrocytes (Figure 3C, 3D). However, the percentage of MAP2+ cells increased significantly in the presence of GSI, from 29.0 ± 10.4% to 66.5 ± 8.4%, and the percentage of GFAP+ cells in GSI-treated neurospheres was elevated from 8.7 ± 3.0% to 26.9 ± 6.6% (Figure 3E, 3F). These results suggested that inhibiting Notch signaling in NSCs leads to an increase in the number of differentiated cells.", "Although Notch signaling has been shown to play critical roles in the maintenance of normal NSCs, whether this signaling might be involved in tumor stem cells is not fully clear. To determine whether Notch signaling activity was required during growth of GSCs, we investigate the effect of GSI on proliferation and self-renewal of GSCs. After Notch signaling was inhibited in GSCs by GSI treatment at 25 μmol/L, the expressions of Hes5 and Hes1, the specific and direct downstream targets of the Notch/RBP-J transcription complex were identified by RT-PCR and real-time PCR as described previously. After 5 days of GSI treatment, Hes5 and Hes1 expression markedly decreased (Figure 4A, 4B), and no obvious cell death was observed, indicating no effect on cell viability (data not shown). These results indicated that Notch signaling was efficiently blocked by GSI treatment in GSCs.\nAttenuated proliferation and self-renewal ability of patient-derived GSCs on the blockade of Notch signaling. (A, B) Total RNA was prepared from primary tumor spheres on the 5th day in the presence of GSI or DMSO. 
And the expressions of Hes5 and Hes1 were measured by RT-PCR (A) and Real-time PCR (B), with human GAPDH as the reference control (n = 5, Hes5, P = 0.046, Hes1, P = 0.002). (C, D) Equal number of cells (1 × 105/ml) form brain tumor tissues were plated in the growth medium, the number of primary (n = 4, P = 0.008) (C) and secondary (n = 4, P = 0.041) (D) tumor spheres were counted 7 days after plating. *, P < 0.05, **, P < 0.01.\nNext, we quantitatively compared the proliferation and self-renewal ability of GSI-treated GSCs with that of the controls. The number of the primary tumor spheres in the presence of GSI decreased significantly, from 51.5 ± 2.8 to 34.8 ± 3.3 (Figure 4C). Self-renewal ability of the tumor spheres was assayed by dissociating and replating the primary tumor spheres. Our results showed that GSI-treated GSCs generated a decreased number of secondary tumor spheres (17.5 ± 2.3), than the number of controls (31.7 ± 5.6) (Figure 4D). These results showed that the proliferation and self-renewal ability of GSCs also could be attenuated by inhibiting Notch signaling.", "The previous result indicated that inhibiting Notch signaling promotes the normal NSCs to differentiate into neurons and astrocytes, both of which are the downstream neural cell types of NSCs. Therefore, we investigated whether the GSI treatment promoted GSCs differentiation. Interestingly, after 3 days, approximately 18.7 ± 0.9 neurites grew out from each tumor spheres cultured in the medium with GSI, compared to only 6.7 ± 0.9 from that cultured with DMSO. Meanwhile, the average length of neurites increased from 206.0 ± 13.1 μm in tumor spheres culture with DMSO to 269.7 ± 28.4 μm in GSI-treated tumor spheres (Figure 5B, 5C). In order to further confirm whether these cells are the downstream neural cell types, immunofluorescence was performed on differentiated primary GSCs using the specific markers of neurons and astrocytes on the 7th day in differentiating conditional medium (Figure 5D, 5E). We quantitatively compared the cell types produced by neurospheres in the GSI-treated group with that of the control. The percentages of MAP2+ cells and GFAP+ cells increased significantly, as high as 51.6 ± 6.1% and 44.0 ± 1.7%, respectively (Figure 5F, 5G). These results suggest that inhibiting Notch signaling also promotes the differentiation of GSCs.\nAugmented neurite outgrowth and enhanced differentiation of patient-derived tumor spheres on the blockade of Notch signaling. (A) Photomicrographs of differentiated tumor spheres at 72 h after plated in differentiation conditional medium supplemented with GSI or DMSO. (B, C) Comparison of neurites number (n = 3, P < 0.001) (B) and length (n = 3, P = 0.041) (C) between tumor spheres in the presence of GSI and DMSO. (D, E) Immunofluorescence. Differentiated tumor spheres were stained with anti-GFAP, or anti-MAP2 antibodies after cultured on cover slips in differentiation conditional medium for 7 days. Stained samples were examined under a fluorescence microscope. (F, G) Quantification and comparison of the percentages of neurons (MAP2+) (n = 3, P < 0.001) (F) or astrocytes (GFAP+) (n = 3, P < 0.001) (G) in the total cell number revealed by Hoechst counterstaining, between GSI-treated and control GSCs. Scale bar, 100 μm for A, and 50 μm for D and E. *, P < 0.05, **, P < 0.01.", "The previous report indicated that blockade of Notch signaling in the CNS increased the frequency of INPs in vivo [21]. Precocious differentiation of NSCs into INPs might exhaust the NSC pool. 
Therefore, we investigated the effect of inhibiting Notch signaling on the frequency of GSCs and INP-like cells in glioma specimen. In an attempt to distinguish GSCs and INP-like tumor cells, we examined the expression of several markers that could distinguish NSCs from INPs by quantitive RT-PCR [20,21]. Compared with the controls, the primary tumor spheres in the presence of GSI expressed lower Glast and CD133, which are indicative of the frequency of NSCs and GSCs. In contrast, Mash1 was highly expressed in GSI-treated tumor spheres (Figure 6A, 6B), although the expression level of another INP marker, Tubulin α1 was comparable between the GSI-treated tumor spheres and that of control. Altogether, these results suggested that blockade of Notch signaling may promote the conversion of GSCs to INP-like tumor cells.\nGSI-treated primary tumor spheres show similar gene expression profile of INPs. (A, B) cDNA was prepared from total RNA isolated from primary tumor spheres, treated with GSI or DMSO for 5 days respectively, and the expressions of GLAST (P = 0.002), CD133 (P = 0.015), Mash1 (P = 0.050) and Tubulin α1 (P = 0.116), were measured by RT-PCR (A) and Real-time PCR (n = 5) (B), with GAPDH as a reference control. *, P < 0.05, **, P < 0.01.", "To gain further perspective on the dynamics of cellular proliferation accompanying differentiation, we treated NSCs and tumor spheres at a series of time points following GSI treatment with propidium iodide and examined cell cycle via FACS analysis. Compared with the controls, nearly 15.5 ± 0.5% of the NSCs treated with GSI for 24 h are in the G2+M phase, and then sharply decreased to less than 8.2 ± 1.7% at 72 h (Figure 7A). In contrast, the ratio of GSCs in the G2+M phase were slightly elevated at 48 h, and then declined insignificantly at 72 h (Figure 7B). The result showed that GSI treatment significantly reduced the ratio of the G2+M phase NSCs, but there is no obvious effect on the cell cycle of GSCs. Therefore, NSCs are more sensitive to GSI, while GSCs display a certain degree of resistance to GSI at the early stage of the treatment.\nDifferent effects of GSI-treatment on the cell cycle of NSCs or GSCs. (A, B) Comparisons of cell cycle between NSCs (A) and GSCs (B) in the presence of GSI or DMSO at 24 h, 48 h, 72 h using flow cytometry. Data represent as mean ± SD from three independent experiments. (n = 3, 24 h, P = 0.006, 48 h, P = 0.013). *, P < 0.05, **, P < 0.01.", "Tumor stem cells such as GSCs have been considered as a novel target for the therapy of the malignant tumors, because these cells are supposed to play an important role in tumor initiation, growth, and recurrence [4]. Similarities in the growth characteristics and gene expression patterns of normal NSCs and GSCs suggest that pathways important for NSCs are probable targets for oncogenic brain tumor stem cells. 
In the present study, 1) we isolated GSCs from the human glioma tissues; 2) Like NSCs, these cells had the ability to form spheres in serum-free medium supplemented with growth factors and differentiated into downstream neural cell-like cells; 3) By GSI treatment, the number of GSCs-derived primary neurospheres and secondary neurospheres were markedly reduced compared with those treated with DMSO, indicating that in the long term culture (7-14 days), the proliferation and self-renewal ability of GSCs was ultimately reduced, upon the blocking of Notch signaling; 4) However, within 72 h culture, GSCs showed a certain degree of GSI-resistance, with undisturbed proliferation ability upon GSI treatment; 5) In addition, we showed that on blocking Notch signaling, GSCs are much biased to differentiate into INP-like cells, and ultimately neurons and glial cells in vitro. All these results suggest a promising preclinical application of Notch signaling antagonist (e.g., GSI)-based CSCs-targeting therapy in malignant glioma patients.\n[SUBTITLE] The frequency of GSCs in tumor tissue [SUBSECTION] Although CSCs have been identified as an important factor in tumor initiation and growth, their characteristics remain obscure concerning their heterogeneity. Here we found in our experiments that although seven of the nine human gliomas gave rise to proliferating tumor spheres, different numbers of spheres arisen from equal primary glioma cells among tumor samples. It should be noted that the specimens which did not give rise to proliferating neurospheres were patient #1 (oligoastrocytoma, gradeII) and #7 (anaplastic astrocytoma, grade III), with comparable tumor grades with the other specimens (Additional file 1: Table S1). Because samples are usually drawn from the periphery of the ablated tumor bulk, these two specimens might contain a certain amount of normal tissues. Overall, equal number of cells from high-grade and recurrent tumors, such as giant cell glioblastoma (WHO grade IV) and oligoastrocytoma (WHO grade III, recurrent tumor) often generate more primary tumor spheres. Due to limited number of samples, our accumulated results could not statistically lead to the conclusion that high-grade tumors contain more GSCs at present. However, the tendency described above indicated that the original frequency of GSCs might be different among samples according to tumor grades, or the GSCs from high-grade tumor tissues might show more typical properties of stem cells with higher proliferation and self-renewal ability.\nAlthough CSCs have been identified as an important factor in tumor initiation and growth, their characteristics remain obscure concerning their heterogeneity. Here we found in our experiments that although seven of the nine human gliomas gave rise to proliferating tumor spheres, different numbers of spheres arisen from equal primary glioma cells among tumor samples. It should be noted that the specimens which did not give rise to proliferating neurospheres were patient #1 (oligoastrocytoma, gradeII) and #7 (anaplastic astrocytoma, grade III), with comparable tumor grades with the other specimens (Additional file 1: Table S1). Because samples are usually drawn from the periphery of the ablated tumor bulk, these two specimens might contain a certain amount of normal tissues. Overall, equal number of cells from high-grade and recurrent tumors, such as giant cell glioblastoma (WHO grade IV) and oligoastrocytoma (WHO grade III, recurrent tumor) often generate more primary tumor spheres. 
Due to limited number of samples, our accumulated results could not statistically lead to the conclusion that high-grade tumors contain more GSCs at present. However, the tendency described above indicated that the original frequency of GSCs might be different among samples according to tumor grades, or the GSCs from high-grade tumor tissues might show more typical properties of stem cells with higher proliferation and self-renewal ability.\n[SUBTITLE] INP-like cells in GSCs population [SUBSECTION] In normal development of the brain, neurons and glia are generated from both NSCs and more limited INPs. And blockade of Notch signaling in NSCs have been shown to promote their conversion into INPs [20,21]. GSCs can differentiate into neurons and astrocytes in culture medium with serum, as shown by our results and previous studies [4]. Like INPs, it is possible that an intermediate glioma progenitor cells (IGPs) also exist, linking the GSCs-IGPs-Neuron/glia hierarchy in tumor microenvironment [32]. Our results show that blocking Notch signaling in the primary tumor spheres leads to down-regulated mRNA level of CD133, a well accepted marker of GSCs at present, indicating a decrease of GSCs. Simultaneously, the mRNA level of Hes5 and Glast, two markers highly expressed in NSCs were also decreased, while that of Mash1, a marker up-regulated in INPs was increased in primary tumor spheres after being treated with GSI [20]. In addition, Tubulin α1, an INP marker, seems can not distinguish IGPs from GSCs. Since GSI-treated primary tumor spheres could still gave rise to secondary spheres, unless much fewer than that derived from control primary spheres, the CD133low/GLASTlow/Hes5low/Mash1high IGPs might exist, with its number increased and proliferating ability decreased after the blockade of Notch signaling. Therefore, inhibiting Notch signaling might have therapeutic potential for human gliomas by exhausting GSCs and instruct them into less proliferative IGPs and differentiated neural cell types.\nIn normal development of the brain, neurons and glia are generated from both NSCs and more limited INPs. And blockade of Notch signaling in NSCs have been shown to promote their conversion into INPs [20,21]. GSCs can differentiate into neurons and astrocytes in culture medium with serum, as shown by our results and previous studies [4]. Like INPs, it is possible that an intermediate glioma progenitor cells (IGPs) also exist, linking the GSCs-IGPs-Neuron/glia hierarchy in tumor microenvironment [32]. Our results show that blocking Notch signaling in the primary tumor spheres leads to down-regulated mRNA level of CD133, a well accepted marker of GSCs at present, indicating a decrease of GSCs. Simultaneously, the mRNA level of Hes5 and Glast, two markers highly expressed in NSCs were also decreased, while that of Mash1, a marker up-regulated in INPs was increased in primary tumor spheres after being treated with GSI [20]. In addition, Tubulin α1, an INP marker, seems can not distinguish IGPs from GSCs. Since GSI-treated primary tumor spheres could still gave rise to secondary spheres, unless much fewer than that derived from control primary spheres, the CD133low/GLASTlow/Hes5low/Mash1high IGPs might exist, with its number increased and proliferating ability decreased after the blockade of Notch signaling. 
Therefore, inhibiting Notch signaling might have therapeutic potential for human gliomas by exhausting GSCs and instruct them into less proliferative IGPs and differentiated neural cell types.\n[SUBTITLE] Double positive cell types in the derivatives of GSCs [SUBSECTION] Although tumor-derived stem cells had many similarities to normal NSCs, it is important to note that differences might exist between them. Sphere differentiation assay on the specimen of 4# patient demonstrated that GSCs could give rise not only to neurons and glia but also to a few cells that expressed both Map2 and GFAP, the molecular markers of astrocyte and neuron, respectively (Figure S3). Previous studies have reported similar abnormal cells in culture derived from pediatric and adult brain tumors [4,33], indicating that such dual-fate cells might represent a significant fraction of GSCs derived progeny. These Map2+/GFAP+ cells sometimes appeared larger than other cells derived from the same sphere (Figure S3). In addition, the GFAP positive glial cells derived from GSCs showed abnormal morphology, with slim cell bodies and neurites, compared with that derived from NSCs (Figure 3D, Figure 5E). Although morphological differences might exist between mouse and human glial cells, previous research on normal human tissue demonstrated that GFAP staining of human glial cells showed similar morphology with that of mouse glial cells [34]. Therefore, the morphological difference of GFAP positive glial cells might be attributed to whether they are NSC-derived or GSC-derived. Genetically, the generation of the double-positive cells and dysmorphic glial cells may accompany with gene mutation or abnormal activation of some signal pathways, leading to aberrant reprogramming procedure of GSCs, compared with normal differentiation of NSCs.\nAlthough tumor-derived stem cells had many similarities to normal NSCs, it is important to note that differences might exist between them. Sphere differentiation assay on the specimen of 4# patient demonstrated that GSCs could give rise not only to neurons and glia but also to a few cells that expressed both Map2 and GFAP, the molecular markers of astrocyte and neuron, respectively (Figure S3). Previous studies have reported similar abnormal cells in culture derived from pediatric and adult brain tumors [4,33], indicating that such dual-fate cells might represent a significant fraction of GSCs derived progeny. These Map2+/GFAP+ cells sometimes appeared larger than other cells derived from the same sphere (Figure S3). In addition, the GFAP positive glial cells derived from GSCs showed abnormal morphology, with slim cell bodies and neurites, compared with that derived from NSCs (Figure 3D, Figure 5E). Although morphological differences might exist between mouse and human glial cells, previous research on normal human tissue demonstrated that GFAP staining of human glial cells showed similar morphology with that of mouse glial cells [34]. Therefore, the morphological difference of GFAP positive glial cells might be attributed to whether they are NSC-derived or GSC-derived. 
Genetically, the generation of the double-positive cells and dysmorphic glial cells may accompany with gene mutation or abnormal activation of some signal pathways, leading to aberrant reprogramming procedure of GSCs, compared with normal differentiation of NSCs.\n[SUBTITLE] GSI-resistance of GSCs at the early stage of GSI treatment [SUBSECTION] In our study, we found that the numbers of both primary and secondary tumor spheres were decreased in the long run (7-day culture) after GSI treatment compared with the controls. However, cell cycle analysis results showed that although Notch blockade significantly reduced the ratio of the G2+M phase in NSCs, there is no obvious effect on the percentage of proliferating GSCs within 72 h after GSI treatment. These results indicate that, compared with NSCs, another distinctive feature of GSCs was that the former are more sensitive to GSI, while the latter displays a certain degree of resistance to GSI treatment at the early stage of the treatment. Due to the limited amount of primary glioma specimens, the cell cycle analyses were executed on primary tumor spheres from three independent tumor samples. Therefore, the resistance to GSI in GSCs at the early stage of the cell cycle might be a general characteristic in gliomas, or it only represents a few cases of glioma patients which might display resistance in the preclinical trial of GSI treatment. Previous research show that treatment with dipeptide GSI resulted in a marked reduction in medulloblastoma growth [35]. More recently, a clinical trial for a Notch inhibitor, MK0752 (developed by Merck, Whitehouse Station, NJ), has been launched for T-cell acute lymphatic leukemia and breast cancer patients (http://www.clinicaltrials.gov/ct/show/NCT00100152). Although GSI seems to be a promising reagent targeting GSCs by interfering Notch signaling, our results suggested that its effect might be limited to some glioma patients. Therefore, drug combination should be used at the early stage of therapy. However, since our results are based on in vitro culture system of patient-derived samples, more accurate conclusion could be drawn from animal models or preclinical trials in future study.\nIn our study, we found that the numbers of both primary and secondary tumor spheres were decreased in the long run (7-day culture) after GSI treatment compared with the controls. However, cell cycle analysis results showed that although Notch blockade significantly reduced the ratio of the G2+M phase in NSCs, there is no obvious effect on the percentage of proliferating GSCs within 72 h after GSI treatment. These results indicate that, compared with NSCs, another distinctive feature of GSCs was that the former are more sensitive to GSI, while the latter displays a certain degree of resistance to GSI treatment at the early stage of the treatment. Due to the limited amount of primary glioma specimens, the cell cycle analyses were executed on primary tumor spheres from three independent tumor samples. Therefore, the resistance to GSI in GSCs at the early stage of the cell cycle might be a general characteristic in gliomas, or it only represents a few cases of glioma patients which might display resistance in the preclinical trial of GSI treatment. Previous research show that treatment with dipeptide GSI resulted in a marked reduction in medulloblastoma growth [35]. 
More recently, a clinical trial for a Notch inhibitor, MK0752 (developed by Merck, Whitehouse Station, NJ), has been launched for T-cell acute lymphatic leukemia and breast cancer patients (http://www.clinicaltrials.gov/ct/show/NCT00100152). Although GSI seems to be a promising reagent targeting GSCs by interfering Notch signaling, our results suggested that its effect might be limited to some glioma patients. Therefore, drug combination should be used at the early stage of therapy. However, since our results are based on in vitro culture system of patient-derived samples, more accurate conclusion could be drawn from animal models or preclinical trials in future study.\n[SUBTITLE] The mechanisms of Notch signaling in regulating the proliferation and differentiation of GSCs [SUBSECTION] The mechanistic links between Notch signaling and the proliferation and differentiation of GSCs were presumably governed by more than one mechanism. In our study, the decreased proliferation and increased differentiation of GSCs upon GSI treatment are accompanied with down-regulation of Hes1 and Hes5, the canonical Notch downstream effectors. In addition, the expression level of Mash1, a proneural gene antagonized by the Hes genes was up-regulated in GSI-treated primary tumor spheres. Therefore, the canonical Notch-CBF1-Hes axis seems also play critical roles in the proliferation and differentiation of GSCs, as its function in NSCs [11].\nOn the other hand, Notch signaling has been shown to have both negative and positive influences on cell cycle progression [11,36]. In the present study, we observed that the proliferation of GSCs decreased significantly in the long term culture, although GSI resistance of three glioma samples was present (see above). Mutations of p53, pTEN and H-Ras, have been identified in tumor tissues of giloma patients. And Notch signaling has been shown to crosstalk with p53 and pTEN signaling pathway, two major regulators of cell cycle [37,38]. In addition, down-regulation of Notch signaling in H-Ras-transformed human breast cells led to a significant decrease in their proliferation [39]. Therefore, how Notch signaling promotes the cell cycle of GSCs is yet to be explored, on the scenery of the complex signal crosstalk and genetic circuitry.\nThe mechanistic links between Notch signaling and the proliferation and differentiation of GSCs were presumably governed by more than one mechanism. In our study, the decreased proliferation and increased differentiation of GSCs upon GSI treatment are accompanied with down-regulation of Hes1 and Hes5, the canonical Notch downstream effectors. In addition, the expression level of Mash1, a proneural gene antagonized by the Hes genes was up-regulated in GSI-treated primary tumor spheres. Therefore, the canonical Notch-CBF1-Hes axis seems also play critical roles in the proliferation and differentiation of GSCs, as its function in NSCs [11].\nOn the other hand, Notch signaling has been shown to have both negative and positive influences on cell cycle progression [11,36]. In the present study, we observed that the proliferation of GSCs decreased significantly in the long term culture, although GSI resistance of three glioma samples was present (see above). Mutations of p53, pTEN and H-Ras, have been identified in tumor tissues of giloma patients. And Notch signaling has been shown to crosstalk with p53 and pTEN signaling pathway, two major regulators of cell cycle [37,38]. 
In addition, down-regulation of Notch signaling in H-Ras-transformed human breast cells led to a significant decrease in their proliferation [39]. Therefore, how Notch signaling promotes the cell cycle of GSCs is yet to be explored, on the scenery of the complex signal crosstalk and genetic circuitry.", "Although CSCs have been identified as an important factor in tumor initiation and growth, their characteristics remain obscure concerning their heterogeneity. Here we found in our experiments that although seven of the nine human gliomas gave rise to proliferating tumor spheres, different numbers of spheres arisen from equal primary glioma cells among tumor samples. It should be noted that the specimens which did not give rise to proliferating neurospheres were patient #1 (oligoastrocytoma, gradeII) and #7 (anaplastic astrocytoma, grade III), with comparable tumor grades with the other specimens (Additional file 1: Table S1). Because samples are usually drawn from the periphery of the ablated tumor bulk, these two specimens might contain a certain amount of normal tissues. Overall, equal number of cells from high-grade and recurrent tumors, such as giant cell glioblastoma (WHO grade IV) and oligoastrocytoma (WHO grade III, recurrent tumor) often generate more primary tumor spheres. Due to limited number of samples, our accumulated results could not statistically lead to the conclusion that high-grade tumors contain more GSCs at present. However, the tendency described above indicated that the original frequency of GSCs might be different among samples according to tumor grades, or the GSCs from high-grade tumor tissues might show more typical properties of stem cells with higher proliferation and self-renewal ability.", "In normal development of the brain, neurons and glia are generated from both NSCs and more limited INPs. And blockade of Notch signaling in NSCs have been shown to promote their conversion into INPs [20,21]. GSCs can differentiate into neurons and astrocytes in culture medium with serum, as shown by our results and previous studies [4]. Like INPs, it is possible that an intermediate glioma progenitor cells (IGPs) also exist, linking the GSCs-IGPs-Neuron/glia hierarchy in tumor microenvironment [32]. Our results show that blocking Notch signaling in the primary tumor spheres leads to down-regulated mRNA level of CD133, a well accepted marker of GSCs at present, indicating a decrease of GSCs. Simultaneously, the mRNA level of Hes5 and Glast, two markers highly expressed in NSCs were also decreased, while that of Mash1, a marker up-regulated in INPs was increased in primary tumor spheres after being treated with GSI [20]. In addition, Tubulin α1, an INP marker, seems can not distinguish IGPs from GSCs. Since GSI-treated primary tumor spheres could still gave rise to secondary spheres, unless much fewer than that derived from control primary spheres, the CD133low/GLASTlow/Hes5low/Mash1high IGPs might exist, with its number increased and proliferating ability decreased after the blockade of Notch signaling. Therefore, inhibiting Notch signaling might have therapeutic potential for human gliomas by exhausting GSCs and instruct them into less proliferative IGPs and differentiated neural cell types.", "Although tumor-derived stem cells had many similarities to normal NSCs, it is important to note that differences might exist between them. 
Sphere differentiation assay on the specimen of 4# patient demonstrated that GSCs could give rise not only to neurons and glia but also to a few cells that expressed both Map2 and GFAP, the molecular markers of astrocyte and neuron, respectively (Figure S3). Previous studies have reported similar abnormal cells in culture derived from pediatric and adult brain tumors [4,33], indicating that such dual-fate cells might represent a significant fraction of GSCs derived progeny. These Map2+/GFAP+ cells sometimes appeared larger than other cells derived from the same sphere (Figure S3). In addition, the GFAP positive glial cells derived from GSCs showed abnormal morphology, with slim cell bodies and neurites, compared with that derived from NSCs (Figure 3D, Figure 5E). Although morphological differences might exist between mouse and human glial cells, previous research on normal human tissue demonstrated that GFAP staining of human glial cells showed similar morphology with that of mouse glial cells [34]. Therefore, the morphological difference of GFAP positive glial cells might be attributed to whether they are NSC-derived or GSC-derived. Genetically, the generation of the double-positive cells and dysmorphic glial cells may accompany with gene mutation or abnormal activation of some signal pathways, leading to aberrant reprogramming procedure of GSCs, compared with normal differentiation of NSCs.", "In our study, we found that the numbers of both primary and secondary tumor spheres were decreased in the long run (7-day culture) after GSI treatment compared with the controls. However, cell cycle analysis results showed that although Notch blockade significantly reduced the ratio of the G2+M phase in NSCs, there is no obvious effect on the percentage of proliferating GSCs within 72 h after GSI treatment. These results indicate that, compared with NSCs, another distinctive feature of GSCs was that the former are more sensitive to GSI, while the latter displays a certain degree of resistance to GSI treatment at the early stage of the treatment. Due to the limited amount of primary glioma specimens, the cell cycle analyses were executed on primary tumor spheres from three independent tumor samples. Therefore, the resistance to GSI in GSCs at the early stage of the cell cycle might be a general characteristic in gliomas, or it only represents a few cases of glioma patients which might display resistance in the preclinical trial of GSI treatment. Previous research show that treatment with dipeptide GSI resulted in a marked reduction in medulloblastoma growth [35]. More recently, a clinical trial for a Notch inhibitor, MK0752 (developed by Merck, Whitehouse Station, NJ), has been launched for T-cell acute lymphatic leukemia and breast cancer patients (http://www.clinicaltrials.gov/ct/show/NCT00100152). Although GSI seems to be a promising reagent targeting GSCs by interfering Notch signaling, our results suggested that its effect might be limited to some glioma patients. Therefore, drug combination should be used at the early stage of therapy. However, since our results are based on in vitro culture system of patient-derived samples, more accurate conclusion could be drawn from animal models or preclinical trials in future study.", "The mechanistic links between Notch signaling and the proliferation and differentiation of GSCs were presumably governed by more than one mechanism. 
In our study, the decreased proliferation and increased differentiation of GSCs upon GSI treatment are accompanied with down-regulation of Hes1 and Hes5, the canonical Notch downstream effectors. In addition, the expression level of Mash1, a proneural gene antagonized by the Hes genes was up-regulated in GSI-treated primary tumor spheres. Therefore, the canonical Notch-CBF1-Hes axis seems also play critical roles in the proliferation and differentiation of GSCs, as its function in NSCs [11].\nOn the other hand, Notch signaling has been shown to have both negative and positive influences on cell cycle progression [11,36]. In the present study, we observed that the proliferation of GSCs decreased significantly in the long term culture, although GSI resistance of three glioma samples was present (see above). Mutations of p53, pTEN and H-Ras, have been identified in tumor tissues of giloma patients. And Notch signaling has been shown to crosstalk with p53 and pTEN signaling pathway, two major regulators of cell cycle [37,38]. In addition, down-regulation of Notch signaling in H-Ras-transformed human breast cells led to a significant decrease in their proliferation [39]. Therefore, how Notch signaling promotes the cell cycle of GSCs is yet to be explored, on the scenery of the complex signal crosstalk and genetic circuitry.", "Our data indicate that like NSCs, Notch signaling maintains the patient-derived GSCs by promoting their self-renewal and inhibiting their differentiation, and support that Notch signal inhibitor might be a prosperous candidate of the drug treatment targeting CSCs for gliomas, however, with GSI-resistance at the early stage of treatment.", "The authors declare that they have no competing interests.", "YYH and MHZ carried out tissue culture, animal experiments and gene expression analyses, participated in study design and manuscript preparation. GC and LLi carried out specimens collection. LLiang, FG and YNW helped histological examination and immunohistochemistry staining. LAF and HH designed the study and prepared the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/82/prepub\n", "Hu et al Supplementary materials The file contains Table S1-S3, Figure S1-S3 and their figure legends\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
A(H1N1) pandemic influenza and its prevention by vaccination: paediatricians' opinions before and after the beginning of the vaccination campaign.
21342511
In June 2009, the World Health Organization declared an A(H1N1) influenza pandemic. In October 2009, the largest vaccination campaign in Canadian history began. The aim of this study was to document paediatricians' knowledge, attitudes and practices (KAP) regarding A(H1N1) pandemic influenza and its prevention by vaccination just after the beginning of the A(H1N1) vaccination campaign and to compare the results with those obtained before campaign initiation.
BACKGROUND
A self-administered mail-based questionnaire was sent to all Canadian paediatricians. Questionnaires were analyzed in two subsets: those received before and after the beginning of the vaccination campaign.
METHODS
Overall the response rate was 50%. Respondents' characteristics were comparable between the two subsets. Before the beginning of the campaign, 63% of paediatricians perceived A(H1N1) pandemic infection as a serious disease, that would occur frequently without vaccination compared to more than 75% after. Before the vaccination campaign, half of respondents or less thought that the A(H1N1) vaccine was safe (50%) and effective (35%) compared to 77% and 72% after. The proportion of paediatricians who reported they had received sufficient information on A(H1N1) vaccine increased from 31% before to 73% after the beginning of the vaccination campaign. The majority of respondents intended to get vaccinated against A(H1N1) influenza themselves (84% before and 92% after). Respondents' intention to recommend the A(H1N1) vaccine to their patients increased from 80% before the beginning of the campaign to 92% after. In multivariate analysis, the main determinants of paediatricians' intention to recommend the A(H1N1) vaccine were their intention to get vaccinated against A(H1N1) influenza themselves and a belief that A(H1N1) vaccine would be well accepted by health professionals who administer vaccines to the public.
RESULTS
Results of this study show important increases in physicians' level of confidence about A(H1N1) vaccine's safety and immunogenicity and their willingness to recommend this vaccine to their patients. These changes could be explained, at least partially, by the important effort done by public health authorities to disseminate information regarding A(H1N1) vaccination.
CONCLUSION
[ "Adolescent", "Canada", "Child", "Child, Preschool", "Clinical Competence", "Disease Outbreaks", "Female", "Health Care Surveys", "Humans", "Immunization Programs", "Infant", "Influenza A Virus, H1N1 Subtype", "Influenza Vaccines", "Influenza, Human", "Male", "Pediatrics", "Physicians" ]
3050752
null
null
Methods
A self-administered, anonymous, mail-based questionnaire was sent to all Canadian paediatricians, except subspecialists. The Canadian Medical Directory [15] was used to identify paediatricians. This database contains more than 58,000 listings of Canadian physicians' contact information and is updated each year. A multidisciplinary team developed the questionnaire using the Analytical framework for immunization programs in Canada as a theoretical base [16]. This framework was developed to guide and standardize the public health decision-making process regarding new immunization programs in Canada. It includes 58 criteria classified into 13 categories. Three categories of this framework were used to guide the construction of the questionnaire: (1) Burden of disease, (2) Vaccine characteristics, and (3) Acceptability of the vaccine program. The final questionnaire included 12 questions on A(H1N1) pandemic influenza and its prevention by vaccination as well as 10 questions on KAP about vaccination in general and 10 questions on demographic and professional characteristics of respondents. Respondents were asked to base their answers on their own knowledge and opinions. For most questions, a 6-point Likert answer scale ranging from "strongly disagree" to "strongly agree" was used. No information on A(H1N1) pandemic influenza or the vaccines was provided. The questionnaire was mailed to 1,852 paediatricians. The first two mailings were done in August-September 2009 and the third in November 2009. The last mailing was sent to 1,118 paediatricians who had not responded to the first two mailings. The study protocol was approved by the Ethics Board of the Laval University Hospital Center (reference number 126.05.02). All vaccines authorized for sale in Canada, including the A(H1N1) influenza vaccines, are reviewed and approved by the federal government (Health Canada). However, each province and territory is responsible for the development of publicly funded immunization programs, including the schedules and the logistics of administering vaccines as well as education of the population and health professionals. The A(H1N1) pandemic vaccine was approved for use in Canada on 21st October 2009 and the vaccination campaign started shortly afterwards in all Canadian provinces and territories (within days before or after 29th October 2009). To vaccinate as many persons as possible in the shortest period of time, most Canadian jurisdictions used mass vaccination centres administered by the public system. All provincial and territorial authorities, jointly with federal authorities, determined a sequence of vaccination by target groups. Throughout Canada, priority to receive the A(H1N1) vaccine was given to healthcare workers. The mass vaccination campaign ended in most Canadian provinces and territories in mid-December 2009. Due to the intense media coverage and the substantial educational efforts undertaken around the vaccination campaign and their potential impact on physicians' KAP [17,18], we decided to perform a "before-after" analysis. Questionnaires were analyzed in two subsets: those received before (first subset) and after (second subset) the start of the vaccination campaign (October 29). Descriptive statistics were generated for all variables. Missing responses were excluded from the analyses. Univariate analyses were computed separately for the two data subsets. Comparisons of categorical responses were performed using chi-square or Fisher's exact tests. 
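For illustration, the before/after comparison of a single dichotomised item can be expressed as follows in R; the counts are placeholders rather than study data, and the snippet is a sketch only, since the analyses themselves were run in the software packages cited below.

# Hypothetical 2 x 2 table: agreement ("strongly agree"/"agree" vs all other responses)
# with one questionnaire item, cross-tabulated against the before/after subsets.
tab <- matrix(c(450, 264,    # before campaign: agree / other (illustrative counts)
                160,  37),   # after campaign:  agree / other (illustrative counts)
              nrow = 2, byrow = TRUE,
              dimnames = list(subset = c("before", "after"),
                              response = c("agree", "other")))
chisq.test(tab)    # chi-square test of association
fisher.test(tab)   # Fisher's exact test, preferred when expected cell counts are small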
A multivariate logistic regression model was used to determine variables independently associated with the paediatrician's intention to recommend the A(H1N1) pandemic vaccine. Dependent and explanatory variables were dichotomized: the responses "strongly agree" and "agree" versus all others ("strongly disagree", "disagree", "somewhat disagree" and "somewhat agree"). Variables associated in the univariate analysis with the intention to recommend the vaccine at p ≤ 0.20 were entered into the multivariate regression models using the stepwise selection technique. The model was adjusted to take into consideration the two subsets of data. A new binary explanatory variable was created (subsequently referred to as "subset variable") and forced into the model: questionnaires mailed before the beginning of the vaccination campaign and questionnaires mailed after. Variables were reevaluated in the final model to check for confounding and model fit. A probability level of p < 0.05 based on two-sided tests was considered statistically significant. The collinearity was checked and the adequacy of the model was evaluated by Hosmer and Lemeshow's goodness of fit test. Multiple correspondence analysis (MCA) method [19] was also used as a complementary way to analyse our dataset. MCA is used to detect links between variables (including, in this study, the intention to recommend the A(H1N1) vaccine), but there is no dependent variable. This method is a form of principal components analysis that is appropriate for qualitative variables. MCA searches for principal components, which are new quantitative modelled variables, constructed as linear functions of the initial variables. Finding the principal components is based on the maximisation of the correlation ratio between the principal component and the initial variables. All principal components are mutually uncorrelated by construction. Our initial variables were all variables in the questionnaire pertaining to A(H1N1) vaccine and A(H1N1) influenza and the subset variable. Analysis was computed using the raw variables (6 degrees of answers ranging from "Strongly agree" to "Strongly disagree"). We carried out this MCA as a sensitivity analysis to better assess the role of the subset variable and the impact of all 6 degrees of possible answers on the Likert scale. The Statistical Analysis Systems (SAS®) software (version 9.2 of the SAS system for Windows. Copyright (c) 2002-2008 by SAS Institute Inc., Cary, NC, USA) and R software (version 2.11.1) [20] with the library FactoMineR [21] were used for data analyses.
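The two modelling steps described above can be sketched in R as follows; the data frame `q`, the outcome `recommend`, the explanatory variable names and the `att_` column prefix are all illustrative assumptions, the stepwise selection reported in the study was run in SAS and is only approximated here with step(), and only the MCA call uses the FactoMineR library cited above.

library(FactoMineR)   # MCA, as cited in the Methods

# Multivariate logistic regression with the subset variable forced into the model.
base  <- glm(recommend ~ subset, data = q, family = binomial)
model <- step(base,
              scope = list(lower = ~ subset,
                           upper = ~ subset + intends_vaccination + believes_accepted +
                                    perceives_severe + sufficient_info),
              direction = "both")   # stepwise selection; 'subset' is kept throughout
summary(model)

# Multiple correspondence analysis on the raw 6-level Likert items plus the subset variable.
likert_items <- q[, grep("^att_", names(q))]   # A(H1N1) attitude items, stored as factors
likert_items$subset <- q$subset                # subset indicator, also a factor
res_mca <- MCA(likert_items, graph = FALSE)    # principal components of the categories
summary(res_mca)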
[ "In June 2009, the World Health Organization (WHO) declared an A(H1N1) influenza pandemic [1]. In October 2009, the largest vaccination campaign in Canadian history began. Before the start of the A(H1N1) pandemic influenza vaccination campaign, limited information on the safety and immunogenicity of A(H1N1) influenza vaccines was available. However, the spread of A(H1N1) influenza generated intense media interest in pandemic preparedness and contradictory information around the vaccine and the vaccination campaign was reported [2-6]. During the first wave of A(H1N1) influenza in Canada, 77 deaths were reported, mostly in the provinces of Quebec and Ontario. During the second wave, in early fall 2009, 351 deaths were reported across the country [7]. In Canada, almost exclusively, an A(H1N1) pandemic influenza vaccine (Arepanrix™) containing a novel adjuvant (AS03 adjuvant, as an oil-in-water emulsion) was used [8].\nPhysicians are known to play a key role in public acceptance of new vaccines and their recommendations are an important determinant of vaccine uptake [9-13]. Prior to the A(H1N1) vaccine approval for clinical use and the release of professional association and experts committee recommendations, we documented Canadian family physicians' and paediatricians' knowledge, attitudes and practice (KAP) regarding A(H1N1) pandemic influenza and its prevention by vaccination [14]. In this study, 59% of paediatricians had had some experience with severe cases of A(H1N1) pandemic influenza and the majority (75%) of them were willing to recommend the A(H1N1) pandemic vaccine to their patients. More than 75% of the respondents also indicated the willingness to get the vaccine themselves [14].\nThe aim of this study was to document paediatricians' KAP regarding A(H1N1) pandemic influenza and its prevention by vaccination just after the beginning of the A(H1N1) vaccination campaign and to compare these results with those obtained before vaccination campaign initiation.", "[SUBTITLE] Participation and socio-professionals characteristics [SUBSECTION] Overall, 912 paediatricians have completed the questionnaires: 714 completed the questionnaire before the beginning of the A(H1N1) vaccination campaign and 197 completed it after. After exclusion of physicians no longer practicing, those with incorrect addresses or those who were subspecialists, the overall participation rate was 50% (911/1832). Participation rates by country regions varied from 40.1% in Prairies to 57.7% in Quebec. Table 1 shows respondents' socio-professionals characteristics. No statistically significant differences were found among the characteristics of paediatricians who complete the survey before and after the beginning of A(H1N1) vaccination campaign.\nPaediatricians' professional and demographic characteristics (%)\nOverall, 912 paediatricians have completed the questionnaires: 714 completed the questionnaire before the beginning of the A(H1N1) vaccination campaign and 197 completed it after. After exclusion of physicians no longer practicing, those with incorrect addresses or those who were subspecialists, the overall participation rate was 50% (911/1832). Participation rates by country regions varied from 40.1% in Prairies to 57.7% in Quebec. Table 1 shows respondents' socio-professionals characteristics. 
Knowledge, attitudes and practices regarding vaccination in general

Overall, 98% of paediatricians thought that vaccines recommended by public health authorities are very useful (97.6% before and 98.5% after; p = 0.5911), and 73% agreed or strongly agreed with the statement "it is very useful to protect children with the vaccines against seasonal influenza" (72.9% before and 73.9% after; p = 0.7855). When recommending new vaccines to their patients, 91% of paediatricians indicated that they are highly influenced by expert group recommendations (91.3% before and 90.7% after; p = 0.8152) and 90% by professional association recommendations (90.8% before and 89.7% after; p = 0.6414). Approximately half of paediatricians (49%) stated that it is easy for them to advise their patients on new vaccines. No statistically significant differences were found in attitudes towards vaccination in general between respondents who answered before and after the beginning of the A(H1N1) vaccination campaign.

Knowledge, attitudes and practices regarding A(H1N1) pandemic influenza and its prevention by vaccination

Before the beginning of the campaign, 63% of paediatricians perceived A(H1N1) pandemic infection as a serious disease that would occur frequently without vaccination, compared with more than 75% after campaign initiation. In addition, fewer respondents considered that A(H1N1) pandemic influenza was severe enough to warrant special precautions to prevent it before than after the start of the vaccination campaign (73% agreed or strongly agreed before compared with 63% after, p = 0.0136) (Table 2). Before the vaccination campaign, half of respondents or fewer agreed or strongly agreed that the A(H1N1) vaccine was safe (50%) and effective (35%), compared with 77% and 72%, respectively, after the start of the campaign (p < 0.001) (Table 2). Paediatricians' perceived acceptability of the A(H1N1) vaccine by the public remained comparable (45% before versus 41% after agreed or strongly agreed, p = 0.3608). Paediatricians' perceived acceptability of the A(H1N1) vaccine by the health professionals who administered vaccines (hereafter "vaccine providers") increased after the beginning of the vaccination campaign (71% before versus 84% after, p < 0.001). Respondents' intention to recommend the A(H1N1) vaccine increased from 80% before the beginning of the campaign to 92% after, including 46% and 62% of paediatricians, respectively, who declared they would strongly recommend it. Globally, 40% of respondents who disagreed with the usefulness of protecting children with the seasonal influenza vaccine did not intend to recommend the A(H1N1) pandemic vaccine to their patients, whereas 2% of physicians who agreed with the usefulness of the seasonal influenza vaccine did not intend to recommend it (p < 0.001) (data not shown). The majority of respondents intended to get vaccinated against A(H1N1) pandemic influenza themselves (84% before and 92% after, p = 0.003, Table 1). No statistically significant differences were observed in intention to be vaccinated between paediatricians practicing in different Canadian regions, either before (p = 0.4315) or after (p = 0.4291) the beginning of the campaign. Before the beginning of the vaccination campaign, 13% of paediatricians were undecided about being vaccinated themselves, compared with 3% after. Finally, the proportion of paediatricians who reported they had received sufficient information on the A(H1N1) vaccine increased from 31% before the beginning of the campaign to 73% after (p < 0.001), with 3% of respondents reporting that they felt their knowledge was insufficient after the beginning of the vaccination campaign versus 17% before (Table 2).

Table 2. Paediatricians' knowledge, attitudes and practices regarding A(H1N1) pandemic influenza and vaccine (%)
* Before/after differences statistically significant at p < 0.05

Factors associated with the intention to recommend A(H1N1) pandemic vaccine before and after the start of the vaccination campaign

The intention to get vaccinated against A(H1N1) pandemic influenza themselves (OR = 8.65) and the belief that the A(H1N1) pandemic vaccine would be well accepted by vaccine providers (OR = 6.65) were the factors most strongly associated with the intention to recommend the A(H1N1) pandemic vaccine to patients. Six other variables were also significantly associated with the intention to recommend the vaccine: belief that seasonal influenza vaccines are very useful to protect child health; perceived economic burden of A(H1N1) influenza illness; self-estimated sufficiency of knowledge about the A(H1N1) vaccine; perceived safety of the A(H1N1) vaccine; perceived severity of A(H1N1) pandemic influenza; and belief that special precautions to prevent A(H1N1) pandemic influenza are needed (Table 3).

Table 3. Variables associated with respondents' intention to recommend A(H1N1) pandemic vaccine in multivariate regression analysis (N = 709)*
* Multivariate analyses; OR = odds ratio; CI = confidence interval; 205 physicians were excluded because of missing answers (170 before and 32 after)
§ Strongly agree, agree

Multiple correspondence analyses (MCA)

Results of the MCA supported the associations found in the logistic regression models (Figure 1). Overall, the first and second principal components summarized 13.6% of the initial variables' variability. The coefficients of determination (r²) for the first and second principal components were low (maximum of 0.71 for the first principal component and 0.57 for the second), which indicated that responses were homogeneous among respondents. More precisely, the results showed that respondents could be grouped by the negative rather than the positive levels of their answers: the levels "strongly disagree" and "disagree" were the most discriminatory.

Figure 1. Correlations between variables and the first and second principal components. The total percentage of the variability explained by the first and second principal components was 13.6%. Variables almost uncorrelated with the first and second principal components are not represented on the graph, except the subset variable (S). The variables B, C, F, G and J contributed more to the first and second principal components than the others. J: respondent's intention to recommend the A(H1N1) pandemic vaccine to their patients; G: "belief that the A(H1N1) vaccine will be effective"; F: "belief that the A(H1N1) vaccine will be safe"; C: "belief that A(H1N1) pandemic influenza is a serious disease"; B: "belief that seasonal influenza vaccines are very useful to protect children's health". All variables, including those almost uncorrelated with the first and second principal components, are listed in an additional file (see Additional File 1).

Both before and after the onset of the vaccination campaign, the respondents' intention to recommend the A(H1N1) pandemic vaccine was the principal contributor to the modelled principal component (weight for the "strongly disagree" level: 4.14), followed by the respondents' intention to receive the A(H1N1) pandemic vaccine (weight for the "no" level: 3.45) and the "belief that seasonal influenza vaccines are very useful to protect children's health" (weight for the "strongly disagree" level: 3.44). The mean weight for all other variables was 0.87, including the subset variable, which was the variable least correlated with the first and second principal components (variable S on Figure 1).
Discussion

To our knowledge, this is the first study to measure changes in physicians' KAP regarding A(H1N1) pandemic influenza and its prevention by vaccination before and after the approval of the vaccine and the start of the vaccination campaign. Previous studies among healthcare workers assessed the acceptability of the A(H1N1) vaccine before its official approval and program implementation, or used hypothetical pandemic vaccination scenarios [18,22-26].
In these studies, intention to be vaccinated against A(H1N1) pandemic influenza varied from 48% to 80% among healthcare workers, compared with 84% of paediatricians surveyed before the beginning of the vaccination campaign in our study. Similarly to our results, a study conducted in Mexico reported that 72% of healthcare workers would recommend the vaccine to their patients, and that they were more likely to do so when they intended to get vaccinated themselves [25].

Results of this national survey among paediatricians indicated an important increase in paediatricians' perception of the burden of A(H1N1) pandemic influenza and in their support for A(H1N1) vaccination after the beginning of the vaccination campaign. Respondents' endorsement of almost all items regarding A(H1N1) pandemic influenza and its prevention by vaccination increased after the start of the vaccination campaign. This is not surprising given that the first A(H1N1) vaccine available in Canada used a novel adjuvant (AS03) for which limited information regarding safety and immunogenicity was available. The proportion of physicians who reported they had received sufficient information about the A(H1N1) vaccine also increased by 42% after the start of the vaccination campaign. This increase may be attributable to the important educational efforts made at the beginning of the vaccination campaign, along with the official recommendations by expert groups and professional associations released in early November [27-29].

Health professionals' knowledge about vaccines has previously been shown to be a main determinant of their own vaccine uptake and of their intention to recommend the vaccine to their patients [30,31]. The association between physicians' own vaccination behaviours and their recommendations to their patients has been previously established [12,32-35]. It appears to hold true in a pandemic context, as shown by our results and those previously reported in Mexico [25].

Our results highlight the positive change in paediatricians' knowledge and level of support for the A(H1N1) vaccines throughout the pandemic vaccination campaign. This change may be attributable to increased education efforts and the very rare vaccine-associated adverse events [36], but may also reflect the intense media attention focused on the vaccination campaign. A recent UK study showed that healthcare workers were more willing to accept stockpiled H5N1 vaccine during a period of high media coverage of an H5N1 outbreak in a poultry farm than 6 months later (63.4% vs. 51.9%, p = 0.009) [18]. The increased exposure of paediatricians to severe cases of A(H1N1) disease, as observed in our results, may also have enhanced paediatricians' acceptance of the A(H1N1) pandemic vaccine.

In the logistic regression analysis, paediatricians' intention to get vaccinated against A(H1N1) pandemic influenza themselves was the factor most strongly associated with the intention to recommend the A(H1N1) pandemic vaccine to patients. Results obtained by the multiple correspondence analysis (MCA) are consistent with the results of the logistic regression analysis. The negative response levels were also more discriminatory than the positive ones. This is consistent with results of previous studies showing that knowledge and behaviours regarding seasonal influenza influenced A(H1N1) vaccination status: individuals who were not vaccinated against seasonal influenza were less likely to intend to receive the pandemic vaccine [37,38].
Intention to be vaccinated against A(H1N1) pandemic influenza was also higher than the vaccine uptake against seasonal influenza usually reported among healthcare workers in Canadian studies [39,40], estimated at 64% in 2006 [41]. In our study, a significant proportion of paediatricians who disagreed with the usefulness of protecting children with the seasonal influenza vaccine did not intend to recommend the A(H1N1) vaccine to their patients. However, results obtained by MCA should be interpreted with caution, as only 13.6% of the variability is summarized by the first and second principal components. This is principally due to the uneven distribution of response levels.

Our study has several limitations. First, the study was not initially designed for a "before-after" analysis. The increase in willingness to be vaccinated against A(H1N1) pandemic influenza observed among paediatricians may result from a response bias, with respondents having more doubts about pandemic vaccination before the vaccination campaign actually started. Nonetheless, respondents' demographic and professional characteristics as well as their attitudes toward vaccination in general were very similar, suggesting that the two subsets of participants were comparable. Second, the dichotomization of the dependent variable ("strongly agree" and "agree" versus all others) was a conservative choice, and physicians who answered "somewhat agree" were considered as having a neutral opinion, not a positive one. Third, the distribution of answers to the dependent variable in the subset of data collected after the beginning of the vaccination campaign did not allow us to perform separate multivariate analyses for the two subsets. However, the model was adjusted to take into consideration the time period when the paediatricians completed the survey (before or after the initiation of the campaign). Finally, the response rate was 50%, and a non-participation bias cannot be excluded. However, the response rate remains satisfactory for a mail-based survey of physicians [42-44]. In addition, the socio-demographic characteristics of respondents are comparable to those reported in other surveys conducted among Canadian paediatricians [42,45]. The socio-professional characteristics of respondents also suggest good representativeness.

Results of this study indicated a high level of willingness among paediatricians to be vaccinated against A(H1N1) and to recommend the vaccine to their patients. Lack of knowledge about the A(H1N1) vaccine, the belief that A(H1N1) was not a severe disease, and concerns over the A(H1N1) vaccine's safety and usefulness were barriers to paediatricians' intention to recommend it. This is consistent with barriers to seasonal influenza vaccination among healthcare workers reported in the literature [31,46,47]. Public health interventions to promote seasonal influenza vaccination among healthcare workers should include the delivery of evidence-based information regarding influenza vaccines' safety, efficacy and usefulness. Educational campaigns should also stress the threat posed by seasonal influenza to healthcare workers and their patients.

Conclusion

In summary, the results show important increases in physicians' level of confidence in the A(H1N1) vaccine's safety and immunogenicity and in their willingness to recommend this vaccine during the first months of the campaign. More than 40% of all Canadians aged 12 years or older received at least one dose of the A(H1N1) vaccine during the vaccination campaign [48].
In the province of Quebec, Canada, almost 80% of children aged between 6 months and 5 years were vaccinated against A(H1N1) influenza [49]. Paediatricians' support of the vaccination campaign and their recommendations were surely among the key components of this success.

Competing interests

This study was financially supported by the Quebec Ministry of Health and Social Services and by an unrestricted grant from GlaxoSmithKline. No private company or their employees were involved in study protocol/questionnaire design, data collection, data analysis and interpretation, or manuscript writing.

Authors' contributions

All authors except FD were involved in the design of the study. ED and FD drafted the manuscript. FD performed the statistical analysis. All authors have read and approved the final version of the manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/11/128/prepub
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "Participation and socio-professionals characteristics", "Knowledge, attitudes and practices regarding vaccination in general", "Knowledge, attitudes and practices regarding A(H1N1) pandemic influenza and its prevention by vaccination", "Factors associated with the intention to recommend A(H1N1) pandemic vaccine before and after the start of the vaccination campaign", "Multiple correspondence analyses (MCA)", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "In June 2009, the World Health Organization (WHO) declared an A(H1N1) influenza pandemic [1]. In October 2009, the largest vaccination campaign in Canadian history began. Before the start of the A(H1N1) pandemic influenza vaccination campaign, limited information on the safety and immunogenicity of A(H1N1) influenza vaccines was available. However, the spread of A(H1N1) influenza generated intense media interest in pandemic preparedness and contradictory information around the vaccine and the vaccination campaign was reported [2-6]. During the first wave of A(H1N1) influenza in Canada, 77 deaths were reported, mostly in the provinces of Quebec and Ontario. During the second wave, in early fall 2009, 351 deaths were reported across the country [7]. In Canada, almost exclusively, an A(H1N1) pandemic influenza vaccine (Arepanrix™) containing a novel adjuvant (AS03 adjuvant, as an oil-in-water emulsion) was used [8].\nPhysicians are known to play a key role in public acceptance of new vaccines and their recommendations are an important determinant of vaccine uptake [9-13]. Prior to the A(H1N1) vaccine approval for clinical use and the release of professional association and experts committee recommendations, we documented Canadian family physicians' and paediatricians' knowledge, attitudes and practice (KAP) regarding A(H1N1) pandemic influenza and its prevention by vaccination [14]. In this study, 59% of paediatricians had had some experience with severe cases of A(H1N1) pandemic influenza and the majority (75%) of them were willing to recommend the A(H1N1) pandemic vaccine to their patients. More than 75% of the respondents also indicated the willingness to get the vaccine themselves [14].\nThe aim of this study was to document paediatricians' KAP regarding A(H1N1) pandemic influenza and its prevention by vaccination just after the beginning of the A(H1N1) vaccination campaign and to compare these results with those obtained before vaccination campaign initiation.", "A self-administered, anonymous, mail-based questionnaire was sent to all Canadian paediatricians, except subspecialist. The Canadian Medical Directory [15] was used to identify paediatricians. This database contains more than 58,000 listings of Canadian physicians medical contact information and is updated each year. A multidisciplinary team developed the questionnaire using the Analytical framework for immunization programs in Canada as a theoretical base [16]. This framework was developed to guide and standardize public health decision-making process regarding new immunization programs in Canada. It includes 58 criteria classified into 13 categories. Three categories of this framework were used to guide the construction of the questionnaire: (1) Burden of disease, (2) Vaccine characteristics, and (3) Acceptability of the vaccine program. The final questionnaire included 12 questions on A(H1N1) pandemic influenza and its prevention by vaccination as well as 10 questions on KAP about vaccination in general and 10 questions on demographic and professional characteristics of respondents. Respondents were asked to base their answers on their own knowledge and opinions. For most questions, a 6-point Likert answer scale ranging from \"strongly disagree\" to \"strongly agree\" was used. No information on A(H1N1) pandemic influenza or the vaccines was provided. The questionnaire was mailed to 1,852 pediatricians. The first two mailings were done in August-September 2009 and the third in November 2009. 
The last mailing was sent to 1,118 pediatricians who had not responded to the first two mailings. The study protocol was approved by the Ethics Board of the Laval University Hospital Center (reference number 126.05.02).\nAll vaccines authorized for sale in Canada, including the A(H1N1) influenza vaccines, are reviewed and approved by the federal government (Health Canada). However, each province and territory is responsible for the development of publicly funded immunization programs, including the schedules and the logistics of administering vaccines as well as education of the population and health professionals. The A(H1N1) pandemic vaccine was approved for use in Canada on 21st October 2009 and vaccination campaign started shortly afterwards in all Canadian provinces and territories (within days before or after 29th October 2009). To vaccinate as many persons as possible in the shortest period of time, most Canadian jurisdictions used mass vaccination centres administered by the public system. All provincial and territorial authorities, conjointly with federal authorities, have determined a sequence of vaccination by target groups. Throughout Canada, priority to receive the A(H1N1) vaccine was given to healthcare workers. Mass vaccination campaign ended in most Canadian provinces and territories in mid-December 2009.\nDue to the intense media coverage and the important educational efforts undertaken around the vaccination campaign and their potential impact on physicians' KAP [17,18], we decided to perform a \"before-after\" analysis. Questionnaires were analyzed in two subsets: those received before (first subset) and after (second subset) the start of the vaccination campaign (October 29). Descriptive statistics were generated for all variables. Missing responses were excluded from the analyses. Univariate analyses were computed separately for the two data subsets. Comparisons of categorical responses were performed using chi-square or Fisher's exact tests. A multivariate logistic regression model was used to determine variables independently associated with the paediatrician's intention to recommend the A(H1N1) pandemic vaccine. Dependent and explanatory variables were dichotomized: the responses \"strongly agree\" and \"agree\" versus all others (\"strongly disagree\", \"disagree\", \"somewhat disagree\" and \"somewhat agree\"). Variables associated in the univariate analysis with the intention to recommend the vaccine at p ≤ 0.20 were entered into the multivariate regression models using the stepwise selection technique. The model was adjusted to take into consideration the two subsets of data. A new binary explanatory variable was created (subsequently referred to as \"subset variable\") and forced into the model: questionnaires mailed before the beginning of the vaccination campaign and questionnaires mailed after. Variables were reevaluated in the final model to check for confounding and model fit. A probability level of p < 0.05 based on two-sided tests was considered statistically significant. The collinearity was checked and the adequacy of the model was evaluated by Hosmer and Lemeshow's goodness of fit test.\nMultiple correspondence analysis (MCA) method [19] was also used as a complementary way to analyse our dataset. MCA is used to detect links between variables (including, in this study, the intention to recommend the A(H1N1) vaccine), but there is no dependent variable. 
This method is a form of principal components analysis that is appropriate for qualitative variables. MCA searches for principal components, which are new quantitative modelled variables, constructed as linear functions of the initial variables. Finding the principal components is based on the maximisation of the correlation ratio between the principal component and the initial variables. All principal components are mutually uncorrelated by construction. Our initial variables were all variables in the questionnaire pertaining to A(H1N1) vaccine and A(H1N1) influenza and the subset variable. Analysis was computed using the raw variables (6 degrees of answers ranging from \"Strongly agree\" to \"Strongly disagree\"). We carried out this MCA as a sensitivity analysis to better assess the role of the subset variable and the impact of all 6 degrees of possible answers on the Likert scale. The Statistical Analysis Systems (SAS®) software (version 9.2 of the SAS system for Windows. Copyright (c) 2002-2008 by SAS Institute Inc., Cary, NC, USA) and R software (version 2.11.1) [20] with the library FactoMineR [21] were used for data analyses.", "[SUBTITLE] Participation and socio-professionals characteristics [SUBSECTION] Overall, 912 paediatricians have completed the questionnaires: 714 completed the questionnaire before the beginning of the A(H1N1) vaccination campaign and 197 completed it after. After exclusion of physicians no longer practicing, those with incorrect addresses or those who were subspecialists, the overall participation rate was 50% (911/1832). Participation rates by country regions varied from 40.1% in Prairies to 57.7% in Quebec. Table 1 shows respondents' socio-professionals characteristics. No statistically significant differences were found among the characteristics of paediatricians who complete the survey before and after the beginning of A(H1N1) vaccination campaign.\nPaediatricians' professional and demographic characteristics (%)\nOverall, 912 paediatricians have completed the questionnaires: 714 completed the questionnaire before the beginning of the A(H1N1) vaccination campaign and 197 completed it after. After exclusion of physicians no longer practicing, those with incorrect addresses or those who were subspecialists, the overall participation rate was 50% (911/1832). Participation rates by country regions varied from 40.1% in Prairies to 57.7% in Quebec. Table 1 shows respondents' socio-professionals characteristics. No statistically significant differences were found among the characteristics of paediatricians who complete the survey before and after the beginning of A(H1N1) vaccination campaign.\nPaediatricians' professional and demographic characteristics (%)\n[SUBTITLE] Knowledge, attitudes and practices regarding vaccination in general [SUBSECTION] Overall, 98% of paediatricians thought that vaccines recommended by public health authorities are very useful (97,6% before and 98,5% after; p = 0,5911), and 73% agreed or strongly agreed with the statement \"it is very useful to protect children with the vaccines against seasonal influenza\" (72,9% before and 73,9% after; p = 0,7855). When recommending new vaccines to their patients, 91% of paediatricians indicated that they are highly influenced by expert group recommendations (91,3% before and 90,7% after; p = 0,8152) and 90%, by professional association recommendations (90,8% before and 89,7% after; p = 0,6414). 
Approximately half of paediatricians (49%) stated that it is easy for them to advise their patients on new vaccines. No statistically significant differences were found in attitudes towards vaccination in general between respondents who answered before or after the beginning of A(H1N1) vaccination campaign\nOverall, 98% of paediatricians thought that vaccines recommended by public health authorities are very useful (97,6% before and 98,5% after; p = 0,5911), and 73% agreed or strongly agreed with the statement \"it is very useful to protect children with the vaccines against seasonal influenza\" (72,9% before and 73,9% after; p = 0,7855). When recommending new vaccines to their patients, 91% of paediatricians indicated that they are highly influenced by expert group recommendations (91,3% before and 90,7% after; p = 0,8152) and 90%, by professional association recommendations (90,8% before and 89,7% after; p = 0,6414). Approximately half of paediatricians (49%) stated that it is easy for them to advise their patients on new vaccines. No statistically significant differences were found in attitudes towards vaccination in general between respondents who answered before or after the beginning of A(H1N1) vaccination campaign\n[SUBTITLE] Knowledge, attitudes and practices regarding A(H1N1) pandemic influenza and its prevention by vaccination [SUBSECTION] Before the beginning of the campaign, 63% of paediatricians perceived A(H1N1) pandemic infection as a serious disease, that would occur frequently without vaccination comparatively to more than 75% after campaign initiation. In addition, less respondents considered that A(H1N1) pandemic influenza was severe enough to take special precautions to prevent it before, than after the start of the vaccination campaign (73% agreed or strongly agreed before compared to 63% after, p = 0.0136) (Table 2). Before the vaccination campaign, half of respondents or less agreed or strongly agreed that the A(H1N1) vaccine was safe (50%) and effective (35%) compared to 77% and 72% after the start of the campaign who felt the vaccine was safe and effective (p < 0.001) (Table 2). Paediatricians' perceived acceptability of A(H1N1) vaccine by the public remained comparable (45% before versus 41% after agreed and strongly agreed, p = 0.3608). Paediatricians' perceived acceptability of the A(H1N1) vaccine by health professionals who administered vaccines (hereafter named \"vaccine providers\") increased after the beginning of vaccination campaign (71% before versus 84% after, p < 0.001). Respondents' intention to recommend the A(H1N1) vaccine increased from 80% before the beginning of the campaign to 92% after, including 46% and 62% of paediatricians that declared they would strongly recommend it, respectively. Globally, 40% of respondents who disagreed with the usefulness to protect children with the seasonal influenza vaccine did not intended to recommend A(H1N1) pandemic vaccine to their patients while 2% of physicians who agreed with the usefulness of the seasonal influenza vaccine did not intended to recommend the A(H1N1) pandemic vaccine (p < 0.001) (data not shown). The majority of respondents intended to get vaccinated against A(H1N1) pandemic influenza themselves (84% before and 92% after, p = 0.003, Table 1). No statistically significant differences were observed in intention to be vaccinated between paediatricians practicing in different Canadian regions, neither before (p = 0.4315) nor after (p = 0.4291) the beginning of the campaign. 
Before the beginning of the vaccination campaign, 13% of paediatricians were undecided about being vaccinated themselves compared to 3% after. Finally, the proportion of paediatricians who reported they had received sufficient information on A(H1N1) vaccine increased from 31% before the beginning of the campaign to 73% after (p < 0.001), with 3% of respondents reporting they felt their knowledge was insufficient after the beginning of the vaccination campaign versus 17% before (Table 2).\nPaediatricians' knowledge, attitudes and practices regarding A(H1N1) pandemic influenza and vaccine (%)\n* Before/after differences statistically significant at p < 0.05\nBefore the beginning of the campaign, 63% of paediatricians perceived A(H1N1) pandemic infection as a serious disease, that would occur frequently without vaccination comparatively to more than 75% after campaign initiation. In addition, less respondents considered that A(H1N1) pandemic influenza was severe enough to take special precautions to prevent it before, than after the start of the vaccination campaign (73% agreed or strongly agreed before compared to 63% after, p = 0.0136) (Table 2). Before the vaccination campaign, half of respondents or less agreed or strongly agreed that the A(H1N1) vaccine was safe (50%) and effective (35%) compared to 77% and 72% after the start of the campaign who felt the vaccine was safe and effective (p < 0.001) (Table 2). Paediatricians' perceived acceptability of A(H1N1) vaccine by the public remained comparable (45% before versus 41% after agreed and strongly agreed, p = 0.3608). Paediatricians' perceived acceptability of the A(H1N1) vaccine by health professionals who administered vaccines (hereafter named \"vaccine providers\") increased after the beginning of vaccination campaign (71% before versus 84% after, p < 0.001). Respondents' intention to recommend the A(H1N1) vaccine increased from 80% before the beginning of the campaign to 92% after, including 46% and 62% of paediatricians that declared they would strongly recommend it, respectively. Globally, 40% of respondents who disagreed with the usefulness to protect children with the seasonal influenza vaccine did not intended to recommend A(H1N1) pandemic vaccine to their patients while 2% of physicians who agreed with the usefulness of the seasonal influenza vaccine did not intended to recommend the A(H1N1) pandemic vaccine (p < 0.001) (data not shown). The majority of respondents intended to get vaccinated against A(H1N1) pandemic influenza themselves (84% before and 92% after, p = 0.003, Table 1). No statistically significant differences were observed in intention to be vaccinated between paediatricians practicing in different Canadian regions, neither before (p = 0.4315) nor after (p = 0.4291) the beginning of the campaign. Before the beginning of the vaccination campaign, 13% of paediatricians were undecided about being vaccinated themselves compared to 3% after. 
Finally, the proportion of paediatricians who reported they had received sufficient information on A(H1N1) vaccine increased from 31% before the beginning of the campaign to 73% after (p < 0.001), with 3% of respondents reporting they felt their knowledge was insufficient after the beginning of the vaccination campaign versus 17% before (Table 2).\nPaediatricians' knowledge, attitudes and practices regarding A(H1N1) pandemic influenza and vaccine (%)\n* Before/after differences statistically significant at p < 0.05\n[SUBTITLE] Factors associated with the intention to recommend A(H1N1) pandemic vaccine before and after the start of the vaccination campaign [SUBSECTION] The intention to get vaccinated against A(H1N1) pandemic influenza themselves (OR = 8.65) and belief that A(H1N1) pandemic vaccine would be well accepted by vaccine providers (OR = 6.65) were the most significant factors associated with the intention to recommend A(H1N1) pandemic vaccine to patients. Six other variables were also significantly associated with the intention to recommend the vaccine: belief that seasonal influenza vaccines are very useful to protect child health; perceived economic burden of A(H1N1) influenza illness; self-estimated sufficiency of knowledge about the A(H1N1) vaccine; perceived safety of the A(H1N1) vaccine; perceived severity of A(H1N1) pandemic influenza and belief that special precautions to prevent A(H1N1) pandemic influenza are needed (Table 3).\nVariables associated with respondents intention to recommend A(H1N1) pandemic vaccine in multivariate regression analysis (N = 709) *\n* Multivariate analyses; OR = odds ratio; CI = confidence interval; 205 physicians were excluded because of missing answers (170 before and 32 after)\n§ Strongly agree, agree\nThe intention to get vaccinated against A(H1N1) pandemic influenza themselves (OR = 8.65) and belief that A(H1N1) pandemic vaccine would be well accepted by vaccine providers (OR = 6.65) were the most significant factors associated with the intention to recommend A(H1N1) pandemic vaccine to patients. Six other variables were also significantly associated with the intention to recommend the vaccine: belief that seasonal influenza vaccines are very useful to protect child health; perceived economic burden of A(H1N1) influenza illness; self-estimated sufficiency of knowledge about the A(H1N1) vaccine; perceived safety of the A(H1N1) vaccine; perceived severity of A(H1N1) pandemic influenza and belief that special precautions to prevent A(H1N1) pandemic influenza are needed (Table 3).\nVariables associated with respondents intention to recommend A(H1N1) pandemic vaccine in multivariate regression analysis (N = 709) *\n* Multivariate analyses; OR = odds ratio; CI = confidence interval; 205 physicians were excluded because of missing answers (170 before and 32 after)\n§ Strongly agree, agree\n[SUBTITLE] Multiple correspondence analyses (MCA) [SUBSECTION] Results of MCA supported the associations found in the logistic regression models (Figure 1). Overall, first and second principal components summarized 13.6% of the initial variables' variability. The coefficients of determination (r2) for the first and second principal components were low (maximum at 0.71 for the first principal components, and 0.57 for the second principal component). It showed that responses were homogeneous among respondents. 
More precisely, results showed that we could group our respondents by negative levels rather than positive levels of their answers: the levels \"strongly disagree\" and \"disagree\" were the most discriminatory levels.\nCorrelations' graph between variables and the first and second principal components. Total percentage of the variability explained by first and second principal components was 13.6%. Variables almost uncorrelated with first and second principal components aren't represented on the graph, except subset variable (S). The variables: B, C, F, G, J contributed more to first and second principal components than others. - J: Respondent's intention to recommend A(H1N1) pandemic vaccine to their patients - G: \"Belief that A(H1N1) vaccine will be effective\" - F: \"Belief that A(H1N1) vaccine will be safe\" - C: \"Belief that A(H1N1) pandemic influenza is a serious disease\" - B: \"Belief that seasonal influenza vaccines are very useful to protect children health\" All variables, including variables almost uncorrelated with first and second principal component, are listed in an additional file (see Additional File 1).\nBefore and after the onset of the vaccination campaign, the modelled principal component had the respondents' intention to recommend A(H1N1) pandemic vaccine as principal contributor (weight for the \"strongly disagree\" level: 4.14). Then we found respondent's intention to receive A(H1N1) pandemic vaccine (weight for the 'no' level: 3.45), and \"belief that seasonal influenza vaccines are very useful to protect children health\" (weight for the \"strongly disagree\" level: 3.44). Mean of weight for all other variables is 0.87, including the subset variable which was the less correlated variable (variable S on figure 1) with first and second principal components.\nResults of MCA supported the associations found in the logistic regression models (Figure 1). Overall, first and second principal components summarized 13.6% of the initial variables' variability. The coefficients of determination (r2) for the first and second principal components were low (maximum at 0.71 for the first principal components, and 0.57 for the second principal component). It showed that responses were homogeneous among respondents. More precisely, results showed that we could group our respondents by negative levels rather than positive levels of their answers: the levels \"strongly disagree\" and \"disagree\" were the most discriminatory levels.\nCorrelations' graph between variables and the first and second principal components. Total percentage of the variability explained by first and second principal components was 13.6%. Variables almost uncorrelated with first and second principal components aren't represented on the graph, except subset variable (S). The variables: B, C, F, G, J contributed more to first and second principal components than others. 
", "Overall, 912 paediatricians completed the questionnaires: 714 completed the questionnaire before the beginning of the A(H1N1) vaccination campaign and 197 completed it after. After exclusion of physicians no longer practicing, those with incorrect addresses and those who were subspecialists, the overall participation rate was 50% (911/1832). Participation rates by region varied from 40.1% in the Prairies to 57.7% in Quebec. Table 1 shows respondents' socio-professional characteristics. No statistically significant differences were found between the characteristics of paediatricians who completed the survey before and after the beginning of the A(H1N1) vaccination campaign.\nPaediatricians' professional and demographic characteristics (%)", "Overall, 98% of paediatricians thought that vaccines recommended by public health authorities are very useful (97.6% before and 98.5% after; p = 0.5911), and 73% agreed or strongly agreed with the statement \"it is very useful to protect children with the vaccines against seasonal influenza\" (72.9% before and 73.9% after; p = 0.7855). When recommending new vaccines to their patients, 91% of paediatricians indicated that they are highly influenced by expert group recommendations (91.3% before and 90.7% after; p = 0.8152) and 90% by professional association recommendations (90.8% before and 89.7% after; p = 0.6414). Approximately half of paediatricians (49%) stated that it is easy for them to advise their patients on new vaccines. No statistically significant differences in attitudes towards vaccination in general were found between respondents who answered before and after the beginning of the A(H1N1) vaccination campaign", "Before the beginning of the campaign, 63% of paediatricians perceived A(H1N1) pandemic influenza as a serious disease that would occur frequently without vaccination, compared with more than 75% after campaign initiation. In addition, the proportion of respondents who considered A(H1N1) pandemic influenza severe enough to warrant special precautions to prevent it decreased after the start of the vaccination campaign (73% agreed or strongly agreed before compared with 63% after, p = 0.0136) (Table 2).
Before the vaccination campaign, half of respondents or fewer agreed or strongly agreed that the A(H1N1) vaccine was safe (50%) and effective (35%), compared with 77% and 72%, respectively, after the start of the campaign (p < 0.001) (Table 2). Paediatricians' perceived acceptability of the A(H1N1) vaccine by the public remained comparable (45% before versus 41% after agreed or strongly agreed, p = 0.3608). Paediatricians' perceived acceptability of the A(H1N1) vaccine by health professionals who administered vaccines (hereafter named \"vaccine providers\") increased after the beginning of the vaccination campaign (71% before versus 84% after, p < 0.001). Respondents' intention to recommend the A(H1N1) vaccine increased from 80% before the beginning of the campaign to 92% after, including 46% and 62% of paediatricians, respectively, who declared they would strongly recommend it. Overall, 40% of respondents who disagreed with the usefulness of protecting children with the seasonal influenza vaccine did not intend to recommend the A(H1N1) pandemic vaccine to their patients, whereas 2% of physicians who agreed with the usefulness of the seasonal influenza vaccine did not intend to recommend the A(H1N1) pandemic vaccine (p < 0.001) (data not shown). The majority of respondents intended to get vaccinated against A(H1N1) pandemic influenza themselves (84% before and 92% after, p = 0.003, Table 1). No statistically significant differences in intention to be vaccinated were observed between paediatricians practicing in different Canadian regions, either before (p = 0.4315) or after (p = 0.4291) the beginning of the campaign. Before the beginning of the vaccination campaign, 13% of paediatricians were undecided about being vaccinated themselves, compared with 3% after. Finally, the proportion of paediatricians who reported they had received sufficient information on the A(H1N1) vaccine increased from 31% before the beginning of the campaign to 73% after (p < 0.001), with 3% of respondents reporting they felt their knowledge was insufficient after the beginning of the vaccination campaign versus 17% before (Table 2).\nPaediatricians' knowledge, attitudes and practices regarding A(H1N1) pandemic influenza and vaccine (%)\n* Before/after differences statistically significant at p < 0.05", "The intention to get vaccinated against A(H1N1) pandemic influenza themselves (OR = 8.65) and the belief that the A(H1N1) pandemic vaccine would be well accepted by vaccine providers (OR = 6.65) were the most significant factors associated with the intention to recommend the A(H1N1) pandemic vaccine to patients. Six other variables were also significantly associated with the intention to recommend the vaccine: belief that seasonal influenza vaccines are very useful to protect child health; perceived economic burden of A(H1N1) influenza illness; self-estimated sufficiency of knowledge about the A(H1N1) vaccine; perceived safety of the A(H1N1) vaccine; perceived severity of A(H1N1) pandemic influenza; and belief that special precautions to prevent A(H1N1) pandemic influenza are needed (Table 3).\nVariables associated with respondents' intention to recommend A(H1N1) pandemic vaccine in multivariate regression analysis (N = 709) *\n* Multivariate analyses; OR = odds ratio; CI = confidence interval; 205 physicians were excluded because of missing answers (170 before and 32 after)\n§ Strongly agree, agree", "Results of the MCA supported the associations found in the logistic regression models (Figure 1).
Overall, the first and second principal components summarized 13.6% of the initial variables' variability. The coefficients of determination (r2) for the first and second principal components were low (maximum of 0.71 for the first principal component and 0.57 for the second principal component), indicating that responses were homogeneous among respondents. More precisely, the results showed that respondents could be grouped by the negative rather than the positive levels of their answers: the levels \"strongly disagree\" and \"disagree\" were the most discriminatory.\nCorrelations' graph between variables and the first and second principal components. Total percentage of the variability explained by the first and second principal components was 13.6%. Variables almost uncorrelated with the first and second principal components are not represented on the graph, except the subset variable (S). The variables B, C, F, G and J contributed more to the first and second principal components than the others. - J: Respondent's intention to recommend A(H1N1) pandemic vaccine to their patients - G: \"Belief that A(H1N1) vaccine will be effective\" - F: \"Belief that A(H1N1) vaccine will be safe\" - C: \"Belief that A(H1N1) pandemic influenza is a serious disease\" - B: \"Belief that seasonal influenza vaccines are very useful to protect children's health\" All variables, including variables almost uncorrelated with the first and second principal components, are listed in an additional file (see Additional File 1).\nBefore and after the onset of the vaccination campaign, the modelled principal component had the respondents' intention to recommend A(H1N1) pandemic vaccine as its principal contributor (weight for the \"strongly disagree\" level: 4.14), followed by the respondents' intention to receive A(H1N1) pandemic vaccine (weight for the 'no' level: 3.45) and the \"belief that seasonal influenza vaccines are very useful to protect children's health\" (weight for the \"strongly disagree\" level: 3.44). The mean weight for all other variables was 0.87, including the subset variable, which was the variable least correlated with the first and second principal components (variable S on Figure 1).", "To our knowledge, this is the first study to measure changes in physicians' KAP regarding A(H1N1) pandemic influenza and its prevention by vaccination before and after the approval of the vaccine and the start of the vaccination campaign. Previous studies among healthcare workers assessed acceptability of the A(H1N1) vaccine before its official approval and program implementation or used hypothetical pandemic vaccination scenarios [18,22-26]. In these studies, intention to be vaccinated against A(H1N1) pandemic influenza varied from 48% to 80% among healthcare workers, compared with 84% of paediatricians surveyed before the beginning of the vaccination campaign in our study. Consistent with our results, a study conducted in Mexico reported that 72% of healthcare workers would recommend the vaccine to their patients, and that they were more likely to do so when they had the intention to get vaccinated themselves [25].\nResults of this national survey among paediatricians indicated an important increase in paediatricians' perceptions of the burden of A(H1N1) pandemic influenza and in support for A(H1N1) vaccination after the beginning of the vaccination campaign. Respondents' endorsement of almost all items regarding A(H1N1) pandemic influenza and its prevention by vaccination increased after the start of the vaccination campaign.
This is not surprising given that the first A(H1N1) vaccine available in Canada used a novel adjuvant (AS03) for which limited information regarding safety and immunogenicity was available. The proportion of physicians who reported they had received sufficient information about the A(H1N1) vaccine also increased by 42 percentage points after the start of the vaccination campaign. This increase may be attributable to the important educational efforts made at the beginning of the vaccination campaign, along with the official recommendations by expert groups and professional associations that were released in early November [27-29].\nHealth professionals' knowledge about vaccines has previously been shown to be a main determinant of their own vaccine uptake and of their intention to recommend the vaccine to their patients [30,31]. The association between physicians' own vaccination behaviours and their recommendations to their patients has been established previously [12,32-35]. It appears to hold true in a pandemic context, as shown by our results and those previously reported in Mexico [25].\nOur results highlight the positive change in paediatricians' knowledge and level of support of the A(H1N1) vaccines throughout the pandemic vaccination campaign. This change may be attributable to increased education efforts and the very rare vaccine-associated adverse events [36], but may also reflect the intense media attention focused on the vaccination campaign. A recent UK study showed that healthcare workers were more willing to accept stockpiled H5N1 vaccine during a period of high media coverage of an H5N1 outbreak in a poultry farm than 6 months after (63.4% vs. 51.9%, p = 0.009) [18]. The increased exposure of paediatricians to severe cases of A(H1N1) disease, as observed in our results, may also have enhanced paediatricians' acceptability of the A(H1N1) pandemic vaccine.\nIn the logistic regression analysis, paediatricians' intention to get vaccinated against A(H1N1) pandemic influenza themselves was the most significant factor associated with the intention to recommend the A(H1N1) pandemic vaccine to patients. Results obtained by the multiple correspondence analysis (MCA) are consistent with results from the logistic regression analysis. The negative levels were also more discriminatory than the positive ones. This is consistent with results of previous studies showing that knowledge and behaviors regarding seasonal influenza influenced A(H1N1) vaccination status: individuals who were not vaccinated against seasonal influenza were less likely to have the intention to receive the pandemic vaccine [37,38]. Intention to be vaccinated against A(H1N1) pandemic influenza was also higher than the vaccine uptake against seasonal influenza usually reported among healthcare workers in Canadian studies [39,40], which was estimated at 64% in 2006 [41]. In our study, a significant proportion of paediatricians who disagreed with the usefulness of protecting children with the seasonal influenza vaccine did not intend to recommend the A(H1N1) vaccine to their patients. However, results obtained by MCA should be interpreted with caution, as only 13.6% of the variability is summarized by the first and second principal components. This is principally due to the uneven distribution of response levels.\nOur study has several limitations. First, the study was not initially designed for a \"before-after\" analysis.
The increase in the willingness to be vaccinated against A(H1N1) pandemic influenza observed among paediatricians may partly reflect a response bias, as respondents may have had more doubts about pandemic vaccination before the vaccination campaign actually started. Nonetheless, respondents' demographic and professional characteristics, as well as their attitudes toward vaccination in general, were very similar, suggesting the two subsets of participants were comparable. Second, the dichotomization of the dependent variable (\"strongly agree\" and \"agree\" versus all others) was a conservative choice, and physicians who answered \"somewhat agree\" were considered to have a neutral opinion, not a positive one. Third, the distribution of answers to the dependent variable in the subset of data collected after the beginning of the vaccination campaign did not allow us to perform separate multivariate analyses for the two subsets. However, the model was adjusted to take into consideration the time period when the paediatricians completed the survey (before or after the initiation of the campaign). Finally, the response rate was 50% and a non-participation bias cannot be excluded. However, the response rate remains satisfactory for a mail-based survey of physicians [42-44]. In addition, the socio-demographic characteristics of respondents are comparable to those reported in other surveys conducted among Canadian paediatricians [42,45]. The socio-professional characteristics of respondents also suggest good representativeness.\nResults of this study indicated a high level of willingness among paediatricians to be vaccinated against A(H1N1) and to recommend the vaccine to their patients. Lack of knowledge about the A(H1N1) vaccine, the belief that A(H1N1) was not a severe disease, and concerns over the safety and usefulness of the A(H1N1) vaccine were barriers to paediatricians' intention to recommend it. This is consistent with barriers to seasonal influenza vaccination among healthcare workers reported in the literature [31,46,47]. Public health interventions to promote seasonal influenza vaccination among healthcare workers should include the delivery of evidence-based information regarding the safety, efficacy and usefulness of influenza vaccines. Educational campaigns should also stress the threat posed by seasonal influenza to healthcare workers and their patients.", "In summary, the results show important increases in physicians' level of confidence in A(H1N1) vaccine safety and immunogenicity and in their willingness to recommend this vaccine during the first months of the campaign. More than 40% of all Canadians aged 12 years or older received at least one dose of the A(H1N1) vaccine during the vaccination campaign [48]. In the province of Quebec, Canada, almost 80% of children aged between 6 months and 5 years were vaccinated against A(H1N1) influenza [49]. Paediatricians' support of the vaccination campaign and their recommendations were surely among the key components of this success.", "This study was financially supported by the Quebec Ministry of Health and Social Services and by an unrestricted grant from GlaxoSmithKline. No private company or their employees were involved in study protocol/questionnaire design, data collection, data analysis and interpretation, or manuscript writing.", "All authors except FD were involved in the design of the study. ED and FD drafted the manuscript. FD performed the statistical analysis.
All authors have read and approved the final version of the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/128/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null ]
[]
The motivation to be sedentary predicts weight change when sedentary behaviors are reduced.
21342518
Obesity is correlated with a sedentary lifestyle, and the motivation to be active or sedentary is correlated with obesity. The present study tests the hypothesis that the motivation to be active or sedentary is correlated with weight change when children reduce their sedentary behavior.
BACKGROUND
The motivation to be active or sedentary, changes in weight, and accelerometer-assessed physical activity were collected for 55 families with overweight/obese children who participated in a nine-week field study examining behavior and weight change as a function of reducing sedentary behavior. Children were studied in three 3-week phases: baseline, a 25% reduction in targeted sedentary behaviors, and a 50% reduction in targeted sedentary behaviors. The targeted sedentary behaviors included television, video game playing, video watching, and computer use.
METHODS
The reinforcing value of sedentary behavior, but not of physical activity, was correlated with weight change, as losing weight was associated with a lower reinforcing value of sedentary behaviors. Reducing sedentary behavior was not associated with a significant change in objectively measured physical activity, suggesting that the main way in which reducing sedentary behavior influenced weight change was through complementary changes in energy intake. Estimated energy intake supported the hypothesis that reducing sedentary behaviors influences weight by reducing energy intake.
RESULTS
These data show that the motivation to be sedentary limits the effects of reducing sedentary behavior on weight change in obese children.
CONCLUSIONS
[ "Body Mass Index", "Body Weight", "Child", "Child Behavior", "Choice Behavior", "Energy Intake", "Family", "Female", "Health Behavior", "Humans", "Leisure Activities", "Male", "Motivation", "Motor Activity", "Overweight", "Play and Playthings", "Reinforcement, Psychology", "Sedentary Behavior", "United States" ]
3053211
null
null
Methods
[SUBTITLE] Participants [SUBSECTION] Participants were 56 overweight/obese, 8-12 year old American children, recruited from flyers, a direct mailing, and a pre-existing database. All of the children were considered to be overweight or at risk for overweight, defined as a Body Mass Index (BMI) percentile adjusted for age and sex at or above the 85th percentile [17]. Criteria for participation included the following: at least one parent agreed to help their child reduce targeted sedentary behaviors and to measure usual physical activity and dietary intake; the participating child must have engaged in at least 18 hours of targeted sedentary behaviors per week; the child could not participate in swimming and/or weight training for more than 5 total combined hours per week; no activity restrictions or physical limitations that could interfere with changes in physical activity, such as developmental disability or injury; and no psychopathology or developmental disabilities that would limit participation. All procedures and measures were approved by the University at Buffalo Children and Youth Institutional Review Board. [SUBTITLE] Design and Procedure [SUBSECTION] After completing the phone screen, families were scheduled for an orientation. During the orientation, parents and children completed consent and assent forms, child height and weight were measured, and families were oriented on the TV Allowance device, the physical activity monitor, and the activity diaries. Interested families were fitted with an accelerometer, which was worn on two weekdays and one weekend day. Families were scheduled for two laboratory sessions. The child's RRVSED and RRVACT were measured during the first session, and the accelerometers were calibrated during the second session using a progressive treadmill test. After laboratory testing, families were scheduled for 5 home visits throughout the nine-week intervention, and children were scheduled to wear the accelerometer on three randomly selected days, two weekdays and one weekend day. Children were also instructed to self-monitor time on each sedentary and active behavior in a seven-day diary during the last week of each phase, to ensure adherence with the experimental manipulation. Activity devices were downloaded at home visits 3-5 and self-report diaries were checked for accuracy. Weight was measured at each home visit and height was measured at the last home visit. Reminder phone calls were made to ensure the child wore the activity device on their scheduled day. Parent and child manuals were provided to each family to explain the study goals as well as to provide techniques for praise and for reducing sedentary behaviors. During the first of five home visits, TV Allowance™ devices were connected to each TV and computer in the home; families were trained on using the devices and asked to maintain their usual pattern of sedentary behaviors, physical activity and dietary intake through the baseline phase; activity devices were fitted to the child; the child was trained on recording in the weekly diary; and weight was measured. During the second home visit, each device was checked and TV and computer hours were recorded. At the third home visit, eligibility was determined, the amount of screen time was calculated from the allowance devices, and the devices were programmed to decrease TV and computer use by 25% for the next three weeks. During the fourth home visit, devices were programmed to decrease TV and computer use by 50% for the next three weeks. Devices were removed at home visit 5. Two positive reinforcement techniques were used to facilitate adherence to the experimental protocol: praise and monetary reinforcement. Parents were instructed to praise their children when they observed behavior changes in the appropriate direction, to be very specific in stating what the praise was for, and to be consistent in using praise. Families earned up to $325.00 for participation in the nine-week study. Children earned up to $15/week during the 25% and 50% reduction phases for making the reductions in targeted sedentary behavior ($90), with the amount proportional to the degree of change: $10 for reaching the decrease goals and an additional $1 for every hour under their goal, up to $5. During the baseline phase, families could earn up to $25/week ($75) for completing measurements, and up to $10 per week ($60) for completing measurements during the 25% and 50% reduction phases. Families also earned $100 for completing the study. Families could distribute the family money as they chose.
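The child incentive rule above ($10 for meeting the weekly reduction goal plus $1 per full hour under the goal, capped at $5) reduces to simple arithmetic. The Python sketch below is illustrative only and is not taken from the study protocol; the function name is invented, and the assumption that no incentive is paid when the goal is missed is ours (the text states only that payment was proportional to the degree of change).

```python
def weekly_child_incentive(goal_hours: float, actual_hours: float) -> float:
    """Illustrative payment rule: $10 for meeting the weekly reduction goal,
    plus $1 for every full hour under the goal, capped at an extra $5."""
    if actual_hours > goal_hours:            # goal missed (simplifying assumption)
        return 0.0
    bonus = min(int(goal_hours - actual_hours), 5)   # $1 per full hour, max $5
    return 10.0 + bonus

# Example: weekly goal of 14 h of targeted sedentary behavior, child logged 10.5 h
print(weekly_child_incentive(14.0, 10.5))   # 13.0 -> $10 base + $3 bonus
```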
[SUBTITLE] Measurement [SUBSECTION] [SUBTITLE] Demographic variables and medical history [SUBSECTION] Family size, family income, parent educational level and racial/ethnic background were obtained using a standardized questionnaire. Current medical problems, including psychiatric diagnoses and eating disorders, were assessed at baseline by parent interview. [SUBTITLE] Weight, height, BMI [SUBSECTION] Child weight was assessed with a Tanita BWB-800P digital scale. Height was assessed using a Digi-Kit digital stadiometer. On the basis of the height and weight data, Body Mass Index (BMI) was calculated as weight in kilograms divided by height in metres squared (BMI = kg/m2). Children were considered overweight if they were at or above the 85th BMI percentile for their age and sex [17].
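As a concrete illustration of the BMI calculation and the overweight criterion described above, here is a minimal Python sketch. It is not study code: the function names are invented, and the age- and sex-specific percentile is passed in as a value rather than computed, since the CDC growth-chart reference data are not reproduced here.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index = weight (kg) / height (m) squared."""
    return weight_kg / (height_m ** 2)

def meets_overweight_criterion(bmi_percentile: float) -> bool:
    """Study criterion: at or above the 85th age- and sex-adjusted BMI percentile."""
    return bmi_percentile >= 85.0

# Example: a child weighing 52 kg and 1.45 m tall
print(round(bmi(52.0, 1.45), 1))         # 24.7
print(meets_overweight_criterion(91.0))  # True
```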
[SUBTITLE] Liking of activities, food, and videos/computer games [SUBSECTION] Liking of the activities, videos or computer games was measured on 7-point Likert-type scales anchored by 1 (Do not like) and 7 (Like very much) [18]. [SUBTITLE] The relative reinforcing value of sedentary behavior (RRVSED) and physical activity (RRVACT) [SUBSECTION] RRVSED or RRVACT is assessed by evaluating how hard a participant will work to obtain access to physical versus sedentary activities [1]. The child first sampled each of the four physical activities and four sedentary behaviors for at least two minutes and then rated them on a scale from 1-7. Children were asked to rank the activities, and the highest rated physical activity and sedentary behavior were chosen for the task. The physical activity alternatives included a balance board, a stationary youth mountain bike, a stepper, and a skipping game, while the sedentary alternatives included magazines, puzzles, movies, and Playstation™ 2 video games. The child was instructed how to use the computer-generated task to earn points toward their favorite physical or sedentary activity. The computer displayed three squares where shapes rotated and changed color within each square every time a mouse button was pressed. When all of the shapes matched, the participant earned one point. The child worked on one of two computer monitors; one monitor had the physical activity alternative and the other had the sedentary alternative. The reinforcement schedules for both components were initially set at FR4 (fixed ratio 4, meaning the participant earned one point after 4 responses). The schedule increased on a progressive ratio schedule that doubled after 5 points were earned on each schedule (FR4, FR8, FR16, FR32, FR64, FR128, FR256, FR512, FR1024 and FR2048). For every five points earned, the participant received 2 minutes of time to engage in the activity for which they were playing. The child was able to end the session at any time; they were instructed to tell the experimenter they were all finished when they did not want to earn points any longer. The computer recorded the participant's points earned throughout the session. After instructions were given, the experimenter left the room. RRVSED and RRVACT were quantified by OMAXSED and OMAXACT, the maximal amount of responding at the highest reinforcement schedule completed. An intercom and a video camera were in the room so that the experimenter could hear and see into the experimental room from an adjoining room.
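The progressive ratio schedule described above implies a rapidly growing response requirement. The short Python sketch below is purely illustrative (it is not the task software); it simply tabulates the number of mouse presses needed at each schedule step and the cumulative presses required to complete it, which is the kind of responding summarized by the OMAX measure.

```python
# Progressive ratio schedule: the fixed ratio starts at FR4 and doubles after
# every 5 points earned; each block of 5 points buys 2 minutes of activity time.
schedules = [4 * 2 ** step for step in range(10)]   # FR4 ... FR2048

cumulative = 0
for fr in schedules:
    presses_this_step = 5 * fr          # 5 points, each costing `fr` presses
    cumulative += presses_this_step
    print(f"FR{fr:<5} presses this step: {presses_this_step:>6}  "
          f"cumulative: {cumulative:>7}")

# For example, finishing the FR32 step requires 5*(4+8+16+32) = 300 presses in
# total; the amount of responding at the highest completed schedule underlies OMAX.
```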
[SUBTITLE] Measurement of television, video and computer game playing at home [SUBSECTION] Television, VCR/DVD, video game playing, and computer use were measured using the TV Allowance™. The device has a memory which recorded the amount of time that the targeted child and each family member had used the unit since it was installed. The device has been used in ongoing research in our laboratory, and was an important component of a previous study that successfully reduced television watching to prevent the development of obesity in youth [13,19]. At baseline, unlimited TV and computer hours were set on each device, so that study staff could access the total number of hours of television and computer use for each family member. During the 25% and 50% reduction phases, the TV Allowance™ was programmed with the sedentary budget for that phase based on baseline amounts. In addition to the TV Allowance™, self-monitoring of sedentary behavior was recorded in a daily habit book, which assessed reading, homework, use of hand-held computer games, and targeted sedentary behaviors that cannot be quantified in this objective way. Recording was part of the intervention methodology to facilitate the child meeting their behavioral goals during the reduction phases.
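The phase-specific sedentary budgets programmed into the TV Allowance™ follow directly from the baseline screen-time measurement. The following calculation is an illustration only (the function name and example value are invented) and is not part of the device's software.

```python
def phase_budget_hours(baseline_weekly_hours: float, reduction: float) -> float:
    """Weekly screen-time budget after a fractional reduction from baseline,
    e.g. reduction=0.25 for the 25% phase and 0.50 for the 50% phase."""
    return baseline_weekly_hours * (1.0 - reduction)

baseline = 21.0  # example: 21 h/week of targeted sedentary behavior at baseline
print(phase_budget_hours(baseline, 0.25))  # 15.75 h/week during the 25% phase
print(phase_budget_hours(baseline, 0.50))  # 10.5 h/week during the 50% phase
```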
[SUBTITLE] Physical activity [SUBSECTION] The objective measure of physical activity was the Actigraph™ activity monitor, a small, unobtrusive unidirectional accelerometer with extensive validation in youth as a measure of physical activity [20-23]. The activity monitor was set to record minute-by-minute measures of physical activity. The activity monitor was worn during school and non-school waking hours on three days (two weekdays and one weekend day) during the last week of each three-week period. If a child did not wear their activity monitor on the scheduled day, a day similar to the missed day was rescheduled. Weekly diaries were used in combination with the activity monitor to indicate which physically active and targeted sedentary behaviors the child was engaging in during the last week of each phase of the experiment. The activity monitors were downloaded to a computer at each home visit and weekly diaries were reviewed during the weekly home visit with the family. To determine levels of physical activity based on energy expenditure (METs, or metabolic equivalents), we individually calibrated each accelerometer based on a progressive treadmill test. The VO2 (mL/kg/min) and accelerometer counts/minute were sampled each minute, and the accelerometer values were regressed against the VO2 values to estimate energy expenditure for different intensities of physical activity. Based on the regression line, rates of accelerometer counts were determined for each participant that corresponded to rest (0 counts), 2 METs (7 mL/kg/min), 3 METs (MVPA, 10.5 mL/kg/min) and 6 METs (VPA, 21 mL/kg/min). Because the accelerometer was only worn during waking hours, and did not include time spent sleeping, we estimated energy expenditure for the remaining non-accelerometer-sampled minutes as expending 1 MET per minute, and computed calories per minute using estimated resting metabolic rate (RMR) based on previously published equations for children [24]. Daily RMR calories were converted to RMR calories per minute, then multiplied by the number of minutes not sampled by the accelerometer. Energy expenditure estimates while the accelerometer was being worn included the RMR component of energy expenditure. Based on the estimated total daily energy expenditure, we estimated energy intake and changes in energy balance by considering total daily energy expenditure with respect to weight change. If weight was stable over the nine weeks, it was assumed that energy intake = energy expenditure. If children lost weight, it was assumed that one pound of weight loss was equivalent to a negative energy balance of 3500 kcal, or 55.6 kcal/day. Similarly, a gain of one pound over the nine weeks would be equivalent to a positive energy balance of 55.6 kcal/day. Based on the estimated energy expenditure and observed weight change, estimated energy intake and changes in energy balance were calculated.
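The energy-balance arithmetic in the preceding paragraph can be made concrete. The sketch below only illustrates that calculation and is not the study's analysis code; the 63-day (nine-week) window, the function name and the example values are assumptions made for the illustration.

```python
KCAL_PER_POUND = 3500.0
STUDY_DAYS = 63  # nine weeks

def estimated_energy_intake(daily_expenditure_kcal: float,
                            weight_change_lb: float) -> float:
    """Estimated average daily intake = estimated daily expenditure plus the
    daily energy-balance equivalent of the observed weight change
    (3500 kcal per pound spread over nine weeks, ~55.6 kcal/day per pound)."""
    daily_balance = weight_change_lb * KCAL_PER_POUND / STUDY_DAYS
    return daily_expenditure_kcal + daily_balance

# Example: a child expending an estimated 2000 kcal/day who lost 2 lb
print(round(estimated_energy_intake(2000.0, -2.0), 1))  # 1888.9 kcal/day
```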
[SUBTITLE] Analytic Plan [SUBSECTION] Repeated measures analysis of variance was used to assess changes in the targeted sedentary behaviors, to confirm that the targeted behavior had been successfully manipulated by the intervention.
[SUBTITLE] Analytic Plan [SUBSECTION] Repeated measures analysis of variance was used to verify that the targeted sedentary behaviors had in fact been manipulated by the intervention. Repeated measures analysis of variance was also used to assess changes in body weight and physical activity. Pearson product-moment correlation coefficients were used to assess predictors of weight change, including the relationship between changes in targeted sedentary behaviors and physical activity, and the relationships between RRVACT and RRVSED, RRVACT and measured physical activity, and RRVSED and total sedentary behavior. Significant factors were then studied in regression models controlling for age, sex and minority status. Significant relationships between weight change and RRVSED were explored by a median split dividing children into those who decreased or maintained their weight (N = 20) versus those who increased their weight (N = 41) over the nine weeks of observation.
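As an illustration of this analytic plan, the sketch below runs the zero-order correlation, the covariate-adjusted regression, and the weight-change grouping on simulated data; only the analysis structure mirrors the plan above, while the arrays, seed and effect size are hypothetical (the repeated-measures ANOVAs are omitted).

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 56
omax_sed = rng.gamma(2.0, 50.0, n)                        # responding for sedentary access (OMAXSED)
age = rng.uniform(8, 12, n)
sex = rng.integers(0, 2, n)
minority = rng.integers(0, 2, n)
weight_change = 0.004 * omax_sed + rng.normal(0, 1.0, n)  # lbs over nine weeks

# Zero-order Pearson correlation (the study reports r = 0.31, p = 0.022)
r, p = stats.pearsonr(omax_sed, weight_change)

# Regression of weight change on OMAXSED controlling for age, sex and minority status
X = np.column_stack([np.ones(n), omax_sed, age, sex, minority])
beta, *_ = np.linalg.lstsq(X, weight_change, rcond=None)

# Grouping by direction of weight change: lost/maintained versus gained
gained = weight_change > 0
print(r, p, beta[1], omax_sed[gained].mean(), omax_sed[~gained].mean())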
null
null
null
null
[ "Introduction", "Participants", "Design and Procedure", "Measurement", "Demographic variables and medical history", "Weight, height, BMI", "Liking of activities, food, and videos/computer games", "The relative reinforcing value of sedentary behavior (RRVSED) and physical activity (RRVACT)", "Measurement of television, video and computer game playing at home", "Physical activity", "Analytic Plan", "Results", "Discussion", "Competing interests", "Authors' contributions" ]
[ "The choice to be active or sedentary depends in part on individual differences in the motivation to be active or sedentary, as well as constraints on access to sedentary or active alternatives [1,2]. The motivation to be active or sedentary can be operationalized by providing children with the choice to be active or sedentary, varying the behavioral costs to obtain access to the alternatives, and quantifying the amount of work or effort the children will do to gain access to the alternatives. This provides an index of the relative reinforcing value of being active (RRVACT) or sedentary (RRVSED). The relative reinforcing value of physical activity has been associated with physical activity levels, with children who find physical activity more reinforcing also being the most active [3-5]. In addition, there are strong individual differences in the reinforcing value of physical activity, as obese children find physical activity less reinforcing than leaner children [6].\nReported time spent watching television, as one component of being sedentary, is cross-sectionally correlated with obesity in children and adults [7-10], as well as being a risk factor for the development of obesity in children [11,12]. Given the role of a sedentary lifestyle in weight gain and the development of obesity, research suggests that reducing sedentary behavior may be a valuable tool in prevention [13,14] and treatment of pediatric obesity [15,16]. There are two potential ways in which reducing sedentary behaviors can be associated with weight changes. As sedentary behaviors are reduced, complementary reductions in energy intake may occur, or as sedentary behaviors are reduced, children may substitute physical activity for sedentary behaviors.\nDespite the importance of the motivation to be sedentary or active as a predictor of a child's lifestyle choices, there has been no research on how the motivation to be active or sedentary is associated with weight change when sedentary behaviors are reduced. The purpose of this study is to report on how individual differences in the RRVSED or RRVACT are correlated with weight loss during an intervention when sedentary behaviors are reduced.", "Participants were 56 overweight/obese, 8-12 year old American children, recruited from flyers, a direct mailing, and a pre-existing database. All of the children were considered to be overweight or at risk for overweight, defined as a Body Mass Index (BMI) percentile adjusted for age and sex at or above the 85th percentile [17]. Criteria for participation included the following; at least one parent agreed to help their child reduce targeted sedentary behaviors, and measure usual physical activity and dietary intake; the participating child must have engaged in at least 18 hours of targeted sedentary behaviors per week; could not participate in swimming and/or weight training for greater than 5 total combined hours per week; no activity restrictions or physical limitations that could interfere with changes in physical activity, such as developmental disability or injury; no psychopathology or developmental disabilities that would limit participation. All procedures and measures were approved by the University at Buffalo Children and Youth Institutional Review Board.", "After completing the phone screen families were scheduled for an orientation. 
During the orientation parents and children completed consent and assent forms, child height and weight were measured, and families were oriented on the TV allowance device, the physical activity monitor, and activity diaries. Interested families were fitted with an accelerometer which was worn on two weekdays and one weekend day.\nFamilies were scheduled for two laboratory sessions. The child's RRVSED and RRVACT, were measured during the first session, and the accelerometers were calibrated during the second session using a progressive treadmill test. After laboratory testing, families were scheduled for 5 home visits throughout the nine week intervention, and children were scheduled to wear the accelerometer on three randomly selected days, two weekdays and one weekend day. Children were also instructed to self-monitor time on each sedentary and active behavior to ensure adherence with the experimental manipulation in a seven day diary during the last week of each phase. Activity devices were downloaded at home visits 3-5 and self report diaries were checked for accuracy. Weight was measured at each home visit and height was measured at the last home visit. Reminder phone calls were made to ensure the child wore the activity device on their scheduled day. Parent and child manuals were provided to each family explaining the study goals as well as to provide techniques for praise and reducing sedentary behaviors.\nDuring the first of five home visits, TV Allowance™ devices were connected to each TV and computer in the home, families were trained on using the devices and asked to maintain their usual pattern of sedentary behaviors, physical activity and dietary intake through the baseline phase; activity devices were fitted to the child, the child was trained on recording in the weekly diary; and weight was measured. During the second home visit each device was checked and TV and computer hours were recorded. At the third home visit, eligibility was determined, amount of screen time was calculated from the allowance devices, and the devices were programmed to decrease TV and computer use by 25% for the next three weeks. During the fourth home visit, devices were programmed to decrease TV and computer use by 50% for the next three weeks. Devices were removed at home visit 5.\nTwo positive reinforcement techniques were used to facilitate adherence to the experimental protocol, praise and monetary reinforcement. Parents were instructed to praise their children when they observed behavior changes in the appropriate direction, and to be very specific in stating what the praise is for and to be consistent in using praise. Families earned up to $325.00 for participation in the 9 week-study. Children earned up to $15/week during the 25% and 50% reduction phases for making the reductions in targeted sedentary behavior ($90), with the amount proportional to the degree of change, with $10 for reaching the decrease goals and an additional 1$ for every hour under their goal up to $5. During the baseline phase families could earn up to $25/week ($75) for completing measurements, and up to $10 per week ($60) for completing measurements during the 25% and 50% reduction phases. Families also earned $100 for completing the study. Families could distribute the family money as they chose.", "[SUBTITLE] Demographic variables and medical history [SUBSECTION] Family size, family income, parent educational level and racial/ethnic background were obtained using a standardized questionnaire. 
Current medical problems, including psychiatric diagnoses and eating disorders were assessed at baseline by parent interview.\nFamily size, family income, parent educational level and racial/ethnic background were obtained using a standardized questionnaire. Current medical problems, including psychiatric diagnoses and eating disorders were assessed at baseline by parent interview.\n[SUBTITLE] Weight, height, BMI [SUBSECTION] Child weight was assessed by use of a Tanita BWB-800P digital scale. Height was assessed using a Digi-Kit digital stadiometer. On the basis of the height and weight data Body Mass Index (BMI) is calculated according to the following formula: (BMI = kg/m2). Children were considered overweight if they were at or above the 85th BMI percentile for their age and sex [17].\nChild weight was assessed by use of a Tanita BWB-800P digital scale. Height was assessed using a Digi-Kit digital stadiometer. On the basis of the height and weight data Body Mass Index (BMI) is calculated according to the following formula: (BMI = kg/m2). Children were considered overweight if they were at or above the 85th BMI percentile for their age and sex [17].\n[SUBTITLE] Liking of activities, food, and videos/computer games [SUBSECTION] Liking of the activities, videos or computer games was measured on 7 point Likert-type scales anchored by 1 (Do not like) to 7 (Like very much) [18].\nLiking of the activities, videos or computer games was measured on 7 point Likert-type scales anchored by 1 (Do not like) to 7 (Like very much) [18].\n[SUBTITLE] The relative reinforcing value of sedentary behavior (RRVSED) and physical activity (RRVACT) [SUBSECTION] RRVSED or RRVACT is assessed by evaluating how hard a participant will work to obtain access to physical versus sedentary activities [1]. The child first sampled each of the four physical activities and four sedentary behaviors for at least two minutes and then rated them on a scale from 1-7. Children were asked to rank the activities, and the highest rated physical activity and sedentary behavior were chosen for the task. The physical activity alternatives included a balance board, a stationary youth mountain bike, a stepper, and a skipping game, while the sedentary alternative included magazines, puzzles, movies, and Playstation™ 2 video games. The child was instructed how to use the computer-generated task to earn points toward their favorite physical or sedentary activity. The computer displayed three squares where shapes rotated and changed color within each square every time a mouse button was pressed. When all of the shapes matched, the participant earned one point. The child worked on one of two computer monitors, one monitor had the physical activity alternative the other had the sedentary alternative. The reinforcement schedules for both components were initially set at FR4 (fixed ratio 4, which means the participant will earn one point after 4 responses). The schedule increased on a progressive ratio schedule that doubled after 5-points were earned on each schedule. (FR4, FR8, FR16, FR32, FR64, FR128, FR256, FR512, FR 1024 and FR2048). For every five points earned, the participant would receive 2-minutes of time to engage in the activity for which they were playing. The child was able to end the session at any time, they were instructed to tell the experimenter they were all finished when they did not want to earn points any longer. The computer recorded the participants' points earned throughout the session. 
After instructions were given, the experimenter left the room. RRVSED and RRVACT were quantified by the OMAXSED and the OMAXACT, which is the maximal amount of responding at the highest reinforcement schedule completed. An intercom and a video camera were in the room so that the experimenter could hear and see into the experimental room from an adjoining room.\nRRVSED or RRVACT is assessed by evaluating how hard a participant will work to obtain access to physical versus sedentary activities [1]. The child first sampled each of the four physical activities and four sedentary behaviors for at least two minutes and then rated them on a scale from 1-7. Children were asked to rank the activities, and the highest rated physical activity and sedentary behavior were chosen for the task. The physical activity alternatives included a balance board, a stationary youth mountain bike, a stepper, and a skipping game, while the sedentary alternative included magazines, puzzles, movies, and Playstation™ 2 video games. The child was instructed how to use the computer-generated task to earn points toward their favorite physical or sedentary activity. The computer displayed three squares where shapes rotated and changed color within each square every time a mouse button was pressed. When all of the shapes matched, the participant earned one point. The child worked on one of two computer monitors, one monitor had the physical activity alternative the other had the sedentary alternative. The reinforcement schedules for both components were initially set at FR4 (fixed ratio 4, which means the participant will earn one point after 4 responses). The schedule increased on a progressive ratio schedule that doubled after 5-points were earned on each schedule. (FR4, FR8, FR16, FR32, FR64, FR128, FR256, FR512, FR 1024 and FR2048). For every five points earned, the participant would receive 2-minutes of time to engage in the activity for which they were playing. The child was able to end the session at any time, they were instructed to tell the experimenter they were all finished when they did not want to earn points any longer. The computer recorded the participants' points earned throughout the session. After instructions were given, the experimenter left the room. RRVSED and RRVACT were quantified by the OMAXSED and the OMAXACT, which is the maximal amount of responding at the highest reinforcement schedule completed. An intercom and a video camera were in the room so that the experimenter could hear and see into the experimental room from an adjoining room.\n[SUBTITLE] Measurement of television, video and computer game playing at home [SUBSECTION] Television, VCR/DVD, video game playing, and computer use was measured using the TV Allowance™. The device has a memory which recorded the amount of time that the targeted child and each family member used since the unit was installed. The device has been used in ongoing research in our laboratory, and was used as an important component of a previous study that successfully reduced television watching to prevent the development of obesity in youth [13,19]. At baseline, unlimited TV and computer hours were set on each device, so that study staff could access the total number of hours for television and computer use for each family member. During the 25 and 50% decrease reduction phase the TV Allowance™ was programmed for the sedentary budget for that phase based on baseline amounts. 
In addition to the TV Allowance™, self-monitoring of sedentary behavior was recorded in a daily habit book which assessed reading, homework and use of hand-held computer games and targeted sedentary behaviors that cannot be quantified in this objective way. Recording was part of the intervention methodology to facilitate the child meeting their behavioral goals during the reduction phases.\nTelevision, VCR/DVD, video game playing, and computer use was measured using the TV Allowance™. The device has a memory which recorded the amount of time that the targeted child and each family member used since the unit was installed. The device has been used in ongoing research in our laboratory, and was used as an important component of a previous study that successfully reduced television watching to prevent the development of obesity in youth [13,19]. At baseline, unlimited TV and computer hours were set on each device, so that study staff could access the total number of hours for television and computer use for each family member. During the 25 and 50% decrease reduction phase the TV Allowance™ was programmed for the sedentary budget for that phase based on baseline amounts. In addition to the TV Allowance™, self-monitoring of sedentary behavior was recorded in a daily habit book which assessed reading, homework and use of hand-held computer games and targeted sedentary behaviors that cannot be quantified in this objective way. Recording was part of the intervention methodology to facilitate the child meeting their behavioral goals during the reduction phases.\n[SUBTITLE] Physical activity [SUBSECTION] The objective measure of physical activity was the Actigraph™ activity monitor, a small, unobtrusive unidirectional accelerometer with extensive validation in youth as a measure of physical activity [20-23]. The activity monitor was set to record minute by minute measures of physical activity. The activity monitor was worn during school and non-school waking hours on three days (two weekdays and one weekend day) during the last week of each three week period. If a child did not wear their activity monitor on the scheduled day, a day similar to the missed day was rescheduled. Weekly diaries were used in combination with the activity monitor to indicate what physically active and targeted sedentary behaviors the child was engaging in during the last week of each phase of the experiment. The activity monitors were downloaded to a computer at each home visit and weekly diaries were reviewed during the weekly home visit with the family.\nTo determine levels of physical activity based on energy expenditure (METS or metabolic equivalents) we individually calibrated each accelerometer based on a progressive treadmill test. The VO2 (mL/kg/min) and accelerometer counts/minute were sampled each minute, and the accelerometer values were regressed against VO2 values to estimate energy expenditure for different intensities of physical activity. Based on the regression line, rates of accelerometer counts were determined for each participant that corresponded to rest (0 counts), 2 METS (7 mL/kg/min), 3 METS (MVPA, 10.5 mL/kg/min) and 6 METS (VPA, 21 mL/kg/min). Because the accelerometer was only worn for waking hours, and did not include time spent sleeping, we estimated energy expenditure for the remainder of non-accelerometer sampled minutes as expending 1 MET per minute, and computed calories per minute using estimated resting metabolic rate (RMR) using previously published equations for children [24]. 
Daily RMR calories were converted to RMR calories per minute, then multiplied by number of minutes not sampled by the accelerometer. Energy expenditure estimates while the accelerometer was being worn included RMR component of energy expenditure.\nBased on the estimated total daily energy expenditure, we estimated energy intake and changes in energy balance by considering total daily energy expenditure in respect to weight change. If weight was stable over the nine weeks, it was assumed that energy intake = energy expenditure. If children lost weight, it was assumed that one pound of weight loss was equivalent to negative energy balance of 3500 kcals, or 55.6 kcal/day. Similarly, a gain of one pound over the nine weeks would be equivalent to positive energy balance of 55.6 kcal/day. Based on the estimated energy expenditure and observed weight change, estimated energy intake and changes in energy balance were calculated.\nThe objective measure of physical activity was the Actigraph™ activity monitor, a small, unobtrusive unidirectional accelerometer with extensive validation in youth as a measure of physical activity [20-23]. The activity monitor was set to record minute by minute measures of physical activity. The activity monitor was worn during school and non-school waking hours on three days (two weekdays and one weekend day) during the last week of each three week period. If a child did not wear their activity monitor on the scheduled day, a day similar to the missed day was rescheduled. Weekly diaries were used in combination with the activity monitor to indicate what physically active and targeted sedentary behaviors the child was engaging in during the last week of each phase of the experiment. The activity monitors were downloaded to a computer at each home visit and weekly diaries were reviewed during the weekly home visit with the family.\nTo determine levels of physical activity based on energy expenditure (METS or metabolic equivalents) we individually calibrated each accelerometer based on a progressive treadmill test. The VO2 (mL/kg/min) and accelerometer counts/minute were sampled each minute, and the accelerometer values were regressed against VO2 values to estimate energy expenditure for different intensities of physical activity. Based on the regression line, rates of accelerometer counts were determined for each participant that corresponded to rest (0 counts), 2 METS (7 mL/kg/min), 3 METS (MVPA, 10.5 mL/kg/min) and 6 METS (VPA, 21 mL/kg/min). Because the accelerometer was only worn for waking hours, and did not include time spent sleeping, we estimated energy expenditure for the remainder of non-accelerometer sampled minutes as expending 1 MET per minute, and computed calories per minute using estimated resting metabolic rate (RMR) using previously published equations for children [24]. Daily RMR calories were converted to RMR calories per minute, then multiplied by number of minutes not sampled by the accelerometer. Energy expenditure estimates while the accelerometer was being worn included RMR component of energy expenditure.\nBased on the estimated total daily energy expenditure, we estimated energy intake and changes in energy balance by considering total daily energy expenditure in respect to weight change. If weight was stable over the nine weeks, it was assumed that energy intake = energy expenditure. If children lost weight, it was assumed that one pound of weight loss was equivalent to negative energy balance of 3500 kcals, or 55.6 kcal/day. 
Similarly, a gain of one pound over the nine weeks would be equivalent to positive energy balance of 55.6 kcal/day. Based on the estimated energy expenditure and observed weight change, estimated energy intake and changes in energy balance were calculated.", "Family size, family income, parent educational level and racial/ethnic background were obtained using a standardized questionnaire. Current medical problems, including psychiatric diagnoses and eating disorders were assessed at baseline by parent interview.", "Child weight was assessed by use of a Tanita BWB-800P digital scale. Height was assessed using a Digi-Kit digital stadiometer. On the basis of the height and weight data Body Mass Index (BMI) is calculated according to the following formula: (BMI = kg/m2). Children were considered overweight if they were at or above the 85th BMI percentile for their age and sex [17].", "Liking of the activities, videos or computer games was measured on 7 point Likert-type scales anchored by 1 (Do not like) to 7 (Like very much) [18].", "RRVSED or RRVACT is assessed by evaluating how hard a participant will work to obtain access to physical versus sedentary activities [1]. The child first sampled each of the four physical activities and four sedentary behaviors for at least two minutes and then rated them on a scale from 1-7. Children were asked to rank the activities, and the highest rated physical activity and sedentary behavior were chosen for the task. The physical activity alternatives included a balance board, a stationary youth mountain bike, a stepper, and a skipping game, while the sedentary alternative included magazines, puzzles, movies, and Playstation™ 2 video games. The child was instructed how to use the computer-generated task to earn points toward their favorite physical or sedentary activity. The computer displayed three squares where shapes rotated and changed color within each square every time a mouse button was pressed. When all of the shapes matched, the participant earned one point. The child worked on one of two computer monitors, one monitor had the physical activity alternative the other had the sedentary alternative. The reinforcement schedules for both components were initially set at FR4 (fixed ratio 4, which means the participant will earn one point after 4 responses). The schedule increased on a progressive ratio schedule that doubled after 5-points were earned on each schedule. (FR4, FR8, FR16, FR32, FR64, FR128, FR256, FR512, FR 1024 and FR2048). For every five points earned, the participant would receive 2-minutes of time to engage in the activity for which they were playing. The child was able to end the session at any time, they were instructed to tell the experimenter they were all finished when they did not want to earn points any longer. The computer recorded the participants' points earned throughout the session. After instructions were given, the experimenter left the room. RRVSED and RRVACT were quantified by the OMAXSED and the OMAXACT, which is the maximal amount of responding at the highest reinforcement schedule completed. An intercom and a video camera were in the room so that the experimenter could hear and see into the experimental room from an adjoining room.", "Television, VCR/DVD, video game playing, and computer use was measured using the TV Allowance™. The device has a memory which recorded the amount of time that the targeted child and each family member used since the unit was installed. 
The device has been used in ongoing research in our laboratory, and was used as an important component of a previous study that successfully reduced television watching to prevent the development of obesity in youth [13,19]. At baseline, unlimited TV and computer hours were set on each device, so that study staff could access the total number of hours for television and computer use for each family member. During the 25 and 50% decrease reduction phase the TV Allowance™ was programmed for the sedentary budget for that phase based on baseline amounts. In addition to the TV Allowance™, self-monitoring of sedentary behavior was recorded in a daily habit book which assessed reading, homework and use of hand-held computer games and targeted sedentary behaviors that cannot be quantified in this objective way. Recording was part of the intervention methodology to facilitate the child meeting their behavioral goals during the reduction phases.", "The objective measure of physical activity was the Actigraph™ activity monitor, a small, unobtrusive unidirectional accelerometer with extensive validation in youth as a measure of physical activity [20-23]. The activity monitor was set to record minute by minute measures of physical activity. The activity monitor was worn during school and non-school waking hours on three days (two weekdays and one weekend day) during the last week of each three week period. If a child did not wear their activity monitor on the scheduled day, a day similar to the missed day was rescheduled. Weekly diaries were used in combination with the activity monitor to indicate what physically active and targeted sedentary behaviors the child was engaging in during the last week of each phase of the experiment. The activity monitors were downloaded to a computer at each home visit and weekly diaries were reviewed during the weekly home visit with the family.\nTo determine levels of physical activity based on energy expenditure (METS or metabolic equivalents) we individually calibrated each accelerometer based on a progressive treadmill test. The VO2 (mL/kg/min) and accelerometer counts/minute were sampled each minute, and the accelerometer values were regressed against VO2 values to estimate energy expenditure for different intensities of physical activity. Based on the regression line, rates of accelerometer counts were determined for each participant that corresponded to rest (0 counts), 2 METS (7 mL/kg/min), 3 METS (MVPA, 10.5 mL/kg/min) and 6 METS (VPA, 21 mL/kg/min). Because the accelerometer was only worn for waking hours, and did not include time spent sleeping, we estimated energy expenditure for the remainder of non-accelerometer sampled minutes as expending 1 MET per minute, and computed calories per minute using estimated resting metabolic rate (RMR) using previously published equations for children [24]. Daily RMR calories were converted to RMR calories per minute, then multiplied by number of minutes not sampled by the accelerometer. Energy expenditure estimates while the accelerometer was being worn included RMR component of energy expenditure.\nBased on the estimated total daily energy expenditure, we estimated energy intake and changes in energy balance by considering total daily energy expenditure in respect to weight change. If weight was stable over the nine weeks, it was assumed that energy intake = energy expenditure. If children lost weight, it was assumed that one pound of weight loss was equivalent to negative energy balance of 3500 kcals, or 55.6 kcal/day. 
Similarly, a gain of one pound over the nine weeks would be equivalent to positive energy balance of 55.6 kcal/day. Based on the estimated energy expenditure and observed weight change, estimated energy intake and changes in energy balance were calculated.", "Repeated measures analysis of variance was used to assess whether changes in targeted sedentary behavior were established to ensure that the targeted behavior had been manipulated by the intervention. Repeated measures analysis of variance was also used to assess changes in body weight, and physical activity. Pearson product moment correlation coefficients were used to assess predictors of weight change, as well as the relationship between changes in targeted sedentary behaviors and physical activity, as well as the relationship between RRVACT and RRVSED, RRVACT and measured physical activity, and RRVSED and total sedentary behavior. Significant factors were then studied in regression models controlling for age, sex and minority status. Significant relationships between weight change and RRVSED were explored by median splits dividing children into those who decreased or maintained weight (N = 20) versus increased (N = 41) their weight over the nine weeks of observation.", "The average child was 10.7 ± 1.2 years of age, with a height of 57.4 ± 3.7 in, weight of 118.8 ± 30.5 lbs, BMI of 25.0 ± 4.2, and zBMI of 1.8 ± 0.4. Twenty seven (48.2%) of the children were male, and 14 (25%) were non-Caucasian or minority (Table 1). Changes in weight, targeted sedentary behaviors and physical activity are shown across the three phases in Table 2. The average child had a reduction in targeted sedentary behavior from baseline to 25% and 50% reduction phases (F(2, 110) = 285.00, p < 0.0001), with significant changes from baseline to 25% (p < 0.001), with a further significant reduction from 25% to 50% (p < 0.001). There was a reduction in targeted sedentary behavior of 67.6%, with the majority of change (53%) occurring during the initial reduction phase. The changes in sedentary behavior included significant reductions in television watching (F(2,110) = 197.00, p < 0.001) and computer use (F(2,110) = 40.07, p < 0.001). Small, but significant increases in body weight were observed (F(2,110) = 4.84, p < 0.001), with significant changes from baseline to 50% (p = 0.003) phases, but no differences between the 25% and 50% phases (p = 0.20). There was little change in physical activity accelerometer counts (F(2,110) = 0.49, p = 0.61) over phases. In addition, there were no significant changes in average METS (F(2,94) = 0.60, p = 0.55), or in the percentage of time below 2 METS (F(2,94) = 1.61, p = 0.21), above 2 METS (F(2,94) = 1.61, p = 0.21). Amount of time above 3 METS showed a significant decrease over phases (F(2,94) = 4.78, p = 0.01), with decreases from baseline to the 25% (p = 0.03) and 50% (p = 0.02) reduction phases. Estimated energy intake did not significantly change over phases (F(2,110) = 0.49, p = 0.61).\nCharacteristics of the sample (N = 56)\nBehavior and weight changes during the baseline, 25 and 50% reduction phases (Mean ± SEM)\nVariables related to weight change included only OMAXSED (r = 0.31, p = 0.022). Child age (p = .71), sex (p = .64), minority status (p = .67), income (p = .38), reinforcing value of physical activity (p = .72) or baseline values of weight (p = .45), changes in targeted sedentary behavior (p = .44) or changes in physical activity (p = .15) were not related to weight change. 
Multiple regression controlling for child age, sex and minority status did not reduce the impact of RRVSED on weight change (p = 0.035).\nDifferences in the motivation to be active or sedentary, and changes in sedentary and active behaviors, estimated energy intake and body weight for children who gained (N = 39) or lost (N = 19) weight are shown in Table 3. There was a significant difference in OMAXSED between children who lost or maintained versus those who gained weight during the study (F(1,54 = 4.79, p = 0.03). Figure 1 shows differences in OMAX (left graphs) and the pattern of responding (right graphs), while the top and bottom graphs show motivated responding for sedentary behaviors or physical activity, respectively. As shown in Figure 1, children who maintained or lost weight had lower OMAXSED and responding over progressive ratio schedules for access to sedentary behaviors than children who gained weight, who worked much harder for sedentary behaviors. There were no differences in OMAXACT (F(1,54) = 0.11, p = 0.74) as a function of whether children lost or maintained versus gained weight during the study. As shown in Table 3, there were no differences in the alterations in any activity variable for children who gained or lost weight over the nine weeks of the study. Children who lost weight reduced energy intake by an estimated 223 kcal/day calories when sedentary behaviors were reduced, while children who gained weight when sedentary behaviors were reduced increased their estimated energy intake by 172 kcal/day (F(1,54) = 13.13, p = 0.0006).\nDifferences in behavior for children who lost or gained weight after reduction of sedentary behaviors (Mean ± SEM)\nDifferences (mean ± SEM) in the OMAX (left graphs) and reinforcing value (right graphs) of sedentary behaviors (top graphs) or physical activity (bottom graphs) for children who lost or maintained weight versus gained weight during the 9 week study.\nOMAXACT was not significantly related to OMAXSED (r = 0.07, p = 0.63). Similarly, OMAXACT was not correlated with activity counts at baseline (r = 0.12, p = 0.38), and OMAXSED was not correlated with total sedentary behavior at baseline (r = 0.07, p = 0.63).", "The results show that sedentary behavior was successfully manipulated by over 50%, and that children made the majority of the changes during the 25% reduction phase. These changes were associated with no significant increases in physical activity. We have previously observed minimal changes in physical activity when sedentary behaviors were reduced [25]. Weight change was not associated with changes in physical activity when sedentary behaviors are reduced, suggesting that for the average overweight child, reducing sedentary behaviors does not result in greater physical activity and weight loss. This is consistent with cross-sectional research arguing that the effects of sedentary behavior on body weight are not due to changes in activity energy expenditure [26]. In other research we have shown that increasing sedentary behavior results in a reduction in physical activity, suggesting that the relationship between sedentary behavior and physical activity is not symmetric, and the association between these behaviors may only be present in one direction [25].\nThe variable that was associated with weight change over time was the motivation to be sedentary, which represents the reinforcing value of sedentary behaviors such as television watching, watching videos and playing on the computer. 
These are activities for which obese children allocate a great deal of their time, and time being sedentary may be independent of time being active [27,28], as the motivation to be sedentary is independent of the motivation to be active. If they were direct substitutes for each other, then they would be significantly negatively correlated, and reducing sedentary behaviors would result in an increase in physical activity, which is generally not observed. In the current study the motivation to be active and the motivation to be sedentary were not significantly related.\nIt is important that only a subset of sedentary behaviors was targeted, to allow the child to choose how to reallocate time that had been allocated for watching television or playing computer games. Since no increases in physical activity were observed, it is likely that children substituted other sedentary behaviors for the targeted sedentary behaviors. Unfortunately, we did not have children record all sedentary behaviors during the reduction phases, so it is not possible to know what sedentary behaviors they engaged in as substitutes.\nThe question is how is the reinforcing value of sedentary behaviors related to weight change? Weight change is due to change in energy balance, which must be due to either reductions in energy intake or increases in energy expenditure. The absence of increases in physical activity reduces the likelihood that physical activity is a substitute for sedentary behaviors, such that children spontaneously increase their physical activity when sedentary behaviors are reduced. This suggests that changes in energy intake are the component of energy balance that is promoting the weight change. The estimated energy intake data showed that children who lost weight reduced their estimated daily energy intake by 223 kcal/day, while those who gained weight increased their estimated daily energy intake by 172 kcal/day. Other investigators have argued that while sedentary behaviors are correlated with weight, the relationship is not mediated by changes in measured physical activity, but are likely to be mediated by changes in eating and energy intake [26]. We have previously shown in non-overweight children that energy intake is a reliable complement to shifts in sedentary behavior [19], such that reductions in sedentary behavior paired with eating result in reductions in energy intake. This may be due to the strength of the relationship between eating and engaging in sedentary behaviors when children enter the study. While eating in association with sedentary behaviors is common, and experimental research has shown that increasing television watching increases energy intake [29-31], there is variability in this relationship. If a child never eats in association with television watching, then reducing television watching cannot result in a reduction in energy intake. On the other hand, if a child consumes food often in association with watching television, then reducing television watching may have a large effect on energy intake and body weight.\nThe motivation to be sedentary is a behavioral phenotype that may lead to a better understanding of factors related to how changing sedentary behavior may relate to changes in energy balance behaviors and weight loss. Given the potential relationship between changes in television watching and energy intake, it is also possible that the motivation to eat is an important factor to predict how reducing sedentary behaviors influences eating. 
For example, it may be that children who find food more reinforcing would have a harder time reducing food intake when sedentary behaviors are reduced, and they may compensate by increasing intake at other times, or they may resist changes to reduce television watching that is associated with eating, since this would reduce access to powerful reinforcers they want to obtain. This would be an interesting set of studies for future research.\nOne surprising result was the failure to show that the motivation to be active was related to physical activity, or those high in the motivation to be active were more likely to become more active. We have shown in previous research that the motivation to be active is related to more physical activity [3,4], but these studies included children with a wide variety of motivation to be active as well as a variety of levels of physical activity. In the present study of overweight and obese, sedentary children, the range of motivation to be active and of activity levels was constrained, which can lead to lower relationships if there is little variability in the predictor and/or outcome.\nThere are limitations to this study in the measurement of activity and diet. While we had objective measurements of physical activity, we did not collect detailed self-reported information on what types of behaviors people engaged in and what types of behaviors people used to substitute for reduced targeted sedentary behaviors. It would have been interesting to know if specific classes of active or sedentary behaviors were changed. For example, recent data suggests that standing rather than sitting may confer health benefits in adults [32], and it would be interesting to know if children stood more or engaged in light physical activity to replace sedentary behaviors and if these changes would be associated with weight loss or improvement in health. Likewise, it would have been interesting to know if other popular sedentary behaviors, such as talking on the phone or texting, replaced television watching or computer game playing. The TV Allowance™ is a useful behavioral engineering approach to reducing television watching, however, a limitation of using the device is that it is possible to overestimate television watching since children may turn on the television and become engaged in an alternative activity and not watch it. This should be minimized since children are reinforced for reducing television watching, but there may be instances in which television watching is overestimated, and the degree of reduction in television watching is underestimated. Physical activity was measured for three days during each phase, and it may have been useful to collect more extended samples of physical activity during each phase [23]. The results point to changes in energy intake, rather than physical activity, as the mechanism for changes in body weight as people reduce their television watching or computer game playing. Estimates of energy intake were consistent with this hypothesis, and accelerometer based activity counts can provide valid information about energy expenditure [20,33]. In addition, self-reports of energy intake are notoriously inaccurate [34], and previous studies using the current methods for reducing sedentary behavior have shown consistent underreporting of energy intake in obese children and adolescents [19]. 
Despite the challenges in collecting valid dietary intake data, it would be useful to have dietary information that includes dietary intake as well as macronutrient intake.\nIn summary, the present study replicates previous research that suggests that reducing sedentary behaviors is not associated with an increase in physical activity. The motivation to be sedentary is related to short term weight change when sedentary behaviors are reduced, and this effect may be mediated by changes in energy intake. Thus, one predictor of the effectiveness of programs to reduce sedentary behavior for child weight change may be the motivation to be sedentary.", "Dr. Epstein is a consultant to Kraft foods and NuVal. The other authors do not have any potential conflict of interests.", "LHE and JNR designed the study, and LHE obtained the research funding. MDC obtained IRB approval, and supervised study implementation and data collection. LHE and RAP conducted data analysis. LHE wrote the initial draft of the manuscript, and all authors contributed to the interpretation of data and the writing of the manuscript. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Participants", "Design and Procedure", "Measurement", "Demographic variables and medical history", "Weight, height, BMI", "Liking of activities, food, and videos/computer games", "The relative reinforcing value of sedentary behavior (RRVSED) and physical activity (RRVACT)", "Measurement of television, video and computer game playing at home", "Physical activity", "Analytic Plan", "Results", "Discussion", "Competing interests", "Authors' contributions" ]
[ "The choice to be active or sedentary depends in part on individual differences in the motivation to be active or sedentary, as well as constraints on access to sedentary or active alternatives [1,2]. The motivation to be active or sedentary can be operationalized by providing children with the choice to be active or sedentary, varying the behavioral costs to obtain access to the alternatives, and quantifying the amount of work or effort the children will do to gain access to the alternatives. This provides an index of the relative reinforcing value of being active (RRVACT) or sedentary (RRVSED). The relative reinforcing value of physical activity has been associated with physical activity levels, with children who find physical activity more reinforcing also being the most active [3-5]. In addition, there are strong individual differences in the reinforcing value of physical activity, as obese children find physical activity less reinforcing than leaner children [6].\nReported time spent watching television, as one component of being sedentary, is cross-sectionally correlated with obesity in children and adults [7-10], as well as being a risk factor for the development of obesity in children [11,12]. Given the role of a sedentary lifestyle in weight gain and the development of obesity, research suggests that reducing sedentary behavior may be a valuable tool in prevention [13,14] and treatment of pediatric obesity [15,16]. There are two potential ways in which reducing sedentary behaviors can be associated with weight changes. As sedentary behaviors are reduced, complementary reductions in energy intake may occur, or as sedentary behaviors are reduced, children may substitute physical activity for sedentary behaviors.\nDespite the importance of the motivation to be sedentary or active as a predictor of a child's lifestyle choices, there has been no research on how the motivation to be active or sedentary is associated with weight change when sedentary behaviors are reduced. The purpose of this study is to report on how individual differences in the RRVSED or RRVACT are correlated with weight loss during an intervention when sedentary behaviors are reduced.", "[SUBTITLE] Participants [SUBSECTION] Participants were 56 overweight/obese, 8-12 year old American children, recruited from flyers, a direct mailing, and a pre-existing database. All of the children were considered to be overweight or at risk for overweight, defined as a Body Mass Index (BMI) percentile adjusted for age and sex at or above the 85th percentile [17]. Criteria for participation included the following; at least one parent agreed to help their child reduce targeted sedentary behaviors, and measure usual physical activity and dietary intake; the participating child must have engaged in at least 18 hours of targeted sedentary behaviors per week; could not participate in swimming and/or weight training for greater than 5 total combined hours per week; no activity restrictions or physical limitations that could interfere with changes in physical activity, such as developmental disability or injury; no psychopathology or developmental disabilities that would limit participation. All procedures and measures were approved by the University at Buffalo Children and Youth Institutional Review Board.\nParticipants were 56 overweight/obese, 8-12 year old American children, recruited from flyers, a direct mailing, and a pre-existing database. 
All of the children were considered to be overweight or at risk for overweight, defined as a Body Mass Index (BMI) percentile adjusted for age and sex at or above the 85th percentile [17]. Criteria for participation included the following; at least one parent agreed to help their child reduce targeted sedentary behaviors, and measure usual physical activity and dietary intake; the participating child must have engaged in at least 18 hours of targeted sedentary behaviors per week; could not participate in swimming and/or weight training for greater than 5 total combined hours per week; no activity restrictions or physical limitations that could interfere with changes in physical activity, such as developmental disability or injury; no psychopathology or developmental disabilities that would limit participation. All procedures and measures were approved by the University at Buffalo Children and Youth Institutional Review Board.\n[SUBTITLE] Design and Procedure [SUBSECTION] After completing the phone screen families were scheduled for an orientation. During the orientation parents and children completed consent and assent forms, child height and weight were measured, and families were oriented on the TV allowance device, the physical activity monitor, and activity diaries. Interested families were fitted with an accelerometer which was worn on two weekdays and one weekend day.\nFamilies were scheduled for two laboratory sessions. The child's RRVSED and RRVACT, were measured during the first session, and the accelerometers were calibrated during the second session using a progressive treadmill test. After laboratory testing, families were scheduled for 5 home visits throughout the nine week intervention, and children were scheduled to wear the accelerometer on three randomly selected days, two weekdays and one weekend day. Children were also instructed to self-monitor time on each sedentary and active behavior to ensure adherence with the experimental manipulation in a seven day diary during the last week of each phase. Activity devices were downloaded at home visits 3-5 and self report diaries were checked for accuracy. Weight was measured at each home visit and height was measured at the last home visit. Reminder phone calls were made to ensure the child wore the activity device on their scheduled day. Parent and child manuals were provided to each family explaining the study goals as well as to provide techniques for praise and reducing sedentary behaviors.\nDuring the first of five home visits, TV Allowance™ devices were connected to each TV and computer in the home, families were trained on using the devices and asked to maintain their usual pattern of sedentary behaviors, physical activity and dietary intake through the baseline phase; activity devices were fitted to the child, the child was trained on recording in the weekly diary; and weight was measured. During the second home visit each device was checked and TV and computer hours were recorded. At the third home visit, eligibility was determined, amount of screen time was calculated from the allowance devices, and the devices were programmed to decrease TV and computer use by 25% for the next three weeks. During the fourth home visit, devices were programmed to decrease TV and computer use by 50% for the next three weeks. Devices were removed at home visit 5.\nTwo positive reinforcement techniques were used to facilitate adherence to the experimental protocol, praise and monetary reinforcement. 
Parents were instructed to praise their children when they observed behavior changes in the appropriate direction, and to be very specific in stating what the praise is for and to be consistent in using praise. Families earned up to $325.00 for participation in the 9 week-study. Children earned up to $15/week during the 25% and 50% reduction phases for making the reductions in targeted sedentary behavior ($90), with the amount proportional to the degree of change, with $10 for reaching the decrease goals and an additional 1$ for every hour under their goal up to $5. During the baseline phase families could earn up to $25/week ($75) for completing measurements, and up to $10 per week ($60) for completing measurements during the 25% and 50% reduction phases. Families also earned $100 for completing the study. Families could distribute the family money as they chose.\nAfter completing the phone screen families were scheduled for an orientation. During the orientation parents and children completed consent and assent forms, child height and weight were measured, and families were oriented on the TV allowance device, the physical activity monitor, and activity diaries. Interested families were fitted with an accelerometer which was worn on two weekdays and one weekend day.\nFamilies were scheduled for two laboratory sessions. The child's RRVSED and RRVACT, were measured during the first session, and the accelerometers were calibrated during the second session using a progressive treadmill test. After laboratory testing, families were scheduled for 5 home visits throughout the nine week intervention, and children were scheduled to wear the accelerometer on three randomly selected days, two weekdays and one weekend day. Children were also instructed to self-monitor time on each sedentary and active behavior to ensure adherence with the experimental manipulation in a seven day diary during the last week of each phase. Activity devices were downloaded at home visits 3-5 and self report diaries were checked for accuracy. Weight was measured at each home visit and height was measured at the last home visit. Reminder phone calls were made to ensure the child wore the activity device on their scheduled day. Parent and child manuals were provided to each family explaining the study goals as well as to provide techniques for praise and reducing sedentary behaviors.\nDuring the first of five home visits, TV Allowance™ devices were connected to each TV and computer in the home, families were trained on using the devices and asked to maintain their usual pattern of sedentary behaviors, physical activity and dietary intake through the baseline phase; activity devices were fitted to the child, the child was trained on recording in the weekly diary; and weight was measured. During the second home visit each device was checked and TV and computer hours were recorded. At the third home visit, eligibility was determined, amount of screen time was calculated from the allowance devices, and the devices were programmed to decrease TV and computer use by 25% for the next three weeks. During the fourth home visit, devices were programmed to decrease TV and computer use by 50% for the next three weeks. Devices were removed at home visit 5.\nTwo positive reinforcement techniques were used to facilitate adherence to the experimental protocol, praise and monetary reinforcement. 
Parents were instructed to praise their children when they observed behavior changes in the appropriate direction, and to be very specific in stating what the praise is for and to be consistent in using praise. Families earned up to $325.00 for participation in the 9 week-study. Children earned up to $15/week during the 25% and 50% reduction phases for making the reductions in targeted sedentary behavior ($90), with the amount proportional to the degree of change, with $10 for reaching the decrease goals and an additional 1$ for every hour under their goal up to $5. During the baseline phase families could earn up to $25/week ($75) for completing measurements, and up to $10 per week ($60) for completing measurements during the 25% and 50% reduction phases. Families also earned $100 for completing the study. Families could distribute the family money as they chose.\n[SUBTITLE] Measurement [SUBSECTION] [SUBTITLE] Demographic variables and medical history [SUBSECTION] Family size, family income, parent educational level and racial/ethnic background were obtained using a standardized questionnaire. Current medical problems, including psychiatric diagnoses and eating disorders were assessed at baseline by parent interview.\nFamily size, family income, parent educational level and racial/ethnic background were obtained using a standardized questionnaire. Current medical problems, including psychiatric diagnoses and eating disorders were assessed at baseline by parent interview.\n[SUBTITLE] Weight, height, BMI [SUBSECTION] Child weight was assessed by use of a Tanita BWB-800P digital scale. Height was assessed using a Digi-Kit digital stadiometer. On the basis of the height and weight data Body Mass Index (BMI) is calculated according to the following formula: (BMI = kg/m2). Children were considered overweight if they were at or above the 85th BMI percentile for their age and sex [17].\nChild weight was assessed by use of a Tanita BWB-800P digital scale. Height was assessed using a Digi-Kit digital stadiometer. On the basis of the height and weight data Body Mass Index (BMI) is calculated according to the following formula: (BMI = kg/m2). Children were considered overweight if they were at or above the 85th BMI percentile for their age and sex [17].\n[SUBTITLE] Liking of activities, food, and videos/computer games [SUBSECTION] Liking of the activities, videos or computer games was measured on 7 point Likert-type scales anchored by 1 (Do not like) to 7 (Like very much) [18].\nLiking of the activities, videos or computer games was measured on 7 point Likert-type scales anchored by 1 (Do not like) to 7 (Like very much) [18].\n[SUBTITLE] The relative reinforcing value of sedentary behavior (RRVSED) and physical activity (RRVACT) [SUBSECTION] RRVSED or RRVACT is assessed by evaluating how hard a participant will work to obtain access to physical versus sedentary activities [1]. The child first sampled each of the four physical activities and four sedentary behaviors for at least two minutes and then rated them on a scale from 1-7. Children were asked to rank the activities, and the highest rated physical activity and sedentary behavior were chosen for the task. The physical activity alternatives included a balance board, a stationary youth mountain bike, a stepper, and a skipping game, while the sedentary alternative included magazines, puzzles, movies, and Playstation™ 2 video games. 
Liking of activities, food, and videos/computer games

Liking of the activities, videos or computer games was measured on 7-point Likert-type scales anchored by 1 (Do not like) and 7 (Like very much) [18].

The relative reinforcing value of sedentary behavior (RRVSED) and physical activity (RRVACT)

RRVSED and RRVACT were assessed by evaluating how hard a participant would work to obtain access to physical versus sedentary activities [1]. The child first sampled each of four physical activities and four sedentary behaviors for at least two minutes and then rated them on a scale from 1 to 7. Children were asked to rank the activities, and the highest-rated physical activity and sedentary behavior were chosen for the task. The physical activity alternatives included a balance board, a stationary youth mountain bike, a stepper, and a skipping game, while the sedentary alternatives included magazines, puzzles, movies, and Playstation™ 2 video games. The child was instructed how to use a computer-generated task to earn points toward their favorite physical or sedentary activity. The computer displayed three squares in which shapes rotated and changed color every time a mouse button was pressed; when all of the shapes matched, the participant earned one point. The child worked on one of two computer monitors: one monitor offered the physical activity alternative and the other the sedentary alternative. The reinforcement schedules for both components were initially set at FR4 (fixed ratio 4, meaning the participant earned one point after 4 responses). The schedule increased on a progressive ratio that doubled after every 5 points earned (FR4, FR8, FR16, FR32, FR64, FR128, FR256, FR512, FR1024 and FR2048). For every five points earned, the participant received 2 minutes of time to engage in the activity for which they were playing. The child was able to end the session at any time; they were instructed to tell the experimenter they were all finished when they no longer wanted to earn points. The computer recorded the points earned throughout the session. After the instructions were given, the experimenter left the room. RRVSED and RRVACT were quantified by OMAXSED and OMAXACT, the maximal amount of responding at the highest reinforcement schedule completed. An intercom and a video camera allowed the experimenter to hear and see into the experimental room from an adjoining room.
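The progressive-ratio bookkeeping can be illustrated with a short sketch (hypothetical code; the operationalization of OMAX as responding on the highest fully completed schedule follows the description in the text rather than the study's actual software):

```python
# Illustrative sketch of the progressive fixed-ratio task bookkeeping.
# Assumption: OMAX is computed here as the responding emitted on the highest
# fully completed schedule (5 points x that schedule's ratio), per the text.

SCHEDULES = [4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048]  # FR4 ... FR2048
POINTS_PER_SCHEDULE = 5
MINUTES_PER_FIVE_POINTS = 2

def summarize_session(total_points: int) -> dict:
    """Summarize a session from the total points the child chose to earn."""
    completed = min(total_points // POINTS_PER_SCHEDULE, len(SCHEDULES))
    highest_ratio = SCHEDULES[completed - 1] if completed else 0
    return {
        "highest_completed_FR": highest_ratio,
        "OMAX": highest_ratio * POINTS_PER_SCHEDULE,   # presses at that schedule
        "activity_minutes": completed * MINUTES_PER_FIVE_POINTS,
    }

# A child who stops after 17 points has completed FR4, FR8 and FR16:
print(summarize_session(17))
# {'highest_completed_FR': 16, 'OMAX': 80, 'activity_minutes': 6}
```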
Measurement of television, video and computer game playing at home

Television, VCR/DVD, video game playing, and computer use were measured using the TV Allowance™. The device has a memory that records the amount of time the targeted child and each family member used the connected television or computer since the unit was installed. The device has been used in ongoing research in our laboratory and was an important component of a previous study that successfully reduced television watching to prevent the development of obesity in youth [13,19]. At baseline, unlimited TV and computer hours were set on each device so that study staff could access the total number of hours of television and computer use for each family member. During the 25% and 50% reduction phases the TV Allowance™ was programmed with the sedentary budget for that phase, based on baseline amounts. In addition to the TV Allowance™, self-monitoring of sedentary behavior was recorded in a daily habit book, which assessed reading, homework, use of hand-held computer games and targeted sedentary behaviors that cannot be quantified in this objective way. Recording was part of the intervention methodology to facilitate the child meeting their behavioral goals during the reduction phases.
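As a minimal sketch of how the phase-specific sedentary budget relates to baseline screen time (weekly-hours units are assumed here; enforcement itself was handled by the TV Allowance™ hardware):

```python
# Minimal sketch, assuming a weekly-hours budget: the screen-time allowance for
# the 25% and 50% reduction phases is the baseline amount scaled down.

def weekly_budget(baseline_hours: float, reduction: float) -> float:
    """Hours of TV/computer allowed per week after reducing baseline by `reduction`."""
    return baseline_hours * (1.0 - reduction)

baseline = 28.0  # hypothetical baseline screen hours/week
print(weekly_budget(baseline, 0.25))  # 21.0 hours during the 25% phase
print(weekly_budget(baseline, 0.50))  # 14.0 hours during the 50% phase
```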
Physical activity

The objective measure of physical activity was the Actigraph™ activity monitor, a small, unobtrusive unidirectional accelerometer with extensive validation in youth as a measure of physical activity [20-23]. The activity monitor was set to record minute-by-minute measures of physical activity and was worn during school and non-school waking hours on three days (two weekdays and one weekend day) during the last week of each three-week period. If a child did not wear the activity monitor on the scheduled day, a similar day was rescheduled. Weekly diaries were used in combination with the activity monitor to indicate which physically active and targeted sedentary behaviors the child engaged in during the last week of each phase of the experiment. The activity monitors were downloaded to a computer at each home visit, and weekly diaries were reviewed during the weekly home visit with the family.

To determine levels of physical activity based on energy expenditure (METs, or metabolic equivalents), we individually calibrated each accelerometer using a progressive treadmill test. VO2 (mL/kg/min) and accelerometer counts/minute were sampled each minute, and the accelerometer values were regressed against the VO2 values to estimate energy expenditure for different intensities of physical activity. Based on the regression line, accelerometer count rates were determined for each participant corresponding to rest (0 counts), 2 METs (7 mL/kg/min), 3 METs (MVPA, 10.5 mL/kg/min) and 6 METs (VPA, 21 mL/kg/min).
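The individual calibration step can be sketched as follows (all counts and VO2 values below are hypothetical and exist only to make the code runnable; the study used each child's own treadmill data):

```python
# Illustrative calibration sketch: regress VO2 on accelerometer counts for one
# child, then invert the fitted line to get counts/min thresholds at 2, 3 and
# 6 METs (7, 10.5 and 21 mL/kg/min). All data values are hypothetical.

import numpy as np

counts = np.array([0.0, 500.0, 1500.0, 3000.0, 5000.0, 7500.0])   # counts/min
vo2 = np.array([3.5, 6.0, 9.5, 14.0, 20.0, 27.0])                 # mL/kg/min

slope, intercept = np.polyfit(counts, vo2, 1)   # VO2 ~ slope * counts + intercept

def counts_for_vo2(target_vo2: float) -> float:
    """Counts/min corresponding to a target VO2 on the fitted regression line."""
    return (target_vo2 - intercept) / slope

for mets, target in [(2, 7.0), (3, 10.5), (6, 21.0)]:
    print(f"{mets} METs ~ {counts_for_vo2(target):.0f} counts/min")
```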
Because the accelerometer was worn only during waking hours and did not cover time spent sleeping, we treated the remaining, non-sampled minutes as expending 1 MET per minute and computed calories per minute from estimated resting metabolic rate (RMR) using previously published equations for children [24]. Daily RMR calories were converted to RMR calories per minute and multiplied by the number of minutes not sampled by the accelerometer; energy expenditure estimates for the minutes the accelerometer was worn already included the RMR component.

Based on the estimated total daily energy expenditure, we estimated energy intake and changes in energy balance by considering total daily energy expenditure with respect to weight change. If weight was stable over the nine weeks, it was assumed that energy intake equalled energy expenditure. If children lost weight, one pound of weight loss was assumed to be equivalent to a negative energy balance of 3500 kcal, or 55.6 kcal/day; similarly, a gain of one pound over the nine weeks was treated as a positive energy balance of 55.6 kcal/day. From the estimated energy expenditure and the observed weight change, estimated energy intake and changes in energy balance were calculated.
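A sketch of this energy-balance arithmetic is given below (hypothetical inputs; the study derived the expenditure term from the calibrated accelerometer data and child-specific RMR equations [24]):

```python
# Illustrative sketch of the energy-balance estimation. Assumptions: worn-time
# expenditure already includes RMR, non-worn minutes are costed at RMR (1 MET),
# and one pound of weight change over the 63-day study equals 3500 kcal
# (about 55.6 kcal/day).

STUDY_DAYS = 63            # nine weeks
KCAL_PER_POUND = 3500.0

def total_daily_expenditure(worn_kcal: float, rmr_kcal_per_min: float,
                            unworn_minutes: float) -> float:
    """Accelerometer-based kcal for worn minutes plus RMR kcal for unworn minutes."""
    return worn_kcal + rmr_kcal_per_min * unworn_minutes

def estimated_intake(expenditure_kcal: float, weight_change_lb: float) -> float:
    """Expenditure plus the daily energy balance implied by observed weight change."""
    daily_balance = weight_change_lb * KCAL_PER_POUND / STUDY_DAYS
    return expenditure_kcal + daily_balance

# Hypothetical child: 1600 kcal over worn minutes, RMR of 0.9 kcal/min for
# 480 unworn minutes, and a 1 lb gain over the study.
tdee = total_daily_expenditure(1600.0, 0.9, 480.0)      # 2032 kcal/day
print(round(estimated_intake(tdee, 1.0), 1))            # 2087.6 kcal/day
```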
Analytic Plan

Repeated measures analysis of variance was used to assess whether the targeted sedentary behaviors changed across phases, to confirm that the targeted behavior had been manipulated by the intervention. Repeated measures analysis of variance was also used to assess changes in body weight and physical activity. Pearson product-moment correlation coefficients were used to assess predictors of weight change, the relationship between changes in targeted sedentary behaviors and physical activity, and the relationships between RRVACT and RRVSED, RRVACT and measured physical activity, and RRVSED and total sedentary behavior. Significant factors were then studied in regression models controlling for age, sex and minority status. Significant relationships between weight change and RRVSED were explored by median splits dividing children into those who decreased or maintained weight (N = 20) versus those who increased weight (N = 41) over the nine weeks of observation.
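The analytic plan can be illustrated with a brief sketch (hypothetical data frames and column names; the study's analyses may have been run in different software):

```python
# Illustrative analysis sketch with hypothetical column names: a repeated-measures
# ANOVA on targeted sedentary hours across the three phases, and a Pearson
# correlation between OMAX for sedentary behavior and weight change.

import pandas as pd
from scipy.stats import pearsonr
from statsmodels.stats.anova import AnovaRM

def run_analyses(df_long: pd.DataFrame, df_child: pd.DataFrame) -> None:
    # df_long: one row per child per phase, with columns
    #   child_id, phase ('baseline', 'cut25', 'cut50'), sedentary_hours
    anova = AnovaRM(df_long, depvar="sedentary_hours",
                    subject="child_id", within=["phase"]).fit()
    print(anova)

    # df_child: one row per child, with columns omax_sed, weight_change_lb
    r, p = pearsonr(df_child["omax_sed"], df_child["weight_change_lb"])
    print(f"OMAX_sed vs weight change: r = {r:.2f}, p = {p:.3f}")
```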
Participants

Participants were 56 overweight/obese, 8-12-year-old American children, recruited through flyers, a direct mailing, and a pre-existing database. All of the children were considered to be overweight or at risk for overweight, defined as a Body Mass Index (BMI) percentile adjusted for age and sex at or above the 85th percentile [17]. Criteria for participation included the following: at least one parent agreed to help their child reduce targeted sedentary behaviors and to measure usual physical activity and dietary intake; the participating child engaged in at least 18 hours of targeted sedentary behaviors per week; the child did not participate in swimming and/or weight training for more than 5 total combined hours per week; the child had no activity restrictions or physical limitations that could interfere with changes in physical activity, such as developmental disability or injury; and the child had no psychopathology or developmental disabilities that would limit participation. All procedures and measures were approved by the University at Buffalo Children and Youth Institutional Review Board.
Results

The average child was 10.7 ± 1.2 years of age, with a height of 57.4 ± 3.7 in, weight of 118.8 ± 30.5 lbs, BMI of 25.0 ± 4.2, and zBMI of 1.8 ± 0.4. Twenty-seven (48.2%) of the children were male, and 14 (25%) were non-Caucasian or minority (Table 1). Changes in weight, targeted sedentary behaviors and physical activity across the three phases are shown in Table 2. The average child reduced targeted sedentary behavior from baseline to the 25% and 50% reduction phases (F(2,110) = 285.00, p < 0.0001), with significant changes from baseline to 25% (p < 0.001) and a further significant reduction from 25% to 50% (p < 0.001). The overall reduction in targeted sedentary behavior was 67.6%, with the majority of the change (53%) occurring during the initial reduction phase. The changes in sedentary behavior included significant reductions in television watching (F(2,110) = 197.00, p < 0.001) and computer use (F(2,110) = 40.07, p < 0.001). Small but significant increases in body weight were observed (F(2,110) = 4.84, p < 0.001), with significant changes from baseline to the 50% phase (p = 0.003) but no differences between the 25% and 50% phases (p = 0.20). There was little change in physical activity accelerometer counts over phases (F(2,110) = 0.49, p = 0.61). In addition, there were no significant changes in average METs (F(2,94) = 0.60, p = 0.55) or in the percentage of time below 2 METs (F(2,94) = 1.61, p = 0.21) or above 2 METs (F(2,94) = 1.61, p = 0.21). The amount of time above 3 METs showed a significant decrease over phases (F(2,94) = 4.78, p = 0.01), with decreases from baseline to the 25% (p = 0.03) and 50% (p = 0.02) reduction phases. Estimated energy intake did not significantly change over phases (F(2,110) = 0.49, p = 0.61).

Table 1. Characteristics of the sample (N = 56)

Table 2. Behavior and weight changes during the baseline, 25% and 50% reduction phases (Mean ± SEM)

The only variable related to weight change was OMAXSED (r = 0.31, p = 0.022). Child age (p = .71), sex (p = .64), minority status (p = .67), income (p = .38), the reinforcing value of physical activity (p = .72), baseline weight (p = .45), changes in targeted sedentary behavior (p = .44) and changes in physical activity (p = .15) were not related to weight change.
Multiple regression controlling for child age, sex and minority status did not reduce the impact of RRVSED on weight change (p = 0.035).\nDifferences in the motivation to be active or sedentary, and changes in sedentary and active behaviors, estimated energy intake and body weight for children who gained (N = 39) or lost (N = 19) weight are shown in Table 3. There was a significant difference in OMAXSED between children who lost or maintained weight versus those who gained weight during the study (F(1,54) = 4.79, p = 0.03). Figure 1 shows differences in OMAX (left graphs) and the pattern of responding (right graphs), while the top and bottom graphs show motivated responding for sedentary behaviors or physical activity, respectively. As shown in Figure 1, children who maintained or lost weight had lower OMAXSED and responding over progressive ratio schedules for access to sedentary behaviors than children who gained weight, who worked much harder for sedentary behaviors. There were no differences in OMAXACT (F(1,54) = 0.11, p = 0.74) as a function of whether children lost or maintained versus gained weight during the study. As shown in Table 3, there were no differences in the alterations in any activity variable for children who gained or lost weight over the nine weeks of the study. Children who lost weight reduced energy intake by an estimated 223 kcal/day when sedentary behaviors were reduced, while children who gained weight increased their estimated energy intake by 172 kcal/day when sedentary behaviors were reduced (F(1,54) = 13.13, p = 0.0006).\nDifferences in behavior for children who lost or gained weight after reduction of sedentary behaviors (Mean ± SEM)\nDifferences (mean ± SEM) in the OMAX (left graphs) and reinforcing value (right graphs) of sedentary behaviors (top graphs) or physical activity (bottom graphs) for children who lost or maintained weight versus gained weight during the 9-week study.\nOMAXACT was not significantly related to OMAXSED (r = 0.07, p = 0.63). Similarly, OMAXACT was not correlated with activity counts at baseline (r = 0.12, p = 0.38), and OMAXSED was not correlated with total sedentary behavior at baseline (r = 0.07, p = 0.63)."
These are activities for which obese children allocate a great deal of their time, and time being sedentary may be independent of time being active [27,28], as the motivation to be sedentary is independent of the motivation to be active. If they were direct substitutes for each other, then they would be significantly negatively correlated, and reducing sedentary behaviors would result in an increase in physical activity, which is generally not observed. In the current study the motivation to be active and the motivation to be sedentary were not significantly related.\nIt is important that only a subset of sedentary behaviors was targeted, to allow the child to choose how to reallocate time that had been allocated for watching television or playing computer games. Since no increases in physical activity were observed, it is likely that children substituted other sedentary behaviors for the targeted sedentary behaviors. Unfortunately, we did not have children record all sedentary behaviors during the reduction phases, so it is not possible to know what sedentary behaviors they engaged in as substitutes.\nThe question is how is the reinforcing value of sedentary behaviors related to weight change? Weight change is due to change in energy balance, which must be due to either reductions in energy intake or increases in energy expenditure. The absence of increases in physical activity reduces the likelihood that physical activity is a substitute for sedentary behaviors, such that children spontaneously increase their physical activity when sedentary behaviors are reduced. This suggests that changes in energy intake are the component of energy balance that is promoting the weight change. The estimated energy intake data showed that children who lost weight reduced their estimated daily energy intake by 223 kcal/day, while those who gained weight increased their estimated daily energy intake by 172 kcal/day. Other investigators have argued that while sedentary behaviors are correlated with weight, the relationship is not mediated by changes in measured physical activity, but are likely to be mediated by changes in eating and energy intake [26]. We have previously shown in non-overweight children that energy intake is a reliable complement to shifts in sedentary behavior [19], such that reductions in sedentary behavior paired with eating result in reductions in energy intake. This may be due to the strength of the relationship between eating and engaging in sedentary behaviors when children enter the study. While eating in association with sedentary behaviors is common, and experimental research has shown that increasing television watching increases energy intake [29-31], there is variability in this relationship. If a child never eats in association with television watching, then reducing television watching cannot result in a reduction in energy intake. On the other hand, if a child consumes food often in association with watching television, then reducing television watching may have a large effect on energy intake and body weight.\nThe motivation to be sedentary is a behavioral phenotype that may lead to a better understanding of factors related to how changing sedentary behavior may relate to changes in energy balance behaviors and weight loss. Given the potential relationship between changes in television watching and energy intake, it is also possible that the motivation to eat is an important factor to predict how reducing sedentary behaviors influences eating. 
For example, it may be that children who find food more reinforcing would have a harder time reducing food intake when sedentary behaviors are reduced: they may compensate by increasing intake at other times, or they may resist changes that reduce television watching associated with eating, since this would reduce access to powerful reinforcers they want to obtain. This would be an interesting set of studies for future research.\nOne surprising result was the failure to show that the motivation to be active was related to physical activity, or that those high in the motivation to be active were more likely to become more active. We have shown in previous research that the motivation to be active is related to more physical activity [3,4], but these studies included children with a wide variety of motivation to be active as well as a variety of levels of physical activity. In the present study of overweight and obese, sedentary children, the range of motivation to be active and of activity levels was constrained, which can lead to weaker relationships if there is little variability in the predictor and/or outcome.\nThere are limitations to this study in the measurement of activity and diet. While we had objective measurements of physical activity, we did not collect detailed self-reported information on what types of behaviors children engaged in and what types of behaviors they used to substitute for the reduced targeted sedentary behaviors. It would have been interesting to know if specific classes of active or sedentary behaviors were changed. For example, recent data suggest that standing rather than sitting may confer health benefits in adults [32], and it would be interesting to know if children stood more or engaged in light physical activity to replace sedentary behaviors and if these changes would be associated with weight loss or improvement in health. Likewise, it would have been interesting to know if other popular sedentary behaviors, such as talking on the phone or texting, replaced television watching or computer game playing. The TV Allowance™ is a useful behavioral engineering approach to reducing television watching; however, a limitation of the device is that it can overestimate television watching, since children may turn on the television, become engaged in an alternative activity, and not watch it. This should be minimized because children are reinforced for reducing television watching, but there may be instances in which television watching is overestimated and the degree of reduction in television watching is underestimated. Physical activity was measured for three days during each phase, and it may have been useful to collect more extended samples of physical activity during each phase [23]. The results point to changes in energy intake, rather than physical activity, as the mechanism for changes in body weight as people reduce their television watching or computer game playing. Estimates of energy intake were consistent with this hypothesis, and accelerometer-based activity counts can provide valid information about energy expenditure [20,33]. In addition, self-reports of energy intake are notoriously inaccurate [34], and previous studies using the current methods for reducing sedentary behavior have shown consistent underreporting of energy intake in obese children and adolescents [19]. 
Despite the challenges in collecting valid dietary intake data, it would be useful to have dietary information that includes total energy intake as well as macronutrient intake.\nIn summary, the present study replicates previous research suggesting that reducing sedentary behaviors is not associated with an increase in physical activity. The motivation to be sedentary is related to short-term weight change when sedentary behaviors are reduced, and this effect may be mediated by changes in energy intake. Thus, one predictor of the effectiveness of programs that reduce sedentary behavior for child weight change may be the motivation to be sedentary.", "Dr. Epstein is a consultant to Kraft Foods and NuVal. The other authors do not have any potential conflicts of interest.", "LHE and JNR designed the study, and LHE obtained the research funding. MDC obtained IRB approval, and supervised study implementation and data collection. LHE and RAP conducted data analysis. LHE wrote the initial draft of the manuscript, and all authors contributed to the interpretation of data and the writing of the manuscript. All authors read and approved the final manuscript." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Nonsynonymous substitution rate (Ka) is a relatively consistent parameter for defining fast-evolving and slow-evolving protein-coding genes.
21342519
Mammalian genome sequence data are being acquired in large quantities and at enormous speeds. We now have a tremendous opportunity to better understand which genes are the most variable or conserved, and what their particular functions and evolutionary dynamics are, through comparative genomics.
BACKGROUND
This article has been reviewed by Drs. Anamaria Necsulea (nominated by Nicolas Galtier), Subhajyoti De (nominated by Sarah Teichmann) and Claus O. Wilke.
REVIEWERS
We chose human and eleven other high-coverage mammalian genomes, as well as an avian genome as an outgroup, to analyze orthologous protein-coding genes using nonsynonymous (Ka) and synonymous (Ks) substitution rates. After evaluating eight commonly-used methods of Ka and Ks calculation, we observed that these methods yielded a nearly uniform result when estimating Ka, but not Ks (or Ka/Ks). When sorting genes based on Ka, we noticed that fast-evolving and slow-evolving genes often belonged to different functional classes, with respect to species-specificity and lineage-specificity. In particular, we identified two functional classes of genes in the acquired immune system. Fast-evolving genes coded for signal-transducing proteins, such as receptors, ligands, cytokines, and CDs (cluster of differentiation, mostly surface proteins), whereas the slow-evolving genes coded for function-modulating proteins, such as kinases and adaptor proteins. In addition, among slow-evolving genes that had functions related to the central nervous system, neurodegenerative disease-related pathways were significantly enriched in most mammalian species. We also confirmed that gene expression was negatively correlated with evolution rate, i.e. slow-evolving genes were expressed at higher levels than fast-evolving genes. Our results indicated that the functional specializations of the three major mammalian clades were: sensory perception and oncogenesis in primates, reproduction and hormone regulation in large mammals, and immunity and angiotensin in rodents.
RESULTS
Our study suggests that Ka calculation, which is less biased than Ks and Ka/Ks, can be used as a parameter to sort genes by evolution rate; it also provides a way to categorize common protein functions and to define their interaction networks, either pair-wise or in defined lineages or subgroups. Evaluating gene evolution based on Ka and Ks calculations can be done with large datasets, such as mammalian genomes.
CONCLUSION
[ "Amino Acid Substitution", "Animals", "Computational Biology", "Evolution, Molecular", "Genetic Variation", "Genome", "Humans", "Mammals", "Models, Molecular", "Open Reading Frames", "Phylogeny", "Sequence Alignment", "Species Specificity" ]
3055854
null
null
Methods
[SUBTITLE] Data acquisition and quality assessment [SUBSECTION] The genome data were collected from Ensembl version 53 [61] (http://www.biomart.org/; http://www.ensembl.org/): human (NCBI36), chimpanzee (CHIMP2.1), orangutan (PPYG2), macaque (MMUL1), horse (EquCab2), dog (BROADD2), cow (Btau4), guinea pig (cavPor3), mouse (NCBIM37), rat (RGSC3.4), opossum (BROADO3), platypus (OANA5), and chicken (WASHUC2). We also collected ortholog sequences of humans and other species, saving only the gene pairs marked as one-to-one matches to avoid ambiguous definition of orthologs. We used ClustalW [62] to align human amino acid sequences with those of other species, and then translated them back to their corresponding nucleotide sequences. [SUBTITLE] Defining fast-evolving, intermediately-evolving, and slow-evolving genes [SUBSECTION] We estimated the non-synonymous and synonymous substitution rates of orthologs based on a number of algorithms, including NG (the different methods are abbreviated as their authors' last name initials; M stands for a modified version of the original method) [24], LWL [25], MLWL [28], LPB [26,27], MLPB [28], YN [30], MYN [31], GY [29], and the gamma-series methods [22,23] used in the KaKs Calculator 2.0 tool [63]. We adopted 10% as the cut-off value to define fast-, intermediately- or slow-evolving genes in each lineage. We sorted genes by their Ka values from smallest to largest in each lineage, and defined genes corresponding to the lowest, middle, and highest 10% of Ka values to be slow-evolving, intermediately-evolving, and fast-evolving genes, respectively. In this procedure, we considered NA (not applicable) values to be 0, because we observed that NA values are usually associated with 100% identical gene pairs, except in the cases of a few indels (insertions or deletions). 
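As an illustration of the sorting rule just described, the following sketch labels genes as slow-, intermediately- or fast-evolving from a table of Ka values. It is a minimal re-implementation of the stated procedure (NA treated as 0, percentile cut-offs), not the authors' actual pipeline; the function and variable names are our own.

```python
import pandas as pd

def classify_by_ka(ka_series, cutoff=0.10):
    """Label genes by evolutionary rate: sort by Ka (NA treated as 0) and
    take the lowest, middle and highest `cutoff` fractions as slow-,
    intermediately- and fast-evolving genes, respectively.
    `ka_series` is a pandas Series indexed by gene ID."""
    ka = ka_series.fillna(0.0).sort_values()
    n = len(ka)
    k = max(1, int(round(n * cutoff)))
    mid_start = n // 2 - k // 2
    labels = pd.Series("other", index=ka.index, name="rate_class")
    labels.iloc[:k] = "slow"
    labels.iloc[mid_start:mid_start + k] = "intermediate"
    labels.iloc[n - k:] = "fast"
    return labels

# Example with toy Ka values for five genes:
ka = pd.Series({"geneA": 0.001, "geneB": None, "geneC": 0.02,
                "geneD": 0.15, "geneE": 0.40})
print(classify_by_ka(ka, cutoff=0.2))
```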
[SUBTITLE] Functional classification and other analyses [SUBSECTION] We used IDConvertor [64] to convert IDs between different gene accession systems and utilized the Protein Analysis through Evolutionary Relationships (PANTHER) online system to annotate genes at three levels: biological processes, molecular functions, and pathways [65]. Enrichment analysis was performed based on a combination of Fisher's exact test and a Bonferroni step-down (Holm) correction for multiple testing [66]. The cut-off for the functional enrichment test was 0.1. The network based on fast- and slow-evolving genes was drawn with the software Cytoscape [67]. Conservation-grade illustrations were created using the ConSurf server [68] after submitting protein alignments built with ClustalX [62]. The three-dimensional structures of the corresponding proteins were retrieved from the Protein Data Bank (PDB) [69]. For gene expression analysis, we used expression profiles of Expressed Sequence Tag (EST) data pooled from 18 tissues, as described previously in our published work [70].
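A minimal sketch of the enrichment test described above (a per-category Fisher's exact test followed by a Bonferroni step-down, i.e. Holm, correction at a 0.1 cut-off). This is not the PANTHER implementation; the input layout and function name are assumptions made for illustration.

```python
from scipy.stats import fisher_exact

def enriched_categories(tables, alpha=0.1):
    """Fisher's exact test per functional category, followed by a Holm
    (Bonferroni step-down) correction. `tables` maps a category name to a
    2x2 contingency table:
    [[in gene list & in category, in list & not in category],
     [in background & in category, in background & not in category]]."""
    pvals = {cat: fisher_exact(tab, alternative="greater")[1]
             for cat, tab in tables.items()}
    m = len(pvals)
    kept = []
    # Holm: compare the i-th smallest p-value (0-indexed) with alpha / (m - i)
    for i, (cat, p) in enumerate(sorted(pvals.items(), key=lambda kv: kv[1])):
        if p <= alpha / (m - i):
            kept.append((cat, p))
        else:
            break
    return kept

# Example with two toy categories:
print(enriched_categories({
    "immune response": [[30, 70], [50, 850]],
    "metabolism":      [[10, 90], [120, 780]],
}))
```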
null
null
null
null
[ "Background", "Results and Discussion", "Data and quality control", "Indexing evolutionary rates of protein-coding genes", "Function characterization of fast-evolving and slow-evolving genes", "Comparisons of fast-evolving and slow-evolving genes and their functions among mammalian lineages", "Comparisons to other studies", "The relationship between evolutionary rate and expression level", "The shared fast-evolving and slow-evolving genes among mammals", "Conclusions", "Data acquisition and quality assessment", "Defining fast-evolving, intermediately-evolving, and slow-evolving genes", "Functional classification and other analyses", "Competing interests", "Authors' contributions", "Reviewers' comments", "Reviewer's report 1", "Authors' response", "Authors' response", "Authors' response", "Comments from the second round of reviewing", "Authors' response", "Reviewer's report 2", "Authors' response", "Authors' response", "Authors' response", "Authors' response", "Authors' response", "Authors' response", "Authors' response", "Authors' response", "Reviewer's report 3", "Authors' Response", "Authors' response", "Authors' response", "Authors' response" ]
[ "Although protein-coding sequences account for ~1% of the entire mammalian genome, it is the most function-related, dynamic, and informative part of the genome [1]. For molecular evolution studies, protein-coding sequences are central to understanding the mutational dynamics of genes and the functional dynamics of gene networks within a population or among diverse species and lineages.\nFollowing the publication of the complete human genome sequence [2], over a dozen mammalian genomes have been sequenced, allowing mammalian comparative genomics to finally come to age. Genome-wide sequence analysis has been focused on two essential forms of genetic variation. One concerns gene gain-and-loss that is related to the amplification and deletion of certain genes and their chromosomal regions. This is an important evolutionary mechanism to shape mammalian genomes through natural selection, but it also leads to gene family expansion and deletion, which has been proposed to be one molecular origin of chimp-human evolution [3]. Another form of genetic variation is sequence variation at specific nucleotide sites in protein-coding genes. Such variations become functionally relevant when they alter protein sequences.\nThe task of defining positively-selected genes has drawn the most attention, because these genes are often considered to be the major driving forces behind how organisms adapt to their external environments [4,5]. A number of interesting characteristics of positively selected genes have been found: (1) they are more likely to have several classes of functions, including nuclear transport, sensory perception, immune defenses, tumor suppression, apoptosis, and reproduction, and may be involved in Mendelian genetic disorders [6-8]. (2) These genes tend to be expressed at low levels and in a tissue-specific manner [7]. (3) Some highly-expressed genes in the testis were reported to have been subjected to positive selection [6]. (4) Positively selected genes are often species-specific or lineage-specific [7]. As the number of sequenced genomes increases, new approaches and novel methodology will be needed to develop efficient tools for mining vast amounts of sequence data.\nHere, we report a novel yet basic method of defining fast-evolving and slow-evolving genes based on nonsynonymous substitution rates (Ka) in different subgroups or lineages of mammals. We first tested different computational models to see if they provided consistent results when defining the evolution rates of diverse gene classes and families. We then identified percentage shared genes (orthologs) among lineages that were calculated based on different methods, and also looked in more detail at their cellular functions and functional pathways. We also examined the relationship between the evolutionary rates and gene expression levels of these genes, using high-coverage genome sequence and transcriptomic data from thirteen vertebrate species, including human [9], chimpanzee [10], orangutan [11], macaque [12], horse [13], dog [14], cow [15], guinea pig, mouse [16], rat [17], opossum [18], platypus [19], and chicken [20]. 
Our new method not only confirms the results of many previous studies, but also provides a new and straightforward approach to understanding the evolutionary dynamics of mammalian genes.", "[SUBTITLE] Data and quality control [SUBSECTION] To examine the divergence between humans and other species, we calculated identities by averaging all orthologs in a species: chimpanzee - 99.23%; orangutan - 98.00%; macaque - 96.09%; horse - 89.44%; dog - 87.93%; cow - 87.36%; guinea pig - 85.91%; mouse - 84.54%; rat - 83.92%; opossum - 77.64%; platypus - 74.37%; and chicken - 72.87%. The data gave rise to a bimodal distribution in overall identities, which distinctly separates highly identical primate sequences from the rest (Additional file 1: Figure 1SA). For quality assessment, we also evaluated the alignment qualities of all orthologs.\nFirst, we found that the number of Ns (uncertain nucleotides) in all coding sequences (CDS) fell within reasonable ranges (mean ± standard deviation): (1) the number of Ns/the number of nucleotides = 0.00002740 ± 0.00059475; (2) the total number of orthologs containing Ns/total number of orthologs × 100% = 1.5084%. Second, we evaluated parameters related to the quality of sequence alignments, such as percentage identity and percentage gap (Additional file 1: Figure S1). All of them provided clues for low mismatching rates and limited number of arbitrarily-aligned positions.\nTo examine the divergence between humans and other species, we calculated identities by averaging all orthologs in a species: chimpanzee - 99.23%; orangutan - 98.00%; macaque - 96.09%; horse - 89.44%; dog - 87.93%; cow - 87.36%; guinea pig - 85.91%; mouse - 84.54%; rat - 83.92%; opossum - 77.64%; platypus - 74.37%; and chicken - 72.87%. The data gave rise to a bimodal distribution in overall identities, which distinctly separates highly identical primate sequences from the rest (Additional file 1: Figure 1SA). For quality assessment, we also evaluated the alignment qualities of all orthologs.\nFirst, we found that the number of Ns (uncertain nucleotides) in all coding sequences (CDS) fell within reasonable ranges (mean ± standard deviation): (1) the number of Ns/the number of nucleotides = 0.00002740 ± 0.00059475; (2) the total number of orthologs containing Ns/total number of orthologs × 100% = 1.5084%. Second, we evaluated parameters related to the quality of sequence alignments, such as percentage identity and percentage gap (Additional file 1: Figure S1). All of them provided clues for low mismatching rates and limited number of arbitrarily-aligned positions.\n[SUBTITLE] Indexing evolutionary rates of protein-coding genes [SUBSECTION] Ka and Ks are nonsynonymous (amino-acid-changing) and synonymous (silent) substitution rates, respectively, which are governed by sequence contexts that are functionally-relevant, such as coding amino acids and involving in exon splicing [21]. The ratio of the two parameters, Ka/Ks (a measure of selection strength), is defined as the degree of evolutionary change, normalized by random background mutation. We began by scrutinizing the consistency of Ka and Ks estimates using eight commonly-used methods. We defined two divergence indexes: (i) standard deviation normalized by mean, where eight values from all methods are considered to be a group, and (ii) range normalized by mean, where range is the absolute difference between the estimated maximal and minimal values. 
In order to keep our comparison unbiased, we eliminated gene pairs when any NA (not applicable or infinite) value occurred in Ka or Ks. We observed that the divergence indexes of Ka were significantly smaller than those of Ks in all examined species (P-value < 2.2e-16, Wilcoxon rank sum test) (Figure 1). The result of our second defined index appeared to be very similar to the first (data not shown). We also investigated the performance of these methods in calculating Ka, Ks, and Ka/Ks. First, we considered six cut-off points for grouping and defining fast-evolving and slow-evolving genes: 5%, 10%, 20%, 30%, 40%, and 50% of the total (see Methods). Second, we applied eight commonly-used methods to calculate the parameters for twelve species at each cut-off value. Lastly, we compared the percentage of shared genes (the number of shared genes from different methods, divided by the total number of genes within a chosen cut-off point) calculated by GY and other methods (Figure 2). We observed that Ka had the highest percentage of shared genes, followed by Ka/Ks; Ks always had the lowest. We also made similar observations using our own gamma-series methods [22,23] (data not shown). It was quite clear that Ka calculations had the most consistent results when sorting protein-coding genes based on their evolutionary rates. As the cut-off values increased from 5% to 50%, the percentages of shared genes also increased, reflecting the fact that more shared genes are obtained by setting less stringent cut-offs (Figure 2A and 2B). We also found a rising trend as the model complexity increased in the order of NG, LWL, MLWL, LPB, MLPB, YN, and MYN (Figure 2C and 2D). We examined the impact of divergent distance on gene sorting using the three parameters, and found that the percentage of shared genes referencing to Ka was consistently high across all twelve species, while those referencing to Ka/Ks and Ks decreased with increasing divergence time between human and other studied species (Figure 2E and 2F). In addition, the percentage of shared genes of Ka/Ks remains moderate between those of Ka and Ks. In particular, there should be more variations in the percentages of shared genes determined by Ka/Ks and Ks than by Ka, when we define slow-evolving genes (Figure 2B, D, and 2F). We found consistent results from the various methods when Ka was used as the measure for sorting genes.\nDivergence index (standard deviation/mean) of Ka and Ks determined based on the eight different methods from the twelve vertebrate species. In the boxplots, lower quantile, median, and upper quantile were represented in the boxes. Mean values were depicted in dots. Outliers were removed to make the plot straightforward. The number codes for the vertebrate species are: 1, chimp; 2, orangutan; 3, macaque; 4, horse; 5, dog; 6, cow; 7, guinea pig; 8, mouse; 9, rat; 10, opossum; 11, platypus; and 12, chicken.\nThe percentage of shared genes of Ka, Ks and Ka/Ks based on GY compared with other seven methods in terms of cut-off (A, B), method (C, D), and species (E, F). Outliers were removed to make the plots straightforward. The number codes for the species are the same as what in Figure 1.\nThe methods used in this study cover a wide range of mutation models with different complexities. 
NG gives equal weight to every sequence variation path [24] and LWL divides the mutation sites into three categories—non-degenerate, two-fold, and four-fold sites—and assigns fixed weights to synonymous and nonsynonymous sites for the two-fold degenerate sites [25]. LPB adopts a flexible ratio of transitional to transversional substitutions to handle the two-fold sites [26,27]. MLWL or MLPB are improved versions of their parental methods with specific consideration on the arginine codons (an exceptional case from the previous method) [28]. In particular, MLWL also incorporates an independent parameter, the ratio of transitional to transversional substitution rates, into the calculation [28]. Both YN and GY capture the features of codon usage and transition/transversion rates, but they are approximate and maximum likelihood methods, respectively [29,30]. MYN accounts for another important evolutionary characteristic—differences in transitional substitution within purines and pyrimidines [31]. Although these methods model and compute sequence variations in different ways, the Ka values that they calculate appeared to be more consistent than their Ks values or Ka/Ks. We proposed the following reasons (which are not comprehensive): first, real data from large data sets are usually from a broader range of species than computer simulations in the training sets for methodology development, so deviations in Ks values may draw more attentions in discussions. Second, the parameter-rich approaches—such as considering unequal codon usage and unequal transition/transversion rates—may lead to opposite effects on substitution rates when sequence divergence falls out of the \"sweet ranges\" [25,30,32]. Third, when examining closely related species, such primates, one will find that most Ka/Ks values are smaller than 1 and that Ka values are smaller than Ks values under most conditions. For a very limited number of nonsynonymous substitutions, when evolutionary distance is relatively short between species, models that increase complexity, such as those for correcting multiple hits, may not lead to stable estimations [24,32]. Furthermore, when incorporating the shape parameter of gamma distribution into the commonly approximate Ka/Ks methods, we found previously that Ks is more sensitive to changes in the shape parameter under the condition Ka < Ks [23]. Together, there are stronger influences on Ks than on Ka in two cases: when Ka < Ks and when complexity increases in mutation models. Fourth, it has been suggested that Ks estimation does not work well for comparing extremes, such as closely and distantly related species [33,34]. Occasionally, certain larger Ka/Ks values, greater than 1, are identified, as was done in a comparative study between human and chimpanzee genes, perhaps due to a very small Ks [34].\nWe also wondered what would happen when Ka becomes saturated as the divergence of the paired sequences increases. Looking at human vs. chicken, we found that the median Ka exceeded 0.2 and that the maximal Ka was as high as 0.6 after the outliers were eliminated (Additional file 1: Figure S2). This result suggested that their Ka values have not approached saturation yet. 
In addition, we chose the GY method to compute Ka as an estimator of evolutionary rates, since counting methods usually yield more out-of-range values than maximum likelihood methods (data not shown).\nKa and Ks are nonsynonymous (amino-acid-changing) and synonymous (silent) substitution rates, respectively, which are governed by sequence contexts that are functionally-relevant, such as coding amino acids and involving in exon splicing [21]. The ratio of the two parameters, Ka/Ks (a measure of selection strength), is defined as the degree of evolutionary change, normalized by random background mutation. We began by scrutinizing the consistency of Ka and Ks estimates using eight commonly-used methods. We defined two divergence indexes: (i) standard deviation normalized by mean, where eight values from all methods are considered to be a group, and (ii) range normalized by mean, where range is the absolute difference between the estimated maximal and minimal values. In order to keep our comparison unbiased, we eliminated gene pairs when any NA (not applicable or infinite) value occurred in Ka or Ks. We observed that the divergence indexes of Ka were significantly smaller than those of Ks in all examined species (P-value < 2.2e-16, Wilcoxon rank sum test) (Figure 1). The result of our second defined index appeared to be very similar to the first (data not shown). We also investigated the performance of these methods in calculating Ka, Ks, and Ka/Ks. First, we considered six cut-off points for grouping and defining fast-evolving and slow-evolving genes: 5%, 10%, 20%, 30%, 40%, and 50% of the total (see Methods). Second, we applied eight commonly-used methods to calculate the parameters for twelve species at each cut-off value. Lastly, we compared the percentage of shared genes (the number of shared genes from different methods, divided by the total number of genes within a chosen cut-off point) calculated by GY and other methods (Figure 2). We observed that Ka had the highest percentage of shared genes, followed by Ka/Ks; Ks always had the lowest. We also made similar observations using our own gamma-series methods [22,23] (data not shown). It was quite clear that Ka calculations had the most consistent results when sorting protein-coding genes based on their evolutionary rates. As the cut-off values increased from 5% to 50%, the percentages of shared genes also increased, reflecting the fact that more shared genes are obtained by setting less stringent cut-offs (Figure 2A and 2B). We also found a rising trend as the model complexity increased in the order of NG, LWL, MLWL, LPB, MLPB, YN, and MYN (Figure 2C and 2D). We examined the impact of divergent distance on gene sorting using the three parameters, and found that the percentage of shared genes referencing to Ka was consistently high across all twelve species, while those referencing to Ka/Ks and Ks decreased with increasing divergence time between human and other studied species (Figure 2E and 2F). In addition, the percentage of shared genes of Ka/Ks remains moderate between those of Ka and Ks. In particular, there should be more variations in the percentages of shared genes determined by Ka/Ks and Ks than by Ka, when we define slow-evolving genes (Figure 2B, D, and 2F). We found consistent results from the various methods when Ka was used as the measure for sorting genes.\nDivergence index (standard deviation/mean) of Ka and Ks determined based on the eight different methods from the twelve vertebrate species. 
In the boxplots, lower quantile, median, and upper quantile were represented in the boxes. Mean values were depicted in dots. Outliers were removed to make the plot straightforward. The number codes for the vertebrate species are: 1, chimp; 2, orangutan; 3, macaque; 4, horse; 5, dog; 6, cow; 7, guinea pig; 8, mouse; 9, rat; 10, opossum; 11, platypus; and 12, chicken.\nThe percentage of shared genes of Ka, Ks and Ka/Ks based on GY compared with other seven methods in terms of cut-off (A, B), method (C, D), and species (E, F). Outliers were removed to make the plots straightforward. The number codes for the species are the same as what in Figure 1.\nThe methods used in this study cover a wide range of mutation models with different complexities. NG gives equal weight to every sequence variation path [24] and LWL divides the mutation sites into three categories—non-degenerate, two-fold, and four-fold sites—and assigns fixed weights to synonymous and nonsynonymous sites for the two-fold degenerate sites [25]. LPB adopts a flexible ratio of transitional to transversional substitutions to handle the two-fold sites [26,27]. MLWL or MLPB are improved versions of their parental methods with specific consideration on the arginine codons (an exceptional case from the previous method) [28]. In particular, MLWL also incorporates an independent parameter, the ratio of transitional to transversional substitution rates, into the calculation [28]. Both YN and GY capture the features of codon usage and transition/transversion rates, but they are approximate and maximum likelihood methods, respectively [29,30]. MYN accounts for another important evolutionary characteristic—differences in transitional substitution within purines and pyrimidines [31]. Although these methods model and compute sequence variations in different ways, the Ka values that they calculate appeared to be more consistent than their Ks values or Ka/Ks. We proposed the following reasons (which are not comprehensive): first, real data from large data sets are usually from a broader range of species than computer simulations in the training sets for methodology development, so deviations in Ks values may draw more attentions in discussions. Second, the parameter-rich approaches—such as considering unequal codon usage and unequal transition/transversion rates—may lead to opposite effects on substitution rates when sequence divergence falls out of the \"sweet ranges\" [25,30,32]. Third, when examining closely related species, such primates, one will find that most Ka/Ks values are smaller than 1 and that Ka values are smaller than Ks values under most conditions. For a very limited number of nonsynonymous substitutions, when evolutionary distance is relatively short between species, models that increase complexity, such as those for correcting multiple hits, may not lead to stable estimations [24,32]. Furthermore, when incorporating the shape parameter of gamma distribution into the commonly approximate Ka/Ks methods, we found previously that Ks is more sensitive to changes in the shape parameter under the condition Ka < Ks [23]. Together, there are stronger influences on Ks than on Ka in two cases: when Ka < Ks and when complexity increases in mutation models. Fourth, it has been suggested that Ks estimation does not work well for comparing extremes, such as closely and distantly related species [33,34]. 
Occasionally, certain larger Ka/Ks values, greater than 1, are identified, as was done in a comparative study between human and chimpanzee genes, perhaps due to a very small Ks [34].\nWe also wondered what would happen when Ka becomes saturated as the divergence of the paired sequences increases. Looking at human vs. chicken, we found that the median Ka exceeded 0.2 and that the maximal Ka was as high as 0.6 after the outliers were eliminated (Additional file 1: Figure S2). This result suggested that their Ka values have not approached saturation yet. In addition, we chose the GY method to compute Ka as an estimator of evolutionary rates, since counting methods usually yield more out-of-range values than maximum likelihood methods (data not shown).\n[SUBTITLE] Function characterization of fast-evolving and slow-evolving genes [SUBSECTION] To learn about the functions of fast-evolving and slow-evolving genes in each species and lineage, we used custom-designed scripts to assess the enrichment of molecular functions (MF), biological processes (BP), and signal/metabolic pathways (Table 1). We noticed that the number of enriched functions related to slow-evolving genes was 2.53 times greater than those related to fast-evolving genes. Fast-evolving genes also had more lineage-specific functions than slow-evolving genes.\nSelected common functional categories of fast-evolving genes and/or slow-evolving genes among mammalian genomes and lineages.\nNote: The asterisks depict function classification of genes in species and lineages, which are significantly enriched based on Fisher's Exact Test after multiple corrections. The species are coded in numbers as what listed in Figure 1.\nWe found that fast-evolving genes were enriched in immunity-related functions (Table 1), which included genes present in NK, T, and B cells. The genes in NK cells were related to innate immunity (non-specific), and genes in T and B cells were associated with acquired immunity (specific) [35]. Other enriched immunity-related categories of fast-evolving genes included immunoglobulin, cytokine, chemokine, and interleukin. Fast-evolving genes were also enriched in signaling pathways, such as receptors and ligands. Finally, there were a significant number of fast-evolving genes classified as having unknown functions—unclassified biological processes and unclassified molecular functions. It is not surprising that fast-evolving genes may quickly diminish their homology to known proteins and are associated with dietary adaptation, language, appearance, behavior or upright-walking [36]. In the enriched functions of slow-evolving genes, we found a number of important house-keeping functional classes, including transcription, mRNA processing/splicing, translation, protein modification, metabolism, protein traffic, cell cycle, development and endocytosis (Table 1). As a result, fast-evolving and slow-evolving genes have significantly different functions in mammals.\nAnother point of interest is that we identified two immunity-related function categories, T cell and B cell activation, in the enriched functions of slow-evolving genes (Table 1). We also discovered that immunity-related fast-evolving genes were mostly receptors, ligands, cytokines, and CD (cluster of differentiation) molecules, and that slow-evolving immunity-related genes were usually kinases or adaptor proteins. 
Taking the human-rat comparison as an example, the receptors included MS4A2, FCER1G, FCGRT, KLRG2, IL1RN, TNFRSF1A, TNFRSF25, IFNGR1, IL2RA, TNFRSF4 and TNFRSF8; the cytokines were IL12A and IL1F9; and the ligands were CCL27 and ICOSLG. All of these are highly conserved, functionally important, and involved in complex immunity-related pathways. Cytokines are also involved in the transfer of information between cells, the regulation of cell physiological processes, and the strengthening of immune-competence [37]. CD proteins, generated during the differentiation of lymphocytes, are a class of cell surface molecules that are recognized by specific antibodies on the surfaces of lymphocytes [38]. Adaptor proteins and kinases play significant roles in signal transduction in cell immune systems, mediate specific interactions between proteins, and activate phosphorylation of the target proteins to functionally modify protein structure and activity [39,40]. In summary, receptors, ligands, cytokines, and CDs are likely to evolve faster than kinases and adaptor proteins, although they all function in the acquired immune system (B cell and T cell immunity). These observations suggest that: (1) Genes in the upstream of the immune-related pathways tend to evolve faster than those in the downstream. (2) Immunity-specific genes are likely to evolve faster than multifunctional house-keeping genes, which also perform fundamental functions in non-immune pathways. (3) Genes encoding for proteins that participate in extracellular communion or the reorganization of external pathogens seem to evolve faster than those which encode proteins that play roles in signal transduction and effector activation within single cells [41]. Similar observations have been reported about the evolution of Drosophila's innate immune system [42].\nIn addition, we discovered a few enriched functions that were related to neuro-degenerative diseases or nervous system functionality (Table 1). These slow-evolving genes play roles in progressive neuro-degenerative genetic diseases [43], neural-tube defects [44], proliferative disorders in the central nervous system [45], progressions of brain cancers [46,47], and electrical movement within synapses in the brain [48]. These results are consistent with a previous observation that brain-specific genes tend to have relatively low evolutionary rates in mammals [49]. Brain-specific genes may be expressed in multiple distinct neuronal cell types and in a way resemble house-keeping genes in terms of shared cell types.\nTo learn about the functions of fast-evolving and slow-evolving genes in each species and lineage, we used custom-designed scripts to assess the enrichment of molecular functions (MF), biological processes (BP), and signal/metabolic pathways (Table 1). We noticed that the number of enriched functions related to slow-evolving genes was 2.53 times greater than those related to fast-evolving genes. Fast-evolving genes also had more lineage-specific functions than slow-evolving genes.\nSelected common functional categories of fast-evolving genes and/or slow-evolving genes among mammalian genomes and lineages.\nNote: The asterisks depict function classification of genes in species and lineages, which are significantly enriched based on Fisher's Exact Test after multiple corrections. The species are coded in numbers as what listed in Figure 1.\nWe found that fast-evolving genes were enriched in immunity-related functions (Table 1), which included genes present in NK, T, and B cells. 
The genes in NK cells were related to innate immunity (non-specific), and genes in T and B cells were associated with acquired immunity (specific) [35]. Other enriched immunity-related categories of fast-evolving genes included immunoglobulin, cytokine, chemokine, and interleukin. Fast-evolving genes were also enriched in signaling pathways, such as receptors and ligands. Finally, there were a significant number of fast-evolving genes classified as having unknown functions—unclassified biological processes and unclassified molecular functions. It is not surprising that fast-evolving genes may quickly diminish their homology to known proteins and are associated with dietary adaptation, language, appearance, behavior or upright-walking [36]. In the enriched functions of slow-evolving genes, we found a number of important house-keeping functional classes, including transcription, mRNA processing/splicing, translation, protein modification, metabolism, protein traffic, cell cycle, development and endocytosis (Table 1). As a result, fast-evolving and slow-evolving genes have significantly different functions in mammals.\nAnother point of interest is that we identified two immunity-related function categories, T cell and B cell activation, in the enriched functions of slow-evolving genes (Table 1). We also discovered that immunity-related fast-evolving genes were mostly receptors, ligands, cytokines, and CD (cluster of differentiation) molecules, and that slow-evolving immunity-related genes were usually kinases or adaptor proteins. Taking the human-rat comparison as an example, the receptors included MS4A2, FCER1G, FCGRT, KLRG2, IL1RN, TNFRSF1A, TNFRSF25, IFNGR1, IL2RA, TNFRSF4 and TNFRSF8; the cytokines were IL12A and IL1F9; and the ligands were CCL27 and ICOSLG. All of these are highly conserved, functionally important, and involved in complex immunity-related pathways. Cytokines are also involved in the transfer of information between cells, the regulation of cell physiological processes, and the strengthening of immune-competence [37]. CD proteins, generated during the differentiation of lymphocytes, are a class of cell surface molecules that are recognized by specific antibodies on the surfaces of lymphocytes [38]. Adaptor proteins and kinases play significant roles in signal transduction in cell immune systems, mediate specific interactions between proteins, and activate phosphorylation of the target proteins to functionally modify protein structure and activity [39,40]. In summary, receptors, ligands, cytokines, and CDs are likely to evolve faster than kinases and adaptor proteins, although they all function in the acquired immune system (B cell and T cell immunity). These observations suggest that: (1) Genes in the upstream of the immune-related pathways tend to evolve faster than those in the downstream. (2) Immunity-specific genes are likely to evolve faster than multifunctional house-keeping genes, which also perform fundamental functions in non-immune pathways. (3) Genes encoding for proteins that participate in extracellular communion or the reorganization of external pathogens seem to evolve faster than those which encode proteins that play roles in signal transduction and effector activation within single cells [41]. Similar observations have been reported about the evolution of Drosophila's innate immune system [42].\nIn addition, we discovered a few enriched functions that were related to neuro-degenerative diseases or nervous system functionality (Table 1). 
These slow-evolving genes play roles in progressive neuro-degenerative genetic diseases [43], neural-tube defects [44], proliferative disorders in the central nervous system [45], progressions of brain cancers [46,47], and electrical movement within synapses in the brain [48]. These results are consistent with a previous observation that brain-specific genes tend to have relatively low evolutionary rates in mammals [49]. Brain-specific genes may be expressed in multiple distinct neuronal cell types and in a way resemble house-keeping genes in terms of shared cell types.\n[SUBTITLE] Comparisons of fast-evolving and slow-evolving genes and their functions among mammalian lineages [SUBSECTION] We used a network to display the percentages of shared genes among fast-evolving and slow-evolving genes between pairs of mammals (Figure 3). First, two primitive mammals (opossum and platypus) and one bird (chicken) are clearly distinct from other mammals. Second, primates are also closely clustered with one another. Third, mouse serves as an excellent hub that links cow, horse, guinea pig, rat, and opossum. Fourth, large mammals are well connected when all elements are considered. Fifth, some connections may be coincidental, for example, fast-evolving genes shared by dog, horse, and macaque as well as slow-evolving genes shared by cow, macaque, orangutan, and chimp.\nA network of fast-evolving and slow-evolving genes among twelve mammalian species. For any two given species, we calculated the shared number of fast-evolving or slow-evolving genes and subsequently divided them based on the total shared number of genes to normalize the correlation coefficients. We connected the species based on the largest two correlation coefficients for each pair. Red and green lines stand for fast-evolving and slow-evolving genes, respectively, and the yellow lines are the sum of both.\nWe then investigated the exclusive functions of fast-evolving and slow-evolving genes in three mammalian lineages: primates (chimp, orangutan, and macaque), large mammals (horse, dog, and cow), and rodents (guinea pig, mouse, and rat; Table 2, 3, and 4). Although primates are also large mammals, we considered them to be a separate category in order to further stratify our pool. First, we found specific functions that were unique to the three mammalian subgroups in fast-evolving genes: sensory-related (chemosensory perception, olfaction and sensory perception) and cancer related (oncogenesis) in primates (Table 2), immune related (interleukin receptor) in rodents (Table 4), and reproduction related (fertilization) and steroid hormone related (steroid hormone metabolism; Table 3) in large mammals. The first two observations we made are consistent with a previous study [7], and the last one is novel, which may be related to domestication for fast-growth. Second, we also found some lineage-specific functions that involved slow-evolving genes. For instance, we categorized calcium binding proteins, calmodulin related proteins and mitochondrial transport in primates, as well as G protein signalling, enkephalin release, actin binding cytoskeletal proteins, the microtubule family, and exocytosis in rodents. 
Three critical hormones (alpha adrenergic receptor signalling, oxytocin receptor mediated signalling, and thyrotropin-releasing hormone receptor signalling pathways) are specific to large mammals.\nFunctional enrichment of fast-evolving and slow-evolving genes in primates.\nNote: The asterisks depict functional classifications of genes that are significantly enriched based on Fisher's Exact Test after multiple corrections among primates.\nFunctional enrichment of fast-evolving and slow-evolving genes in large mammals.\nNote: The asterisks depict functional classifications of genes that are significantly enriched based on Fisher's Exact Test after multiple corrections among large mammals.\nFunctional enrichment of fast-evolving and slow-evolving genes in rodents.\nNote: The asterisks depict functional classifications of genes that are significantly enriched based on Fisher's Exact Test after multiple corrections among rodents.\nWe used a network to display the percentages of shared genes among fast-evolving and slow-evolving genes between pairs of mammals (Figure 3). First, two primitive mammals (opossum and platypus) and one bird (chicken) are clearly distinct from other mammals. Second, primates are also closely clustered with one another. Third, mouse serves as an excellent hub that links cow, horse, guinea pig, rat, and opossum. Fourth, large mammals are well connected when all elements are considered. Fifth, some connections may be coincidental, for example, fast-evolving genes shared by dog, horse, and macaque as well as slow-evolving genes shared by cow, macaque, orangutan, and chimp.\nA network of fast-evolving and slow-evolving genes among twelve mammalian species. For any two given species, we calculated the shared number of fast-evolving or slow-evolving genes and subsequently divided them based on the total shared number of genes to normalize the correlation coefficients. We connected the species based on the largest two correlation coefficients for each pair. Red and green lines stand for fast-evolving and slow-evolving genes, respectively, and the yellow lines are the sum of both.\nWe then investigated the exclusive functions of fast-evolving and slow-evolving genes in three mammalian lineages: primates (chimp, orangutan, and macaque), large mammals (horse, dog, and cow), and rodents (guinea pig, mouse, and rat; Table 2, 3, and 4). Although primates are also large mammals, we considered them to be a separate category in order to further stratify our pool. First, we found specific functions that were unique to the three mammalian subgroups in fast-evolving genes: sensory-related (chemosensory perception, olfaction and sensory perception) and cancer related (oncogenesis) in primates (Table 2), immune related (interleukin receptor) in rodents (Table 4), and reproduction related (fertilization) and steroid hormone related (steroid hormone metabolism; Table 3) in large mammals. The first two observations we made are consistent with a previous study [7], and the last one is novel, which may be related to domestication for fast-growth. Second, we also found some lineage-specific functions that involved slow-evolving genes. For instance, we categorized calcium binding proteins, calmodulin related proteins and mitochondrial transport in primates, as well as G protein signalling, enkephalin release, actin binding cytoskeletal proteins, the microtubule family, and exocytosis in rodents. 
Comparisons to other studies

There have been three interesting investigations that used the likelihood ratio test (LRT) to compare two models and evaluated the use of Ka/Ks for identifying positively-selected genes (PSGs) and their enriched functions among six species [6-8]. Our study is unique in that we analyzed twelve species and considered more than one thousand fast-evolving genes; the numbers of PSGs in previous studies were at least an order of magnitude smaller, around tens to hundreds. Although our definition of fast-evolving genes is not fully identical to those of previous studies, our findings on immune-related functions in most species are consistent with previous reports [6,7]. Two other categories shared among these studies are chemosensory perception, olfaction, and sensory perception among the human-vs-chimpanzee-specific functions (Table 2) and fertilization among the human-vs-cow-specific functions. This indicates that a method based on simple comparison can yield conclusions similar to those of more complicated, parameter-rich methods.

Lopez-Bigas et al. conducted a comprehensive study of functional protein sequence divergence between human and other organisms [50]. They focused on variation at the protein level across a wide range of evolutionary distances, whereas we have focused on variation among mammals at the DNA level [50]. Natural selection acts at three essential levels: domains, catalytic centers, and the DNA and protein level that consists of sequences and protein structures composed of motifs [32]. Since nucleotide sequences are more variable than protein sequences and structures, DNA variation is usually used to study short-term evolution, and the latter two are used to study long-term evolution. In our study, we found that the major classified functions were regulation (e.g. receptors) and response to the surroundings (e.g. immunoglobulin receptor family members) among fast-evolving genes, and metabolism (e.g. protein metabolism and modification), transport (e.g. general vesicle transport), and cell structure (e.g. protein biosynthesis) among slow-evolving genes [50]. We also found developmental processes to be a major functional category in mammals based on the slow-evolving genes when using chicken as a reference. This finding agrees with a previous conclusion that development-related genes are highly conserved only among mammals [50]. In addition, at the DNA level, B-cell-mediated and antibody-mediated immunity and B-cell activation were identified only in mammals and not in chicken. This may reflect differences in B-cell-associated hormonal responses between the bursa of Fabricius unique to birds and the bone marrow of mammals [51].
The relationship between evolutionary rate and expression level

Our study focused on general expression profiles based on EST data from 18 human tissues (Figure 4). The expression levels of slow-evolving genes appeared to be significantly higher than those of fast-evolving genes (P-value < 2.2e-16, Wilcoxon rank sum test). We also observed that the expression levels of intermediately-evolving genes were significantly higher than those of fast-evolving genes in most species, except for orangutan and macaque. In addition, we found that the mean of gene expression was always greater than the median, suggesting that most genes are expressed at very low levels and only a small fraction of genes are expressed at high levels [52]. These observations suggest an inverse relationship between gene evolutionary rates and gene expression levels in mammals, similar to a previous result reported for the yeast genome [53,54]. House-keeping [55,56], highly-expressed, and old genes [57,58] all tend to evolve slowly [59]; these genes are functionally well-connected and resistant to sequence changes (negatively selected). Tissue-specific [55,56], lowly-expressed, and new genes [57,58] tend to evolve quickly; they are often under relaxed selection and evolving toward novel functions. For example, certain immune-related genes always evolve faster to cope with new or multiple pathogen attacks.

Figure 4. Expression level correlations and evolvability. S, M, and F stand for slow-evolving, intermediately-evolving, and fast-evolving genes, respectively. Expression levels were calibrated as the number of transcripts per million (TPM). Outliers were removed to make the plots straightforward.
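The group comparison reported above can be reproduced in outline with a standard rank-sum test; the TPM values below are simulated placeholders, not the EST-derived profiles used in the study.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)

# Simulated TPM values for slow- and fast-evolving gene classes.
slow_tpm = rng.lognormal(mean=3.0, sigma=1.0, size=500)
fast_tpm = rng.lognormal(mean=2.0, sigma=1.0, size=500)

stat, p = ranksums(slow_tpm, fast_tpm)
print(f"Wilcoxon rank-sum: statistic = {stat:.2f}, p = {p:.3g}")

# The text also notes that the mean exceeds the median, i.e. the expression
# distribution is skewed toward a few highly expressed genes.
print("mean:", round(float(slow_tpm.mean()), 1), "median:", round(float(np.median(slow_tpm)), 1))
```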
The shared fast-evolving and slow-evolving genes among mammals

To understand the functional relevance common to the fast-evolving or slow-evolving genes among different subgroups of mammals, we categorized the shared genes in the lineages of primates, large mammals, and rodents. There were 185, 609, and 695 fast-evolving genes in primates, large mammals, and rodents, respectively, and 355, 600, and 730 slow-evolving genes. However, we only found 15 fast-evolving and 72 slow-evolving genes that were shared by all nine species. This result suggests that fast-evolving and slow-evolving genes tend to be clade-, lineage-, or species-specific. Nevertheless, a limited number of shared genes may still lead to a significant number of shared functions (Table 1).
Although we have only compared human genes (as a reference) with those of other mammals, rather than performing all-against-all pairwise comparisons, our conclusions can still be extended to a broader spectrum of mammals, or even other vertebrates. To validate our analyses, we selected two representative proteins, ISG20 and RAB30 (based on orthologs from 20 and 22 mammals, respectively), from the 87 fast- or slow-evolving genes shared across the nine species, to illustrate their degrees of variation and conservation (Figure 5). The fast evolution of ISG20 (ranked 25, 71, 94, 69, 95, 128, 321, 58, 82, 280, and 423 in chimpanzee, orangutan, macaque, horse, dog, cow, guinea pig, mouse, rat, opossum, and platypus, respectively) and the slow evolution of RAB30 (ranked 1, 418, 334, 117, 105, 127, 48, 49, 33, 132, and 446 in the same species) can be clearly seen from their degrees of residue variability [60]. These two case studies support the reliability of our method.

Figure 5. Three-dimensional conservation grading of ISG20 (A) and RAB30 (B). The two 3-D backbone structures of ISG20 and RAB30 were retrieved from PDB entries 1WLJ and 2EW1, respectively. (A) The putative conservation grading was based on the alignment of twenty mammalian protein sequences from: Human (Homo sapiens), Chimpanzee (Pan troglodytes), Orangutan (Pongo pygmaeus), Gorilla (Gorilla gorilla), Macaque (Macaca mulatta), Cow (Bos taurus), Dog (Canis familiaris), Horse (Equus caballus), Cat (Felis catus), Guinea Pig (Cavia porcellus), Mouse (Mus musculus), Rat (Rattus norvegicus), Megabat (Pteropus vampyrus), Microbat (Myotis lucifugus), Pika (Ochotona princeps), Hyrax (Procavia capensis), Tree Shrew (Tupaia belangeri), Dolphin (Tursiops truncatus), Opossum (Monodelphis domestica), and Platypus (Ornithorhynchus anatinus). (B) These conservation grades were based on the alignment of twenty-two mammalian protein sequences from Human (Homo sapiens), Cow (Bos taurus), Dog (Canis familiaris), Guinea Pig (Cavia porcellus), Horse (Equus caballus), Cat (Felis catus), Elephant (Loxodonta africana), Macaque (Macaca mulatta), Mouse Lemur (Microcebus murinus), Opossum (Monodelphis domestica), Mouse (Mus musculus), Microbat (Myotis lucifugus), Pika (Ochotona princeps), Platypus (Ornithorhynchus anatinus), Rabbit (Oryctolagus cuniculus), Chimpanzee (Pan troglodytes), Orangutan (Pongo pygmaeus), Hyrax (Procavia capensis), Megabat (Pteropus vampyrus), Rat (Rattus norvegicus), Tree Shrew (Tupaia belangeri), and Dolphin (Tursiops truncatus). The color bars from left to right measure changes from variable to conserved residues. Conservation grading in yellow indicates residues whose conservation degrees were not supported by sufficient data.
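The per-species ranks quoted for ISG20 and RAB30 can be recomputed along the following lines; the Ka tables are invented, and the assumption that fast-evolving genes are ranked from the fastest end while slow-evolving genes are ranked from the slowest end is ours, not stated in the text.

```python
# Hypothetical per-species Ka tables (human-vs-species comparisons).
ka_tables = {
    "chimp": {"ISG20": 0.021, "RAB30": 0.000, "GENE_X": 0.010, "GENE_Y": 0.030},
    "mouse": {"ISG20": 0.300, "RAB30": 0.050, "GENE_X": 0.100, "GENE_Y": 0.200},
}

def rank_of(gene, ka, fastest_first=True):
    """1-based rank of `gene` when a species' orthologs are sorted by Ka."""
    order = sorted(ka, key=ka.get, reverse=fastest_first)
    return order.index(gene) + 1

for species, table in ka_tables.items():
    print(species,
          "ISG20 rank:", rank_of("ISG20", table, fastest_first=True),
          "RAB30 rank:", rank_of("RAB30", table, fastest_first=False))
```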
To examine the divergence between humans and other species, we calculated identities by averaging over all orthologs in each species: chimpanzee - 99.23%; orangutan - 98.00%; macaque - 96.09%; horse - 89.44%; dog - 87.93%; cow - 87.36%; guinea pig - 85.91%; mouse - 84.54%; rat - 83.92%; opossum - 77.64%; platypus - 74.37%; and chicken - 72.87%. The data gave rise to a bimodal distribution of overall identities, which distinctly separates the highly identical primate sequences from the rest (Additional file 1: Figure S1A). For quality assessment, we also evaluated the alignment quality of all orthologs.

First, we found that the number of Ns (uncertain nucleotides) in all coding sequences (CDS) fell within reasonable ranges (mean ± standard deviation): (1) the number of Ns divided by the number of nucleotides was 0.00002740 ± 0.00059475; and (2) the proportion of orthologs containing Ns was 1.5084%. Second, we evaluated parameters related to the quality of the sequence alignments, such as percentage identity and percentage gap (Additional file 1: Figure S1). All of these indicated low mismatch rates and a limited number of arbitrarily aligned positions.
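These per-ortholog quality metrics (percentage identity, percentage gap, and N content) can be computed as in the toy example below; the aligned pair is made up and does not come from the Ensembl data.

```python
# Toy aligned coding sequences (human vs. another species).
human_cds = "ATGGCTA-TGCCGAAN"
other_cds = "ATGGCAAATG-CGAAA"

pairs = [(a, b) for a, b in zip(human_cds, other_cds) if a != "-" and b != "-"]
identity = 100 * sum(a == b for a, b in pairs) / len(pairs)
gap_pct  = 100 * (human_cds.count("-") + other_cds.count("-")) / (2 * len(human_cds))
n_rate   = (human_cds + other_cds).count("N") / (len(human_cds) + len(other_cds))

print(f"identity: {identity:.2f}%  gaps: {gap_pct:.2f}%  N rate: {n_rate:.5f}")
```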
Ka and Ks are the nonsynonymous (amino-acid-changing) and synonymous (silent) substitution rates, respectively, which are governed by functionally relevant sequence contexts, such as the encoded amino acids and involvement in exon splicing [21]. The ratio of the two parameters, Ka/Ks (a measure of selection strength), is defined as the degree of evolutionary change normalized by the random background mutation rate. We began by scrutinizing the consistency of Ka and Ks estimates from eight commonly used methods. We defined two divergence indexes: (i) the standard deviation normalized by the mean, where the eight values from all methods are considered to be a group, and (ii) the range normalized by the mean, where the range is the absolute difference between the estimated maximal and minimal values. To keep our comparison unbiased, we eliminated gene pairs whenever an NA (not applicable or infinite) value occurred in Ka or Ks. We observed that the divergence indexes of Ka were significantly smaller than those of Ks in all examined species (P-value < 2.2e-16, Wilcoxon rank sum test) (Figure 1). The results for the second index were very similar to the first (data not shown). We also investigated the performance of these methods in calculating Ka, Ks, and Ka/Ks. First, we considered six cut-off points for grouping and defining fast-evolving and slow-evolving genes: 5%, 10%, 20%, 30%, 40%, and 50% of the total (see Methods). Second, we applied the eight commonly used methods to calculate the parameters for the twelve species at each cut-off value. Lastly, we compared the percentage of shared genes (the number of genes shared between different methods, divided by the total number of genes within a chosen cut-off) calculated by GY and each of the other methods (Figure 2). We observed that Ka had the highest percentage of shared genes, followed by Ka/Ks; Ks always had the lowest. We made similar observations using our own gamma-series methods [22,23] (data not shown). It was quite clear that Ka calculations gave the most consistent results when sorting protein-coding genes by their evolutionary rates. As the cut-off values increased from 5% to 50%, the percentages of shared genes also increased, reflecting the fact that more shared genes are obtained with less stringent cut-offs (Figure 2A and 2B). We also found a rising trend as the model complexity increased in the order NG, LWL, MLWL, LPB, MLPB, YN, and MYN (Figure 2C and 2D).
We examined the impact of divergence distance on gene sorting using the three parameters, and found that the percentage of shared genes referenced to Ka was consistently high across all twelve species, while those referenced to Ka/Ks and Ks decreased with increasing divergence time between human and the other studied species (Figure 2E and 2F). In addition, the percentage of shared genes for Ka/Ks remained intermediate between those for Ka and Ks. In particular, the percentages of shared genes determined by Ka/Ks and Ks varied more than those determined by Ka when defining slow-evolving genes (Figure 2B, D, and F). Overall, the various methods gave consistent results when Ka was used as the measure for sorting genes.

Figure 1. Divergence index (standard deviation/mean) of Ka and Ks determined with the eight different methods for the twelve vertebrate species. In the boxplots, the lower quartile, median, and upper quartile are represented by the boxes, and mean values are depicted as dots. Outliers were removed to make the plot straightforward. The number codes for the vertebrate species are: 1, chimp; 2, orangutan; 3, macaque; 4, horse; 5, dog; 6, cow; 7, guinea pig; 8, mouse; 9, rat; 10, opossum; 11, platypus; and 12, chicken.

Figure 2. The percentage of shared genes for Ka, Ks, and Ka/Ks based on GY compared with the other seven methods, in terms of cut-off (A, B), method (C, D), and species (E, F). Outliers were removed to make the plots straightforward. The number codes for the species are the same as in Figure 1.
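The two divergence indexes defined above reduce to a few lines of code; the eight per-method Ka estimates used here are illustrative numbers only.

```python
import statistics

# Eight Ka estimates for one gene, one per method (NG, LWL, MLWL, LPB, MLPB, YN, MYN, GY).
ka_by_method = [0.112, 0.118, 0.115, 0.110, 0.116, 0.120, 0.119, 0.114]

def divergence_indexes(values):
    """Return (std/mean, range/mean) for a group of per-method estimates."""
    mean = statistics.mean(values)
    index_i = statistics.stdev(values) / mean        # index (i): standard deviation / mean
    index_ii = (max(values) - min(values)) / mean    # index (ii): range / mean
    return index_i, index_ii

print(divergence_indexes(ka_by_method))
```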
The methods used in this study cover a wide range of mutation models with different complexities. NG gives equal weight to every sequence variation path [24], and LWL divides the mutation sites into three categories (non-degenerate, two-fold, and four-fold sites) and assigns fixed weights to synonymous and nonsynonymous sites for the two-fold degenerate sites [25]. LPB adopts a flexible ratio of transitional to transversional substitutions to handle the two-fold sites [26,27]. MLWL and MLPB are improved versions of their parental methods with specific consideration of the arginine codons (an exceptional case in the earlier methods) [28]. In particular, MLWL also incorporates an independent parameter, the ratio of transitional to transversional substitution rates, into the calculation [28]. Both YN and GY capture the features of codon usage and transition/transversion rates, but they are approximate and maximum-likelihood methods, respectively [29,30]. MYN accounts for another important evolutionary characteristic, the difference in transitional substitution rates within purines and within pyrimidines [31]. Although these methods model and compute sequence variation in different ways, the Ka values they calculate appeared to be more consistent than their Ks values or Ka/Ks. We propose the following, non-exhaustive, explanations. First, real data from large data sets usually cover a broader range of species than the computer simulations used as training sets during method development, so deviations in Ks values may draw more attention in discussions. Second, parameter-rich approaches, such as those considering unequal codon usage and unequal transition/transversion rates, may have opposite effects on substitution rates when sequence divergence falls outside their "sweet ranges" [25,30,32]. Third, when examining closely related species, such as primates, most Ka/Ks values are smaller than 1 and Ka values are smaller than Ks values under most conditions. With a very limited number of nonsynonymous substitutions and a relatively short evolutionary distance between species, models of increasing complexity, such as those correcting for multiple hits, may not yield stable estimates [24,32]. Furthermore, when incorporating the shape parameter of the gamma distribution into the commonly used approximate Ka/Ks methods, we previously found that Ks is more sensitive than Ka to changes in the shape parameter when Ka < Ks [23]. Together, there are stronger influences on Ks than on Ka in two cases: when Ka < Ks and when the complexity of the mutation model increases. Fourth, it has been suggested that Ks estimation does not work well at the extremes, that is, for very closely or very distantly related species [33,34]. Occasionally, Ka/Ks values greater than 1 are identified, as in a comparative study between human and chimpanzee genes, perhaps due to very small Ks values [34].

We also wondered what would happen if Ka became saturated as the divergence between the paired sequences increased. Looking at human vs. chicken, we found that the median Ka exceeded 0.2 and that the maximal Ka was as high as 0.6 after the outliers were eliminated (Additional file 1: Figure S2). This result suggests that these Ka values have not yet approached saturation. In addition, we chose the GY method to compute Ka as an estimator of evolutionary rate, since counting methods usually yield more out-of-range values than maximum-likelihood methods (data not shown).
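The "percentage of shared genes" used to compare GY with the other methods can be sketched as follows; the two Ka dictionaries are hypothetical stand-ins for per-gene estimates from two methods.

```python
# Hypothetical Ka estimates for the same genes from two methods.
ka_gy = {"g1": 0.01, "g2": 0.50, "g3": 0.30, "g4": 0.02, "g5": 0.40}
ka_ng = {"g1": 0.02, "g2": 0.45, "g3": 0.35, "g4": 0.01, "g5": 0.38}

def top_fraction(ka, fraction):
    """Gene IDs falling in the top `fraction` of the ranking (fastest-evolving)."""
    n = max(1, int(len(ka) * fraction))
    return set(sorted(ka, key=ka.get, reverse=True)[:n])

def shared_percentage(method_a, method_b, fraction):
    a, b = top_fraction(method_a, fraction), top_fraction(method_b, fraction)
    return 100 * len(a & b) / len(a)

# Shared genes between the GY and NG rankings at a 40% cut-off.
print(shared_percentage(ka_gy, ka_ng, fraction=0.4))
```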
To learn about the functions of fast-evolving and slow-evolving genes in each species and lineage, we used custom-designed scripts to assess the enrichment of molecular functions (MF), biological processes (BP), and signal/metabolic pathways (Table 1). We noticed that the number of enriched functions related to slow-evolving genes was 2.53 times greater than that related to fast-evolving genes. Fast-evolving genes also had more lineage-specific functions than slow-evolving genes.

Table 1. Selected common functional categories of fast-evolving genes and/or slow-evolving genes among mammalian genomes and lineages. Note: The asterisks mark functional classifications of genes in species and lineages that are significantly enriched based on Fisher's Exact Test after correction for multiple testing. The species are coded in numbers as listed in Figure 1.

We found that fast-evolving genes were enriched in immunity-related functions (Table 1), which included genes present in NK, T, and B cells. The genes in NK cells are related to innate (non-specific) immunity, and genes in T and B cells are associated with acquired (specific) immunity [35]. Other enriched immunity-related categories of fast-evolving genes included immunoglobulins, cytokines, chemokines, and interleukins. Fast-evolving genes were also enriched in signaling pathways, such as receptors and ligands. Finally, a significant number of fast-evolving genes were classified as having unknown functions (unclassified biological processes and unclassified molecular functions). It is not surprising that fast-evolving genes may quickly lose their homology to known proteins and be associated with dietary adaptation, language, appearance, behavior, or upright walking [36]. Among the enriched functions of slow-evolving genes, we found a number of important house-keeping functional classes, including transcription, mRNA processing/splicing, translation, protein modification, metabolism, protein traffic, cell cycle, development, and endocytosis (Table 1). As a result, fast-evolving and slow-evolving genes have significantly different functions in mammals.

Another point of interest is that we identified two immunity-related functional categories, T cell and B cell activation, among the enriched functions of slow-evolving genes (Table 1). We also discovered that immunity-related fast-evolving genes were mostly receptors, ligands, cytokines, and CD (cluster of differentiation) molecules, whereas slow-evolving immunity-related genes were usually kinases or adaptor proteins. Taking the human-rat comparison as an example, the receptors included MS4A2, FCER1G, FCGRT, KLRG2, IL1RN, TNFRSF1A, TNFRSF25, IFNGR1, IL2RA, TNFRSF4, and TNFRSF8; the cytokines were IL12A and IL1F9; and the ligands were CCL27 and ICOSLG. All of these are highly conserved, functionally important, and involved in complex immunity-related pathways. Cytokines are also involved in the transfer of information between cells, the regulation of cell physiological processes, and the strengthening of immune competence [37]. CD proteins, generated during the differentiation of lymphocytes, are a class of cell surface molecules that are recognized by specific antibodies on the surfaces of lymphocytes [38]. Adaptor proteins and kinases play significant roles in signal transduction in the immune system, mediate specific interactions between proteins, and activate phosphorylation of target proteins to modify protein structure and activity [39,40]. In summary, receptors, ligands, cytokines, and CD molecules are likely to evolve faster than kinases and adaptor proteins, although they all function in the acquired immune system (B cell and T cell immunity). These observations suggest that: (1) genes upstream in immune-related pathways tend to evolve faster than those downstream; (2) immunity-specific genes are likely to evolve faster than multifunctional house-keeping genes, which also perform fundamental functions in non-immune pathways; and (3) genes encoding proteins that participate in extracellular communication or the recognition of external pathogens seem to evolve faster than those encoding proteins that play roles in signal transduction and effector activation within single cells [41]. Similar observations have been reported for the evolution of Drosophila's innate immune system [42].

In addition, we discovered a few enriched functions related to neuro-degenerative diseases or nervous system functionality, as detailed above (Table 1).
Conclusions

In this study, we carried out an evolutionary analysis of human protein-coding genes that are shared among mammals. We not only demonstrated that Ka is a useful and stable indicator for studying mammalian gene evolution, but also revealed that the rate at which a gene evolves is related to its function. In particular, we found enriched immune-related functions among both fast-evolving and slow-evolving genes, whereas slow-evolving genes were significantly enriched in functions related to the central nervous system. Furthermore, we observed that slow-evolving genes tend to be expressed at higher levels. Our results provide valuable insights for the functional characterization of genes and gene classes in different mammalian lineages.

Methods

The genome data were collected from Ensembl version 53 [61] (http://www.biomart.org/; http://www.ensembl.org/): human (NCBI36), chimpanzee (CHIMP2.1), orangutan (PPYG2), macaque (MMUL1), horse (EquCab2), dog (BROADD2), cow (Btau4), guinea pig (cavPor3), mouse (NCBIM37), rat (RGSC3.4), opossum (BROADO3), platypus (OANA5), and chicken (WASHUC2). We also collected ortholog sequences for human and each of the other species, keeping only the gene pairs marked as one-to-one matches to avoid ambiguous ortholog definitions. We used ClustalW [62] to align the human amino acid sequences with those of the other species and then translated them back to their corresponding nucleotide sequences.

We estimated the nonsynonymous and synonymous substitution rates of orthologs with a number of algorithms, including NG (the methods are abbreviated with their authors' last-name initials; M stands for a modified version of the original method) [24], LWL [25], MLWL [28], LPB [26,27], MLPB [28], YN [30], MYN [31], GY [29], and the gamma-series methods [22,23] used in the KaKs Calculator 2.0 tool [63]. We adopted 10% as the cut-off value to define fast-, intermediately-, or slow-evolving genes in each lineage. We sorted genes by their Ka values from smallest to largest in each lineage, and defined the genes corresponding to the lowest, middle, and highest 10% of Ka values as slow-evolving, intermediately-evolving, and fast-evolving genes, respectively. In this procedure, we treated NA (not applicable) values as 0, because we observed that NA values are usually associated with 100% identical gene pairs, except in a few cases involving indels (insertions or deletions).
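The 10% cut-off classification described here, including the NA-to-zero convention, can be sketched as follows; the Ka values are invented for the example.

```python
import math

# Invented Ka values; NA estimates are represented as math.nan.
ka = {f"g{i}": v for i, v in enumerate(
    [0.001, 0.02, 0.08, 0.15, math.nan, 0.30, 0.45, 0.005, 0.12, 0.60])}

clean = {g: (0.0 if math.isnan(v) else v) for g, v in ka.items()}   # NA -> 0
ranked = sorted(clean, key=clean.get)                               # smallest Ka first
k = max(1, int(round(0.10 * len(ranked))))                          # 10% cut-off

slow = ranked[:k]                                # lowest 10%: slow-evolving
fast = ranked[-k:]                               # highest 10%: fast-evolving
mid_start = (len(ranked) - k) // 2
intermediate = ranked[mid_start:mid_start + k]   # middle 10%: intermediately-evolving

print("slow:", slow, "intermediate:", intermediate, "fast:", fast)
```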
We used IDConvertor [64] to convert identifiers between different gene accessions, and we used the Protein Analysis through Evolutionary Relationships (PANTHER) online system to annotate genes at three levels: biological processes, molecular functions, and pathways [65]. Enrichment analysis was performed with a combination of Fisher's exact test and the Bonferroni step-down (Holm) correction for multiple testing [66]; the cut-off in the functional enrichment test was 0.1. The network based on fast-evolving and slow-evolving genes was drawn with the software Cytoscape [67]. Conservation-grade illustrations were created with the ConSurf server [68] after submitting protein alignments built with ClustalX [62]. The three-dimensional structures of the corresponding proteins were retrieved from the Protein Data Bank (PDB) [69]. For the gene expression analysis, we used expression profiles of Expressed Sequence Tags (ESTs) pooled from 18 tissues, as described previously in our published work [70].
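A minimal version of this enrichment test, with the 0.1 cut-off and Holm correction applied to illustrative 2x2 tables, could look like the following.

```python
from scipy.stats import fisher_exact

# Illustrative 2x2 tables per category:
# rows = (in category, not in category); columns = (fast-evolving set, background).
tables = {
    "immunity":        [[30, 970], [10, 1990]],
    "olfaction":       [[12, 988], [8, 1992]],
    "vesicle traffic": [[5, 995],  [11, 1989]],
}

pvals = {cat: fisher_exact(t, alternative="greater")[1] for cat, t in tables.items()}

# Bonferroni step-down (Holm) correction with the 0.1 cut-off from the text.
alpha, m = 0.1, len(pvals)
significant = []
for rank, (cat, p) in enumerate(sorted(pvals.items(), key=lambda kv: kv[1])):
    if p <= alpha / (m - rank):
        significant.append((cat, round(p, 4)))
    else:
        break

print(significant)
```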
We also changed the description \"an estimator of selection\" into \"Ka as an estimator of evolutionary rate\".\nThe authors then go on to compare the results obtained for the different mammals, and they infer lineage-specific accelerations based solely on the pairwise \"human-other species\" comparisons. This does not make sense. The authors should be aware that there are methods for the estimation of branch-specific Ka, Ks and Ka/Ks ratios that use a multiple-species sequence alignment and that take into account the underlying phylogeny (see for example PAML — perhaps the most commonly used — Z. Yang, Mol. Biol. Evol., 2007).\nWe added a few discussion points about the reason why Ka values from multiple methods yield more consistent results than Ks values. We also changed the description \"an estimator of selection\" into \"Ka as an estimator of evolutionary rate\".\nThe authors then go on to compare the results obtained for the different mammals, and they infer lineage-specific accelerations based solely on the pairwise \"human-other species\" comparisons. This does not make sense. The authors should be aware that there are methods for the estimation of branch-specific Ka, Ks and Ka/Ks ratios that use a multiple-species sequence alignment and that take into account the underlying phylogeny (see for example PAML — perhaps the most commonly used — Z. Yang, Mol. Biol. Evol., 2007).\n[SUBTITLE] Authors' response [SUBSECTION] We are fully aware that the Likelihood Ratio Test (LRT) methods [71,72]are applicable in inferring positive selections on genes in specific braches (or clades) and researchers use these methods to different species including mammals and others [6-8,73]. One of the objectives of our study is to compare our method based on simple pairwise comparison between human and other mammals with the LRT methods. We found that our method is simply capable of capturing the key conclusions from other methods and can be used to discover evolutionary features of lineage-specific genes (such as lineage-specific functions of large mammals). Furthermore, pairwise alignments utilize more sequence information than multiple sequence alignments do, especially when closely related (for instance, a few percent differences) and less-than-perfect sequences are aligned. The LRT methods usually require the construction of phylogenies and compare two models, and they are usually parameter-rich, especially when a large number of sequences from multiple species are examined. After all, we are not here to challenge the power of the LRT methods, but to suggest a simple and efficient method as an alternative.\nFinally, the manuscript is very poorly written, to the point that the meaning of the phrases is often incomprehensible. This is evident even for the title: \"A method for defining evolving protein-coding genes\" — evolving as opposed to what?\nWe are fully aware that the Likelihood Ratio Test (LRT) methods [71,72]are applicable in inferring positive selections on genes in specific braches (or clades) and researchers use these methods to different species including mammals and others [6-8,73]. One of the objectives of our study is to compare our method based on simple pairwise comparison between human and other mammals with the LRT methods. We found that our method is simply capable of capturing the key conclusions from other methods and can be used to discover evolutionary features of lineage-specific genes (such as lineage-specific functions of large mammals). 
Furthermore, pairwise alignments utilize more sequence information than multiple sequence alignments do, especially when closely related (for instance, a few percent differences) and less-than-perfect sequences are aligned. The LRT methods usually require the construction of phylogenies and compare two models, and they are usually parameter-rich, especially when a large number of sequences from multiple species are examined. After all, we are not here to challenge the power of the LRT methods, but to suggest a simple and efficient method as an alternative.\nFinally, the manuscript is very poorly written, to the point that the meaning of the phrases is often incomprehensible. This is evident even for the title: \"A method for defining evolving protein-coding genes\" — evolving as opposed to what?\n[SUBTITLE] Authors' response [SUBSECTION] We revised the manuscript again for clarity and accuracy. We also changed the title into \"A method for defining fast-evolving and slow-evolving protein-coding genes\".\nWe revised the manuscript again for clarity and accuracy. We also changed the title into \"A method for defining fast-evolving and slow-evolving protein-coding genes\".\n[SUBTITLE] Comments from the second round of reviewing [SUBSECTION] I am not in the least convinced by the revision of the manuscript. The modifications to the original manuscript are only superficial, and the content remains unworthy of publication. None of the results are new. The analysis of Ka rates is now so well established, that it is generally done in practical courses, for a bachelor's degree, and cannot by itself constitute the subject of a publication. Moreover, the methodology and the interpretation of the results are flawed. The authors continue to perform pairwise comparisons between human and each of the other species, and yet they discuss lineage-specific accelerations. This does not make sense. To give just one example, the authors discuss the proportion of fast-evolving genes that are 'shared among mammals'. Could it be that these genes are in fact accelerated only in the human lineage? When performing pairwise comparisons, with human as a reference, the genes that are specific to human would appear as fast-evolving in all comparisons.\nI am not in the least convinced by the revision of the manuscript. The modifications to the original manuscript are only superficial, and the content remains unworthy of publication. None of the results are new. The analysis of Ka rates is now so well established, that it is generally done in practical courses, for a bachelor's degree, and cannot by itself constitute the subject of a publication. Moreover, the methodology and the interpretation of the results are flawed. The authors continue to perform pairwise comparisons between human and each of the other species, and yet they discuss lineage-specific accelerations. This does not make sense. To give just one example, the authors discuss the proportion of fast-evolving genes that are 'shared among mammals'. Could it be that these genes are in fact accelerated only in the human lineage? 
When performing pairwise comparisons, with human as a reference, the genes that are specific to human would appear as fast-evolving in all comparisons.\n[SUBTITLE] Authors' response [SUBSECTION] First, what we are emphasizing here is not the ways to calculate Ka and Ks but their overall effects on data analyses, which are useful for the end users, especially biologists who are eager to understand the essence of the methodology and their applications. Second, the calculations for Ka and Ks values are all relative. We have several reasons for choosing just human-to-other-mammal comparisons. The most important reason is the fact that human data are the best among all mammalian genomes sequenced so far. Other mammalian genomes are not sequenced, assembled, and annotated to the standard of human data yet. The net result for choosing a shared ortholog set for all mammals, due to the variable data quality, is that we will not be able to find good representatives for fast-evolving genes that share similar functional categories since most of the gene annotations rely heavily on those of the human data. Especially for extreme cases, such as fast-evolving genes, we do not anticipate that these genes themselves are shared by all or even most of the mammals but do share the specific functional categories. The second reason why we only use human-to-other-mammal comparison is data size. If we did an all-against-all analysis, we would have to write several other manuscripts to describe our results and that would not be desirable either at this point in time: we would have to improve the data quality for all other sequenced mammals, except for human and mouse perhaps, which are better assembled and annotated. The last, but not the least important, reason we have chosen to compare human genes to their orthologs in other mammalian species is so that we can understand the evolution rates of human genes first. In other words, we want to first investigate how human protein-coding genes have evolved from their ancestors in other presumably distinct mammalian lineages. In addition, we carried out a mouse-centric analysis and validated most of the human-centric results in the function categories of fast- or slow-evolving genes (Additional file 1: Table S1).\nFirst, what we are emphasizing here is not the ways to calculate Ka and Ks but their overall effects on data analyses, which are useful for the end users, especially biologists who are eager to understand the essence of the methodology and their applications. Second, the calculations for Ka and Ks values are all relative. We have several reasons for choosing just human-to-other-mammal comparisons. The most important reason is the fact that human data are the best among all mammalian genomes sequenced so far. Other mammalian genomes are not sequenced, assembled, and annotated to the standard of human data yet. The net result for choosing a shared ortholog set for all mammals, due to the variable data quality, is that we will not be able to find good representatives for fast-evolving genes that share similar functional categories since most of the gene annotations rely heavily on those of the human data. Especially for extreme cases, such as fast-evolving genes, we do not anticipate that these genes themselves are shared by all or even most of the mammals but do share the specific functional categories. The second reason why we only use human-to-other-mammal comparison is data size. 
Reviewer's report 2

Subhajyoti De, Dana-Farber Cancer Institute and Harvard School of Public Health, Harvard University, Boston, USA (nominated by Sarah Teichmann, MRC Laboratory of Molecular Biology, Cambridge, United Kingdom)

The paper 'A method for defining evolving protein coding genes' by Wang et al. presents an evolutionary analysis of orthologous protein-coding genes across different species. My main concern with this paper is the lack of novelty. The main conclusions of this paper — (i) different functional classes of genes evolve differently, (ii) highly expressed genes evolve slowly and (iii) fast-evolving genes often evolve in a lineage-specific manner — have already been reported comprehensively by several groups (Gerstein, Siepel, Hurst, Koonin, Drummond, Nielsen, Bustamante and many other labs). The authors merely reconfirm their findings. Many of those previous papers are not cited either.

Authors' response

As pointed out by Dr. Claus O. Wilke, we do have a "central hypothesis" here, which is novel and valid. We are not contradicting any of the conclusions made by many others who have applied the methods we used to analyze mammalian genomes or other sets of sequences, but merely share our surprise that the Ka calculation is unusually robust across all these methods. Nevertheless, we added more citations in the revised version as we made further comparisons with several representative publications.

I am also confused with the other conclusion of this paper — 'Ka is better than Ka/Ks and Ks for evolutionary estimation'. Ka, Ks and Ka/Ks quantify different evolutionary features, and it would be unfair to compare them directly.

Authors' response

We revised the sentence; it now reads: "Ka estimated from a diverse selection of methods has more consistent results than Ka/Ks and Ks."
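The consistency statement above lends itself to a simple check: compute, for each gene, the coefficient of variation of the estimates produced by the different methods, and compare the distributions for Ka and Ks. The sketch below assumes a hypothetical wide-format input table and column naming (ka_NG, ks_LWL, and so on), standing in for whatever per-method output is available.

```python
# A small sketch, under assumptions, of how the "Ka is more consistent than Ks
# across methods" claim can be checked. The wide-format input file
# "estimates_by_method.tsv" (one row per gene; columns like ka_NG, ka_LWL,
# ks_NG, ks_LWL, ...) is hypothetical.
import pandas as pd

df = pd.read_csv("estimates_by_method.tsv", sep="\t", index_col="gene")

def cv_across_methods(frame, prefix):
    """Per-gene coefficient of variation of all columns starting with `prefix`."""
    block = frame.filter(regex=f"^{prefix}_")
    return block.std(axis=1) / block.mean(axis=1)

cv_ka = cv_across_methods(df, "ka")
cv_ks = cv_across_methods(df, "ks")

print("median CV of Ka across methods:", round(cv_ka.median(), 3))
print("median CV of Ks across methods:", round(cv_ks.median(), 3))
# A markedly lower median CV for Ka than for Ks would support the statement above.
```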
In addition, many statements in that section are incorrect. For instance,

(i) "Ka/Ks and Ka are usually used to weigh the evolutionary rate for large number of genes, where the former has been used more frequently." — Ka/Ks is a measure of selection, and not used to calculate evolutionary divergence per se.

Authors' response

We have revised this sentence accordingly.

(ii) "We decided to choose Ka, an estimator of selection, rather than Ks, an indicator of random mutations for our studies" — Ka is a measure of nonsynonymous divergence and not a measure of selection. Moreover, Ks is often influenced by sequence context (see papers by Laurence Hurst in 2007).

Authors' response

We have revised the sentences and added the citation accordingly.

(iii) "Occasionally, larger Ka/Ks values, greater than 1, have been identified, such as those in a comparative study between human and chimpanzee, perhaps due to smaller Ks (Koonin and Rogozin, 2003)" — the statement, and the paragraph, leave an incomplete impression that all Ka/Ks > 1 in human-chimpanzee comparisons are due to small Ks and therefore not indicative of selection. Yes, it is possible that for some genes a high Ka/Ks can arise by chance, but that is not the complete picture. Many genes with high Ka/Ks ratios are classic examples of positive selection (e.g. FOXP2; see also Clark et al., Science, 2003 [8] and Nielsen et al., PLoS Biol., 2005 [6]).

Authors' response

We have revised the sentences accordingly.

Lopez-Bigas et al. studied the evolution of human protein-coding genes in different eukaryotes, ranging from primates and other mammals to yeast, at the protein sequence level. They also showed that sequence similarity and Ka (or dN) are highly correlated (see the supplementary information of Lopez-Bigas et al. [50]). Therefore it is not surprising that, using Ka, the authors find similar results.

Authors' response

Lopez-Bigas et al. found a negative correlation (nearly -0.7) between Conservation Score (CS) and Ka [50]. This linear correlation does not mean that the two indexes are exactly the same. As a matter of fact, the same protein may be encoded by different codons at the nucleotide level. Therefore, calculations of protein similarity and of nonsynonymous substitution rates (nonsynonymous substitutions/nonsynonymous sites) based on nucleotide substitution models may lead to different results. In addition, we did find some new functions at the DNA level (e.g. B cell- and antibody-mediated immunity as well as B-cell activation).
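A quick way to probe the relationship discussed above is to correlate per-gene Ka with a conservation score directly. The following sketch assumes a hypothetical merged table of the two measures and uses a rank correlation, since the roughly -0.7 figure reported by Lopez-Bigas et al. reflects a monotonic, not an identical, relationship.

```python
# A minimal check, under assumptions, of the Ka-versus-conservation relationship.
# The merged table "ka_vs_cs.tsv" (gene, ka, conservation_score) is hypothetical;
# a strong negative rank correlation would be expected here.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("ka_vs_cs.tsv", sep="\t").dropna(subset=["ka", "conservation_score"])
rho, pval = spearmanr(df["ka"], df["conservation_score"])
print(f"Spearman rho = {rho:.2f} (p = {pval:.1e}) over {len(df)} genes")
```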
Please note that gene and ortholog annotation have improved since Ensembl v53 (especially for chimpanzee, orangutan, etc.). Moreover, gene expression data for over 70 tissue types in both human and mouse are available from GNF SymAtlas, and they are pretty comprehensive.

Authors' response

We are grateful to the reviewer for the note. At the time we began this project, Ensembl version 53 (released in 2009) was the most up-to-date. We did check the newer versions, and the methodology used for database construction has not changed. The only things that have changed are a few up-to-date genome assemblies, which will only result in incremental improvements for a negligible fraction of the genes that we analyzed here. We used previously published procedures to select Expressed Sequence Tag (EST) data from 18 representative tissues (referring to major anatomic systems) and succeeded in applying the data to define housekeeping genes [56,70] and in minimal-intron-related studies [74]. It is rather unfortunate that the current RNA-seq data do not yet cover enough tissue samples. In addition, the housekeeping genes we defined seem to hold up very well in our recent analysis with a limited number of tissue samples (around 10; data not shown).
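The EST-based definition of housekeeping genes mentioned above can be sketched roughly as follows. This is not the published procedure [56,70]; the input file and the threshold of at least one EST per tissue are assumptions made only to show the general idea of requiring expression evidence in every surveyed tissue.

```python
# A rough sketch, under assumptions, of an EST-based housekeeping-gene call:
# a gene is treated as "housekeeping" if it has EST evidence in every one of
# the surveyed tissues. The long-format input file "est_counts.tsv"
# (gene, tissue, est_count) and the >= 1 EST threshold are hypothetical.
import pandas as pd

est = pd.read_csv("est_counts.tsv", sep="\t")   # columns: gene, tissue, est_count
n_tissues = est["tissue"].nunique()             # e.g. 18 representative tissues

expressed = est[est["est_count"] >= 1]
tissues_per_gene = expressed.groupby("gene")["tissue"].nunique()
housekeeping = tissues_per_gene[tissues_per_gene == n_tissues].index.tolist()

print(f"{len(housekeeping)} genes have EST evidence in all {n_tissues} tissues")
```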
The authors calculated Ka, Ks and Ka/Ks using several different algorithms and found that the results do not exactly overlap, i.e. the shared gene ratio is not 100%. Perhaps it would be interesting to evaluate the performance of those algorithms and check which ones provide more consistent results, and why.

Authors' response

In the computer simulations of our previous studies, we found that Ka/Ks-calculating methods based on similar substitution models (capturing similar evolutionary features) often yielded similar results [23,75]. In this study, however, we were surprised to find consistent Ka values from this diverse group of methods. We added new analyses and discussions in the revised manuscript concerning the causative factors of the inconsistency between different methods' estimates of Ka and Ks.
Reviewer's report 3

Claus O. Wilke, Center for Computational Biology and Bioinformatics and Institute for Cell and Molecular Biology, University of Texas, Austin, Texas, United States

The authors study the evolutionary rates of mammalian genes using eight different methods of evolutionary-rate calculation. They conclude that Ka is more consistently estimated by these different methods than Ks and that therefore Ka will be more informative in many contexts than Ks or Ka/Ks.

While I think that the paper makes a valuable contribution, I feel that the impact of the paper has been diluted by the authors' choice to combine two separate parts (with separate messages) into one paper. The first part (which I find valuable) is the analysis of the consistency of rate estimations by different methods. The second part (of whose value I'm less convinced) looks at the functional classification of genes evolving at different rates.

Authors' response

The point is well taken. In the second part, we just showed selective examples (maybe just the tip of the iceberg) of possible applications of the method. We have weakened some of our conclusions in the second part and explained the weaknesses of the data set itself (see the response to Reviewer 1). We are in the process of doing a thorough analysis of genes classified on the basis of Ka values among mammalian genomes, and of pinpointing their functional roles in gene interaction networks.

Specific comments:

1. The first part is improved in the revision, but still not entirely satisfying. I don't really get a good take-home message from this part. Which method should I use to estimate evolutionary rates? Are there specific reasons why some methods give different results than others? Maybe the differences in Ks results simply reflect improvements in estimation methods over time? Note that the model abbreviations (NG, LWL, MLWL, etc.) are never defined.

Authors' response

We continue to improve our writing in the current revision. The take-home messages for the first part are two-fold. First, the Ka calculation is more consistent than the Ks calculation regardless of which method is used. Second, depending on the evolutionary distance between the sequences of the two species compared, one can choose more or less complex models for the Ka and Ks calculations, but they give more or less similar results for Ka and not for Ks. The reasons why Ks values vary when different methods are used are complicated, as we have discussed in the manuscript. We added a note to explain the naming conventions for the different methods.
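The point about model complexity and evolutionary distance can be made concrete with two standard distance corrections. The short example below (an illustration, not taken from the manuscript) applies the Jukes-Cantor and Kimura two-parameter corrections to observed difference levels typical of nonsynonymous and synonymous sites; the assumed transition/transversion split is arbitrary. The corrections agree closely at small distances but diverge as sites approach saturation, which is one reason Ks estimates spread more across methods than Ka estimates.

```python
# A worked illustration of why estimates at synonymous sites diverge between
# substitution models more than estimates at nonsynonymous sites: correction
# formulas agree when the observed difference p is small and drift apart as p
# approaches saturation.
import math

def jc69(p):
    """Jukes-Cantor corrected distance from observed proportion of differences p."""
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

def k2p(p_transition, p_transversion):
    """Kimura two-parameter corrected distance."""
    P, Q = p_transition, p_transversion
    return -0.5 * math.log(1.0 - 2.0 * P - Q) - 0.25 * math.log(1.0 - 2.0 * Q)

for p in (0.02, 0.10, 0.30, 0.55):   # low p ~ Ka-like, high p ~ Ks-like
    # Assume two thirds of observed differences are transitions (illustrative only).
    d_jc, d_k2p = jc69(p), k2p(2 * p / 3, p / 3)
    print(f"p = {p:.2f}:  JC69 = {d_jc:.3f}   K2P = {d_k2p:.3f}")
```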
2. I remain unconvinced by the second part. My most important criticism, that the functional characterization is confounded by expression level, has not been substantially addressed.

Authors' response

We cited eight consecutive references (from 52 to 59) in which this issue has been discussed intensively.

3. I'm not convinced that the title faithfully reflects the contents of the paper. What is the method for defining fast-evolving and slow-evolving protein-coding genes? If the method is simply "Use Ka", I'd argue that people have done that before.

Authors' response

We have changed the title to "Nonsynonymous substitution rate (Ka) is a relatively consistent parameter for defining fast-evolving and slow-evolving protein-coding genes". We have searched the related literature carefully and have not found publications that have carried out such a thorough evaluation of the methods.
The analysis of Ka rates is now so well established, that it is generally done in practical courses, for a bachelor's degree, and cannot by itself constitute the subject of a publication. Moreover, the methodology and the interpretation of the results are flawed. The authors continue to perform pairwise comparisons between human and each of the other species, and yet they discuss lineage-specific accelerations. This does not make sense. To give just one example, the authors discuss the proportion of fast-evolving genes that are 'shared among mammals'. Could it be that these genes are in fact accelerated only in the human lineage? When performing pairwise comparisons, with human as a reference, the genes that are specific to human would appear as fast-evolving in all comparisons.", "First, what we are emphasizing here is not the ways to calculate Ka and Ks but their overall effects on data analyses, which are useful for the end users, especially biologists who are eager to understand the essence of the methodology and their applications. Second, the calculations for Ka and Ks values are all relative. We have several reasons for choosing just human-to-other-mammal comparisons. The most important reason is the fact that human data are the best among all mammalian genomes sequenced so far. Other mammalian genomes are not sequenced, assembled, and annotated to the standard of human data yet. The net result for choosing a shared ortholog set for all mammals, due to the variable data quality, is that we will not be able to find good representatives for fast-evolving genes that share similar functional categories since most of the gene annotations rely heavily on those of the human data. Especially for extreme cases, such as fast-evolving genes, we do not anticipate that these genes themselves are shared by all or even most of the mammals but do share the specific functional categories. The second reason why we only use human-to-other-mammal comparison is data size. If we did an all-against-all analysis, we would have to write several other manuscripts to describe our results and that would not be desirable either at this point in time: we would have to improve the data quality for all other sequenced mammals, except for human and mouse perhaps, which are better assembled and annotated. The last, but not the least important, reason we have chosen to compare human genes to their orthologs in other mammalian species is so that we can understand the evolution rates of human genes first. In other words, we want to first investigate how human protein-coding genes have evolved from their ancestors in other presumably distinct mammalian lineages. In addition, we carried out a mouse-centric analysis and validated most of the human-centric results in the function categories of fast- or slow-evolving genes (Additional file 1: Table S1).", "Subhajyoti De, Dana-Farber Cancer Institute and Harvard School of Public Health, Harvard University, Boston, USA (nominated by Sarah Teichmann, MRC Laboratory of Molecular Biology, Cambridge, United Kingdom).\nThe paper 'A method for defining evolving protein coding genes' by Wang et al. presents an evolutionary analysis of orthologus protein-coding genes across different species. My main concern with this paper is the lack of novelty. 
The main conclusions of this paper — (i) different functional classes of genes evolve differently, (ii) highly expressed genes evolve slowly and (iii) fast evolving genes often evolve in a lineage-specific manner — have already been reported comprehensively by several groups (Gerstein, Siepel, Hurst, Koonin, Drummond, Nielsen, Bustamante and many other labs). The authors merely reconfirm their findings. Many of those previous papers are not cited either.

Authors' response
As pointed out by Dr. Claus O. Wilke, we do have a "central hypothesis" here, which is novel and valid. We are not contradicting any of the conclusions made by the many others who have applied the methods we used to analyze mammalian genomes or other multiple-sequence data sets, but merely share our surprise that the Ka calculation is unusually robust across all these methods. Nevertheless, we added more citations in the revised version as we made further comparisons with several representative publications.

I am also confused with the other conclusion of this paper — 'Ka is better than Ka/Ks and Ks for evolutionary estimation'. Ka, Ks and Ka/Ks quantify different evolutionary features, and it would be unfair to compare them directly.

Authors' response
We revised the sentence; it now reads: "Ka estimated from a diverse selection of methods has more consistent results than Ka/Ks and Ks".

In addition, many statements in that section are incorrect. For instance,
(i) "Ka/Ks and Ka are usually used to weigh the evolutionary rate for large number of genes, where the former has been used more frequently." — Ka/Ks is a measure of selection, and not used to calculate evolutionary divergence per se.

Authors' response
We have revised this sentence accordingly.

(ii) "We decided to choose Ka, an estimator of selection, rather than Ks, an indicator of random mutations for our studies" — Ka is a measure of nonsynonymous divergence and not a measure of selection. Moreover, Ks is often influenced by sequence context (see papers by Laurence Hurst in 2007).

Authors' response
We have revised the sentences and added the citation accordingly.

(iii) "Occasionally, larger Ka/Ks values, greater than 1, have been identified, such as those in a comparative study between human and chimpanzee, perhaps due to smaller Ks (Koonin and Rogozin, 2003)" — the statement, and the paragraph, lead to an incomplete impression that all Ka/Ks > 1 in human-chimpanzee are due to small Ks and therefore not indicative of selection. Yes, it is possible that for some genes high Ka/Ks can arise by chance, but that's not the complete picture. Many genes with high Ka/Ks ratio are classic examples of positive selection (e.g. FOXP2, and also see Clark et al. Science, 2003 [8], Nielsen et al. in PLoS Biol. 2005 [6]).

Authors' response
We have revised the sentences accordingly.

Lopez-Bigas et al. studied the evolution of human protein coding genes in different eukaryotes, ranging from primates and other mammals to yeast, at the protein sequence level. They also showed that sequence similarity and Ka (or dN) are highly correlated (see supplementary information of Lopez-Bigas et al. [50]). Therefore it is not surprising that, using Ka, the authors find similar results.

Authors' response
Lopez-Bigas et al. found a negative correlation (nearly -0.7) between Conservation Score (CS) and Ka [50]. This linear correlation does not mean that the two indexes are exactly the same. As a matter of fact, the same protein may be encoded by different codons at the nucleotide level. Therefore, the calculation of protein similarity and the calculation of nonsynonymous substitution rates (nonsynonymous substitutions/nonsynonymous sites) on the basis of nucleotide substitution models may lead to different results. In addition, we did find some new functions at the DNA level (e.g. B cell- and antibody-mediated immunity as well as B-cell activation).

Please note that gene and ortholog annotation have improved since Ensembl v53 (especially for chimpanzee, orangutan, etc.). Moreover, gene expression data for over 70 tissue types in both human and mouse are available from GNF-SymAtlas, and it is pretty comprehensive.

Authors' response
We are grateful to the reviewer for the note. At the time we began this project, Ensembl version 53 (released in 2009) was the most up-to-date. We did check the newer versions, and the methodology used for database construction has not changed. The only things that have changed are a few updated genome assemblies, which would result in only incremental improvements for a negligible fraction of the genes analyzed here. We used previously published procedures to select Expressed Sequence Tag (EST) data from 18 representative tissues (referring to major anatomic systems), and we have successfully applied these data to define housekeeping genes [56,70] and in minimal-intron-related studies [74]. It is rather unfortunate that the current RNA-seq data do not yet cover enough tissue samples. In addition, the housekeeping genes we defined seem to hold up very well in our recent analysis with a limited number of tissue samples (around 10; data not shown).

The authors calculated Ka, Ks, Ka/Ks using several different algorithms and found that the results do not exactly overlap, i.e. the shared gene ratio is not 100%. Perhaps it would be interesting to evaluate the performance of those algorithms, and to check which ones provide more consistent results and why.

Authors' response
In the computer simulations of our previous studies, we found that Ka/Ks-calculating methods based on similar substitution models (capturing similar evolutionary features) often yielded similar results [23,75]. In this study, however, we were surprised to find consistent Ka values from this diverse group of methods. We added new analyses and discussions in the revised manuscript concerning the causative factors of inconsistency between different methods' estimates of Ka and Ks.
Reviewer's report 3
Claus O. Wilke, Center for Computational Biology and Bioinformatics and Institute for Cell and Molecular Biology, University of Texas, Austin, Texas, United States

The authors study the evolutionary rates of mammalian genes using eight different methods of evolutionary-rate calculation. They conclude that Ka is more consistently estimated by these different methods than Ks, and that therefore Ka will be more informative in many contexts than Ks or Ka/Ks.

While I think that the paper makes a valuable contribution, I feel that the impact of the paper has been diluted by the authors' choice to actually combine two separate parts (with separate messages) into one paper. The first part (which I find valuable) is the analysis of the consistency of rate estimations by different methods. The second part (of whose value I'm less convinced) looks at the functional classification of genes evolving at different rates.

Authors' response
The point is well-taken. In the second part, we showed only selective examples (maybe just the tip of the iceberg) of possible applications of the method. We have weakened some of our conclusions in the second part and explained the weaknesses of the data set itself (see response to Reviewer 1). We are in the process of doing a thorough analysis of genes classified by Ka values among mammalian genomes, and of pinpointing their functional roles in gene interaction networks.

Specific comments:
1. The first part is improved in the revision, but still not entirely satisfying. I don't really get a good take-home message from this part. Which method should I use to estimate evolutionary rates? Are there specific reasons why some methods give different results than others? Maybe the differences in Ks results simply reflect improvements in estimation methods over time? Note that the model abbreviations (NG, LWL, MLWL, etc) are never defined.

Authors' response
We continue to improve our writing in the current revision. The take-home messages for the first part are two-fold. First, the Ka calculation is more consistent than the Ks calculation regardless of which methods are used. Second, depending on the evolutionary distance between the sequences of the two species evaluated, one can choose more or less complex models for the Ka and Ks calculation, and these yield broadly similar results for Ka but not for Ks. The reasons why Ks values vary when different methods are used are complicated, as we have discussed in the manuscript. We added a note to explain the naming conventions for the different methods.

2. I remain unconvinced by the second part. My most important criticism, that the functional characterization is confounded by expression level, has not been substantially addressed.

Authors' response
We cited eight consecutive references (from 52 to 59) where this issue has been intensively discussed.

3. I'm not convinced that the title faithfully reflects the contents of the paper. What is the method for defining fast-evolving and slow-evolving protein-coding genes? If the method is simply "Use Ka", I'd argue that people have done that before.

Authors' response
We have changed the title to "Nonsynonymous substitution rate (Ka) is a relatively consistent parameter for defining fast-evolving and slow-evolving protein-coding genes". We have searched the related literature carefully and have not found publications that have done such thorough evaluations of the methods.
[ "Background", "Results and Discussion", "Data and quality control", "Indexing evolutionary rates of protein-coding genes", "Function characterization of fast-evolving and slow-evolving genes", "Comparisons of fast-evolving and slow-evolving genes and their functions among mammalian lineages", "Comparisons to other studies", "The relationship between evolutionary rate and expression level", "The shared fast-evolving and slow-evolving genes among mammals", "Conclusions", "Methods", "Data acquisition and quality assessment", "Defining fast-evolving, intermediately-evolving, and slow-evolving genes", "Functional classification and other analyses", "Competing interests", "Authors' contributions", "Reviewers' comments", "Reviewer's report 1", "Authors' response", "Authors' response", "Authors' response", "Comments from the second round of reviewing", "Authors' response", "Reviewer's report 2", "Authors' response", "Authors' response", "Authors' response", "Authors' response", "Authors' response", "Authors' response", "Authors' response", "Authors' response", "Reviewer's report 3", "Authors' Response", "Authors' response", "Authors' response", "Authors' response", "Supplementary Material" ]
[ "Although protein-coding sequences account for ~1% of the entire mammalian genome, it is the most function-related, dynamic, and informative part of the genome [1]. For molecular evolution studies, protein-coding sequences are central to understanding the mutational dynamics of genes and the functional dynamics of gene networks within a population or among diverse species and lineages.\nFollowing the publication of the complete human genome sequence [2], over a dozen mammalian genomes have been sequenced, allowing mammalian comparative genomics to finally come to age. Genome-wide sequence analysis has been focused on two essential forms of genetic variation. One concerns gene gain-and-loss that is related to the amplification and deletion of certain genes and their chromosomal regions. This is an important evolutionary mechanism to shape mammalian genomes through natural selection, but it also leads to gene family expansion and deletion, which has been proposed to be one molecular origin of chimp-human evolution [3]. Another form of genetic variation is sequence variation at specific nucleotide sites in protein-coding genes. Such variations become functionally relevant when they alter protein sequences.\nThe task of defining positively-selected genes has drawn the most attention, because these genes are often considered to be the major driving forces behind how organisms adapt to their external environments [4,5]. A number of interesting characteristics of positively selected genes have been found: (1) they are more likely to have several classes of functions, including nuclear transport, sensory perception, immune defenses, tumor suppression, apoptosis, and reproduction, and may be involved in Mendelian genetic disorders [6-8]. (2) These genes tend to be expressed at low levels and in a tissue-specific manner [7]. (3) Some highly-expressed genes in the testis were reported to have been subjected to positive selection [6]. (4) Positively selected genes are often species-specific or lineage-specific [7]. As the number of sequenced genomes increases, new approaches and novel methodology will be needed to develop efficient tools for mining vast amounts of sequence data.\nHere, we report a novel yet basic method of defining fast-evolving and slow-evolving genes based on nonsynonymous substitution rates (Ka) in different subgroups or lineages of mammals. We first tested different computational models to see if they provided consistent results when defining the evolution rates of diverse gene classes and families. We then identified percentage shared genes (orthologs) among lineages that were calculated based on different methods, and also looked in more detail at their cellular functions and functional pathways. We also examined the relationship between the evolutionary rates and gene expression levels of these genes, using high-coverage genome sequence and transcriptomic data from thirteen vertebrate species, including human [9], chimpanzee [10], orangutan [11], macaque [12], horse [13], dog [14], cow [15], guinea pig, mouse [16], rat [17], opossum [18], platypus [19], and chicken [20]. 
Our new method not only confirms the results of many previous studies, but also provides a new and straightforward approach to understanding the evolutionary dynamics of mammalian genes.

Results and Discussion
Data and quality control
To examine the divergence between humans and other species, we calculated identities by averaging over all orthologs in a species: chimpanzee - 99.23%; orangutan - 98.00%; macaque - 96.09%; horse - 89.44%; dog - 87.93%; cow - 87.36%; guinea pig - 85.91%; mouse - 84.54%; rat - 83.92%; opossum - 77.64%; platypus - 74.37%; and chicken - 72.87%. The data gave rise to a bimodal distribution of overall identities, which distinctly separates the highly identical primate sequences from the rest (Additional file 1: Figure S1A). For quality assessment, we also evaluated the alignment qualities of all orthologs.

First, we found that the number of Ns (uncertain nucleotides) in all coding sequences (CDS) fell within reasonable ranges (mean ± standard deviation): (1) the number of Ns/the number of nucleotides = 0.00002740 ± 0.00059475; (2) the total number of orthologs containing Ns/total number of orthologs × 100% = 1.5084%. Second, we evaluated parameters related to the quality of the sequence alignments, such as percentage identity and percentage gap (Additional file 1: Figure S1). All of these indicated low mismatching rates and a limited number of arbitrarily-aligned positions.
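As a rough illustration of the alignment-quality screening described above, the per-ortholog percentage identity, gap fraction, and fraction of uncertain bases can be computed directly from a pairwise codon alignment. The sketch below is our own minimal illustration, not the pipeline actually used in the study; the function name and the toy sequences are assumptions made purely for demonstration.

```python
# Minimal sketch (not the study's actual pipeline): per-alignment
# percentage identity, gap fraction, and fraction of uncertain bases (Ns),
# computed from two already-aligned coding sequences of equal length.

def alignment_quality(seq1: str, seq2: str) -> dict:
    assert len(seq1) == len(seq2), "aligned sequences must have equal length"
    length = len(seq1)
    gaps = sum(1 for a, b in zip(seq1, seq2) if a == "-" or b == "-")
    ns = sum(1 for a, b in zip(seq1, seq2) if a == "N" or b == "N")
    # identity is counted over ungapped columns only
    ungapped = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    matches = sum(1 for a, b in ungapped if a == b)
    return {
        "percent_identity": 100.0 * matches / max(len(ungapped), 1),
        "percent_gap": 100.0 * gaps / length,
        "fraction_N": ns / length,
    }

# Toy example with hypothetical sequences:
print(alignment_quality("ATGGCC---GAT", "ATGGCTTTAGAT"))
```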
Indexing evolutionary rates of protein-coding genes
Ka and Ks are the nonsynonymous (amino-acid-changing) and synonymous (silent) substitution rates, respectively, which are governed by functionally relevant sequence contexts, such as coding for amino acids and involvement in exon splicing [21]. The ratio of the two parameters, Ka/Ks (a measure of selection strength), is defined as the degree of evolutionary change normalized by random background mutation. We began by scrutinizing the consistency of Ka and Ks estimates across eight commonly-used methods. We defined two divergence indexes: (i) standard deviation normalized by mean, where the eight values from all methods are considered as a group, and (ii) range normalized by mean, where range is the absolute difference between the estimated maximal and minimal values. In order to keep our comparison unbiased, we eliminated gene pairs whenever an NA (not applicable or infinite) value occurred in Ka or Ks. We observed that the divergence indexes of Ka were significantly smaller than those of Ks in all examined species (P-value < 2.2e-16, Wilcoxon rank sum test) (Figure 1). The results for our second index were very similar to those for the first (data not shown). We also investigated the performance of these methods in calculating Ka, Ks, and Ka/Ks. First, we considered six cut-off points for grouping and defining fast-evolving and slow-evolving genes: 5%, 10%, 20%, 30%, 40%, and 50% of the total (see Methods). Second, we applied the eight commonly-used methods to calculate the parameters for the twelve species at each cut-off value. Lastly, we compared the percentage of shared genes (the number of shared genes from different methods, divided by the total number of genes within a chosen cut-off point) calculated by GY and the other methods (Figure 2). We observed that Ka had the highest percentage of shared genes, followed by Ka/Ks; Ks always had the lowest. We made similar observations using our own gamma-series methods [22,23] (data not shown). It was quite clear that Ka calculations gave the most consistent results when sorting protein-coding genes by their evolutionary rates. As the cut-off values increased from 5% to 50%, the percentages of shared genes also increased, reflecting the fact that more shared genes are obtained with less stringent cut-offs (Figure 2A and 2B). We also found a rising trend as model complexity increased in the order NG, LWL, MLWL, LPB, MLPB, YN, and MYN (Figure 2C and 2D). We examined the impact of divergence distance on gene sorting using the three parameters, and found that the percentage of shared genes with reference to Ka was consistently high across all twelve species, whereas those with reference to Ka/Ks and Ks decreased with increasing divergence time between human and the other studied species (Figure 2E and 2F). In addition, the percentage of shared genes for Ka/Ks remained intermediate between those for Ka and Ks. In particular, there was more variation in the percentages of shared genes determined by Ka/Ks and Ks than by Ka when defining slow-evolving genes (Figure 2B, D, and F). Overall, we found consistent results from the various methods when Ka was used as the measure for sorting genes.

Figure 1. Divergence index (standard deviation/mean) of Ka and Ks determined with the eight different methods for the twelve vertebrate species. In the boxplots, the lower quantile, median, and upper quantile are represented by the boxes; mean values are depicted as dots. Outliers were removed to keep the plot readable. The number codes for the vertebrate species are: 1, chimp; 2, orangutan; 3, macaque; 4, horse; 5, dog; 6, cow; 7, guinea pig; 8, mouse; 9, rat; 10, opossum; 11, platypus; and 12, chicken.

Figure 2. The percentage of shared genes for Ka, Ks and Ka/Ks, based on GY compared with the other seven methods, in terms of cut-off (A, B), method (C, D), and species (E, F). Outliers were removed to keep the plots readable. The number codes for the species are the same as in Figure 1.
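The two consistency measures used above are straightforward to reproduce. The sketch below is our own illustration with made-up numbers (not the study's code): it computes the first divergence index (standard deviation normalized by mean, over one gene's estimates from several methods) and the percentage of shared genes between two methods' rankings under a given cut-off. Whether the population or sample standard deviation was used is not stated in the text, so the choice here is an assumption.

```python
import statistics

def divergence_index(estimates):
    """Divergence index (i): standard deviation normalized by mean, over one
    gene's Ka (or Ks) estimates from several methods (population SD assumed)."""
    mean = statistics.mean(estimates)
    return statistics.pstdev(estimates) / mean if mean > 0 else float("nan")

def shared_gene_percentage(rates_a, rates_b, cutoff=0.05, fastest=True):
    """Percentage of genes shared between the top (fast-evolving) or bottom
    (slow-evolving) `cutoff` fraction of two methods' per-gene rate rankings."""
    n = max(1, int(len(rates_a) * cutoff))
    top_a = set(sorted(rates_a, key=rates_a.get, reverse=fastest)[:n])
    top_b = set(sorted(rates_b, key=rates_b.get, reverse=fastest)[:n])
    return 100.0 * len(top_a & top_b) / n

# Toy data: Ka estimates for one gene from eight methods (hypothetical values)
print(divergence_index([0.021, 0.022, 0.020, 0.023, 0.021, 0.022, 0.021, 0.020]))

# Toy data: per-gene Ka from two methods (hypothetical values)
ka_gy = {"gene1": 0.30, "gene2": 0.02, "gene3": 0.15, "gene4": 0.01}
ka_ng = {"gene1": 0.28, "gene2": 0.03, "gene3": 0.16, "gene4": 0.01}
print(shared_gene_percentage(ka_gy, ka_ng, cutoff=0.5, fastest=True))
```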
The methods used in this study cover a wide range of mutation models of differing complexity. NG gives equal weight to every sequence variation path [24]; LWL divides the mutation sites into three categories—non-degenerate, two-fold, and four-fold sites—and assigns fixed weights to synonymous and nonsynonymous sites for the two-fold degenerate sites [25]. LPB adopts a flexible ratio of transitional to transversional substitutions to handle the two-fold sites [26,27]. MLWL and MLPB are improved versions of their parental methods, with specific consideration of arginine codons (an exceptional case in the earlier methods) [28]. In particular, MLWL also incorporates an independent parameter, the ratio of transitional to transversional substitution rates, into the calculation [28]. Both YN and GY capture the features of codon usage and transition/transversion rates, but they are approximate and maximum likelihood methods, respectively [29,30]. MYN accounts for another important evolutionary characteristic—differences in transitional substitution within purines and within pyrimidines [31]. Although these methods model and compute sequence variations in different ways, the Ka values that they calculate appeared to be more consistent than their Ks values or Ka/Ks. We propose the following (non-comprehensive) reasons. First, real data from large data sets usually come from a broader range of species than the computer simulations used as training sets during methodology development, so deviations in Ks values may draw more attention in discussions. Second, parameter-rich approaches—such as those considering unequal codon usage and unequal transition/transversion rates—may have opposite effects on substitution rates when sequence divergence falls outside their "sweet ranges" [25,30,32]. Third, when examining closely related species, such as primates, most Ka/Ks values are smaller than 1 and Ka values are smaller than Ks values under most conditions. With a very limited number of nonsynonymous substitutions, when the evolutionary distance between species is relatively short, models of increased complexity, such as those correcting for multiple hits, may not lead to stable estimates [24,32]. Furthermore, when incorporating the shape parameter of the gamma distribution into the common approximate Ka/Ks methods, we previously found that Ks is more sensitive than Ka to changes in the shape parameter under the condition Ka < Ks [23]. Together, there are stronger influences on Ks than on Ka in two cases: when Ka < Ks and when the complexity of the mutation model increases. Fourth, it has been suggested that Ks estimation does not work well for comparing extremes, that is, very closely and very distantly related species [33,34]. Occasionally, larger Ka/Ks values, greater than 1, are identified, as in a comparative study between human and chimpanzee genes, perhaps due to a very small Ks [34].

We also wondered what would happen if Ka became saturated as the divergence of the paired sequences increased. Looking at human vs. chicken, we found that the median Ka exceeded 0.2 and that the maximal Ka was as high as 0.6 after outliers were eliminated (Additional file 1: Figure S2), suggesting that the Ka values have not yet approached saturation. In addition, we chose the GY method to compute Ka as an estimator of evolutionary rates, since counting methods usually yield more out-of-range values than maximum likelihood methods (data not shown).
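One way to see why multiple-hit corrections can destabilize estimates at larger divergences is the classic Jukes-Cantor correction, which counting methods of the kind discussed above apply to observed proportions of synonymous and nonsynonymous differences. The sketch below is a generic textbook illustration, not code from the study: the corrected distance grows steeply, and eventually becomes undefined, as the observed proportion of differences approaches the saturation limit of 0.75.

```python
import math

def jukes_cantor(p: float) -> float:
    """Jukes-Cantor corrected distance d = -3/4 * ln(1 - 4p/3) for an
    observed proportion of differing sites p; undefined for p >= 0.75."""
    if p >= 0.75:
        return float("inf")  # correction breaks down near saturation
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# Small observed differences are corrected only slightly; large ones blow up,
# which hits Ks (typically larger than Ka) much harder.
for p in (0.05, 0.30, 0.60, 0.70, 0.74):
    print(f"p = {p:.2f} -> corrected distance = {jukes_cantor(p):.3f}")
```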
In addition, we chose the GY method to compute Ka as an estimator of evolutionary rates, since counting methods usually yield more out-of-range values than maximum likelihood methods (data not shown).\nKa and Ks are nonsynonymous (amino-acid-changing) and synonymous (silent) substitution rates, respectively, which are governed by sequence contexts that are functionally-relevant, such as coding amino acids and involving in exon splicing [21]. The ratio of the two parameters, Ka/Ks (a measure of selection strength), is defined as the degree of evolutionary change, normalized by random background mutation. We began by scrutinizing the consistency of Ka and Ks estimates using eight commonly-used methods. We defined two divergence indexes: (i) standard deviation normalized by mean, where eight values from all methods are considered to be a group, and (ii) range normalized by mean, where range is the absolute difference between the estimated maximal and minimal values. In order to keep our comparison unbiased, we eliminated gene pairs when any NA (not applicable or infinite) value occurred in Ka or Ks. We observed that the divergence indexes of Ka were significantly smaller than those of Ks in all examined species (P-value < 2.2e-16, Wilcoxon rank sum test) (Figure 1). The result of our second defined index appeared to be very similar to the first (data not shown). We also investigated the performance of these methods in calculating Ka, Ks, and Ka/Ks. First, we considered six cut-off points for grouping and defining fast-evolving and slow-evolving genes: 5%, 10%, 20%, 30%, 40%, and 50% of the total (see Methods). Second, we applied eight commonly-used methods to calculate the parameters for twelve species at each cut-off value. Lastly, we compared the percentage of shared genes (the number of shared genes from different methods, divided by the total number of genes within a chosen cut-off point) calculated by GY and other methods (Figure 2). We observed that Ka had the highest percentage of shared genes, followed by Ka/Ks; Ks always had the lowest. We also made similar observations using our own gamma-series methods [22,23] (data not shown). It was quite clear that Ka calculations had the most consistent results when sorting protein-coding genes based on their evolutionary rates. As the cut-off values increased from 5% to 50%, the percentages of shared genes also increased, reflecting the fact that more shared genes are obtained by setting less stringent cut-offs (Figure 2A and 2B). We also found a rising trend as the model complexity increased in the order of NG, LWL, MLWL, LPB, MLPB, YN, and MYN (Figure 2C and 2D). We examined the impact of divergent distance on gene sorting using the three parameters, and found that the percentage of shared genes referencing to Ka was consistently high across all twelve species, while those referencing to Ka/Ks and Ks decreased with increasing divergence time between human and other studied species (Figure 2E and 2F). In addition, the percentage of shared genes of Ka/Ks remains moderate between those of Ka and Ks. In particular, there should be more variations in the percentages of shared genes determined by Ka/Ks and Ks than by Ka, when we define slow-evolving genes (Figure 2B, D, and 2F). We found consistent results from the various methods when Ka was used as the measure for sorting genes.\nDivergence index (standard deviation/mean) of Ka and Ks determined based on the eight different methods from the twelve vertebrate species. 
In the boxplots, lower quantile, median, and upper quantile were represented in the boxes. Mean values were depicted in dots. Outliers were removed to make the plot straightforward. The number codes for the vertebrate species are: 1, chimp; 2, orangutan; 3, macaque; 4, horse; 5, dog; 6, cow; 7, guinea pig; 8, mouse; 9, rat; 10, opossum; 11, platypus; and 12, chicken.\nThe percentage of shared genes of Ka, Ks and Ka/Ks based on GY compared with other seven methods in terms of cut-off (A, B), method (C, D), and species (E, F). Outliers were removed to make the plots straightforward. The number codes for the species are the same as what in Figure 1.\nThe methods used in this study cover a wide range of mutation models with different complexities. NG gives equal weight to every sequence variation path [24] and LWL divides the mutation sites into three categories—non-degenerate, two-fold, and four-fold sites—and assigns fixed weights to synonymous and nonsynonymous sites for the two-fold degenerate sites [25]. LPB adopts a flexible ratio of transitional to transversional substitutions to handle the two-fold sites [26,27]. MLWL or MLPB are improved versions of their parental methods with specific consideration on the arginine codons (an exceptional case from the previous method) [28]. In particular, MLWL also incorporates an independent parameter, the ratio of transitional to transversional substitution rates, into the calculation [28]. Both YN and GY capture the features of codon usage and transition/transversion rates, but they are approximate and maximum likelihood methods, respectively [29,30]. MYN accounts for another important evolutionary characteristic—differences in transitional substitution within purines and pyrimidines [31]. Although these methods model and compute sequence variations in different ways, the Ka values that they calculate appeared to be more consistent than their Ks values or Ka/Ks. We proposed the following reasons (which are not comprehensive): first, real data from large data sets are usually from a broader range of species than computer simulations in the training sets for methodology development, so deviations in Ks values may draw more attentions in discussions. Second, the parameter-rich approaches—such as considering unequal codon usage and unequal transition/transversion rates—may lead to opposite effects on substitution rates when sequence divergence falls out of the \"sweet ranges\" [25,30,32]. Third, when examining closely related species, such primates, one will find that most Ka/Ks values are smaller than 1 and that Ka values are smaller than Ks values under most conditions. For a very limited number of nonsynonymous substitutions, when evolutionary distance is relatively short between species, models that increase complexity, such as those for correcting multiple hits, may not lead to stable estimations [24,32]. Furthermore, when incorporating the shape parameter of gamma distribution into the commonly approximate Ka/Ks methods, we found previously that Ks is more sensitive to changes in the shape parameter under the condition Ka < Ks [23]. Together, there are stronger influences on Ks than on Ka in two cases: when Ka < Ks and when complexity increases in mutation models. Fourth, it has been suggested that Ks estimation does not work well for comparing extremes, such as closely and distantly related species [33,34]. 
The methods used in this study cover a wide range of mutation models of different complexities. NG gives equal weight to every sequence variation path [24], and LWL divides the mutation sites into three categories—non-degenerate, two-fold, and four-fold degenerate sites—and assigns fixed weights to synonymous and nonsynonymous sites for the two-fold degenerate sites [25]. LPB adopts a flexible ratio of transitional to transversional substitutions to handle the two-fold sites [26,27]. MLWL and MLPB are improved versions of their parental methods with specific consideration of arginine codons (an exceptional case in the original methods) [28]; MLWL also incorporates an independent parameter, the ratio of transitional to transversional substitution rates, into the calculation [28]. Both YN and GY capture the features of codon usage and transition/transversion rates, as approximate and maximum likelihood methods, respectively [29,30]. MYN accounts for another important evolutionary characteristic—differences in transitional substitutions within purines and within pyrimidines [31]. Although these methods model and compute sequence variations in different ways, the Ka values they produce appeared to be more consistent than their Ks values or Ka/Ks ratios. We propose the following (non-exhaustive) reasons. First, real data from large data sets usually cover a broader range of species than the computer simulations used as training sets during method development, so deviations in Ks values may draw more attention in such comparisons. Second, parameter-rich approaches—such as those considering unequal codon usage and unequal transition/transversion rates—may have opposite effects on substitution rates when sequence divergence falls outside their "sweet ranges" [25,30,32]. Third, when examining closely related species, such as primates, most Ka/Ks values are smaller than 1 and Ka values are smaller than Ks values under most conditions. With very few nonsynonymous substitutions and relatively short evolutionary distances between species, more complex models, such as those correcting for multiple hits, may not yield stable estimates [24,32]. Furthermore, when incorporating the shape parameter of the gamma distribution into commonly-used approximate Ka/Ks methods, we previously found that Ks is more sensitive than Ka to changes in the shape parameter when Ka < Ks [23]. Together, Ks is influenced more strongly than Ka in two situations: when Ka < Ks and when the complexity of the mutation model increases. Fourth, it has been suggested that Ks estimation does not work well at the extremes, that is, for very closely or very distantly related species [33,34]. Occasionally, Ka/Ks values greater than 1 are identified, as in a comparative study of human and chimpanzee genes, perhaps due to a very small Ks [34].
We also wondered what would happen if Ka became saturated as the divergence between paired sequences increased. Looking at human vs. chicken, we found that the median Ka exceeded 0.2 and that the maximal Ka was as high as 0.6 after outliers were eliminated (Additional file 1: Figure S2), suggesting that these Ka values have not yet approached saturation. In addition, we chose the GY method to compute Ka as the estimator of evolutionary rates, since counting methods usually yield more out-of-range values than maximum likelihood methods (data not shown).
Function characterization of fast-evolving and slow-evolving genes
To learn about the functions of fast-evolving and slow-evolving genes in each species and lineage, we used custom-designed scripts to assess the enrichment of molecular functions (MF), biological processes (BP), and signal/metabolic pathways (Table 1). The number of enriched functions related to slow-evolving genes was 2.53 times greater than that related to fast-evolving genes, whereas fast-evolving genes had more lineage-specific functions than slow-evolving genes.
Table 1. Selected common functional categories of fast-evolving and/or slow-evolving genes among mammalian genomes and lineages.
Note: Asterisks mark functional classes of genes that are significantly enriched in a given species or lineage (Fisher's exact test after correction for multiple testing). Species are numbered as in Figure 1.
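The enrichment test itself is a standard contingency-table procedure (Fisher's exact test followed by the Holm correction described in the Methods); a minimal sketch, using hypothetical gene sets and category memberships rather than the study's actual annotation tables, is given below.

```python
from scipy.stats import fisher_exact

def category_enrichment(category_genes, test_set, universe):
    """One-sided Fisher's exact test for over-representation of one functional category."""
    k = len(category_genes & test_set)                       # category genes in the test set
    table = [[k, len(test_set) - k],
             [len(category_genes) - k,
              len(universe) - len(test_set) - len(category_genes) + k]]
    _, p_value = fisher_exact(table, alternative="greater")
    return p_value

def holm_adjust(p_values):
    """Bonferroni step-down (Holm) adjustment across all tested categories."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted, running = [0.0] * m, 0.0
    for rank, idx in enumerate(order):
        running = max(running, min((m - rank) * p_values[idx], 1.0))
        adjusted[idx] = running
    return adjusted
```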
We found that fast-evolving genes were enriched in immunity-related functions (Table 1), including genes present in NK, T, and B cells. The genes in NK cells relate to innate (non-specific) immunity, whereas genes in T and B cells are associated with acquired (specific) immunity [35]. Other enriched immunity-related categories of fast-evolving genes included immunoglobulins, cytokines, chemokines, and interleukins. Fast-evolving genes were also enriched in signaling components such as receptors and ligands. Finally, a significant number of fast-evolving genes fell into unknown-function categories—unclassified biological processes and unclassified molecular functions. It is not surprising that fast-evolving genes may quickly lose detectable homology to known proteins and are associated with dietary adaptation, language, appearance, behavior, or upright walking [36]. Among the enriched functions of slow-evolving genes, we found a number of important house-keeping functional classes, including transcription, mRNA processing/splicing, translation, protein modification, metabolism, protein traffic, cell cycle, development, and endocytosis (Table 1). Fast-evolving and slow-evolving genes therefore have significantly different functions in mammals.
Another point of interest is that two immunity-related functional categories, T cell and B cell activation, appeared among the enriched functions of slow-evolving genes (Table 1). We also found that immunity-related fast-evolving genes were mostly receptors, ligands, cytokines, and CD (cluster of differentiation) molecules, whereas slow-evolving immunity-related genes were usually kinases or adaptor proteins. Taking the human-rat comparison as an example, the receptors included MS4A2, FCER1G, FCGRT, KLRG2, IL1RN, TNFRSF1A, TNFRSF25, IFNGR1, IL2RA, TNFRSF4 and TNFRSF8; the cytokines were IL12A and IL1F9; and the ligands were CCL27 and ICOSLG. All of these are highly conserved, functionally important, and involved in complex immunity-related pathways. Cytokines are involved in the transfer of information between cells, the regulation of cellular physiological processes, and the strengthening of immune competence [37]. CD proteins, generated during the differentiation of lymphocytes, are a class of cell surface molecules recognized by specific antibodies on the surfaces of lymphocytes [38]. Adaptor proteins and kinases play significant roles in signal transduction in immune cells; they mediate specific interactions between proteins and phosphorylate target proteins to modify their structure and activity [39,40]. In summary, receptors, ligands, cytokines, and CDs are likely to evolve faster than kinases and adaptor proteins, although they all function in the acquired immune system (B cell and T cell immunity). These observations suggest that: (1) genes upstream in immune-related pathways tend to evolve faster than those downstream; (2) immunity-specific genes are likely to evolve faster than multifunctional house-keeping genes, which also perform fundamental functions in non-immune pathways; and (3) genes encoding proteins that participate in extracellular communication or the recognition of external pathogens appear to evolve faster than those encoding proteins involved in signal transduction and effector activation within single cells [41]. Similar observations have been reported for the evolution of Drosophila's innate immune system [42].
In addition, we discovered a few enriched functions related to neuro-degenerative diseases or nervous system functionality (Table 1). These slow-evolving genes play roles in progressive neuro-degenerative genetic diseases [43], neural-tube defects [44], proliferative disorders of the central nervous system [45], progression of brain cancers [46,47], and electrical signalling within synapses in the brain [48]. These results are consistent with the previous observation that brain-specific genes tend to have relatively low evolutionary rates in mammals [49]. Brain-specific genes may be expressed in multiple distinct neuronal cell types and, in terms of shared cell types, resemble house-keeping genes to some extent.
Comparisons of fast-evolving and slow-evolving genes and their functions among mammalian lineages
We used a network to display the percentages of shared fast-evolving and slow-evolving genes between pairs of mammals (Figure 3). First, the two primitive mammals (opossum and platypus) and the one bird (chicken) are clearly distinct from the other mammals. Second, the primates cluster closely with one another. Third, mouse serves as a hub that links cow, horse, guinea pig, rat, and opossum. Fourth, the large mammals are well connected when all elements are considered. Fifth, some connections may be coincidental, for example, the fast-evolving genes shared by dog, horse, and macaque, as well as the slow-evolving genes shared by cow, macaque, orangutan, and chimpanzee.
Figure 3. A network of fast-evolving and slow-evolving genes among the twelve species. For any two given species, we calculated the number of shared fast-evolving or slow-evolving genes and divided it by the total number of genes shared between the two species to normalize the correlation coefficients. We connected the species based on the two largest correlation coefficients for each pair. Red and green lines stand for fast-evolving and slow-evolving genes, respectively, and yellow lines represent the sum of both.
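One way to read the normalization described in the Figure 3 legend—shared fast- or slow-evolving genes divided by the orthologs common to the two species, keeping the two strongest connections per species—is sketched below; the input sets and the exact normalization are assumptions for illustration, and the resulting edge list could then be loaded into a network viewer such as Cytoscape.

```python
from itertools import combinations

def edge_weight(class_genes_a, class_genes_b, orthologs_a, orthologs_b):
    """Shared fast- (or slow-) evolving genes, normalized by the orthologs shared by both species."""
    common_orthologs = orthologs_a & orthologs_b
    if not common_orthologs:
        return 0.0
    return len(class_genes_a & class_genes_b) / len(common_orthologs)

def top_two_network(class_sets, ortholog_sets):
    """Keep, for each species, its two strongest connections (one reading of Figure 3)."""
    species = list(class_sets)
    weights = {pair: edge_weight(class_sets[pair[0]], class_sets[pair[1]],
                                 ortholog_sets[pair[0]], ortholog_sets[pair[1]])
               for pair in combinations(species, 2)}
    kept = set()
    for s in species:
        incident = sorted((w, pair) for pair, w in weights.items() if s in pair)
        kept.update(pair for _, pair in incident[-2:])       # two largest weights for s
    return sorted(kept)                                       # edge list for plotting
```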
We then investigated the exclusive functions of fast-evolving and slow-evolving genes in three mammalian lineages: primates (chimpanzee, orangutan, and macaque), large mammals (horse, dog, and cow), and rodents (guinea pig, mouse, and rat) (Tables 2, 3, and 4). Although primates are also large mammals, we treated them as a separate category in order to further stratify our pool. First, we found functions that were unique to each of the three mammalian subgroups among the fast-evolving genes: sensory-related (chemosensory perception, olfaction, and sensory perception) and cancer-related (oncogenesis) functions in primates (Table 2), immune-related functions (interleukin receptor) in rodents (Table 4), and reproduction-related (fertilization) and steroid-hormone-related (steroid hormone metabolism) functions in large mammals (Table 3). The first two observations are consistent with a previous study [7]; the last one is novel and may be related to domestication for fast growth. Second, we also found lineage-specific functions among the slow-evolving genes: calcium-binding proteins, calmodulin-related proteins, and mitochondrial transport in primates, and G protein signalling, enkephalin release, actin-binding cytoskeletal proteins, the microtubule family, and exocytosis in rodents. Three hormone-related signalling pathways (alpha adrenergic receptor signalling, oxytocin receptor mediated signalling, and thyrotropin-releasing hormone receptor signalling) were specific to large mammals.
Table 2. Functional enrichment of fast-evolving and slow-evolving genes in primates.
Note: Asterisks mark functional classes of genes that are significantly enriched among primates (Fisher's exact test after correction for multiple testing).
Table 3. Functional enrichment of fast-evolving and slow-evolving genes in large mammals.
Note: Asterisks mark functional classes of genes that are significantly enriched among large mammals (Fisher's exact test after correction for multiple testing).
Table 4. Functional enrichment of fast-evolving and slow-evolving genes in rodents.
Note: Asterisks mark functional classes of genes that are significantly enriched among rodents (Fisher's exact test after correction for multiple testing).
Comparisons to other studies
Three previous investigations used the likelihood ratio test (LRT) to compare two models and evaluated the use of Ka/Ks to identify positively-selected genes (PSGs) and their enriched functions among six species [6-8]. Our study is distinct in that we analyzed 12 species and considered more than one thousand fast-evolving genes, whereas the numbers of PSGs in those studies were at least an order of magnitude smaller, on the order of tens to hundreds. Although our definition of fast-evolving genes is not fully identical to those of the previous studies, our findings on immune-related functions in most species are consistent with theirs [6,7]. Two other categories shared among these studies are chemosensory perception, olfaction, and sensory perception among the human-vs-chimpanzee-specific functions (Table 2) and fertilization among the human-vs-cow-specific functions. This indicates that methods based on simple comparisons can yield conclusions similar to those of more complicated, parameter-rich methods.
Lopez-Bigas et al. conducted a comprehensive study of functional protein sequence divergence between human and other organisms [50]. They focused on variation at the protein level over a wide range of evolutionary distances, whereas we have focused on variation among mammals at the DNA level. Natural selection acts at several essential levels—on DNA and protein sequences as well as on the motifs, domains, catalytic centers, and structures they form [32]. Since nucleotide sequences are more variable than protein sequences and structures, DNA variation is usually used to study short-term evolution, whereas the latter are used to study long-term evolution. In our study, the major functional classes were regulatory functions (e.g., receptors) and responses to the surroundings (e.g., immunoglobulin receptor family members) among fast-evolving genes, and metabolism (e.g., protein metabolism and modification), transport (e.g., general vesicle transport), and cell structure (e.g., protein biosynthesis) among slow-evolving genes [50]. We also found developmental processes to be a major functional category of slow-evolving genes in mammals when chicken was used as the reference. This finding agrees with the previous conclusion that development-related genes are highly conserved only among mammals [50].
In addition, at the DNA level, B-cell-mediated and antibody-mediated immunity and B-cell activation were identified only in the mammalian comparisons and not in the comparison with chicken. This may reflect differences in B-cell-associated humoral responses between the bursa of Fabricius, which is unique to birds, and the bone marrow of mammals [51].
The relationship between evolutionary rate and expression level
Our analysis of expression was based on general expression profiles built from EST data for 18 human tissues (Figure 4). The expression levels of slow-evolving genes were significantly higher than those of fast-evolving genes (P-value < 2.2e-16, Wilcoxon rank sum test).
We also observed that the expression levels of intermediately-evolving genes were significantly higher than those of fast-evolving genes in most species, except for orangutan and macaque. In addition, the mean expression level was always greater than the median, suggesting that most genes are expressed at very low levels and only a small fraction are expressed at high levels [52]. These observations point to an inverse relationship between gene evolutionary rate and gene expression level in mammals, similar to a previous result reported for the yeast genome [53,54]. House-keeping [55,56], highly-expressed, and old genes [57,58] all tend to evolve slowly [59]; such genes are functionally well-connected and resistant to sequence changes (negatively selected). Tissue-specific [55,56], lowly-expressed, and new genes [57,58] tend to evolve quickly; they are often under relaxed selection and evolving toward novel functions. For example, certain immune-related genes evolve faster to cope with new or multiple pathogen attacks.
Figure 4. Expression level correlations and evolvability. S, M, and F stand for slow-evolving, intermediately-evolving, and fast-evolving genes, respectively. Expression levels are given as the number of transcripts per million (TPM). Outliers were removed to keep the plots readable.
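A minimal sketch of this expression comparison, assuming a hypothetical dictionary of per-gene TPM values derived from the EST data and hypothetical gene sets, might look like the following.

```python
from scipy.stats import ranksums
from statistics import median

def compare_expression(tpm_by_gene, slow_genes, fast_genes):
    """Compare EST-based expression (TPM) of slow- vs fast-evolving genes."""
    slow = [tpm_by_gene[g] for g in slow_genes if g in tpm_by_gene]
    fast = [tpm_by_gene[g] for g in fast_genes if g in tpm_by_gene]
    statistic, p_value = ranksums(slow, fast)   # Wilcoxon rank-sum test
    return median(slow), median(fast), p_value
```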
The shared fast-evolving and slow-evolving genes among mammals
To understand the functional relevance common to the fast-evolving or slow-evolving genes of different subgroups of mammals, we categorized the genes shared within the lineages of primates, large mammals, and rodents. There were 185, 609, and 695 fast-evolving genes shared within primates, large mammals, and rodents, respectively, and 355, 600, and 730 slow-evolving genes. However, we found only 15 fast-evolving and 72 slow-evolving genes that were shared by all nine species. This result suggests that fast-evolving and slow-evolving genes tend to be clade-, lineage- or species-specific. Nevertheless, a limited number of shared genes can still give rise to a significant number of shared functions (Table 1).
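The lineage-shared and all-species-shared counts reported above are simple set intersections; a sketch with hypothetical per-species gene-set dictionaries (here called `fast` and `slow`) follows.

```python
from functools import reduce

def shared_across(gene_sets):
    """Genes that fall in the same class (fast or slow) in every listed species."""
    sets = list(gene_sets.values())
    return reduce(set.intersection, sets) if sets else set()

# Usage sketch (species keys and the per-species gene sets are hypothetical):
primates      = ["chimpanzee", "orangutan", "macaque"]
large_mammals = ["horse", "dog", "cow"]
rodents       = ["guinea_pig", "mouse", "rat"]
# n_fast_primates = len(shared_across({s: fast[s] for s in primates}))
# n_fast_all      = len(shared_across({s: fast[s] for s in primates + large_mammals + rodents}))
# n_slow_all      = len(shared_across({s: slow[s] for s in primates + large_mammals + rodents}))
```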
Although we only compared human genes (as a reference) with those of other mammals, rather than performing all pairwise comparisons, our conclusions can readily be extended to a broader spectrum of mammals, or even to other vertebrates. To validate our analyses, we selected two representative proteins, ISG20 and RAB30 (based on orthologs from 20 and 22 mammals, respectively), from the 87 fast-/slow-evolving genes shared across the nine species to illustrate their degrees of variation and conservation (Figure 5). The contrast between the fast-evolving ISG20 (ranked 25, 71, 94, 69, 95, 128, 321, 58, 82, 280 and 423 in chimpanzee, orangutan, macaque, horse, dog, cow, guinea pig, mouse, rat, opossum and platypus, respectively) and the slow-evolving RAB30 (ranked 1, 418, 334, 117, 105, 127, 48, 49, 33, 132 and 446 in the same species, respectively) is clearly visible in the degree of residue variability [60]. These two case studies provide additional support for the reliability of our approach.
Figure 5. Three-dimensional conservation grading of ISG20 (A) and RAB30 (B). The 3-D backbone structures of ISG20 and RAB30 were retrieved from PDB entries 1WLJ and 2EW1, respectively. (A) The putative conservation grading was based on the alignment of twenty mammalian protein sequences from: Human (Homo sapiens), Chimpanzee (Pan troglodytes), Orangutan (Pongo pygmaeus), Gorilla (Gorilla gorilla), Macaque (Macaca mulatta), Cow (Bos taurus), Dog (Canis familiaris), Horse (Equus caballus), Cat (Felis catus), Guinea Pig (Cavia porcellus), Mouse (Mus musculus), Rat (Rattus norvegicus), Megabat (Pteropus vampyrus), Microbat (Myotis lucifugus), Pika (Ochotona princeps), Hyrax (Procavia capensis), Tree Shrew (Tupaia belangeri), Dolphin (Tursiops truncatus), Opossum (Monodelphis domestica), and Platypus (Ornithorhynchus anatinus). (B) The conservation grades were based on the alignment of twenty-two mammalian protein sequences from: Human (Homo sapiens), Cow (Bos taurus), Dog (Canis familiaris), Guinea Pig (Cavia porcellus), Horse (Equus caballus), Cat (Felis catus), Elephant (Loxodonta africana), Macaque (Macaca mulatta), Mouse Lemur (Microcebus murinus), Opossum (Monodelphis domestica), Mouse (Mus musculus), Microbat (Myotis lucifugus), Pika (Ochotona princeps), Platypus (Ornithorhynchus anatinus), Rabbit (Oryctolagus cuniculus), Chimpanzee (Pan troglodytes), Orangutan (Pongo pygmaeus), Hyrax (Procavia capensis), Megabat (Pteropus vampyrus), Rat (Rattus norvegicus), Tree Shrew (Tupaia belangeri), and Dolphin (Tursiops truncatus). The colour bars run from variable residues (left) to conserved residues (right). Yellow indicates residues whose conservation grades were not supported by sufficient data.
(B) These conservation grades were based on the alignment of twenty-two mammalian protein sequences from: Human (Homo sapiens), Cow (Bos taurus), Dog (Canis familiaris), Guinea Pig (Cavia porcellus), Horse (Equus caballus), Cat (Felis catus), Elephant (Loxodonta africana), Macaque (Macaca mulatta), Mouse Lemur (Microcebus murinus), Opossum (Monodelphis domestica), Mouse (Mus musculus), Microbat (Myotis lucifugus), Pika (Ochotona princeps), Platypus (Ornithorhynchus anatinus), Rabbit (Oryctolagus cuniculus), Chimpanzee (Pan troglodytes), Orangutan (Pongo pygmaeus), Hyrax (Procavia capensis), Megabat (Pteropus vampyrus), Rat (Rattus norvegicus), Tree shrew (Tupaia belangeri), Dolphin (Tursiops truncatus). The color bars, from left to right, indicate changes from variable to conserved residues. Conservation grading in yellow marks residues whose conservation degrees were not supported by sufficient data.

In this study, we carried out an evolutionary analysis of human protein-coding genes that are shared among mammals. We not only demonstrated that Ka is a useful and stable indicator for studying mammalian gene evolution, but also revealed that the rate at which a gene evolves is related to its function. In particular, we found enriched immune-related functions among both fast-evolving and slow-evolving genes, while slow-evolving genes were significantly enriched in functions related to the central nervous system. Furthermore, we observed that slow-evolving genes tended to be expressed at higher levels. Our results provide valuable insights for the functional characterization of genes and gene classes in different mammalian lineages.

Data acquisition and quality assessment

The genome data were collected from Ensembl version 53 [61] (http://www.biomart.org/; http://www.ensembl.org/): human (NCBI36), chimpanzee (CHIMP2.1), orangutan (PPYG2), macaque (MMUL1), horse (EquCab2), dog (BROADD2), cow (Btau4), guinea pig (cavPor3), mouse (NCBIM37), rat (RGSC3.4), opossum (BROADO3), platypus (OANA5), and chicken (WASHUC2). We also collected ortholog sequences of human and the other species, keeping only gene pairs marked as one-to-one matches to avoid ambiguous ortholog assignments. We used ClustalW [62] to align the human amino acid sequences with those of the other species and then translated the alignments back to the corresponding nucleotide sequences.
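The back-translation step (mapping a gapped protein alignment onto the corresponding coding sequences, as done by tools such as PAL2NAL) can be sketched as follows; the function and its inputs are illustrative and omit the error checking a production pipeline would need.

```python
# Back-translate a ClustalW protein alignment onto the corresponding CDS,
# producing a codon-level alignment suitable for Ka/Ks estimation.
# Input handling is simplified; the example sequences are hypothetical.
def back_translate(aligned_protein: str, cds: str) -> str:
    """Map gaps in the aligned protein back onto the nucleotide CDS."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    out, k = [], 0
    for aa in aligned_protein:
        if aa == "-":
            out.append("---")          # a protein gap becomes a codon gap
        else:
            out.append(codons[k])      # consume the next codon
            k += 1
    return "".join(out)

# Example: a short aligned protein and its (ungapped) coding sequence
print(back_translate("M-KL", "ATGAAGCTG"))   # -> "ATG---AAGCTG"
```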
Defining fast-evolving, intermediately-evolving, and slow-evolving genes

We estimated the non-synonymous and synonymous substitution rates of orthologs with a number of algorithms, including NG [24], LWL [25], MLWL [28], LPB [26,27], MLPB [28], YN [30], MYN [31], GY [29], and the gamma-series methods [22,23] implemented in the KaKs Calculator 2.0 tool [63] (methods are abbreviated by their authors' last-name initials; M stands for a modified version of the original method). We adopted 10% as the cut-off for defining fast-, intermediately- and slow-evolving genes in each lineage: genes were sorted by their Ka values from smallest to largest, and those corresponding to the lowest, middle, and highest 10% of Ka values were defined as slow-evolving, intermediately-evolving, and fast-evolving genes, respectively. In this procedure, NA (not applicable) values were treated as 0, because NA values were usually associated with 100% identical gene pairs, except in a few cases involving indels (insertions or deletions).
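A minimal sketch of this classification step, assuming a hypothetical per-gene table of Ka values for one human-vs-mouse comparison (the file name and column labels are placeholders):

```python
# Classify genes as slow-, intermediately- or fast-evolving from their Ka
# values in one human-vs-other-mammal comparison, using 10% cut-offs.
# The input table and its column names are hypothetical placeholders.
import pandas as pd

ka = pd.read_csv("human_vs_mouse_ka.tsv", sep="\t")      # columns: gene_id, Ka
ka["Ka"] = ka["Ka"].fillna(0.0)                           # treat NA as 0 (near-identical pairs)
ka = ka.sort_values("Ka").reset_index(drop=True)

n = len(ka)
decile = max(1, n // 10)                                  # 10% of the genes
slow = ka.iloc[:decile]                                   # lowest 10% of Ka values
fast = ka.iloc[-decile:]                                  # highest 10% of Ka values
mid_start = (n - decile) // 2
intermediate = ka.iloc[mid_start:mid_start + decile]      # middle 10% of Ka values

print(len(slow), len(intermediate), len(fast))
```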
Functional classification and other analyses

We used IDConvertor [64] to convert between different gene accession IDs, and the Protein Analysis through Evolutionary Relationships (PANTHER) online system to annotate genes at three levels: biological processes, molecular functions, and pathways [65]. Enrichment analysis was performed with Fisher's exact test combined with the Bonferroni step-down (Holm) correction for multiple testing [66], using a cut-off of 0.1 for the functional enrichment tests. The network based on fast- and slow-evolving genes was drawn with the Cytoscape software [67]. Conservation grading illustrations were created with the ConSurf server [68] after submitting protein alignments built with ClustalX [62]. The three-dimensional structures of the corresponding proteins were retrieved from the Protein Data Bank (PDB) [69]. For gene expression analysis, we used expression profiles of Expressed Sequence Tag (EST) data pooled from 18 tissues, as described previously in our published work [70].
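The enrichment test amounts to a 2x2 Fisher's exact test per functional category followed by a Holm (Bonferroni step-down) correction; a minimal sketch with made-up category counts is shown below.

```python
# Enrichment of functional categories among fast-evolving genes:
# one-sided Fisher's exact test per category, then Holm (Bonferroni
# step-down) correction, keeping categories with corrected P < 0.1.
# The category counts below are made-up illustrative numbers.
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# (category, in-set annotated, in-set not annotated,
#            background annotated, background not annotated)
counts = [
    ("immunity and defense",  40, 160,  300, 14500),
    ("signal transduction",   55, 145, 1800, 13000),
    ("protein metabolism",    20, 180, 1500, 13300),
]

pvals = [fisher_exact([[a, b], [c, d]], alternative="greater")[1]
         for _, a, b, c, d in counts]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.1, method="holm")

for (name, *_), p, q, sig in zip(counts, pvals, p_adj, reject):
    print(f"{name:25s} raw P={p:.2e} Holm P={q:.2e} enriched={sig}")
```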
The authors declare that they have no competing interests.

DW and JY conceived and designed this study; DW drafted this paper; DW, FL, and LW collected the data; DW and FL analyzed the data; SH participated in revising the paper; JY supervised the project and revised this manuscript. All authors read and approved the final version of the manuscript.

Reviewer's report 1

Anamaria Necsulea, Université de Lyon, F-69000, Lyon; Université Lyon 1; CNRS, UMR 5558, Laboratoire de Biométrie et Biologie Evolutive, F-69622, Villeurbanne, France; HELIX, Unité de recherche INRIA (nominated by Nicolas Galtier, CNRS-Université Montpellier II, Laboratoire "Genome, Populations, Interactions, Adaptation", Montpellier, France)

This manuscript attempts to assess the rate of evolution of mammalian protein-coding genes, and to extract the defining characteristics of fast and slow evolving genes. This subject has been addressed extensively in the literature, and the findings of the present manuscript are not novel. Unfortunately, the lack of novelty is not the biggest fault of this article: the methodology employed is often flawed and the text is very badly written.

In order to estimate the rates of evolution of mammalian protein-coding genes, the authors compute the Ka, Ks and Ka/Ks values for pairwise 1-1 orthologues between human and the other species in their dataset. The Ka and Ks computations are performed with several methods available in the literature, and they observe that the Ks measurement does not yield consistent results between the different methods employed. Rather than investigating in detail why this happens (the saturation problem is only briefly mentioned), the authors decide to use Ka as an estimate of the rate of evolution. This is of course correct if the rate of protein sequence evolution is of interest, but without any correction for the mutation rate, one cannot make inferences about the strength of natural selection on protein-coding genes based on Ka alone. Yet the authors use Ka as "an estimator of selection" (page 9).

Authors' response: We added a few discussion points about the reason why Ka values from multiple methods yield more consistent results than Ks values. We also changed the description "an estimator of selection" into "Ka as an estimator of evolutionary rate".

The authors then go on to compare the results obtained for the different mammals, and they infer lineage-specific accelerations based solely on the pairwise "human-other species" comparisons. This does not make sense. The authors should be aware that there are methods for the estimation of branch-specific Ka, Ks and Ka/Ks ratios that use a multiple-species sequence alignment and that take into account the underlying phylogeny (see for example PAML — perhaps the most commonly used — Z. Yang, Mol. Biol. Evol., 2007).
Authors' response: We are fully aware that the Likelihood Ratio Test (LRT) methods [71,72] are applicable to inferring positive selection on genes in specific branches (or clades), and researchers have applied these methods to different species, including mammals and others [6-8,73]. One of the objectives of our study is to compare our method, based on simple pairwise comparison between human and other mammals, with the LRT methods. We found that our method is capable of capturing the key conclusions from other methods and can be used to discover evolutionary features of lineage-specific genes (such as lineage-specific functions of large mammals). Furthermore, pairwise alignments utilize more sequence information than multiple sequence alignments do, especially when closely related (for instance, a few percent differences) and less-than-perfect sequences are aligned. The LRT methods usually require the construction of phylogenies and the comparison of two models, and they are usually parameter-rich, especially when a large number of sequences from multiple species are examined. After all, we are not here to challenge the power of the LRT methods, but to suggest a simple and efficient method as an alternative.

Finally, the manuscript is very poorly written, to the point that the meaning of the phrases is often incomprehensible. This is evident even for the title: "A method for defining evolving protein-coding genes" — evolving as opposed to what?
Authors' response: We revised the manuscript again for clarity and accuracy. We also changed the title to "A method for defining fast-evolving and slow-evolving protein-coding genes".

Comments from the second round of reviewing

I am not in the least convinced by the revision of the manuscript. The modifications to the original manuscript are only superficial, and the content remains unworthy of publication. None of the results are new. The analysis of Ka rates is now so well established that it is generally done in practical courses, for a bachelor's degree, and cannot by itself constitute the subject of a publication. Moreover, the methodology and the interpretation of the results are flawed. The authors continue to perform pairwise comparisons between human and each of the other species, and yet they discuss lineage-specific accelerations. This does not make sense. To give just one example, the authors discuss the proportion of fast-evolving genes that are 'shared among mammals'. Could it be that these genes are in fact accelerated only in the human lineage? When performing pairwise comparisons, with human as a reference, the genes that are specific to human would appear as fast-evolving in all comparisons.

Authors' response: First, what we are emphasizing here is not the ways to calculate Ka and Ks but their overall effects on data analyses, which are useful for end users, especially biologists who are eager to understand the essence of the methodology and its applications. Second, the calculations of Ka and Ks values are all relative. We have several reasons for choosing just human-to-other-mammal comparisons. The most important reason is that the human data are the best among all mammalian genomes sequenced so far. Other mammalian genomes are not yet sequenced, assembled, and annotated to the standard of the human data. The net result of choosing a shared ortholog set for all mammals, given the variable data quality, is that we would not be able to find good representatives for fast-evolving genes that share similar functional categories, since most gene annotations rely heavily on those of the human data.
Especially for extreme cases, such as fast-evolving genes, we do not anticipate that these genes themselves are shared by all or even most of the mammals, but they do share specific functional categories. The second reason why we only use human-to-other-mammal comparisons is data size. If we did an all-against-all analysis, we would have to write several other manuscripts to describe our results, and that would not be desirable at this point in time: we would first have to improve the data quality for all other sequenced mammals, except perhaps for human and mouse, which are better assembled and annotated. The last, but not least important, reason we have chosen to compare human genes to their orthologs in other mammalian species is so that we can understand the evolutionary rates of human genes first. In other words, we want to first investigate how human protein-coding genes have evolved from their ancestors in presumably distinct mammalian lineages. In addition, we carried out a mouse-centric analysis and validated most of the human-centric results in the functional categories of fast- or slow-evolving genes (Additional file 1: Table S1).
Reviewer's report 2

Subhajyoti De, Dana-Farber Cancer Institute and Harvard School of Public Health, Harvard University, Boston, USA (nominated by Sarah Teichmann, MRC Laboratory of Molecular Biology, Cambridge, United Kingdom)

The paper 'A method for defining evolving protein coding genes' by Wang et al. presents an evolutionary analysis of orthologous protein-coding genes across different species. My main concern with this paper is the lack of novelty. The main conclusions of this paper — (i) different functional classes of genes evolve differently, (ii) highly expressed genes evolve slowly and (iii) fast evolving genes often evolve in a lineage-specific manner — have already been reported comprehensively by several groups (Gerstein, Siepel, Hurst, Koonin, Drummond, Nielsen, Bustamante and many other labs). The authors merely reconfirm their findings.
Many of those previous papers are not cited either.

Authors' response: As pointed out by Dr. Claus O. Wilke, we do have a "central hypothesis" here, which is novel and valid. We are not contradicting any of the conclusions made by many others who have also applied the methods we used to analyze mammalian genomes or other multiple sequences, but merely share our surprise that the Ka calculation is unusually robust among all these methods. Nevertheless, we added more citations in the revised version as we made further comparisons with several representative publications.

I am also confused with the other conclusion of this paper — 'Ka is better than Ka/Ks and Ks for evolutionary estimation'. Ka, Ks and Ka/Ks quantify different evolutionary features, and it would be unfair to compare them directly.

Authors' response: We revised the sentence and it now reads: "Ka estimated from a diverse selection of methods has more consistent results than Ka/Ks and Ks."

In addition, many statements in that section are incorrect. For instance,

(i) "Ka/Ks and Ka are usually used to weigh the evolutionary rate for large number of genes, where the former has been used more frequently." — Ka/Ks is a measure of selection, and not used to calculate evolutionary divergence per se.

Authors' response: We have revised this sentence accordingly.

(ii) "We decided to choose Ka, an estimator of selection, rather than Ks, an indicator of random mutations for our studies" — Ka is a measure of nonsynonymous divergence and not a measure of selection.
Moreover, Ks is often influenced by sequence context (see papers by Laurence Hurst in 2007).

Authors' response: We have revised the sentences and added the citation accordingly.

(iii) "Occasionally, larger Ka/Ks values, greater than 1, have been identified, such as those in a comparative study between human and chimpanzee, perhaps due to smaller Ks (Koonin and Rogozin, 2003)" — the statement, and the paragraph, lead to an incomplete impression that all Ka/Ks > 1 in human-chimpanzee are due to small Ks and therefore not indicative of selection. Yes, it is possible that for some genes high Ka/Ks can arise by chance, but that's not the complete picture. Many genes with high Ka/Ks ratio are classic examples of positive selection (e.g. FOXP2, and also see Clark et al. Science, 2003 [8], Nielsen et al. in PLoS Biol. 2005 [6]).

Authors' response: We have revised the sentences accordingly.

Lopez-Bigas et al. studied the evolution of human protein coding genes in different eukaryotes, ranging from primates and other mammals to yeast, at the protein sequence level. They also showed that sequence similarity and Ka (or dN) are highly correlated (see supplementary information of Lopez-Bigas et al. [50]). Therefore it is not surprising that, using Ka, the authors find similar results.

Authors' response: Lopez-Bigas et al. found a negative correlation (nearly -0.7) between Conservation Score (CS) and Ka [50]. This linear correlation does not mean that the two indexes are exactly the same. As a matter of fact, the same protein may be encoded by different codons at the nucleotide level. Therefore, calculations of protein similarity and of nonsynonymous substitution rates (nonsynonymous substitutions/nonsynonymous sites) based on nucleotide substitution models may lead to different results. In addition, we did find some new functions at the DNA level (e.g. B-cell- and antibody-mediated immunity as well as B-cell activation).

Please note that gene and ortholog annotation have improved since Ensembl v53 (especially for chimpanzee, orangutan, etc).
Moreover, gene expression data for over 70 tissue types in both human and mouse are available from GNF SymAtlas, and it is pretty comprehensive.

Authors' response: We are grateful to the reviewer for the note. Actually, at the time we began this project, Ensembl version 53 (released in 2009) was the most up-to-date. We did check the newer versions, and the methodology used for database construction has not changed. The only things that have changed are a few updated genome assemblies, which would only result in incremental improvements on a negligible fraction of the genes we analyzed here. We used previously published procedures to select Expressed Sequence Tag (EST) data from 18 representative tissues (referring to major anatomic systems), and succeeded in applying these data to define housekeeping genes [56,70] and in minimal-intron-related studies [74]. It is rather unfortunate that the current RNA-seq data do not yet cover enough tissue samples. In addition, the housekeeping genes we defined seem to hold up very well in our recent analysis with a limited number of tissue samples (around 10; data not shown).

The authors calculated Ka, Ks, Ka/Ks using several different algorithms and found that the results do not exactly overlap, i.e. the shared gene ratio is not 100%.
Perhaps it would be interesting to evaluate the performance of those algorithms, and check which ones provide more consistent results and why.

Authors' response: In the computer simulations of our previous studies, we found that Ka/Ks-calculating methods based on similar substitution models (capturing similar evolutionary features) often yielded similar results [23,75]. In this study, however, we were surprised to find consistent Ka values from this diverse group of methods. We added new analyses and discussions in the revised manuscript concerning the causative factors of the inconsistency between different methods' estimates of Ka and Ks.
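As a minimal illustration of such a consistency check, one could correlate the per-gene Ka estimates of two methods and measure the overlap of their top-10% gene sets; the merged results table and its column names below are hypothetical placeholders.

```python
# Compare per-gene Ka estimates from two calculation methods:
# rank correlation plus overlap of the top-10% (fast-evolving) gene sets.
# The merged results table and its column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("ka_by_method.tsv", sep="\t")     # columns: gene_id, Ka_NG, Ka_YN
rho, p = spearmanr(df["Ka_NG"], df["Ka_YN"], nan_policy="omit")

top_n = len(df) // 10
top_ng = set(df.nlargest(top_n, "Ka_NG")["gene_id"])
top_yn = set(df.nlargest(top_n, "Ka_YN")["gene_id"])
shared_ratio = len(top_ng & top_yn) / top_n

print(f"Spearman rho = {rho:.3f} (P = {p:.2g})")
print(f"shared ratio of top-10% genes: {shared_ratio:.2%}")
```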
In addition, the house-keeping genes we defined seem holding very well in our recent analysis with limited number of tissue samples (around 10; data not shown).\nThe authors calculated Ka, Ks, Ka/Ks using several different algorithms and found that results do not exactly overlap i.e. shared gene ratio is not 100%. Perhaps it would be interesting to evaluate the performance of those algorithms, check which ones provide more consistent results and why.\nWe are grateful to the reviewer for the note. Actually, at the time we began this project, Ensembl version 53 (released in 2009) was the most up-to-date. We did check the newer versions and the methodology used for database construction has not been changed. The only things that have changed are a few up-to-date genome assemblies which will only result in incremental improvements on a negligible fraction of the genes that we analyzed here. We used previously published procedures to select Expressed Sequence Tag (EST) data from 18 representative tissues (referring to major anatomic systems and succeeded in applying the data to define housekeeping genes [56,70]and minimal introns related studies [74]. It is rather unfortunate that the current RNA-seq data have not covered enough tissue samples yet. In addition, the house-keeping genes we defined seem holding very well in our recent analysis with limited number of tissue samples (around 10; data not shown).\nThe authors calculated Ka, Ks, Ka/Ks using several different algorithms and found that results do not exactly overlap i.e. shared gene ratio is not 100%. Perhaps it would be interesting to evaluate the performance of those algorithms, check which ones provide more consistent results and why.\n[SUBTITLE] Authors' response [SUBSECTION] In the computer simulations of our previous studies, we have found that the Ka/Ks-calculating methods based on similar substitution models (capturing similar evolutionary features) often yielded similar results [23,75]. In this study, however, we were surprised to find consistent Ka values from this diverse group of methods. We added new analyses and discussions in the revised manuscript concerning the causative factors of inconsistency between different methods' estimates of Ka and Ks.\nIn the computer simulations of our previous studies, we have found that the Ka/Ks-calculating methods based on similar substitution models (capturing similar evolutionary features) often yielded similar results [23,75]. In this study, however, we were surprised to find consistent Ka values from this diverse group of methods. We added new analyses and discussions in the revised manuscript concerning the causative factors of inconsistency between different methods' estimates of Ka and Ks.\n[SUBTITLE] Reviewer's report 3 [SUBSECTION] Claus O. Wilke, Center for Computational Biology and Bioinformatics and Institute for Cell and Molecular Biology, University of Texas, Austin, Texas, United States\nThe authors study the evolutionary rates of mammalian genes using eight different methods of evolutionary-rate calculation. They conclude that Ka is more consistently estimated by these different methods than Ks and that therefore Ka will be more informative in many contexts than Ks or Ka/Ks.\nWhile I think that the paper makes a valuable contribution, I feel that the impact of the paper has been diluted by the authors' choice to actually combine two separate parts (with separate messages) into one paper. 
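To make the terms in this exchange concrete, the display below restates generic, textbook-style definitions of Ka, Ks and their ratio in a Nei-Gojobori/Jukes-Cantor style. It is an illustrative sketch of what these quantities measure, not a description of the specific substitution models compared in the manuscript. Here N and S denote the numbers of nonsynonymous and synonymous sites in the aligned coding sequences, and N_d and S_d the observed nonsynonymous and synonymous differences.

\[
p_N = \frac{N_d}{N}, \qquad p_S = \frac{S_d}{S}
\]
\[
K_a = -\tfrac{3}{4}\,\ln\!\left(1 - \tfrac{4}{3}\,p_N\right), \qquad
K_s = -\tfrac{3}{4}\,\ln\!\left(1 - \tfrac{4}{3}\,p_S\right), \qquad
\omega = \frac{K_a}{K_s}
\]

In this formulation Ka and Ks are divergence estimates (substitutions per nonsynonymous or synonymous site), while only the ratio omega is read as a signal of selection. Because synonymous sites accumulate changes faster, p_S approaches the saturation limit of the correction sooner, which is one standard reason Ks estimates diverge across methods at larger evolutionary distances.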
Reviewer's report 3

Claus O. Wilke, Center for Computational Biology and Bioinformatics and Institute for Cell and Molecular Biology, University of Texas, Austin, Texas, United States

The authors study the evolutionary rates of mammalian genes using eight different methods of evolutionary-rate calculation. They conclude that Ka is more consistently estimated by these different methods than Ks, and that Ka will therefore be more informative in many contexts than Ks or Ka/Ks.

While I think that the paper makes a valuable contribution, I feel that its impact has been diluted by the authors' choice to combine two separate parts (with separate messages) into one paper. The first part (which I find valuable) is the analysis of the consistency of rate estimations by different methods. The second part (of whose value I'm less convinced) looks at the functional classification of genes evolving at different rates.

Authors' response: The point is well taken. In the second part we only showed selected examples (perhaps just the tip of the iceberg) of possible applications of the method. We have weakened some of our conclusions in the second part and explained the weaknesses of the data set itself (see the response to Reviewer 1). We are in the process of carrying out a thorough analysis of genes classified by Ka values among mammalian genomes and of pinpointing their functional roles in gene interaction networks.

Specific comments:

1. The first part is improved in the revision, but still not entirely satisfying. I don't really get a good take-home message from this part. Which method should I use to estimate evolutionary rates? Are there specific reasons why some methods give different results than others? Maybe the differences in Ks results simply reflect improvements in estimation methods over time? Note that the model abbreviations (NG, LWL, MLWL, etc.) are never defined.

Authors' response: We have continued to improve the writing in the current revision. The take-home messages of the first part are two-fold. First, Ka calculation is more consistent than Ks calculation regardless of which methods are used. Second, depending on the evolutionary distance between the sequences of the two species evaluated, one can choose more or less complex models for calculating Ka and Ks, but they give broadly similar results for Ka and not for Ks. The reasons why Ks values vary between methods are complicated, as discussed in the manuscript. We have added a note explaining the naming conventions for the different methods.

2. I remain unconvinced by the second part. My most important criticism, that the functional characterization is confounded by expression level, has not been substantially addressed.

Authors' response: We cited eight consecutive references ([52] to [59]) in which this issue has been discussed intensively.

3. I'm not convinced that the title faithfully reflects the contents of the paper. What is the method for defining fast-evolving and slow-evolving protein-coding genes? If the method is simply "Use Ka", I'd argue that people have done that before.

Authors' response: We have changed the title to "Nonsynonymous substitution rate (Ka) is a relatively consistent parameter for defining fast-evolving and slow-evolving protein-coding genes". We have searched the related literature carefully and have not found publications that have carried out such a thorough evaluation of the methods.
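As a concrete, minimal illustration of the take-home message discussed in this exchange (cross-method consistency of Ka, and using Ka alone to bin genes into fast- and slow-evolving sets), the following Python sketch shows one plausible way to compute a shared-gene ratio between the top-Ka gene sets of two methods and to classify genes by Ka quantiles. The gene names, Ka values, method labels and the 10%/quartile cut-offs are invented for demonstration and are not taken from the manuscript.

"""
Illustrative sketch only: given per-gene Ka estimates from several methods,
(1) quantify cross-method agreement as the overlap of their top-Ka gene sets
("shared gene ratio"), and (2) bin genes into slow/intermediate/fast classes
by Ka quantiles. All names, values and thresholds below are hypothetical.
"""

from itertools import combinations


def top_fraction(ka_by_gene, fraction=0.10):
    """Return the set of genes in the top `fraction` ranked by Ka."""
    ranked = sorted(ka_by_gene, key=ka_by_gene.get, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return set(ranked[:k])


def shared_gene_ratio(genes_a, genes_b):
    """Jaccard overlap between two gene sets (1.0 means identical sets)."""
    union = genes_a | genes_b
    return len(genes_a & genes_b) / len(union) if union else 1.0


def classify_by_ka_quantile(ka_by_gene, low_q=0.25, high_q=0.75):
    """Label each gene 'slow', 'intermediate' or 'fast' by Ka quantile cut-offs."""
    values = sorted(ka_by_gene.values())
    lo = values[int(low_q * (len(values) - 1))]
    hi = values[int(high_q * (len(values) - 1))]
    return {
        gene: "slow" if ka <= lo else "fast" if ka >= hi else "intermediate"
        for gene, ka in ka_by_gene.items()
    }


if __name__ == "__main__":
    # Toy Ka estimates for the same genes from two hypothetical methods.
    ka_by_method = {
        "method_1": {"geneA": 0.002, "geneB": 0.150, "geneC": 0.040, "geneD": 0.300},
        "method_2": {"geneA": 0.003, "geneB": 0.140, "geneC": 0.050, "geneD": 0.280},
    }
    for m1, m2 in combinations(ka_by_method, 2):
        ratio = shared_gene_ratio(
            top_fraction(ka_by_method[m1]), top_fraction(ka_by_method[m2])
        )
        print(f"shared top-Ka gene ratio, {m1} vs {m2}: {ratio:.2f}")
    print(classify_by_ka_quantile(ka_by_method["method_1"]))

With real data one would feed in genome-wide Ka tables from each method; the point of the sketch is only that both the "shared gene ratio" mentioned by the reviewer and the fast/slow classification operate on Ka values alone.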
Reviewer's report 1

Anamaria Necsulea, Université de Lyon, F-69000, Lyon; Université Lyon 1; CNRS, UMR 5558, Laboratoire de Biométrie et Biologie Evolutive, F-69622, Villeurbanne, France; HELIX, Unité de recherche INRIA (nominated by Nicolas Galtier, CNRS-Université Montpellier II, Laboratoire "Genome, Populations, Interactions, Adaptation", Montpellier, France)

This manuscript attempts to assess the rate of evolution of mammalian protein-coding genes and to extract the defining characteristics of fast- and slow-evolving genes. This subject has been addressed extensively in the literature, and the findings of the present manuscript are not novel. Unfortunately, the lack of novelty is not the biggest fault of this article: the methodology employed is often flawed and the text is very badly written.

In order to estimate the rates of evolution of mammalian protein-coding genes, the authors compute the Ka, Ks and Ka/Ks values for pairwise 1-1 orthologues between human and the other species in their dataset. The Ka and Ks computations are performed with several methods available in the literature, and the authors observe that the Ks measurement does not yield consistent results between the different methods employed. Rather than investigating in detail why this happens (the saturation problem is only briefly mentioned), the authors decide to use Ka as an estimate of the rate of evolution. This is of course correct if the rate of protein sequence evolution is of interest, but without any correction for the mutation rate, one cannot make inferences about the strength of natural selection on protein-coding genes based on Ka alone. Yet the authors use Ka as "an estimator of selection" (page 9).

Authors' response: We have added a few discussion points on why Ka values from multiple methods yield more consistent results than Ks values. We have also changed the description "an estimator of selection" to "Ka as an estimator of evolutionary rate".

The authors then go on to compare the results obtained for the different mammals, and they infer lineage-specific accelerations based solely on the pairwise "human-other species" comparisons. This does not make sense. The authors should be aware that there are methods for the estimation of branch-specific Ka, Ks and Ka/Ks ratios that use a multiple-species sequence alignment and take into account the underlying phylogeny (see for example PAML, perhaps the most commonly used: Z. Yang, Mol. Biol. Evol., 2007).

Authors' response: We are fully aware that the likelihood ratio test (LRT) methods [71,72] can be used to infer positive selection on genes in specific branches (or clades), and that researchers have applied these methods to many species, including mammals [6-8,73]. One objective of our study was to compare our method, based on simple pairwise comparison between human and other mammals, with the LRT methods. We found that our method captures the key conclusions of the other approaches and can be used to discover evolutionary features of lineage-specific genes (such as lineage-specific functions of large mammals). Furthermore, pairwise alignments utilise more sequence information than multiple sequence alignments do, especially when closely related (for instance, differing by only a few percent) and less-than-perfect sequences are aligned. The LRT methods usually require the construction of phylogenies and the comparison of two models, and they are usually parameter-rich, especially when a large number of sequences from multiple species are examined. After all, we are not here to challenge the power of the LRT methods, but to suggest a simple and efficient alternative.

Finally, the manuscript is very poorly written, to the point that the meaning of the phrases is often incomprehensible. This is evident even for the title, "A method for defining evolving protein-coding genes": evolving as opposed to what?

Authors' response: We have revised the manuscript again for clarity and accuracy. We have also changed the title to "A method for defining fast-evolving and slow-evolving protein-coding genes".

Comments from the second round of reviewing:

I am not in the least convinced by the revision of the manuscript. The modifications to the original manuscript are only superficial, and the content remains unworthy of publication. None of the results are new. The analysis of Ka rates is now so well established that it is generally done in practical courses for a bachelor's degree, and cannot by itself constitute the subject of a publication. Moreover, the methodology and the interpretation of the results are flawed. The authors continue to perform pairwise comparisons between human and each of the other species, and yet they discuss lineage-specific accelerations. This does not make sense. To give just one example, the authors discuss the proportion of fast-evolving genes that are "shared among mammals". Could it be that these genes are in fact accelerated only in the human lineage? When performing pairwise comparisons with human as a reference, genes that are specific to human would appear as fast-evolving in all comparisons.

Authors' response: First, what we emphasise here is not how Ka and Ks are calculated but their overall effects on data analyses, which is useful for end users, especially biologists who are eager to understand the essence of the methodology and its applications. Second, the calculated Ka and Ks values are all relative. We had several reasons for choosing only human-to-other-mammal comparisons. The most important is that the human data are the best among all mammalian genomes sequenced so far; other mammalian genomes have not yet been sequenced, assembled and annotated to the standard of the human data. The net result of choosing a shared ortholog set for all mammals, given the variable data quality, is that we would not be able to find good representatives of fast-evolving genes that share functional categories, because most gene annotations rely heavily on those of the human data. Especially for extreme cases such as fast-evolving genes, we do not anticipate that the genes themselves are shared by all or even most mammals, but they do share specific functional categories. The second reason for using only human-to-other-mammal comparisons is data size: an all-against-all analysis would require several further manuscripts to describe and would first require improving the data quality for all other sequenced mammals, except perhaps human and mouse, which are better assembled and annotated. The last, but not least, reason for comparing human genes to their orthologs in other mammalian species is to understand the evolutionary rates of human genes first; in other words, we wanted to investigate how human protein-coding genes have evolved from their ancestors in other, presumably distinct, mammalian lineages. In addition, we carried out a mouse-centric analysis and validated most of the human-centric results for the functional categories of fast- and slow-evolving genes (Additional file 1: Table S1).
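For background on the branch-model alternative raised in this exchange: the likelihood ratio tests referred to compare a null codon model with a nested alternative (for example, one allowing a separate Ka/Ks on a foreground branch) using the standard statistic below. This is generic background on how such tests are framed, not a description of any analysis performed in the manuscript.

\[
\Lambda = 2\left(\ell_1 - \ell_0\right) \sim \chi^2_{k_1 - k_0} \quad \text{(approximately, under the null)}
\]

Here ell_0 and ell_1 are the maximised log-likelihoods of the null and alternative models and k_0, k_1 their numbers of free parameters; the pairwise approach discussed above avoids fitting these extra parameters, which is the trade-off the authors describe.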
Reviewer's report 2

Subhajyoti De, Dana-Farber Cancer Institute and Harvard School of Public Health, Harvard University, Boston, USA (nominated by Sarah Teichmann, MRC Laboratory of Molecular Biology, Cambridge, United Kingdom)

The paper "A method for defining evolving protein coding genes" by Wang et al. presents an evolutionary analysis of orthologous protein-coding genes across different species. My main concern with this paper is the lack of novelty. The main conclusions of this paper, namely (i) that different functional classes of genes evolve differently, (ii) that highly expressed genes evolve slowly and (iii) that fast-evolving genes often evolve in a lineage-specific manner, have already been reported comprehensively by several groups (the Gerstein, Siepel, Hurst, Koonin, Drummond, Nielsen, Bustamante and many other labs). The authors merely reconfirm their findings. Many of those previous papers are not cited either.

Authors' response: As pointed out by Dr. Claus O. Wilke, we do have a "central hypothesis" here, which is novel and valid. We are not contradicting any of the conclusions made by the many others who have applied these methods to mammalian genomes or other sequence sets; we merely share our surprise that the Ka calculation is unusually robust across all these methods. Nevertheless, we have added more citations in the revised version and made further comparisons with several representative publications.
If the method is simply \"Use Ka\", I'd argue that people have done that before.", "We have changed the title to \"Nonsynonymous substitution rate (Ka) is a relatively consistent parameter for defining fast-evolving and slow-evolving protein-coding genes\". We have searched the related literature carefully and have not found publications that have done such thorough evaluations on the methods.", "Estimation of the sequence alignment quality (Figure S1), boxplots of Ka distributions in twelve species (Figure S2) and selected common functional categories of fast-evolving and slow-evolving genes based on mouse-centric analyses (Table S1).\nClick here for file" ]
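The take-home message above concerns how Ka and Ks are counted under simpler versus more complex models. As a rough, self-contained illustration of the kind of counting the simpler (NG-style) approaches perform, the following Python sketch estimates pairwise Ka and Ks for two aligned, gap-free coding sequences using site counting, pathway averaging and a Jukes-Cantor correction. It is a deliberately simplified toy, not the implementation evaluated in the manuscript; its handling of stop codons and multi-hit codons is naive, and all function names are invented here.

```python
from itertools import permutations
from math import log

# Standard genetic code, built from the usual TCAG ordering.
BASES = "TCAG"
AMINO = ("FFLLSSSSYY**CC*W" "LLLLPPPPHHQQRRRR"
         "IIIMTTTTNNKKSSRR" "VVVVAAAADDEEGGGG")
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def syn_sites(codon):
    """Fraction of a codon's three positions that are synonymous.

    Simplification: changes that create a stop codon count as nonsynonymous.
    """
    s = 0.0
    for i in range(3):
        for b in BASES:
            if b == codon[i]:
                continue
            mutant = codon[:i] + b + codon[i + 1:]
            if CODON_TABLE[mutant] == CODON_TABLE[codon]:
                s += 1.0 / 3.0
    return s

def count_diffs(c1, c2):
    """Synonymous/nonsynonymous differences, averaged over mutational pathways."""
    positions = [i for i in range(3) if c1[i] != c2[i]]
    if not positions:
        return 0.0, 0.0
    syn = nonsyn = 0.0
    paths = list(permutations(positions))
    for path in paths:
        current = list(c1)
        for i in path:
            step = current[:]
            step[i] = c2[i]
            if CODON_TABLE["".join(current)] == CODON_TABLE["".join(step)]:
                syn += 1
            else:
                nonsyn += 1
            current = step
    return syn / len(paths), nonsyn / len(paths)

def ka_ks(seq1, seq2):
    """Toy NG-style pairwise Ka and Ks for aligned, gap-free coding sequences."""
    assert len(seq1) == len(seq2) and len(seq1) % 3 == 0
    S = N = Sd = Nd = 0.0
    for i in range(0, len(seq1), 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if CODON_TABLE[c1] == "*" or CODON_TABLE[c2] == "*":
            continue  # skip stop codons entirely
        s = (syn_sites(c1) + syn_sites(c2)) / 2.0
        S += s
        N += 3.0 - s
        sd, nd = count_diffs(c1, c2)
        Sd += sd
        Nd += nd

    def jukes_cantor(p):
        # Multiple-hit correction applied to the raw proportions.
        return -0.75 * log(1.0 - 4.0 * p / 3.0)

    return jukes_cantor(Nd / N), jukes_cantor(Sd / S)  # (Ka, Ks)

if __name__ == "__main__":
    a = "ATGGCTAAATGA"   # Met-Ala-Lys-Stop
    b = "ATGGCCAGATGA"   # Met-Ala-Arg-Stop
    print(ka_ks(a, b))
```

More complex models additionally correct for transition/transversion bias and codon usage, which is where, as noted in the response above, Ks estimates tend to diverge between methods while Ka stays comparatively stable.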
[ null, null, null, null, null, null, null, null, null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Tuberculous meningitis in Denmark: a review of 50 cases.
21342524
Tuberculous meningitis is the most severe manifestation of extrapulmonary tuberculosis with a high mortality rate and a high rate of sequelae among survivors. The aim of this study is to assess the current epidemiology, clinical features, diagnostic procedures, treatment and outcome in patients with tuberculous meningitis in Denmark, a country with a low tuberculosis incidence.
BACKGROUND
A nationwide retrospective study was conducted, comprising all patients notified with tuberculous meningitis (TBM) in Denmark from 2000-2008. Medical records were reviewed using a standardised protocol.
METHODS
Fifty patients, including 12 paediatric patients, were identified. 78% of the patients were immigrants from countries of high tuberculosis endemicity. 64% of all patients had a pre-existing immunosuppressive condition; 10% were HIV positive, 48% were HIV seronegative and 42% had an unknown HIV status. Median symptom duration before admission was 14 days in the Danish patient population and 20 days in the immigrant group. Biochemical analysis of cerebrospinal fluid (CSF) samples revealed pleocytosis in 90% with lymphocyte predominance in 66%. Protein levels were elevated in 86%. The most common findings on neuro-radiological imaging were basal meningeal enhancement, tuberculomas and hydrocephalus. Lumbar puncture was performed on 42 patients; 31 of these specimens (74%) had a positive CSF culture for mycobacteria and 9.5% were smear positive for acid-fast bacilli. The overall mortality rate was 19% and 48% of the remaining patients had neurological sequelae of varying degree.
RESULTS
TBM is a rare but severe manifestation of extrapulmonary TB in Denmark. To improve treatment outcome, the clinician must be prepared to treat empirically whenever TBM is suspected.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Cerebrospinal Fluid", "Child", "Child, Preschool", "Denmark", "Female", "Humans", "Infant", "Male", "Meninges", "Middle Aged", "Mycobacterium tuberculosis", "Radiography", "Retrospective Studies", "Tuberculosis, Meningeal", "Young Adult" ]
3050726
null
null
Methods
To initiate this retrospective study we obtained data on notified TBM cases from the National TB Notification Register between January 2000 and December 2008. The medical records of all patients, including children, with TBM were retrieved from Paediatric Departments and Departments of Infectious Diseases at five major University Hospitals in Denmark. Of note, in Denmark all cases of tuberculous meningitis are treated at university hospitals as required by national guidelines. Permission was granted from the Danish Data Protection Agency. Demographic data, medical history, clinical presentation at admission, radiological and microbiological data was reviewed along with the clinical course, treatment and outcome. All microbiological data was retrieved from The International Reference Laboratory of Mycobacteriology at Statens Serum Institut.
null
null
null
null
[ "Background", "Results", "Demography", "Clinical presentation", "Cerebrospinal fluid characteristics", "Radiology", "Microbiology", "Treatment and outcome", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Tuberculosis (TB) of the central nervous system is the most severe manifestation of extrapulmonary TB and constitutes approximately 1% of all new cases annually [1]. Although the incidence of tuberculous meningitis (TBM) is low in high-income countries, it remains one of the most severe and eventually fatal infectious conditions - especially in times of increasing use of immunosuppressive drugs, increased access to transplantation (also for patients from TB endemic countries), changing HIV patterns and increasing prevalence of type II diabetes.\nTBM is characterised by a slowly progressing granulomatous inflammation of the basal meninges. This inflammatory reaction can lead to a number of complications, such as hydrocephalus, cerebral vascular infarction, cranial nerve palsy and, if left untreated, death. Rapid diagnosis and initiation of treatment is therefore necessary to reduce the high mortality and severe sequelae associated with the disease. Diagnosing TBM can be difficult as the symptoms are unspecific and mimic those of meningitis caused by other microbiological agents or other cerebrovascular events.\nDenmark has a low TB incidence of 6.7/100.000 with 367 notified cases in 2008 [2]; yet, the country has had fluctuations in the annually reported number of TB cases throughout the last 20 years. In 1986 there was an all-time low of 299 new cases [3] and in 2000 there was a peak of 548 new cases [4], equalling an increase of 83% in 14 years. This increase was primarily due to an increase in immigration from high-incidence countries [3,5].", "[SUBTITLE] Demography [SUBSECTION] 50 cases of cerebral TB were notified throughout the period; out of these 15 (30%) had tuberculomas. A total of 12 paediatric patients (aged 1-15 years) were found. All results have been divided into two sub-groups - one consisting of native Danish patients, the other of immigrants. The immigrant group of patients in this study includes patients born abroad and their children up to 25 years of age, as well as people from Greenland living in Denmark.\nA total of 11 patients (22%) were native Danes, the remaining 39 (78%) were immigrants from varying countries; three from Greenland, three from Ex-Yugoslavia, four from Sub-Saharan African countries with high HIV prevalence (Ethiopia, Uganda, South Africa), 22 from the Eastern Mediterranean region as defined by the World Health Organisation, WHO, (primarily Morocco, Somalia, Pakistan), and seven from Asia (China, India, Indonesia, Vietnam, Cambodia). 13 out of the 15 (87%) tuberculoma patients were immigrants.\nThere was an equal sex distribution in the two groups. Median age for the Danish patients was 17 years (range 1-52) and for the immigrant group 34 years (range 1-71).\nFurther demographic data are specified in Table 1.\nDemographic data, clinical presentation and outcome\n*n = 37 in the immigrant group as two patients were lost for follow-up\nSix Danish patients (55%) had pre-existing immunosuppressive diseases or conditions like malignancy or diabetes. Three patients were taking immunosuppressive medication due to organ transplant. Some were recorded as having a drug and/or alcohol abuse, and one patient was known HIV positive at the time of TBM diagnosis.\nSimilar results were found in the immigrant group with 26 out of 39 patients (67%) having one or more dispositions. Among these patients, four were HIV positive, two had kidney transplants, six had diabetes and four were recorded as having an alcohol abuse. 
Two patients had a family member who had recently been treated for TB.\nWithin the two patient populations, a total of 24 patients (48%) were HIV seronegative and 21 had an unknown HIV status (42%).\n50 cases of cerebral TB were notified throughout the period; out of these 15 (30%) had tuberculomas. A total of 12 paediatric patients (aged 1-15 years) were found. All results have been divided into two sub-groups - one consisting of native Danish patients, the other of immigrants. The immigrant group of patients in this study includes patients born abroad and their children up to 25 years of age, as well as people from Greenland living in Denmark.\nA total of 11 patients (22%) were native Danes, the remaining 39 (78%) were immigrants from varying countries; three from Greenland, three from Ex-Yugoslavia, four from Sub-Saharan African countries with high HIV prevalence (Ethiopia, Uganda, South Africa), 22 from the Eastern Mediterranean region as defined by the World Health Organisation, WHO, (primarily Morocco, Somalia, Pakistan), and seven from Asia (China, India, Indonesia, Vietnam, Cambodia). 13 out of the 15 (87%) tuberculoma patients were immigrants.\nThere was an equal sex distribution in the two groups. Median age for the Danish patients was 17 years (range 1-52) and for the immigrant group 34 years (range 1-71).\nFurther demographic data are specified in Table 1.\nDemographic data, clinical presentation and outcome\n*n = 37 in the immigrant group as two patients were lost for follow-up\nSix Danish patients (55%) had pre-existing immunosuppressive diseases or conditions like malignancy or diabetes. Three patients were taking immunosuppressive medication due to organ transplant. Some were recorded as having a drug and/or alcohol abuse, and one patient was known HIV positive at the time of TBM diagnosis.\nSimilar results were found in the immigrant group with 26 out of 39 patients (67%) having one or more dispositions. Among these patients, four were HIV positive, two had kidney transplants, six had diabetes and four were recorded as having an alcohol abuse. Two patients had a family member who had recently been treated for TB.\nWithin the two patient populations, a total of 24 patients (48%) were HIV seronegative and 21 had an unknown HIV status (42%).\n[SUBTITLE] Clinical presentation [SUBSECTION] Over half the patients presented with fever and headache. The classic sign of meningeal stiffness was found in less than half the patients. Neurological signs upon admission was affected in the majority of cases: 52% of the patients were described with a general altered mental state (i.e. confusion), 36% had cranial nerve paralysis (predominantly facial nerve or abducens nerve affection) and 16% presented with generalised convulsions upon hospitalization (Table 1).\nFive of the immigrant patients were initially admitted at neurology departments with headaches, convulsions and altered cerebral conditions. Due to symptoms of increased intra-cranial pressure, these patients underwent CT scanning and tumour like processes were identified. A lumbar puncture was never performed on any of these patients. They were transferred to neurosurgery departments for brain biopsy. 
Diagnosis was achieved based on histology and culture results.\nSymptom duration before admission ranged greatly with a median of 14 days in the Danish patients and 20 days in the immigrant group.\nThe tuberculin skin test was not performed routinely in all patients; only 20 results were available for this study; 15 (75%) were positive. BCG scar status had not been noted in any of the reviewed patient files.\nOver half the patients presented with fever and headache. The classic sign of meningeal stiffness was found in less than half the patients. Neurological signs upon admission was affected in the majority of cases: 52% of the patients were described with a general altered mental state (i.e. confusion), 36% had cranial nerve paralysis (predominantly facial nerve or abducens nerve affection) and 16% presented with generalised convulsions upon hospitalization (Table 1).\nFive of the immigrant patients were initially admitted at neurology departments with headaches, convulsions and altered cerebral conditions. Due to symptoms of increased intra-cranial pressure, these patients underwent CT scanning and tumour like processes were identified. A lumbar puncture was never performed on any of these patients. They were transferred to neurosurgery departments for brain biopsy. Diagnosis was achieved based on histology and culture results.\nSymptom duration before admission ranged greatly with a median of 14 days in the Danish patients and 20 days in the immigrant group.\nThe tuberculin skin test was not performed routinely in all patients; only 20 results were available for this study; 15 (75%) were positive. BCG scar status had not been noted in any of the reviewed patient files.\n[SUBTITLE] Cerebrospinal fluid characteristics [SUBSECTION] A total of 42 patients had a lumbar puncture performed at the time of admission. The remaining eight patients all had tuberculomas and did not have lumbar puncture performed due to symptoms of increased intra-cranial pressure. The white cell count (WCC) was elevated in 38 patients (90%) with lymphocyte predominance in 66%. Protein levels were elevated in 36 patients (86%). Eleven patients had a protein level above 3 g/L. The glucose content was below the minimum reference value in 20 patients (48%). CSF:blood glucose ratios could be calculated in a total of 26 patients; this ranged from 0,09 - 0,71. The ratio was below 0,3 in 50% of these patients (Table 2). Eight out of the nine fatal cases had pleocytosis and elevated protein levels. Only one of the fatal cases had a normal WCC but the CSF had elevated protein content; diagnosis was in this case established by a positive CSF culture.\nCerebrospinal fluid results in 42 patients\nWCC = white cell count; CSF = cerebrospinal fluid; NAA = nucleic acid amplification\n*Data from 7 Danes and 19 immigrants\n** Data from 6 Danes and 30 immigrants\nA total of 42 patients had a lumbar puncture performed at the time of admission. The remaining eight patients all had tuberculomas and did not have lumbar puncture performed due to symptoms of increased intra-cranial pressure. The white cell count (WCC) was elevated in 38 patients (90%) with lymphocyte predominance in 66%. Protein levels were elevated in 36 patients (86%). Eleven patients had a protein level above 3 g/L. The glucose content was below the minimum reference value in 20 patients (48%). CSF:blood glucose ratios could be calculated in a total of 26 patients; this ranged from 0,09 - 0,71. The ratio was below 0,3 in 50% of these patients (Table 2). 
Eight out of the nine fatal cases had pleocytosis and elevated protein levels. Only one of the fatal cases had a normal WCC but the CSF had elevated protein content; diagnosis was in this case established by a positive CSF culture.\nCerebrospinal fluid results in 42 patients\nWCC = white cell count; CSF = cerebrospinal fluid; NAA = nucleic acid amplification\n*Data from 7 Danes and 19 immigrants\n** Data from 6 Danes and 30 immigrants\n[SUBTITLE] Radiology [SUBSECTION] All patients were investigated with either cranial CT or MR scans at time of admission; 22 patients had both scans performed (Table 3). Cranial MR scans showed basal meningeal enhancement in six Danish patients and in 10 immigrants. Tuberculomas were more frequently found in the immigrant patient group. Eight patients with normal CT scans went on to have a cranial MR scan performed and this contributed diagnostically in showing basal meningeal enhancement in all. Eight patients had the combined finding of hydrocephalus and fresh infarct on both CT and MR scans. Only one patient presented both a normal CT and MR scan.\nNeuroradiological findings of 22 patients where both MR & CT was performed\nChest X-rays were performed in 48 patients and was found abnormal in 26 cases (54%). Concomitant pulmonary TB was microbiologically verified in 23 cases.\nAll patients were investigated with either cranial CT or MR scans at time of admission; 22 patients had both scans performed (Table 3). Cranial MR scans showed basal meningeal enhancement in six Danish patients and in 10 immigrants. Tuberculomas were more frequently found in the immigrant patient group. Eight patients with normal CT scans went on to have a cranial MR scan performed and this contributed diagnostically in showing basal meningeal enhancement in all. Eight patients had the combined finding of hydrocephalus and fresh infarct on both CT and MR scans. Only one patient presented both a normal CT and MR scan.\nNeuroradiological findings of 22 patients where both MR & CT was performed\nChest X-rays were performed in 48 patients and was found abnormal in 26 cases (54%). Concomitant pulmonary TB was microbiologically verified in 23 cases.\n[SUBTITLE] Microbiology [SUBSECTION] Results from microbiological analyses performed on CSF are shown in Table 2. In 41 patients (82%) the diagnosis was verified by culture. The number of days before 31 CSF cultures became positive ranged between 7 to 46 days (median 20 days). Nucleic acid amplification (NAA) test was positive for M. tuberculosis complex in one sample, but the culture remained negative. In five cases, microbiological diagnosis was obtained through cultures of biopsies from tuberculomas. Furthermore, five patients had positive cultures of non-cerebral specimens (four respiratory and one tissue biopsy from lumbar vertebrae). In eight patients (16%) the diagnosis could not be verified by culture.\nDrug susceptibility testing for first line drugs was performed for all 41 culture positive cases. Two isolates were resistant to isoniazid (4%) and one isolate (2%) had resistance to both rifampicin and isoniazid, thus multi-drug resistance. No other polyresistance or extensively drug resistant isolates was identified.\nResults from microbiological analyses performed on CSF are shown in Table 2. In 41 patients (82%) the diagnosis was verified by culture. The number of days before 31 CSF cultures became positive ranged between 7 to 46 days (median 20 days). Nucleic acid amplification (NAA) test was positive for M. 
tuberculosis complex in one sample, but the culture remained negative. In five cases, microbiological diagnosis was obtained through cultures of biopsies from tuberculomas. Furthermore, five patients had positive cultures of non-cerebral specimens (four respiratory and one tissue biopsy from lumbar vertebrae). In eight patients (16%) the diagnosis could not be verified by culture.\nDrug susceptibility testing for first line drugs was performed for all 41 culture positive cases. Two isolates were resistant to isoniazid (4%) and one isolate (2%) had resistance to both rifampicin and isoniazid, thus multi-drug resistance. No other polyresistance or extensively drug resistant isolates was identified.\n[SUBTITLE] Treatment and outcome [SUBSECTION] All patients with fully susceptible isolates were treated with rifampicin (10 mg/kg, max 600 mg), isoniazid (5 mg/kg, max 300 mg), ethambutol (20 mg/kg, max 1200 mg), and pyrazinamide (30 mg/kg, max 2000 mg) in the first 2 months of the intensive phase of treatment followed by rifampicin and isoniazid in the continuation phase of 7- 10 months, except one department where all patients (n = 25) in this latter phase were treated for a minimum of 4 months.\nRifabutin was replaced with rifampicin in one HIV positive patient because a protease inhibitor was part of the anti-retroviral treatment. Rifampicin was discontinued in one patient due to persistent thrombocytopenia; isoniazid was discontinued in one patient due to hepatotoxicity. Two patients did not receive ethambutol as they had received a kidney transplant. In these cases either ofloxacin or moxifloxacin were included in the treatment. Fluoroquinolones and amikacin were included in the treatment of the patient with multi-drug resistant TB. Treatment was then given for 24 months.\nCorticosteroids were used in all but six patients, of whom three died before treatment was initiated. Prednisolone was generally used at an initial dose of 1 mg/kg with a gradual reduction over 4-6 weeks. There were 12 paediatric patients and all but two received prednisolone; one died before TBM was diagnosed.\nNeurosurgical intervention in terms of shunting was performed in four patients with altered cerebral condition and severe hydrocephalus identified on MR or CT scans.\nThe delay in initiation of treatment post-admittance ranged from 0 to 52 days with the highest median of 7 days found in the Danish patient group.\nAll patients, except two, who were transferred to their home country, completed the treatment. 16 patients (33%) had full recovery and 23 (48%) patients had sequelae. A total of nine patients (19%) died; the age range was 4-71 years (median 45 years).\nThree patients died after several months' treatment due to severe neurological sequelae. The remaining six patients died within one month of admittance; three of these patients never received any treatment. 
The cause of death in the six patients was severe hydrocephalus and infarcts.\nAll patients with fully susceptible isolates were treated with rifampicin (10 mg/kg, max 600 mg), isoniazid (5 mg/kg, max 300 mg), ethambutol (20 mg/kg, max 1200 mg), and pyrazinamide (30 mg/kg, max 2000 mg) in the first 2 months of the intensive phase of treatment followed by rifampicin and isoniazid in the continuation phase of 7- 10 months, except one department where all patients (n = 25) in this latter phase were treated for a minimum of 4 months.\nRifabutin was replaced with rifampicin in one HIV positive patient because a protease inhibitor was part of the anti-retroviral treatment. Rifampicin was discontinued in one patient due to persistent thrombocytopenia; isoniazid was discontinued in one patient due to hepatotoxicity. Two patients did not receive ethambutol as they had received a kidney transplant. In these cases either ofloxacin or moxifloxacin were included in the treatment. Fluoroquinolones and amikacin were included in the treatment of the patient with multi-drug resistant TB. Treatment was then given for 24 months.\nCorticosteroids were used in all but six patients, of whom three died before treatment was initiated. Prednisolone was generally used at an initial dose of 1 mg/kg with a gradual reduction over 4-6 weeks. There were 12 paediatric patients and all but two received prednisolone; one died before TBM was diagnosed.\nNeurosurgical intervention in terms of shunting was performed in four patients with altered cerebral condition and severe hydrocephalus identified on MR or CT scans.\nThe delay in initiation of treatment post-admittance ranged from 0 to 52 days with the highest median of 7 days found in the Danish patient group.\nAll patients, except two, who were transferred to their home country, completed the treatment. 16 patients (33%) had full recovery and 23 (48%) patients had sequelae. A total of nine patients (19%) died; the age range was 4-71 years (median 45 years).\nThree patients died after several months' treatment due to severe neurological sequelae. The remaining six patients died within one month of admittance; three of these patients never received any treatment. The cause of death in the six patients was severe hydrocephalus and infarcts.", "50 cases of cerebral TB were notified throughout the period; out of these 15 (30%) had tuberculomas. A total of 12 paediatric patients (aged 1-15 years) were found. All results have been divided into two sub-groups - one consisting of native Danish patients, the other of immigrants. The immigrant group of patients in this study includes patients born abroad and their children up to 25 years of age, as well as people from Greenland living in Denmark.\nA total of 11 patients (22%) were native Danes, the remaining 39 (78%) were immigrants from varying countries; three from Greenland, three from Ex-Yugoslavia, four from Sub-Saharan African countries with high HIV prevalence (Ethiopia, Uganda, South Africa), 22 from the Eastern Mediterranean region as defined by the World Health Organisation, WHO, (primarily Morocco, Somalia, Pakistan), and seven from Asia (China, India, Indonesia, Vietnam, Cambodia). 13 out of the 15 (87%) tuberculoma patients were immigrants.\nThere was an equal sex distribution in the two groups. 
Median age for the Danish patients was 17 years (range 1-52) and for the immigrant group 34 years (range 1-71).\nFurther demographic data are specified in Table 1.\nDemographic data, clinical presentation and outcome\n*n = 37 in the immigrant group as two patients were lost for follow-up\nSix Danish patients (55%) had pre-existing immunosuppressive diseases or conditions like malignancy or diabetes. Three patients were taking immunosuppressive medication due to organ transplant. Some were recorded as having a drug and/or alcohol abuse, and one patient was known HIV positive at the time of TBM diagnosis.\nSimilar results were found in the immigrant group with 26 out of 39 patients (67%) having one or more dispositions. Among these patients, four were HIV positive, two had kidney transplants, six had diabetes and four were recorded as having an alcohol abuse. Two patients had a family member who had recently been treated for TB.\nWithin the two patient populations, a total of 24 patients (48%) were HIV seronegative and 21 had an unknown HIV status (42%).", "Over half the patients presented with fever and headache. The classic sign of meningeal stiffness was found in less than half the patients. Neurological signs upon admission was affected in the majority of cases: 52% of the patients were described with a general altered mental state (i.e. confusion), 36% had cranial nerve paralysis (predominantly facial nerve or abducens nerve affection) and 16% presented with generalised convulsions upon hospitalization (Table 1).\nFive of the immigrant patients were initially admitted at neurology departments with headaches, convulsions and altered cerebral conditions. Due to symptoms of increased intra-cranial pressure, these patients underwent CT scanning and tumour like processes were identified. A lumbar puncture was never performed on any of these patients. They were transferred to neurosurgery departments for brain biopsy. Diagnosis was achieved based on histology and culture results.\nSymptom duration before admission ranged greatly with a median of 14 days in the Danish patients and 20 days in the immigrant group.\nThe tuberculin skin test was not performed routinely in all patients; only 20 results were available for this study; 15 (75%) were positive. BCG scar status had not been noted in any of the reviewed patient files.", "A total of 42 patients had a lumbar puncture performed at the time of admission. The remaining eight patients all had tuberculomas and did not have lumbar puncture performed due to symptoms of increased intra-cranial pressure. The white cell count (WCC) was elevated in 38 patients (90%) with lymphocyte predominance in 66%. Protein levels were elevated in 36 patients (86%). Eleven patients had a protein level above 3 g/L. The glucose content was below the minimum reference value in 20 patients (48%). CSF:blood glucose ratios could be calculated in a total of 26 patients; this ranged from 0,09 - 0,71. The ratio was below 0,3 in 50% of these patients (Table 2). Eight out of the nine fatal cases had pleocytosis and elevated protein levels. 
Only one of the fatal cases had a normal WCC but the CSF had elevated protein content; diagnosis was in this case established by a positive CSF culture.\nCerebrospinal fluid results in 42 patients\nWCC = white cell count; CSF = cerebrospinal fluid; NAA = nucleic acid amplification\n*Data from 7 Danes and 19 immigrants\n** Data from 6 Danes and 30 immigrants", "All patients were investigated with either cranial CT or MR scans at time of admission; 22 patients had both scans performed (Table 3). Cranial MR scans showed basal meningeal enhancement in six Danish patients and in 10 immigrants. Tuberculomas were more frequently found in the immigrant patient group. Eight patients with normal CT scans went on to have a cranial MR scan performed and this contributed diagnostically in showing basal meningeal enhancement in all. Eight patients had the combined finding of hydrocephalus and fresh infarct on both CT and MR scans. Only one patient presented both a normal CT and MR scan.\nNeuroradiological findings of 22 patients where both MR & CT was performed\nChest X-rays were performed in 48 patients and was found abnormal in 26 cases (54%). Concomitant pulmonary TB was microbiologically verified in 23 cases.", "Results from microbiological analyses performed on CSF are shown in Table 2. In 41 patients (82%) the diagnosis was verified by culture. The number of days before 31 CSF cultures became positive ranged between 7 to 46 days (median 20 days). Nucleic acid amplification (NAA) test was positive for M. tuberculosis complex in one sample, but the culture remained negative. In five cases, microbiological diagnosis was obtained through cultures of biopsies from tuberculomas. Furthermore, five patients had positive cultures of non-cerebral specimens (four respiratory and one tissue biopsy from lumbar vertebrae). In eight patients (16%) the diagnosis could not be verified by culture.\nDrug susceptibility testing for first line drugs was performed for all 41 culture positive cases. Two isolates were resistant to isoniazid (4%) and one isolate (2%) had resistance to both rifampicin and isoniazid, thus multi-drug resistance. No other polyresistance or extensively drug resistant isolates was identified.", "All patients with fully susceptible isolates were treated with rifampicin (10 mg/kg, max 600 mg), isoniazid (5 mg/kg, max 300 mg), ethambutol (20 mg/kg, max 1200 mg), and pyrazinamide (30 mg/kg, max 2000 mg) in the first 2 months of the intensive phase of treatment followed by rifampicin and isoniazid in the continuation phase of 7- 10 months, except one department where all patients (n = 25) in this latter phase were treated for a minimum of 4 months.\nRifabutin was replaced with rifampicin in one HIV positive patient because a protease inhibitor was part of the anti-retroviral treatment. Rifampicin was discontinued in one patient due to persistent thrombocytopenia; isoniazid was discontinued in one patient due to hepatotoxicity. Two patients did not receive ethambutol as they had received a kidney transplant. In these cases either ofloxacin or moxifloxacin were included in the treatment. Fluoroquinolones and amikacin were included in the treatment of the patient with multi-drug resistant TB. Treatment was then given for 24 months.\nCorticosteroids were used in all but six patients, of whom three died before treatment was initiated. Prednisolone was generally used at an initial dose of 1 mg/kg with a gradual reduction over 4-6 weeks. 
There were 12 paediatric patients and all but two received prednisolone; one died before TBM was diagnosed.\nNeurosurgical intervention in terms of shunting was performed in four patients with altered cerebral condition and severe hydrocephalus identified on MR or CT scans.\nThe delay in initiation of treatment post-admittance ranged from 0 to 52 days with the highest median of 7 days found in the Danish patient group.\nAll patients, except two, who were transferred to their home country, completed the treatment. 16 patients (33%) had full recovery and 23 (48%) patients had sequelae. A total of nine patients (19%) died; the age range was 4-71 years (median 45 years).\nThree patients died after several months' treatment due to severe neurological sequelae. The remaining six patients died within one month of admittance; three of these patients never received any treatment. The cause of death in the six patients was severe hydrocephalus and infarcts.", "Extrapulmonary TB accounts for 1/3 of all TB notifications in Denmark. In 2008, 60% of notified cases were of other ethnic background than Danish [2]. TBM remains a rare manifestation of extrapulmonary TB in Denmark with a consistent incidence of five to six new cases per year during the past ten years. Almost 80% of our study population consists of immigrants from countries of high tuberculosis endemicity. This resembles a significant increase from the figure of 26% reported in a Danish study from the 1970s-1980s [6] and also marks a slight increase from the most recent Danish study where this figure was 65% [7]. The predominance of immigrants cannot be linked to health disparity, since the Danish health system has free and equal access for all citizens. Another important characteristic of our patient population is the high prevalence of underlying immunosuppressive disease; a total of 64% are found in this study, compared to recent figures of 25% - 35% [6,7]. 10% of the patients were HIV positive, but it is worrying that HIV serostatus was not available in 42%. Since active TB is more common in people infected with HIV [8], it must be stressed that HIV testing should always be performed in conjunction with diagnosing TBM.\nThe clinical presentation of TBM is vague with non-specific symptoms that are hard to distinguish from other types of bacterial meningitis. It is noted that less than half our patients had the typical sign of meningeal stiffness. A long history of illness (over 5-6 days) has previously been shown to be a clinical variable highly predictive of TBM [9,10]; we have similar findings with a median of 10 and 14 days respectively in our two patient populations.\nBiochemical CSF analysis revealed that the majority of patients (66%) had the characteristic findings of pleocytosis with lymphocyte predominance. Also, protein levels were, as would be expected, elevated in the majority of patients as well. Previous studies from Iran, India and Vietnam have found the percentage of CSF lymphocytes to be one of the strongest diagnostic variables predictive of TBM, especially when the total WCC is less than 1000 × 103/ml [11,12]. Only one of our patients had a WCC above 1000 × 103/ml.\nDirect microscopy of CSF for acid-fast bacilli was positive in only 9.5% of the specimens. This low sensitivity is unfortunate, as this is the simplest diagnostic tool available and can also be used easily in a low-resource setting. Other series have found varying sensitivities of a direct smear and figures as high as 58% have previously been reported [13]. 
NAA tests have been shown to be more sensitive and specific than microscopy in diagnosing pulmonary TB [14], but their performance in extrapulmonary specimens, such as CSF, have been disappointing [15]. In this study the reported 42% sensitivity of NAA is considerably low, but still better than the sensitivity of smears. This shows that diagnosing TBM in a setting of high resourcefulness remains challenging.\nThe culture of CSF samples was positive for M. tuberculosis in 74%; this figure is higher than the 55% reported in two previous Danish studies [6,7]. Culture remains an essential, but time consuming, tool in diagnosing TBM; we report a median of 20 days for bacterial growth to be confirmed. Culture can verify the TBM diagnosis but cannot be relied on in the initial, critical diagnostic phase. Culture is however still of importance, especially for identifying resistant isolates.\nThe most common findings on neuroradiological images were basal meningeal enhancement and tuberculomas. As seen clearly in especially the Danish group of patients, MR scans proved more sensitive for identifying meningeal enhancement than CT scans (86% vs. 0%). Tuberculomas were more commonly found in the immigrant group. Cranial CT scans seem to be just as sensitive as MR scans in identifying hydrocephalus, infarcts and tuberculomas. A recent study found that MR is superior to CT in identifying basal meningeal enhancement as well as infarcts; hydrocephalus was in the same study detected equally by MR and CT scans [16]. In conclusion, MR scans should be considered as the primary choice for neuroradiological imaging in the initial diagnostic phase in a high-resource setting. A total of 46% of our patients also had pulmonary TB and this emphasises the importance of chest X-rays and microbiological analyses on respiratory specimens in the diagnostic process.\nThe treatment regimes described in this study are not homogenous. Standard anti-tuberculous drugs were used at all centres, but the total length of treatment varied from 6-12 months. National guidelines suggest treatment duration of 6 months, except in severe cases, where 9-12 months treatment regimens are applied, as also recommended by WHO [17]. This is in contrast to the recommendations applied in the United Kingdom and the United States, where all TBM patients are treated for 9-12 months [18,19]. A high proportion of the patients were treated with adjuvant prednisolone treatment; this is a well-established component of TBM treatment and has been shown to significantly reduce mortality rates [20].\nThe mortality rate was 19%. The cause of death was hydrocephalus and/or brain damage in all patients. Similar studies from other developed nations such as the United States and Australia have reported mortality rates of 41% [21] and 7% [22], respectively. From countries of endemic TB as well as high HIV prevalence, mortality rates have been significantly higher: from South Africa 69% [23] and from Vietnam 67% [24]. The proportion of patients with various neurological sequelae in our study is high at 50%, consistent with previous reports [7].", "TBM remains a serious disease even in a setting of low TB incidence such as Denmark. The disease primarily affects immigrants from regions of high TB endemicity and has a high mortality rate and a high rate of sequelae among survivors. The diagnosis of TBM remains difficult, even in a high-resource setting, as the currently available diagnostic tools lack sensitivity. 
Although TBM only affects a handful of people each year in Denmark, the clinician must be prepared to treat empirically if the suspicion of TBM has arisen.", "The authors declare that they have no competing interests.", "AC and IJ collected and analysed patient data and drafted the article. PA provided the notification data and VT provided microbiological data. IJ and ÅA supervised the study. All authors read and approved the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/47/prepub\n" ]
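The treatment and cerebrospinal-fluid passages above rest on two small pieces of arithmetic: weight-based daily dosing capped at a per-drug maximum, and the CSF:blood glucose ratio. The sketch below illustrates that arithmetic only; the doses, caps and the 0.3 cut-point are those reported in the text, while the function names and structure are hypothetical additions, not part of the original study and not clinical software.

```python
# Illustrative helpers only -- a sketch of the arithmetic described in the
# Results section above, not a dosing tool for clinical use.

INTENSIVE_PHASE = {
    # drug: (dose in mg per kg body weight, maximum daily dose in mg)
    "rifampicin":   (10, 600),
    "isoniazid":    (5, 300),
    "ethambutol":   (20, 1200),
    "pyrazinamide": (30, 2000),
}

def daily_doses(weight_kg: float) -> dict:
    """Weight-based daily doses, capped at the per-drug maximum."""
    return {drug: min(per_kg * weight_kg, cap)
            for drug, (per_kg, cap) in INTENSIVE_PHASE.items()}

def csf_blood_glucose_ratio(csf_glucose: float, blood_glucose: float) -> float:
    """CSF:blood glucose ratio (reported range in this cohort: 0.09-0.71)."""
    return csf_glucose / blood_glucose

if __name__ == "__main__":
    # For a 70 kg adult, rifampicin 10 mg/kg would be 700 mg but is capped at 600 mg.
    print(daily_doses(70))
    # 1.2 / 5.0 = 0.24, i.e. below the 0.3 cut-point used in the Results above.
    print(csf_blood_glucose_ratio(1.2, 5.0))
```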
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "Demography", "Clinical presentation", "Cerebrospinal fluid characteristics", "Radiology", "Microbiology", "Treatment and outcome", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Tuberculosis (TB) of the central nervous system is the most severe manifestation of extrapulmonary TB and constitutes approximately 1% of all new cases annually [1]. Although the incidence of tuberculous meningitis (TBM) is low in high-income countries, it remains one of the most severe and eventually fatal infectious conditions - especially in times of increasing use of immunosuppressive drugs, increased access to transplantation (also for patients from TB endemic countries), changing HIV patterns and increasing prevalence of type II diabetes.\nTBM is characterised by a slowly progressing granulomatous inflammation of the basal meninges. This inflammatory reaction can lead to a number of complications, such as hydrocephalus, cerebral vascular infarction, cranial nerve palsy and, if left untreated, death. Rapid diagnosis and initiation of treatment is therefore necessary to reduce the high mortality and severe sequelae associated with the disease. Diagnosing TBM can be difficult as the symptoms are unspecific and mimic those of meningitis caused by other microbiological agents or other cerebrovascular events.\nDenmark has a low TB incidence of 6.7/100.000 with 367 notified cases in 2008 [2]; yet, the country has had fluctuations in the annually reported number of TB cases throughout the last 20 years. In 1986 there was an all-time low of 299 new cases [3] and in 2000 there was a peak of 548 new cases [4], equalling an increase of 83% in 14 years. This increase was primarily due to an increase in immigration from high-incidence countries [3,5].", "To initiate this retrospective study we obtained data on notified TBM cases from the National TB Notification Register between January 2000 and December 2008. The medical records of all patients, including children, with TBM were retrieved from Paediatric Departments and Departments of Infectious Diseases at five major University Hospitals in Denmark. Of note, in Denmark all cases of tuberculous meningitis are treated at university hospitals as required by national guidelines. Permission was granted from the Danish Data Protection Agency.\nDemographic data, medical history, clinical presentation at admission, radiological and microbiological data was reviewed along with the clinical course, treatment and outcome. All microbiological data was retrieved from The International Reference Laboratory of Mycobacteriology at Statens Serum Institut.", "[SUBTITLE] Demography [SUBSECTION] 50 cases of cerebral TB were notified throughout the period; out of these 15 (30%) had tuberculomas. A total of 12 paediatric patients (aged 1-15 years) were found. All results have been divided into two sub-groups - one consisting of native Danish patients, the other of immigrants. The immigrant group of patients in this study includes patients born abroad and their children up to 25 years of age, as well as people from Greenland living in Denmark.\nA total of 11 patients (22%) were native Danes, the remaining 39 (78%) were immigrants from varying countries; three from Greenland, three from Ex-Yugoslavia, four from Sub-Saharan African countries with high HIV prevalence (Ethiopia, Uganda, South Africa), 22 from the Eastern Mediterranean region as defined by the World Health Organisation, WHO, (primarily Morocco, Somalia, Pakistan), and seven from Asia (China, India, Indonesia, Vietnam, Cambodia). 13 out of the 15 (87%) tuberculoma patients were immigrants.\nThere was an equal sex distribution in the two groups. 
Median age for the Danish patients was 17 years (range 1-52) and for the immigrant group 34 years (range 1-71).\nFurther demographic data are specified in Table 1.\nDemographic data, clinical presentation and outcome\n*n = 37 in the immigrant group as two patients were lost for follow-up\nSix Danish patients (55%) had pre-existing immunosuppressive diseases or conditions like malignancy or diabetes. Three patients were taking immunosuppressive medication due to organ transplant. Some were recorded as having a drug and/or alcohol abuse, and one patient was known HIV positive at the time of TBM diagnosis.\nSimilar results were found in the immigrant group with 26 out of 39 patients (67%) having one or more dispositions. Among these patients, four were HIV positive, two had kidney transplants, six had diabetes and four were recorded as having an alcohol abuse. Two patients had a family member who had recently been treated for TB.\nWithin the two patient populations, a total of 24 patients (48%) were HIV seronegative and 21 had an unknown HIV status (42%).\n50 cases of cerebral TB were notified throughout the period; out of these 15 (30%) had tuberculomas. A total of 12 paediatric patients (aged 1-15 years) were found. All results have been divided into two sub-groups - one consisting of native Danish patients, the other of immigrants. The immigrant group of patients in this study includes patients born abroad and their children up to 25 years of age, as well as people from Greenland living in Denmark.\nA total of 11 patients (22%) were native Danes, the remaining 39 (78%) were immigrants from varying countries; three from Greenland, three from Ex-Yugoslavia, four from Sub-Saharan African countries with high HIV prevalence (Ethiopia, Uganda, South Africa), 22 from the Eastern Mediterranean region as defined by the World Health Organisation, WHO, (primarily Morocco, Somalia, Pakistan), and seven from Asia (China, India, Indonesia, Vietnam, Cambodia). 13 out of the 15 (87%) tuberculoma patients were immigrants.\nThere was an equal sex distribution in the two groups. Median age for the Danish patients was 17 years (range 1-52) and for the immigrant group 34 years (range 1-71).\nFurther demographic data are specified in Table 1.\nDemographic data, clinical presentation and outcome\n*n = 37 in the immigrant group as two patients were lost for follow-up\nSix Danish patients (55%) had pre-existing immunosuppressive diseases or conditions like malignancy or diabetes. Three patients were taking immunosuppressive medication due to organ transplant. Some were recorded as having a drug and/or alcohol abuse, and one patient was known HIV positive at the time of TBM diagnosis.\nSimilar results were found in the immigrant group with 26 out of 39 patients (67%) having one or more dispositions. Among these patients, four were HIV positive, two had kidney transplants, six had diabetes and four were recorded as having an alcohol abuse. Two patients had a family member who had recently been treated for TB.\nWithin the two patient populations, a total of 24 patients (48%) were HIV seronegative and 21 had an unknown HIV status (42%).\n[SUBTITLE] Clinical presentation [SUBSECTION] Over half the patients presented with fever and headache. The classic sign of meningeal stiffness was found in less than half the patients. Neurological signs upon admission was affected in the majority of cases: 52% of the patients were described with a general altered mental state (i.e. 
confusion), 36% had cranial nerve paralysis (predominantly facial nerve or abducens nerve affection) and 16% presented with generalised convulsions upon hospitalization (Table 1).\nFive of the immigrant patients were initially admitted at neurology departments with headaches, convulsions and altered cerebral conditions. Due to symptoms of increased intra-cranial pressure, these patients underwent CT scanning and tumour like processes were identified. A lumbar puncture was never performed on any of these patients. They were transferred to neurosurgery departments for brain biopsy. Diagnosis was achieved based on histology and culture results.\nSymptom duration before admission ranged greatly with a median of 14 days in the Danish patients and 20 days in the immigrant group.\nThe tuberculin skin test was not performed routinely in all patients; only 20 results were available for this study; 15 (75%) were positive. BCG scar status had not been noted in any of the reviewed patient files.\nOver half the patients presented with fever and headache. The classic sign of meningeal stiffness was found in less than half the patients. Neurological signs upon admission was affected in the majority of cases: 52% of the patients were described with a general altered mental state (i.e. confusion), 36% had cranial nerve paralysis (predominantly facial nerve or abducens nerve affection) and 16% presented with generalised convulsions upon hospitalization (Table 1).\nFive of the immigrant patients were initially admitted at neurology departments with headaches, convulsions and altered cerebral conditions. Due to symptoms of increased intra-cranial pressure, these patients underwent CT scanning and tumour like processes were identified. A lumbar puncture was never performed on any of these patients. They were transferred to neurosurgery departments for brain biopsy. Diagnosis was achieved based on histology and culture results.\nSymptom duration before admission ranged greatly with a median of 14 days in the Danish patients and 20 days in the immigrant group.\nThe tuberculin skin test was not performed routinely in all patients; only 20 results were available for this study; 15 (75%) were positive. BCG scar status had not been noted in any of the reviewed patient files.\n[SUBTITLE] Cerebrospinal fluid characteristics [SUBSECTION] A total of 42 patients had a lumbar puncture performed at the time of admission. The remaining eight patients all had tuberculomas and did not have lumbar puncture performed due to symptoms of increased intra-cranial pressure. The white cell count (WCC) was elevated in 38 patients (90%) with lymphocyte predominance in 66%. Protein levels were elevated in 36 patients (86%). Eleven patients had a protein level above 3 g/L. The glucose content was below the minimum reference value in 20 patients (48%). CSF:blood glucose ratios could be calculated in a total of 26 patients; this ranged from 0,09 - 0,71. The ratio was below 0,3 in 50% of these patients (Table 2). Eight out of the nine fatal cases had pleocytosis and elevated protein levels. Only one of the fatal cases had a normal WCC but the CSF had elevated protein content; diagnosis was in this case established by a positive CSF culture.\nCerebrospinal fluid results in 42 patients\nWCC = white cell count; CSF = cerebrospinal fluid; NAA = nucleic acid amplification\n*Data from 7 Danes and 19 immigrants\n** Data from 6 Danes and 30 immigrants\nA total of 42 patients had a lumbar puncture performed at the time of admission. 
The remaining eight patients all had tuberculomas and did not have lumbar puncture performed due to symptoms of increased intra-cranial pressure. The white cell count (WCC) was elevated in 38 patients (90%) with lymphocyte predominance in 66%. Protein levels were elevated in 36 patients (86%). Eleven patients had a protein level above 3 g/L. The glucose content was below the minimum reference value in 20 patients (48%). CSF:blood glucose ratios could be calculated in a total of 26 patients; this ranged from 0,09 - 0,71. The ratio was below 0,3 in 50% of these patients (Table 2). Eight out of the nine fatal cases had pleocytosis and elevated protein levels. Only one of the fatal cases had a normal WCC but the CSF had elevated protein content; diagnosis was in this case established by a positive CSF culture.\nCerebrospinal fluid results in 42 patients\nWCC = white cell count; CSF = cerebrospinal fluid; NAA = nucleic acid amplification\n*Data from 7 Danes and 19 immigrants\n** Data from 6 Danes and 30 immigrants\n[SUBTITLE] Radiology [SUBSECTION] All patients were investigated with either cranial CT or MR scans at time of admission; 22 patients had both scans performed (Table 3). Cranial MR scans showed basal meningeal enhancement in six Danish patients and in 10 immigrants. Tuberculomas were more frequently found in the immigrant patient group. Eight patients with normal CT scans went on to have a cranial MR scan performed and this contributed diagnostically in showing basal meningeal enhancement in all. Eight patients had the combined finding of hydrocephalus and fresh infarct on both CT and MR scans. Only one patient presented both a normal CT and MR scan.\nNeuroradiological findings of 22 patients where both MR & CT was performed\nChest X-rays were performed in 48 patients and was found abnormal in 26 cases (54%). Concomitant pulmonary TB was microbiologically verified in 23 cases.\nAll patients were investigated with either cranial CT or MR scans at time of admission; 22 patients had both scans performed (Table 3). Cranial MR scans showed basal meningeal enhancement in six Danish patients and in 10 immigrants. Tuberculomas were more frequently found in the immigrant patient group. Eight patients with normal CT scans went on to have a cranial MR scan performed and this contributed diagnostically in showing basal meningeal enhancement in all. Eight patients had the combined finding of hydrocephalus and fresh infarct on both CT and MR scans. Only one patient presented both a normal CT and MR scan.\nNeuroradiological findings of 22 patients where both MR & CT was performed\nChest X-rays were performed in 48 patients and was found abnormal in 26 cases (54%). Concomitant pulmonary TB was microbiologically verified in 23 cases.\n[SUBTITLE] Microbiology [SUBSECTION] Results from microbiological analyses performed on CSF are shown in Table 2. In 41 patients (82%) the diagnosis was verified by culture. The number of days before 31 CSF cultures became positive ranged between 7 to 46 days (median 20 days). Nucleic acid amplification (NAA) test was positive for M. tuberculosis complex in one sample, but the culture remained negative. In five cases, microbiological diagnosis was obtained through cultures of biopsies from tuberculomas. Furthermore, five patients had positive cultures of non-cerebral specimens (four respiratory and one tissue biopsy from lumbar vertebrae). 
In eight patients (16%) the diagnosis could not be verified by culture.\nDrug susceptibility testing for first line drugs was performed for all 41 culture positive cases. Two isolates were resistant to isoniazid (4%) and one isolate (2%) had resistance to both rifampicin and isoniazid, thus multi-drug resistance. No other polyresistance or extensively drug resistant isolates was identified.\nResults from microbiological analyses performed on CSF are shown in Table 2. In 41 patients (82%) the diagnosis was verified by culture. The number of days before 31 CSF cultures became positive ranged between 7 to 46 days (median 20 days). Nucleic acid amplification (NAA) test was positive for M. tuberculosis complex in one sample, but the culture remained negative. In five cases, microbiological diagnosis was obtained through cultures of biopsies from tuberculomas. Furthermore, five patients had positive cultures of non-cerebral specimens (four respiratory and one tissue biopsy from lumbar vertebrae). In eight patients (16%) the diagnosis could not be verified by culture.\nDrug susceptibility testing for first line drugs was performed for all 41 culture positive cases. Two isolates were resistant to isoniazid (4%) and one isolate (2%) had resistance to both rifampicin and isoniazid, thus multi-drug resistance. No other polyresistance or extensively drug resistant isolates was identified.\n[SUBTITLE] Treatment and outcome [SUBSECTION] All patients with fully susceptible isolates were treated with rifampicin (10 mg/kg, max 600 mg), isoniazid (5 mg/kg, max 300 mg), ethambutol (20 mg/kg, max 1200 mg), and pyrazinamide (30 mg/kg, max 2000 mg) in the first 2 months of the intensive phase of treatment followed by rifampicin and isoniazid in the continuation phase of 7- 10 months, except one department where all patients (n = 25) in this latter phase were treated for a minimum of 4 months.\nRifabutin was replaced with rifampicin in one HIV positive patient because a protease inhibitor was part of the anti-retroviral treatment. Rifampicin was discontinued in one patient due to persistent thrombocytopenia; isoniazid was discontinued in one patient due to hepatotoxicity. Two patients did not receive ethambutol as they had received a kidney transplant. In these cases either ofloxacin or moxifloxacin were included in the treatment. Fluoroquinolones and amikacin were included in the treatment of the patient with multi-drug resistant TB. Treatment was then given for 24 months.\nCorticosteroids were used in all but six patients, of whom three died before treatment was initiated. Prednisolone was generally used at an initial dose of 1 mg/kg with a gradual reduction over 4-6 weeks. There were 12 paediatric patients and all but two received prednisolone; one died before TBM was diagnosed.\nNeurosurgical intervention in terms of shunting was performed in four patients with altered cerebral condition and severe hydrocephalus identified on MR or CT scans.\nThe delay in initiation of treatment post-admittance ranged from 0 to 52 days with the highest median of 7 days found in the Danish patient group.\nAll patients, except two, who were transferred to their home country, completed the treatment. 16 patients (33%) had full recovery and 23 (48%) patients had sequelae. A total of nine patients (19%) died; the age range was 4-71 years (median 45 years).\nThree patients died after several months' treatment due to severe neurological sequelae. 
The remaining six patients died within one month of admittance; three of these patients never received any treatment. The cause of death in the six patients was severe hydrocephalus and infarcts.\nAll patients with fully susceptible isolates were treated with rifampicin (10 mg/kg, max 600 mg), isoniazid (5 mg/kg, max 300 mg), ethambutol (20 mg/kg, max 1200 mg), and pyrazinamide (30 mg/kg, max 2000 mg) in the first 2 months of the intensive phase of treatment followed by rifampicin and isoniazid in the continuation phase of 7- 10 months, except one department where all patients (n = 25) in this latter phase were treated for a minimum of 4 months.\nRifabutin was replaced with rifampicin in one HIV positive patient because a protease inhibitor was part of the anti-retroviral treatment. Rifampicin was discontinued in one patient due to persistent thrombocytopenia; isoniazid was discontinued in one patient due to hepatotoxicity. Two patients did not receive ethambutol as they had received a kidney transplant. In these cases either ofloxacin or moxifloxacin were included in the treatment. Fluoroquinolones and amikacin were included in the treatment of the patient with multi-drug resistant TB. Treatment was then given for 24 months.\nCorticosteroids were used in all but six patients, of whom three died before treatment was initiated. Prednisolone was generally used at an initial dose of 1 mg/kg with a gradual reduction over 4-6 weeks. There were 12 paediatric patients and all but two received prednisolone; one died before TBM was diagnosed.\nNeurosurgical intervention in terms of shunting was performed in four patients with altered cerebral condition and severe hydrocephalus identified on MR or CT scans.\nThe delay in initiation of treatment post-admittance ranged from 0 to 52 days with the highest median of 7 days found in the Danish patient group.\nAll patients, except two, who were transferred to their home country, completed the treatment. 16 patients (33%) had full recovery and 23 (48%) patients had sequelae. A total of nine patients (19%) died; the age range was 4-71 years (median 45 years).\nThree patients died after several months' treatment due to severe neurological sequelae. The remaining six patients died within one month of admittance; three of these patients never received any treatment. The cause of death in the six patients was severe hydrocephalus and infarcts.", "50 cases of cerebral TB were notified throughout the period; out of these 15 (30%) had tuberculomas. A total of 12 paediatric patients (aged 1-15 years) were found. All results have been divided into two sub-groups - one consisting of native Danish patients, the other of immigrants. The immigrant group of patients in this study includes patients born abroad and their children up to 25 years of age, as well as people from Greenland living in Denmark.\nA total of 11 patients (22%) were native Danes, the remaining 39 (78%) were immigrants from varying countries; three from Greenland, three from Ex-Yugoslavia, four from Sub-Saharan African countries with high HIV prevalence (Ethiopia, Uganda, South Africa), 22 from the Eastern Mediterranean region as defined by the World Health Organisation, WHO, (primarily Morocco, Somalia, Pakistan), and seven from Asia (China, India, Indonesia, Vietnam, Cambodia). 13 out of the 15 (87%) tuberculoma patients were immigrants.\nThere was an equal sex distribution in the two groups. 
Median age for the Danish patients was 17 years (range 1-52) and for the immigrant group 34 years (range 1-71).\nFurther demographic data are specified in Table 1.\nDemographic data, clinical presentation and outcome\n*n = 37 in the immigrant group as two patients were lost for follow-up\nSix Danish patients (55%) had pre-existing immunosuppressive diseases or conditions like malignancy or diabetes. Three patients were taking immunosuppressive medication due to organ transplant. Some were recorded as having drug and/or alcohol abuse, and one patient was known HIV positive at the time of TBM diagnosis.\nSimilar results were found in the immigrant group with 26 out of 39 patients (67%) having one or more dispositions. Among these patients, four were HIV positive, two had kidney transplants, six had diabetes and four were recorded as having alcohol abuse. Two patients had a family member who had recently been treated for TB.\nWithin the two patient populations, a total of 24 patients (48%) were HIV seronegative and 21 had an unknown HIV status (42%).", "Over half the patients presented with fever and headache. The classic sign of meningeal stiffness was found in less than half the patients. Neurological status upon admission was affected in the majority of cases: 52% of the patients were described as having a generally altered mental state (i.e. confusion), 36% had cranial nerve paralysis (predominantly facial nerve or abducens nerve affection) and 16% presented with generalised convulsions upon hospitalization (Table 1).\nFive of the immigrant patients were initially admitted to neurology departments with headaches, convulsions and altered cerebral conditions. Due to symptoms of increased intra-cranial pressure, these patients underwent CT scanning and tumour-like processes were identified. A lumbar puncture was never performed on any of these patients. They were transferred to neurosurgery departments for brain biopsy. Diagnosis was achieved based on histology and culture results.\nSymptom duration before admission ranged greatly, with a median of 14 days in the Danish patients and 20 days in the immigrant group.\nThe tuberculin skin test was not performed routinely in all patients; only 20 results were available for this study; 15 (75%) were positive. BCG scar status had not been noted in any of the reviewed patient files.", "A total of 42 patients had a lumbar puncture performed at the time of admission. The remaining eight patients all had tuberculomas and did not have a lumbar puncture performed due to symptoms of increased intra-cranial pressure. The white cell count (WCC) was elevated in 38 patients (90%), with lymphocyte predominance in 66%. Protein levels were elevated in 36 patients (86%). Eleven patients had a protein level above 3 g/L. The glucose content was below the minimum reference value in 20 patients (48%). CSF:blood glucose ratios could be calculated in a total of 26 patients; these ranged from 0.09 to 0.71. The ratio was below 0.3 in 50% of these patients (Table 2). Eight out of the nine fatal cases had pleocytosis and elevated protein levels.
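The CSF work-up above boils down to a handful of numeric cut-offs: an elevated white cell count with lymphocyte predominance, protein above the reference range, glucose below the minimum reference value, and a CSF:blood glucose ratio below 0.3. Purely as a worked illustration of those thresholds, a minimal Python sketch follows; the function name and the reference limits for white cells, protein and glucose are assumptions made for the example, not values reported by this study or validated diagnostic criteria.

```python
# Illustrative only: restates the CSF thresholds discussed in the text as code.
# The reference limits below are assumptions for the sketch, not clinical advice.
def csf_flags(wcc_per_ul, lymphocyte_fraction, protein_g_per_l,
              csf_glucose_mmol_l, blood_glucose_mmol_l):
    ratio = csf_glucose_mmol_l / blood_glucose_mmol_l
    return {
        "pleocytosis": wcc_per_ul > 5,                      # assumed upper reference limit
        "lymphocyte_predominance": lymphocyte_fraction > 0.5,
        "elevated_protein": protein_g_per_l > 0.45,         # assumed upper reference limit
        "low_csf_glucose": csf_glucose_mmol_l < 2.2,        # assumed lower reference limit
        "low_glucose_ratio": ratio < 0.3,                   # ratio threshold used in the text
    }

# Hypothetical patient: lymphocytic pleocytosis, protein 1.8 g/L, glucose ratio 0.25.
print(csf_flags(250, 0.8, 1.8, 1.5, 6.0))
```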
Only one of the fatal cases had a normal WCC but the CSF had elevated protein content; diagnosis was in this case established by a positive CSF culture.\nCerebrospinal fluid results in 42 patients\nWCC = white cell count; CSF = cerebrospinal fluid; NAA = nucleic acid amplification\n*Data from 7 Danes and 19 immigrants\n** Data from 6 Danes and 30 immigrants", "All patients were investigated with either cranial CT or MR scans at time of admission; 22 patients had both scans performed (Table 3). Cranial MR scans showed basal meningeal enhancement in six Danish patients and in 10 immigrants. Tuberculomas were more frequently found in the immigrant patient group. Eight patients with normal CT scans went on to have a cranial MR scan performed and this contributed diagnostically in showing basal meningeal enhancement in all. Eight patients had the combined finding of hydrocephalus and fresh infarct on both CT and MR scans. Only one patient presented both a normal CT and MR scan.\nNeuroradiological findings of 22 patients where both MR & CT was performed\nChest X-rays were performed in 48 patients and was found abnormal in 26 cases (54%). Concomitant pulmonary TB was microbiologically verified in 23 cases.", "Results from microbiological analyses performed on CSF are shown in Table 2. In 41 patients (82%) the diagnosis was verified by culture. The number of days before 31 CSF cultures became positive ranged between 7 to 46 days (median 20 days). Nucleic acid amplification (NAA) test was positive for M. tuberculosis complex in one sample, but the culture remained negative. In five cases, microbiological diagnosis was obtained through cultures of biopsies from tuberculomas. Furthermore, five patients had positive cultures of non-cerebral specimens (four respiratory and one tissue biopsy from lumbar vertebrae). In eight patients (16%) the diagnosis could not be verified by culture.\nDrug susceptibility testing for first line drugs was performed for all 41 culture positive cases. Two isolates were resistant to isoniazid (4%) and one isolate (2%) had resistance to both rifampicin and isoniazid, thus multi-drug resistance. No other polyresistance or extensively drug resistant isolates was identified.", "All patients with fully susceptible isolates were treated with rifampicin (10 mg/kg, max 600 mg), isoniazid (5 mg/kg, max 300 mg), ethambutol (20 mg/kg, max 1200 mg), and pyrazinamide (30 mg/kg, max 2000 mg) in the first 2 months of the intensive phase of treatment followed by rifampicin and isoniazid in the continuation phase of 7- 10 months, except one department where all patients (n = 25) in this latter phase were treated for a minimum of 4 months.\nRifabutin was replaced with rifampicin in one HIV positive patient because a protease inhibitor was part of the anti-retroviral treatment. Rifampicin was discontinued in one patient due to persistent thrombocytopenia; isoniazid was discontinued in one patient due to hepatotoxicity. Two patients did not receive ethambutol as they had received a kidney transplant. In these cases either ofloxacin or moxifloxacin were included in the treatment. Fluoroquinolones and amikacin were included in the treatment of the patient with multi-drug resistant TB. Treatment was then given for 24 months.\nCorticosteroids were used in all but six patients, of whom three died before treatment was initiated. Prednisolone was generally used at an initial dose of 1 mg/kg with a gradual reduction over 4-6 weeks. 
There were 12 paediatric patients and all but two received prednisolone; one died before TBM was diagnosed.\nNeurosurgical intervention in terms of shunting was performed in four patients with altered cerebral condition and severe hydrocephalus identified on MR or CT scans.\nThe delay in initiation of treatment post-admittance ranged from 0 to 52 days with the highest median of 7 days found in the Danish patient group.\nAll patients, except two, who were transferred to their home country, completed the treatment. 16 patients (33%) had full recovery and 23 (48%) patients had sequelae. A total of nine patients (19%) died; the age range was 4-71 years (median 45 years).\nThree patients died after several months' treatment due to severe neurological sequelae. The remaining six patients died within one month of admittance; three of these patients never received any treatment. The cause of death in the six patients was severe hydrocephalus and infarcts.", "Extrapulmonary TB accounts for 1/3 of all TB notifications in Denmark. In 2008, 60% of notified cases were of other ethnic background than Danish [2]. TBM remains a rare manifestation of extrapulmonary TB in Denmark with a consistent incidence of five to six new cases per year during the past ten years. Almost 80% of our study population consists of immigrants from countries of high tuberculosis endemicity. This resembles a significant increase from the figure of 26% reported in a Danish study from the 1970s-1980s [6] and also marks a slight increase from the most recent Danish study where this figure was 65% [7]. The predominance of immigrants cannot be linked to health disparity, since the Danish health system has free and equal access for all citizens. Another important characteristic of our patient population is the high prevalence of underlying immunosuppressive disease; a total of 64% are found in this study, compared to recent figures of 25% - 35% [6,7]. 10% of the patients were HIV positive, but it is worrying that HIV serostatus was not available in 42%. Since active TB is more common in people infected with HIV [8], it must be stressed that HIV testing should always be performed in conjunction with diagnosing TBM.\nThe clinical presentation of TBM is vague with non-specific symptoms that are hard to distinguish from other types of bacterial meningitis. It is noted that less than half our patients had the typical sign of meningeal stiffness. A long history of illness (over 5-6 days) has previously been shown to be a clinical variable highly predictive of TBM [9,10]; we have similar findings with a median of 10 and 14 days respectively in our two patient populations.\nBiochemical CSF analysis revealed that the majority of patients (66%) had the characteristic findings of pleocytosis with lymphocyte predominance. Also, protein levels were, as would be expected, elevated in the majority of patients as well. Previous studies from Iran, India and Vietnam have found the percentage of CSF lymphocytes to be one of the strongest diagnostic variables predictive of TBM, especially when the total WCC is less than 1000 × 103/ml [11,12]. Only one of our patients had a WCC above 1000 × 103/ml.\nDirect microscopy of CSF for acid-fast bacilli was positive in only 9.5% of the specimens. This low sensitivity is unfortunate, as this is the simplest diagnostic tool available and can also be used easily in a low-resource setting. Other series have found varying sensitivities of a direct smear and figures as high as 58% have previously been reported [13]. 
NAA tests have been shown to be more sensitive and specific than microscopy in diagnosing pulmonary TB [14], but their performance in extrapulmonary specimens, such as CSF, have been disappointing [15]. In this study the reported 42% sensitivity of NAA is considerably low, but still better than the sensitivity of smears. This shows that diagnosing TBM in a setting of high resourcefulness remains challenging.\nThe culture of CSF samples was positive for M. tuberculosis in 74%; this figure is higher than the 55% reported in two previous Danish studies [6,7]. Culture remains an essential, but time consuming, tool in diagnosing TBM; we report a median of 20 days for bacterial growth to be confirmed. Culture can verify the TBM diagnosis but cannot be relied on in the initial, critical diagnostic phase. Culture is however still of importance, especially for identifying resistant isolates.\nThe most common findings on neuroradiological images were basal meningeal enhancement and tuberculomas. As seen clearly in especially the Danish group of patients, MR scans proved more sensitive for identifying meningeal enhancement than CT scans (86% vs. 0%). Tuberculomas were more commonly found in the immigrant group. Cranial CT scans seem to be just as sensitive as MR scans in identifying hydrocephalus, infarcts and tuberculomas. A recent study found that MR is superior to CT in identifying basal meningeal enhancement as well as infarcts; hydrocephalus was in the same study detected equally by MR and CT scans [16]. In conclusion, MR scans should be considered as the primary choice for neuroradiological imaging in the initial diagnostic phase in a high-resource setting. A total of 46% of our patients also had pulmonary TB and this emphasises the importance of chest X-rays and microbiological analyses on respiratory specimens in the diagnostic process.\nThe treatment regimes described in this study are not homogenous. Standard anti-tuberculous drugs were used at all centres, but the total length of treatment varied from 6-12 months. National guidelines suggest treatment duration of 6 months, except in severe cases, where 9-12 months treatment regimens are applied, as also recommended by WHO [17]. This is in contrast to the recommendations applied in the United Kingdom and the United States, where all TBM patients are treated for 9-12 months [18,19]. A high proportion of the patients were treated with adjuvant prednisolone treatment; this is a well-established component of TBM treatment and has been shown to significantly reduce mortality rates [20].\nThe mortality rate was 19%. The cause of death was hydrocephalus and/or brain damage in all patients. Similar studies from other developed nations such as the United States and Australia have reported mortality rates of 41% [21] and 7% [22], respectively. From countries of endemic TB as well as high HIV prevalence, mortality rates have been significantly higher: from South Africa 69% [23] and from Vietnam 67% [24]. The proportion of patients with various neurological sequelae in our study is high at 50%, consistent with previous reports [7].", "TBM remains a serious disease even in a setting of low TB incidence such as Denmark. The disease primarily affects immigrants from regions of high TB endemicity and has a high mortality rate and a high rate of sequelae among survivors. The diagnosis of TBM remains difficult, even in a high-resource setting, as the currently available diagnostic tools lack sensitivity. 
Although TBM only affects a handful of people each year in Denmark, the clinician must be prepared to treat empirically if the suspicion of TBM has arisen.", "The authors declare that they have no competing interests.", "AC and IJ collected and analysed patient data and drafted the article. PA provided the notification data and VT provided microbiological data. IJ and ÅA supervised the study. All authors read and approved the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/47/prepub\n" ]
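For readers who want the arithmetic behind the intensive-phase regimen quoted in the Treatment and outcome section (rifampicin 10 mg/kg up to 600 mg, isoniazid 5 mg/kg up to 300 mg, ethambutol 20 mg/kg up to 1200 mg, pyrazinamide 30 mg/kg up to 2000 mg), a minimal sketch follows. It simply applies the stated mg/kg figures with their ceilings; the function name is invented for illustration and this is not dosing guidance.

```python
# Minimal sketch of the weight-based intensive-phase doses quoted in the text
# (mg per kg, capped at a fixed maximum). Illustration only, not dosing guidance.
INTENSIVE_PHASE = {
    "rifampicin":   (10, 600),
    "isoniazid":    (5, 300),
    "ethambutol":   (20, 1200),
    "pyrazinamide": (30, 2000),
}

def intensive_phase_doses(weight_kg):
    """Return drug -> daily dose in mg, applying the stated per-kg dose and ceiling."""
    return {drug: min(per_kg * weight_kg, cap)
            for drug, (per_kg, cap) in INTENSIVE_PHASE.items()}

# Example: a 70 kg adult reaches the ceiling for every drug in the regimen.
print(intensive_phase_doses(70))
# {'rifampicin': 600, 'isoniazid': 300, 'ethambutol': 1200, 'pyrazinamide': 2000}
```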
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Enhanced specific immune responses by CpG DNA in mice immunized with recombinant hepatitis B surface antigen and HB vaccine.
21342531
Hepatitis B vaccine adjuvant, alum, is generally used for vaccination although it does not stimulate Th1 immunity and 10% of the population has low or no antibody response. Efforts have been continued to find more efficient vaccine adjuvants for better antibody response as well as stimulation of Th1 immunity.
BACKGROUND
CpG DNA was used as an adjuvant for recombinant HBsAg to immunize 6- to 8-week-old female BALB/c mice with or without alum for different dosages. The production of HBsAb, CD80 and CD86 from dendritic cells, and cytokines IL-10, IL12, etc., were analyzed and compared for the performance of immunization.
METHODS
5-20 μg CpG DNA had the best co-stimulation effect of HBsAb serum conversion for mice vaccinated with recombinant expressed HBsAg. The mice vaccinated with recombinant 20 μg CpG DNA and regular vaccine (containing alum adjuvant) had the highest concentration of antibody production. IL-12b, IL-12a and IL10 mRNA reached to the peak level between 3 and 6 hours after the CpG DNA induction in splenocytes. The expression levels of CD80 and CD86 leucocyte surface molecules were increased with 20 μg CpG DNA alone or with 20 μg CpG DNA and 4 μg HBsAg.
RESULTS
Our results confirmed the adjuvant effect of CpG DNA for HBsAg in the mouse model. The increase of IL10 and IL12 production suggested the involvement of Th1 cell activation. The activation of CD80 and CD86 molecules by CpG-ODN might be part of the mechanism of T/B cells coordination and the enhancement of recombinant HBsAg induced immune response.
CONCLUSIONS
[ "Adjuvants, Immunologic", "Animals", "B7-1 Antigen", "B7-2 Antigen", "Cytokines", "Dendritic Cells", "Female", "Hepatitis B Antibodies", "Hepatitis B Surface Antigens", "Hepatitis B Vaccines", "Mice", "Mice, Inbred BALB C", "Oligodeoxyribonucleotides", "Recombinant Proteins" ]
3050826
null
null
Methods
[SUBTITLE] Oligodeoxynucleotides [SUBSECTION] ODN (BW006) was provided by Yunnan Wosen Biotechnology Company. BW006 was synthesized with the sequence of 5'-tcgacgttcgtcgttcgtcgttc-3' and followed by sulphurization. Lipopolysaccharide (LPS) level in BW006 was less than 0.5 ng/mg by Limulus assay. [SUBTITLE] Recombinant HBsAg [SUBSECTION] The awd2 subtype HBsAg was expressed in Hansenula polymorpha yeast and produced by Yunnan Wosen Biotechnology Company. The antigen had a concentration of 0.236 mg/ml and purity of greater than 99.0% by high-performance liquid chromatography (HPLC) and silver staining. LPS level in the antigen was less than 10 EU/ml by Limulus assay. [SUBTITLE] HB vaccine [SUBSECTION] The awd2 HB vaccine was made with alum adjuvant and provided by Yunnan Wosen Biotechnology Company. The concentrations of HBsAg and alum were 24 μg/ml and 0.5 mg/ml respectively. [SUBTITLE] Reagent for the detection of cytokine expression, anti-HBs and leucocyte surface molecules [SUBSECTION] The reagents for cytokine detection were provided by Panomics Company and used per protocol of each cytokine. The anti-HBs was detected with an automated chemiluminescent microparticle immunoassay, Abbott ARCHITECT® anti-HBs. The reagents for the detection of leucocyte surface molecules were provided by BioLegend Company. [SUBTITLE] Evaluation of cytokine mRNA [SUBSECTION] Splenocytes were isolated from 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China). Single cell suspensions (1 × 106/ml) were prepared with NycoPrep ™ 1.077A (AXIS-SHIELD PoC AS, Oslo, Norway) and suspended in 1640 medium (RPMI 1640, Hyclone) with penicillin-streptomycin (final concentrations of 100 U/ml and 100 μg/ml respectively) (Sigma, Irvine, U.K.). 0.1 ml of the single cell suspension and 0.1 ml BW006 (final concentration of 5 μg/ml) were added to round-bottom 96-well microtiter plates and cultured at 37°C with 5% CO2. The mRNA expression of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b and IFN-r was analyzed at 1, 3, 6 and 12 hours respectively, each in triplicate. [SUBTITLE] Evaluation of in vivo HBsAb production [SUBSECTION] 100 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China) were evenly divided into 10 groups and respectively received a single intramuscular injection (i.m.) into the left tibialis anterior muscle of 0.1 ml solution containing 50 μg alum, 20 μg BW006, 2 μg recombinant HBsAg, 2 μg recombinant HBsAg with 1.25 μg BW006, 2 μg recombinant HBsAg with 5 μg BW006, 2 μg recombinant HBsAg with 20 μg BW006, 2 μg recombinant HB vaccine, 2 μg recombinant HB vaccine with 1.25 μg BW006, 2 μg recombinant HB vaccine with 5 μg BW006 and 2 μg recombinant HB vaccine with 20 μg BW006. Duplicate aliquots of plasma were collected weekly from weeks 1-4, one for HBsAb testing and the other for HBsAb titration if the sample was HBsAb positive. The antibodies against HBsAg were detected and quantified in triplicate for each specimen with the Abbott ARCHITECT® anti-HBs reagent as described above. An average triplicate value equal to or greater than 10 mIU/ml was considered positive HBsAb conversion. The number of serum HBsAb conversions and the titer of each positive serum were calculated. [SUBTITLE] Detection of leucocyte surface molecules CD80 and CD86 [SUBSECTION] Sixteen 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China) were evenly divided into 4 groups and given a single subcutaneous injection of 0.1 ml solution containing 4 μg recombinant HBsAg, 20 μg BW006, 4 μg recombinant HBsAg with 20 μg BW006, or NS, respectively. The mice of all four groups were euthanized 12 hours after the injection and dendritic cells were separated with the auto-MACS kit (Miltenyi Biotec) per the manufacturer's instructions. Single cell suspensions (1 × 106/ml) were prepared with NycoPrep ™ 1.077A (AXIS-SHIELD PoC AS, Oslo, Norway). The dendritic cell concentration was adjusted to 1 × 105/ml and the cells were incubated with rat anti-mouse APC-CD11c, CD80-FITC and CD86-PE antibodies from BioLegend at room temperature for 20 minutes. The cells were centrifuged at 400 g for 10 minutes and re-suspended in 500 μl of 10% paraformaldehyde. The expression levels of the surface molecules CD80 and CD86 were detected and quantified by flow cytometry. [SUBTITLE] Statistical analysis [SUBSECTION] Statistical analysis was performed with SAS version 9.2 (SAS Institute Inc., Cary, North Carolina, USA). Categorical variables were compared by the chi-square test; if 50% of cells had expected counts less than 5, Fisher's exact test results were used instead. Continuous variables were expressed as mean with standard deviation and compared by Student's t-test or the Mann-Whitney U test as appropriate. All statistical tests were two-sided, and p values less than 0.05 were considered statistically significant.
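The statistical analysis paragraph above amounts to a small decision rule: categorical comparisons use a chi-square test unless at least half of the expected cell counts fall below 5, in which case Fisher's exact test is reported, while continuous variables go to Student's t-test or the Mann-Whitney U test as appropriate. The original analysis was done in SAS 9.2; the SciPy-based sketch below only illustrates the same selection logic for a 2x2 table and is not the authors' code.

```python
# Illustrative re-implementation of the test-selection rule described above
# (chi-square unless 50% or more of expected counts are below 5, then Fisher's exact).
import numpy as np
from scipy import stats

def compare_2x2(table):
    table = np.asarray(table, dtype=float)
    chi2, p_chi2, dof, expected = stats.chi2_contingency(table, correction=False)
    if np.mean(expected < 5) >= 0.5:      # half or more of the cells are sparse
        _, p_fisher = stats.fisher_exact(table)
        return "Fisher's exact", p_fisher
    return "chi-square", p_chi2

# Example: HBsAb seroconversion in 7/10 versus 1/10 mice (positive, negative per group).
print(compare_2x2([[7, 3], [1, 9]]))
```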
null
null
null
null
[ "Background", "Oligodeoxynucleotides", "Recombinant HBsAg", "HB vaccine", "Reagent for the detection of cytokine expression, anti-HBs and leucocyte surface molecules", "Evaluation of cytokine mRNA", "Evaluation of in vivo HBsAb production", "Detection of leucocyte surface molecules CD80 and CD86", "Statistical analysis", "Results", "Enhancement of HBsAb production", "Increase of cytokines mRNA production of splenocytes with stimulation of BW006", "Stimulation of leucocyte surface molecules CD80 and CD86", "Discussion", "Competing interests", "Authors' contributions" ]
[ "Hepatitis B (HB) is the disease caused by hepatitis B virus (HBV) infection and is the most commonly seen liver disease worldwide [1]. The virus can be transmitted parenterally, perinatally and sexually [2,3]. The viral infection has a worldwide distribution. The World Health Organization estimates that more than 2 billion people have been infected with HBV worldwide, 360 million of which are chronically infected and at risk of liver cirrhosis, liver cancer or other serious illness, or death. HBV is estimated to be responsible for 500, 000 to 700,000 deaths annually [4]. A survey study conducted by China CDC showed 7.2% weighted surface antigen (HBsAg) prevalence by ELISA among population aged 1~59 years old in 2006 [5].\nDespite the progress in prophylaxis, diagnosis and treatment, vaccination is still the most cost effective way of fighting against the virus. The currently used recombinant HBsAg has to be combined with adjuvant, usually alum, due to the weak immunity production of the antigen alone. Alum has been used as vaccine adjuvant for more than 70 years, although the molecular mechanism of action and the target cells of alum are still unknown. It has been assumed that adsorption to alum increases antigen availability at the injection site, allowing an efficient uptake by antigen-presenting cells (APCs) [6]. The increased antigen uptake by dendritic cells (DCs) observed in vitro also supported the antigen delivery function [7]. The intraperitoneal (i.p.) injection of alum could induce the recruitment of monocytes, which may uptake the vaccine antigen [8]. Alum is generally recognized as a stimulator of Th2 immunity. However, it does not stimulate Th1 immunity and 10% of the population has low response or no antibody response [9].\nCpG DNA can induce proliferation of almost all B cells and trigger polyclonal immunoglobulin (Ig) secretion, which is T cell independent and antigen nonspecific [10]. A strong synergetic response was observed when the CpG DNA is used together with alum [11]. We show the dose related enhancement effect of CpG DNA in increasing the production of HBsAb in mouse compared with the recombinant antigen of HBsAg alone or alum adjuvant HBV vaccine. The synergetic response was also observed in the expression levels of cluster of differentiation 80 (CD80) and cluster of differentiation 86 (CD86). The dynamics of a group of cytokines was analyzed and compared for different experiment groups.", "ODN (BW006) was provided by Yunnan Wosen Biotechnology Company. BW006 was synthesized with the sequence of 5'-tcgacgttcgtcgttcgtcgttc-3' and followed by sulphurization. Lipopolysaccharide (LPS) level in BW6 was less than 0.5 ng/mg by Limulus assay.", "The awd2 subtype HBsAg was expressed in Hansenular polymorpha yeast and produced by Yunnan Wosen Biotechnology Company. The antigen had a concentration of 0.236 mg/ml and purity of greater than 99.0% by high-performance liquid chromatography (HPLC) and silver staining. LPS level in the antigen was less than 10 EU/ml by Limulus assay.", "The awd2 HB vaccine was made with alum adjuvant and provided by Yunnan Wosen Biocenology Company. The concentrations of HBsAg and alum were 24 μg/ml and 0.5 mg/ml respectively.", "The reagents for cytokine detection were provided by Panomics Company and used per protocol of each cytokine. The Anti-HBs was detected with automated chemiluminescent microparticle immunoassay, Abbott ARCHITECT® anti-HBs. 
The reagents for the detection of leucocyte surface molecules were provided by BioLegend Company.", "Splenocytes were isolated from 6- to 8-week-old female BALB/c mouse (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China). Single cell suspensions (1 × 106/ml) were prepared with NycoPrep ™ 1.077A (AXIS-SHIELD PoC AS, Oslo, Norway) and suspended in 1640 medium (RPMI 1640, Hyclone) with penicillin-streptomycin (final concentrations of 100 U/ml and 100 μg/ml respectively) (Sigma, Irvine, U.K.). 0.1 ml of the single cell suspension and 0.1 ml BW006 (final concentration of 5 μg/ml) were added to round-bottom 96 wells microtiter plates and cultured at 37°C with 5% CO2. The mRNA expression of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b and IFN-r were analyzed at 1, 3, 6 and 12 hours respectively, each in triplicate.", "100 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China) were evenly divided into 10 groups and respectively received a single intramuscular injection (i.m.) into left tibialis anterior muscles, of 0.1 ml solution containing 50 μg alum, 20 μg BW006, 2 μg recombinant HBsAg, 2 μg recombinant HBsAg with 1.25 μg BW006, 2 μg recombinant HBsAg with 5 μg BW006, 2 μg recombinant HBsAg with 20 μg BW006, 2 μg recombinant HB vaccine, 2 μg recombinant HB vaccine with 1.25 μg BW006, 2 μg recombinant HB vaccine with 5 μg BW006 and 2 μg recombinant HB vaccine with 20 μg BW006. The duplicate aliquots of plasma were collected weekly from 1-4 weeks, one for HBsAb testing and the other one for HBsAb titration if it was HBsAb positive. The antibodies against HBsAg were detected and quantified in triplicate for each specimen with Abbott ARCHITECT® anti-HBs reagent as described above. The average value of the triplicate equal to or greater than 10 mIU/ml was considered as positive HBsAb conversion. The numbers of serum HBsAb conversion and titer of each positive serum were calculated.", "Sixteen 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China) were evenly divided into 4 groups and given a single subcutaneous injection of 0.1 ml solution containing 4 μg recombinant HBsAg, 20 μg BW006, 4 μg recombinant HBsAg with 20 μg BW006, or NS, respectively. The mice of all four groups were euthanized 12 hours after the injection and dendritic cells were separated by auto-MACs kit (Miltenyibiotec Com.) per manufacturer's instructions. Single cell suspensions (1 × 106/ml) were prepared with NycoPrep ™ 1.077A (AXIS-SHIELD PoC AS, Oslo, Norway). Cell concentration of dendritic cells were adjusted to 1 × 105/ml and co-culvated with rat-anti mouse APC-CD11c, CD80-FITC and CD86-PE antibodies from BioLegend company at room temperature for 20 minutes. The cells were centrifuged at 400 g for 10 minutes and re-suspended with 500 μl of 10% paraformaldehyde. The expression levels of the surface molecules CD80 and CD86 were detected and quantified by flow cytometry.", "Statistical analysis was performed by SAS version 9.2 (SAS Institute Inc., Cary, North Carolina, USA). Categorical variables were compared by Chi's square test. If 50% of cells had expected counts less than 5, Fisher's Exact test results would be used. Continuous variables were expressed as mean with standard deviation and compared by Student's t-test or Mann Whitney U test as appropriate. 
All statistical tests were two-sided, and p values less than 0.05 were considered statistically significant.", "[SUBTITLE] Enhancement of HBsAb production [SUBSECTION] HBsAb serum conversion was confirmed after 3 weeks of vaccination in all 10 mice injected with 20 μg BW006 (CpG DNA) and HBV vaccine, while 6 and 8 mice respectively had HBsAb serum conversion after 3 and 4 weeks of vaccination with regular HBV vaccine alone. The positive proportion of HBsAb serum conversion was significantly higher after two weeks of vaccination in the group vaccinated with 20 μg BW006 and HBV vaccine than that in the group with HBV vaccine alone (7 Vs. 1 out of 10, P = 0.0198). BW006 dose dependant co-stimulation effect of HBsAb serum conversion on HBV vaccine was seen between the ranges of 1.25 μg to 20 μg. The result showed that 5 μg BW006 had the best co-stimulation effect of HBsAb serum conversion for mice vaccinated with recombinant expressed HBsAg. All 10 mice vaccinated with HBsAg and 5 μg BW006 had anti-HBs serum conversion after 3 weeks. Surprisingly, 5 μg BW006 with HBsAg had almost the same co-stimulation effect on HBsAb serum conversion as 20 μg BW006 with HBV vaccine in the mouse model. However, the fourth week HBsAb concentration was at least 2 times higher in the mice vaccinated with 20 μg BW006 with vaccine than that in any other combination (Figure 1).\nThe average production of HBsAb (mIU/ml) in mice immunized with different dosage of CpG DNA and recombinant expressed HBsAg or HBV commercial vaccine in four weeks. A: mice immunized with HBV vaccine and 20 μg CpG DNA; star: mice immunized with HBV vaccine and 5 μg CpG DNA; dot: mice immunized with HBsAg and 5 μg CpG DNA; square: mice immunized with HBsAg and 20 μg CpG DNA; diamond: mice immunized with HBV vaccine and 1.25 μg CpG DNA; circle: mice immunized with HBsAg and 1.25 μg CpG DNA; triangle: mice only immunized with HBV vaccine; line only: mice only immunized with HBsAg; mice injected with alum and saline solution did not produce HBsAb.\nHBsAb serum conversion was confirmed after 3 weeks of vaccination in all 10 mice injected with 20 μg BW006 (CpG DNA) and HBV vaccine, while 6 and 8 mice respectively had HBsAb serum conversion after 3 and 4 weeks of vaccination with regular HBV vaccine alone. The positive proportion of HBsAb serum conversion was significantly higher after two weeks of vaccination in the group vaccinated with 20 μg BW006 and HBV vaccine than that in the group with HBV vaccine alone (7 Vs. 1 out of 10, P = 0.0198). BW006 dose dependant co-stimulation effect of HBsAb serum conversion on HBV vaccine was seen between the ranges of 1.25 μg to 20 μg. The result showed that 5 μg BW006 had the best co-stimulation effect of HBsAb serum conversion for mice vaccinated with recombinant expressed HBsAg. All 10 mice vaccinated with HBsAg and 5 μg BW006 had anti-HBs serum conversion after 3 weeks. Surprisingly, 5 μg BW006 with HBsAg had almost the same co-stimulation effect on HBsAb serum conversion as 20 μg BW006 with HBV vaccine in the mouse model. However, the fourth week HBsAb concentration was at least 2 times higher in the mice vaccinated with 20 μg BW006 with vaccine than that in any other combination (Figure 1).\nThe average production of HBsAb (mIU/ml) in mice immunized with different dosage of CpG DNA and recombinant expressed HBsAg or HBV commercial vaccine in four weeks. 
A: mice immunized with HBV vaccine and 20 μg CpG DNA; star: mice immunized with HBV vaccine and 5 μg CpG DNA; dot: mice immunized with HBsAg and 5 μg CpG DNA; square: mice immunized with HBsAg and 20 μg CpG DNA; diamond: mice immunized with HBV vaccine and 1.25 μg CpG DNA; circle: mice immunized with HBsAg and 1.25 μg CpG DNA; triangle: mice only immunized with HBV vaccine; line only: mice only immunized with HBsAg; mice injected with alum and saline solution did not produce HBsAb.\n[SUBTITLE] Increase of cytokines mRNA production of splenocytes with stimulation of BW006 [SUBSECTION] Splenocytes co-cultured with 5 μg/ml final concentration of BW006 had different effects on cytokines production (Figure 2). The highest increase of cytokines was IL-12b between 3 and 6 hours after the introduction of BW006 (22.29 and 26.07 times vs. 0 hour, P < 0.0001, respectively). The peak increase of IL-12b began decreasing after 6 hours of co-culture with BW006 and had almost half the peak value of mRNA at 12 hours. IL-12a increased significantly after 2 hours of stimulation. However, the mRNA production was only 2.17 times high as the value before co-culture with BW006 and decreased to the original value around 6 hours. The other significant increase of cytokine mRNA was observed in IL-10, which had a peak increase between 3 and 6 hours (10.86 and 11.93 times vs. 0 hour, P < 0.0001, respectively) and decreased to almost half at 12 hours. All the other cytokines tested, including IL-2, IL-4, IL-5 and IFN-r, had variations during the observation period. But these differences were not significant.\nThe ratios of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b, IFN-r mRNA expression in mouse mononuclear cells stimulated with CpG DNA (5 μg/ml) in 12 hours. Line with squares: IL-12b; triangles: IL-10; dots: IL-12a; circles: IL-2; stars: IL-4; diamonds: IL-5; line only: IFN-r.\nSplenocytes co-cultured with 5 μg/ml final concentration of BW006 had different effects on cytokines production (Figure 2). The highest increase of cytokines was IL-12b between 3 and 6 hours after the introduction of BW006 (22.29 and 26.07 times vs. 0 hour, P < 0.0001, respectively). The peak increase of IL-12b began decreasing after 6 hours of co-culture with BW006 and had almost half the peak value of mRNA at 12 hours. IL-12a increased significantly after 2 hours of stimulation. However, the mRNA production was only 2.17 times high as the value before co-culture with BW006 and decreased to the original value around 6 hours. The other significant increase of cytokine mRNA was observed in IL-10, which had a peak increase between 3 and 6 hours (10.86 and 11.93 times vs. 0 hour, P < 0.0001, respectively) and decreased to almost half at 12 hours. All the other cytokines tested, including IL-2, IL-4, IL-5 and IFN-r, had variations during the observation period. But these differences were not significant.\nThe ratios of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b, IFN-r mRNA expression in mouse mononuclear cells stimulated with CpG DNA (5 μg/ml) in 12 hours. Line with squares: IL-12b; triangles: IL-10; dots: IL-12a; circles: IL-2; stars: IL-4; diamonds: IL-5; line only: IFN-r.\n[SUBTITLE] Stimulation of leucocyte surface molecules CD80 and CD86 [SUBSECTION] There was no significant difference between mice groups of negative control and 4 μg recombinant HBsAg injection in the positive proportion (8.61% Vs. 10.65%) and fluorescent intensity (118.12 Vs. 122.6) of surface molecule CD80 expression in leucocyte cells (Figure 3). 
20 μg BW006 increased the positive proportion (15.14% for BW006 alone and 15.84% for BW006 and 4 μg recombinant HBsAg) and fluorescent intensity (139.86 for BW006 alone and 158.67 for BW006 and 4 μg recombinant HBsAg) of surface molecule CD80 expression in leucocyte cells.\nThe positive proportions (%) of CD80 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group4), respectively.\nThe same trend was seen in CD86 (Figure 4), in which positive expression proportion was almost doubled in leucocyte cells compared with that in the control group (52.12% vs. 27.37%) or in 4 μg recombinant HBsAg injection alone group (54.09% vs. 28.36%). The fluorescent intensity of surface molecule CD86 expression also increased with BW006 compared with that of the control group (292.68 Vs. 213.78) or of HBsAg alone group (299.35 Vs. 211.78) in leucocyte cells.\nThe positive proportions (%) of CD86 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group 4), respectively.\nThere was no significant difference between mice groups of negative control and 4 μg recombinant HBsAg injection in the positive proportion (8.61% Vs. 10.65%) and fluorescent intensity (118.12 Vs. 122.6) of surface molecule CD80 expression in leucocyte cells (Figure 3). 20 μg BW006 increased the positive proportion (15.14% for BW006 alone and 15.84% for BW006 and 4 μg recombinant HBsAg) and fluorescent intensity (139.86 for BW006 alone and 158.67 for BW006 and 4 μg recombinant HBsAg) of surface molecule CD80 expression in leucocyte cells.\nThe positive proportions (%) of CD80 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group4), respectively.\nThe same trend was seen in CD86 (Figure 4), in which positive expression proportion was almost doubled in leucocyte cells compared with that in the control group (52.12% vs. 27.37%) or in 4 μg recombinant HBsAg injection alone group (54.09% vs. 28.36%). The fluorescent intensity of surface molecule CD86 expression also increased with BW006 compared with that of the control group (292.68 Vs. 213.78) or of HBsAg alone group (299.35 Vs. 211.78) in leucocyte cells.\nThe positive proportions (%) of CD86 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group 4), respectively.", "HBsAb serum conversion was confirmed after 3 weeks of vaccination in all 10 mice injected with 20 μg BW006 (CpG DNA) and HBV vaccine, while 6 and 8 mice respectively had HBsAb serum conversion after 3 and 4 weeks of vaccination with regular HBV vaccine alone. The positive proportion of HBsAb serum conversion was significantly higher after two weeks of vaccination in the group vaccinated with 20 μg BW006 and HBV vaccine than that in the group with HBV vaccine alone (7 Vs. 1 out of 10, P = 0.0198). BW006 dose dependant co-stimulation effect of HBsAb serum conversion on HBV vaccine was seen between the ranges of 1.25 μg to 20 μg. The result showed that 5 μg BW006 had the best co-stimulation effect of HBsAb serum conversion for mice vaccinated with recombinant expressed HBsAg. 
All 10 mice vaccinated with HBsAg and 5 μg BW006 had anti-HBs serum conversion after 3 weeks. Surprisingly, 5 μg BW006 with HBsAg had almost the same co-stimulation effect on HBsAb serum conversion as 20 μg BW006 with HBV vaccine in the mouse model. However, the fourth week HBsAb concentration was at least 2 times higher in the mice vaccinated with 20 μg BW006 with vaccine than that in any other combination (Figure 1).\nThe average production of HBsAb (mIU/ml) in mice immunized with different dosage of CpG DNA and recombinant expressed HBsAg or HBV commercial vaccine in four weeks. A: mice immunized with HBV vaccine and 20 μg CpG DNA; star: mice immunized with HBV vaccine and 5 μg CpG DNA; dot: mice immunized with HBsAg and 5 μg CpG DNA; square: mice immunized with HBsAg and 20 μg CpG DNA; diamond: mice immunized with HBV vaccine and 1.25 μg CpG DNA; circle: mice immunized with HBsAg and 1.25 μg CpG DNA; triangle: mice only immunized with HBV vaccine; line only: mice only immunized with HBsAg; mice injected with alum and saline solution did not produce HBsAb.", "Splenocytes co-cultured with 5 μg/ml final concentration of BW006 had different effects on cytokines production (Figure 2). The highest increase of cytokines was IL-12b between 3 and 6 hours after the introduction of BW006 (22.29 and 26.07 times vs. 0 hour, P < 0.0001, respectively). The peak increase of IL-12b began decreasing after 6 hours of co-culture with BW006 and had almost half the peak value of mRNA at 12 hours. IL-12a increased significantly after 2 hours of stimulation. However, the mRNA production was only 2.17 times high as the value before co-culture with BW006 and decreased to the original value around 6 hours. The other significant increase of cytokine mRNA was observed in IL-10, which had a peak increase between 3 and 6 hours (10.86 and 11.93 times vs. 0 hour, P < 0.0001, respectively) and decreased to almost half at 12 hours. All the other cytokines tested, including IL-2, IL-4, IL-5 and IFN-r, had variations during the observation period. But these differences were not significant.\nThe ratios of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b, IFN-r mRNA expression in mouse mononuclear cells stimulated with CpG DNA (5 μg/ml) in 12 hours. Line with squares: IL-12b; triangles: IL-10; dots: IL-12a; circles: IL-2; stars: IL-4; diamonds: IL-5; line only: IFN-r.", "There was no significant difference between mice groups of negative control and 4 μg recombinant HBsAg injection in the positive proportion (8.61% Vs. 10.65%) and fluorescent intensity (118.12 Vs. 122.6) of surface molecule CD80 expression in leucocyte cells (Figure 3). 20 μg BW006 increased the positive proportion (15.14% for BW006 alone and 15.84% for BW006 and 4 μg recombinant HBsAg) and fluorescent intensity (139.86 for BW006 alone and 158.67 for BW006 and 4 μg recombinant HBsAg) of surface molecule CD80 expression in leucocyte cells.\nThe positive proportions (%) of CD80 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group4), respectively.\nThe same trend was seen in CD86 (Figure 4), in which positive expression proportion was almost doubled in leucocyte cells compared with that in the control group (52.12% vs. 27.37%) or in 4 μg recombinant HBsAg injection alone group (54.09% vs. 28.36%). 
The fluorescent intensity of surface molecule CD86 expression also increased with BW006 compared with that of the control group (292.68 Vs. 213.78) or of HBsAg alone group (299.35 Vs. 211.78) in leucocyte cells.\nThe positive proportions (%) of CD86 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group 4), respectively.", "Alum adjuvant HBV vaccine has been used for hepatitis B prevention for decades. However, there have been vaccine unresponsive cases reported in the immunodeficient patients of uremia and other conditions [11-13]. Scientists have been working for applicable new adjuvant for HBV and other diseases [14-17]. CpG ODN can enhance the immune response of live-attenuated or multivalent vaccines that cannot mix with alum. There have been studies that confirmed the potentiality of CpG ODN as HBV adjuvant [10,18]. We designed the new sequence of CpG ODN with two motifs that could improve the immune response of HBV recombinant expressed HBsAg and also vaccine (containing alum adjuvant). The co-stimulatory effect was observed dynamically with anti-HBs production in mice. The production of cytokines in vitro and the expression of leucocyte surface molecules CD80 and CD86 were evaluated in vitro for the possible mechanism of the enhanced immunization of HBV.\nOur new CpG ODN alone can induce HBsAg producing anti-HBs response or together with alum in the mouse model (CpG ODN and regular vaccine groups). However, a stronger specific antibody production was observed in the mice immunized with CpG ODN and HBV regular vaccine after 3 weeks of vaccination, owing to the synergistic action of CpG and alum. Alum adjuvant has an important disadvantage of induction of a Th2 rather than a Th1-type immune response. The use of alum as an adjuvant was reported to interfere with cell-mediated immunity and blocks activation of CD8+ CTL [19]. The Th1 immune response induced by CpG ODN was able to overcome the Th2 bias of alum for both antibody isotype and CTL response when both the agents were used together [20]. Our results showed even higher specific antibody production.\nInterleukin-12, a cytokine with an important role against intracellular pathogens, promotes Th1 cell development, cell mediated cytotoxicity, and interferon-gamma production. The production of IL12, especially the increase of IL-12b played an important role in HBV clearance [21]. Almost a 30-fold increase of IL12b was observed in splenocytes after 3-6 hours co-culture with 5 μg/ml final concentration of CpG ODN. The significant increase of IL10 seems to play a role in Th1/Th2 cells balance.\nThe T cell immune response requires co-stimulatory signals delivered through one or more receptors on the surface of T cells. The proteins, CD80 and CD86 (also known as the B7-1 and B7-2 ligands), are molecules found on activated B cells and monocytes, which provide a costimulatory signal necessary for T cell activation and survival [22,23]. The expression of CD80 and CD86 has been used for evaluation of T cell response for vaccine development [24-26]. The increased production of HBsAb as well as IL12 and IL10 might be associated with the activation of CD80 and CD86 expression in the experiment groups with HBsAg and CpG-ODN. Decreased function of peripheral blood dendritic cells has been reported in patients with hepatocellular carcinoma with hepatitis B and C virus infection [27]. 
Myeloid dendritic cells (mDC) of patients with chronic HBV are impaired in their maturation and function, resulting in more tolerogenic rather than immunogenic responses, which may contribute to viral persistence [28]. It is interesting that there was no significant difference in positive proportion or intensity of CD80 and CD86 expression between the mouse group vaccinated with CpG-ODN alone and the group with CpG-ODN together with recombinant HBsAg in our experiment.", "The authors declare that they have no competing interests.", "XZ and PH contributed equally to the manuscript. XZ and PH carried out the mouse vaccination, the detection of HBsAb, cytokines and the surface molecules CD80 and CD86 production. ZH assisted the immunoassays. XW performed the statistical analysis and drafted the manuscript. ZL conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript." ]
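One small methodological detail worth making explicit is the seroconversion criterion used throughout the HBsAb experiments: each plasma sample was assayed in triplicate and the mean had to reach 10 mIU/ml to count as positive. A tiny sketch of that rule, with invented variable names, purely for illustration:

```python
# Sketch of the seroconversion rule stated in the methods: a sample counts as
# positive when the mean of its triplicate anti-HBs readings is >= 10 mIU/ml.
def seroconverted(triplicate_miu_per_ml, cutoff=10.0):
    return sum(triplicate_miu_per_ml) / len(triplicate_miu_per_ml) >= cutoff

readings = [12.4, 9.8, 11.1]        # hypothetical triplicate for one mouse
print(seroconverted(readings))      # True (mean is 11.1 mIU/ml)
```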
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Oligodeoxynucleotides", "Recombinant HBsAg", "HB vaccine", "Reagent for the detection of cytokine expression, anti-HBs and leucocyte surface molecules", "Evaluation of cytokine mRNA", "Evaluation of in vivo HBsAb production", "Detection of leucocyte surface molecules CD80 and CD86", "Statistical analysis", "Results", "Enhancement of HBsAb production", "Increase of cytokines mRNA production of splenocytes with stimulation of BW006", "Stimulation of leucocyte surface molecules CD80 and CD86", "Discussion", "Competing interests", "Authors' contributions" ]
[ "Hepatitis B (HB) is the disease caused by hepatitis B virus (HBV) infection and is the most commonly seen liver disease worldwide [1]. The virus can be transmitted parenterally, perinatally and sexually [2,3]. The viral infection has a worldwide distribution. The World Health Organization estimates that more than 2 billion people have been infected with HBV worldwide, 360 million of which are chronically infected and at risk of liver cirrhosis, liver cancer or other serious illness, or death. HBV is estimated to be responsible for 500, 000 to 700,000 deaths annually [4]. A survey study conducted by China CDC showed 7.2% weighted surface antigen (HBsAg) prevalence by ELISA among population aged 1~59 years old in 2006 [5].\nDespite the progress in prophylaxis, diagnosis and treatment, vaccination is still the most cost effective way of fighting against the virus. The currently used recombinant HBsAg has to be combined with adjuvant, usually alum, due to the weak immunity production of the antigen alone. Alum has been used as vaccine adjuvant for more than 70 years, although the molecular mechanism of action and the target cells of alum are still unknown. It has been assumed that adsorption to alum increases antigen availability at the injection site, allowing an efficient uptake by antigen-presenting cells (APCs) [6]. The increased antigen uptake by dendritic cells (DCs) observed in vitro also supported the antigen delivery function [7]. The intraperitoneal (i.p.) injection of alum could induce the recruitment of monocytes, which may uptake the vaccine antigen [8]. Alum is generally recognized as a stimulator of Th2 immunity. However, it does not stimulate Th1 immunity and 10% of the population has low response or no antibody response [9].\nCpG DNA can induce proliferation of almost all B cells and trigger polyclonal immunoglobulin (Ig) secretion, which is T cell independent and antigen nonspecific [10]. A strong synergetic response was observed when the CpG DNA is used together with alum [11]. We show the dose related enhancement effect of CpG DNA in increasing the production of HBsAb in mouse compared with the recombinant antigen of HBsAg alone or alum adjuvant HBV vaccine. The synergetic response was also observed in the expression levels of cluster of differentiation 80 (CD80) and cluster of differentiation 86 (CD86). The dynamics of a group of cytokines was analyzed and compared for different experiment groups.", "[SUBTITLE] Oligodeoxynucleotides [SUBSECTION] ODN (BW006) was provided by Yunnan Wosen Biotechnology Company. BW006 was synthesized with the sequence of 5'-tcgacgttcgtcgttcgtcgttc-3' and followed by sulphurization. Lipopolysaccharide (LPS) level in BW6 was less than 0.5 ng/mg by Limulus assay.\nODN (BW006) was provided by Yunnan Wosen Biotechnology Company. BW006 was synthesized with the sequence of 5'-tcgacgttcgtcgttcgtcgttc-3' and followed by sulphurization. Lipopolysaccharide (LPS) level in BW6 was less than 0.5 ng/mg by Limulus assay.\n[SUBTITLE] Recombinant HBsAg [SUBSECTION] The awd2 subtype HBsAg was expressed in Hansenular polymorpha yeast and produced by Yunnan Wosen Biotechnology Company. The antigen had a concentration of 0.236 mg/ml and purity of greater than 99.0% by high-performance liquid chromatography (HPLC) and silver staining. LPS level in the antigen was less than 10 EU/ml by Limulus assay.\nThe awd2 subtype HBsAg was expressed in Hansenular polymorpha yeast and produced by Yunnan Wosen Biotechnology Company. 
The antigen had a concentration of 0.236 mg/ml and purity of greater than 99.0% by high-performance liquid chromatography (HPLC) and silver staining. LPS level in the antigen was less than 10 EU/ml by Limulus assay.\n[SUBTITLE] HB vaccine [SUBSECTION] The awd2 HB vaccine was made with alum adjuvant and provided by Yunnan Wosen Biocenology Company. The concentrations of HBsAg and alum were 24 μg/ml and 0.5 mg/ml respectively.\nThe awd2 HB vaccine was made with alum adjuvant and provided by Yunnan Wosen Biocenology Company. The concentrations of HBsAg and alum were 24 μg/ml and 0.5 mg/ml respectively.\n[SUBTITLE] Reagent for the detection of cytokine expression, anti-HBs and leucocyte surface molecules [SUBSECTION] The reagents for cytokine detection were provided by Panomics Company and used per protocol of each cytokine. The Anti-HBs was detected with automated chemiluminescent microparticle immunoassay, Abbott ARCHITECT® anti-HBs. The reagents for the detection of leucocyte surface molecules were provided by BioLegend Company.\nThe reagents for cytokine detection were provided by Panomics Company and used per protocol of each cytokine. The Anti-HBs was detected with automated chemiluminescent microparticle immunoassay, Abbott ARCHITECT® anti-HBs. The reagents for the detection of leucocyte surface molecules were provided by BioLegend Company.\n[SUBTITLE] Evaluation of cytokine mRNA [SUBSECTION] Splenocytes were isolated from 6- to 8-week-old female BALB/c mouse (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China). Single cell suspensions (1 × 106/ml) were prepared with NycoPrep ™ 1.077A (AXIS-SHIELD PoC AS, Oslo, Norway) and suspended in 1640 medium (RPMI 1640, Hyclone) with penicillin-streptomycin (final concentrations of 100 U/ml and 100 μg/ml respectively) (Sigma, Irvine, U.K.). 0.1 ml of the single cell suspension and 0.1 ml BW006 (final concentration of 5 μg/ml) were added to round-bottom 96 wells microtiter plates and cultured at 37°C with 5% CO2. The mRNA expression of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b and IFN-r were analyzed at 1, 3, 6 and 12 hours respectively, each in triplicate.\nSplenocytes were isolated from 6- to 8-week-old female BALB/c mouse (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China). Single cell suspensions (1 × 106/ml) were prepared with NycoPrep ™ 1.077A (AXIS-SHIELD PoC AS, Oslo, Norway) and suspended in 1640 medium (RPMI 1640, Hyclone) with penicillin-streptomycin (final concentrations of 100 U/ml and 100 μg/ml respectively) (Sigma, Irvine, U.K.). 0.1 ml of the single cell suspension and 0.1 ml BW006 (final concentration of 5 μg/ml) were added to round-bottom 96 wells microtiter plates and cultured at 37°C with 5% CO2. The mRNA expression of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b and IFN-r were analyzed at 1, 3, 6 and 12 hours respectively, each in triplicate.\n[SUBTITLE] Evaluation of in vivo HBsAb production [SUBSECTION] 100 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China) were evenly divided into 10 groups and respectively received a single intramuscular injection (i.m.) 
into the left tibialis anterior muscle of 0.1 ml of solution containing 50 μg alum, 20 μg BW006, 2 μg recombinant HBsAg, 2 μg recombinant HBsAg with 1.25 μg BW006, 2 μg recombinant HBsAg with 5 μg BW006, 2 μg recombinant HBsAg with 20 μg BW006, 2 μg recombinant HB vaccine, 2 μg recombinant HB vaccine with 1.25 μg BW006, 2 μg recombinant HB vaccine with 5 μg BW006 and 2 μg recombinant HB vaccine with 20 μg BW006. Duplicate aliquots of plasma were collected weekly from weeks 1 to 4, one for HBsAb testing and the other for HBsAb titration if the sample was HBsAb positive. The antibodies against HBsAg were detected and quantified in triplicate for each specimen with the Abbott ARCHITECT® anti-HBs reagent as described above. An average triplicate value equal to or greater than 10 mIU/ml was considered positive HBsAb conversion. The number of sera with HBsAb conversion and the titer of each positive serum were calculated.\nOne hundred 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China) were evenly divided into 10 groups and respectively received a single intramuscular injection (i.m.) into the left tibialis anterior muscle of 0.1 ml of solution containing 50 μg alum, 20 μg BW006, 2 μg recombinant HBsAg, 2 μg recombinant HBsAg with 1.25 μg BW006, 2 μg recombinant HBsAg with 5 μg BW006, 2 μg recombinant HBsAg with 20 μg BW006, 2 μg recombinant HB vaccine, 2 μg recombinant HB vaccine with 1.25 μg BW006, 2 μg recombinant HB vaccine with 5 μg BW006 and 2 μg recombinant HB vaccine with 20 μg BW006. Duplicate aliquots of plasma were collected weekly from weeks 1 to 4, one for HBsAb testing and the other for HBsAb titration if the sample was HBsAb positive. The antibodies against HBsAg were detected and quantified in triplicate for each specimen with the Abbott ARCHITECT® anti-HBs reagent as described above. An average triplicate value equal to or greater than 10 mIU/ml was considered positive HBsAb conversion. The number of sera with HBsAb conversion and the titer of each positive serum were calculated.\n[SUBTITLE] Detection of leucocyte surface molecules CD80 and CD86 [SUBSECTION] Sixteen 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China) were evenly divided into 4 groups and given a single subcutaneous injection of 0.1 ml of solution containing 4 μg recombinant HBsAg, 20 μg BW006, 4 μg recombinant HBsAg with 20 μg BW006, or normal saline (NS), respectively. The mice of all four groups were euthanized 12 hours after the injection and dendritic cells were separated with an autoMACS kit (Miltenyi Biotec) per the manufacturer's instructions. Single cell suspensions (1 × 10^6/ml) were prepared with NycoPrep™ 1.077A (AXIS-SHIELD PoC AS, Oslo, Norway). The dendritic cell concentration was adjusted to 1 × 10^5/ml and the cells were incubated with rat anti-mouse APC-CD11c, CD80-FITC and CD86-PE antibodies (BioLegend) at room temperature for 20 minutes. The cells were centrifuged at 400 g for 10 minutes and re-suspended in 500 μl of 10% paraformaldehyde.
The expression levels of the surface molecules CD80 and CD86 were detected and quantified by flow cytometry.\nSixteen 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China) were evenly divided into 4 groups and given a single subcutaneous injection of 0.1 ml of solution containing 4 μg recombinant HBsAg, 20 μg BW006, 4 μg recombinant HBsAg with 20 μg BW006, or normal saline (NS), respectively. The mice of all four groups were euthanized 12 hours after the injection and dendritic cells were separated with an autoMACS kit (Miltenyi Biotec) per the manufacturer's instructions. Single cell suspensions (1 × 10^6/ml) were prepared with NycoPrep™ 1.077A (AXIS-SHIELD PoC AS, Oslo, Norway). The dendritic cell concentration was adjusted to 1 × 10^5/ml and the cells were incubated with rat anti-mouse APC-CD11c, CD80-FITC and CD86-PE antibodies (BioLegend) at room temperature for 20 minutes. The cells were centrifuged at 400 g for 10 minutes and re-suspended in 500 μl of 10% paraformaldehyde. The expression levels of the surface molecules CD80 and CD86 were detected and quantified by flow cytometry.\n[SUBTITLE] Statistical analysis [SUBSECTION] Statistical analysis was performed with SAS version 9.2 (SAS Institute Inc., Cary, North Carolina, USA). Categorical variables were compared by the chi-square test; if at least 50% of cells had expected counts less than 5, Fisher's exact test results were used instead. Continuous variables were expressed as mean with standard deviation and compared by Student's t-test or the Mann-Whitney U test as appropriate. All statistical tests were two-sided, and p values less than 0.05 were considered statistically significant (an illustrative sketch of this decision rule is given after the text sections below).\nStatistical analysis was performed with SAS version 9.2 (SAS Institute Inc., Cary, North Carolina, USA). Categorical variables were compared by the chi-square test; if at least 50% of cells had expected counts less than 5, Fisher's exact test results were used instead. Continuous variables were expressed as mean with standard deviation and compared by Student's t-test or the Mann-Whitney U test as appropriate. All statistical tests were two-sided, and p values less than 0.05 were considered statistically significant.", "ODN (BW006) was provided by Yunnan Wosen Biotechnology Company. BW006 was synthesized with the sequence 5'-tcgacgttcgtcgttcgtcgttc-3' and then sulphurized. The lipopolysaccharide (LPS) level in BW006 was less than 0.5 ng/mg by Limulus assay.", "The awd2 subtype HBsAg was expressed in Hansenula polymorpha yeast and produced by Yunnan Wosen Biotechnology Company. The antigen had a concentration of 0.236 mg/ml and a purity of greater than 99.0% by high-performance liquid chromatography (HPLC) and silver staining. The LPS level in the antigen was less than 10 EU/ml by Limulus assay.", "The awd2 HB vaccine was made with alum adjuvant and provided by Yunnan Wosen Biotechnology Company. The concentrations of HBsAg and alum were 24 μg/ml and 0.5 mg/ml respectively.", "The reagents for cytokine detection were provided by Panomics Company and used per the protocol for each cytokine. Anti-HBs was detected with an automated chemiluminescent microparticle immunoassay (Abbott ARCHITECT® anti-HBs). The reagents for the detection of leucocyte surface molecules were provided by BioLegend Company.", "Splenocytes were isolated from 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China).
Single cell suspensions (1 × 106/ml) were prepared with NycoPrep ™ 1.077A (AXIS-SHIELD PoC AS, Oslo, Norway) and suspended in 1640 medium (RPMI 1640, Hyclone) with penicillin-streptomycin (final concentrations of 100 U/ml and 100 μg/ml respectively) (Sigma, Irvine, U.K.). 0.1 ml of the single cell suspension and 0.1 ml BW006 (final concentration of 5 μg/ml) were added to round-bottom 96 wells microtiter plates and cultured at 37°C with 5% CO2. The mRNA expression of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b and IFN-r were analyzed at 1, 3, 6 and 12 hours respectively, each in triplicate.", "100 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China) were evenly divided into 10 groups and respectively received a single intramuscular injection (i.m.) into left tibialis anterior muscles, of 0.1 ml solution containing 50 μg alum, 20 μg BW006, 2 μg recombinant HBsAg, 2 μg recombinant HBsAg with 1.25 μg BW006, 2 μg recombinant HBsAg with 5 μg BW006, 2 μg recombinant HBsAg with 20 μg BW006, 2 μg recombinant HB vaccine, 2 μg recombinant HB vaccine with 1.25 μg BW006, 2 μg recombinant HB vaccine with 5 μg BW006 and 2 μg recombinant HB vaccine with 20 μg BW006. The duplicate aliquots of plasma were collected weekly from 1-4 weeks, one for HBsAb testing and the other one for HBsAb titration if it was HBsAb positive. The antibodies against HBsAg were detected and quantified in triplicate for each specimen with Abbott ARCHITECT® anti-HBs reagent as described above. The average value of the triplicate equal to or greater than 10 mIU/ml was considered as positive HBsAb conversion. The numbers of serum HBsAb conversion and titer of each positive serum were calculated.", "Sixteen 6- to 8-week-old female BALB/c mice (H-2d, National Institute for the Control of Pharmaceutical and Biological Products, Beijing, China) were evenly divided into 4 groups and given a single subcutaneous injection of 0.1 ml solution containing 4 μg recombinant HBsAg, 20 μg BW006, 4 μg recombinant HBsAg with 20 μg BW006, or NS, respectively. The mice of all four groups were euthanized 12 hours after the injection and dendritic cells were separated by auto-MACs kit (Miltenyibiotec Com.) per manufacturer's instructions. Single cell suspensions (1 × 106/ml) were prepared with NycoPrep ™ 1.077A (AXIS-SHIELD PoC AS, Oslo, Norway). Cell concentration of dendritic cells were adjusted to 1 × 105/ml and co-culvated with rat-anti mouse APC-CD11c, CD80-FITC and CD86-PE antibodies from BioLegend company at room temperature for 20 minutes. The cells were centrifuged at 400 g for 10 minutes and re-suspended with 500 μl of 10% paraformaldehyde. The expression levels of the surface molecules CD80 and CD86 were detected and quantified by flow cytometry.", "Statistical analysis was performed by SAS version 9.2 (SAS Institute Inc., Cary, North Carolina, USA). Categorical variables were compared by Chi's square test. If 50% of cells had expected counts less than 5, Fisher's Exact test results would be used. Continuous variables were expressed as mean with standard deviation and compared by Student's t-test or Mann Whitney U test as appropriate. 
All statistical tests were two-sided, and p values less than 0.05 were considered statistically significant.", "[SUBTITLE] Enhancement of HBsAb production [SUBSECTION] HBsAb serum conversion was confirmed after 3 weeks of vaccination in all 10 mice injected with 20 μg BW006 (CpG DNA) and HBV vaccine, while 6 and 8 mice respectively had HBsAb serum conversion after 3 and 4 weeks of vaccination with regular HBV vaccine alone. The positive proportion of HBsAb serum conversion was significantly higher after two weeks of vaccination in the group vaccinated with 20 μg BW006 and HBV vaccine than that in the group with HBV vaccine alone (7 Vs. 1 out of 10, P = 0.0198). BW006 dose dependant co-stimulation effect of HBsAb serum conversion on HBV vaccine was seen between the ranges of 1.25 μg to 20 μg. The result showed that 5 μg BW006 had the best co-stimulation effect of HBsAb serum conversion for mice vaccinated with recombinant expressed HBsAg. All 10 mice vaccinated with HBsAg and 5 μg BW006 had anti-HBs serum conversion after 3 weeks. Surprisingly, 5 μg BW006 with HBsAg had almost the same co-stimulation effect on HBsAb serum conversion as 20 μg BW006 with HBV vaccine in the mouse model. However, the fourth week HBsAb concentration was at least 2 times higher in the mice vaccinated with 20 μg BW006 with vaccine than that in any other combination (Figure 1).\nThe average production of HBsAb (mIU/ml) in mice immunized with different dosage of CpG DNA and recombinant expressed HBsAg or HBV commercial vaccine in four weeks. A: mice immunized with HBV vaccine and 20 μg CpG DNA; star: mice immunized with HBV vaccine and 5 μg CpG DNA; dot: mice immunized with HBsAg and 5 μg CpG DNA; square: mice immunized with HBsAg and 20 μg CpG DNA; diamond: mice immunized with HBV vaccine and 1.25 μg CpG DNA; circle: mice immunized with HBsAg and 1.25 μg CpG DNA; triangle: mice only immunized with HBV vaccine; line only: mice only immunized with HBsAg; mice injected with alum and saline solution did not produce HBsAb.\nHBsAb serum conversion was confirmed after 3 weeks of vaccination in all 10 mice injected with 20 μg BW006 (CpG DNA) and HBV vaccine, while 6 and 8 mice respectively had HBsAb serum conversion after 3 and 4 weeks of vaccination with regular HBV vaccine alone. The positive proportion of HBsAb serum conversion was significantly higher after two weeks of vaccination in the group vaccinated with 20 μg BW006 and HBV vaccine than that in the group with HBV vaccine alone (7 Vs. 1 out of 10, P = 0.0198). BW006 dose dependant co-stimulation effect of HBsAb serum conversion on HBV vaccine was seen between the ranges of 1.25 μg to 20 μg. The result showed that 5 μg BW006 had the best co-stimulation effect of HBsAb serum conversion for mice vaccinated with recombinant expressed HBsAg. All 10 mice vaccinated with HBsAg and 5 μg BW006 had anti-HBs serum conversion after 3 weeks. Surprisingly, 5 μg BW006 with HBsAg had almost the same co-stimulation effect on HBsAb serum conversion as 20 μg BW006 with HBV vaccine in the mouse model. However, the fourth week HBsAb concentration was at least 2 times higher in the mice vaccinated with 20 μg BW006 with vaccine than that in any other combination (Figure 1).\nThe average production of HBsAb (mIU/ml) in mice immunized with different dosage of CpG DNA and recombinant expressed HBsAg or HBV commercial vaccine in four weeks. 
A: mice immunized with HBV vaccine and 20 μg CpG DNA; star: mice immunized with HBV vaccine and 5 μg CpG DNA; dot: mice immunized with HBsAg and 5 μg CpG DNA; square: mice immunized with HBsAg and 20 μg CpG DNA; diamond: mice immunized with HBV vaccine and 1.25 μg CpG DNA; circle: mice immunized with HBsAg and 1.25 μg CpG DNA; triangle: mice only immunized with HBV vaccine; line only: mice only immunized with HBsAg; mice injected with alum and saline solution did not produce HBsAb.\n[SUBTITLE] Increase in cytokine mRNA production by splenocytes stimulated with BW006 [SUBSECTION] Co-culture of splenocytes with BW006 at a final concentration of 5 μg/ml had different effects on the production of individual cytokines (Figure 2). The largest increase was seen for IL-12b between 3 and 6 hours after the introduction of BW006 (22.29 and 26.07 times the 0-hour value, P < 0.0001, respectively). IL-12b began decreasing after 6 hours of co-culture with BW006 and was at almost half its peak mRNA value at 12 hours. IL-12a increased significantly after 2 hours of stimulation; however, its mRNA production was only 2.17 times as high as the value before co-culture with BW006 and decreased to the original value by around 6 hours. The other significant increase in cytokine mRNA was observed for IL-10, which had a peak increase between 3 and 6 hours (10.86 and 11.93 times the 0-hour value, P < 0.0001, respectively) and decreased to almost half of that at 12 hours. All the other cytokines tested, including IL-2, IL-4, IL-5 and IFN-γ, showed some variation during the observation period, but these differences were not significant (see the illustrative fold-change sketch below).\nThe ratios of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b and IFN-γ mRNA expression in mouse mononuclear cells stimulated with CpG DNA (5 μg/ml) over 12 hours. Line with squares: IL-12b; triangles: IL-10; dots: IL-12a; circles: IL-2; stars: IL-4; diamonds: IL-5; line only: IFN-γ.\nCo-culture of splenocytes with BW006 at a final concentration of 5 μg/ml had different effects on the production of individual cytokines (Figure 2). The largest increase was seen for IL-12b between 3 and 6 hours after the introduction of BW006 (22.29 and 26.07 times the 0-hour value, P < 0.0001, respectively). IL-12b began decreasing after 6 hours of co-culture with BW006 and was at almost half its peak mRNA value at 12 hours. IL-12a increased significantly after 2 hours of stimulation; however, its mRNA production was only 2.17 times as high as the value before co-culture with BW006 and decreased to the original value by around 6 hours. The other significant increase in cytokine mRNA was observed for IL-10, which had a peak increase between 3 and 6 hours (10.86 and 11.93 times the 0-hour value, P < 0.0001, respectively) and decreased to almost half of that at 12 hours. All the other cytokines tested, including IL-2, IL-4, IL-5 and IFN-γ, showed some variation during the observation period, but these differences were not significant.\nThe ratios of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b and IFN-γ mRNA expression in mouse mononuclear cells stimulated with CpG DNA (5 μg/ml) over 12 hours. Line with squares: IL-12b; triangles: IL-10; dots: IL-12a; circles: IL-2; stars: IL-4; diamonds: IL-5; line only: IFN-γ.\n[SUBTITLE] Stimulation of leucocyte surface molecules CD80 and CD86 [SUBSECTION] There was no significant difference between the negative control group and the 4 μg recombinant HBsAg group in either the positive proportion (8.61% vs. 10.65%) or the fluorescent intensity (118.12 vs. 122.6) of surface molecule CD80 expression in leucocyte cells (Figure 3).
20 μg BW006 increased the positive proportion (15.14% for BW006 alone and 15.84% for BW006 and 4 μg recombinant HBsAg) and fluorescent intensity (139.86 for BW006 alone and 158.67 for BW006 and 4 μg recombinant HBsAg) of surface molecule CD80 expression in leucocyte cells.\nThe positive proportions (%) of CD80 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group4), respectively.\nThe same trend was seen in CD86 (Figure 4), in which positive expression proportion was almost doubled in leucocyte cells compared with that in the control group (52.12% vs. 27.37%) or in 4 μg recombinant HBsAg injection alone group (54.09% vs. 28.36%). The fluorescent intensity of surface molecule CD86 expression also increased with BW006 compared with that of the control group (292.68 Vs. 213.78) or of HBsAg alone group (299.35 Vs. 211.78) in leucocyte cells.\nThe positive proportions (%) of CD86 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group 4), respectively.\nThere was no significant difference between mice groups of negative control and 4 μg recombinant HBsAg injection in the positive proportion (8.61% Vs. 10.65%) and fluorescent intensity (118.12 Vs. 122.6) of surface molecule CD80 expression in leucocyte cells (Figure 3). 20 μg BW006 increased the positive proportion (15.14% for BW006 alone and 15.84% for BW006 and 4 μg recombinant HBsAg) and fluorescent intensity (139.86 for BW006 alone and 158.67 for BW006 and 4 μg recombinant HBsAg) of surface molecule CD80 expression in leucocyte cells.\nThe positive proportions (%) of CD80 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group4), respectively.\nThe same trend was seen in CD86 (Figure 4), in which positive expression proportion was almost doubled in leucocyte cells compared with that in the control group (52.12% vs. 27.37%) or in 4 μg recombinant HBsAg injection alone group (54.09% vs. 28.36%). The fluorescent intensity of surface molecule CD86 expression also increased with BW006 compared with that of the control group (292.68 Vs. 213.78) or of HBsAg alone group (299.35 Vs. 211.78) in leucocyte cells.\nThe positive proportions (%) of CD86 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group 4), respectively.", "HBsAb serum conversion was confirmed after 3 weeks of vaccination in all 10 mice injected with 20 μg BW006 (CpG DNA) and HBV vaccine, while 6 and 8 mice respectively had HBsAb serum conversion after 3 and 4 weeks of vaccination with regular HBV vaccine alone. The positive proportion of HBsAb serum conversion was significantly higher after two weeks of vaccination in the group vaccinated with 20 μg BW006 and HBV vaccine than that in the group with HBV vaccine alone (7 Vs. 1 out of 10, P = 0.0198). BW006 dose dependant co-stimulation effect of HBsAb serum conversion on HBV vaccine was seen between the ranges of 1.25 μg to 20 μg. The result showed that 5 μg BW006 had the best co-stimulation effect of HBsAb serum conversion for mice vaccinated with recombinant expressed HBsAg. 
All 10 mice vaccinated with HBsAg and 5 μg BW006 had anti-HBs serum conversion after 3 weeks. Surprisingly, 5 μg BW006 with HBsAg had almost the same co-stimulation effect on HBsAb serum conversion as 20 μg BW006 with HBV vaccine in the mouse model. However, the fourth week HBsAb concentration was at least 2 times higher in the mice vaccinated with 20 μg BW006 with vaccine than that in any other combination (Figure 1).\nThe average production of HBsAb (mIU/ml) in mice immunized with different dosage of CpG DNA and recombinant expressed HBsAg or HBV commercial vaccine in four weeks. A: mice immunized with HBV vaccine and 20 μg CpG DNA; star: mice immunized with HBV vaccine and 5 μg CpG DNA; dot: mice immunized with HBsAg and 5 μg CpG DNA; square: mice immunized with HBsAg and 20 μg CpG DNA; diamond: mice immunized with HBV vaccine and 1.25 μg CpG DNA; circle: mice immunized with HBsAg and 1.25 μg CpG DNA; triangle: mice only immunized with HBV vaccine; line only: mice only immunized with HBsAg; mice injected with alum and saline solution did not produce HBsAb.", "Splenocytes co-cultured with 5 μg/ml final concentration of BW006 had different effects on cytokines production (Figure 2). The highest increase of cytokines was IL-12b between 3 and 6 hours after the introduction of BW006 (22.29 and 26.07 times vs. 0 hour, P < 0.0001, respectively). The peak increase of IL-12b began decreasing after 6 hours of co-culture with BW006 and had almost half the peak value of mRNA at 12 hours. IL-12a increased significantly after 2 hours of stimulation. However, the mRNA production was only 2.17 times high as the value before co-culture with BW006 and decreased to the original value around 6 hours. The other significant increase of cytokine mRNA was observed in IL-10, which had a peak increase between 3 and 6 hours (10.86 and 11.93 times vs. 0 hour, P < 0.0001, respectively) and decreased to almost half at 12 hours. All the other cytokines tested, including IL-2, IL-4, IL-5 and IFN-r, had variations during the observation period. But these differences were not significant.\nThe ratios of IL-2, IL-4, IL-5, IL-10, IL-12a, IL-12b, IFN-r mRNA expression in mouse mononuclear cells stimulated with CpG DNA (5 μg/ml) in 12 hours. Line with squares: IL-12b; triangles: IL-10; dots: IL-12a; circles: IL-2; stars: IL-4; diamonds: IL-5; line only: IFN-r.", "There was no significant difference between mice groups of negative control and 4 μg recombinant HBsAg injection in the positive proportion (8.61% Vs. 10.65%) and fluorescent intensity (118.12 Vs. 122.6) of surface molecule CD80 expression in leucocyte cells (Figure 3). 20 μg BW006 increased the positive proportion (15.14% for BW006 alone and 15.84% for BW006 and 4 μg recombinant HBsAg) and fluorescent intensity (139.86 for BW006 alone and 158.67 for BW006 and 4 μg recombinant HBsAg) of surface molecule CD80 expression in leucocyte cells.\nThe positive proportions (%) of CD80 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group4), respectively.\nThe same trend was seen in CD86 (Figure 4), in which positive expression proportion was almost doubled in leucocyte cells compared with that in the control group (52.12% vs. 27.37%) or in 4 μg recombinant HBsAg injection alone group (54.09% vs. 28.36%). 
The fluorescent intensity of surface molecule CD86 expression also increased with BW006 compared with that of the control group (292.68 Vs. 213.78) or of HBsAg alone group (299.35 Vs. 211.78) in leucocyte cells.\nThe positive proportions (%) of CD86 (bar) and the average fluorescent intensities (line) of mouse DC cells stimulated with saline solution (group 1), 4 μg HBsAg (group 2), 20 μg CpG DNA (group 3) and 4 μg HBsAg+20 μg CpG DNA (group 4), respectively.", "Alum adjuvant HBV vaccine has been used for hepatitis B prevention for decades. However, there have been vaccine unresponsive cases reported in the immunodeficient patients of uremia and other conditions [11-13]. Scientists have been working for applicable new adjuvant for HBV and other diseases [14-17]. CpG ODN can enhance the immune response of live-attenuated or multivalent vaccines that cannot mix with alum. There have been studies that confirmed the potentiality of CpG ODN as HBV adjuvant [10,18]. We designed the new sequence of CpG ODN with two motifs that could improve the immune response of HBV recombinant expressed HBsAg and also vaccine (containing alum adjuvant). The co-stimulatory effect was observed dynamically with anti-HBs production in mice. The production of cytokines in vitro and the expression of leucocyte surface molecules CD80 and CD86 were evaluated in vitro for the possible mechanism of the enhanced immunization of HBV.\nOur new CpG ODN alone can induce HBsAg producing anti-HBs response or together with alum in the mouse model (CpG ODN and regular vaccine groups). However, a stronger specific antibody production was observed in the mice immunized with CpG ODN and HBV regular vaccine after 3 weeks of vaccination, owing to the synergistic action of CpG and alum. Alum adjuvant has an important disadvantage of induction of a Th2 rather than a Th1-type immune response. The use of alum as an adjuvant was reported to interfere with cell-mediated immunity and blocks activation of CD8+ CTL [19]. The Th1 immune response induced by CpG ODN was able to overcome the Th2 bias of alum for both antibody isotype and CTL response when both the agents were used together [20]. Our results showed even higher specific antibody production.\nInterleukin-12, a cytokine with an important role against intracellular pathogens, promotes Th1 cell development, cell mediated cytotoxicity, and interferon-gamma production. The production of IL12, especially the increase of IL-12b played an important role in HBV clearance [21]. Almost a 30-fold increase of IL12b was observed in splenocytes after 3-6 hours co-culture with 5 μg/ml final concentration of CpG ODN. The significant increase of IL10 seems to play a role in Th1/Th2 cells balance.\nThe T cell immune response requires co-stimulatory signals delivered through one or more receptors on the surface of T cells. The proteins, CD80 and CD86 (also known as the B7-1 and B7-2 ligands), are molecules found on activated B cells and monocytes, which provide a costimulatory signal necessary for T cell activation and survival [22,23]. The expression of CD80 and CD86 has been used for evaluation of T cell response for vaccine development [24-26]. The increased production of HBsAb as well as IL12 and IL10 might be associated with the activation of CD80 and CD86 expression in the experiment groups with HBsAg and CpG-ODN. Decreased function of peripheral blood dendritic cells has been reported in patients with hepatocellular carcinoma with hepatitis B and C virus infection [27]. 
Myeloid dendritic cells (mDC) of patients with chronic HBV are impaired in their maturation and function, resulting in more tolerogenic rather than immunogenic responses, which may contribute to viral persistence [28]. It is interesting that there was no significant difference in positive proportion or intensity of CD80 and CD86 expression between the mouse group vaccinated with CpG-ODN alone and the group with CpG-ODN together with recombinant HBsAg in our experiment.", "The authors declare that they have no competing interests.", "XZ and PH contributed equally to the manuscript. XZ and PH carried out the mouse vaccination, the detection of HBsAb, cytokines and the surface molecules CD80 and CD86 production. ZH assisted the immunoassays. XW performed the statistical analysis and drafted the manuscript. ZL conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript." ]
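The Statistical analysis subsection above specifies the categorical-comparison rule used for outcomes such as HBsAb seroconversion: a chi-square test, replaced by Fisher's exact test when half of the expected cell counts fall below 5, with seroconversion defined as a mean triplicate anti-HBs reading of at least 10 mIU/ml. The original analyses were run in SAS 9.2; the snippet below is only a minimal Python sketch of that rule, and the function names (`seroconverted`, `compare_conversion`) and example inputs are illustrative, not the authors' code.

```python
from scipy.stats import chi2_contingency, fisher_exact

def seroconverted(triplicate_miu_per_ml, cutoff=10.0):
    """A serum counts as anti-HBs positive when the mean of its triplicate
    readings is greater than or equal to the 10 mIU/ml cutoff."""
    return sum(triplicate_miu_per_ml) / len(triplicate_miu_per_ml) >= cutoff

def compare_conversion(pos_a, n_a, pos_b, n_b, alpha=0.05):
    """Compare seroconversion proportions between two vaccination groups,
    using chi-square unless half of the expected cell counts are below 5,
    in which case Fisher's exact test is used (two-sided throughout)."""
    table = [[pos_a, n_a - pos_a], [pos_b, n_b - pos_b]]
    chi2_stat, p, dof, expected = chi2_contingency(table, correction=False)
    if (expected < 5).sum() >= expected.size / 2:
        _, p = fisher_exact(table, alternative="two-sided")
        test = "Fisher's exact"
    else:
        test = "chi-square"
    return test, p, p < alpha

# Example with the week-2 counts reported in the Results
# (7/10 converted with vaccine + 20 ug BW006 vs 1/10 with vaccine alone).
print(compare_conversion(7, 10, 1, 10))
```

For that 2 × 2 table the expected counts are 4 and 6 in each row, so the rule falls back to Fisher's exact test, which gives the two-sided p value of about 0.0198 quoted in the Results.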
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
The relative risk of second primary cancers in Queensland, Australia: a retrospective cohort study.
21342533
Cancer survivors face an increased likelihood of being subsequently diagnosed with another cancer. The aim of this study was to quantify the relative risk of survivors developing a second primary cancer in Queensland, Australia.
BACKGROUND
Standardised incidence rates stratified by type of first primary cancer, type of second primary cancer, sex, age at first diagnosis, period of first diagnosis and follow-up interval were calculated for residents of Queensland, Australia, who were diagnosed with a first primary invasive cancer between 1982 and 2001 and survived for a minimum of 2 months.
METHODS
A total of 23,580 second invasive primary cancers were observed over 1,370,247 years of follow-up among 204,962 cancer patients. Both males (SIR = 1.22; 95% CI = 1.20-1.24) and females (SIR = 1.36; 95% CI = 1.33-1.39) within the study cohort were found to have a significant excess risk of developing a second cancer relative to the incidence of cancer in the general population. The observed number of second primary cancers was also higher than expected within each age group, across all time periods and during each follow-up interval.
RESULTS
The excess risk of developing a second malignancy among cancer survivors can likely be attributed to factors including similar aetiologies, genetics and the effects of treatment, underlining the need for ongoing monitoring of cancer patients to detect subsequent tumours at an early stage. Education campaigns developed specifically for survivors may be required to lessen the prevalence of known cancer risk factors.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Australia", "Cohort Studies", "Female", "Follow-Up Studies", "Humans", "Male", "Middle Aged", "Neoplasms, Second Primary", "Queensland", "Retrospective Studies", "Risk", "Young Adult" ]
3052198
null
null
Methods
A retrospective cohort design was used for this analysis. De-identified case records were obtained from the Queensland Cancer Registry (QCR), a population-based registry covering the entire state. All public and private hospitals, nursing homes and pathology services throughout Queensland are required by law to notify the QCR about any patients diagnosed with cancer, except for non-melanoma skin cancers [8]. The study cohort included all Queensland residents diagnosed with a first primary invasive cancer between 1982 and 2001 who survived for a minimum of 2 months. Since classifications for childhood cancers are different to adult cancers [9], we decided to restrict the cohort to people who were 15 years or older at the time of first diagnosis. A small number of records were excluded because information was missing for age (n = 20), or because the person was known to have had a first primary cancer diagnosed prior to 1982 (n = 76). The cohort was followed up until 31st December 2006 (allowing a potential minimum of five years and a maximum of 25 years after the initial diagnosis) to ascertain the occurrence of second primary invasive cancers. Histologically similar cases of cancer at the same body site were included, unless the medical record indicated that the tumour was recurrent or metastatic. Synchronous primary cancers (those diagnosed within 2 months of the first primary cancer) [10] were excluded because they were more likely to have been diagnosed as a result of detection bias [6]. Third or subsequent primary cancers were not considered in this analysis. Cancers of various body sites were analysed in a separate group if they averaged more than 100 eligible first primary cancers per year. These cancers are shown in Table 1. Cancers of the head and neck (ICD-O-3 topography codes C00-C14 and C30-C32) were grouped together prior to analysis, as were cancers of the colon and rectum (C18-C20 and C218). All remaining types of cancer were analysed collectively in the category labelled "Other", including cases where site was ill-defined or unknown. Characteristics of the study cohort n.a. = not applicable; CNS = central nervous system; PCT = plasma cell tumours. Person years at risk (PYAR) among people diagnosed with a first primary cancer was calculated as the time from 2 months after diagnosis until 31 December 2006, date of death or date of diagnosis of a second primary cancer, whichever came first. Data were stratified by type of first primary cancer, type of second primary cancer, sex, age at first diagnosis, period of first diagnosis and follow-up interval. Analysis by period of first diagnosis was restricted to five years of follow-up to allow more consistent comparisons across time. The expected number of second primary cancers in each stratum was calculated by multiplying the sum of PYAR by the cancer-specific incidence rate experienced by the general Queensland population, matched by sex, age group and time period. Standardised incidence ratios (SIRs) were then obtained by dividing the observed number of cases of second primary cancer by the expected number. The SIR is thus used to estimate the risk of a cancer patient developing a second primary malignancy relative to the incidence of cancer among the general population. Confidence intervals (CIs) for the SIRs were derived from the Poisson distribution [11] and calculated at the 95% level of certainty. All analyses were conducted using SAS v9.2 for Windows. 
The data required for this study were non-identifiable, so no ethics committee approval was necessary.
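The Methods above define person-years at risk (from 2 months after the first diagnosis to the earliest of 31 December 2006, death or a second primary diagnosis), expected case counts (person-years multiplied by matched general-population incidence rates) and SIRs with Poisson-based 95% confidence intervals, with all analyses performed in SAS 9.2. As a minimal illustration only, the sketch below re-expresses those calculations in Python; the function names, the flat (person-years, rate) structure for strata and the use of the standard exact (chi-square based) Poisson interval are assumptions made for the example rather than the authors' code.

```python
from datetime import date
import calendar
from scipy.stats import chi2

STUDY_END = date(2006, 12, 31)

def add_months(d, n):
    """Shift a date forward by n calendar months, clamping the day of month."""
    m = d.month - 1 + n
    y, m = d.year + m // 12, m % 12 + 1
    return date(y, m, min(d.day, calendar.monthrange(y, m)[1]))

def person_years_at_risk(first_dx, death=None, second_dx=None):
    """Follow-up starts 2 months after the first diagnosis and ends at the
    earliest of study end, death or second primary diagnosis."""
    start = add_months(first_dx, 2)
    end = min(d for d in (STUDY_END, death, second_dx) if d is not None)
    return max((end - start).days, 0) / 365.25

def expected_cases(strata):
    """Sum of person-years x matched population incidence rate over the
    sex/age/period strata; `strata` is a list of (person_years, rate) pairs."""
    return sum(py * rate for py, rate in strata)

def sir_with_ci(observed, expected, level=0.95):
    """SIR = observed / expected, with an exact Poisson confidence interval
    for the observed count divided through by the expected count."""
    a = 1.0 - level
    low = chi2.ppf(a / 2, 2 * observed) / 2 if observed > 0 else 0.0
    high = chi2.ppf(1 - a / 2, 2 * (observed + 1)) / 2
    return observed / expected, low / expected, high / expected

# Entirely hypothetical example: follow-up for one patient, then an SIR from
# 45 observed second primaries against 31.6 expected.
print(person_years_at_risk(date(1990, 3, 15), second_dx=date(1995, 7, 1)))
print(sir_with_ci(45, 31.6))
```

On these made-up inputs `sir_with_ci(45, 31.6)` returns an SIR of approximately 1.42 with a 95% CI of roughly 1.04 to 1.90, mirroring the way the stratified estimates are reported in the Results tables.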
null
null
null
null
[ "Background", "Results", "Relative risk of second primary cancers by sex", "Relative risk of second primary cancers by age group at diagnosis", "Relative risk of second primary cancers by time period of first diagnosis", "Relative risk of second primary cancers by follow-up interval", "Relative risk by type of first and second primary cancers and sex", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The number of people being diagnosed with cancer worldwide is rising sharply, mainly as a result of population growth, ageing and increases in the prevalence of several lifestyle factors related to cancer risk [1,2]. Combined with improvements in survival, due in part to both earlier detection and better treatment, more people are living with a diagnosis of cancer than ever before, and this increasing trend is likely to continue [3].\nOne of the consequences of surviving cancer is the increased likelihood of being diagnosed with a second primary cancer. Among other things, these second cancers may be the result of lifestyle choices, genetics, environmental exposures (all of which may also be related to the first cancer) and late effects of treatment [3,4].\nFor the patient and their treating clinician, quantifying and characterising the risks for second malignancies has important implications for screening (particularly when effective screening methods such as mammography are available), prevention strategies and counselling [3,5,6]. Identifying cancers which have an elevated likelihood of occurring together is also a useful starting point for investigating possible shared aetiologies and mechanisms of carcinogenesis [7].\nThis study reports for the first time the relative risks of a second cancer for people diagnosed with a first primary cancer in Queensland, Australia.", "The basic characteristics of the study cohort are summarised in Table 1. Among the 204,962 eligible cancer patients, a total of 23,580 second invasive primary cancers were observed during 1,370,247 years of follow-up (median follow-up = 5.5 years per person, interquartile range = 1.3 to 10.2 years per person). In terms of absolute numbers, second primary cancers were more common among males and increased with older age, which is consistent with the distribution of first primary cancers. About one in ten (10.6%) of the second primary cancers were diagnosed within a year of the first diagnosis, while more than one in five (20.6%) were diagnosed at least 10 years afterwards. The highest proportions of second primary cancers occurred following an initial diagnosis of melanoma (21.6%), colorectal cancer (12.9%), prostate cancer (12.7%), or female breast cancer (12.6%).\n[SUBTITLE] Relative risk of second primary cancers by sex [SUBSECTION] Compared to the incidence of cancer in the general Queensland population, both males (SIR = 1.22; 95% CI = 1.20-1.24) and females (SIR = 1.36; 95% CI = 1.33-1.39) in the study cohort exhibited an increased risk of developing a second primary cancer (Table 2). Significantly increased relative risks of invasive cancer were recorded among males following diagnosis of head and neck cancer, oesophageal cancer, lung cancer, melanoma, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. In contrast, males initially diagnosed with either prostate or stomach cancer subsequently experienced a significantly lower risk of cancer compared to the general population.\nRelative risk of second primary cancer by type of first primary cancer and sex, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; n.a. = not applicable; CNS = central nervous system; PCT = plasma cell tumours. 
SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk\nWithin the female cohort, the relative risk of a second cancer was higher for those diagnosed with head and neck cancer, colorectal cancer, lung cancer, melanoma, breast cancer, cervical cancer, uterine cancer, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. There were no types of cancer for which female survivors had a significantly lower risk of developing a second invasive cancer in relation to the general population.\nCompared to the incidence of cancer in the general Queensland population, both males (SIR = 1.22; 95% CI = 1.20-1.24) and females (SIR = 1.36; 95% CI = 1.33-1.39) in the study cohort exhibited an increased risk of developing a second primary cancer (Table 2). Significantly increased relative risks of invasive cancer were recorded among males following diagnosis of head and neck cancer, oesophageal cancer, lung cancer, melanoma, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. In contrast, males initially diagnosed with either prostate or stomach cancer subsequently experienced a significantly lower risk of cancer compared to the general population.\nRelative risk of second primary cancer by type of first primary cancer and sex, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; n.a. = not applicable; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk\nWithin the female cohort, the relative risk of a second cancer was higher for those diagnosed with head and neck cancer, colorectal cancer, lung cancer, melanoma, breast cancer, cervical cancer, uterine cancer, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. There were no types of cancer for which female survivors had a significantly lower risk of developing a second invasive cancer in relation to the general population.\n[SUBTITLE] Relative risk of second primary cancers by age group at diagnosis [SUBSECTION] The risk of a second malignancy was higher in comparison to the general population for each of the three age groups for all first primary cancers combined, but tended to decrease as age at first diagnosis increased (Table 3) - 15-49 years (SIR = 1.84; 95% CI = 1.77-1.90), 50-64 years (SIR = 1.39; 95% CI = 1.36-1.42) and 65 years and older (SIR = 1.23; 95% CI = 1.20-1.25). Different patterns emerged within the various cancer-specific cohorts. An elevated relative risk across all three age groups was found following head and neck cancer, lung cancer, melanoma, female breast cancer, cervical cancer, kidney cancer, bladder cancer, non-Hodgkin lymphoma and lymphoid leukaemia. For oesophageal and colorectal cancer, significantly increased relative risks were only observed among those aged under 65 years at first diagnosis, while for pancreatic and brain cancers the risk was elevated in the younger age groups but was lower than expected for people aged 65 years and over. 
Consistently decreased relative risks were recorded within each age group for males with prostate cancer, although the SIR was not statistically significant for those aged 15-49 years.\nRelative risk of second primary cancer by type of first primary cancer and age group at first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. SIRs for pancreatic cancer for the age groups 15-49 years and 50-64 years have been combined due to less than 5 observed cases aged 15-49 years.\nThe risk of a second malignancy was higher in comparison to the general population for each of the three age groups for all first primary cancers combined, but tended to decrease as age at first diagnosis increased (Table 3) - 15-49 years (SIR = 1.84; 95% CI = 1.77-1.90), 50-64 years (SIR = 1.39; 95% CI = 1.36-1.42) and 65 years and older (SIR = 1.23; 95% CI = 1.20-1.25). Different patterns emerged within the various cancer-specific cohorts. An elevated relative risk across all three age groups was found following head and neck cancer, lung cancer, melanoma, female breast cancer, cervical cancer, kidney cancer, bladder cancer, non-Hodgkin lymphoma and lymphoid leukaemia. For oesophageal and colorectal cancer, significantly increased relative risks were only observed among those aged under 65 years at first diagnosis, while for pancreatic and brain cancers the risk was elevated in the younger age groups but was lower than expected for people aged 65 years and over. Consistently decreased relative risks were recorded within each age group for males with prostate cancer, although the SIR was not statistically significant for those aged 15-49 years.\nRelative risk of second primary cancer by type of first primary cancer and age group at first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. SIRs for pancreatic cancer for the age groups 15-49 years and 50-64 years have been combined due to less than 5 observed cases aged 15-49 years.\n[SUBTITLE] Relative risk of second primary cancers by time period of first diagnosis [SUBSECTION] There was some evidence that the more recently a first primary cancer was diagnosed the higher the relative risk of a second primary cancer, with a gradual increase across the four time periods for all cancers combined (Table 4): 1982-1986 (SIR = 1.14; 95% CI = 1.08-1.20), 1987-1991 (SIR = 1.22; 95% CI = 1.17-1.28), 1992-1996 (SIR = 1.36; 95% CI = 1.31-1.41) and 1997-2001 (SIR = 1.46; 95% CI = 1.41-1.50). In particular, the SIRs for people with colorectal cancer, lung cancer, breast cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia and myeloid leukaemia were only significant for the later time periods. Among men with prostate cancer, the risk of developing a second primary cancer was lower than the incidence of cancer experienced by males in the general population irrespective of the year of first diagnosis, except for the latest period (1997-2001). 
Several cancer-specific cohorts had SIRs that were significantly higher than expected in each time period, including head and neck cancer, melanoma, kidney cancer and bladder cancer.\nRelative risk of second primary cancer by type of first primary cancer and time period of first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. Analysis by period of first diagnosis was restricted to five years of follow-up to allow more consistent comparisons across time. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. Estimates are not shown for Brain & CNS for the periods 1982-1986 and 1992-1996 due to observed cell counts of less than 5.\nThere was some evidence that the more recently a first primary cancer was diagnosed the higher the relative risk of a second primary cancer, with a gradual increase across the four time periods for all cancers combined (Table 4): 1982-1986 (SIR = 1.14; 95% CI = 1.08-1.20), 1987-1991 (SIR = 1.22; 95% CI = 1.17-1.28), 1992-1996 (SIR = 1.36; 95% CI = 1.31-1.41) and 1997-2001 (SIR = 1.46; 95% CI = 1.41-1.50). In particular, the SIRs for people with colorectal cancer, lung cancer, breast cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia and myeloid leukaemia were only significant for the later time periods. Among men with prostate cancer, the risk of developing a second primary cancer was lower than the incidence of cancer experienced by males in the general population irrespective of the year of first diagnosis, except for the latest period (1997-2001). Several cancer-specific cohorts had SIRs that were significantly higher than expected in each time period, including head and neck cancer, melanoma, kidney cancer and bladder cancer.\nRelative risk of second primary cancer by type of first primary cancer and time period of first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. Analysis by period of first diagnosis was restricted to five years of follow-up to allow more consistent comparisons across time. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. Estimates are not shown for Brain & CNS for the periods 1982-1986 and 1992-1996 due to observed cell counts of less than 5.\n[SUBTITLE] Relative risk of second primary cancers by follow-up interval [SUBSECTION] The risk of a second primary cancer for all survivors combined remained consistently elevated compared to the general population during each follow-up interval (Table 5): 2 months to less than 1 year after first diagnosis (SIR = 1.32; 95% CI = 1.27-1.37), 1 year to less than 5 years (SIR = 1.33; 95% CI = 1.31-1.36), 5 years to less than 10 years (SIR = 1.39; 95% CI = 1.35-1.42) and 10 years or longer (SIR = 1.28; 95% CI = 1.24-1.31). A significantly increased relative risk across all follow-up intervals was also observed following a diagnosis of head and neck cancer, melanoma, kidney cancer, bladder cancer or lymphoid leukaemia, while the risks were significantly higher from 1 year or longer after diagnosis for colorectal cancer, lung cancer, female breast cancer or non-Hodgkin lymphoma. 
The relative risk of developing another cancer was significantly higher among cervical cancer survivors up to 10 years after the initial diagnosis but not after that. Prostate cancer patients were found to have subsequent risks that remained lower than the matching population, particularly 10 or more years after initial diagnosis.\nRelative risk of second primary cancer by type of first primary cancer and follow-up interval, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. SIRs for pancreatic cancer for 5 years to less than 10 years after first diagnosis and 10 years or longer after first diagnosis have been combined due to less than 5 observed cases 10 years or longer after first diagnosis.\nThe risk of a second primary cancer for all survivors combined remained consistently elevated compared to the general population during each follow-up interval (Table 5): 2 months to less than 1 year after first diagnosis (SIR = 1.32; 95% CI = 1.27-1.37), 1 year to less than 5 years (SIR = 1.33; 95% CI = 1.31-1.36), 5 years to less than 10 years (SIR = 1.39; 95% CI = 1.35-1.42) and 10 years or longer (SIR = 1.28; 95% CI = 1.24-1.31). A significantly increased relative risk across all follow-up intervals was also observed following a diagnosis of head and neck cancer, melanoma, kidney cancer, bladder cancer or lymphoid leukaemia, while the risks were significantly higher from 1 year or longer after diagnosis for colorectal cancer, lung cancer, female breast cancer or non-Hodgkin lymphoma. The relative risk of developing another cancer was significantly higher among cervical cancer survivors up to 10 years after the initial diagnosis but not after that. Prostate cancer patients were found to have subsequent risks that remained lower than the matching population, particularly 10 or more years after initial diagnosis.\nRelative risk of second primary cancer by type of first primary cancer and follow-up interval, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. SIRs for pancreatic cancer for 5 years to less than 10 years after first diagnosis and 10 years or longer after first diagnosis have been combined due to less than 5 observed cases 10 years or longer after first diagnosis.\n[SUBTITLE] Relative risk by type of first and second primary cancers and sex [SUBSECTION] The relative risks for specific second primary cancers varied substantially according to the type of first primary cancer (Figure 1 and 2). Within the melanoma cohort (Figure 1c and 1d), both males and females were over six times more likely to be diagnosed with another primary melanoma compared to the general population. They also had significantly increased relative risks for several other cancers, including thyroid cancer and lymphoid leukaemia (both males and females), brain cancer, non-Hodgkin lymphoma, prostate cancer and colorectal cancer (males only) and kidney cancer and breast cancer (females only). 
However, lung cancers occurred less often than expected among males with a first primary melanoma.\nRelative risk following all cancers combined, melanoma or colorectal cancer by type of second primary cancer and sex, Queensland, 1982-2006. CNS: central nervous system; PCT: plasma cell tumour.Vertical black line indicates SIR point estimate; grey shading indicates SIR 95% confidence interval. X-axes are shown on a log scale.\nRelative risk following prostate cancer, breast cancer, head and neck cancer or lung cancer by type of second primary cancer and sex, Queensland, 1982-2006. CNS: central nervous system; PCT: plasma cell tumour. Vertical black line indicates SIR point estimate; grey shading indicates SIR 95% confidence interval. The SIR for a second primary prostate cancer following a first primary prostate cancer was 0.3 and is not shown in Figure 2a. No cases of lymphoid leukaemia following head and neck cancer were recorded for females (Figure 2d). X-axes are shown on a log scale. Note different scale endpoints for x-axis used in Figures 2d, 2e and 2f.\nAn elevated relative risk of melanoma was also observed for both sexes in the colorectal cancer cohort (Figure 1e and 1f). In addition, male colorectal cancer survivors had higher risks for lymphoid leukaemia, oesophageal cancer and kidney cancer but a lower risk of developing a second primary colorectal cancer compared to the general population, while females with colorectal cancer experienced a subsequent increased risk of breast cancer.\nMen diagnosed with prostate cancer had significantly increased relative risks for thyroid cancer, melanoma, bladder cancer, kidney cancer, non-Hodgkin lymphoma and colorectal cancer but a significantly decreased risk of lung cancer (Figure 2a). Instances of primary prostate cancer occurring twice in the same person were rare. Female breast cancer survivors were more likely to be diagnosed with uterine cancer, myeloid leukaemia, stomach cancer, breast cancer, ovarian cancer, melanoma, kidney cancer or colorectal cancer than were the general population (Figure 2b).\nPeople initially diagnosed with head and neck cancer were found to have elevated relative risks for a second cancer of the head and neck, oesophageal cancer, lung cancer and non-Hodgkin lymphoma (Figure 2c and 2d). Males with head and neck cancer were also at increased risk of developing melanoma, colorectal cancer or bladder cancer. All lung cancer patients had a significantly increased risk of oesophageal and head and neck cancers compared to other residents of Queensland (Figure 2e and 2f), while males with lung cancer experienced high relative risks of being subsequently diagnosed with kidney, pancreas or bladder cancers and females with lung cancer had a significantly increased risk of developing a second primary lung cancer. Lymphoid leukaemia occurred less often among males following lung cancer than in the general population, although both the observed and expected number of cases were small.\nThe relative risks for specific second primary cancers varied substantially according to the type of first primary cancer (Figure 1 and 2). Within the melanoma cohort (Figure 1c and 1d), both males and females were over six times more likely to be diagnosed with another primary melanoma compared to the general population. 
They also had significantly increased relative risks for several other cancers, including thyroid cancer and lymphoid leukaemia (both males and females), brain cancer, non-Hodgkin lymphoma, prostate cancer and colorectal cancer (males only) and kidney cancer and breast cancer (females only). However, lung cancers occurred less often than expected among males with a first primary melanoma.\nRelative risk following all cancers combined, melanoma or colorectal cancer by type of second primary cancer and sex, Queensland, 1982-2006. CNS: central nervous system; PCT: plasma cell tumour.Vertical black line indicates SIR point estimate; grey shading indicates SIR 95% confidence interval. X-axes are shown on a log scale.\nRelative risk following prostate cancer, breast cancer, head and neck cancer or lung cancer by type of second primary cancer and sex, Queensland, 1982-2006. CNS: central nervous system; PCT: plasma cell tumour. Vertical black line indicates SIR point estimate; grey shading indicates SIR 95% confidence interval. The SIR for a second primary prostate cancer following a first primary prostate cancer was 0.3 and is not shown in Figure 2a. No cases of lymphoid leukaemia following head and neck cancer were recorded for females (Figure 2d). X-axes are shown on a log scale. Note different scale endpoints for x-axis used in Figures 2d, 2e and 2f.\nAn elevated relative risk of melanoma was also observed for both sexes in the colorectal cancer cohort (Figure 1e and 1f). In addition, male colorectal cancer survivors had higher risks for lymphoid leukaemia, oesophageal cancer and kidney cancer but a lower risk of developing a second primary colorectal cancer compared to the general population, while females with colorectal cancer experienced a subsequent increased risk of breast cancer.\nMen diagnosed with prostate cancer had significantly increased relative risks for thyroid cancer, melanoma, bladder cancer, kidney cancer, non-Hodgkin lymphoma and colorectal cancer but a significantly decreased risk of lung cancer (Figure 2a). Instances of primary prostate cancer occurring twice in the same person were rare. Female breast cancer survivors were more likely to be diagnosed with uterine cancer, myeloid leukaemia, stomach cancer, breast cancer, ovarian cancer, melanoma, kidney cancer or colorectal cancer than were the general population (Figure 2b).\nPeople initially diagnosed with head and neck cancer were found to have elevated relative risks for a second cancer of the head and neck, oesophageal cancer, lung cancer and non-Hodgkin lymphoma (Figure 2c and 2d). Males with head and neck cancer were also at increased risk of developing melanoma, colorectal cancer or bladder cancer. All lung cancer patients had a significantly increased risk of oesophageal and head and neck cancers compared to other residents of Queensland (Figure 2e and 2f), while males with lung cancer experienced high relative risks of being subsequently diagnosed with kidney, pancreas or bladder cancers and females with lung cancer had a significantly increased risk of developing a second primary lung cancer. 
Lymphoid leukaemia occurred less often among males following lung cancer than in the general population, although both the observed and expected number of cases were small.", "Compared to the incidence of cancer in the general Queensland population, both males (SIR = 1.22; 95% CI = 1.20-1.24) and females (SIR = 1.36; 95% CI = 1.33-1.39) in the study cohort exhibited an increased risk of developing a second primary cancer (Table 2). Significantly increased relative risks of invasive cancer were recorded among males following diagnosis of head and neck cancer, oesophageal cancer, lung cancer, melanoma, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. In contrast, males initially diagnosed with either prostate or stomach cancer subsequently experienced a significantly lower risk of cancer compared to the general population.\nRelative risk of second primary cancer by type of first primary cancer and sex, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; n.a. = not applicable; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk\nWithin the female cohort, the relative risk of a second cancer was higher for those diagnosed with head and neck cancer, colorectal cancer, lung cancer, melanoma, breast cancer, cervical cancer, uterine cancer, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. There were no types of cancer for which female survivors had a significantly lower risk of developing a second invasive cancer in relation to the general population.", "The risk of a second malignancy was higher in comparison to the general population for each of the three age groups for all first primary cancers combined, but tended to decrease as age at first diagnosis increased (Table 3) - 15-49 years (SIR = 1.84; 95% CI = 1.77-1.90), 50-64 years (SIR = 1.39; 95% CI = 1.36-1.42) and 65 years and older (SIR = 1.23; 95% CI = 1.20-1.25). Different patterns emerged within the various cancer-specific cohorts. An elevated relative risk across all three age groups was found following head and neck cancer, lung cancer, melanoma, female breast cancer, cervical cancer, kidney cancer, bladder cancer, non-Hodgkin lymphoma and lymphoid leukaemia. For oesophageal and colorectal cancer, significantly increased relative risks were only observed among those aged under 65 years at first diagnosis, while for pancreatic and brain cancers the risk was elevated in the younger age groups but was lower than expected for people aged 65 years and over. Consistently decreased relative risks were recorded within each age group for males with prostate cancer, although the SIR was not statistically significant for those aged 15-49 years.\nRelative risk of second primary cancer by type of first primary cancer and age group at first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. 
SIRs for pancreatic cancer for the age groups 15-49 years and 50-64 years have been combined due to less than 5 observed cases aged 15-49 years.", "There was some evidence that the more recently a first primary cancer was diagnosed the higher the relative risk of a second primary cancer, with a gradual increase across the four time periods for all cancers combined (Table 4): 1982-1986 (SIR = 1.14; 95% CI = 1.08-1.20), 1987-1991 (SIR = 1.22; 95% CI = 1.17-1.28), 1992-1996 (SIR = 1.36; 95% CI = 1.31-1.41) and 1997-2001 (SIR = 1.46; 95% CI = 1.41-1.50). In particular, the SIRs for people with colorectal cancer, lung cancer, breast cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia and myeloid leukaemia were only significant for the later time periods. Among men with prostate cancer, the risk of developing a second primary cancer was lower than the incidence of cancer experienced by males in the general population irrespective of the year of first diagnosis, except for the latest period (1997-2001). Several cancer-specific cohorts had SIRs that were significantly higher than expected in each time period, including head and neck cancer, melanoma, kidney cancer and bladder cancer.\nRelative risk of second primary cancer by type of first primary cancer and time period of first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. Analysis by period of first diagnosis was restricted to five years of follow-up to allow more consistent comparisons across time. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. Estimates are not shown for Brain & CNS for the periods 1982-1986 and 1992-1996 due to observed cell counts of less than 5.", "The risk of a second primary cancer for all survivors combined remained consistently elevated compared to the general population during each follow-up interval (Table 5): 2 months to less than 1 year after first diagnosis (SIR = 1.32; 95% CI = 1.27-1.37), 1 year to less than 5 years (SIR = 1.33; 95% CI = 1.31-1.36), 5 years to less than 10 years (SIR = 1.39; 95% CI = 1.35-1.42) and 10 years or longer (SIR = 1.28; 95% CI = 1.24-1.31). A significantly increased relative risk across all follow-up intervals was also observed following a diagnosis of head and neck cancer, melanoma, kidney cancer, bladder cancer or lymphoid leukaemia, while the risks were significantly higher from 1 year or longer after diagnosis for colorectal cancer, lung cancer, female breast cancer or non-Hodgkin lymphoma. The relative risk of developing another cancer was significantly higher among cervical cancer survivors up to 10 years after the initial diagnosis but not after that. Prostate cancer patients were found to have subsequent risks that remained lower than the matching population, particularly 10 or more years after initial diagnosis.\nRelative risk of second primary cancer by type of first primary cancer and follow-up interval, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. 
SIRs for pancreatic cancer for 5 years to less than 10 years after first diagnosis and 10 years or longer after first diagnosis have been combined due to less than 5 observed cases 10 years or longer after first diagnosis.",
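Every figure quoted in the results above has the same form: an SIR equal to the observed number of second primary cancers divided by the number expected from matched general-population rates, with a 95% confidence interval derived from the Poisson distribution. As a minimal sketch of that calculation, the Python function below computes an SIR with one common exact-Poisson interval (not necessarily the precise formula of the paper's reference [11]); the function name and the example counts are illustrative only and are not taken from the Queensland tables.

```python
from scipy.stats import chi2

def sir_with_exact_ci(observed, expected, alpha=0.05):
    """Standardised incidence ratio (observed / expected) with an exact
    Poisson confidence interval (default 95%) for the observed count."""
    sir = observed / expected
    # Exact Poisson limits for the observed count via chi-square quantiles,
    # rescaled by the expected count to give limits for the SIR.
    lower = 0.0 if observed == 0 else 0.5 * chi2.ppf(alpha / 2, 2 * observed) / expected
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / expected
    return sir, lower, upper

# Illustrative numbers only, not values from the study:
print(sir_with_exact_ci(observed=132, expected=100.0))
```

With the large observed counts behind most of the SIRs in Tables 2-5 these exact limits are close to a normal approximation, but they remain valid for the small cell counts flagged in the table footnotes (fewer than 5 observed cases).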
"Cancer patients in our study cohort were at significantly higher risk of a second diagnosis compared to the underlying incidence rates experienced in the entire population of Queensland. Although there was some variation in the size of the estimated relative risks, a consistent pattern of increased risk following all first primary cancers combined was seen for both males and females, and across all age groups at first diagnosis, time periods of first diagnosis and follow-up intervals.\nOur data offer further evidence of significant associations between particular types of first and second primary cancers that have been previously documented elsewhere in Australia and around the world [6,12-16]. Some examples include the mutual increased risks between head and neck, lung and oesophageal cancers and the relationship between melanoma and both prostate and female breast cancers.\nMuch of the elevated risk of being diagnosed with a second malignancy can be attributed to risk behaviours (such as smoking, harmful levels of alcohol consumption and poor diet), inherited susceptibilities and/or the medical treatment that cancer survivors have received [3,4,6]. On occasion, treatment can also have the opposite effect of reducing the risk of subsequent diagnosis. As was noted in another recent study of second primary cancer [13], the overall lower SIRs observed among prostate cancer survivors are mainly due to extremely low rates of recurrence as the organ is often completely removed as part of treatment. Given that prostate cancer is the most common cancer diagnosed among males in Queensland, this also helps to explain why the total relative risk is lower for male survivors compared to female survivors.\nAnother potential explanation for the difference in risk between cancer survivors and the general population is that some demographic factors may not be comparable between the two groups. Although the relative risks were based on calculations matched by sex, age group and time period, it is possible that other qualities, such as socioeconomic status, may vary between people who have been diagnosed with cancer and those who have not. This is likely to be the reason for at least part of the reduced risk of lung cancer (which has higher incidence among lower socioeconomic groups) for males following melanoma (which is more common in segments of the community with higher socioeconomic status) [6,17]. A similar explanation could be used for the deficit of lung cancer after a diagnosis of prostate cancer. In contrast, the high reciprocal relative risks of melanoma with both female breast cancer and prostate cancer may correlate with the higher incidence for each of these malignancies among more affluent populations [18-20].\nOne of the interesting relationships that emerged from the analysis was the variation in relative risk of second primary cancer by age at first diagnosis following cancer of the brain and central nervous system.
Younger survivors (15-49 years old) had an increased risk of being diagnosed with a second primary cancer while survivors aged 65 years or over had a decreased risk. This is most likely due to the various histopathological subtypes of brain tumours which are more common in the different age groups. For example, glioblastoma tends to be diagnosed at an older age and is associated with poor survival, allowing a limited time for treatment-related second primary cancers to appear [6].\nThe reasons behind an increased relative risk of second primary cancer following certain types of cancer among survivors who were diagnosed more recently are unknown. Tsukuma et al. [16] described a similar pattern in Japan, and suggested that apparent increases in risk may be due to improved follow-up and surveillance of cancer patients. Another possible cause is the change in treatment modalities over time [4,21].\nSecond primary cancers that arise due to the effects of treatment for the initial cancer might be expected to occur many years after the first diagnosis. However, the increase in relative risk remained fairly consistent irrespective of time since diagnosis, in accordance with results published elsewhere in Australia [13] and the United States [6].\nThe main strengths of this study include the extensive population-based coverage achieved by the QCR for the reporting of cancers among Queensland residents, combined with a high level of histological verification (88% in 2006) [8], which is important for distinguishing between new primary cancers and metastases of an existing cancer. Since all data used in this study have been collected prospectively for administrative purposes, and coded independently of the hypotheses, the opportunity for recall or information bias has been removed.\nIncreased medical surveillance of newly-diagnosed cancer patients may introduce a detection bias for second primary cancers. The likelihood of this happening has been reduced by only considering metachronous primary cancers, with a two-month window between first and second diagnosis. It is also possible that some second primary cancers were incorrectly classified as first primary cancers, especially for cancers diagnosed soon after the establishment of the QCR in the early 1980s; we are unable to quantify the impact of this on the observed results.\nWhile acknowledging that the study cohort and comparison population were not independent, with the population containing people already diagnosed with cancer, this proportion was less than 0.5% in any given year. Finally, as a result of the large number of comparisons made, the possibility that some of the SIR estimates have been spuriously identified as statistically significant needs to be considered, particularly those based on small numbers of primary or secondary cancers, and these results should therefore be interpreted with due caution.", "Our results demonstrate that cancer survivors in Queensland, Australia, like those in other countries, are confronted by a very real, ongoing risk of developing a second primary cancer that is significantly higher than the incidence of cancer experienced by the general population. Some first and second primary cancers share common aetiologies, making it imperative that cancer patients adopt a healthier lifestyle in order to lessen their chances of subsequent diagnoses [4,22]. 
Recent studies have reported little difference in the health behaviours of cancer survivors compared to the wider community [23-25], suggesting that health promotion efforts may need to be specifically targeted at people diagnosed with cancer due to their subsequent increased risk. Further work is needed to determine exactly how beneficial changes to lifestyle may be in regard to the risk of developing a second primary cancer [26]. Even so, some second primary cancers will be unavoidable, and as the number of cancer survivors continues to grow, the importance of ongoing medical supervision and screening to detect second primary cancers at an earlier stage and thereby improve the effectiveness of treatment will remain critical.", "The authors declare that they have no competing interests.", "DY conducted the statistical analysis and drafted the manuscript. PB conceived the study and edited the draft manuscript. Both authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/83/prepub\n" ]
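For completeness, the sketch below shows where the expected counts that anchor each SIR come from: person-years at risk are summed within sex, age-group and time-period strata and multiplied by the matched general-population incidence rates, following the person-years construction described in the paper's methods. All strata, field layouts and rates here are made-up placeholders, not Queensland Cancer Registry data.

```python
from collections import defaultdict

def expected_second_primaries(cohort, population_rates):
    """Expected number of second primary cancers: person-years at risk (PYAR)
    summed per (sex, age group, period) stratum, times the matched
    population incidence rate per person-year."""
    pyar_by_stratum = defaultdict(float)
    for sex, age_group, period, pyar in cohort:
        pyar_by_stratum[(sex, age_group, period)] += pyar
    return sum(pyar * population_rates[stratum]
               for stratum, pyar in pyar_by_stratum.items())

# Toy cohort of three survivors (values invented for illustration):
cohort = [
    ("M", "50-64", "1997-2001", 3.2),
    ("M", "50-64", "1997-2001", 1.8),
    ("F", "65+", "1992-1996", 4.0),
]
rates = {("M", "50-64", "1997-2001"): 0.012, ("F", "65+", "1992-1996"): 0.020}
print(expected_second_primaries(cohort, rates))
```

Dividing an observed count by an expected count built this way, stratum by stratum, is what produces the SIRs tabulated above.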
[ "Background", "Methods", "Results", "Relative risk of second primary cancers by sex", "Relative risk of second primary cancers by age group at diagnosis", "Relative risk of second primary cancers by time period of first diagnosis", "Relative risk of second primary cancers by follow-up interval", "Relative risk by type of first and second primary cancers and sex", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The number of people being diagnosed with cancer worldwide is rising sharply, mainly as a result of population growth, ageing and increases in the prevalence of several lifestyle factors related to cancer risk [1,2]. Combined with improvements in survival, due in part to both earlier detection and better treatment, more people are living with a diagnosis of cancer than ever before, and this increasing trend is likely to continue [3].\nOne of the consequences of surviving cancer is the increased likelihood of being diagnosed with a second primary cancer. Among other things, these second cancers may be the result of lifestyle choices, genetics, environmental exposures (all of which may also be related to the first cancer) and late effects of treatment [3,4].\nFor the patient and their treating clinician, quantifying and characterising the risks for second malignancies has important implications for screening (particularly when effective screening methods such as mammography are available), prevention strategies and counselling [3,5,6]. Identifying cancers which have an elevated likelihood of occurring together is also a useful starting point for investigating possible shared aetiologies and mechanisms of carcinogenesis [7].\nThis study reports for the first time the relative risks of a second cancer for people diagnosed with a first primary cancer in Queensland, Australia.", "A retrospective cohort design was used for this analysis. De-identified case records were obtained from the Queensland Cancer Registry (QCR), a population-based registry covering the entire state. All public and private hospitals, nursing homes and pathology services throughout Queensland are required by law to notify the QCR about any patients diagnosed with cancer, except for non-melanoma skin cancers [8].\nThe study cohort included all Queensland residents diagnosed with a first primary invasive cancer between 1982 and 2001 who survived for a minimum of 2 months. Since classifications for childhood cancers are different to adult cancers [9], we decided to restrict the cohort to people who were 15 years or older at the time of first diagnosis. A small number of records were excluded because information was missing for age (n = 20), or because the person was known to have had a first primary cancer diagnosed prior to 1982 (n = 76).\nThe cohort was followed up until 31st December 2006 (allowing a potential minimum of five years and a maximum of 25 years after the initial diagnosis) to ascertain the occurrence of second primary invasive cancers. Histologically similar cases of cancer at the same body site were included, unless the medical record indicated that the tumour was recurrent or metastatic. Synchronous primary cancers (those diagnosed within 2 months of the first primary cancer) [10] were excluded because they were more likely to have been diagnosed as a result of detection bias [6]. Third or subsequent primary cancers were not considered in this analysis.\nCancers of various body sites were analysed in a separate group if they averaged more than 100 eligible first primary cancers per year. These cancers are shown in Table 1. Cancers of the head and neck (ICD-O-3 topography codes C00-C14 and C30-C32) were grouped together prior to analysis, as were cancers of the colon and rectum (C18-C20 and C218). All remaining types of cancer were analysed collectively in the category labelled \"Other\", including cases where site was ill-defined or unknown.\nCharacteristics of the study cohort\nn.a. 
= not applicable; CNS = central nervous system; PCT = plasma cell tumours.\nPerson years at risk (PYAR) among people diagnosed with a first primary cancer was calculated as the time from 2 months after diagnosis until 31 December 2006, date of death or date of diagnosis of a second primary cancer, whichever came first. Data were stratified by type of first primary cancer, type of second primary cancer, sex, age at first diagnosis, period of first diagnosis and follow-up interval. Analysis by period of first diagnosis was restricted to five years of follow-up to allow more consistent comparisons across time. The expected number of second primary cancers in each stratum was calculated by multiplying the sum of PYAR by the cancer-specific incidence rate experienced by the general Queensland population, matched by sex, age group and time period. Standardised incidence ratios (SIRs) were then obtained by dividing the observed number of cases of second primary cancer by the expected number. The SIR is thus used to estimate the risk of a cancer patient developing a second primary malignancy relative to the incidence of cancer among the general population. Confidence intervals (CIs) for the SIRs were derived from the Poisson distribution [11] and calculated at the 95% level of certainty.\nAll analyses were conducted using SAS v9.2 for Windows. Data required for this study was non-identifiable so no ethics committee approval was necessary.", "The basic characteristics of the study cohort are summarised in Table 1. Among the 204,962 eligible cancer patients, a total of 23,580 second invasive primary cancers were observed during 1,370,247 years of follow-up (median follow-up = 5.5 years per person, interquartile range = 1.3 to 10.2 years per person). In terms of absolute numbers, second primary cancers were more common among males and increased with older age, which is consistent with the distribution of first primary cancers. About one in ten (10.6%) of the second primary cancers were diagnosed within a year of the first diagnosis, while more than one in five (20.6%) were diagnosed at least 10 years afterwards. The highest proportions of second primary cancers occurred following an initial diagnosis of melanoma (21.6%), colorectal cancer (12.9%), prostate cancer (12.7%), or female breast cancer (12.6%).\n[SUBTITLE] Relative risk of second primary cancers by sex [SUBSECTION] Compared to the incidence of cancer in the general Queensland population, both males (SIR = 1.22; 95% CI = 1.20-1.24) and females (SIR = 1.36; 95% CI = 1.33-1.39) in the study cohort exhibited an increased risk of developing a second primary cancer (Table 2). Significantly increased relative risks of invasive cancer were recorded among males following diagnosis of head and neck cancer, oesophageal cancer, lung cancer, melanoma, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. In contrast, males initially diagnosed with either prostate or stomach cancer subsequently experienced a significantly lower risk of cancer compared to the general population.\nRelative risk of second primary cancer by type of first primary cancer and sex, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; n.a. = not applicable; CNS = central nervous system; PCT = plasma cell tumours. 
SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk\nWithin the female cohort, the relative risk of a second cancer was higher for those diagnosed with head and neck cancer, colorectal cancer, lung cancer, melanoma, breast cancer, cervical cancer, uterine cancer, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. There were no types of cancer for which female survivors had a significantly lower risk of developing a second invasive cancer in relation to the general population.\nCompared to the incidence of cancer in the general Queensland population, both males (SIR = 1.22; 95% CI = 1.20-1.24) and females (SIR = 1.36; 95% CI = 1.33-1.39) in the study cohort exhibited an increased risk of developing a second primary cancer (Table 2). Significantly increased relative risks of invasive cancer were recorded among males following diagnosis of head and neck cancer, oesophageal cancer, lung cancer, melanoma, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. In contrast, males initially diagnosed with either prostate or stomach cancer subsequently experienced a significantly lower risk of cancer compared to the general population.\nRelative risk of second primary cancer by type of first primary cancer and sex, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; n.a. = not applicable; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk\nWithin the female cohort, the relative risk of a second cancer was higher for those diagnosed with head and neck cancer, colorectal cancer, lung cancer, melanoma, breast cancer, cervical cancer, uterine cancer, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. There were no types of cancer for which female survivors had a significantly lower risk of developing a second invasive cancer in relation to the general population.\n[SUBTITLE] Relative risk of second primary cancers by age group at diagnosis [SUBSECTION] The risk of a second malignancy was higher in comparison to the general population for each of the three age groups for all first primary cancers combined, but tended to decrease as age at first diagnosis increased (Table 3) - 15-49 years (SIR = 1.84; 95% CI = 1.77-1.90), 50-64 years (SIR = 1.39; 95% CI = 1.36-1.42) and 65 years and older (SIR = 1.23; 95% CI = 1.20-1.25). Different patterns emerged within the various cancer-specific cohorts. An elevated relative risk across all three age groups was found following head and neck cancer, lung cancer, melanoma, female breast cancer, cervical cancer, kidney cancer, bladder cancer, non-Hodgkin lymphoma and lymphoid leukaemia. For oesophageal and colorectal cancer, significantly increased relative risks were only observed among those aged under 65 years at first diagnosis, while for pancreatic and brain cancers the risk was elevated in the younger age groups but was lower than expected for people aged 65 years and over. 
Consistently decreased relative risks were recorded within each age group for males with prostate cancer, although the SIR was not statistically significant for those aged 15-49 years.\nRelative risk of second primary cancer by type of first primary cancer and age group at first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. SIRs for pancreatic cancer for the age groups 15-49 years and 50-64 years have been combined due to less than 5 observed cases aged 15-49 years.\nThe risk of a second malignancy was higher in comparison to the general population for each of the three age groups for all first primary cancers combined, but tended to decrease as age at first diagnosis increased (Table 3) - 15-49 years (SIR = 1.84; 95% CI = 1.77-1.90), 50-64 years (SIR = 1.39; 95% CI = 1.36-1.42) and 65 years and older (SIR = 1.23; 95% CI = 1.20-1.25). Different patterns emerged within the various cancer-specific cohorts. An elevated relative risk across all three age groups was found following head and neck cancer, lung cancer, melanoma, female breast cancer, cervical cancer, kidney cancer, bladder cancer, non-Hodgkin lymphoma and lymphoid leukaemia. For oesophageal and colorectal cancer, significantly increased relative risks were only observed among those aged under 65 years at first diagnosis, while for pancreatic and brain cancers the risk was elevated in the younger age groups but was lower than expected for people aged 65 years and over. Consistently decreased relative risks were recorded within each age group for males with prostate cancer, although the SIR was not statistically significant for those aged 15-49 years.\nRelative risk of second primary cancer by type of first primary cancer and age group at first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. SIRs for pancreatic cancer for the age groups 15-49 years and 50-64 years have been combined due to less than 5 observed cases aged 15-49 years.\n[SUBTITLE] Relative risk of second primary cancers by time period of first diagnosis [SUBSECTION] There was some evidence that the more recently a first primary cancer was diagnosed the higher the relative risk of a second primary cancer, with a gradual increase across the four time periods for all cancers combined (Table 4): 1982-1986 (SIR = 1.14; 95% CI = 1.08-1.20), 1987-1991 (SIR = 1.22; 95% CI = 1.17-1.28), 1992-1996 (SIR = 1.36; 95% CI = 1.31-1.41) and 1997-2001 (SIR = 1.46; 95% CI = 1.41-1.50). In particular, the SIRs for people with colorectal cancer, lung cancer, breast cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia and myeloid leukaemia were only significant for the later time periods. Among men with prostate cancer, the risk of developing a second primary cancer was lower than the incidence of cancer experienced by males in the general population irrespective of the year of first diagnosis, except for the latest period (1997-2001). 
Several cancer-specific cohorts had SIRs that were significantly higher than expected in each time period, including head and neck cancer, melanoma, kidney cancer and bladder cancer.\nRelative risk of second primary cancer by type of first primary cancer and time period of first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. Analysis by period of first diagnosis was restricted to five years of follow-up to allow more consistent comparisons across time. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. Estimates are not shown for Brain & CNS for the periods 1982-1986 and 1992-1996 due to observed cell counts of less than 5.\nThere was some evidence that the more recently a first primary cancer was diagnosed the higher the relative risk of a second primary cancer, with a gradual increase across the four time periods for all cancers combined (Table 4): 1982-1986 (SIR = 1.14; 95% CI = 1.08-1.20), 1987-1991 (SIR = 1.22; 95% CI = 1.17-1.28), 1992-1996 (SIR = 1.36; 95% CI = 1.31-1.41) and 1997-2001 (SIR = 1.46; 95% CI = 1.41-1.50). In particular, the SIRs for people with colorectal cancer, lung cancer, breast cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia and myeloid leukaemia were only significant for the later time periods. Among men with prostate cancer, the risk of developing a second primary cancer was lower than the incidence of cancer experienced by males in the general population irrespective of the year of first diagnosis, except for the latest period (1997-2001). Several cancer-specific cohorts had SIRs that were significantly higher than expected in each time period, including head and neck cancer, melanoma, kidney cancer and bladder cancer.\nRelative risk of second primary cancer by type of first primary cancer and time period of first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. Analysis by period of first diagnosis was restricted to five years of follow-up to allow more consistent comparisons across time. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. Estimates are not shown for Brain & CNS for the periods 1982-1986 and 1992-1996 due to observed cell counts of less than 5.\n[SUBTITLE] Relative risk of second primary cancers by follow-up interval [SUBSECTION] The risk of a second primary cancer for all survivors combined remained consistently elevated compared to the general population during each follow-up interval (Table 5): 2 months to less than 1 year after first diagnosis (SIR = 1.32; 95% CI = 1.27-1.37), 1 year to less than 5 years (SIR = 1.33; 95% CI = 1.31-1.36), 5 years to less than 10 years (SIR = 1.39; 95% CI = 1.35-1.42) and 10 years or longer (SIR = 1.28; 95% CI = 1.24-1.31). A significantly increased relative risk across all follow-up intervals was also observed following a diagnosis of head and neck cancer, melanoma, kidney cancer, bladder cancer or lymphoid leukaemia, while the risks were significantly higher from 1 year or longer after diagnosis for colorectal cancer, lung cancer, female breast cancer or non-Hodgkin lymphoma. 
The relative risk of developing another cancer was significantly higher among cervical cancer survivors up to 10 years after the initial diagnosis but not after that. Prostate cancer patients were found to have subsequent risks that remained lower than the matching population, particularly 10 or more years after initial diagnosis.\nRelative risk of second primary cancer by type of first primary cancer and follow-up interval, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. SIRs for pancreatic cancer for 5 years to less than 10 years after first diagnosis and 10 years or longer after first diagnosis have been combined due to less than 5 observed cases 10 years or longer after first diagnosis.\nThe risk of a second primary cancer for all survivors combined remained consistently elevated compared to the general population during each follow-up interval (Table 5): 2 months to less than 1 year after first diagnosis (SIR = 1.32; 95% CI = 1.27-1.37), 1 year to less than 5 years (SIR = 1.33; 95% CI = 1.31-1.36), 5 years to less than 10 years (SIR = 1.39; 95% CI = 1.35-1.42) and 10 years or longer (SIR = 1.28; 95% CI = 1.24-1.31). A significantly increased relative risk across all follow-up intervals was also observed following a diagnosis of head and neck cancer, melanoma, kidney cancer, bladder cancer or lymphoid leukaemia, while the risks were significantly higher from 1 year or longer after diagnosis for colorectal cancer, lung cancer, female breast cancer or non-Hodgkin lymphoma. The relative risk of developing another cancer was significantly higher among cervical cancer survivors up to 10 years after the initial diagnosis but not after that. Prostate cancer patients were found to have subsequent risks that remained lower than the matching population, particularly 10 or more years after initial diagnosis.\nRelative risk of second primary cancer by type of first primary cancer and follow-up interval, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. SIRs for pancreatic cancer for 5 years to less than 10 years after first diagnosis and 10 years or longer after first diagnosis have been combined due to less than 5 observed cases 10 years or longer after first diagnosis.\n[SUBTITLE] Relative risk by type of first and second primary cancers and sex [SUBSECTION] The relative risks for specific second primary cancers varied substantially according to the type of first primary cancer (Figure 1 and 2). Within the melanoma cohort (Figure 1c and 1d), both males and females were over six times more likely to be diagnosed with another primary melanoma compared to the general population. They also had significantly increased relative risks for several other cancers, including thyroid cancer and lymphoid leukaemia (both males and females), brain cancer, non-Hodgkin lymphoma, prostate cancer and colorectal cancer (males only) and kidney cancer and breast cancer (females only). 
However, lung cancers occurred less often than expected among males with a first primary melanoma.\nRelative risk following all cancers combined, melanoma or colorectal cancer by type of second primary cancer and sex, Queensland, 1982-2006. CNS: central nervous system; PCT: plasma cell tumour.Vertical black line indicates SIR point estimate; grey shading indicates SIR 95% confidence interval. X-axes are shown on a log scale.\nRelative risk following prostate cancer, breast cancer, head and neck cancer or lung cancer by type of second primary cancer and sex, Queensland, 1982-2006. CNS: central nervous system; PCT: plasma cell tumour. Vertical black line indicates SIR point estimate; grey shading indicates SIR 95% confidence interval. The SIR for a second primary prostate cancer following a first primary prostate cancer was 0.3 and is not shown in Figure 2a. No cases of lymphoid leukaemia following head and neck cancer were recorded for females (Figure 2d). X-axes are shown on a log scale. Note different scale endpoints for x-axis used in Figures 2d, 2e and 2f.\nAn elevated relative risk of melanoma was also observed for both sexes in the colorectal cancer cohort (Figure 1e and 1f). In addition, male colorectal cancer survivors had higher risks for lymphoid leukaemia, oesophageal cancer and kidney cancer but a lower risk of developing a second primary colorectal cancer compared to the general population, while females with colorectal cancer experienced a subsequent increased risk of breast cancer.\nMen diagnosed with prostate cancer had significantly increased relative risks for thyroid cancer, melanoma, bladder cancer, kidney cancer, non-Hodgkin lymphoma and colorectal cancer but a significantly decreased risk of lung cancer (Figure 2a). Instances of primary prostate cancer occurring twice in the same person were rare. Female breast cancer survivors were more likely to be diagnosed with uterine cancer, myeloid leukaemia, stomach cancer, breast cancer, ovarian cancer, melanoma, kidney cancer or colorectal cancer than were the general population (Figure 2b).\nPeople initially diagnosed with head and neck cancer were found to have elevated relative risks for a second cancer of the head and neck, oesophageal cancer, lung cancer and non-Hodgkin lymphoma (Figure 2c and 2d). Males with head and neck cancer were also at increased risk of developing melanoma, colorectal cancer or bladder cancer. All lung cancer patients had a significantly increased risk of oesophageal and head and neck cancers compared to other residents of Queensland (Figure 2e and 2f), while males with lung cancer experienced high relative risks of being subsequently diagnosed with kidney, pancreas or bladder cancers and females with lung cancer had a significantly increased risk of developing a second primary lung cancer. Lymphoid leukaemia occurred less often among males following lung cancer than in the general population, although both the observed and expected number of cases were small.\nThe relative risks for specific second primary cancers varied substantially according to the type of first primary cancer (Figure 1 and 2). Within the melanoma cohort (Figure 1c and 1d), both males and females were over six times more likely to be diagnosed with another primary melanoma compared to the general population. 
They also had significantly increased relative risks for several other cancers, including thyroid cancer and lymphoid leukaemia (both males and females), brain cancer, non-Hodgkin lymphoma, prostate cancer and colorectal cancer (males only) and kidney cancer and breast cancer (females only). However, lung cancers occurred less often than expected among males with a first primary melanoma.\nRelative risk following all cancers combined, melanoma or colorectal cancer by type of second primary cancer and sex, Queensland, 1982-2006. CNS: central nervous system; PCT: plasma cell tumour.Vertical black line indicates SIR point estimate; grey shading indicates SIR 95% confidence interval. X-axes are shown on a log scale.\nRelative risk following prostate cancer, breast cancer, head and neck cancer or lung cancer by type of second primary cancer and sex, Queensland, 1982-2006. CNS: central nervous system; PCT: plasma cell tumour. Vertical black line indicates SIR point estimate; grey shading indicates SIR 95% confidence interval. The SIR for a second primary prostate cancer following a first primary prostate cancer was 0.3 and is not shown in Figure 2a. No cases of lymphoid leukaemia following head and neck cancer were recorded for females (Figure 2d). X-axes are shown on a log scale. Note different scale endpoints for x-axis used in Figures 2d, 2e and 2f.\nAn elevated relative risk of melanoma was also observed for both sexes in the colorectal cancer cohort (Figure 1e and 1f). In addition, male colorectal cancer survivors had higher risks for lymphoid leukaemia, oesophageal cancer and kidney cancer but a lower risk of developing a second primary colorectal cancer compared to the general population, while females with colorectal cancer experienced a subsequent increased risk of breast cancer.\nMen diagnosed with prostate cancer had significantly increased relative risks for thyroid cancer, melanoma, bladder cancer, kidney cancer, non-Hodgkin lymphoma and colorectal cancer but a significantly decreased risk of lung cancer (Figure 2a). Instances of primary prostate cancer occurring twice in the same person were rare. Female breast cancer survivors were more likely to be diagnosed with uterine cancer, myeloid leukaemia, stomach cancer, breast cancer, ovarian cancer, melanoma, kidney cancer or colorectal cancer than were the general population (Figure 2b).\nPeople initially diagnosed with head and neck cancer were found to have elevated relative risks for a second cancer of the head and neck, oesophageal cancer, lung cancer and non-Hodgkin lymphoma (Figure 2c and 2d). Males with head and neck cancer were also at increased risk of developing melanoma, colorectal cancer or bladder cancer. All lung cancer patients had a significantly increased risk of oesophageal and head and neck cancers compared to other residents of Queensland (Figure 2e and 2f), while males with lung cancer experienced high relative risks of being subsequently diagnosed with kidney, pancreas or bladder cancers and females with lung cancer had a significantly increased risk of developing a second primary lung cancer. 
Lymphoid leukaemia occurred less often among males following lung cancer than in the general population, although both the observed and expected number of cases were small.", "Compared to the incidence of cancer in the general Queensland population, both males (SIR = 1.22; 95% CI = 1.20-1.24) and females (SIR = 1.36; 95% CI = 1.33-1.39) in the study cohort exhibited an increased risk of developing a second primary cancer (Table 2). Significantly increased relative risks of invasive cancer were recorded among males following diagnosis of head and neck cancer, oesophageal cancer, lung cancer, melanoma, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. In contrast, males initially diagnosed with either prostate or stomach cancer subsequently experienced a significantly lower risk of cancer compared to the general population.\nRelative risk of second primary cancer by type of first primary cancer and sex, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; n.a. = not applicable; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk\nWithin the female cohort, the relative risk of a second cancer was higher for those diagnosed with head and neck cancer, colorectal cancer, lung cancer, melanoma, breast cancer, cervical cancer, uterine cancer, kidney cancer, bladder cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia or myeloid leukaemia. There were no types of cancer for which female survivors had a significantly lower risk of developing a second invasive cancer in relation to the general population.", "The risk of a second malignancy was higher in comparison to the general population for each of the three age groups for all first primary cancers combined, but tended to decrease as age at first diagnosis increased (Table 3) - 15-49 years (SIR = 1.84; 95% CI = 1.77-1.90), 50-64 years (SIR = 1.39; 95% CI = 1.36-1.42) and 65 years and older (SIR = 1.23; 95% CI = 1.20-1.25). Different patterns emerged within the various cancer-specific cohorts. An elevated relative risk across all three age groups was found following head and neck cancer, lung cancer, melanoma, female breast cancer, cervical cancer, kidney cancer, bladder cancer, non-Hodgkin lymphoma and lymphoid leukaemia. For oesophageal and colorectal cancer, significantly increased relative risks were only observed among those aged under 65 years at first diagnosis, while for pancreatic and brain cancers the risk was elevated in the younger age groups but was lower than expected for people aged 65 years and over. Consistently decreased relative risks were recorded within each age group for males with prostate cancer, although the SIR was not statistically significant for those aged 15-49 years.\nRelative risk of second primary cancer by type of first primary cancer and age group at first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. 
SIRs for pancreatic cancer for the age groups 15-49 years and 50-64 years have been combined due to less than 5 observed cases aged 15-49 years.", "There was some evidence that the more recently a first primary cancer was diagnosed the higher the relative risk of a second primary cancer, with a gradual increase across the four time periods for all cancers combined (Table 4): 1982-1986 (SIR = 1.14; 95% CI = 1.08-1.20), 1987-1991 (SIR = 1.22; 95% CI = 1.17-1.28), 1992-1996 (SIR = 1.36; 95% CI = 1.31-1.41) and 1997-2001 (SIR = 1.46; 95% CI = 1.41-1.50). In particular, the SIRs for people with colorectal cancer, lung cancer, breast cancer, thyroid cancer, non-Hodgkin lymphoma, lymphoid leukaemia and myeloid leukaemia were only significant for the later time periods. Among men with prostate cancer, the risk of developing a second primary cancer was lower than the incidence of cancer experienced by males in the general population irrespective of the year of first diagnosis, except for the latest period (1997-2001). Several cancer-specific cohorts had SIRs that were significantly higher than expected in each time period, including head and neck cancer, melanoma, kidney cancer and bladder cancer.\nRelative risk of second primary cancer by type of first primary cancer and time period of first diagnosis, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. Analysis by period of first diagnosis was restricted to five years of follow-up to allow more consistent comparisons across time. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. Estimates are not shown for Brain & CNS for the periods 1982-1986 and 1992-1996 due to observed cell counts of less than 5.", "The risk of a second primary cancer for all survivors combined remained consistently elevated compared to the general population during each follow-up interval (Table 5): 2 months to less than 1 year after first diagnosis (SIR = 1.32; 95% CI = 1.27-1.37), 1 year to less than 5 years (SIR = 1.33; 95% CI = 1.31-1.36), 5 years to less than 10 years (SIR = 1.39; 95% CI = 1.35-1.42) and 10 years or longer (SIR = 1.28; 95% CI = 1.24-1.31). A significantly increased relative risk across all follow-up intervals was also observed following a diagnosis of head and neck cancer, melanoma, kidney cancer, bladder cancer or lymphoid leukaemia, while the risks were significantly higher from 1 year or longer after diagnosis for colorectal cancer, lung cancer, female breast cancer or non-Hodgkin lymphoma. The relative risk of developing another cancer was significantly higher among cervical cancer survivors up to 10 years after the initial diagnosis but not after that. Prostate cancer patients were found to have subsequent risks that remained lower than the matching population, particularly 10 or more years after initial diagnosis.\nRelative risk of second primary cancer by type of first primary cancer and follow-up interval, Queensland, 1982-2006\nObs. = Observed number of second primary cancers; SIR = standardised incidence ratio; CI = confidence interval; CNS = central nervous system; PCT = plasma cell tumours. SIRs shown in normal bold font indicate significantly increased risk; SIRs shown in bold italics indicate significantly decreased risk. 
SIRs for pancreatic cancer for 5 years to less than 10 years after first diagnosis and 10 years or longer after first diagnosis have been combined due to less than 5 observed cases 10 years or longer after first diagnosis.", "The relative risks for specific second primary cancers varied substantially according to the type of first primary cancer (Figure 1 and 2). Within the melanoma cohort (Figure 1c and 1d), both males and females were over six times more likely to be diagnosed with another primary melanoma compared to the general population. They also had significantly increased relative risks for several other cancers, including thyroid cancer and lymphoid leukaemia (both males and females), brain cancer, non-Hodgkin lymphoma, prostate cancer and colorectal cancer (males only) and kidney cancer and breast cancer (females only). However, lung cancers occurred less often than expected among males with a first primary melanoma.\nRelative risk following all cancers combined, melanoma or colorectal cancer by type of second primary cancer and sex, Queensland, 1982-2006. CNS: central nervous system; PCT: plasma cell tumour.Vertical black line indicates SIR point estimate; grey shading indicates SIR 95% confidence interval. X-axes are shown on a log scale.\nRelative risk following prostate cancer, breast cancer, head and neck cancer or lung cancer by type of second primary cancer and sex, Queensland, 1982-2006. CNS: central nervous system; PCT: plasma cell tumour. Vertical black line indicates SIR point estimate; grey shading indicates SIR 95% confidence interval. The SIR for a second primary prostate cancer following a first primary prostate cancer was 0.3 and is not shown in Figure 2a. No cases of lymphoid leukaemia following head and neck cancer were recorded for females (Figure 2d). X-axes are shown on a log scale. Note different scale endpoints for x-axis used in Figures 2d, 2e and 2f.\nAn elevated relative risk of melanoma was also observed for both sexes in the colorectal cancer cohort (Figure 1e and 1f). In addition, male colorectal cancer survivors had higher risks for lymphoid leukaemia, oesophageal cancer and kidney cancer but a lower risk of developing a second primary colorectal cancer compared to the general population, while females with colorectal cancer experienced a subsequent increased risk of breast cancer.\nMen diagnosed with prostate cancer had significantly increased relative risks for thyroid cancer, melanoma, bladder cancer, kidney cancer, non-Hodgkin lymphoma and colorectal cancer but a significantly decreased risk of lung cancer (Figure 2a). Instances of primary prostate cancer occurring twice in the same person were rare. Female breast cancer survivors were more likely to be diagnosed with uterine cancer, myeloid leukaemia, stomach cancer, breast cancer, ovarian cancer, melanoma, kidney cancer or colorectal cancer than were the general population (Figure 2b).\nPeople initially diagnosed with head and neck cancer were found to have elevated relative risks for a second cancer of the head and neck, oesophageal cancer, lung cancer and non-Hodgkin lymphoma (Figure 2c and 2d). Males with head and neck cancer were also at increased risk of developing melanoma, colorectal cancer or bladder cancer. 
All lung cancer patients had a significantly increased risk of oesophageal and head and neck cancers compared to other residents of Queensland (Figure 2e and 2f), while males with lung cancer experienced high relative risks of being subsequently diagnosed with kidney, pancreas or bladder cancers and females with lung cancer had a significantly increased risk of developing a second primary lung cancer. Lymphoid leukaemia occurred less often among males following lung cancer than in the general population, although both the observed and expected number of cases were small.", "Cancer patients in our study cohort were at significantly higher risk of a second diagnosis compared to the underlying incidence rates experienced in the entire population of Queensland. Although there was some variation in the size of the estimated relative risks, a consistent pattern of increased risk following all first primary cancers combined was seen for both males and females, and across all age groups at first diagnosis, time periods of first diagnosis and follow up intervals.\nOur data offers further evidence of significant associations between particular types of first and second primary cancers that have been previously documented elsewhere in Australia and around the world [6,12-16]. Some examples include the mutual increased risks between head and neck, lung and oesophageal cancers and the relationship between melanoma and both prostate and female breast cancers.\nMuch of the elevated risk of being diagnosed with a second malignancy can be attributed to risk behaviours (such as smoking, harmful levels of alcohol consumption and poor diet), inherited susceptibilities and/or the medical treatment that cancer survivors have received [3,4,6]. On occasions, treatment can also have the opposite effect of reducing the risk of subsequent diagnosis. As was noted in another recent study of second primary cancer [13], the overall lower SIRs observed among prostate cancer survivors are mainly due to extremely low rates of reoccurrence as the organ is often completely removed as part of treatment. Given that prostate cancer is the most common cancer diagnosed for males in Queensland, this also helps to explain why the total relative risk is lower for male survivors compared to female survivors.\nAnother potential explanation for the difference in risk between cancer survivors and the general population is that some demographic factors may not be comparable between the two groups. Although the relative risks were based on calculations matched by sex, age group and time period, it is possible that other qualities, such as socioeconomic status, may vary between people who have been diagnosed with cancer and those who have not. This is likely to be the reason for at least part of the reduced risk of lung cancer (which has higher incidence among lower socioeconomic groups) for males following melanoma (which is more common in segments of the community with higher socioeconomic status) [6,17]. A similar explanation could be used for the deficit of lung cancer after a diagnosis of prostate cancer. In contrast, the high reciprocal relative risks of melanoma with both female breast cancer and prostate cancer may correlate with the higher incidence for each of these malignancies among more affluent populations [18-20].\nOne of the interesting relationships that emerged from the analysis was the variation in relative risk of second primary cancer by age at first diagnosis following cancer of the brain and central nervous system. 
Younger survivors (15-49 years old) had an increased risk of being diagnosed with a second primary cancer while survivors aged 65 years or over had a decreased risk. This is most likely due to the various histopathological subtypes of brain tumours which are more common in the different age groups. For example, glioblastoma tends to be diagnosed at an older age and is associated with poor survival, allowing a limited time for treatment-related second primary cancers to appear [6].\nThe reasons behind an increased relative risk of second primary cancer following certain types of cancer among survivors who were diagnosed more recently are unknown. Tsukuma et al. [16] described a similar pattern in Japan, and suggested that apparent increases in risk may be due to improved follow-up and surveillance of cancer patients. Another possible cause is the change in treatment modalities over time [4,21].\nSecond primary cancers that arise due to the effects of treatment for the initial cancer might be expected to occur many years after the first diagnosis. However, the increase in relative risk remained fairly consistent irrespective of time since diagnosis, in accordance with results published elsewhere in Australia [13] and the United States [6].\nThe main strengths of this study include the extensive population-based coverage achieved by the QCR for the reporting of cancers among Queensland residents, combined with a high level of histological verification (88% in 2006) [8], which is important for distinguishing between new primary cancers and metastases of an existing cancer. Since all data used in this study have been collected prospectively for administrative purposes, and coded independently of the hypotheses, the opportunity for recall or information bias has been removed.\nIncreased medical surveillance of newly-diagnosed cancer patients may introduce a detection bias for second primary cancers. The likelihood of this happening has been reduced by only considering metachronous primary cancers, with a two-month window between first and second diagnosis. It is also possible that some second primary cancers were incorrectly classified as first primary cancers, especially for cancers diagnosed soon after the establishment of the QCR in the early 1980s; we are unable to quantify the impact of this on the observed results.\nWhile acknowledging that the study cohort and comparison population were not independent, with the population containing people already diagnosed with cancer, this proportion was less than 0.5% in any given year. Finally, as a result of the large number of comparisons made, the possibility that some of the SIR estimates have been spuriously identified as statistically significant needs to be considered, particularly those based on small numbers of primary or secondary cancers, and these results should therefore be interpreted with due caution.", "Our results demonstrate that cancer survivors in Queensland, Australia, like those in other countries, are confronted by a very real, ongoing risk of developing a second primary cancer that is significantly higher than the incidence of cancer experienced by the general population. Some first and second primary cancers share common aetiologies, making it imperative that cancer patients adopt a healthier lifestyle in order to lessen their chances of subsequent diagnoses [4,22]. 
Recent studies have reported little difference in the health behaviours of cancer survivors compared to the wider community [23-25], suggesting that health promotion efforts may need to be specifically targeted at people diagnosed with cancer due to their subsequent increased risk. Further work is needed to determine exactly how beneficial changes to lifestyle may be in regard to the risk of developing a second primary cancer [26]. Even so, some second primary cancers will be unavoidable, and as the number of cancer survivors continues to grow, the importance of ongoing medical supervision and screening to detect second primary cancers at an earlier stage and thereby improve the effectiveness of treatment will remain critical.", "The authors declare that they have no competing interests.", "DY conducted the statistical analysis and drafted the manuscript. PB conceived the study and edited the draft manuscript. Both authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/83/prepub\n" ]
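As a rough, self-contained illustration of how the standardized incidence ratios (SIRs) and 95% confidence intervals reported in the preceding sections are typically obtained — observed second cancers divided by the number expected from sex-, age- and period-specific population rates, with an exact Poisson interval — the following Python sketch uses invented stratum rates and counts rather than Queensland Cancer Registry data.

```python
# Hypothetical illustration of an SIR with an exact Poisson 95% CI.
# The stratum rates, person-years and observed count are invented numbers,
# not values taken from the Queensland Cancer Registry.
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    """Standardized incidence ratio O/E with an exact Poisson confidence interval."""
    sir = observed / expected
    lower = chi2.ppf(alpha / 2, 2 * observed) / (2 * expected) if observed > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / (2 * expected)
    return sir, lower, upper

# Expected cases = sum over sex/age/period strata of (population rate x person-years).
strata = [  # (rate per 100,000 person-years, person-years at risk) -- invented
    (12.0, 40_000),
    (55.0, 25_000),
    (130.0, 10_000),
]
expected = sum(rate / 1e5 * pyrs for rate, pyrs in strata)
observed = 41
print(sir_with_ci(observed, expected))  # SIR of about 1.3 with its 95% CI
```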
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null ]
[]
Integrative analysis of next generation sequencing for small non-coding RNAs and transcriptional regulation in Myelodysplastic Syndromes.
21342535
Myelodysplastic Syndromes (MDS) are pre-leukemic disorders with increasing incidence rates worldwide, but very limited treatment options. Little is known about small regulatory RNAs and how they contribute to pathogenesis, progression and transcriptome changes in MDS.
BACKGROUND
Patients' primary marrow cells were screened for short RNAs (RNA-seq) using next generation sequencing. Exon arrays from the same cells were used to profile gene expression, and additional measures were obtained on 98 patients. Integrative bioinformatics algorithms were proposed, and pathway and ontology analyses were performed.
METHODS
In low-grade MDS, observations implied extensive post-transcriptional regulation via microRNAs (miRNA) and the recently discovered Piwi-interacting RNAs (piRNA). Large expression differences were found for MDS-associated and novel miRNAs, including 48 sequences matching miRNA star (miRNA*) motifs. The detected species were predicted to regulate disease stage-specific molecular functions and pathways, including apoptosis and response to DNA damage. In high-grade MDS, results suggested extensive regulation at the translational level via transfer RNAs (tRNAs), providing a potential link to reduced apoptosis, a hallmark of this disease stage. Bioinformatics analysis confirmed important regulatory roles for MDS-linked miRNAs and TFs, and strengthened the biological significance of miRNA*. The "RNA polymerase II promoters" were identified as the most tightly controlled biological function. We suggest their control by a miRNA-dominated feedback loop, which might be linked to the dramatically different miRNA amounts seen between low- and high-grade MDS.
RESULTS
The presented results provide novel findings that build a basis for further investigation of diagnostic biomarkers, targeted therapies and MDS pathogenesis.
DISCUSSION
[ "Computational Biology", "Exons", "Gene Expression Profiling", "Gene Expression Regulation", "Humans", "Myelodysplastic Syndromes", "Nucleic Acid Conformation", "RNA Polymerase II", "RNA, Small Untranslated", "Sequence Analysis, RNA", "Transcription, Genetic" ]
3060843
null
null
Methods
[SUBTITLE] Patient samples [SUBSECTION] Samples were obtained from patients presenting at The Methodist Hospital. The use of marrow samples was approved by The Methodist Hospital Institutional Review Board. All research described conformed to the Helsinki Declaration. [SUBTITLE] High throughput small RNA sequencing and data analysis [SUBSECTION] RNA in the 18-30 bp range was isolated from a 15 percent urea-PAGE gel, and ligated to Solexa SRA5' and SRA 3' adapters, according to the standard protocol (available: http://www.illumina.com). Briefly, the SRA5' adapter was ligated to the 5' end of the selected RNAs. The ligation products were gel purified and SRA3' adapters ligated to their 3' ends. The resulting products were also gel purified, reverse transcribed and amplified with primers containing sequences complementary to the SRA5' and SRA3' adapters, after which they were gel purified again. The size and quality of the resulting libraries were verified using an Agilent DNA1000 Bioanalyzer chip (Agilent) and sequenced on a Solexa GAIIx, using PhiX as a loading control and analyzed with the standard Illumina Pipeline version 1.4. This produced approximately 13 million reads per lane. In our analysis we used the s_x_sequence.txt files, containing 64 bit quality-scored output per lane. The first 20 bases of these reads were parsed into MySQL database tables, and further analyses utilized the MySQL database engine. At this stage, the database was employed to identify and count distinct reads and to export this information into fasta formatted output files (Additional files 1, 2, 3). The results were used to map each small RNA to its matching position in the human genome. A variety of algorithms exist to perform this task, including ELAND, which is provided with the Solexa GAIIx. However, a particularly fast and memory-efficient algorithm that outperforms other approaches is Bowtie [19]. This algorithm allows filtering alignments based on mismatches and can omit reads matched to multiple positions on the reference. The human genome version GRCh37 was downloaded from the NCBI website and converted into a Bowtie index file. All distinct reads were aligned to this reference sequence. We allowed for at most two mismatches and only considered reads that aligned to at most 25 positions in the genome (parameter setting v = 2 and m = 25). With this parameter set, on average, 70 percent of the short sequence reads from all three lanes had positive matches to genome coordinates, about 21 percent did not match any genome position and about 10 percent had more than 25 matches. A number of different databases were used as the annotation basis for the aligned next generation sequencing reads. Information on sequences and genome positions of miRNAs was obtained from miRBase version 14. However, since our sample preparation and sequencing protocol is not specific for miRNAs, we downloaded information on other small RNAs from the UCSC genome browser. This contains genome positions for different small RNAs, including but not limited to tRNAs, rRNAs, scRNAs, suRNAs and srpRNA in the RepeatMasker track, as well as positions of known exons. The sequences of known human piRNAs were searched and downloaded from the NCBI http://www.ncbi.nlm.nih.gov.
The implemented annotation algorithm first checked if a read falls into a known miRNA locus (compare Figure 1). Unmatched reads were further aligned to primary miRNA sequences and perfect matches registered. If no match was identified, known loci for other small RNAs were searched in the following order: rRNA, scRNA, sRNA, srpRNA, simple repeat and other RNAs. If a read was still uncharacterized, it was aligned against all piRNA sequences and matches returned for perfect alignments. Finally, if none of the above criteria was satisfied, positions for all human exons were checked; if no match was identified, reads were classified as unknown. The number of sequenced reads that annotated with a known RNA locus was used to represent its expression. NGS data analysis pipeline and comparison of sRNA annotations in MDS. NGS data analysis pipeline used for this study. In A) we show the annotation of a sequence read. It was detected about 18000 times in RAEB2 and aligned at nine different positions, spread over six chromosomes, on the human genome (green). A single alignment position is shown (red) with the used annotation hierarchy (blue). The purple callbox details the matched loci for miRNA let-7a-1, its full primary sequence (top), its mature sequence (middle) and the aligned short read (bottom). The brown callbox shows all nine annotations, including a number of miRNAs from the hsa-let-7 family as well as a piRNA. In B) we compare the total RNA content measured from our high-throughput sequencing and annotation steps: on the left results for RAEB2, in the middle results for RA and on the right results for control. The read counts for miRNA and miRNA* were compared for the RA, RAEB2 and control samples, and significant differential expression was defined following the example in [20]. We required that the ratio R of read counts in two different cells satisfied R1: R > 1.5 or R2: R < 0.67, and that the read count difference D satisfied D1: D > 100 or D2: D < -100. Consequently, over-expression was defined by R1 and D1 and under-expression by R2 and D2.
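The read-count comparison rule just described (ratio above 1.5 or below 0.67 combined with an absolute difference of more than 100 reads) is straightforward to express in code. The sketch below is an illustration only; the miRNA names and counts are hypothetical, not values from the sequenced samples.

```python
# Sketch of the read-count comparison rule described above; the thresholds
# follow the text (ratio > 1.5 or < 0.67, absolute difference > 100 reads),
# while the miRNA names and counts are hypothetical.
def classify(count_a, count_b, ratio_up=1.5, ratio_down=0.67, min_diff=100):
    """Return 'over', 'under' or 'unchanged' for sample A relative to sample B."""
    ratio = count_a / count_b if count_b > 0 else float("inf")
    diff = count_a - count_b
    if ratio > ratio_up and diff > min_diff:
        return "over"
    if ratio < ratio_down and diff < -min_diff:
        return "under"
    return "unchanged"

# Example: mature miRNA read counts in RA versus control (invented values)
counts = {"hsa-let-7a": (18500, 9200), "hsa-miR-21": (400, 520), "hsa-miR-150": (90, 610)}
for mirna, (ra, ctrl) in counts.items():
    print(mirna, classify(ra, ctrl))
```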
[SUBTITLE] Exon array profiling and data analysis [SUBSECTION] A total of 50 ng RNA was extracted from each analyzed sample. We used primers provided by NuGEN and followed the manufacturer's protocol for the first strand cDNA synthesis. For RNA primer annealing, their mixtures were incubated for 2 minutes at 65°C and cooled to 4°C. After cooling, the cDNA synthesis cycle followed: 4°C for 1 minute, 25°C for 10 minutes, 42°C for 10 minutes, 70°C for 15 minutes, and again 4°C for 1 minute. The second strand reaction followed immediately. After mixing the first strand solution with the second strand cDNA synthesis reaction solution, the entire mixture was incubated in the thermocycler as follows: 4°C for 1 minute, 25°C for 10 minutes, 50°C for 30 minutes, 70°C for 5 minutes, 4°C. Then, using the Agencourt® RNAClean® beads, the entire cDNA was purified according to the manufacturer's protocol. For the sense transcript cDNA generation, the WT-Ovation™ Exon Module (NuGEN) was used. Based on the instructions in the manufacturer's manual, 3 μg of each cDNA was mixed with the provided primers and incubated for 5 minutes at 95°C and cooled to 4°C. After mixing with enzyme solution, the entire reaction mixture was incubated as follows: 1 minute at 4°C, 10 minutes at 30°C, 60 minutes at 42°C, 10 minutes at 75°C, and cooled to 4°C. Then the ST-cDNA was purified with the QIAGEN DNA clearing kit. After the purification, the fragmentation reaction was carried out using the FL-Ovation™ cDNA Biotin Module V.2 according to the recommended methods. Briefly, 5 μg of cDNA was mixed with the provided enzyme mix and incubated 30 minutes at 37°C and 2 minutes at 95°C. Then the reaction was cooled to 4°C. Next, the reaction was subjected to the labeling reaction as suggested by the manufacturer. The fragmented cDNA was mixed with labeling reaction mix and incubated at 37°C for 60 minutes and 70°C for 10 minutes. Then, the reaction was cooled to 4°C and used immediately for array hybridization. For the array hybridization, instead of the protocol recommended by Affymetrix, we used the standard array protocol provided by the NuGEN exon module. For hybridization, chips were incubated in a GeneChip Hybridization Oven 640 and underwent the washing and staining processes according to the FS450_0001 fluidic protocol. Then, the array was scanned using a GeneChip Scanner 3000 (GCS3000). The exon arrays for control, RA and RAEB2 were loaded into the Partek Genomics Suite 6.5. The Robust Multi-array Analysis (RMA) algorithm was used for initial intensity analysis [21] (Additional file 4). We generated gene expression estimates by averaging the intensities of all exons in a gene. Differential expression was defined as discussed for the NGS analysis above. [SUBTITLE] Integrated target genes for MDS [SUBSECTION] In an earlier study Pellagatti and colleagues [22] used an Affymetrix Human Genome U133 Plus 2.0 GeneChip to assay consistently differentially expressed genes in hematopoietic stem cells (HSC) of 183 patients compared to 17 HSC of normal controls. This identified 534 probesets for RA and 4670 for RAEB2 patients. We matched these probesets to gene symbols and identified their corresponding transcript IDs on the Exon GeneChip. For the RA gene list, 69 probesets did not have annotated gene symbols, 103 had no corresponding transcripts, and for 431 matching IDs were found. For the RAEB2 gene list, 807 probesets had no annotation, 1009 had no matching transcripts, and for 3661 matching IDs were found. Altogether, this created a target gene space of 4092 probesets that were further analyzed by our bioinformatics modeling approach.
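The gene-level summarization used for the exon arrays above — averaging the RMA-normalized intensities of all exons belonging to a gene — can be illustrated with a short pandas sketch. The table layout, gene names and intensity values below are hypothetical and do not reflect the actual Partek Genomics Suite export.

```python
# Sketch of the gene-level summarization step: averaging RMA-normalized exon
# probeset intensities within each gene. The column layout, gene names and
# values are hypothetical, not the actual Partek export.
import pandas as pd

exon_intensities = pd.DataFrame({
    "gene":    ["TP53", "TP53", "TP53", "MYC", "MYC"],
    "exon_id": ["e1", "e2", "e3", "e1", "e2"],
    "control": [7.8, 8.1, 7.6, 9.0, 9.2],
    "RA":      [8.4, 8.6, 8.2, 9.1, 9.3],
    "RAEB2":   [7.1, 7.3, 7.0, 10.0, 10.2],
})

# One expression estimate per gene and sample (log2 scale after RMA)
gene_expression = exon_intensities.groupby("gene")[["control", "RA", "RAEB2"]].mean()
print(gene_expression.round(2))
```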
[SUBTITLE] Secondary structure and location of novel miRNA* sequences [SUBSECTION] The secondary structures for all miRNAs with stem-loop sequences deposited in miRBase were calculated using the Matlab Bioinformatics toolbox (version R2009a). The locations of mature miRNAs were identified as perfect alignments between the stem-loop and mature miRNA sequence. We calculated the locations of novel miRNA* sequences based on the genome coordinates of aligned small RNA reads. We note that due to mismatches in the miRBase alignments, e.g. between the miRNA stem-loop and the human genome, some deviations between the small RNA sequencing reads and the deposited stem-loop sequences may exist. All information was visualized using the tool VARNA [23]. [SUBTITLE] Prediction of miRNA-mRNA and miRNA*-mRNA pairs [SUBSECTION] Information on miRNA target genes was obtained from two popular and publicly available miRNA target prediction databases. We retrieved flat files for all predicted human miRNA targets available in miRanda [24] and targets conserved over different mammalian species from targetscan [25]. In order to reduce the number of false positive predictions we considered only targets predicted by both algorithms, which resulted in about 110,000 miRNA-mRNA pairs. In theory the majority of miRNA* are degraded in the cell. Therefore, we restricted our analysis to sequences with minimum read counts of 100. In each case, we defined a 7-mer nucleotide seed sequence based on the small RNA read with the highest copy number throughout the control, low and high risk MDS samples. The nucleotides at positions two to eight were extracted and transformed into the RNA alphabet. The seed regions were checked for overlap with other known miRNA and miRNA* sequences, and the targetscanS algorithm was used to predict miRNA*-mRNA pairs if the seed sequence was previously unreported. In general, this algorithm performs target predictions based on perfect and conserved matches between the gene's untranslated region (UTR) and the first six nucleotides of the seed sequence. It further requires that the seed region is followed either by the nucleotide A (known as a t1A anchor) or that position eight of the alignment contains a perfect Watson-Crick pairing. In contrast, if the seed sequence matched a previously reported miRNA or miRNA*, we used the target prediction strategy as reported above. [SUBTITLE] Prediction of transcription factor target genes [SUBSECTION] The flat files FACTOR and GENE of the commercially available database TRANSFAC v2008_2 [26] were downloaded and parsed into a MySQL database. The FACTOR and GENE flat files contain information on transcription factor proteins and genes regulated by transcription factors, respectively. A total of 2362 regulating factors for the human species (Homo sapiens) were extracted, and 70 entries that did not describe proteins but other regulatory factors were omitted. A large fraction (about 77 percent) of the remaining 2292 transcription factor proteins were mapped to Uniprot [27], either by external database IDs or exact matches between protein names. With these accessions the protein coding gene IDs, as well as other information, were downloaded automatically via a MATLAB based data retrieval algorithm implemented for this study. The transcript and probeset annotation files for the Affymetrix GeneChip Human Exon 1.0 ST Array were downloaded from the manufacturer's website http://www.affymetrix.com and parsed into MySQL tables. Transcript IDs for 98 percent of the human transcription factor coding genes were extracted based on direct matches between gene names. Genes that can potentially be up-regulated when the transcription factor protein binds to a specific site in its promoter region are called transcription factor target genes. We extracted all target genes for human transcription factor proteins by joining a number of database tables. This revealed 3296 gene targets for the 2292 transcription factor proteins. We used direct matches between the target gene names, as well as additional entries, to identify corresponding transcripts on the Affymetrix GeneChip. This resulted in matches for 83 percent of the target genes. [SUBTITLE] Functional analysis for miRNA and miRNA* targets [SUBSECTION] The functional analysis of miRNA and miRNA* was performed by means of their predicted target genes. However, since the pools of potential target genes are large and suffer from high false positive rates, we selected only a limited set of genes for functional analysis. Therefore, we defined a threshold T describing the number of different miRNA or miRNA* that regulate a gene. Similar to many biological phenomena, such functions are described by power laws (see Figure 2) and we aimed to select T in the exponential part of the function. This ensured that the selected genes were targeted by a large number of different miRNAs. We further tried to select at most 100 genes for the analysis. In each case, the selected target genes were imported into Ingenuity Pathway Analysis (IPA) version 8.5 and analyzed using the IPA Core Analysis algorithm. Threshold for miRNA/miRNA* target gene selection. This figure describes the number of genes (x-axis) that are targeted by different miRNA*s (y-axis), for the example of RA cells. In this particular case, we selected the threshold T to be 13 miRNAs and 93 different genes were selected for functional analysis.
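The threshold selection described above — choosing T in the tail of the power-law-like distribution of targeting miRNAs per gene so that at most about 100 genes are retained — might be implemented along the following lines. The per-gene target counts are simulated here; the real analysis used the predicted miRNA/miRNA*-mRNA pairs.

```python
# Sketch of selecting the threshold T: the smallest number of distinct
# targeting miRNA/miRNA* per gene such that at most ~100 genes are retained.
# The per-gene target counts are simulated from a heavy-tailed distribution.
import numpy as np

rng = np.random.default_rng(0)
targets_per_gene = rng.zipf(1.6, size=5000)   # hypothetical power-law-like counts

def pick_threshold(targets, max_genes=100):
    """Return the smallest T with no more than max_genes genes targeted by >= T miRNAs."""
    for t in np.sort(np.unique(targets)):
        if np.sum(targets >= t) <= max_genes:
            return int(t)
    return int(targets.max())

T = pick_threshold(targets_per_gene)
print("T =", T, "genes selected =", int(np.sum(targets_per_gene >= T)))
```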
[SUBTITLE] Data integration model and detection of important gene regulators [SUBSECTION] The proposed data integration model assumed that the mRNA amount present in a cell at any given time is linearly dependent on the concentration of transcriptionally acting TFs and post-transcriptionally acting miRNAs. Therefore, gene expression was modeled as a linear combination of these factors plus random noise, which can be expressed following a standard regression model [28]: (1) y_i = β_0 + Σ_{p=1}^{N} β_p x_{pi} + ε, where y_i is the expression of gene i, i = 1,..., G, with G being the number of genes under study, (β_0,..., β_N) are the regression coefficients to be estimated by our model, N sums up the number of TFs and miRNAs observed in the cells under study, and ε is the noise term, assumed to be an independent Gaussian random variable with expectation zero and variance σ². The predictor x_{pi} was defined as (2) x_{pi} = α_{pi} γ_p δ_p, where α_{pi} is a factor associating gene i with regulator p, γ_p is a regulation characteristic and δ_p is the expression level of regulator p. The association α_{pi} was determined by miRNA and TF target prediction: α_{pi} was set to one if gene i was a target of regulator p, otherwise α_{pi} was set to zero. Transcription factors generally contribute to transcription and hence higher target gene levels; therefore, γ_p was set to one if p was a TF. In contrast, miRNAs are known to post-transcriptionally degrade mRNAs, hence γ_p was set to minus one if p was a miRNA. The expression levels δ_p were determined by experiments as discussed earlier. Note that all expression values were normalized to controls and standardized to mean zero and standard deviation one. The above regression problem was solved using the recently proposed cyclical coordinate descent algorithm, which is based on an elastic net penalty [29]. This algorithm is particularly fast and the elastic net penalty is most appropriate to handle large and sparse problems (compare Additional file 5 Figure S1) of correlated inputs. In addition, it has the beneficial property of shrinking a number of predictor coefficients β_p to exactly zero, hence integrating an effective variable selection approach that would otherwise be computationally expensive [30]. Note that the penalty is weighted and that these weights were determined by cross validation.
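A minimal sketch of fitting model (1)-(2) is given below, using scikit-learn's coordinate-descent ElasticNetCV as a stand-in for the solver cited in [29]; the design matrix, regulator labels and expression values are all simulated, so this illustrates the structure of the computation rather than reproducing the authors' implementation.

```python
# Minimal sketch of model (1)-(2): gene expression regressed on regulator
# predictors x_pi = a_pi * gamma_p * delta_p, fitted with an elastic net.
# scikit-learn's ElasticNetCV stands in for the cited solver [29]; all data
# below are simulated, not the MDS measurements.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(1)
G, n_tf, n_mirna = 500, 40, 60                    # genes, TFs, miRNAs (made-up sizes)
N = n_tf + n_mirna

a = rng.integers(0, 2, size=(G, N))               # a_pi: 1 if gene i is a predicted target of p
gamma = np.r_[np.ones(n_tf), -np.ones(n_mirna)]   # +1 for TFs, -1 for miRNAs
delta = rng.standard_normal(N)                    # standardized regulator expression
X = a * (gamma * delta)                           # x_pi = a_pi * gamma_p * delta_p

beta_true = np.zeros(N)
beta_true[rng.choice(N, 15, replace=False)] = rng.standard_normal(15)
y = X @ beta_true + 0.1 * rng.standard_normal(G)  # expression = X*beta + noise

model = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5).fit(X, y)
print("non-zero coefficients:", int(np.sum(model.coef_ != 0)))
```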
null
null
null
null
[ "Background", "Patient samples", "High throughput small RNA sequencing and data analysis", "Exon array profiling and data analysis", "Integrated target genes for MDS", "Secondary structure and location of novel miRNA* sequences", "Prediction of miRNA-mRNA and miRNA*-mRNA pairs", "Prediction of transcription factor target genes", "Functional analysis for miRNA and miRNA* targets", "Data integration model and detection of important gene regulators", "Results and Discussion", "Defining the small RNAome of Myelodysplastic Syndromes by next generation sequencing", "Detailed characterization of expressed miRNA loci and identification of novel miRNA*", "Functional roles of miRNA and miRNA* in Myelodysplastic Syndromes", "Computational modeling of transcriptome regulation in Myelodysplastic Syndromes", "Key functions regulated by miRNAs and TFs in Myelodysplastic Syndromes", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Myelodysplastic Syndromes (MDS) are a group of heterogeneous hematopoietic stem cell disorders, which often lead to acute myeloid leukemia (AML). This group of diseases is most common in the growing demographic of the late sixties-early seventies [1]. In the United States the estimated number of new cases per year is about 40,000-76,000 with an attached cost of about 30.000 USD per person and year.\nMDS is characterized by ineffective bone marrow hematopoiesis, leading to cytopenias [2], with a highly variable disease progression that ranges from a slow development over many years to a rapid progression to AML within a few months. Patients can be classified into risk groups, primarily based on bone marrow myeloblast counts [3,4]. These include refractory anemia (RA), describing an early disease stage (low-grade MDS) and the refractory anemias with excess of blasts (RAEB1, RAEB2), which represent the later stages of the disease (high-grade MDS). While the median survival times are relatively long in the low and intermediate-1 classes, 97 and 63 months respectively, they are considerably shorter in the later classes with 26 for the intermediate-2 and only 11 months in the high risk group [5]. Current treatment options are rare and show only limited success. They mainly include allogeneic stem cell transplantation, treatment with hypomethylating agents and Lenalidomide.\nThere is increasing evidence that dysregulation of a number of different molecular pathways is involved from the disease onset, however, clearly defined mechanisms remain elusive [6]. The accumulation of cellular death is a common trait for the early stage of MDS [7,8]. It is thought to counteract the proliferation of dysfunctional cells and is the key characteristic of ineffective hematopoiesis and marrow failure [9,10]. With the continued expansion of diseased cells, genetic damage accumulates and contributes to disease progression, which may result in the transformation to AML. The later stages of MDS have been implicated with angiogenesis and reduced apoptosis [11-15].\nRecent studies have suggested that small non-coding RNAs (sRNAs), in particular microRNAs (miRNAs), contribute to the pathogenesis and progression of MDS [16,17]. However, very limited information on sRNA expression has been reported for MDS to date. To overcome this bottleneck, we performed high-throughput next generation sequencing of small RNAs (RNA-seq) in primary marrow cells of low- and high-grade MDS patients, together with matched controls. The relatively new technology of RNA-seq [18] is the method of choice for sensitive global detection of different sRNAs across an unparalleled dynamic range, and we detected sRNAs with read counts from ten to one million reads. The data obtained here suggest important roles for Piwi-interacting RNAs (piRNA), transfer RNAs (tRNA) and miRNAs, including many known and novel microRNAs star (miRNA*). Further functional analysis of miRNA/miRNA* showed that these species regulate disease stage-specific molecular functions and pathways, in particular, those known to be deregulated at the gene expression level. In addition, integrative bioinformatics modeling of our experimental data and bioinformatics databases identified the disease stage-specific regulation of the polymerase II promoter by miRNAs and transcription factors (TFs). This suggested a feedback loop that might contribute to the attenuation of miRNA expression in high-grade MDS.", "Samples were obtained from patients presenting at The Methodist Hospital. 
The use of marrow samples was approved by The Methodist Hospital Institutional Review Board. All research described conformed to the Helsinki Declaration.", "RNA in the 18-30 bp range was isolated from a 15 percent urea-PAGE gel, and ligated to Solexa SRA5' and SRA 3' adapters, according to the standard protocol (available: http://www.illumina.com). Briefly, the SRA5' adapter was ligated to the 5' end of the selected RNAs. The ligation products were gel purified and SRA3' adapters ligated to their 3' ends. The resulting products were also gel purified, reverse transcribed and amplified with primers containing sequences complementary to the SRA5' and SRA3' adapters, after which they were gel purified again. The size and quality of the resulting libraries were verified using an Agilent DNA1000 Bioanalyzer chip (Agilent) and sequenced on a Solexa GAIIx, using PhiX as a loading control and analyzed with the standard Illumina Pipeline version 1.4. This produced approximately 13 million reads per lane.\nIn our analysis we used the s_x_sequence.txt files, containing 64 bit quality-scored output per-lane. The first 20bases of these reads were parsed in Mysql database tables, and further analyses utilized the MySQL database engine.\nAt this stage, the database was employed to identify and count distinct reads and to export this information into fasta formatted output files (Additional files 1, 2, 3). The results were used to map each small RNA to its matching position in the human genome. A variety of algorithms exists to perform this task including ELAND, which is provided with the Solexa GAIIx. However, a particular fast and memory efficient algorithm that outperforms other approaches is Bowtie [19]. This algorithm allows filtering alignments based on mismatches and can omit reads matched to multiple positions on the reference. The human genome version GRCh37 was downloaded from the NCBI website and converted into a bowtie index file. All distinct reads were aligned to this reference sequence. We allowed for at most two mismatches and only considered reads that aligned to at most 25 positions in the genome (parameter setting v = 2 and m = 25). With this parameter set, on average, 70 percent of the short sequence reads from all three lanes had positive matches to genome coordinates, about 21 percent did not match any genome position and about 10 percent had more than 25 matches.\nA number of different databases were used as annotation basis for the aligned next generation sequencing reads. Information on sequences and genome positions of miRNAs were obtained from miRBase version 14. However, since our sample preparation and sequencing protocol is not specific for miRNAs, we downloaded information on other small RNAs from the UCSC genome browser. This contains genome positions for different small RNAs, including but not limited to tRNAs, rRNAs, scRNAs, suRNAs and srpRNA in the repeatmasker track, as well as positions of known exons. The sequences of known human piRNAs were searched and downloaded from the NCBI http://www.ncbi.nlm.nih.gov.\nThe implemented annotation algorithm first checked if a read falls into a known miRNA loci (compare Figure 1). Unmatched reads were further aligned to primary miRNA sequences and perfect matches registered. If no match was identified, known loci for other small RNAs were searched in the following order rRNA, scRNA, sRNA, srpRNA, simple repeat and other RNAs. 
If a read was still uncharacterized, it was aligned against all piRNA sequences and matches returned for perfect alignments. Finally, if none of the above criteria was satisfied, positions for all human exons were first checked, if no match was identified reads were classified as unknown. The number of sequenced reads that annotated with a known RNA locus were used to represent its expression.\nNGS data analysis pipeline and comparison of sRNA annotations in MDS. NGS data analysis pipeline used for this study. In A) we show the annotation of a sequence read. It was detected about 18000 times in RAEB2 and aligned at nine different positions, spread over six chromosomes, on the human genome (green). A single alignment position is shown (red) with the used annotation hierarchy (blue). The purple callbox, details the matched loci for miRNA let-7a-1, its full primary sequence (top), its mature sequence (middle) and the aligned short read (bottom). The brown callbox shows all nine annotations, including a number of miRNAs from the has-let-7 family as well as a piRNA. In B) we compare the total RNA content measured from our high-throughput sequencing and annotation steps, on the left results for the RAEB2, in the middle results for RA and on the right results for control.\nThe read counts for miRNA and miRNA* were compared for the RA, RAEB2 and controls and significant differential expression defined following the example in [20]. We required that the ratio R of read counts in two different cells was within R1 > 1.5 ∨ R2 < 0.67 and the read count difference D within D1 > 100 ∨ D2 < -100. Consequently, over expression was defined byR1 and D1 and under expression byR2 and D2.", "A total of 50ng RNA was extracted from each analyzed sample. We used primer provided from NuGEN and followed the manufacturer's protocol for the first strand cDNA synthesis. For RNA primer annealing, their mixtures were incubated for 2 minutes at 65°C and cooled to 4°C. After cooling, cDNA synthesis cycle followed; 4°C for 1 minute, 25°C for 10 minutes, 42°C for 10 minutes, 70°C for 15 minutes, and again 4°C for 1 minute. The second stranded reaction followed immediately. After mixing the first strand solution with second strand cDNA synthesis reaction solution, the entire mixture was incubated in the thermocycler as follows: 4°C for 1 minute, 25°C for 10 minutes, 50°C for 30 minutes, 70°C for 5 minutes, 4°C. Then, using the Agencourt® RNAClean® beads, the entire cDNA was purified according to the manufacturer's protocol. For the sense transcript cDNA generation, WT-Ovation™ Exon Module (NuGEN) was used. Based on the instructions in the manufacturer's manual, 3 μg of each cDNA was mixed with the provided primers and incubated for 5 minutes at 95°C and cooled to 4°C. After mixing with enzyme solution, the entire reaction mixture was incubated as follows: 1 minute at 4°C, 10 minutes at 30°C, 60 minutes at 42°C, 10 minutes at 75°C, and cooled to 4°C. Then the ST-cDNA was purified with the QIAGEN DNA clearing kit. After the purification, fragmentation reaction was carried out using FL-Ovation™ cDNA Biotin Module V.2 according to the recommended methods. Briefly, 5 μg of cDNA was mixed with the provided enzyme mix and incubated 30 minutes at 37°C and 2 minutes at 95°C. Then the reaction was cooled to 4°C. Next, the reaction was subjected to the labeling reaction as suggested by the manufacturer. The fragmented cDNA was mixed with labeling reaction mix and incubated at 37°C for 60 minutes and 70°C for 10 minutes. 
Then, the reaction was cooled to 4°C and used immediately for array hybridization. For the array hybridization, instead of recommended by Affimatrix, we used the standard array protocol provided by the NuGEN exon module. For hybridization, Chips were incubated in Gene Chip Hybridization Oven 640 and underwent the washing and staining processes according to the FS450_0001 fluidic protocol. Then, the array was scanned using Gene Chip Scanner 3000 (GCS3000).\nThe exon arrays for control, RA and RAEB2 were loaded into the Partek Genomics Suite 6.5. The Robust Multi-array Analysis (RMA) algorithm was used for initial intensity analysis [21] (Additional file 4). We generated gene expression estimates by averaging the intensities of all exons in a gene. Differential expression was defined as discussed for the NGS analysis above.", "In an earlier study Pellegatii and colleagues [22] used an Affymetrics Human Genome U133 Plus 2.0 GeneChip to assay consistently differentially expressed genes in hematopoietic stem cells (HSC) of 183 patients compared to 17 HSC of normal controls. This identified 534 probesets for RA and 4670 from RAEB2 patients. We matched these probesets to gene symbols and identified their corresponding transcript IDs on the Exon GeneChip. For the RA gene list, 69 probesets did not have annotated gene symbols, 103 had no corresponding transcripts and for 431 matching IDs were found. For the RAEB2 gene list, 807 probesets had no annotation, 1009 had no matching transcripts and for 3661 matching IDs were found. Altogether, this created a target gene space of 4092 probesets that were further analyzed by our bioinformatics modeling approach.", "The secondary structures for all miRNAs with stem-loop sequences deposited in miRBase were calculated using the Matlab Bioinformatics toolbox (version R2009a). The locations of mature miRNAs were identified as perfect alignments between the stem-loop and mature miRNA sequence. We calculated the locations of novel miRNA* sequences based on the genome coordinates of aligned small RNA reads. We note that due to mismatches in the miRBase alignments, e.g. between the miRNA stem-loop and the human genome, some derivations between the small RNA sequencing reads and the deposited stem-loop sequences may exist. All information was visualized using the tool VARNA [23].", "Information on miRNA target genes was obtained from two popular and publicly available miRNA target prediction databases. We retrieved flat files for all predicted human miRNA targets available in miRanda [24] and targets conserved over different mammalian species from targetscan [25]. In order to reduce the number of false positive predictions we considered only targets predicted by both algorithms, which resulted in about 110.000 miRNA-mRNA pairs.\nIn theory the majority of miRNA* are degraded in the cell. Therefore, we restricted our analysis to sequences with minimum read counts of 100. In each case, we define a 7-mer nucleotide sequences based on the small RNA read with the highest copy number throughout the control, low and high risk MDS samples. The nucleotides at positions two to eight were extracted and transformed into the RNA alphabet. The seed regions were checked for overlap with other known miRNA and miRNA* sequences and the targetscanS algorithm was used to predict miRNA*-mRNA pairs, if the seed sequence was previously unreported. 
In general, this algorithm performs target predictions based on perfect and conserved matches between the genes untranslated region (UTR) and the first six nucleotides of the seed sequence. It further requires that the seed region is followed either by the nucleotide A (known as a t1A anchor) or that the position eight of the alignment contains a perfect Watson-Crick pairing. On contrast, if the seed sequences matched with a previously reported miRNA or miRNA*, we used the target prediction strategy as reported above.", "The flat files FACTOR and GENE of the commercially available database TRANSFAC v2008_2 [26] were downloaded and parsed into a MySQL database. The FACTOR and GENE flat files contain information on transcription factor proteins and genes regulated by transcription factors, respectively. A total of 2362 regulating factors for the human species (Homo Sapiens) were extracted and 70 entries, that did not describe proteins, but other regulatory factors were omitted. A large fraction (about 77 percent) of the remaining 2292 transcription factor proteins were mapped to Uniprot [27], either by external database ID's, or exact matches between protein names. With these accessions the protein coding gene IDs, as well as other information was downloaded automatically via a MATLAB based data retrieval algorithm implemented for this study. The transcript and probeset annotation files for the Affymetrix GeneChip Human Exon 1.0 ST Array were downloaded from the manufacture's website http://www.affymetrix.com and parsed into MySQL tables. Transcript IDs for 98 percent of the human transcription factor coding genes were extracted based on direct matches between gene names.\nGenes that can potentially be up regulated when the transcription factor protein binds to a specific site in its promoter region are called transcription factor target genes. We extracted all target genes for human transcription factor proteins by joining a number of database tables. This revealed 3296 gene targets for the 2292 transcription factor proteins. We used direct matches between the target gene names, as well as additional entries, to identify corresponding transcripts on the Affymetrix GeneChip. This resulted in matches for 83 percent of the target genes.", "The functional analysis of miRNA and miRNA* were performed by means of their predicted target genes. However, since the pools of potential target genes are large and suffer from high false positive rates, we selected only a limited set of genes for functional analysis. Therefore, we defined a threshold T describing the number of different miRNA or miRNA* that regulate a gene. Similar to many biological phenomena such functions are described by power laws (see Figure 2) and we aimed to select T in the exponential part of the function. This ensured that the selected genes were targeted by a large number of different miRNAs. We further tried to select at most 100 genes for the analysis. In each case, the selected target genes were imported into Ingenuity Pathway Analysis (IPA) version 8.5 and analyzed using the IPA Core Analysis algorithm.\nThreshold for miRNA/miRNA* target gene selection. This figure describes the number of genes (x-axis) that are targeted by different miRNA*s (y-axis), for the example of RA cells. 
The proposed data integration model assumed that the mRNA amount present in a cell at any given time depends linearly on the concentrations of the transcriptionally acting TFs and the post-transcriptionally acting miRNAs. Gene expression was therefore modeled as a linear combination of these factors plus random noise, which can be expressed as a standard regression model [28]

$$y_i = \beta_0 + \sum_{p=1}^{N} \beta_p x_{pi} + \varepsilon \qquad (1)$$

where $y_i$ is the expression of gene $i$, $i = 1,\dots,G$, with $G$ being the number of genes under study, $(\beta_0,\dots,\beta_N)$ are the regression coefficients to be estimated by our model, $N$ is the total number of TFs and miRNAs observed in the cells under study, and $\varepsilon$ is the noise term, assumed to be an independent Gaussian random variable with expectation zero and variance $\sigma^2$. The covariate $x_{pi}$ was defined as

$$x_{pi} = \alpha_{pi}\,\gamma_p\,\delta_p \qquad (2)$$

where $\alpha_{pi}$ is an indicator associating gene $i$ with regulator $p$, $\gamma_p$ is a regulation characteristic, and $\delta_p$ is the expression level of regulator $p$. The association $\alpha_{pi}$ was determined by miRNA and TF target prediction: $\alpha_{pi}$ was set to one if gene $i$ was a target of regulator $p$ and to zero otherwise. Transcription factors generally contribute to transcription and hence to higher target gene levels; therefore $\gamma_p$ was set to one if $p$ was a TF. In contrast, miRNAs are known to post-transcriptionally degrade mRNAs, hence $\gamma_p$ was set to minus one if $p$ was a miRNA. The expression levels $\delta_p$ were determined experimentally as discussed earlier. Note that all expression values were normalized to controls and standardized to mean zero and standard deviation one.

The above regression problem was solved using the recently proposed cyclical coordinate descent algorithm, which is based on an elastic net penalty [29]. This algorithm is particularly fast, and the elastic net penalty is well suited to large and sparse problems with correlated inputs (compare Additional file 5 Figure S1). In addition, it has the beneficial property of shrinking a number of the coefficients $\beta_p$ to exactly zero, thereby integrating an effective variable selection step that would otherwise be computationally expensive [30]. Note that the penalty is weighted and that these weights were determined by cross-validation.
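The following sketch illustrates this regression setup on synthetic data. All dimensions and values are invented, and MATLAB's lasso function (Statistics and Machine Learning Toolbox, newer than the R2009a release cited above) is used here as a stand-in for the cyclical coordinate descent / elastic net solver of [29]; it should be read as an illustration of the model, not the exact implementation.

```matlab
% Minimal sketch (synthetic data): build the design matrix of equation (2),
% x_pi = alpha_pi * gamma_p * delta_p, and fit the linear model of equation (1)
% with an elastic net penalty.
G = 200;  N = 50;                                  % genes and regulators (toy sizes)
alphaMat = double(rand(G, N) < 0.1);               % alpha_pi: target associations (0/1)
gamma    = [ones(1, N/2), -ones(1, N/2)];          % +1 for TFs, -1 for miRNAs
delta    = randn(1, N);                            % standardized regulator expression
X        = alphaMat .* repmat(gamma .* delta, G, 1);   % x_pi = alpha_pi*gamma_p*delta_p
betaTrue = randn(N, 1) .* double(rand(N, 1) < 0.3);    % sparse ground-truth coefficients
y        = X * betaTrue + 0.1 * randn(G, 1);           % gene expression, equation (1)

[B, stats] = lasso(X, y, 'Alpha', 0.5, 'CV', 5);   % elastic net with cross-validation
betaHat    = B(:, stats.IndexMinMSE);              % coefficients at minimum CV error
fprintf('%d of %d regulators received non-zero coefficients\n', nnz(betaHat), N);
```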
Defining the small RNAome of Myelodysplastic Syndromes by next generation sequencing

We performed high-throughput next generation sequencing of small RNAs (RNA-seq) on primary cells from control, low-grade (RA) and high-grade (RAEB2) MDS patients on an Illumina Genome Analyzer IIx (see Methods). This resulted in about thirteen million short sequence reads (length 38 bp) per sample. We implemented an annotation algorithm that integrates knowledge from diverse biological databases to characterize each RNA-seq read (Figure 1). In brief, all reads were trimmed (length 22 bp) and aligned against the current version of the human genome (GRCh37) using the publicly available software Bowtie [19]. We allowed at most two mismatches between the reference and read sequences. Since the analyzed reads were relatively short and mismatches were allowed, a large number aligned to multiple genome positions (green part, Figure 1). Consistent with previous analyses, we discarded reads having more than 25 alignment positions [31]. For annotation, we matched the small sequencing reads to a set of small RNAs that included miRNAs from miRBase [32], a number of other small RNAs, including tRNAs and rRNAs, from the RepeatMasker track of the UCSC genome browser [33], as well as piRNAs from the NCBI database http://www.ncbi.nlm.nih.gov (blue callout box, Figure 1). This mapping showed that the composition of the small RNAome differed dramatically between the analyzed samples, suggesting a shift in the regulation of small RNA targets during the progression of this disease.

First, the relative amounts of tRNA to rRNA were significantly larger in RAEB2 compared to RA and control (36 vs. 1.6 and 1). Since tRNAs are vital building blocks for protein synthesis and are required during translation, this may indicate an increased regulation of translation at this disease stage. A recent study based on tRNA microarrays reported a 20-fold elevation of tRNAs in tumor samples versus normal samples [34]. In addition, tRNAs have been shown to inhibit cytochrome c-activated apoptosis [35,36]. Taken together, the high tRNA content may contribute to two well known characteristics of high-grade MDS, decreased apoptosis (in contrast to low-grade MDS) and a high rate of leukemic transformation. To our knowledge, this finding has not been reported for MDS before, highlighting the value of combining next generation sequencing with the proposed annotation methodology.

Next, the sequencing data provided the first evidence of piRNA expression in marrow cells, with particular enrichment in low-grade MDS. Piwi-interacting RNAs are a relatively recently defined class of non-coding RNAs of 26 to 32 nt in length [37,38]. In RA their expression increased, accounting for about nine percent of total sRNA counts, compared to about two and one percent in RAEB2 and controls, respectively. The biogenesis of piRNA is not fully understood today, but increasing evidence indicates that PIWI proteins are required for the accumulation of piRNAs [39-42]. In accordance with this concept, our exon array data showed that piwil1 and piwil2, two of the four human PIWI coding genes, were significantly up-regulated in RA compared to control and high-grade MDS cells. Furthermore, recent studies have indicated that the PIWI-piRNA complex may have a role in the post-transcriptional silencing of damaged DNA fragments [39,43,44] and that interrupting PIWI-piRNA formation can lead to DNA double strand breaks [45]. Altogether, these findings suggest that piRNAs might be used as diagnostic markers for low-grade MDS; however, further studies of their role in MDS pathogenesis are warranted.

Finally, we found an increased regulatory role of miRNAs in cells of RA and RAEB2 patients. In low-grade MDS, miRNAs represented about 35 percent of the total sRNAs, an almost 4-fold increase compared to control, highlighting their role in disease pathogenesis. Similarly, miRNA percentages were elevated to about 14 percent in RAEB2 compared to control, although to a lesser extent (two-fold increase). Of note, miRNAs are currently the most widely studied species of sRNAs, and they are known to influence mRNA levels as well as translation.
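These class fractions are simple proportions of the annotated read counts; as a toy illustration, the sketch below computes them for one sample using invented counts rather than the study's data.

```matlab
% Minimal sketch (invented counts): express the annotated small RNA classes of
% one sample as percentages of the total, as compared across control, RA and
% RAEB2 in the text.
classes = {'miRNA', 'tRNA', 'rRNA', 'piRNA', 'other'};
counts  = [ 9e5,    4e5,   2.5e5,  1e5,    6e5 ];    % annotated read counts (placeholder)
pct     = 100 * counts / sum(counts);
for k = 1:numel(classes)
    fprintf('%-6s %5.1f %%\n', classes{k}, pct(k));
end
```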
Given their profound effects, the above findings, and the limited literature on miRNAs in MDS, we decided to further investigate and discuss their roles in MDS. Sequencing of additional RNAomes will be required to confirm the observed trends over a larger patient population.
Detailed characterization of expressed miRNA loci and identification of novel miRNA*

In the analyzed samples, reads were found at 246 different full-length primary miRNA sequence loci. These included matches at 173 different mature miRNA sites in RA, 93 in controls and 79 in RAEB2. Expression varied between samples and was generally more elevated in RA compared to RAEB2 (compare Figure 3 and Additional file 6 Tables S1, S2 and S3). The miRNA hsa-mir-125b-2 was an exception and was more elevated in RAEB2 (read counts: 264 in RAEB2, 87 in RA and zero in controls). A single miRNA, hsa-mir-720 (fold change 10), was significantly down-regulated in RA, and no copies were detected in RAEB2. Furthermore, a total of 58 miRNAs were expressed only in RA (Additional file 6 Table S4), hsa-mir-191 was unique to controls, and hsa-mir-9-3 was detected only in RAEB2.

Comparison of miRNA expression. A heat map of the log2-transformed expression levels for miRNAs and miRNA* in the three analyzed samples.
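A figure of this kind can be produced with core MATLAB graphics; the sketch below uses random placeholder values rather than the study's read counts and is only meant to illustrate the log2 transformation and sample layout of Figure 3.

```matlab
% Minimal sketch (random placeholder data): log2-transformed expression heat
% map across the three samples, in the style of Figure 3.
expr   = 2.^(8 + 3*randn(30, 3));               % 30 miRNA/miRNA* x 3 samples (toy values)
labels = {'control', 'RA', 'RAEB2'};
imagesc(log2(expr));                            % log2 transform, as in the figure
colorbar;
set(gca, 'XTick', 1:3, 'XTickLabel', labels);
ylabel('miRNA / miRNA*');
title('log2 expression');
```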
A number of high-throughput sequencing studies have recently reported the detection of miRNA*, often with higher copy numbers than their mature counterparts [46,47]. These studies further suggest that miRNA* associate with the effector complex AGO1 and regulate target gene expression. However, their roles in MDS have never been studied. We found reads matching miRNA* motifs at 68 loci in RA, 55 in control and 24 in RAEB2 cells. In addition, multiple reads matched uncharacterized positions on 59 different primary miRNA sequences for which no miRNA* motifs had been reported before. Therefore, we visualized the secondary structure of each primary sequence, the location of the mature sequence, and the reads clustered at the uncharacterized loci (see Figure 4, Methods and Additional file 6 Table S5). Our bioinformatics analysis showed that most uncharacterized reads aligned to the miRNA* arm, opposite the mature sequence. This led to the definition of 59 previously unreported miRNA* candidates, of which 20 seed sequences had previously been annotated in the TargetScan database [48] but did not exist in the miRBase version (v14) used for this study. We classified the remaining 39 motifs as novel miRNA* sequences (miRNA**); folding information and locations on the miRNA arms are given in Additional file 6 Table S5.

miRNA* analysis pipeline. Analysis pipeline for the visualization of novel miRNA* from small RNA sequencing reads aligned to uncharacterized loci on known primary miRNA sequences.
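The folding and arm-location step of this pipeline can be sketched with the MATLAB Bioinformatics Toolbox used elsewhere in this study. The hairpin and mature sequences below are arbitrary placeholders, and rnafold is assumed to return the dot-bracket structure and folding energy in that order.

```matlab
% Minimal sketch (placeholder sequences): predict the stem-loop secondary
% structure and locate the mature arm by perfect alignment, as in the miRNA*
% pipeline of Figure 4.
hairpin = ['UGAGGUAGUAGGUUGUAUAGUUUUAGGGUCACACCCACC' ...
           'ACUGGGAGAUAACUAUACAAUCUACUGUCUUUC'];     % placeholder stem-loop
mature  = 'UGAGGUAGUAGGUUGUAUAGUU';                   % placeholder mature miRNA

[bracket, dG] = rnafold(hairpin);                     % dot-bracket structure and energy
pos = strfind(hairpin, mature);                       % perfect alignment of the mature arm
fprintf('mature arm at positions %d-%d, folding energy %.1f kcal/mol\n', ...
        pos(1), pos(1) + length(mature) - 1, dG);
fprintf('%s\n%s\n', hairpin, bracket);
```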
Considering all samples together, significant expression was detected (read count of at least 100) for 128 miRNA*, including 123 miRNA* in RA, 72 in control and 31 in RAEB2. Interestingly, at many miRNA loci either the miRNA or the miRNA* (including miRNA**) arm was expressed in our RNA-seq data (Additional file 5 Figure S2), suggesting a non-random and selective expression of the two miRNA arms. Importantly, we found that 24 miRNA* were expressed only in RA, hsa-mir-24-1* was unique to control (copy number: 119), and no miRNA* was uniquely expressed in RAEB2. These miRNA* can potentially be used as biomarkers to diagnose low-grade MDS, which has significantly overlapping morphologic and clinical features with reactive cytopenias and is consequently very difficult to diagnose. However, further validation in additional patients and with different methods is needed to confirm these findings. Details for the ten miRNA* with the greatest fold changes in RA are given in Table 1; further information can be found in Additional file 6 Tables S1 and S4.

Differentially expressed miRNA* and their target genes. List of ten miRNA* (see Additional file 6 Table S4 for folding information) that were detected with the largest fold changes between control and low-grade cells. We show the fold change, the p-value (measuring whether the number of down-regulated target genes is greater than expected by chance) and the target genes with their regulation (bold arrows mark significant and italic non-significant regulation). We assessed the significantly down-regulated genes for functional enrichment and pathways. The top five enriched biological functions included RNA Post-Transcriptional Modification (pval: 1.2E-04), Cellular Growth and Proliferation (pval: 1.25E-04), Cell Death (pval: 5.79E-04) and Cancer (pval: 5.95E-04). The top six enriched canonical pathways included IL-22 Signaling (pval: 2.63E-04), p53 Signaling (pval: 8.32E-04), IL-15 Signaling (pval: 2.95E-03), B Cell Receptor Signaling (pval: 4.47E-03) and FLT3 Signaling in Hematopoietic Progenitor Cells (pval: 4.68E-03).
Functional roles of miRNA and miRNA* in Myelodysplastic Syndromes

In order to identify biological functions that might contribute to low-grade MDS and that can be modulated by the detected miRNA/miRNA*, we first identified target genes for the 91 miRNA and 104 miRNA* that were most highly expressed in RA compared to RAEB2 and control marrow cells. The total number of uniquely regulated mRNAs was 7021 for miRNA* and 4665 for miRNA (see Methods). To select high-confidence targets, each gene was further ranked according to the number of miRNAs or miRNA* that potentially control its expression or translation (see Methods). This was necessary to counteract the high false positive rates of in silico miRNA target predictions, which, for example, do not consider tissue specificity. From this ranking, two gene sets (Table 2) were selected to compare significantly enriched molecular and cellular functions (Methods): the first consisting of 74 genes controlled by 19 miRNAs and the second consisting of 93 genes regulated by at least 14 miRNA*. Interestingly, four of the top five functions with the smallest p-values overlapped, namely "Cell Death", "Cellular Development", "Cell Cycle" and "Gene Expression" (Table 2). This high concordance suggests that the detected miRNA* fulfill roles similar to their mature counterparts, providing further evidence of their selectivity and biological importance.

Enriched biological processes of miRNA and miRNA* target genes. This table gives an overview of the selected miRNA (top) and miRNA* (bottom) target genes, their regulation (bold is used for significant and italic for non-significant expression), the top five molecular functions of these genes, as well as the genes involved in these functions.

To study the overall role of miRNA/miRNA* in RA and RAEB2 cells, their target genes were combined for further analysis. In RA, we included 94 genes regulated by at least 27, and in RAEB2 a total of 83 genes targeted by at least three, different miRNA/miRNA*. The difference in the required number of regulating miRNA/miRNA* reflects the higher number of differentially expressed miRNAs in RA (compare Additional file 5 Figure S3).

Next, we identified significantly enriched molecular and cellular functions (Methods) and compared the results with a recent large-scale gene expression study of 183 MDS patients [22]. In both disease grades the selected genes were enriched for the molecular function "Cell Death" (RA: 9.86E-06, RAEB2: 1.75E-04). This is in agreement with the cited study, which identified apoptosis as the main deregulated process in low-grade MDS. Also consistent with that study, the miRNA/miRNA* targets selected in both MDS subtypes were enriched for "DNA Replication, Recombination, and Repair" (RA: 1.12E-03, RAEB2: 6.67E-03). In addition, cell cycle regulatory genes were among the identified target genes for both RA and RAEB2. In accordance with the cited study, we found that the "G2/M phase" (RAEB2: 1.55E-03) and the "DNA damage checkpoint" (RAEB2: 6.67E-03) were exclusively regulated in RAEB2.
In contrast, the "G1 phase" (RA: 6.17E-06) was exclusive to RA. These findings show that miRNA/miRNA* interfere with molecular functions and pathways known to be deregulated at the transcriptomic level, as reported in the cited gene expression study (additional information is given in Additional file 7). In the following, we propose a bioinformatics modeling approach to further elucidate the effects of miRNA/miRNA* on the MDS transcriptome.
Computational modeling of transcriptome regulation in Myelodysplastic Syndromes

In recent years it has become increasingly evident that miRNAs and TFs coordinate to regulate mRNA levels [49]. Consequently, we propose a bioinformatics model that accounts for both effects. It integrates miRNA expression levels measured by next generation sequencing, gene expression measured by exon arrays, as well as data from a recently published gene expression microarray study [22]. All datasets were linked using a number of publicly and commercially available bioinformatics databases (Methods). In particular, we focused on the regulation of genes consistently differentially expressed over a large patient pool that can be influenced by the miRNAs/miRNA* and TFs detected in our samples. The general workflow is illustrated in Figure 5, and we briefly describe the main aspects below (more information is given in the Methods section and Additional file 5 Figure S4).

Transcriptome analysis pipeline. Pipeline for the integrative analysis of the MDS transcriptome, further described in the text and in Additional file 5 Figure S4.

The analysis started with miRNA profiling in samples of RA and RAEB2 patients by next generation sequencing, as discussed earlier. In addition, we measured gene expression and splice form variation using the Affymetrix GeneChip Human Exon 1.0 ST Array. In an earlier study, the bone marrow of 55 RA and 43 RAEB patients was compared against 17 controls and the collectively differentially expressed genes were explored [22]. These differentially expressed genes were merged with the exon array profiles (Additional file 5 Figure S5), and a set of 385 RA and 2795 RAEB2 genes was constructed.

Bioinformatics databases were then used to map between the obtained gene lists and interacting miRNAs and TFs. This identified about 10,000 possible interactions between 217 miRNAs (94 miRNA and 123 miRNA*), expressed in either RA or RAEB2, and their corresponding genes. In a similar step, all known human TF proteins and their validated promoter targets were identified. Next, their coding genes were determined using a retrieval algorithm that automatically queries the Universal Protein Resource [27]. The coding gene IDs were then mapped to Affymetrix transcript IDs to obtain gene expression levels from the analyzed exon array. After TFs with low expression levels were removed, 198 TFs with 465 validated interactions with the described MDS gene pool could be identified. A total of 1073 genes could not be associated with an expressed miRNA or a TF; these potential secondary targets were thus omitted from further analysis. The expression levels of all miRNA/miRNA*, TFs and genes were normalized to their respective controls and then standardized to a mean of zero and a standard deviation of one.

To develop a bioinformatics model of gene expression regulation, we assumed that the mRNA amount present in a cell at any time depends linearly on its positively acting TFs and negatively acting miRNAs [50,51].
Hence, the mRNA amounts can be modeled as a linear combination of the standardized expression levels of miRNAs and TFs. Note that all expression measures for genes, miRNAs and TFs were acquired from marrow cells of the same patients, whereas the other studies mentioned relied on expression levels from multiple studies of different tissues.

The resulting model for RA consisted of 1640 equations, one per RA gene, and 415 predictors (regulators, i.e. miRNAs and TFs). For RAEB2 we used 1216 equations and 290 predictors. In spite of the large variable space, we were interested in determining how much each regulator contributes to the expression of the analyzed genes. This is a particularly large regression problem, and our input data, like other biological measurements, were highly correlated. In addition, the average number of miRNA and TF regulators per gene was small compared to the variable space (see Additional file 5 Figure S6), leading to a set of sparse equations, which posed a further algorithmic difficulty. To overcome these issues, we applied the recently proposed elastic net algorithm [29], which is specifically equipped to handle large, correlated and sparse problems. In addition, its regularization term is designed to shrink a number of predictors to exactly zero. This eliminates variables (miRNAs and TFs) without importance and directly incorporates a feature selection procedure that would otherwise be computationally expensive.

In RA this strategy identified 349 of the 415 variables with coefficients different from zero. Similarly, for RAEB2 it selected 197 of the 290 possible variables. To rule out the possibility that these results depend purely on the expression levels of the regulators or the number of regulated genes, we calculated a series of correlation coefficients. With Pearson correlation coefficients of 0.003 and 0.067 for expression, and 0.062 and 0.007 for the number of regulated genes, no correlations were found for low- and high-grade MDS, respectively. The selected variables for RA included 119 miRNA*, 90 miRNA and 140 TFs. In addition to the increased expression of miRNA* in RA and their potential to regulate low-grade MDS associated biological functions and pathways, this large selection of miRNA* provides further mathematical evidence for their regulatory importance.

To identify important miRNA/miRNA* and TFs, all regulators were ranked based on the deviation of their regression coefficients from zero (Figure 6). A large deviation, in the positive or negative direction, corresponds to a large influence on gene expression.

MDS transcriptome regulators. Top 20 regulators determined by the proposed modeling approach. The y-axis shows the regression coefficients and the x-axis lists the regulator names. TFs are named with their TRANSFAC accession and the corresponding protein name. miRNAs are named with their miRBase accession; previously known miRNA* are marked with a single star and novel miRNA* with two stars. In addition, the rounded regression coefficients are indicated on the respective regulator bars.
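The ranking itself is a one-line sort on the absolute coefficient values; the sketch below uses invented regulator names and coefficients purely to show the operation behind Figure 6.

```matlab
% Minimal sketch (invented coefficients): rank regulators by |beta| and print
% the top entries, mirroring the ordering used for Figure 6.
names = {'regulator_A', 'regulator_B', 'regulator_C', 'regulator_D', 'regulator_E'};
beta  = [ 2.4, -1.9, 1.7, 0.4, -0.2 ];             % fitted elastic net coefficients
[~, order] = sort(abs(beta), 'descend');           % large |beta| = large influence
for k = 1:3
    fprintf('%d. %-12s beta = %5.2f\n', k, names{order(k)}, beta(order(k)));
end
```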
In RA, two subtype-specifically expressed miRNAs were selected as the most dominant regulators. Whereas the differentially expressed target genes of hsa-mir-1977** regulate hematopoiesis and apoptosis, hsa-miR-130a has previously been associated with the regulation of angiogenesis and platelet physiology [52,53]. The transcription factor E2F1 ranked third and is known to regulate S-phase-dependent apoptosis in MDS [54,55]. Similarly, eight of the 13 TFs within the top 20 have previously been associated with "Hematological Disease" or "Hematopoiesis".

For RAEB2, the proposed pipeline selected 46 miRNA*, 76 miRNA and 84 TFs as influential. The 20 highest ranked regulators included 16 TFs, of which 12 have previously been associated with either "Hematological Disease" or "Hematopoiesis". The top ranked TF, AP-2β, has a known role in the development of metastatic phenotypes as well as in apoptosis [56]. The highest ranked miRNAs were hsa-miR-122 and hsa-miR-20b, both expressed at moderate levels and not previously linked to the RAEB2 phenotype.

In conclusion, the high ranking of miRNAs and TFs with known and important relations to MDS demonstrates the power of our approach. While a few TFs have already been extensively investigated in MDS, an in-depth understanding of miRNA regulation remains elusive. We are planning to further study the functions of hsa-mir-1977** and hsa-miR-130a in primary cells to confirm our findings and illustrate their roles in MDS.
Key functions regulated by miRNAs and TFs in Myelodysplastic Syndromes

In order to identify molecular processes influenced by the above regulators, we first annotated the target genes of highly ranked miRNAs/miRNA* and TFs (absolute regression coefficients greater than one) with pre-filtered gene ontology terms (those containing fewer than 500 genes) [57]. Each biological process was then ranked according to the number of involved target genes. Further, the genes differentially expressed in each process term were identified and overlaid with the above ranking in Figure 7.

MDS regulated biological processes. Illustration of biological processes that are highly regulated by influential miRNAs and TFs, as selected by our in silico model. The left panel shows results for the low risk and the right panel for the high risk grade. In both graphs the x-axis lists the regulated processes. The black bars show the number of selected miRNAs and TFs that regulate a given process; the red bars show the number of down-regulated and the green bars the number of up-regulated genes.
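The per-process counts plotted in Figure 7 amount to a simple group-by over the annotated target genes; the sketch below does this for invented process assignments and fold changes.

```matlab
% Minimal sketch (invented annotations): count up- and down-regulated target
% genes per biological process, the quantities shown as green and red bars in
% Figure 7.
procOfGene = [1 1 1 2 2 3 3 3 3];                          % process index per target gene
foldChange = [-2.1 -1.5 0.9 1.8 2.2 -0.7 1.1 -1.3 2.5];    % fold change per target gene

nUp   = accumarray(procOfGene(:), double(foldChange(:) > 0));   % up-regulated per process
nDown = accumarray(procOfGene(:), double(foldChange(:) < 0));   % down-regulated per process
for k = 1:numel(nUp)
    fprintf('process %d: %d up, %d down\n', k, nUp(k), nDown(k));
end
```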
Some highly regulated processes, such as angiogenesis, were shared between low- and high-grade MDS. Moreover, our model indicated a few biological processes that are highly regulated in both disease subtypes but differ in their expression levels, for example "nuclear mRNA splicing, via spliceosome", "G1/S transition of mitotic cell cycle" and "protein import into the nucleus, docking". Such processes are potential keys to the functional differences between MDS subtypes.

Of particular interest was the process "negative regulation of transcription from RNA polymerase II promoters" (GO:0000122), which was the most regulated process in both MDS grades. This process prevents or reduces the transcription of different RNAs, including miRNAs. In RA, the majority of the differentially expressed genes in this term were down-regulated (Figure 7), hence promoting transcription. By contrast, in RAEB2 the majority of differentially expressed genes were up-regulated, leading to reduced RNA production. These results are therefore in agreement with our earlier finding that some miRNAs were only detected, or had higher copy numbers, in RA compared to RAEB2.

Altogether, these results suggest that the differences in miRNA expression between RA and RAEB2, and potentially their downstream targets, might be the result of RNA polymerase II promoter regulation. In RA, this would indicate a potential feedback system in which the expressed miRNAs and TFs down-regulate GO:0000122; in turn, this could increase RNA expression and hence lead to an accumulation of miRNAs. By contrast, in RAEB2 the selected miRNAs and TFs up-regulate GO:0000122, driving the cell to reduce RNA synthesis and consequently decreasing the overall amount. The discussed feedback loops are thus a potential explanation for the high amounts of miRNA seen in RA and the much lower amounts in RAEB2, two salient findings of the RNA-seq analysis described above. Further studies to investigate the role of this pathway in MDS are warranted.
This pathway prevents or reduces transcription of different RNAs, including miRNAs.\nIn RA, the majority of the differentially expressed genes in this term were down regulated (Figure 7), hence promoting transcription. By contrast in RAEB2, the majority of differentially expressed genes were up regulated, leading to a reduced RNA production.\nTherefore, these results are in agreement with our earlier findings that some miRNAs were only detected, or had higher copy numbers, in RA compared to RAEB2.\nAltogether, these results suggested that the differences in miRNA expression between RA and RAEB2, and potentially their downstream targets, might be the result of RNA polymerase II promoter regulation. In RA, this would indicate a potential feedback system in which expressed miRNA and TF down regulate \"GO:0000122\". In turn, this could increase expression of RNA and hence accumulate miRNAs. By contrast in RAEB2, the selected miRNA and TF up regulate \"GO:0000122\". This drives the cell to reduce RNAs synthesis and consequently decreases their overall amount.\nThus, the discussed feedback loops are a potential explanation for the high amounts of miRNA seen in RA and the much lower amount in RAEB2, two obvious discoveries from the RNA-seq analysis described above. Further studies to investigate the role of this pathway in MDS are warranted.", "We performed high-throughput next generation sequencing of small RNAs (RNA-seq) on primary cells from control, low-grade (RA) and high-grade (RAEB2) MDS patients on an Illumina Genome Analyzer IIx (see Methods). This resulted in about thirteen million short sequence reads (length 38 bp) per sample. We implemented an annotation algorithm that integrates knowledge from diverse biological databases to characterize each RNA-seq read (Figure 1). In brief, all reads were trimmed (length 22 bp) and aligned against the current version of the human genome (GRCh37), using the publicly available software Bowtie [19]. We allowed for at most two mismatches between the reference and read sequences. Since, the analyzed reads were relatively short and we allowed mismatches, a large number aligned to multiple genome positions (green part Figure 1). Consistent with previous analyses, we decided to discard reads having more than 25 alignment positions [31]. For annotation, we matched small sequencing reads to a set of small RNAs that included miRNAs from miRBase [32], a number of other small RNAs, including tRNAs and rRNAs, from the RepeatMasker track of UCSCs genome browser [33], as well as piRNAs from the NCBI database http://www.ncbi.nlm.nih.gov (blue callout box Figure 1). This mapping showed that the composition of the small RNAome was dramatically different from the analyzed samples, suggesting a shift in the regulation of small RNA targets during the progression of this disease.\nFirst, the relative amounts of tRNA to rRNA were significantly larger in RAEB2 compared to RA and control (36 vs. 1.6 and 1). Since tRNAs are vital building blocks for protein synthesis and required during translation, this may indicate an increased regulation of translation at this disease stage. A recent study based on tRNA microarrays reported a 20-fold elevation of tRNAs in tumor samples versus normal samples [34]. In addition, tRNAs have been shown to inhibit cytochorme c activated apoptosis [35,36]. 
Taken together, the high tRNA content may contribute to the two well known characteristics of high-grade MDSs, decreased apoptosis (in contrast to low-grade MDS) and high rate of leukemia transformation. To our knowledge, this novel finding has not been reported for MDS, highlighting the combined use of next generation sequencing and the proposed annotation methodology.\nNext, the obtained sequencing data demonstrated the first evidence of piRNA expression in marrow cells, and particular enrichment in low-grade MDS. Piwi-interacting RNAs are a relative newly defined class of none coding RNAs with length from 26 to 32nt [37,38]. In RA their expression increased, accounting for about nine percent of total sRNA counts, compared to about two and one percent in RAEB2 and controls, respectively. The biogenesis of piRNA is not fully understood today, but increasing evidence pinpoints that PIWI proteins are required for the accumulation of piRNAs [39-42]. In accordance with this concept, our exon array data showed that piwil1 and piwil2, two of the four human PIWI coding genes, were significantly up-regulated in RA, compared to control and high-grade MDS cells. Furthermore, recent studies have indicated that the PIWI-piRNA complex may have a role in post-transcriptional silencing damaged DNA fragments [39,43,44] and that interrupting PIWI-piRNA formation can lead to DNA double strand breaks [45]. Altogether, these findings suggest that piRNA might be used as diagnostic markers for low-grade MDS, however, further studies of their role in MDS pathogenesis are warranted.\nFinally, we found an increased regulatory role of miRNAs in cells of RA and RAEB2 patients. In low-grade MDS miRNAs represented about 35 percent of the total sRNAs, an almost 4-fold increase compared to control, highlighting their role in disease pathogenesis. Similarly, miRNA percentages were elevated to about 14 percent in RAEB2 compared to control, although at a lower extent (two-fold increase). Of note, miRNAs are currently the most widely studied species of sRNAs and they are known to influence mRNA levels as well as translation. Due to their profound effects, the above findings, and taken into account insufficient literature on miRNAs in MDS, we decided to further investigate and discuss their roles in MDS.\nSequencing of additional RNAomes is required to confirm the observed trends over a larger patient population.", "In the analyzed samples, reads were found at 246 different full-length primary miRNA sequence loci. These included matches at 173 different mature miRNA sites in RA, 93 in controls and 79 in RAEB2. Expression varied between samples and was generally more elevated in RA compared to RAEB2 (compare Figure 3 and Additional file 6 Tables S1,S2 and S3). The miRNA hsa-mir-125b-2 was an exception and more elevated in RAEB2 (read counts: 264 RAEB2, 87 RA and zero in controls). A single miRNA, hsa-mir-720 (fold change 10), was significantly down-regulated in RA and no copies were detected in RAEB2. Furthermore, a total of 58 miRNAs were only expressed in RA (Additional file 6 Table S4), hsa-mir-191 was unique to controls and hsa-mir-9-3 was only detected in RAEB2.\nComparison of miRNA expression. A heat map of the log2 transformed expression levels for miRNAs and miRNA* in the three analyzed samples.\nA number of high-throughput sequencing studies have recently reported the detection of miRNA*, often with higher copy numbers than their mature counterparts [46,47]. 
These studies further suggest that miRNA* associate with the effector complex AGO1 and regulate target gene expression. However, their roles in MDS have not been studied before. We found reads matching miRNA* motifs at 68 loci in RA, 55 in control and 24 in RAEB2 cells. In addition, multiple reads matched uncharacterized positions on 59 different primary miRNA sequences; interestingly, no miRNA* motifs had previously been reported for these loci. Therefore, we visualized the secondary structure of each primary sequence, the location of the mature sequence and the reads clustered at uncharacterized loci (see Figure 4, Methods and Additional file 6 Table S5). Our bioinformatics analysis showed that most uncharacterized reads aligned to the miRNA* arm, opposite to the mature sequence. This led to the definition of 59 previously unreported miRNA* candidates, of which 20 had seed sequences already represented in the targetscan database [48] but absent from the miRBase version (v14) used for this study. We classified the remaining 39 motifs as novel miRNA* sequences (miRNA**); folding information and their locations on the miRNA arms are given in Additional file 6 Table S5.

miRNA* analysis pipeline (Figure 4): analysis pipeline for the visualization of novel miRNA* from small RNA sequencing reads aligned to uncharacterized loci on known primary miRNA sequences.

Considering all samples together, significant expression was detected (read count of at least 100) for 128 miRNA*, including 123 miRNA* in RA, 72 in control and 31 in RAEB2. Interestingly, in our RNA-seq data either the miRNA or the miRNA* (including miRNA**) arm was expressed at many miRNA loci (Additional file 5 Figure S2), suggesting a non-random and selective expression of the two different miRNA arms. Importantly, we found that 24 miRNA* were only expressed in RA, hsa-mir-24-1* was unique to control (copy number: 119) and no miRNA* was uniquely expressed in RAEB2. These miRNA* can potentially be used as biomarkers to diagnose low-grade MDS, which has significant overlapping morphologic and clinical features with reactive cytopenias and is consequently very difficult to diagnose. However, further validation in additional patients and with different methods is needed to confirm these findings. Details for the ten miRNA* with the greatest fold changes in RA are given in Table 1; further information can be found in Additional file 6 Tables S1 and S4.

Differentially expressed miRNA* and their target genes (Table 1): list of the ten miRNA* (see Additional file 6 Table S4 for folding information) detected with the largest fold changes between control and low-grade cells. We show the fold change, the p-value (measuring whether the number of down-regulated target genes is greater than expected by chance) and the target genes with their regulation (bold arrows mark significant and italic non-significant regulation). We assessed the significantly down-regulated genes for functional enrichment and pathways. The top five enriched biological functions included RNA Post-Transcriptional Modification (pval: 1.2E-04), Cellular Growth and Proliferation (pval: 1.25E-04), Cell Death (pval: 5.79E-04) and Cancer (pval: 5.95E-04). The top six enriched canonical pathways included IL-22 Signaling (pval: 2.63E-04), p53 Signaling (pval: 8.32E-04), IL-15 Signaling (pval: 2.95E-03), B Cell Receptor Signaling (pval: 4.47E-03) and FLT3 Signaling in Hematopoietic Progenitor Cells (pval: 4.68E-03).
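The arm-level classification underlying the pipeline described above (Figure 4) can be approximated with simple coordinate arithmetic. The sketch below is a simplified illustration under stated assumptions, not the published implementation: it assumes hairpin-relative coordinates for the mature miRNA and for each aligned read, and the function and variable names are hypothetical.

```python
def classify_read(read_start, read_end, mature_start, mature_end, hairpin_len):
    """Crudely assign a hairpin-aligned read to the mature or the star arm.

    All coordinates are 0-based positions within the pre-miRNA hairpin.
    A read overlapping the annotated mature sequence is called 'mature';
    a read on the opposite half of the hairpin is a 'star_candidate';
    anything else (e.g. loop-spanning reads) is 'other'.
    """
    overlap = min(read_end, mature_end) - max(read_start, mature_start)
    if overlap > 0:
        return "mature"
    midpoint = hairpin_len / 2.0
    mature_on_5p = (mature_start + mature_end) / 2.0 < midpoint
    read_on_5p = (read_start + read_end) / 2.0 < midpoint
    if mature_on_5p != read_on_5p:
        return "star_candidate"
    return "other"


# Toy example: a 90-nt hairpin with the mature miRNA on the 5' arm.
print(classify_read(60, 82, 16, 38, 90))  # -> 'star_candidate'
print(classify_read(18, 40, 16, 38, 90))  # -> 'mature'
```

In practice, reads classified this way would still need the folding-based visual inspection and seed analysis described in the Methods before being reported as miRNA* candidates.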
Functional roles of miRNA and miRNA* in Myelodysplastic Syndromes

In order to identify biological functions that might contribute to low-grade MDS and that can be modulated by the detected miRNA/miRNA*, we first identified target genes for the 91 miRNAs and 104 miRNA* that were most highly expressed in RA compared to RAEB2 and control marrow cells. The total number of uniquely regulated mRNAs was 7021 for miRNA* and 4665 for miRNA (see Methods). To select high-confidence targets, each gene was further ranked according to the number of miRNAs or miRNA* that potentially control its expression or translation (see Methods). This was necessary to counteract the high false positive rates of in-silico miRNA target predictions, which, for example, do not consider tissue specificity. From this ranking two gene sets (Table 2), the first consisting of 74 genes controlled by 19 miRNAs and the second consisting of 93 genes regulated by at least 14 miRNA*, were selected to compare significantly enriched molecular and cellular functions (Methods). Interestingly, four out of the top five functions with the smallest p-values overlapped. These included "Cell Death", "Cellular Development", "Cell Cycle" and "Gene Expression" (Table 2). This high compatibility suggested that the detected miRNA* fulfill similar roles to their mature counterparts, providing further evidence of their selectivity and biological importance.

Enriched biological processes of miRNA and miRNA* target genes (Table 2): this table gives an overview of the selected miRNA (top) and miRNA* (bottom) target genes, their regulation (bold is used for significant expression and italic for non-significant expression), the top five molecular functions of these genes, as well as the genes involved in these functions.

To study the overall role of miRNA/miRNA* in RA and RAEB2 cells, their target genes were combined for further analysis. In RA, we included 94 genes regulated by at least 27, and in RAEB2 a total of 83 genes targeted by at least three different miRNA/miRNA*. The difference in the required number of regulating miRNA/miRNA* was attributed to the higher number of differentially expressed miRNAs in RA (compare Additional file 5 Figure S3).

Next, we identified significantly enriched molecular and cellular functions (Methods) and compared the results with a recent large-scale gene expression study of 183 MDS patients [22]. In both disease grades the selected genes were enriched for the molecular function of "Cell Death" (RA: 9.86E-06, RAEB2: 1.75E-04). This is in agreement with the above study, which identified apoptosis as the main deregulated process in low-grade MDS. Again consistent with the cited study, miRNA/miRNA* targets selected in both MDS subtypes were enriched for "DNA Replication, Recombination, and Repair" (RA: 1.12E-03, RAEB2: 6.67E-03). In addition, cell cycle regulatory genes were among the identified target genes for both RA and RAEB2. In accordance with the study cited above, we found that the "G2/M phase" (RAEB2: 1.55E-3) and "DNA damage checkpoint" (RAEB2: 6.67E-3) were exclusively regulated in RAEB2. In contrast, the "G1 phase" (6.17E-06) was exclusive to RA.

These findings showed that miRNA/miRNA* interfere with molecular functions and pathways known to be deregulated at the transcriptomic level, as reported in the cited gene expression study (some additional information is given in Additional file 7). In the following, we propose a bioinformatics modeling approach to further elucidate the effects of miRNA/miRNA* on the MDS transcriptome.

Computational modeling of transcriptome regulation in Myelodysplastic Syndromes

In recent years it has become increasingly evident that miRNAs and TFs coordinate to regulate mRNA levels [49]. Consequently, we proposed a bioinformatics model that accounts for both effects. It integrated miRNA expression levels measured by next generation sequencing, gene expression measured by exon arrays, as well as data from a recently published gene expression microarray study [22]. All datasets were linked using a number of publicly and commercially available bioinformatics databases (Methods). In particular, we focused on the regulation of genes consistently differentially expressed over a large patient pool that can be influenced by the miRNAs/miRNA* and TFs detected in our samples. The general workflow is illustrated in Figure 5 and we briefly describe the main aspects below (more information is given in the Methods section and Additional file 5 Figure S4).

Transcriptome analysis pipeline (Figure 5): pipeline for the integrative analysis of the MDS transcriptome, further described in the text and Additional file 5 Figure S4.

The analysis started with miRNA profiling in samples of RA and RAEB2 patients by next generation sequencing, as discussed earlier. In addition, we measured gene expression and splice form variations using the Affymetrix GeneChip Human Exon 1.0 ST Array. In an earlier study, the bone marrow of 55 RA and 43 RAEB patients was compared against 17 controls and the genes collectively differentially expressed were explored [22]. These differentially expressed genes were merged with the exon array profiling (Additional file 5 Figure S5) and a set of 385 RA and 2795 RAEB2 genes was constructed.

Again, bioinformatics databases were used to map between the obtained gene lists and interacting miRNAs and TFs. This identified about 10,000 possible interactions between 217 miRNAs (94 miRNA and 123 miRNA*), expressed in either RA or RAEB2, and their corresponding genes. In a similar step, all known human TF proteins and their validated promoter targets were identified. Next, their coding genes were determined using a retrieval algorithm that automatically queries the Universal Protein Resource [27]. The coding gene IDs were then mapped to Affymetrix transcript IDs to obtain gene expression levels from the analyzed exon array. After TFs with low expression levels were removed, 198 TFs with 465 validated interactions with the described MDS gene pool could be identified. However, 1073 genes could not be associated with either an expressed miRNA or a TF, and thus these potential secondary targets were omitted from further analysis.

The obtained expression levels for all miRNA/miRNA*, TFs and genes were normalized to their respective controls and then standardized to a mean of zero and a standard deviation of one. To develop a bioinformatics model for gene expression regulation, we assumed that the mRNA amount present in a cell at any time is linearly dependent on its positively acting TFs and negatively acting miRNAs [50,51]. Hence, the mRNA amounts can be modeled as a linear combination of the standardized expression levels of miRNAs and TFs. Note that all expression measures for genes, miRNAs and TFs were acquired from marrow cells of the same patients, whereas the other mentioned studies relied on expression levels from multiple studies of different tissues.

The resulting model for RA consisted of 1640 equations, one for each RA gene, and 415 predictors (regulators, i.e. miRNAs and TFs). For RAEB2 we used 1216 equations and 290 predictors. Despite the huge variable space, we were interested in determining how much each regulator contributes to the expression of the analyzed genes. This is a particularly large regression problem, and our input data, like other biological measurements, were highly correlated. In addition, the average number of miRNA and TF regulators per gene was small compared to the variable space (see Additional file 5 Figure S6), leading to a set of sparse equations, which posed another algorithmic difficulty. To overcome these issues, we applied the recently proposed elastic net algorithm [29], which is specifically equipped to handle large, correlated and sparse problems. In addition, its regularization term was designed to shrink a number of predictors to exactly zero. This eliminates variables (miRNAs and TFs) without importance and directly incorporates a feature selection procedure that is otherwise computationally expensive.

In RA this strategy identified 349 variables, out of 415, with coefficients different from zero. Similarly, for RAEB2 it selected 197 out of the 290 possible variables. In order to rule out the possibility that these results are purely dependent on the expression levels of the regulators, or on the number of regulated genes, we calculated a series of correlation coefficients. With Pearson correlation coefficients of 0.003 and 0.067 for expression, and 0.062 and 0.007 for the number of regulated genes, no correlation was found for low- and high-grade MDS, respectively. The selected variables for RA included 119 miRNA*, 90 miRNA and 140 TFs. In addition to the increased expression of miRNA* in RA and their potential to regulate low-grade MDS-associated biological functions and pathways, the large selection of miRNA* provides further mathematical evidence for their regulatory importance.

To identify important miRNA/miRNA* and TFs, all regulators were ranked based on the deviation of their regression coefficients from zero (Figure 6). A large deviation, in the positive or negative direction, is synonymous with a large influence on gene expression.

MDS transcriptome regulators (Figure 6): top 20 regulators determined by the proposed modeling approach. The y-axis shows the regression coefficients and the x-axis lists the regulator names. TFs are named with their TRANSFAC accession and the corresponding protein name. miRNAs are named with their miRBase accession; previously known miRNA* are marked with a single star and novel miRNA* with two stars. In addition, the rounded regression coefficients are indicated on the respective regulator bars.
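As a concrete illustration of this fitting-and-ranking step, the sketch below fits one regulator-versus-gene-expression regression with a cross-validated elastic net and ranks regulators by the absolute value of their coefficients. It uses scikit-learn's ElasticNetCV as a stand-in for the cited coordinate-descent implementation; the matrix X (genes x regulators, built as in the Methods) and the regulator names are random placeholders, not data from this study.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)

# Placeholder design matrix: rows = genes, columns = regulators (miRNA/miRNA*/TF).
# Each entry corresponds to alpha_pi * gamma_p * delta_p as defined in the Methods.
n_genes, n_regulators = 1640, 415
X = rng.normal(size=(n_genes, n_regulators))
y = rng.normal(size=n_genes)          # standardized gene expression (placeholder)
regulator_names = [f"regulator_{p}" for p in range(n_regulators)]

# Cross-validated elastic net: the L1 part shrinks unimportant regulators to zero,
# the L2 part stabilizes the fit when predictors are correlated.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, max_iter=5000)
model.fit(X, y)

selected = np.flatnonzero(model.coef_)
print(f"{selected.size} of {n_regulators} regulators kept a non-zero coefficient")

# Rank regulators by how far their coefficient deviates from zero.
ranking = sorted(zip(regulator_names, model.coef_),
                 key=lambda item: abs(item[1]), reverse=True)
for name, coef in ranking[:20]:
    print(f"{name}\t{coef:+.3f}")
```

With real inputs, the non-zero coefficients returned by such a fit correspond to the selected regulators discussed in the text, and the top of the ranking corresponds to the regulators shown in Figure 6.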
In RA, two subtype-specifically expressed miRNAs were selected as the most dominant regulators. Whereas the differentially expressed target genes of hsa-mir-1977** regulate hematopoiesis and apoptosis, hsa-miR-130a has previously been associated with the regulation of angiogenesis and platelet physiology [52,53]. The transcription factor E2F1 ranked third and is known to regulate S-phase-dependent apoptosis in MDS [54,55]. Similarly, eight of the 13 TFs within the top 20 have previously been associated with "Hematological Disease" or "Hematopoiesis".

For RAEB2, the proposed pipeline selected 46 miRNA*, 76 miRNA and 84 TFs as influential. The 20 highest-ranked regulators included 16 TFs, of which 12 have previously been associated with either "Hematological Disease" or "Hematopoiesis". The top-ranked TF, AP-2β, has a known role in the development of metastatic phenotypes as well as in apoptosis [56]. The highest-ranked miRNAs were hsa-miR-122 and hsa-miR-20b, both moderately expressed and not previously linked to the RAEB2 phenotype.

In conclusion, the high ranking of miRNAs and TFs with known and important relations to MDS demonstrates the power of our approach. While a few TFs have already been extensively investigated in MDS, an in-depth understanding of miRNA regulation remains elusive. We are planning to further study the functions of the novel miRNAs hsa-mir-1977** and hsa-miR-130a in primary cells to confirm our findings and illustrate their roles in MDS.

Key functions regulated by miRNAs and TFs in Myelodysplastic Syndromes

In order to identify molecular processes influenced by the above regulators, we first annotated the target genes of highly ranked miRNAs/miRNA* and TFs (e.g. absolute regression coefficients greater than one) with pre-filtered (e.g. containing fewer than 500 genes) gene ontologies [57]. Each biological process was then ranked according to the number of involved target genes. Further, genes differentially expressed in each process term were identified and overlaid onto this ranking in Figure 7.

MDS regulated biological processes (Figure 7): illustration of biological processes that are highly regulated by influential miRNAs and TFs, as selected by our in-silico model. The left panel shows results for the low-risk and the right panel for the high-risk grade. In both graphs the x-axis lists the regulated processes. The y-axis shows, in the black bar, the number of selected miRNAs and TFs that regulate a given process; the red bar gives the number of down-regulated and the green bar the number of up-regulated genes.
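The process-level ranking described above is essentially a counting exercise; the short sketch below illustrates it with hypothetical inputs (the target-gene sets and GO annotations are placeholders, not data from this study).

```python
from collections import Counter

# Hypothetical inputs: target genes of highly ranked regulators,
# and a pre-filtered GO annotation (process -> set of member genes).
regulator_targets = {"E2F1": {"TP53", "CDC25A", "BCL2"},
                     "hsa-miR-130a": {"VEGFA", "TP53"}}
go_annotation = {"apoptotic process": {"TP53", "BCL2", "CASP3"},
                 "angiogenesis": {"VEGFA", "FLT1"}}

# Rank each biological process by the number of regulator target genes it contains.
all_targets = set().union(*regulator_targets.values())
process_rank = Counter({go: len(genes & all_targets)
                        for go, genes in go_annotation.items()})

for process, n_genes in process_rank.most_common():
    print(f"{process}: {n_genes} target genes")
```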
Some highly regulated processes, such as angiogenesis, were shared between low- and high-grade MDS. Moreover, our model indicated a few biological processes that are highly regulated in both disease subtypes but differ in their expression levels, for example "nuclear mRNA splicing, via spliceosome", "G1/S transition of mitotic cell cycle" and "protein import into the nucleus, docking". Such processes are potential keys to the functional differences between MDS subtypes.

Of particular interest was the process "negative regulation of transcription from RNA polymerase II promoters" (GO:0000122), which was the most regulated process in both MDS grades. This pathway prevents or reduces transcription of different RNAs, including miRNAs. In RA, the majority of the differentially expressed genes in this term were down-regulated (Figure 7), hence promoting transcription. By contrast, in RAEB2 the majority of differentially expressed genes were up-regulated, leading to reduced RNA production. These results are therefore in agreement with our earlier finding that some miRNAs were only detected, or had higher copy numbers, in RA compared to RAEB2.

Altogether, these results suggested that the differences in miRNA expression between RA and RAEB2, and potentially their downstream targets, might be the result of RNA polymerase II promoter regulation. In RA, this would indicate a potential feedback system in which expressed miRNAs and TFs down-regulate GO:0000122, which in turn could increase RNA transcription and hence allow miRNAs to accumulate. By contrast, in RAEB2 the selected miRNAs and TFs up-regulate GO:0000122, driving the cell to reduce RNA synthesis and consequently decreasing the overall miRNA amount.

Thus, the discussed feedback loops are a potential explanation for the high amount of miRNA seen in RA and the much lower amount in RAEB2, two striking observations from the RNA-seq analysis described above. Further studies to investigate the role of this pathway in MDS are warranted.

Conclusions

In this paper we presented the first systematic profiling of small RNAs in Myelodysplastic Syndromes using next generation sequencing on the current Illumina Genome Analyzer IIx platform. A custom data analysis pipeline that handles raw reads, sequence alignment, data storage and integrative read annotation was implemented. The analysis showed that the small RNAome in low-grade MDS (RA) was enriched for piRNAs, potentially protecting DNA from the accumulation of mutations, a mechanism not observed in high-grade MDS (RAEB2). By contrast, tRNAs were enriched in RAEB2, which might contribute to the characteristic reduction in apoptotic cell death at this disease stage. In both grades a number of differentially expressed miRNAs and miRNA* were detected and 48 previously unreported miRNA* were identified. In all analyzed cells, miRNA reads were often found for either the mature or the star sequence, indicating selective expression of miRNA and miRNA*. Subsequent functional analysis of target genes showed that both miRNA species (i.e. miRNA and miRNA*) regulate similar MDS stage-specific molecular functions and pathways, indicating that miRNA* also play important regulatory roles in the MDS transcriptome. Using integrative bioinformatics modeling, we identified miRNA species and TFs that act as important regulators of an MDS transcriptome that is consistently deregulated over a large MDS patient pool. Further ontology analysis identified the Gene Ontology process "negative regulation of transcription from RNA polymerase II promoters" as highly controlled in both MDS grades. Additionally, our findings suggested a potential feedback loop in which specific miRNAs and TFs regulate their own expression by either enhancing polymerase II promoter function, as seen in RA, or repressing it, as found in RAEB2. Further studies are warranted to experimentally substantiate our observations and to develop novel biomarkers for the diagnosis and treatment of MDS.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

XZ and CCC designed the study. SA performed the RNA-seq and JW the exon arrays. DB performed the data analysis, wrote the manuscript and contributed to the study design. XZ and TP supervised the data analysis. CCC and PW supervised the data generation. MB contributed to the data interpretation and manuscript writing. All authors read, assisted with editing, and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1755-8794/4/19/prepub
[ "Background", "Methods", "Patient samples", "High throughput small RNA sequencing and data analysis", "Exon array profiling and data analysis", "Integrated target genes for MDS", "Secondary structure and location of novel miRNA* sequences", "Prediction of miRNA-mRNA and miRNA*-mRNA pairs", "Prediction of transcription factor target genes", "Functional analysis for miRNA and miRNA* targets", "Data integration model and detection of important gene regulators", "Results and Discussion", "Defining the small RNAome of Myelodysplastic Syndromes by next generation sequencing", "Detailed characterization of expressed miRNA loci and identification of novel miRNA*", "Functional roles of miRNA and miRNA* in Myelodysplastic Syndromes", "Computational modeling of transcriptome regulation in Myelodysplastic Syndromes", "Key functions regulated by miRNAs and TFs in Myelodysplastic Syndromes", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history", "Supplementary Material" ]
[ "Myelodysplastic Syndromes (MDS) are a group of heterogeneous hematopoietic stem cell disorders, which often lead to acute myeloid leukemia (AML). This group of diseases is most common in the growing demographic of the late sixties-early seventies [1]. In the United States the estimated number of new cases per year is about 40,000-76,000 with an attached cost of about 30.000 USD per person and year.\nMDS is characterized by ineffective bone marrow hematopoiesis, leading to cytopenias [2], with a highly variable disease progression that ranges from a slow development over many years to a rapid progression to AML within a few months. Patients can be classified into risk groups, primarily based on bone marrow myeloblast counts [3,4]. These include refractory anemia (RA), describing an early disease stage (low-grade MDS) and the refractory anemias with excess of blasts (RAEB1, RAEB2), which represent the later stages of the disease (high-grade MDS). While the median survival times are relatively long in the low and intermediate-1 classes, 97 and 63 months respectively, they are considerably shorter in the later classes with 26 for the intermediate-2 and only 11 months in the high risk group [5]. Current treatment options are rare and show only limited success. They mainly include allogeneic stem cell transplantation, treatment with hypomethylating agents and Lenalidomide.\nThere is increasing evidence that dysregulation of a number of different molecular pathways is involved from the disease onset, however, clearly defined mechanisms remain elusive [6]. The accumulation of cellular death is a common trait for the early stage of MDS [7,8]. It is thought to counteract the proliferation of dysfunctional cells and is the key characteristic of ineffective hematopoiesis and marrow failure [9,10]. With the continued expansion of diseased cells, genetic damage accumulates and contributes to disease progression, which may result in the transformation to AML. The later stages of MDS have been implicated with angiogenesis and reduced apoptosis [11-15].\nRecent studies have suggested that small non-coding RNAs (sRNAs), in particular microRNAs (miRNAs), contribute to the pathogenesis and progression of MDS [16,17]. However, very limited information on sRNA expression has been reported for MDS to date. To overcome this bottleneck, we performed high-throughput next generation sequencing of small RNAs (RNA-seq) in primary marrow cells of low- and high-grade MDS patients, together with matched controls. The relatively new technology of RNA-seq [18] is the method of choice for sensitive global detection of different sRNAs across an unparalleled dynamic range, and we detected sRNAs with read counts from ten to one million reads. The data obtained here suggest important roles for Piwi-interacting RNAs (piRNA), transfer RNAs (tRNA) and miRNAs, including many known and novel microRNAs star (miRNA*). Further functional analysis of miRNA/miRNA* showed that these species regulate disease stage-specific molecular functions and pathways, in particular, those known to be deregulated at the gene expression level. In addition, integrative bioinformatics modeling of our experimental data and bioinformatics databases identified the disease stage-specific regulation of the polymerase II promoter by miRNAs and transcription factors (TFs). 
This suggested a feedback loop that might contribute to the attenuation of miRNA expression in high-grade MDS.", "[SUBTITLE] Patient samples [SUBSECTION] Samples were obtained from patients presenting at The Methodist Hospital. The use of marrow samples was approved by The Methodist Hospital Institutional Review Board. All research described conformed to the Helsinki Declaration.\nSamples were obtained from patients presenting at The Methodist Hospital. The use of marrow samples was approved by The Methodist Hospital Institutional Review Board. All research described conformed to the Helsinki Declaration.\n[SUBTITLE] High throughput small RNA sequencing and data analysis [SUBSECTION] RNA in the 18-30 bp range was isolated from a 15 percent urea-PAGE gel, and ligated to Solexa SRA5' and SRA 3' adapters, according to the standard protocol (available: http://www.illumina.com). Briefly, the SRA5' adapter was ligated to the 5' end of the selected RNAs. The ligation products were gel purified and SRA3' adapters ligated to their 3' ends. The resulting products were also gel purified, reverse transcribed and amplified with primers containing sequences complementary to the SRA5' and SRA3' adapters, after which they were gel purified again. The size and quality of the resulting libraries were verified using an Agilent DNA1000 Bioanalyzer chip (Agilent) and sequenced on a Solexa GAIIx, using PhiX as a loading control and analyzed with the standard Illumina Pipeline version 1.4. This produced approximately 13 million reads per lane.\nIn our analysis we used the s_x_sequence.txt files, containing 64 bit quality-scored output per-lane. The first 20bases of these reads were parsed in Mysql database tables, and further analyses utilized the MySQL database engine.\nAt this stage, the database was employed to identify and count distinct reads and to export this information into fasta formatted output files (Additional files 1, 2, 3). The results were used to map each small RNA to its matching position in the human genome. A variety of algorithms exists to perform this task including ELAND, which is provided with the Solexa GAIIx. However, a particular fast and memory efficient algorithm that outperforms other approaches is Bowtie [19]. This algorithm allows filtering alignments based on mismatches and can omit reads matched to multiple positions on the reference. The human genome version GRCh37 was downloaded from the NCBI website and converted into a bowtie index file. All distinct reads were aligned to this reference sequence. We allowed for at most two mismatches and only considered reads that aligned to at most 25 positions in the genome (parameter setting v = 2 and m = 25). With this parameter set, on average, 70 percent of the short sequence reads from all three lanes had positive matches to genome coordinates, about 21 percent did not match any genome position and about 10 percent had more than 25 matches.\nA number of different databases were used as annotation basis for the aligned next generation sequencing reads. Information on sequences and genome positions of miRNAs were obtained from miRBase version 14. However, since our sample preparation and sequencing protocol is not specific for miRNAs, we downloaded information on other small RNAs from the UCSC genome browser. This contains genome positions for different small RNAs, including but not limited to tRNAs, rRNAs, scRNAs, suRNAs and srpRNA in the repeatmasker track, as well as positions of known exons. 
The sequences of known human piRNAs were searched for and downloaded from the NCBI (http://www.ncbi.nlm.nih.gov).

The implemented annotation algorithm first checked whether a read fell within a known miRNA locus (compare Figure 1). Unmatched reads were further aligned to primary miRNA sequences and perfect matches registered. If no match was identified, known loci for other small RNAs were searched in the following order: rRNA, scRNA, sRNA, srpRNA, simple repeats and other RNAs. If a read was still uncharacterized, it was aligned against all piRNA sequences and matches returned for perfect alignments. Finally, if none of the above criteria was satisfied, the positions of all human exons were checked; if no match was identified, reads were classified as unknown. The number of sequenced reads annotated to a known RNA locus was used to represent its expression.

NGS data analysis pipeline and comparison of sRNA annotations in MDS. NGS data analysis pipeline used for this study. In A) we show the annotation of a sequence read. It was detected about 18,000 times in RAEB2 and aligned at nine different positions, spread over six chromosomes, on the human genome (green). A single alignment position is shown (red) with the annotation hierarchy used (blue). The purple call-out box details the matched locus for miRNA let-7a-1, with its full primary sequence (top), its mature sequence (middle) and the aligned short read (bottom). The brown call-out box shows all nine annotations, including a number of miRNAs from the hsa-let-7 family as well as a piRNA. In B) we compare the total RNA content measured from our high-throughput sequencing and annotation steps, with results for RAEB2 on the left, RA in the middle and control on the right.

The read counts for miRNA and miRNA* were compared between RA, RAEB2 and controls, and significant differential expression was defined following the example in [20]. We required that the ratio R of read counts in two different cells satisfied R1: R > 1.5 or R2: R < 0.67, and that the read count difference D satisfied D1: D > 100 or D2: D < -100. Consequently, over-expression was defined by R1 and D1 and under-expression by R2 and D2.
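A compact way to express the annotation hierarchy and the over-/under-expression rules above is sketched below. This is a simplified illustration with hypothetical helper names (annotate_read, differential_expression) and toy inputs, not the MySQL-based implementation used in the study.

```python
# Annotation priority applied when a read overlaps several known loci,
# mirroring the order described above.
ANNOTATION_ORDER = ["miRNA", "pri-miRNA", "rRNA", "scRNA", "sRNA",
                    "srpRNA", "simple_repeat", "other_RNA", "piRNA", "exon"]

def annotate_read(hits):
    """hits: set of RNA classes whose loci overlap the aligned read."""
    for rna_class in ANNOTATION_ORDER:
        if rna_class in hits:
            return rna_class
    return "unknown"

def differential_expression(count_a, count_b):
    """Apply the ratio/difference thresholds (R > 1.5 or < 0.67, |D| > 100)."""
    ratio = count_a / max(count_b, 1)   # avoid division by zero for unseen loci
    diff = count_a - count_b
    if ratio > 1.5 and diff > 100:
        return "over-expressed"
    if ratio < 0.67 and diff < -100:
        return "under-expressed"
    return "not significant"

print(annotate_read({"exon", "piRNA"}))   # -> 'piRNA' (checked before exons)
print(differential_expression(264, 87))   # -> 'over-expressed'
```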
[SUBTITLE] Exon array profiling and data analysis [SUBSECTION] A total of 50 ng RNA was extracted from each analyzed sample. We used primers provided by NuGEN and followed the manufacturer's protocol for the first strand cDNA synthesis. For RNA-primer annealing, the mixtures were incubated for 2 minutes at 65°C and cooled to 4°C. After cooling, the cDNA synthesis cycle followed: 4°C for 1 minute, 25°C for 10 minutes, 42°C for 10 minutes, 70°C for 15 minutes, and again 4°C for 1 minute. The second-strand reaction followed immediately. After mixing the first strand solution with the second strand cDNA synthesis reaction solution, the entire mixture was incubated in the thermocycler as follows: 4°C for 1 minute, 25°C for 10 minutes, 50°C for 30 minutes, 70°C for 5 minutes, 4°C.
Then, using the Agencourt® RNAClean® beads, the entire cDNA was purified according to the manufacturer's protocol. For the sense transcript cDNA generation, WT-Ovation™ Exon Module (NuGEN) was used. Based on the instructions in the manufacturer's manual, 3 μg of each cDNA was mixed with the provided primers and incubated for 5 minutes at 95°C and cooled to 4°C. After mixing with enzyme solution, the entire reaction mixture was incubated as follows: 1 minute at 4°C, 10 minutes at 30°C, 60 minutes at 42°C, 10 minutes at 75°C, and cooled to 4°C. Then the ST-cDNA was purified with the QIAGEN DNA clearing kit. After the purification, fragmentation reaction was carried out using FL-Ovation™ cDNA Biotin Module V.2 according to the recommended methods. Briefly, 5 μg of cDNA was mixed with the provided enzyme mix and incubated 30 minutes at 37°C and 2 minutes at 95°C. Then the reaction was cooled to 4°C. Next, the reaction was subjected to the labeling reaction as suggested by the manufacturer. The fragmented cDNA was mixed with labeling reaction mix and incubated at 37°C for 60 minutes and 70°C for 10 minutes. Then, the reaction was cooled to 4°C and used immediately for array hybridization. For the array hybridization, instead of recommended by Affimatrix, we used the standard array protocol provided by the NuGEN exon module. For hybridization, Chips were incubated in Gene Chip Hybridization Oven 640 and underwent the washing and staining processes according to the FS450_0001 fluidic protocol. Then, the array was scanned using Gene Chip Scanner 3000 (GCS3000).\nThe exon arrays for control, RA and RAEB2 were loaded into the Partek Genomics Suite 6.5. The Robust Multi-array Analysis (RMA) algorithm was used for initial intensity analysis [21] (Additional file 4). We generated gene expression estimates by averaging the intensities of all exons in a gene. Differential expression was defined as discussed for the NGS analysis above.\n[SUBTITLE] Integrated target genes for MDS [SUBSECTION] In an earlier study Pellegatii and colleagues [22] used an Affymetrics Human Genome U133 Plus 2.0 GeneChip to assay consistently differentially expressed genes in hematopoietic stem cells (HSC) of 183 patients compared to 17 HSC of normal controls. This identified 534 probesets for RA and 4670 from RAEB2 patients. We matched these probesets to gene symbols and identified their corresponding transcript IDs on the Exon GeneChip. For the RA gene list, 69 probesets did not have annotated gene symbols, 103 had no corresponding transcripts and for 431 matching IDs were found. For the RAEB2 gene list, 807 probesets had no annotation, 1009 had no matching transcripts and for 3661 matching IDs were found. Altogether, this created a target gene space of 4092 probesets that were further analyzed by our bioinformatics modeling approach.\nIn an earlier study Pellegatii and colleagues [22] used an Affymetrics Human Genome U133 Plus 2.0 GeneChip to assay consistently differentially expressed genes in hematopoietic stem cells (HSC) of 183 patients compared to 17 HSC of normal controls. This identified 534 probesets for RA and 4670 from RAEB2 patients. We matched these probesets to gene symbols and identified their corresponding transcript IDs on the Exon GeneChip. For the RA gene list, 69 probesets did not have annotated gene symbols, 103 had no corresponding transcripts and for 431 matching IDs were found. For the RAEB2 gene list, 807 probesets had no annotation, 1009 had no matching transcripts and for 3661 matching IDs were found. 
Altogether, this created a target gene space of 4092 probesets that were further analyzed by our bioinformatics modeling approach.\n[SUBTITLE] Secondary structure and location of novel miRNA* sequences [SUBSECTION] The secondary structures for all miRNAs with stem-loop sequences deposited in miRBase were calculated using the Matlab Bioinformatics toolbox (version R2009a). The locations of mature miRNAs were identified as perfect alignments between the stem-loop and mature miRNA sequence. We calculated the locations of novel miRNA* sequences based on the genome coordinates of aligned small RNA reads. We note that due to mismatches in the miRBase alignments, e.g. between the miRNA stem-loop and the human genome, some derivations between the small RNA sequencing reads and the deposited stem-loop sequences may exist. All information was visualized using the tool VARNA [23].\nThe secondary structures for all miRNAs with stem-loop sequences deposited in miRBase were calculated using the Matlab Bioinformatics toolbox (version R2009a). The locations of mature miRNAs were identified as perfect alignments between the stem-loop and mature miRNA sequence. We calculated the locations of novel miRNA* sequences based on the genome coordinates of aligned small RNA reads. We note that due to mismatches in the miRBase alignments, e.g. between the miRNA stem-loop and the human genome, some derivations between the small RNA sequencing reads and the deposited stem-loop sequences may exist. All information was visualized using the tool VARNA [23].\n[SUBTITLE] Prediction of miRNA-mRNA and miRNA*-mRNA pairs [SUBSECTION] Information on miRNA target genes was obtained from two popular and publicly available miRNA target prediction databases. We retrieved flat files for all predicted human miRNA targets available in miRanda [24] and targets conserved over different mammalian species from targetscan [25]. In order to reduce the number of false positive predictions we considered only targets predicted by both algorithms, which resulted in about 110.000 miRNA-mRNA pairs.\nIn theory the majority of miRNA* are degraded in the cell. Therefore, we restricted our analysis to sequences with minimum read counts of 100. In each case, we define a 7-mer nucleotide sequences based on the small RNA read with the highest copy number throughout the control, low and high risk MDS samples. The nucleotides at positions two to eight were extracted and transformed into the RNA alphabet. The seed regions were checked for overlap with other known miRNA and miRNA* sequences and the targetscanS algorithm was used to predict miRNA*-mRNA pairs, if the seed sequence was previously unreported. In general, this algorithm performs target predictions based on perfect and conserved matches between the genes untranslated region (UTR) and the first six nucleotides of the seed sequence. It further requires that the seed region is followed either by the nucleotide A (known as a t1A anchor) or that the position eight of the alignment contains a perfect Watson-Crick pairing. On contrast, if the seed sequences matched with a previously reported miRNA or miRNA*, we used the target prediction strategy as reported above.\nInformation on miRNA target genes was obtained from two popular and publicly available miRNA target prediction databases. We retrieved flat files for all predicted human miRNA targets available in miRanda [24] and targets conserved over different mammalian species from targetscan [25]. 
In order to reduce the number of false positive predictions we considered only targets predicted by both algorithms, which resulted in about 110.000 miRNA-mRNA pairs.\nIn theory the majority of miRNA* are degraded in the cell. Therefore, we restricted our analysis to sequences with minimum read counts of 100. In each case, we define a 7-mer nucleotide sequences based on the small RNA read with the highest copy number throughout the control, low and high risk MDS samples. The nucleotides at positions two to eight were extracted and transformed into the RNA alphabet. The seed regions were checked for overlap with other known miRNA and miRNA* sequences and the targetscanS algorithm was used to predict miRNA*-mRNA pairs, if the seed sequence was previously unreported. In general, this algorithm performs target predictions based on perfect and conserved matches between the genes untranslated region (UTR) and the first six nucleotides of the seed sequence. It further requires that the seed region is followed either by the nucleotide A (known as a t1A anchor) or that the position eight of the alignment contains a perfect Watson-Crick pairing. On contrast, if the seed sequences matched with a previously reported miRNA or miRNA*, we used the target prediction strategy as reported above.\n[SUBTITLE] Prediction of transcription factor target genes [SUBSECTION] The flat files FACTOR and GENE of the commercially available database TRANSFAC v2008_2 [26] were downloaded and parsed into a MySQL database. The FACTOR and GENE flat files contain information on transcription factor proteins and genes regulated by transcription factors, respectively. A total of 2362 regulating factors for the human species (Homo Sapiens) were extracted and 70 entries, that did not describe proteins, but other regulatory factors were omitted. A large fraction (about 77 percent) of the remaining 2292 transcription factor proteins were mapped to Uniprot [27], either by external database ID's, or exact matches between protein names. With these accessions the protein coding gene IDs, as well as other information was downloaded automatically via a MATLAB based data retrieval algorithm implemented for this study. The transcript and probeset annotation files for the Affymetrix GeneChip Human Exon 1.0 ST Array were downloaded from the manufacture's website http://www.affymetrix.com and parsed into MySQL tables. Transcript IDs for 98 percent of the human transcription factor coding genes were extracted based on direct matches between gene names.\nGenes that can potentially be up regulated when the transcription factor protein binds to a specific site in its promoter region are called transcription factor target genes. We extracted all target genes for human transcription factor proteins by joining a number of database tables. This revealed 3296 gene targets for the 2292 transcription factor proteins. We used direct matches between the target gene names, as well as additional entries, to identify corresponding transcripts on the Affymetrix GeneChip. This resulted in matches for 83 percent of the target genes.\nThe flat files FACTOR and GENE of the commercially available database TRANSFAC v2008_2 [26] were downloaded and parsed into a MySQL database. The FACTOR and GENE flat files contain information on transcription factor proteins and genes regulated by transcription factors, respectively. 
A total of 2362 regulating factors for the human species (Homo Sapiens) were extracted and 70 entries, that did not describe proteins, but other regulatory factors were omitted. A large fraction (about 77 percent) of the remaining 2292 transcription factor proteins were mapped to Uniprot [27], either by external database ID's, or exact matches between protein names. With these accessions the protein coding gene IDs, as well as other information was downloaded automatically via a MATLAB based data retrieval algorithm implemented for this study. The transcript and probeset annotation files for the Affymetrix GeneChip Human Exon 1.0 ST Array were downloaded from the manufacture's website http://www.affymetrix.com and parsed into MySQL tables. Transcript IDs for 98 percent of the human transcription factor coding genes were extracted based on direct matches between gene names.\nGenes that can potentially be up regulated when the transcription factor protein binds to a specific site in its promoter region are called transcription factor target genes. We extracted all target genes for human transcription factor proteins by joining a number of database tables. This revealed 3296 gene targets for the 2292 transcription factor proteins. We used direct matches between the target gene names, as well as additional entries, to identify corresponding transcripts on the Affymetrix GeneChip. This resulted in matches for 83 percent of the target genes.\n[SUBTITLE] Functional analysis for miRNA and miRNA* targets [SUBSECTION] The functional analysis of miRNA and miRNA* were performed by means of their predicted target genes. However, since the pools of potential target genes are large and suffer from high false positive rates, we selected only a limited set of genes for functional analysis. Therefore, we defined a threshold T describing the number of different miRNA or miRNA* that regulate a gene. Similar to many biological phenomena such functions are described by power laws (see Figure 2) and we aimed to select T in the exponential part of the function. This ensured that the selected genes were targeted by a large number of different miRNAs. We further tried to select at most 100 genes for the analysis. In each case, the selected target genes were imported into Ingenuity Pathway Analysis (IPA) version 8.5 and analyzed using the IPA Core Analysis algorithm.\nThreshold for miRNA/miRNA* target gene selection. This figure describes the number of genes (x-axis) that are targeted by different miRNA*s (y-axis), for the example of RA cells. In this particular case, we selected the threshold T to be 13 miRNAs and 93 different genes were selected for functional analysis.\nThe functional analysis of miRNA and miRNA* were performed by means of their predicted target genes. However, since the pools of potential target genes are large and suffer from high false positive rates, we selected only a limited set of genes for functional analysis. Therefore, we defined a threshold T describing the number of different miRNA or miRNA* that regulate a gene. Similar to many biological phenomena such functions are described by power laws (see Figure 2) and we aimed to select T in the exponential part of the function. This ensured that the selected genes were targeted by a large number of different miRNAs. We further tried to select at most 100 genes for the analysis. 
In each case, the selected target genes were imported into Ingenuity Pathway Analysis (IPA) version 8.5 and analyzed using the IPA Core Analysis algorithm.

Threshold for miRNA/miRNA* target gene selection. This figure describes the number of genes (x-axis) that are targeted by different miRNA* (y-axis), for the example of RA cells. In this particular case, we selected the threshold T to be 13 miRNAs, and 93 different genes were selected for functional analysis.

[SUBTITLE] Data integration model and detection of important gene regulators [SUBSECTION] The proposed data integration model assumed that the mRNA amount present in a cell at any given time is linearly dependent on the concentrations of transcriptionally acting TFs and post-transcriptionally acting miRNAs. Therefore, gene expression was modeled as a linear combination of these factors plus random noise, which can be expressed with a standard regression model [28]:

(1)  y_i = \beta_0 + \sum_{p=1}^{N} \beta_p x_{pi} + \varepsilon

where y_i is the expression of gene i, i = 1, ..., G, with G being the number of genes under study; (\beta_0, ..., \beta_N) are the regression coefficients to be estimated by our model; N is the total number of TFs and miRNAs observed in the cells under study; and \varepsilon is the noise term, assumed to be an independent Gaussian random variable with expectation zero and variance \sigma^2. The factor x_{pi} was defined as

(2)  x_{pi} = \alpha_{pi} \gamma_p \delta_p

where \alpha_{pi} is a factor associating gene i with regulator p, \gamma_p is a regulation characteristic and \delta_p is the expression level of regulator p. The association \alpha_{pi} was determined by miRNA and TF target prediction: \alpha_{pi} was set to one if gene i was a predicted target of regulator p, and zero otherwise. Transcription factors generally promote transcription and hence higher target gene levels; therefore, \gamma_p was set to one if p was a TF. In contrast, miRNAs are known to post-transcriptionally degrade mRNAs, hence \gamma_p was set to minus one if p was a miRNA. The expression levels \delta_p were determined experimentally as discussed earlier. Note that all expression values were normalized to controls and standardized to mean zero and standard deviation one.

The above regression problem was solved using the recently proposed cyclical coordinate descent algorithm, which is based on an elastic net penalty [29]. This algorithm is particularly fast, and the elastic net penalty is well suited to large, sparse problems (compare Additional file 5 Figure S1) with correlated inputs. In addition, it has the beneficial property of shrinking a number of the predictor coefficients \beta_p to exactly zero, thereby integrating an effective variable selection step that would otherwise be computationally expensive [30]. Note that the penalty is weighted and that these weights were determined by cross-validation.
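To make the construction of Equation (2) concrete, the following numpy sketch assembles the design matrix X from a 0/1 target-prediction association, the regulator signs (+1 for TFs, -1 for miRNAs) and standardized regulator expression levels. All inputs are small placeholders invented for illustration; the real matrices would come from the target-prediction databases and expression measurements described above.

```python
import numpy as np

# Placeholder inputs for 4 genes and 3 regulators (2 miRNAs, 1 TF).
alpha = np.array([[1, 0, 1],        # alpha[i, p] = 1 if gene i is a predicted target of regulator p
                  [0, 1, 1],
                  [1, 1, 0],
                  [0, 0, 1]])
gamma = np.array([-1, -1, +1])      # regulation characteristic: -1 for miRNAs, +1 for TFs
delta = np.array([0.8, -1.2, 0.4])  # standardized regulator expression levels

# Equation (2): x_pi = alpha_pi * gamma_p * delta_p, arranged as genes x regulators.
X = alpha * (gamma * delta)

# Equation (1) then reads y = beta_0 + X @ beta + noise for the vector of gene expressions,
# which is the regression solved with the elastic net in the text.
print(X)
```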
Therefore, gene expression was modeled as a linear combination of these factors plus random noise, which can be expressed following a standard regression model [28]\n\n\n(1)\n\n\n\ny\ni\n\n=\n\nβ\n0\n\n+\n\n\n∑\n\np\n=\n1\n\nN\n\n\n\nβ\np\n\n\nx\np\ni\n\n+\nε\n\n\n\n\n\n\nwhere yi is the expression of gene i, i = 1,..., G with G being the number of genes under study, (β0,..., βN) are the regression coefficients to be estimated by our model, N sums up the number of TFs and miRNAs observed in the cells under study, ε is the noise term which is assumed an independent Gaussian random variable with expectation zero and variance σ2, xpi was defined as\n\n\n(2)\n\n\n\nx\np\ni\n\n=\n\nα\np\ni\n\n\nγ\np\n\n\nδ\np\n\n\n\n\n\nwhere xpi is a factor associating gene i with regulator p, γp is a regulation characteristic and δp the expression level of regulator p. The association xpi was determined by miRNA and TF target prediction and xpi was set to one if gene i was a target of regulator p, otherwise xpi was set to zero. Transcription factors generally contribute to transcription and hence higher target genes levels, therefore, γp was set to one if p was a TFs. On contrast, miRNAs are known to post-transcriptionally degrade mRNAs, hence γp was set to minus one if p was a miRNA. The expression levels δp were determined by experiments as discussed earlier. Note that all expression values were normalized to controls and standardized to mean zero and standard deviation one.\nThe above regression problem was solved using the recently proposed cyclical coordinate descent algorithm, which is based on an elastic net penalty [29]. This algorithm is particularly fast and the elastic net penalty is most appropriate to handle large and sparse problems (compare Additional file 5 Figure S1) of correlated inputs. In addition, it has the beneficial property of shrinking a number of predictor values βp to exactly zero, hence integrating an effective variable selection approach, otherwise computationally expensive [30]. Note, that the penalty is weighted and that these weights were determined by cross validation.", "Samples were obtained from patients presenting at The Methodist Hospital. The use of marrow samples was approved by The Methodist Hospital Institutional Review Board. All research described conformed to the Helsinki Declaration.", "RNA in the 18-30 bp range was isolated from a 15 percent urea-PAGE gel, and ligated to Solexa SRA5' and SRA 3' adapters, according to the standard protocol (available: http://www.illumina.com). Briefly, the SRA5' adapter was ligated to the 5' end of the selected RNAs. The ligation products were gel purified and SRA3' adapters ligated to their 3' ends. The resulting products were also gel purified, reverse transcribed and amplified with primers containing sequences complementary to the SRA5' and SRA3' adapters, after which they were gel purified again. The size and quality of the resulting libraries were verified using an Agilent DNA1000 Bioanalyzer chip (Agilent) and sequenced on a Solexa GAIIx, using PhiX as a loading control and analyzed with the standard Illumina Pipeline version 1.4. This produced approximately 13 million reads per lane.\nIn our analysis we used the s_x_sequence.txt files, containing 64 bit quality-scored output per-lane. 
The first 20bases of these reads were parsed in Mysql database tables, and further analyses utilized the MySQL database engine.\nAt this stage, the database was employed to identify and count distinct reads and to export this information into fasta formatted output files (Additional files 1, 2, 3). The results were used to map each small RNA to its matching position in the human genome. A variety of algorithms exists to perform this task including ELAND, which is provided with the Solexa GAIIx. However, a particular fast and memory efficient algorithm that outperforms other approaches is Bowtie [19]. This algorithm allows filtering alignments based on mismatches and can omit reads matched to multiple positions on the reference. The human genome version GRCh37 was downloaded from the NCBI website and converted into a bowtie index file. All distinct reads were aligned to this reference sequence. We allowed for at most two mismatches and only considered reads that aligned to at most 25 positions in the genome (parameter setting v = 2 and m = 25). With this parameter set, on average, 70 percent of the short sequence reads from all three lanes had positive matches to genome coordinates, about 21 percent did not match any genome position and about 10 percent had more than 25 matches.\nA number of different databases were used as annotation basis for the aligned next generation sequencing reads. Information on sequences and genome positions of miRNAs were obtained from miRBase version 14. However, since our sample preparation and sequencing protocol is not specific for miRNAs, we downloaded information on other small RNAs from the UCSC genome browser. This contains genome positions for different small RNAs, including but not limited to tRNAs, rRNAs, scRNAs, suRNAs and srpRNA in the repeatmasker track, as well as positions of known exons. The sequences of known human piRNAs were searched and downloaded from the NCBI http://www.ncbi.nlm.nih.gov.\nThe implemented annotation algorithm first checked if a read falls into a known miRNA loci (compare Figure 1). Unmatched reads were further aligned to primary miRNA sequences and perfect matches registered. If no match was identified, known loci for other small RNAs were searched in the following order rRNA, scRNA, sRNA, srpRNA, simple repeat and other RNAs. If a read was still uncharacterized, it was aligned against all piRNA sequences and matches returned for perfect alignments. Finally, if none of the above criteria was satisfied, positions for all human exons were first checked, if no match was identified reads were classified as unknown. The number of sequenced reads that annotated with a known RNA locus were used to represent its expression.\nNGS data analysis pipeline and comparison of sRNA annotations in MDS. NGS data analysis pipeline used for this study. In A) we show the annotation of a sequence read. It was detected about 18000 times in RAEB2 and aligned at nine different positions, spread over six chromosomes, on the human genome (green). A single alignment position is shown (red) with the used annotation hierarchy (blue). The purple callbox, details the matched loci for miRNA let-7a-1, its full primary sequence (top), its mature sequence (middle) and the aligned short read (bottom). The brown callbox shows all nine annotations, including a number of miRNAs from the has-let-7 family as well as a piRNA. 
In B) we compare the total RNA content measured by our high-throughput sequencing and annotation steps, with results for RAEB2 on the left, RA in the middle and control on the right.

The read counts for miRNA and miRNA* were compared between RA, RAEB2 and controls, and significant differential expression was defined following the example in [20]. We required that the ratio R of read counts between two samples satisfied R1: R > 1.5 or R2: R < 0.67, and that the read count difference D satisfied D1: D > 100 or D2: D < -100. Consequently, over-expression was defined by R1 together with D1, and under-expression by R2 together with D2.

A total of 50 ng RNA was extracted from each analyzed sample. We used primers provided by NuGEN and followed the manufacturer's protocol for first strand cDNA synthesis. For RNA-primer annealing, the mixtures were incubated for 2 minutes at 65°C and cooled to 4°C. After cooling, the cDNA synthesis cycle followed: 4°C for 1 minute, 25°C for 10 minutes, 42°C for 10 minutes, 70°C for 15 minutes, and again 4°C for 1 minute. The second strand reaction followed immediately. After mixing the first strand solution with the second strand cDNA synthesis reaction solution, the entire mixture was incubated in the thermocycler as follows: 4°C for 1 minute, 25°C for 10 minutes, 50°C for 30 minutes, 70°C for 5 minutes, then 4°C. The entire cDNA was then purified using Agencourt RNAClean beads according to the manufacturer's protocol. For sense transcript cDNA generation, the WT-Ovation Exon Module (NuGEN) was used. Following the manufacturer's manual, 3 μg of each cDNA was mixed with the provided primers, incubated for 5 minutes at 95°C and cooled to 4°C. After mixing with the enzyme solution, the entire reaction mixture was incubated as follows: 1 minute at 4°C, 10 minutes at 30°C, 60 minutes at 42°C, 10 minutes at 75°C, then cooled to 4°C. The ST-cDNA was then purified with the QIAGEN DNA clearing kit. After purification, the fragmentation reaction was carried out using the FL-Ovation cDNA Biotin Module V.2 according to the recommended methods. Briefly, 5 μg of cDNA was mixed with the provided enzyme mix, incubated for 30 minutes at 37°C and 2 minutes at 95°C, and then cooled to 4°C. Next, the reaction was subjected to the labeling reaction as suggested by the manufacturer. The fragmented cDNA was mixed with the labeling reaction mix and incubated at 37°C for 60 minutes and 70°C for 10 minutes. The reaction was then cooled to 4°C and used immediately for array hybridization. For array hybridization, instead of the protocol recommended by Affymetrix, we used the standard array protocol provided with the NuGEN exon module. For hybridization, chips were incubated in a GeneChip Hybridization Oven 640 and underwent the washing and staining processes according to the FS450_0001 fluidics protocol. The array was then scanned using a GeneChip Scanner 3000 (GCS3000).

The exon arrays for control, RA and RAEB2 were loaded into Partek Genomics Suite 6.5. The Robust Multi-array Analysis (RMA) algorithm was used for initial intensity analysis [21] (Additional file 4). We generated gene expression estimates by averaging the intensities of all exons in a gene. Differential expression was defined as discussed for the NGS analysis above.

In an earlier study, Pellagatti and colleagues [22] used the Affymetrix Human Genome U133 Plus 2.0 GeneChip to assay consistently differentially expressed genes in hematopoietic stem cells (HSC) of 183 patients compared to 17 HSC of normal controls.
This identified 534 probesets for RA and 4670 for RAEB2 patients. We matched these probesets to gene symbols and identified their corresponding transcript IDs on the Exon GeneChip. For the RA gene list, 69 probesets did not have annotated gene symbols, 103 had no corresponding transcripts, and matching IDs were found for 431. For the RAEB2 gene list, 807 probesets had no annotation, 1009 had no matching transcripts, and matching IDs were found for 3661. Altogether, this created a target gene space of 4092 probesets that were further analyzed by our bioinformatics modeling approach.

The secondary structures of all miRNAs with stem-loop sequences deposited in miRBase were calculated using the MATLAB Bioinformatics Toolbox (version R2009a). The locations of mature miRNAs were identified as perfect alignments between the stem-loop and the mature miRNA sequence. We calculated the locations of novel miRNA* sequences based on the genome coordinates of aligned small RNA reads. We note that, owing to mismatches in the miRBase alignments, e.g. between the miRNA stem-loop and the human genome, some deviations between the small RNA sequencing reads and the deposited stem-loop sequences may exist. All information was visualized using the tool VARNA [23].

Information on miRNA target genes was obtained from two popular and publicly available miRNA target prediction databases. We retrieved flat files for all predicted human miRNA targets available in miRanda [24] and targets conserved across different mammalian species from TargetScan [25]. In order to reduce the number of false positive predictions we considered only targets predicted by both algorithms, which resulted in about 110,000 miRNA-mRNA pairs.

In theory, the majority of miRNA* are degraded in the cell. Therefore, we restricted our analysis to sequences with minimum read counts of 100. In each case, we defined a 7-mer seed sequence based on the small RNA read with the highest copy number across the control, low-risk and high-risk MDS samples. The nucleotides at positions two to eight were extracted and transformed into the RNA alphabet. The seed regions were checked for overlap with other known miRNA and miRNA* sequences, and the TargetScanS algorithm was used to predict miRNA*-mRNA pairs if the seed sequence was previously unreported. In general, this algorithm performs target predictions based on perfect and conserved matches between the gene's untranslated region (UTR) and the first six nucleotides of the seed sequence. It further requires that the seed region is followed either by the nucleotide A (known as a t1A anchor) or that position eight of the alignment contains a perfect Watson-Crick pairing. In contrast, if the seed sequence matched a previously reported miRNA or miRNA*, we used the target prediction strategy described above.

The flat files FACTOR and GENE of the commercially available database TRANSFAC v2008_2 [26] were downloaded and parsed into a MySQL database. The FACTOR and GENE flat files contain information on transcription factor proteins and on genes regulated by transcription factors, respectively. A total of 2362 regulating factors for the human species (Homo sapiens) were extracted, and 70 entries that did not describe proteins but other regulatory factors were omitted. A large fraction (about 77 percent) of the remaining 2292 transcription factor proteins were mapped to UniProt [27], either by external database IDs or by exact matches between protein names.
With these accessions, the protein-coding gene IDs, as well as other information, were downloaded automatically via a MATLAB-based data retrieval algorithm implemented for this study. The transcript and probeset annotation files for the Affymetrix GeneChip Human Exon 1.0 ST Array were downloaded from the manufacturer's website (http://www.affymetrix.com) and parsed into MySQL tables. Transcript IDs for 98 percent of the human transcription factor coding genes were extracted based on direct matches between gene names.

Genes that can potentially be up-regulated when the transcription factor protein binds to a specific site in their promoter region are called transcription factor target genes. We extracted all target genes for human transcription factor proteins by joining a number of database tables. This revealed 3296 gene targets for the 2292 transcription factor proteins. We used direct matches between the target gene names, as well as additional entries, to identify corresponding transcripts on the Affymetrix GeneChip. This resulted in matches for 83 percent of the target genes.

The functional analysis of miRNA and miRNA* was performed by means of their predicted target genes. However, since the pools of potential target genes are large and suffer from high false positive rates, we selected only a limited set of genes for functional analysis. Therefore, we defined a threshold T describing the number of different miRNAs or miRNA* that regulate a gene. As with many biological phenomena, this relationship is described by a power law (see Figure 2), and we aimed to select T within the exponential part of the function. This ensured that the selected genes were targeted by a large number of different miRNAs. We further aimed to select at most 100 genes for the analysis. In each case, the selected target genes were imported into Ingenuity Pathway Analysis (IPA) version 8.5 and analyzed using the IPA Core Analysis algorithm.

Threshold for miRNA/miRNA* target gene selection. This figure describes the number of genes (x-axis) that are targeted by different miRNA* (y-axis), for the example of RA cells. In this particular case, we selected the threshold T to be 13 miRNAs, and 93 different genes were selected for functional analysis.
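To make this threshold selection concrete, the following minimal Python sketch (not the authors' MATLAB/MySQL implementation; the function name, toy input and cut-off are illustrative assumptions) counts how many distinct miRNA/miRNA* target each gene and picks the smallest threshold T that leaves at most the desired number of genes.

```python
from collections import Counter

def select_target_genes(mirna_targets, max_genes=100):
    """Pick genes regulated by many distinct miRNA/miRNA*.

    mirna_targets: dict mapping a miRNA/miRNA* name to a set of target gene symbols
    (e.g. the intersection of miRanda and TargetScan predictions).
    Returns (T, selected_genes): the smallest threshold T such that the number of
    genes targeted by >= T distinct regulators is at most max_genes.
    """
    # Count, for every gene, how many distinct regulators hit it.
    regulators_per_gene = Counter()
    for regulator, genes in mirna_targets.items():
        for gene in set(genes):
            regulators_per_gene[gene] += 1

    # Increase T until at most max_genes survive (the tail of the power law).
    t = 1
    while True:
        selected = [g for g, n in regulators_per_gene.items() if n >= t]
        if len(selected) <= max_genes:
            return t, selected
        t += 1

# Hypothetical toy input: three regulators and their predicted targets.
toy = {
    "hsa-miR-130a": {"TP53", "E2F1", "KIT"},
    "hsa-mir-125b-2": {"TP53", "E2F1"},
    "hsa-mir-1977**": {"TP53"},
}
T, genes = select_target_genes(toy, max_genes=2)
print(T, sorted(genes))   # e.g. 2 ['E2F1', 'TP53']
```

In practice the target dictionary would be built from the database joins described above, and the resulting gene list exported for the IPA Core Analysis.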
Defining the small RNAome of Myelodysplastic Syndromes by next generation sequencing

We performed high-throughput next generation sequencing of small RNAs (RNA-seq) on primary cells from control, low-grade (RA) and high-grade (RAEB2) MDS patients on an Illumina Genome Analyzer IIx (see Methods). This resulted in about thirteen million short sequence reads (length 38 bp) per sample. We implemented an annotation algorithm that integrates knowledge from diverse biological databases to characterize each RNA-seq read (Figure 1). In brief, all reads were trimmed (length 22 bp) and aligned against the current version of the human genome (GRCh37) using the publicly available software Bowtie [19]. We allowed at most two mismatches between the reference and read sequences. Since the analyzed reads were relatively short and we allowed mismatches, a large number aligned to multiple genome positions (green part of Figure 1). Consistent with previous analyses, we decided to discard reads having more than 25 alignment positions [31]. For annotation, we matched small sequencing reads to a set of small RNAs that included miRNAs from miRBase [32], a number of other small RNAs, including tRNAs and rRNAs, from the RepeatMasker track of the UCSC genome browser [33], as well as piRNAs from the NCBI database (http://www.ncbi.nlm.nih.gov) (blue call-out box in Figure 1). This mapping showed that the composition of the small RNAome differed dramatically between the analyzed samples, suggesting a shift in the regulation of small RNA targets during the progression of this disease.

First, the relative amounts of tRNA to rRNA were significantly larger in RAEB2 compared to RA and control (36 vs. 1.6 and 1). Since tRNAs are vital building blocks for protein synthesis and are required during translation, this may indicate an increased regulation of translation at this disease stage. A recent study based on tRNA microarrays reported a 20-fold elevation of tRNAs in tumor samples versus normal samples [34]. In addition, tRNAs have been shown to inhibit cytochrome c-activated apoptosis [35,36]. Taken together, the high tRNA content may contribute to the two well-known characteristics of high-grade MDS: decreased apoptosis (in contrast to low-grade MDS) and a high rate of leukemia transformation.
To our knowledge, this novel finding has not been reported for MDS, highlighting the combined use of next generation sequencing and the proposed annotation methodology.\nNext, the obtained sequencing data demonstrated the first evidence of piRNA expression in marrow cells, and particular enrichment in low-grade MDS. Piwi-interacting RNAs are a relative newly defined class of none coding RNAs with length from 26 to 32nt [37,38]. In RA their expression increased, accounting for about nine percent of total sRNA counts, compared to about two and one percent in RAEB2 and controls, respectively. The biogenesis of piRNA is not fully understood today, but increasing evidence pinpoints that PIWI proteins are required for the accumulation of piRNAs [39-42]. In accordance with this concept, our exon array data showed that piwil1 and piwil2, two of the four human PIWI coding genes, were significantly up-regulated in RA, compared to control and high-grade MDS cells. Furthermore, recent studies have indicated that the PIWI-piRNA complex may have a role in post-transcriptional silencing damaged DNA fragments [39,43,44] and that interrupting PIWI-piRNA formation can lead to DNA double strand breaks [45]. Altogether, these findings suggest that piRNA might be used as diagnostic markers for low-grade MDS, however, further studies of their role in MDS pathogenesis are warranted.\nFinally, we found an increased regulatory role of miRNAs in cells of RA and RAEB2 patients. In low-grade MDS miRNAs represented about 35 percent of the total sRNAs, an almost 4-fold increase compared to control, highlighting their role in disease pathogenesis. Similarly, miRNA percentages were elevated to about 14 percent in RAEB2 compared to control, although at a lower extent (two-fold increase). Of note, miRNAs are currently the most widely studied species of sRNAs and they are known to influence mRNA levels as well as translation. Due to their profound effects, the above findings, and taken into account insufficient literature on miRNAs in MDS, we decided to further investigate and discuss their roles in MDS.\nSequencing of additional RNAomes is required to confirm the observed trends over a larger patient population.\nWe performed high-throughput next generation sequencing of small RNAs (RNA-seq) on primary cells from control, low-grade (RA) and high-grade (RAEB2) MDS patients on an Illumina Genome Analyzer IIx (see Methods). This resulted in about thirteen million short sequence reads (length 38 bp) per sample. We implemented an annotation algorithm that integrates knowledge from diverse biological databases to characterize each RNA-seq read (Figure 1). In brief, all reads were trimmed (length 22 bp) and aligned against the current version of the human genome (GRCh37), using the publicly available software Bowtie [19]. We allowed for at most two mismatches between the reference and read sequences. Since, the analyzed reads were relatively short and we allowed mismatches, a large number aligned to multiple genome positions (green part Figure 1). Consistent with previous analyses, we decided to discard reads having more than 25 alignment positions [31]. For annotation, we matched small sequencing reads to a set of small RNAs that included miRNAs from miRBase [32], a number of other small RNAs, including tRNAs and rRNAs, from the RepeatMasker track of UCSCs genome browser [33], as well as piRNAs from the NCBI database http://www.ncbi.nlm.nih.gov (blue callout box Figure 1). 
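The miRNA/miRNA* comparisons reported in the next subsection use the read-count rule defined in the Methods (a ratio above 1.5 or below 0.67, combined with an absolute count difference above 100). A minimal Python sketch of that rule, using hypothetical counts purely for illustration, might look as follows.

```python
def call_differential(count_a, count_b, ratio_up=1.5, ratio_down=0.67, min_diff=100):
    """Classify a small RNA as over-/under-expressed in sample A relative to sample B.

    Implements the two-part rule from the Methods: the count ratio must exceed 1.5
    (over-expression) or fall below 0.67 (under-expression), and the absolute count
    difference must exceed 100. Returns 'over', 'under' or 'unchanged'.
    """
    if count_b == 0:                      # guard against division by zero for absent reads
        ratio = float("inf") if count_a > 0 else 1.0
    else:
        ratio = count_a / count_b
    diff = count_a - count_b

    if ratio > ratio_up and diff > min_diff:
        return "over"
    if ratio < ratio_down and diff < -min_diff:
        return "under"
    return "unchanged"

# Hypothetical read counts (e.g. RA vs. control), for illustration only.
print(call_differential(1200, 300))   # 'over'
print(call_differential(150, 600))    # 'under'
print(call_differential(180, 150))    # 'unchanged'
```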
Detailed characterization of expressed miRNA loci and identification of novel miRNA*

In the analyzed samples, reads were found at 246 different full-length primary miRNA sequence loci. These included matches at 173 different mature miRNA sites in RA, 93 in controls and 79 in RAEB2.
Expression varied between samples and was generally more elevated in RA compared to RAEB2 (compare Figure 3 and Additional file 6, Tables S1, S2 and S3). The miRNA hsa-mir-125b-2 was an exception and was more elevated in RAEB2 (read counts: 264 in RAEB2, 87 in RA and zero in controls). A single miRNA, hsa-mir-720 (fold change 10), was significantly down-regulated in RA, and no copies were detected in RAEB2. Furthermore, a total of 58 miRNAs were only expressed in RA (Additional file 6, Table S4), hsa-mir-191 was unique to controls and hsa-mir-9-3 was only detected in RAEB2.

Comparison of miRNA expression. A heat map of the log2-transformed expression levels for miRNAs and miRNA* in the three analyzed samples.

A number of high-throughput sequencing studies have recently reported the detection of miRNA*, often with higher copy numbers than their mature counterparts [46,47]. These studies further suggest that miRNA* associate with the effector complex AGO1 and regulate target gene expression. However, their roles in MDS have never been studied, and we found reads matching miRNA* motifs at 68 loci in RA, 55 in control and 24 in RAEB2 cells. In addition, multiple reads matched uncharacterized positions on 59 different primary miRNA sequences. Interestingly, no miRNA* motifs had been reported for these loci before. Therefore, we visualized the secondary structure of their primary sequence, the location of the mature sequence and the reads clustered at uncharacterized loci (see Figure 4, Methods and Additional file 6, Table S5). Our bioinformatics analysis showed that most uncharacterized reads aligned on the miRNA* arm, opposite to the mature sequence. This led to the definition of 59 previously unreported miRNA* candidates, of which 20 seed sequences had previously been recorded in the TargetScan database [48] but did not exist in the miRBase version (v14) used for this study. We classified the remaining 39 motifs as novel miRNA* sequences (miRNA**); folding information with locations on the miRNA arms is given in Additional file 6, Table S5.

miRNA* analysis pipeline. Analysis pipeline for the visualization of novel miRNA* from small RNA sequencing reads aligned to uncharacterized loci on known primary miRNA sequences.

Considering all samples together, significant expression was detected (read count of at least 100) for 128 miRNA*, including 123 miRNA* in RA, 72 in control and 31 in RAEB2. Interestingly, in our RNA-seq data either the miRNA or the miRNA* (including miRNA**) arm was expressed at many miRNA loci (Additional file 5, Figure S2), suggesting a non-random and selective expression of the two different miRNA arms. Importantly, we found that 24 miRNA* were only expressed in RA, hsa-mir-24-1* was unique to control (copy number: 119) and no miRNA* was uniquely expressed in RAEB2. These miRNA* can potentially be used as biomarkers to diagnose low-grade MDS, which has significant overlapping morphologic and clinical features with reactive cytopenias and is consequently very difficult to diagnose. However, further validation in additional patients and with different methods is needed to confirm these findings. Details for the ten miRNA* with the greatest fold changes in RA are given in Table 1; further information can be found in Additional file 6, Tables S1 and S4.

Differentially expressed miRNA* and their target genes. List of ten miRNA* (see Additional file 6, Table S4 for folding information) that were detected with the largest fold changes in control and low-grade cells.
We show the fold change, the p-value (testing whether the number of down-regulated target genes is greater than expected by chance) and the target genes with their regulation (bold arrows mark significant and italics non-significant regulation). We assessed the significantly down-regulated genes for functional enrichment and pathways. The top five enriched biological functions included RNA Post-Transcriptional Modification (p = 1.2E-04), Cellular Growth and Proliferation (p = 1.25E-04), Cell Death (p = 5.79E-04) and Cancer (p = 5.95E-04). The top six enriched canonical pathways included IL-22 Signaling (p = 2.63E-04), p53 Signaling (p = 8.32E-04), IL-15 Signaling (p = 2.95E-03), B Cell Receptor Signaling (p = 4.47E-03) and FLT3 Signaling in Hematopoietic Progenitor Cells (p = 4.68E-03).
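The seed-based handling of the novel miRNA* candidates described above (take the most abundant read at an uncharacterized locus, extract nucleotides two to eight, convert to the RNA alphabet, then check the seed against known miRNA/miRNA* seeds) can be sketched in a few lines of Python. The function and example read below are illustrative assumptions, not the authors' MATLAB implementation.

```python
def extract_seed(read_sequence):
    """Return the 7-mer seed (positions 2-8, 1-based) of a small RNA read in RNA alphabet."""
    seed_dna = read_sequence.upper()[1:8]          # positions 2..8 of the read
    return seed_dna.replace("T", "U")              # DNA -> RNA alphabet

def classify_seed(seed, known_seeds):
    """Report whether the seed matches a known miRNA/miRNA* seed or is novel (miRNA**)."""
    return "known" if seed in known_seeds else "novel (miRNA**)"

# Hypothetical highest-copy read at an uncharacterized locus, for illustration only.
read = "TGAGGTAGTAGGTTGTATAGTT"
seed = extract_seed(read)                          # 'GAGGUAG'
known = {"GAGGUAG"}                                # e.g. the let-7 family seed
print(seed, classify_seed(seed, known))
```

Novel seeds would then be passed to TargetScanS-style prediction, whereas known seeds reuse the miRanda/TargetScan intersection described in the Methods.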
Functional roles of miRNA and miRNA* in Myelodysplastic Syndromes

In order to identify biological functions that might contribute to low-grade MDS and that can be modulated by the detected miRNA/miRNA*, we first identified target genes for the 91 miRNAs and 104 miRNA* that were most highly expressed in RA compared to RAEB2 and control marrow cells. The total number of uniquely regulated mRNAs was 7021 for miRNA* and 4665 for miRNA (see Methods). To select high-confidence targets, each gene was further ranked according to the number of miRNAs or miRNA* that potentially control its expression or translation (see Methods). This was necessary to counteract the high false positive rates of in-silico miRNA target predictions, which, for example, do not consider tissue specificity. From this ranking, two gene sets (Table 2), the first consisting of 74 genes controlled by 19 miRNAs and the second consisting of 93 genes regulated by at least 14 miRNA*, were selected to compare significantly enriched molecular and cellular functions (Methods). Interestingly, four out of the top five functions with the smallest p-values overlapped. These included "Cell Death", "Cellular Development", "Cell Cycle" and "Gene Expression" (Table 2).
The high compatibility suggested that the detected miRNA* fulfill roles similar to their mature counterparts, providing further evidence of their selectivity and biological importance.

Enriched biological processes of miRNA and miRNA* target genes. This table gives an overview of the selected miRNA (top) and miRNA* (bottom) target genes, their regulation (bold is used for significant expression and italics for non-significant expression), the top five molecular functions of these genes, as well as the genes involved in these functions.

To study the overall role of miRNA/miRNA* in RA and RAEB2 cells, their target genes were combined for further analysis. In RA, we included 94 genes regulated by at least 27, and in RAEB2 a total of 83 genes targeted by at least three, different miRNA/miRNA*. The difference in the required number of regulating miRNA/miRNA* was attributed to the higher number of differentially expressed miRNAs in RA (compare Additional file 5, Figure S3).

Next, we identified significantly enriched molecular and cellular functions (Methods) and compared the results with a recent large-scale gene expression study of 183 MDS patients [22].

In both disease grades the selected genes were enriched for the molecular function "Cell Death" (RA: 9.86E-06, RAEB2: 1.75E-04). This is in agreement with the above study, which identified apoptosis as the main deregulated process in low-grade MDS.

Again consistent with the cited study, miRNA/miRNA* targets selected in both MDS subtypes were enriched for "DNA Replication, Recombination, and Repair" (RA: 1.12E-03, RAEB2: 6.67E-03).

In addition, cell cycle regulatory genes were among the identified target genes for both RA and RAEB2. In accordance with the study cited above, we found that the "G2/M phase" (RAEB2: 1.55E-03) and "DNA damage checkpoint" (RAEB2: 6.67E-03) terms were exclusively regulated in RAEB2. In contrast, the "G1 phase" (6.17E-06) was exclusive to RA.

These findings showed that miRNA/miRNA* interfere with molecular functions and pathways known to be deregulated at the transcriptomic level, as reported in the cited gene expression study (some additional information is given in Additional file 7). In the following, we propose a bioinformatics modeling approach to further elucidate the effects of miRNA/miRNA* on the MDS transcriptome.
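The enrichment p-values quoted above come from IPA; conceptually, each tests whether a function's genes are over-represented in the selected target set. A generic sketch of such a test, using SciPy's one-sided Fisher's exact test on a 2x2 contingency table, is shown below. This illustrates the idea only and is not the IPA algorithm; the gene sets are hypothetical.

```python
from scipy.stats import fisher_exact

def enrichment_pvalue(selected_genes, function_genes, background_genes):
    """One-sided Fisher's exact test for over-representation of a functional category.

    selected_genes:   set of miRNA/miRNA* target genes chosen for analysis
    function_genes:   set of genes annotated to the function (e.g. "Cell Death")
    background_genes: set of all genes considered (the gene universe)
    """
    selected = selected_genes & background_genes
    annotated = function_genes & background_genes
    a = len(selected & annotated)                 # selected and annotated
    b = len(selected - annotated)                 # selected, not annotated
    c = len(annotated - selected)                 # annotated, not selected
    d = len(background_genes) - a - b - c         # neither
    _, pval = fisher_exact([[a, b], [c, d]], alternative="greater")
    return pval

# Tiny hypothetical example: 5 of 10 selected genes fall into a 50-gene category
# within a 1000-gene universe.
universe = {f"g{i}" for i in range(1000)}
category = {f"g{i}" for i in range(50)}
selected = {f"g{i}" for i in range(5)} | {f"g{i}" for i in range(500, 505)}
print(enrichment_pvalue(selected, category, universe))
```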
Computational modeling of transcriptome regulation in Myelodysplastic Syndromes

In recent years it has become increasingly evident that miRNAs and TFs coordinate to regulate mRNA levels [49]. Consequently, we proposed a bioinformatics model that accounts for both effects. It integrated miRNA expression levels measured by next generation sequencing, gene expression measured by exon arrays, as well as data from a recently published gene expression microarray study [22]. All datasets were linked using a number of publicly and commercially available bioinformatics databases (Methods). In particular, we focused on the regulation of genes consistently differentially expressed across a large patient pool that can be influenced by the miRNAs/miRNA* and TFs detected in our samples. The general workflow is illustrated in Figure 5 and we briefly describe the main aspects below (more information is given in the Methods section and Additional file 5, Figure S4).

Transcriptome analysis pipeline.
Pipeline for the integrative analysis of the MDS transcriptome, further described in the text and Additional file 5, Figure S4.

The analysis started with miRNA profiling in samples of RA and RAEB2 patients by next generation sequencing, as discussed earlier.

In addition, we measured gene expression and splice form variations using the Affymetrix GeneChip Human Exon 1.0 ST Array. In an earlier study, the bone marrow of 55 RA and 43 RAEB patients was compared against 17 controls and collectively differentially expressed genes were explored [22]. These differentially expressed genes were merged with the exon array profiling (Additional file 5, Figure S5), and a set of 385 RA and 2795 RAEB2 genes was constructed.

Again, bioinformatics databases were used to map between the obtained gene lists and interacting miRNAs and TFs. This identified about 10,000 possible interactions between 217 miRNAs (94 miRNA and 123 miRNA*), expressed in either RA or RAEB2, and their corresponding genes.

In a similar step, all known human TF proteins and their validated promoter targets were identified. Next, their coding genes were determined using a retrieval algorithm which automatically queries the Universal Protein Resource [27]. The coding gene IDs were then mapped to Affymetrix transcript IDs to obtain gene expression levels from the analyzed exon array. After TFs with low expression levels were removed, 198 TFs with 465 validated interactions with the described MDS gene pool could be identified.

However, 1073 genes could not be associated with either an expressed miRNA or a TF, and these potential secondary targets were omitted from further analysis.

The obtained expression levels for all miRNA/miRNA*, TFs and genes were normalized to their respective controls and then standardized to a mean of zero and a standard deviation of one.

To develop a bioinformatics model for gene expression regulation, we assumed that the mRNA amount present in a cell at any time is linearly dependent on its positively acting TFs and negatively acting miRNAs [50,51]. Hence, the mRNA amounts can be modeled as a linear combination of the standardized expression levels of miRNAs and TFs. Note that all expression measures for genes, miRNAs and TFs were acquired from marrow cells of the same patients, whereas the other mentioned studies relied on expression levels from multiple studies of different tissues.

The resulting model for RA consisted of 1640 equations, one representing each RA gene, and 415 predictors (regulators, i.e. miRNAs and TFs). For RAEB2 we used 1216 equations and 290 predictors.

In spite of the huge variable space, we were interested in determining how much each regulator contributes to the expression of the analyzed genes. This is a particularly large regression problem, and our input data, like other biological measurements, were highly correlated. In addition, the average number of miRNA and TF regulators per gene was small compared to the variable space (see Additional file 5, Figure S6), leading to a set of sparse equations, which posed another algorithmic difficulty.

To overcome these issues, we applied the recently proposed elastic net algorithm [29], which is specifically equipped to handle large, correlated and sparse problems. In addition, its regularization term is designed to shrink a number of predictors to exactly zero.
This eliminates variables (miRNAs and TFs) without importance and directly incorporates a feature selection procedure that would otherwise be computationally expensive.

In RA this strategy identified 349 variables, out of 415, with coefficients different from zero. Similarly, for RAEB2 it selected 197 out of the 290 possible variables. In order to rule out the possibility that these results are purely dependent on the expression levels of the regulators, or on the number of regulated genes, we calculated a series of correlation coefficients. With Pearson correlation coefficients of 0.003 and 0.067 for the expression levels, and 0.062 and 0.007 for the number of regulated genes, no correlations were found for low- and high-grade MDS, respectively.

The selected variables for RA included 119 miRNA*, 90 miRNAs and 140 TFs. In addition to the increased expression of miRNA* in RA and their potential to regulate low-grade MDS-associated biological functions and pathways, the large selection of miRNA* provides further mathematical evidence for their regulatory importance.

To identify important miRNA/miRNA* and TFs, all regulators were ranked based on the deviation of their regression coefficients from zero (Figure 6). A large deviation, in either the positive or the negative direction, corresponds to a large influence on gene expression.

MDS transcriptome regulators. Top 20 regulators determined by the proposed modeling approach. The y-axis shows the regression coefficients and the x-axis lists the regulator names. TFs are named with their TRANSFAC accession and the corresponding protein name. The miRNAs are named with their miRBase accession, and we marked previously known miRNA* with a single star and novel miRNA* with two stars. In addition, we indicate the rounded regression coefficients on the respective regulator bars.

In RA, two miRNAs expressed in a subtype-specific manner were selected as the most dominant regulators. Whereas the differentially expressed target genes of hsa-mir-1977** regulate hematopoiesis and apoptosis, hsa-miR-130a has previously been associated with the regulation of angiogenesis and platelet physiology [52,53]. The transcription factor E2F1 ranked third and is known to regulate S-phase-dependent apoptosis in MDS [54,55]. Similarly, eight out of the 13 TFs within the top 20 have previously been associated with "Hematological Disease" or "Hematopoiesis".

For RAEB2, the proposed pipeline selected 46 miRNA*, 76 miRNAs and 84 TFs as influential. The 20 highest-ranked regulators included 16 TFs, of which 12 have previously been associated with either "Hematological Disease" or "Hematopoiesis". The top-ranked TF, AP-2β, has a known role in the development of metastatic phenotypes as well as apoptosis [56]. The highest-ranked miRNAs were hsa-miR-122 and hsa-miR-20b, both moderately expressed and not previously linked to the RAEB2 phenotype.

In conclusion, the ranking of miRNAs and TFs with known and important relations to MDS shows the power of our approach. While a few TFs have already been extensively investigated in MDS, an in-depth understanding of miRNA regulation remains elusive. We are planning to further study the functions of the novel miRNAs hsa-mir-1977** and hsa-miR-130a in primary cells to confirm our findings and illustrate their roles in MDS.
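For readers who want to reproduce the spirit of this regression, the sketch below builds the design matrix of equation (2) (x_pi = α_pi·γ_p·δ_p, with γ = +1 for TFs and -1 for miRNAs), fits a cross-validated elastic net with scikit-learn, and ranks regulators by the absolute value of their coefficients. This is a simplified illustration on randomly generated data, not the authors' implementation (which used the cyclical coordinate descent solver of [29]); all dimensions and variable names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)

# Hypothetical dimensions: genes (equations) x regulators (predictors).
n_genes, n_regulators = 200, 40
gamma = np.where(np.arange(n_regulators) < 15, 1.0, -1.0)   # first 15 "TFs", rest "miRNAs"
delta = rng.standard_normal(n_regulators)                   # standardized regulator expression
alpha = rng.integers(0, 2, size=(n_genes, n_regulators))    # target predictions (0/1)

# Equation (2): x_pi = alpha_pi * gamma_p * delta_p
X = alpha * gamma * delta

# Simulate gene expression from a sparse set of "true" regulators plus noise (equation (1)).
beta_true = np.zeros(n_regulators)
beta_true[[1, 3, 20]] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n_genes)

# Elastic net with the penalty weight chosen by cross-validation.
model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)

# Rank regulators by the deviation of their coefficients from zero.
ranking = np.argsort(-np.abs(model.coef_))
print("top regulators:", ranking[:5])
print("their coefficients:", np.round(model.coef_[ranking[:5]], 2))
```

In the actual analysis, the binary association matrix would come from the miRanda/TargetScan and TRANSFAC mappings described in the Methods, and the response vector from the standardized exon array expression values.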
Key functions regulated by miRNAs and TFs in Myelodysplastic Syndromes

In order to identify molecular processes influenced by the above regulators, we first annotated the target genes of highly ranked miRNAs/miRNA* and TFs (e.g. absolute regression coefficients greater than one) with pre-filtered (e.g. having fewer than 500 genes) gene ontologies [57]. Each biological process was then ranked according to the number of involved target genes. Further, genes differentially expressed in each process term were identified and overlaid with the above ranking in Figure 7.

MDS regulated biological processes. Illustration of biological processes that are highly regulated by influential miRNAs and TFs, as selected by our in-silico model. The left figure shows results for the low-risk and the right figure for the high-risk grade. In both graphs the x-axis describes the regulated process. The y-axis shows, in the black bar, the number of selected miRNAs and TFs that regulate a given process. The red bar shows the number of down-regulated and the green bar the number of up-regulated genes.

Some highly regulated processes, such as angiogenesis, were shared between low- and high-grade MDS. Moreover, our model indicated a few biological processes that are highly regulated in both disease subtypes but differ in the levels of their expression, for example "nuclear mRNA splicing, via spliceosome", "G1/S transition of mitotic cell cycle" or "protein import into the nucleus, docking". Plausibly, such processes are potential keys that can define functional differences between MDS subtypes.

Of particular interest was the process "negative regulation of transcription from RNA polymerase II promoters" (GO:0000122), which was the most regulated process in both MDS grades. This pathway prevents or reduces transcription of different RNAs, including miRNAs.

In RA, the majority of the differentially expressed genes in this term were down-regulated (Figure 7), hence promoting transcription. By contrast, in RAEB2 the majority of differentially expressed genes were up-regulated, leading to reduced RNA production.

Therefore, these results are in agreement with our earlier findings that some miRNAs were only detected, or had higher copy numbers, in RA compared to RAEB2.

Altogether, these results suggested that the differences in miRNA expression between RA and RAEB2, and potentially their downstream targets, might be the result of RNA polymerase II promoter regulation. In RA, this would indicate a potential feedback system in which expressed miRNAs and TFs down-regulate "GO:0000122". In turn, this could increase RNA expression and hence lead to miRNA accumulation. By contrast, in RAEB2 the selected miRNAs and TFs up-regulate "GO:0000122".
This drives the cell to reduce RNA synthesis and consequently decreases the overall amount of RNA.

Thus, the discussed feedback loops are a potential explanation for the high amounts of miRNA seen in RA and the much lower amounts in RAEB2, two clear findings from the RNA-seq analysis described above. Further studies to investigate the role of this pathway in MDS are warranted.
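The process ranking behind Figure 7 (count how many influential regulators touch each GO biological process, then tally up- and down-regulated genes within it) can be summarized with a short sketch. The data structures below are hypothetical stand-ins for the model output and annotation tables, not the actual pipeline.

```python
from collections import defaultdict

def rank_processes(influential_targets, go_annotation, regulation):
    """Rank GO biological processes as in the Figure 7 construction.

    influential_targets: dict regulator -> set of target genes (regulators with |coefficient| > 1)
    go_annotation:       dict GO term -> set of genes annotated to that term (< 500 genes)
    regulation:          dict gene -> 'up', 'down' or 'unchanged' (exon array result)
    Returns a list of (term, n_regulators, n_up, n_down), most regulated first.
    """
    regulators_per_term = defaultdict(set)
    for regulator, targets in influential_targets.items():
        for term, term_genes in go_annotation.items():
            if targets & term_genes:
                regulators_per_term[term].add(regulator)

    rows = []
    for term, regs in regulators_per_term.items():
        term_genes = go_annotation[term]
        n_up = sum(1 for g in term_genes if regulation.get(g) == "up")
        n_down = sum(1 for g in term_genes if regulation.get(g) == "down")
        rows.append((term, len(regs), n_up, n_down))
    return sorted(rows, key=lambda r: r[1], reverse=True)

# Hypothetical toy inputs.
targets = {"E2F1": {"CDK2", "TP53"}, "hsa-miR-130a": {"VEGFA"}}
go = {"GO:0000122": {"TP53", "VEGFA"}, "GO:0001525 angiogenesis": {"VEGFA"}}
reg = {"TP53": "down", "VEGFA": "up", "CDK2": "unchanged"}
for row in rank_processes(targets, go, reg):
    print(row)
```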
Further studies to investigate the role of this pathway in MDS are warranted.", "We performed high-throughput next generation sequencing of small RNAs (RNA-seq) on primary cells from control, low-grade (RA) and high-grade (RAEB2) MDS patients on an Illumina Genome Analyzer IIx (see Methods). This resulted in about thirteen million short sequence reads (length 38 bp) per sample. We implemented an annotation algorithm that integrates knowledge from diverse biological databases to characterize each RNA-seq read (Figure 1). In brief, all reads were trimmed (length 22 bp) and aligned against the current version of the human genome (GRCh37), using the publicly available software Bowtie [19]. We allowed for at most two mismatches between the reference and read sequences. Since, the analyzed reads were relatively short and we allowed mismatches, a large number aligned to multiple genome positions (green part Figure 1). Consistent with previous analyses, we decided to discard reads having more than 25 alignment positions [31]. For annotation, we matched small sequencing reads to a set of small RNAs that included miRNAs from miRBase [32], a number of other small RNAs, including tRNAs and rRNAs, from the RepeatMasker track of UCSCs genome browser [33], as well as piRNAs from the NCBI database http://www.ncbi.nlm.nih.gov (blue callout box Figure 1). This mapping showed that the composition of the small RNAome was dramatically different from the analyzed samples, suggesting a shift in the regulation of small RNA targets during the progression of this disease.\nFirst, the relative amounts of tRNA to rRNA were significantly larger in RAEB2 compared to RA and control (36 vs. 1.6 and 1). Since tRNAs are vital building blocks for protein synthesis and required during translation, this may indicate an increased regulation of translation at this disease stage. A recent study based on tRNA microarrays reported a 20-fold elevation of tRNAs in tumor samples versus normal samples [34]. In addition, tRNAs have been shown to inhibit cytochorme c activated apoptosis [35,36]. Taken together, the high tRNA content may contribute to the two well known characteristics of high-grade MDSs, decreased apoptosis (in contrast to low-grade MDS) and high rate of leukemia transformation. To our knowledge, this novel finding has not been reported for MDS, highlighting the combined use of next generation sequencing and the proposed annotation methodology.\nNext, the obtained sequencing data demonstrated the first evidence of piRNA expression in marrow cells, and particular enrichment in low-grade MDS. Piwi-interacting RNAs are a relative newly defined class of none coding RNAs with length from 26 to 32nt [37,38]. In RA their expression increased, accounting for about nine percent of total sRNA counts, compared to about two and one percent in RAEB2 and controls, respectively. The biogenesis of piRNA is not fully understood today, but increasing evidence pinpoints that PIWI proteins are required for the accumulation of piRNAs [39-42]. In accordance with this concept, our exon array data showed that piwil1 and piwil2, two of the four human PIWI coding genes, were significantly up-regulated in RA, compared to control and high-grade MDS cells. Furthermore, recent studies have indicated that the PIWI-piRNA complex may have a role in post-transcriptional silencing damaged DNA fragments [39,43,44] and that interrupting PIWI-piRNA formation can lead to DNA double strand breaks [45]. 
Altogether, these findings suggest that piRNA might be used as diagnostic markers for low-grade MDS, however, further studies of their role in MDS pathogenesis are warranted.\nFinally, we found an increased regulatory role of miRNAs in cells of RA and RAEB2 patients. In low-grade MDS miRNAs represented about 35 percent of the total sRNAs, an almost 4-fold increase compared to control, highlighting their role in disease pathogenesis. Similarly, miRNA percentages were elevated to about 14 percent in RAEB2 compared to control, although at a lower extent (two-fold increase). Of note, miRNAs are currently the most widely studied species of sRNAs and they are known to influence mRNA levels as well as translation. Due to their profound effects, the above findings, and taken into account insufficient literature on miRNAs in MDS, we decided to further investigate and discuss their roles in MDS.\nSequencing of additional RNAomes is required to confirm the observed trends over a larger patient population.", "In the analyzed samples, reads were found at 246 different full-length primary miRNA sequence loci. These included matches at 173 different mature miRNA sites in RA, 93 in controls and 79 in RAEB2. Expression varied between samples and was generally more elevated in RA compared to RAEB2 (compare Figure 3 and Additional file 6 Tables S1,S2 and S3). The miRNA hsa-mir-125b-2 was an exception and more elevated in RAEB2 (read counts: 264 RAEB2, 87 RA and zero in controls). A single miRNA, hsa-mir-720 (fold change 10), was significantly down-regulated in RA and no copies were detected in RAEB2. Furthermore, a total of 58 miRNAs were only expressed in RA (Additional file 6 Table S4), hsa-mir-191 was unique to controls and hsa-mir-9-3 was only detected in RAEB2.\nComparison of miRNA expression. A heat map of the log2 transformed expression levels for miRNAs and miRNA* in the three analyzed samples.\nA number of high-throughput sequencing studies have recently reported the detection of miRNA*, often with higher copy numbers than their mature counterparts [46,47]. These studies further suggest that miRNA* associate with the effector complex AGO1 and regulate target gene expression. However, their roles in MDS have never been studied and we found reads matching to miRNA* motifs on 68 loci in RA, 55 in control and 24 in RAEB2 cells. In addition, multiple reads matched to uncharacterized positions on 59 different primary miRNA sequences. Interestingly, no miRNA* motifs had been reported for these loci before. Therefore, we visualized the secondary structure for their primary sequence, the location of the mature sequence and the reads clustered at uncharacterized loci (see Figure 4 Methods and Additional file 6 Table S5). Our bioinformatics analysis showed that most uncharacterized reads aligned on the miRNA* arm, opposite to the mature sequence. This has led to the definition of 59 previously unreported miRNA* candidates, of which 20 seed sequences have previously been associated in the targetscan database [48], but which did not exist in the miRBase version (v14) used for this study. We classified the remaining 39 motifs as novel miRNA* sequences (miRNA**) and folding information with locations on the miRNA arms are given in Additional file 6 Table S5.\nmiRNA* analysis pipeline. 
Analysis pipeline for the visualization of novel miRNA* from small RNA sequencing reads aligned to uncharacterized loci on known primary miRNA sequences.\nConsidering all samples together, significant expression was detected (read count at least 100) for 128 miRNA*, including 123 miRNA* in RA, 72 in control and 31 in RAEB2. Interestingly, in our RNA-seq data either the miRNA or the miRNA* (including miRNA**) arms were expressed at many miRNA loci (Additional file 5 Figure S2), suggesting a non-random and selective expression of the two different miRNA arms. Importantly, we found that 24 miRNA* were only expressed in RA, hsa-mir-24-1* was unique to control (copy number: 119) and no miRNA* was uniquely expressed in RAEB2. These miRNA* can potentially be used as biomarkers to diagnose low-grade MDS, which has significant overlapping morphologic and clinical features with reactive cytopenias, and is consequently very difficult to diagnose. However, further validation in additional patients and with different methods is needed to confirm these findings. Details for the ten miRNA* with the greatest fold changes in RA are given in Table 1 further information can be found in Additional file 6 Tables S1 and S4.\nDifferentially expressed miRNA* and their target genes.\nList of ten miRNA* (see Additional file 6 Table S4 for folding information) that were detected with the largest fold changes in control and low-grad cells. We show the fold change, p-value (measuring if the number of down regulated target genes is greater than expected by chance) and target genes with regulation (bold arrows mark significant and italic non-significant regulation). We assessed the significantly down regulated genes for functional enrichment and pathways. The top five enriched biological functions included RNA Post-Transcriptional Modification (pval:1.2E-04), Cellular Growth and Proliferation (pval:1.25E-04), Cell Death (pval:5.79E-04) and Cancer (pval:5.95E-04-). The top six enriched canonical pathways included IL-22 Signaling (pval:2.63E-04), p53 Signaling (pval: 8.32E-04), IL-15 Signaling (pval:2.95E-03), B Cell Receptor Signaling (pval:4.47E-03) and FLT3 Signaling in Hematopoietic Progenitor Cells (pval:4.68E-03).", "In order to identify biological functions that might contribute to low-grade MDS, and can be modulated by the detected miRNA/miRNA*, we first identified target genes for 91 miRNA and 104 miRNA* that were highest expressed in RA, compared to RAEB2 and control marrow cells. The total number of uniquely regulated mRNAs was 7021 for miRNA* and 4665 for miRNA (see Methods). To select high confidence targets, each gene was further ranked according to the number of miRNAs or miRNA* that potentially control its expression or translation (see Methods). This was necessary to counteract the high false positive rates of in-silico miRNA target predictions, which for example do not consider tissue specificity. From this ranking two gene sets (Table 2), the first consisting of 74 genes controlled by 19 miRNAs and the second consisting of 93 genes regulated by at least 14 miRNA*, were selected to compare significantly enriched molecular and cellular functions (Methods). Interestingly, four out of the top five functions, with the smallest p-values, overlapped. These included \"Cell Death\", \"Cellular Development\", \"Cell Cycle\" and \"Gene Expression\" (Table 2). 
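The high-confidence target selection used above — ranking every predicted target gene by the number of distinct miRNAs or miRNA* that may regulate it and keeping only genes above a cut-off — can be sketched as below. This is an illustrative sketch over assumed inputs (a list of predicted regulator-gene pairs); it is not the authors' code, and the cut-offs are simply those quoted in the text.

```python
from collections import defaultdict

def select_high_confidence_targets(predicted_pairs, min_regulators):
    """predicted_pairs : iterable of (mirna_id, target_gene) in-silico predictions.
    Returns (gene, regulator_count) for genes regulated by at least
    `min_regulators` distinct miRNAs/miRNA*, ranked by that count."""
    regulators_per_gene = defaultdict(set)
    for mirna, gene in predicted_pairs:
        regulators_per_gene[gene].add(mirna)
    ranked = sorted(regulators_per_gene.items(),
                    key=lambda item: len(item[1]), reverse=True)
    return [(gene, len(regs)) for gene, regs in ranked if len(regs) >= min_regulators]

# e.g. the miRNA gene set used a cut-off of 19 regulating miRNAs and the
# miRNA* set a cut-off of 14 (compare Table 2).
```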
The high compatibility suggested that the detected miRNA* fulfill similar roles to their mature counterparts, providing further evidence of their selectivity and biological importance.\nEnriched biological processes of miRNA and miRNA* target genes.\nThis tables gives an overview of the selected miRNA (top) and miRNA* (bottom) target genes, their regulation (bold is used for significant expression and italic for non-significant expression), the top five molecular functions of these genes as well as the genes involved in these functions.\nTo study the overall role of miRNA/miRNA* in RA and RAEB2 cells, their target genes were combined for further analysis. In RA, we included 94 genes regulated by at least 27, and in RAEB2 a total 83 genes targeted by at least three different miRNA/miRNA*. The difference in the required number of regulating miRNA/miRNA* were attributed to the higher number of differentially expressed miRNA in RA (compare Additional file 5 Figure S3).\nNext, we identified significantly enriched molecular and cellular functions (Methods) and compared results with a recent large scale gene expression study of 183 MDS patients [22].\nIn both disease grades the selected genes were enriched for the molecular function of \"Cell Death\" (RA: 9.86E-06, RAEB2: 1.75E-04). This is in agreement with the above study, which identified apoptosis as the main deregulated process in low-grade MDS.\nAgain consistent with the cited study, miRNA/miRNA* targets selected in both MDS subtypes were enriched for \"DNA Replication, Recombination, and Repair \" (RA:1.12E-03, RAEB2: 6.67E-03).\nIn addition, cell cycle regulatory genes were among the indentified target genes for both, RA and RAEB2. In accordance with the study cited above, we found that the \"G2/M phase\" (RAEB2:1.55 E-3) and \"DNA damage checkpoint\" (RAEB2: 6.67E-3) were exclusively regulated in RAEB2. On contrast the \"G1 phase\" (6.17E-06) was exclusive to RA.\nThese findings showed that miRNA/miRNA* interfere with molecular functions and pathways known to be deregulated at the transcriptomic level, as reported in the cited gene expression study (some additional information is given in Additional file 7). In the following we proposed a bioinformatics modeling approach to further elucidate the effects of miRNA/miRNA* on the MDS transcriptome.", "In the recent years it has become increasingly evident that miRNAs and TFs coordinate to regulate mRNA levels [49]. Consequently, we proposed a bioinformatics model that accounts for both effects. It integrated miRNA expression levels measured by next generation sequencing, gene expression measured by exons arrays, as well as data of a recently published gene expression microarray study [22]. All datasets were linked using a number of publicly and commercially available bioinformatics databases (Methods). In particular, we focused on the regulation of genes consistently differentially expressed over a large patient pool, that can be influenced by miRNAs/miRNAs* and TFs detected in our samples. The general workflow is illustrated in Figure 5 and we briefly describe the main aspects below (more information is given in the Methods section and Additional file 5 Figure S4).\nTranscriptome analysis pipeline. 
Pipeline for the integrative analysis of the MDS transcriptome, further described in the text and Additional file 5 Figure S4.\nThe analysis started with miRNA profiling in samples of RA and RAEB2 patients by next generation sequencing, as discussed earlier.\nIn addition, we measured gene expression and splice form variations using the Affymetrix GeneChip Human Exon 1.0 ST Array. In an earlier study the bone marrow of 55 RA and 43 RAEB patients were compared against 17 controls and genes collectively differentially expressed explored [22]. These differentially expressed genes were merged with the exon array profiling (Additional file 5 Figure S5) and a set of 385 RA and 2795 RAEB2 genes was constructed.\nAgain, bioinformatics databases were used to map between the obtained gene lists and interacting miRNAs and TFs. This identified about 10.000 possible interactions between 217 miRNA (94 miRNA and 123 miRNA*), either expressed in RA or RAEB2, and their corresponding genes.\nIn a similar step all known human TF proteins and their validated promoter targets were identified. Next, their coding genes were determined using a retrieval algorithm which automatically queries the Universal Protein Resource [27]. The coding gene IDs were then mapped to Affymetrix transcript IDs to obtain gene expression levels from the analyzed exon array. After TFs with low expression levels were erased, 198 TFs with 465 validated interactions to the described MDS gene pool could be identified.\nHowever, 1073 genes could not be associated with an expressed miRNA nor a TF, and thus potential secondary targets were omitted from further analysis.\nThe obtained expression levels for all miRNA/miRNA*, TF and genes were normalized to their respective controls and then standardized to a mean of zero and a standard deviation of one.\nTo develop a bioinformatics model for gene expression regulation, we assumed that the mRNA amount, present in a cell at any time, is linearly dependent on its positive acting TFs and negative acting miRNAs [50,51]. Hence, the mRNA amounts can be modeled as a linear combination of the standardized expression levels of miRNAs and TFs. Note that all expression measures for genes, miRNA and TF were acquired from marrow cells of the same patients, whereas the other mentioned studies relied on expression levels from multiple studies of different tissues.\nThe resulting model for RA consisted of 1640 equations to represent each RA gene and 415 predictors (regulators, e.g. miRNA and TFs). For RAEB2 we used 1216 equations and 290 predictors.\nIn spite of the huge variable space, we were interested to determine how much each regulator contributes to the expression of the analyzed genes. This is a particular large regression problem and our input data, similar to other biological measurements, was highly correlated. In addition, the average number of miRNA and TF regulators per gene was small compared to the variable space (see Additional file 5 Figure S6), leading to a set of sparse equations, which posed another algorithmic difficulty.\nTo overcome these issues, we applied the recently proposed elastic net algorithm [29] that is specifically equipped to handle large, correlated and sparse problems. In addition, its regularization term was designed to shrink a numbers of predictors to exactly zero. 
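A minimal sketch of this regression setup, assuming a genes-by-regulators design matrix in which an entry holds the standardized expression of a regulator only where a known interaction with that gene exists (zero otherwise); the scikit-learn-based fit and the penalty parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def rank_regulators(gene_ids, regulator_ids, gene_expr, regulator_expr, known_regulators):
    """gene_expr / regulator_expr : standardized expression (mean 0, sd 1) per ID.
    known_regulators : dict mapping gene -> set of miRNA/miRNA*/TF IDs that target it."""
    col = {r: j for j, r in enumerate(regulator_ids)}
    X = np.zeros((len(gene_ids), len(regulator_ids)))
    for i, g in enumerate(gene_ids):
        for r in known_regulators.get(g, ()):
            X[i, col[r]] = regulator_expr[r]      # zero where no known interaction
    y = np.array([gene_expr[g] for g in gene_ids])

    # Elastic net: the L1 part shrinks unimportant regulators to exactly zero,
    # the L2 part tolerates strongly correlated predictors (alpha/l1_ratio illustrative).
    model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

    # Rank regulators by how far their coefficient deviates from zero.
    order = np.argsort(-np.abs(model.coef_))
    return [(regulator_ids[j], float(model.coef_[j])) for j in order]
```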
This eliminates variables (miRNAs and TFs) without importance, and directly incorporates a feature selection procedure, which is otherwise computationally expensive.\nIn RA this strategy identified 349 variables, out of 415, with coefficients different from zero. Similarly, for RAEB2 it selected 197 out of the 290 possible variables. In order to rule out the possibility that these results are purely dependent on the expression levels of the regulators, or the number of regulated genes, we calculated a series of correlation coefficients. With Pearson Correlation Coefficients of 0.003 and 0.067 for the expression and 0.062 and 0.007 for the number of regulated genes, there were no correlations found for the low- and high-grade MDS, respectively.\nThe selected variables for RA included 119 miRNA*, 90 miRNA and 140 TF. In addition to the increased expression of miRNA* in RA and their potential to regulate low-grade MDS associated biological functions and pathways, the large selection of miRNA* provides further mathematical evidence for their regulatory importance.\nTo identify important miRNA/miRNA* and TFs, all regulators were ranked based on the aberration of their regression coefficients from zero (Figure 6). A large deviation, in positive or negative direction, is synonymous with a large influence on gene expression.\nMDS transcriptome regulators. Top 20 regulators determined by the proposed modeling approach. The y-axis shows the regression coefficients and the x-axis lists the regulator names. We named TF with their transfac accession and the corresponding protein name. The miRNAs are named with their miRBase accession and we marked previously known miRNA* with a single and novel miRNA* with two stars. In addition, we indicate the rounded regression coefficients on the respective regulator bars.\nIn RA, two subtype-specific expressed miRNAs were selected as most dominant regulators. Whereas the differentially expressed target genes of hsa-mir-1977** regulate hematopoiesis and apoptosis, hsa-miR-130a has previously been associated with the regulation of angiogenesis and platelet physiology [52,53]. The transcription factor E2F1 ranked three and is known to regulate S-phase dependent apoptosis in MDS [54,55]. Similar, eight out of 13 TF within the top 20 have previously been associated with \"Hematological Disease\" or \"Hematopoiesis\".\nFor RAEB2, the proposed pipeline selected 46 miRNA*, 76 miRNA and 84 TFs as influential. The 20 highest ranked regulators included 16 TFs, of which 12 have previously been associated with either \"Hematological Disease\" or \"Hematopoeisis\". The top ranked TF, AP-2β, has a known role in the development of metastatic phenotypes as well as apoptosis [56]. The highest ranked miRNAs were hsa-miR-122 and hsa-miR-20b, both expressed moderately and not linked to the RAEB2 phenotype.\nIn conclusion, the ranking of miRNAs and TFs with known and important relation to MDS shows the power of our approach. While a few TF have already been extensively investigated in MDS, an in-depth understanding of miRNA regulation remains elusive. We are planning to further study the functions of the novel miRNAs hsa-mir-1977** and hsa-miR-130a in primary cells to confirm our findings and illustrate their roles in MDS.", "In order to identify molecular processes influenced by the above regulators, we first annotated the target genes of highly ranked miRNAs/miRNA* and TFs (e.g. absolute regression coefficients greater than one) with pre-filtered (e.g. 
having less than 500 genes) gene ontologies [57]. Then each biological process was ranked according to the number of involved target genes. Further, genes differentially expressed in each process term were identified and overlaid with the above ranking onto Figure 7.\nMDS regulated biological processes. Illustration of biological processes that are highly regulated by influential miRNAs and TFs, as selected by our in-silico model. The left figure shows results for the low risk and the right figure for the high risk grade. In both graphs the x-axis describes the regulated process. The y-axis shows, in the black bar, the number of selected miRNA and TF that regulate a certain processes. In the red bar the number of down- and in green bar the number of up regulated genes are shown.\nSome highly regulated processes, such as angiogenesis, were shared between low- and high-grade MDS. Moreover, our model indicated a few biological processes that are highly regulated in both disease subtypes, but different in the levels of their expression. For example \"nuclear mRNA splicing, via spliceosome\", \"G1/S transition of mitotic cell cycle\" or \"protein import into the nucleus, docking \". Rationally, such processes are potential keys that can define functional differences in MDS subtypes.\nOf particular interest was the process \"negative regulation of transcription from RNA polymerase II promoters\" (GO:0000122), which was the most regulated process in both MDS grades. This pathway prevents or reduces transcription of different RNAs, including miRNAs.\nIn RA, the majority of the differentially expressed genes in this term were down regulated (Figure 7), hence promoting transcription. By contrast in RAEB2, the majority of differentially expressed genes were up regulated, leading to a reduced RNA production.\nTherefore, these results are in agreement with our earlier findings that some miRNAs were only detected, or had higher copy numbers, in RA compared to RAEB2.\nAltogether, these results suggested that the differences in miRNA expression between RA and RAEB2, and potentially their downstream targets, might be the result of RNA polymerase II promoter regulation. In RA, this would indicate a potential feedback system in which expressed miRNA and TF down regulate \"GO:0000122\". In turn, this could increase expression of RNA and hence accumulate miRNAs. By contrast in RAEB2, the selected miRNA and TF up regulate \"GO:0000122\". This drives the cell to reduce RNAs synthesis and consequently decreases their overall amount.\nThus, the discussed feedback loops are a potential explanation for the high amounts of miRNA seen in RA and the much lower amount in RAEB2, two obvious discoveries from the RNA-seq analysis described above. Further studies to investigate the role of this pathway in MDS are warranted.", "In this paper we presented the first systematic profiling for small RNAs in Myelodysplastic Syndromes using next generation sequencing on the current Illumina Genome Analyzer IIx platform. A custom data analysis pipeline that handled raw reads, sequence alignment, data storage as well as integrative read annotation was implemented. The analysis showed that the small RNAome in low-grade MDS (RA) was enriched for piRNAs, potentially protecting DNA from the accumulation of mutations, a mechanism not observed in high-grade MDS (RAEB2). By contrast, tRNAs were enriched in RAEB2, which might contribute to the characteristic reduction in apoptotic cell death at this disease stage. 
In both grades a number of differentially expressed miRNAs and miRNA* were detected and 48 previously unreported miRNA* exposed. In all analyzed cells, miRNA reads were often found for either the mature or the star sequence, indicating selective expression of miRNA and miRNA*. Subsequent functional analysis of target genes showed that both miRNA species (i.e. miRNA and miRNA*), regulate similar MDS stage specific molecular functions and pathways indicating that miRNA* also play important regulatory roles on the MDS transcriptome. Using integrative bioinformatics modeling, we identified miRNA species and TFs that act as important regulators for a MDS transcriptome that is consistently deregulated over a large MDS patient pool. Further ontology analysis identified the geneontology process of \"negative regulation of transcription from RNA polymerase II promoters\" as highly controlled in both MDS grades. Additionally, our findings suggested a potential feedback loop, where specific miRNAs and TFs regulate their own expression by either enhancing polymerase II promoter function, as seen in RA, or repressing its function, as found in RAEB2. Further studies are warranted to experimentally substantiate our observation and to develop novel biomarkers for the diagnosis and treatment of MDS.", "The authors declare that they have no competing interests.", "XZ and CCC designed the study. SA performed the RNA-SEQ and JW the exon arrays. DB performed the data analysis, wrote the manuscript and contributed the study design. XZ and TP supervised the data analysis. CCC and PW supervised the data generation. MB contributed to the data interpretation and manuscript writing. All authors read, assisted with editing, and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1755-8794/4/19/prepub\n", "This file contains all unique sequence reads in fasta format for the control population. The identifiers contain the number of times a read was sequenced, e.g. the x251 for the identifier run_2_s_5_25_1_x251 means the read was sequenced 251 times.\nClick here for file\nThis file contains all unique sequence reads in fasta format for the RA population. The identifiers contain the number of times a read was sequenced, e.g. the x251 for the identifier run_2_s_5_25_1_x251 means the read was sequenced 251 times.\nClick here for file\nThis file contains all unique sequence reads in fasta format for the RAEB2 population. The identifiers contain the number of times a read was sequenced, e.g. the x251 for the identifier run_2_s_5_25_1_x251 means the read was sequenced 251 times.\nClick here for file\nThis file contains the summarized gene expression levels as log intensities for the control, RA and RAEB2 populations.\nClick here for file\nThis file contains the supplemental Figures referenced in this article.\nClick here for file\nThis file contains the supplemental Tables referenced in this article.\nClick here for file\nThis file contains the supplemental Text referenced in this article.\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Factors influencing patient satisfaction with dental appearance and treatments they desire to improve aesthetics.
21342536
We assessed factors influencing patients' satisfaction with their dental appearance and the treatments they desired to improve dental aesthetics.
BACKGROUND
A cross-sectional study was performed among 235 adult patients who visited the Hospital Universiti Sains Malaysia dental clinic. A structured, interviewer-guided questionnaire was used to identify patient satisfaction with their general dental appearance, cosmetic elements and desired treatments.
METHODS
The 235 patients consisted of 70 males (29.8%) and 165 females (70.2%), of mean age 31.5 years (SD 13.0). Of these patients, 124 (52.8%) were not satisfied with their general dental appearance. In addition, 132 patients (56.2%) were not happy with the color of their teeth, 76 (32.3%) regarded their teeth as poorly aligned, 62 (26.4%) as crowded, and 56 (23.4%) as protruding. Dissatisfaction with tooth color was significantly higher in female than in male patients (odds ratio [OR] 1.99; 95% confidence interval [CI] 1.13-3.50). Tooth whitening was the treatment most desired by patients (48.1%). Results of multiple logistic regression analysis showed that patient dissatisfaction with general dental appearance was significantly associated with female gender (OR = 2.18; 95% CI: 1.18-4.03), unhappiness with tooth color (OR = 3.05; 95% CI: 1.74-5.34) and the opinion that their teeth protruded (OR = 2.91, 95% CI: 1.44-5.91).
RESULTS
Most patients in this study were not satisfied with their dental appearance, with a greater percentage of females expressing dissatisfaction than males. Age was not associated with satisfaction. Unhappiness with tooth color and feelings of having protruding teeth also had a significant negative influence on patient satisfaction with general dental appearance.
CONCLUSIONS
[ "Adolescent", "Adult", "Cross-Sectional Studies", "Educational Status", "Esthetics, Dental", "Female", "Health Services Needs and Demand", "Humans", "Likelihood Functions", "Logistic Models", "Malaysia", "Male", "Middle Aged", "Models, Psychological", "Orthodontics, Corrective", "Patient Satisfaction", "ROC Curve", "Sampling Studies", "Sex Factors", "Surveys and Questionnaires", "Tooth Bleaching", "Young Adult" ]
3059271
null
null
Methods
This cross sectional study was carried out from June 1, 2009 to January 31, 2010 among patients who attended the HUSM dental clinic. All included patients were newly registered adults >18 years old, who had not received any dental treatment within the previous six months, were able to understand the Malay language and have no clear evidence of cognitive disturbances. Sample size was calculated using the formula estimating a single proportion with a requirement for 95% confidence [19]. The prevalence of dissatisfaction with dental appearance was estimated to be 62.7% based on the satisfaction with dental aesthetics among adult patients attending a military dental clinic in Tel Aviv, Israel [3]. Considering the available resources, a sample size of 183 was selected with a precision of 0.07 (7%). To accommodate for a 30% non-response rate, 238 patients were invited to participate in this study. A systematic random sampling technique was used to select the study sample. The sampling interval was decided based on the estimated number of eligible patients attending the clinic on a normal outpatient day, with every tenth patient was invited to participate. No possible biases regarding the selection of the study population were anticipated and the samples were representative of the reference population. This study was approved by the Research and Ethics Committee (Human), Universiti Sains Malaysia. A structured, interviewer guided questionnaire (Table 1) was used for data collection. The questionnaire consisted of questions on socio-demographic items including sex, age, and level of education, as well as questions on each patient's satisfaction with his/her then-current general dental appearance. Patients were also asked about their satisfaction with tooth color, perceived malalignment of teeth (crowding, poorly aligned or protruding), caries in anterior teeth, non-aesthetic anterior tooth color restoration and presence of tooth fracture. In addition, patients were asked to select the aesthetic treatments they wished to undergo, including orthodontic treatment, crowns, tooth whitening, tooth color restorations and partial dentures. Questionnaire used in the study Prior to this study the clarity of the questionnaire was pre-tested on 15 patients who were not involved in the study. Feedback regarding problems understanding and answering the questionnaire was obtained and addressed. Each patient provided written informed consent before participation in this study. Data were entered and analyzed using the Statistical Package for Social Sciences (SPSS) for Windows software (version 12.0; SPSS Inc, Chicago). Descriptive statistics such as mean and standard deviation (SD) for continuous variables and frequency and percentage for categorical variables were determined. The chi-square test was used to compare the sex, age, education levels of patients who were and were not satisfied with their dental appearance. The level of significance was set at 0.05. Factors influencing patient satisfaction with dental appearance were determined at both the univariate and multivariate levels using simple logistic regression analysis and multiple logistic regression analysis, respectively. Variables selected for inclusion in the multiple logistic regression analysis model were selected using the forward stepwise logistic regression method. Following the fit of the preliminary model, the importance of each variable was verified. The interaction terms were checked using the Likelihood Ratio (LR) test. 
Multicollinearity problems were identified by the Variance Inflation Factor (VIF) test. The final model was assessed for fitness using the Hosmer-Lemeshow goodness-of-fit test. The classification tables for sensitivity and specificity as well as the area under the Receiver Operating Characteristic (ROC) curve were also recorded to assess the model fitness. Influential outliers were identified using Cook's distance; data points with values above 1.0 were considered influential outliers.
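As a rough check, the single-proportion sample-size formula n = z²·p(1−p)/d² with p = 0.627, d = 0.07 and z = 1.96 gives approximately 183, matching the figure quoted above. The sketch below reproduces this calculation and shows how odds ratios with 95% confidence intervals of the kind reported in the Results can be derived from a fitted logistic regression; the statsmodels-based fit is an illustrative assumption, since the analysis itself was run in SPSS.

```python
import numpy as np
import statsmodels.api as sm

# Single-proportion sample size: n = z^2 * p * (1 - p) / d^2
z, p, d = 1.96, 0.627, 0.07
n = z**2 * p * (1 - p) / d**2            # ~183.4, reported as 183
n_invited = round(183 * 1.3)             # ~238 after allowing for 30% non-response

# Odds ratios with 95% CI from a logistic regression
# (y: 1 = dissatisfied with general dental appearance; X: dummy-coded predictors
#  such as female sex, unhappy with tooth colour, perceived protrusion).
def odds_ratios(y, X):
    fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
    return np.exp(fit.params), np.exp(fit.conf_int())   # OR and 95% Wald CI
```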
null
null
null
null
[ "Background", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Dental appearance is an important feature in determining the attractiveness of a face, and thus plays a key role in human social interactions. Among the significant factors affecting overall dental appearance are tooth color, shape, and position; quality of restoration; and the general arrangement of the dentition, especially of the anterior teeth [1]. Furthermore, an aesthetically pleasing smile was found to depend on tooth color, size, shape, and position, upper lip position, visibility of teeth and amount of gingival display [2]. Although each factor may be considered individually, all components must act together to create a harmonic and symmetric entity that produces the final aesthetic effect [1].\nIn general, people desire for pearly white teeth. Thus, tooth color is one of the most important factors determining satisfaction with dental appearance [1,3]. Self-satisfaction with tooth color decreases with increasing severity of discoloration [4,5]. White teeth have been positively correlated with high ratings of social competence, intellectual ability, psychological adjustment and relationship status [6]. Alternatively, untreated dental caries, non-aesthetic or discolored anterior teeth restorations and missing anterior teeth usually lead to dissatisfaction with dental appearance [3,7-9]. Furthermore, treatments improving dental aesthetics have been found to increase patient quality of life and psychological status [10,11].\nMalocclusion is a common oral disorder, although treatment needs and demands vary. In some populations, tooth misalignments are not regarded as serious enough to necessitate treatment [12-14], whereas, in other populations, the need for orthodontic treatment may be very high [15,16]. There is general agreement that people are motivated to seek orthodontic treatment because of the negative physical, psychological and social impacts of malocclusion, but studies of the effects of malocclusion and its treatment on people's lives have yielded inconsistent results [17,18]. These discrepancies may be due to various interpretations of physical, psychological and social impact and lack of standardized methods to measure these quality of life constructs [17].\nCurrently, cosmetic dentistry has become an important aspect of dentistry. Tooth whitening treatments, anterior teeth restoration, labial veneers crowns, and orthodontic treatment are frequently demanded by patients who interested in improving their dental appearance [3]. We have assessed satisfaction with dental appearance, desired treatments to improve dental appearance, and factors that influence satisfaction with dental appearance among adult patients who attended the dental clinic at the Hospital Universiti Sains Malaysia (HUSM).", "The demographic background of the patients and their satisfaction with their dental appearance are shown in Table 2. Of the 235 patients, (70.2%) were female. Ages ranged from 18 to 62 years with a mean age of 31.5 years (SD 13.0). We found that (52.8%) of these patients were not happy with their general dental appearance, with dissatisfaction with tooth color being the most common (56.2%). In addition, some patients regarded their teeth as poorly aligned (32.3%), crowded (26.4%), and protruding (23.4%). Others reasons for dissatisfaction include self-reported presence of caries (43.4%), non-aesthetic restorations (30.6%), and tooth fractures (15.3%). Patients also answered questions about the treatments they desired to improve their appearance (Table 3). 
We found that 48.1% wished to have their teeth whitened, followed by restoration of tooth color (18.3%), dentures (16.2%), orthodontic treatment (14.0%), and dental crowns (11.5%).\nBackground of patients and satisfaction with dental appearance (n = 235)\nDesired aesthetic dental treatments (n = 235)\nTable 4 shows a comparison between the 111 patients who were and the 124 who were not satisfied with their general dental appearance. We found that satisfaction with dental appearance differed significantly between males and females. In addition, dissatisfaction with tooth color and perception that of having protruding teeth had significant negative impacts on patient satisfaction with general dental appearance. No other dental problem or condition was associated with patient satisfaction with general dental appearance.\nProfile of patients who were (n = 111) and were not (n = 124) satisfied with their general dental appearance\nSimple logistic regression analysis of factors influencing patient satisfaction with dental appearance found no significant associations between patient satisfaction and age, education level, perception of having crowded and poorly aligned teeth, self-reported dental caries, non-aesthetic restorations, and fractures of the anterior teeth (Table 5). However, dissatisfaction with general dental appearance was significantly associated with female gender (OR = 2.70, 95% CI: 1.51-4.82), with unhappiness with tooth color (OR = 3.35, 95% CI: 2.01-5.92) and with regarding their teeth as protruding (OR = 3.42, 95% CI: 1.75-6.71).\nFactors influencing patients' satisfaction with dental appearance by simple logistic regression analysis\na Likelihood Ratio (LR) test\nb Wald test\nMultivariable logistic regression analysis showed that female gender (OR = 2.18, 95% CI: 1.18-4.03), unhappiness with tooth color (OR = 3.05, 95% CI: 1.74-5.34) and the opinion who felt that their teeth were protruded (OR = 2.91, 95% CI: 1.44-5.91) were significant independent determinants of patient satisfaction with general appearance (Table 6). Possible two-way interactions between factors were not significant, and there was no multicollinearity problem. The preliminary final model was checked for fitness. The result of Hosmer-Lemeshow goodness-of-fit test was not significant (p = 0.631, df = 4) and the area under the ROC curve was 0.714, suggesting that the model was fit. The sensitivity and specificity of this model were 64.9% and 64.5% respectively. These results indicated that satisfaction with general dental appearance could be predicted correctly in 64.7% of these patients. When we assessed the contribution of each outlier, we found that none was influential.\nFactors influencing patients' satisfaction with dental appearance by multiple logistic regression analysis\na Likelihood Ratio (LR) test\nFurther analyses were performed to evaluate the perceptions of different groups of patients about the color of their teeth. Table 7 shows the distribution of responses by the socio-demographic background (age, sex and education level) between the103 patients who were and the 132 patients who were not satisfied with their tooth color. Results of the chi-square test showed that satisfaction with tooth color differed significantly between male and female patients whereas none of the other background variables was significant. 
Both simple and multiple logistic regression analyses showed that dissatisfaction with tooth color was significantly higher in female than in males (OR = 1.99; 95% CI: 1.13-3.50).\nSocio-demographic background of patients who were (n = 103) and were not (n = 132) satisfied with their tooth colour", "Attitudes and perceptions towards dental appearance differ among populations and among individuals in a population [20]. We found that of adults attending the HUSM dental clinic, only 47.2% were satisfied with the appearance of their teeth, a lower percentage than in previous studies of different populations. For example, a study of 1,014 patients at a dental school in Ankara, Turkey found that (57.3%) were satisfied with their dental appearance [7] as were 76% of stratified sample of adults in the United Kingdom [21].\nPerception towards dental appearance is determined by cultural factors and individual preferences varying between individuals and cultures and changing over time [1]. In general, older people (age 55 and above) were more likely than younger people to be satisfied with their dental appearance [7,21], suggesting that the appearance of their teeth is not as important to older than to younger individuals [20]. In this study, however, we found that age was not associated with satisfaction with dental appearance suggesting that dental appearance is becoming equally important in both older and younger adults. This is likely due to the strong impact of the media which portray men and women of all ages as needing to look younger and more beautiful. Indeed, a study of 180 people of six different age strata ranging from 13 to 64 years showed that personal satisfaction with tooth color was age-independent [22]. A study in Sweden of two large samples of 8,881 aged 50 years and 8,563 aged 60 years revealed that the majority of respondents in both groups agreed that beautiful and perfect teeth are very important [23]. Another study on elderly aged 73 to 75 year old in Germany also showed that the importance of dental appearance to overall appearance was rated high by the subjects [24].\nTooth color is a critical factor influencing satisfaction with smile appearance [1]. For example a study in the United Kingdom found that the general public were dissatisfied with relatively mildly discoloured teeth indicating their concern about the color of their teeth [4]. Perception of tooth color is a complex phenomenon that is influenced by many factors including lighting conditions, the optical properties of teeth (translucency, opacity, scattering of light, surface gloss), and the viewer's visual experience [25]. We found that most respondents (56.2%) were dissatisfied with the color of their teeth in agreement with studies in populations in other countries [3,5,7]. In agreement with previous results, we found that, dissatisfaction with tooth color may be the primary reason for dissatisfaction with dental appearance [3].\nThe important contribution of tooth color to patients' satisfaction with dental appearance was further highlighted by our finding that tooth whitening was the aesthetic treatment most desired by participants, a finding similar to previous results [3]. In addition a study of 180 female patients in South London [6] showed that whitened teeth were preferred over teeth with original color with the former associated with greater attractiveness. In contrast another study in Germany done by Höfel et al. 
[26] found that perceptions of facial attractiveness were independent of tooth color indicating that satisfaction with dental appearance may not correlate positively with facial attractiveness. This finding underlines the influence of psychosocial attributes on the perception of attractiveness.\nMany of the patients in this study reported having dental caries and non-aesthetic restorations in their front teeth, with and some reported having tooth fractures. All of these conditions will undoubtedly affect the appearance of teeth, presumably leading to patient dissatisfaction with general dental appearance. Although our patients in this study were not significantly affected by any of those conditions a previous study [3] reported that patient satisfaction with dental appearance was significantly influenced by self-reported caries in anterior teeth, but not by other conditions. Further, decayed anterior teeth were shown to have negative impact on perceptions of facial attractiveness [6].\nPatients with high levels of education were found to be more satisfied with the color of their teeth than individuals with lower academic achievement [5,7] as well as to have a lower preference for white teeth [20]. These findings suggested that the higher self-satisfaction with tooth colour observed in individuals with higher academic achievement may reflect higher self-esteem [5,7]. Among our patients the education level did not have impact on satisfaction with tooth color or general dental appearance.\nIt is a commonly thought that women are more interested in their appearance than men. Indeed, female patients were found to be more concerned with their dental appearance than males [20] as well as to be more critical in judging their dental appearance [24]. Similar to previous results [3] we found that women expressed greater dissatisfaction with dental appearance and tooth color than men. In contrast study in Sweden found that men regarded dental appearance as more important than women [23] while other studies found that the differences were not significant [5,7]. Gender associated differences in satisfaction with dental appearance may require further investigations.\nIncreased labio-lingual inclination of the anterior teeth may have caused some patients to regard that their teeth as protruding, another factor that influenced patient satisfaction with general dental appearance. We found that, other tooth malalignments did not affect patient satisfaction with general appearance, although self-reported poorly aligned teeth and upper anterior crowding have been found to be associated with patients satisfaction [3,27]. These discrepancies highlighted the wide individual variation in appreciation of acceptable occlusal features. Individuals who perceived their profiles as being different from average were found more likely to be dissatisfied with their facial appearance [28]. Poor tooth alignment and crowding are among the most common malocclusion traits reported in the literature [29-31], which may explain our finding of a lack of association between patients' perceptions of having these traits and satisfaction with general dental appearance.\nThis study was based entirely on self-reports by patients through an interviewer guided questionnaire. We did not attempt to correlate patient self assessments of their dental problems and their dental records or to compare patients' desired aesthetic treatments and professional assessments of their needs. 
Furthermore, since the subjects of this study were patients who came to the dental clinic for treatment, they would be expected to be more aware and sensitive to their dental appearance.", "Most patients in this study expressed dissatisfaction with their dental appearance. Dissatisfaction was more common in females than in males. Unhappiness with tooth color and feelings of having protruding teeth also had significant negative influences on patient satisfaction with their general dental appearance. The importance of tooth color was further supported by our finding that most patients would like to have their teeth whitened. These results provide useful indications of the potential demands for dental treatment, particularly aesthetic treatment. In addition, understanding patients' perceptions of their dental appearance is an important aspect of patient management which may assist dentists in planning treatments that are acceptable to the patients leading to higher levels of patient satisfaction.", "The authors declare that they have no competing interests.", "MMTO contributed to the design of the study, analyzed the data and wrote the manuscript. NKS contributed to data analyses and interpretation, and revised the manuscript. NH contributed to data acquisition and data management. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6831/11/6/prepub\n" ]
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Dental appearance is an important feature in determining the attractiveness of a face, and thus plays a key role in human social interactions. Among the significant factors affecting overall dental appearance are tooth color, shape, and position; quality of restoration; and the general arrangement of the dentition, especially of the anterior teeth [1]. Furthermore, an aesthetically pleasing smile was found to depend on tooth color, size, shape, and position, upper lip position, visibility of teeth and amount of gingival display [2]. Although each factor may be considered individually, all components must act together to create a harmonic and symmetric entity that produces the final aesthetic effect [1].\nIn general, people desire for pearly white teeth. Thus, tooth color is one of the most important factors determining satisfaction with dental appearance [1,3]. Self-satisfaction with tooth color decreases with increasing severity of discoloration [4,5]. White teeth have been positively correlated with high ratings of social competence, intellectual ability, psychological adjustment and relationship status [6]. Alternatively, untreated dental caries, non-aesthetic or discolored anterior teeth restorations and missing anterior teeth usually lead to dissatisfaction with dental appearance [3,7-9]. Furthermore, treatments improving dental aesthetics have been found to increase patient quality of life and psychological status [10,11].\nMalocclusion is a common oral disorder, although treatment needs and demands vary. In some populations, tooth misalignments are not regarded as serious enough to necessitate treatment [12-14], whereas, in other populations, the need for orthodontic treatment may be very high [15,16]. There is general agreement that people are motivated to seek orthodontic treatment because of the negative physical, psychological and social impacts of malocclusion, but studies of the effects of malocclusion and its treatment on people's lives have yielded inconsistent results [17,18]. These discrepancies may be due to various interpretations of physical, psychological and social impact and lack of standardized methods to measure these quality of life constructs [17].\nCurrently, cosmetic dentistry has become an important aspect of dentistry. Tooth whitening treatments, anterior teeth restoration, labial veneers crowns, and orthodontic treatment are frequently demanded by patients who interested in improving their dental appearance [3]. We have assessed satisfaction with dental appearance, desired treatments to improve dental appearance, and factors that influence satisfaction with dental appearance among adult patients who attended the dental clinic at the Hospital Universiti Sains Malaysia (HUSM).", "This cross sectional study was carried out from June 1, 2009 to January 31, 2010 among patients who attended the HUSM dental clinic. All included patients were newly registered adults >18 years old, who had not received any dental treatment within the previous six months, were able to understand the Malay language and have no clear evidence of cognitive disturbances. Sample size was calculated using the formula estimating a single proportion with a requirement for 95% confidence [19]. The prevalence of dissatisfaction with dental appearance was estimated to be 62.7% based on the satisfaction with dental aesthetics among adult patients attending a military dental clinic in Tel Aviv, Israel [3]. 
Considering the available resources, a sample size of 183 was selected with a precision of 0.07 (7%). To accommodate for a 30% non-response rate, 238 patients were invited to participate in this study.\nA systematic random sampling technique was used to select the study sample. The sampling interval was decided based on the estimated number of eligible patients attending the clinic on a normal outpatient day, with every tenth patient was invited to participate. No possible biases regarding the selection of the study population were anticipated and the samples were representative of the reference population. This study was approved by the Research and Ethics Committee (Human), Universiti Sains Malaysia.\nA structured, interviewer guided questionnaire (Table 1) was used for data collection. The questionnaire consisted of questions on socio-demographic items including sex, age, and level of education, as well as questions on each patient's satisfaction with his/her then-current general dental appearance. Patients were also asked about their satisfaction with tooth color, perceived malalignment of teeth (crowding, poorly aligned or protruding), caries in anterior teeth, non-aesthetic anterior tooth color restoration and presence of tooth fracture. In addition, patients were asked to select the aesthetic treatments they wished to undergo, including orthodontic treatment, crowns, tooth whitening, tooth color restorations and partial dentures.\nQuestionnaire used in the study\nPrior to this study the clarity of the questionnaire was pre-tested on 15 patients who were not involved in the study. Feedback regarding problems understanding and answering the questionnaire was obtained and addressed. Each patient provided written informed consent before participation in this study.\nData were entered and analyzed using the Statistical Package for Social Sciences (SPSS) for Windows software (version 12.0; SPSS Inc, Chicago). Descriptive statistics such as mean and standard deviation (SD) for continuous variables and frequency and percentage for categorical variables were determined. The chi-square test was used to compare the sex, age, education levels of patients who were and were not satisfied with their dental appearance. The level of significance was set at 0.05.\nFactors influencing patient satisfaction with dental appearance were determined at both the univariate and multivariate levels using simple logistic regression analysis and multiple logistic regression analysis, respectively. Variables selected for inclusion in the multiple logistic regression analysis model were selected using the forward stepwise logistic regression method. Following the fit of the preliminary model, the importance of each variable was verified. The interaction terms were checked using the Likelihood Ratio (LR) test. Multicollinearity problems were was identified by the Variance Inflation Factor (VIF) test.\nThe final model was assessed for fitness using the Hosmer-Lemeshow goodness-of-fit test. The classification tables for sensitivity and specificity as well as the area under the Receiver Operating Characteristic (ROC) curve were also recorded to assess the model fitness. Influential outliers were identified using Cook's distance. Data points above 1.0 were considered as influential outliers.", "The demographic background of the patients and their satisfaction with their dental appearance are shown in Table 2. Of the 235 patients, (70.2%) were female. 
Ages ranged from 18 to 62 years with a mean age of 31.5 years (SD 13.0). We found that (52.8%) of these patients were not happy with their general dental appearance, with dissatisfaction with tooth color being the most common (56.2%). In addition, some patients regarded their teeth as poorly aligned (32.3%), crowded (26.4%), and protruding (23.4%). Others reasons for dissatisfaction include self-reported presence of caries (43.4%), non-aesthetic restorations (30.6%), and tooth fractures (15.3%). Patients also answered questions about the treatments they desired to improve their appearance (Table 3). We found that 48.1% wished to have their teeth whitened, followed by restoration of tooth color (18.3%), dentures (16.2%), orthodontic treatment (14.0%), and dental crowns (11.5%).\nBackground of patients and satisfaction with dental appearance (n = 235)\nDesired aesthetic dental treatments (n = 235)\nTable 4 shows a comparison between the 111 patients who were and the 124 who were not satisfied with their general dental appearance. We found that satisfaction with dental appearance differed significantly between males and females. In addition, dissatisfaction with tooth color and perception that of having protruding teeth had significant negative impacts on patient satisfaction with general dental appearance. No other dental problem or condition was associated with patient satisfaction with general dental appearance.\nProfile of patients who were (n = 111) and were not (n = 124) satisfied with their general dental appearance\nSimple logistic regression analysis of factors influencing patient satisfaction with dental appearance found no significant associations between patient satisfaction and age, education level, perception of having crowded and poorly aligned teeth, self-reported dental caries, non-aesthetic restorations, and fractures of the anterior teeth (Table 5). However, dissatisfaction with general dental appearance was significantly associated with female gender (OR = 2.70, 95% CI: 1.51-4.82), with unhappiness with tooth color (OR = 3.35, 95% CI: 2.01-5.92) and with regarding their teeth as protruding (OR = 3.42, 95% CI: 1.75-6.71).\nFactors influencing patients' satisfaction with dental appearance by simple logistic regression analysis\na Likelihood Ratio (LR) test\nb Wald test\nMultivariable logistic regression analysis showed that female gender (OR = 2.18, 95% CI: 1.18-4.03), unhappiness with tooth color (OR = 3.05, 95% CI: 1.74-5.34) and the opinion who felt that their teeth were protruded (OR = 2.91, 95% CI: 1.44-5.91) were significant independent determinants of patient satisfaction with general appearance (Table 6). Possible two-way interactions between factors were not significant, and there was no multicollinearity problem. The preliminary final model was checked for fitness. The result of Hosmer-Lemeshow goodness-of-fit test was not significant (p = 0.631, df = 4) and the area under the ROC curve was 0.714, suggesting that the model was fit. The sensitivity and specificity of this model were 64.9% and 64.5% respectively. These results indicated that satisfaction with general dental appearance could be predicted correctly in 64.7% of these patients. 
When we assessed the contribution of each outlier, we found that none was influential.\nTable 6: Factors influencing patients' satisfaction with dental appearance by multiple logistic regression analysis\na Likelihood Ratio (LR) test\nFurther analyses were performed to evaluate the perceptions of different groups of patients about the color of their teeth. Table 7 shows the distribution of responses by socio-demographic background (age, sex and education level) between the 103 patients who were and the 132 patients who were not satisfied with their tooth color. Results of the chi-square test showed that satisfaction with tooth color differed significantly between male and female patients, whereas none of the other background variables was significant. Both simple and multiple logistic regression analyses showed that dissatisfaction with tooth color was significantly higher in females than in males (OR = 1.99; 95% CI: 1.13-3.50).\nTable 7: Socio-demographic background of patients who were (n = 103) and were not (n = 132) satisfied with their tooth colour", "Attitudes and perceptions towards dental appearance differ among populations and among individuals within a population [20]. We found that of adults attending the HUSM dental clinic, only 47.2% were satisfied with the appearance of their teeth, a lower percentage than in previous studies of different populations. For example, a study of 1,014 patients at a dental school in Ankara, Turkey found that 57.3% were satisfied with their dental appearance [7], as were 76% of a stratified sample of adults in the United Kingdom [21].\nPerception of dental appearance is determined by cultural factors and individual preferences, varying between individuals and cultures and changing over time [1]. In general, older people (age 55 and above) were more likely than younger people to be satisfied with their dental appearance [7,21], suggesting that the appearance of their teeth is less important to older than to younger individuals [20]. In this study, however, we found that age was not associated with satisfaction with dental appearance, suggesting that dental appearance is becoming equally important to both older and younger adults. This is likely due to the strong impact of the media, which portray men and women of all ages as needing to look younger and more beautiful. Indeed, a study of 180 people in six different age strata ranging from 13 to 64 years showed that personal satisfaction with tooth color was age-independent [22]. A study in Sweden of two large samples of 8,881 people aged 50 years and 8,563 people aged 60 years revealed that the majority of respondents in both groups agreed that beautiful and perfect teeth are very important [23]. Another study of elderly people aged 73 to 75 years in Germany also showed that the subjects rated the importance of dental appearance to overall appearance highly [24].\nTooth color is a critical factor influencing satisfaction with smile appearance [1]. For example, a study in the United Kingdom found that the general public were dissatisfied with even relatively mildly discoloured teeth, indicating their concern about the color of their teeth [4]. Perception of tooth color is a complex phenomenon that is influenced by many factors including lighting conditions, the optical properties of teeth (translucency, opacity, scattering of light, surface gloss), and the viewer's visual experience [25].
We found that most respondents (56.2%) were dissatisfied with the color of their teeth, in agreement with studies of populations in other countries [3,5,7]. Also in agreement with previous results, we found that dissatisfaction with tooth color may be the primary reason for dissatisfaction with dental appearance [3].\nThe important contribution of tooth color to patients' satisfaction with dental appearance was further highlighted by our finding that tooth whitening was the aesthetic treatment most desired by participants, a finding similar to previous results [3]. In addition, a study of 180 female patients in South London [6] showed that whitened teeth were preferred over teeth of the original color, with the former associated with greater attractiveness. In contrast, another study in Germany by Höfel et al. [26] found that perceptions of facial attractiveness were independent of tooth color, indicating that satisfaction with dental appearance may not correlate positively with facial attractiveness. This finding underlines the influence of psychosocial attributes on the perception of attractiveness.\nMany of the patients in this study reported having dental caries and non-aesthetic restorations in their front teeth, and some reported having tooth fractures. All of these conditions will undoubtedly affect the appearance of teeth, presumably leading to patient dissatisfaction with general dental appearance. Although satisfaction among the patients in this study was not significantly affected by any of these conditions, a previous study [3] reported that patient satisfaction with dental appearance was significantly influenced by self-reported caries in anterior teeth, but not by other conditions. Further, decayed anterior teeth were shown to have a negative impact on perceptions of facial attractiveness [6].\nPatients with high levels of education were found to be more satisfied with the color of their teeth than individuals with lower academic achievement [5,7], as well as to have a lower preference for white teeth [20]. These findings suggest that the higher self-satisfaction with tooth colour observed in individuals with higher academic achievement may reflect higher self-esteem [5,7]. Among our patients, education level had no impact on satisfaction with tooth color or general dental appearance.\nIt is commonly thought that women are more interested in their appearance than men. Indeed, female patients were found to be more concerned with their dental appearance than males [20], as well as to be more critical in judging their dental appearance [24]. Similar to previous results [3], we found that women expressed greater dissatisfaction with dental appearance and tooth color than men. In contrast, a study in Sweden found that men regarded dental appearance as more important than women did [23], while other studies found that the differences were not significant [5,7]. Gender-associated differences in satisfaction with dental appearance may require further investigation.\nIncreased labio-lingual inclination of the anterior teeth may have caused some patients to regard their teeth as protruding, another factor that influenced patient satisfaction with general dental appearance. We found that other tooth malalignments did not affect patient satisfaction with general appearance, although self-reported poorly aligned teeth and upper anterior crowding have previously been found to be associated with patient satisfaction [3,27].
These discrepancies highlight the wide individual variation in what are considered acceptable occlusal features. Individuals who perceived their profiles as being different from average were found to be more likely to be dissatisfied with their facial appearance [28]. Poor tooth alignment and crowding are among the most common malocclusion traits reported in the literature [29-31], which may explain our finding of a lack of association between patients' perceptions of having these traits and satisfaction with general dental appearance.\nThis study was based entirely on self-reports by patients through an interviewer-guided questionnaire. We did not attempt to correlate patients' self-assessments of their dental problems with their dental records, or to compare patients' desired aesthetic treatments with professional assessments of their needs. Furthermore, since the subjects of this study were patients who came to the dental clinic for treatment, they would be expected to be more aware of and sensitive to their dental appearance.", "Most patients in this study expressed dissatisfaction with their dental appearance. Dissatisfaction was more common in females than in males. Unhappiness with tooth color and feelings of having protruding teeth also had significant negative influences on patient satisfaction with their general dental appearance. The importance of tooth color was further supported by our finding that most patients would like to have their teeth whitened. These results provide useful indications of the potential demand for dental treatment, particularly aesthetic treatment. In addition, understanding patients' perceptions of their dental appearance is an important aspect of patient management that may assist dentists in planning treatments acceptable to patients, leading to higher levels of patient satisfaction.", "The authors declare that they have no competing interests.", "MMTO contributed to the design of the study, analyzed the data and wrote the manuscript. NKS contributed to data analyses and interpretation, and revised the manuscript. NH contributed to data acquisition and data management. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6831/11/6/prepub\n" ]
[ null, "methods", null, null, null, null, null, null ]
[]
Genome-wide discovery of missing genes in biological pathways of prokaryotes.
21342538
Reconstruction of biological pathways is typically done through mapping well-characterized pathways of model organisms to a target genome, through orthologous gene mapping. A limitation of such pathway-mapping approaches is that the mapped pathway models are constrained by the composition of the template pathways, e.g., some genes in a target pathway may not have corresponding genes in the template pathways, the so-called "missing gene" problem.
BACKGROUND
We present a novel pathway-expansion method for identifying additional genes that are possibly involved in a target pathway after pathway mapping, to fill holes caused by missing genes as well as to expand the mapped pathway model. The basic idea of the algorithm is to identify genes in the target genome whose homologous genes share common operons with homologs of any mapped pathway genes in some reference genome, and to add such genes to the target pathway if their functions are consistent with the cellular function of the target pathway.
METHODS
We have implemented this idea using a graph-theoretic approach and demonstrated the effectiveness of the algorithm on known pathways of E. coli in the KEGG database. On all KEGG pathways containing at least 5 genes, our method achieves an average positive predictive value (PPV) of 60%, and the performance increases as more seed genes are added. Analysis shows that our method is highly robust.
RESULTS
An effective method is presented to find missing genes in biological pathways of prokaryotes, which achieves high prediction reliability on E. coli at a genome level. Numerous missing genes are found to be related to known E. coli pathways, which can be further validated through biological experiments. Overall this method is robust and can be used for functional inference.
CONCLUSIONS
[ "Algorithms", "Chromosome Mapping", "Computational Biology", "Escherichia coli", "Genome, Archaeal", "Genome, Bacterial", "Genomics", "Operon", "Phylogeny" ]
3044263
null
null
Methods
[SUBTITLE] High-level description of our algorithm [SUBSECTION] We first represent genes in the target genome or in a set of specified reference genomes, and their functional relatedness, as a graph, called a reference graph, where each gene in any of the genomes is represented as a vertex, and two genes have an edge linking them if they are in the same operon or they are homologous. We then define a linkage graph for the target genome such that each gene is represented as a vertex and two genes have an edge if and only if there is a path linking the two genes in the reference graph, and the distance of the edge is defined as the distance of the shortest path between the two genes in the reference graph. We have augmented this distance by including two additive terms: a penalty factor (system(error)) used to model the reliability of a predicted functional relationship, and a phylogeny-based distance used to capture co-evolutionary relationships between two genes, which are more general than homology relationships. Our goal here is to find genes that have short distances, defined above, to genes in a known pathway, and predict that they are involved in this pathway if their distances are ranked among the top such genes. The whole procedure is summarized in Figure 1, with the detailed steps explained as follows. Figure 1 caption: The flow chart of the method. The method uses gene similarity and operon information to first construct a genome reference graph. It then hierarchically fuses the shortest path distance and phylogenetic distance to rank all candidate genes. [SUBTITLE] Selection of reference genomes [SUBSECTION] Currently over 1,000 bacterial and archaean genomes have been sequenced and are publicly available (NCBI release of September 2009). From this set, we have selected 185 strains (non-redundant genomes and plasmids) (see Additional File 1) from 185 different genera using the following rule: for each genus, select the genome with the longest sequence.
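A minimal sketch of the genome-selection rule just described (one representative per genus, keeping the longest sequence). The table genomes.tsv with genus, accession and length columns is a hypothetical stand-in for the NCBI genome listing, not part of the published method.

import pandas as pd

genomes = pd.read_table("genomes.tsv")           # hypothetical columns: genus, accession, length
reference_set = genomes.loc[genomes.groupby("genus")["length"].idxmax()]
print(len(reference_set), "reference genomes selected")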
[SUBTITLE] Calculation of homology-based distance [SUBSECTION] For each pair of genes xi, xj in the target genome and the 185 reference genomes, we use the E-value of BLAST (with default parameters) to define their homology-based distance ds(xi, xj) (Equation 1), where ps(xi, xj) is the BLAST E-value for genes xi, xj, and 185 is a normalization factor, since an E-value smaller than 1e-185 is reported as 0 by the BLAST program. Clearly ds(xi, xj) lies between 0 and 1, and the more similar two genes are, the smaller the ds(xi, xj) value is. [SUBTITLE] Calculation of operon-based distance [SUBSECTION] We have used the operons predicted using our own program [18], which is considered the most reliable operon prediction method in the public domain [17]. A probability calculated by this method represents the likelihood that two neighbouring genes are in the same operon. We apply this program to all of the 185 reference genomes and obtain the probability po(xi, xj) between two genes xi, xj in each genome. For any pair of neighbouring genes xi, xj in the same genome (target or reference), we define their operon-based distance do(xi, xj) (Equation 2), where po(xi, xj) represents the probability that xi, xj are in the same operon as given in [18].
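Equations (1) and (2) are not reproduced in this text, so the following sketch only encodes the stated properties of the two distances: the homology-based distance is a log-scaled BLAST E-value normalised by 185 and bounded to [0, 1] (0 for E-values at or below 1e-185), and the operon-based distance shrinks as the predicted co-operon probability grows. The exact published formulas may differ; treat these as plausible stand-ins.

import math

def homology_distance(e_value: float) -> float:
    # Assumed form: log10-scaled E-value, normalised by 185, clipped to [0, 1].
    if e_value <= 1e-185:
        return 0.0
    return min(1.0, max(0.0, 1.0 + math.log10(e_value) / 185.0))

def operon_distance(p_same_operon: float) -> float:
    # Assumed form: complement of the predicted co-operon probability.
    return 1.0 - p_same_operon

print(homology_distance(1e-92), operon_distance(0.999))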
[SUBTITLE] Reference graph and linkage graph [SUBSECTION] We define a reference graph over all genes in the target as well as the reference genomes as follows. Each gene is represented as a vertex, and an edge between two genes is created if (i) the two genes are in the same operon, with their edge distance defined to be the operon-based distance between the two genes; or (ii) the two genes in different genomes are homologous, with their edge distance defined to be their homology-based distance. Based on the reference graph, we define a linkage graph on genes in the target genome. For any pair of genes xi, xj, we define an edge between them if and only if there is a path xi, x1, x2, …, xj in the reference graph, with its edge distance set to be the distance of the shortest path between the two genes (Figure 2). We intend to use an edge in this graph to capture a functional linkage relationship, possibly established through multiple steps of co-operon and homology relationships. We recognize that the reliability of such defined edges could go down (largely independently of the reliability of individual operon and homology predictions) as the number of edges in the above path goes up. Hence we included a penalty factor, system(error), which is proportional to the number of edges in the path, and redefined the path distance of a gene pair accordingly (Equation 3), where k is the number of edges in the path, α is a scaling factor, and E(operon) and E(similarity) are the set of operon edges and the set of similarity edges, respectively. In our current implementation, we set α = 380 and system(error) = 0.06 based on a ten-fold cross-validation method (see Parameter Selection). Figure 2 caption: The relationship path through operon edges and similarity edges. Given a reference pathway, its known genes are used as seeds to calculate the shortest distances to candidate genes. For example, gene1 and gene2 are connected with the same candidate gene. The path from gene1 to the candidate gene (path1) is drawn as a solid line, and the path from gene2 to the candidate gene (path2) as a dashed line. Both paths are constructed from operon edges (colour arrows) and similarity edges (solid or dashed lines).
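The linkage-graph distance can be computed with an ordinary weighted shortest-path search once the per-edge system(error) penalty is folded into every edge weight, since a k-edge path then accumulates exactly k penalties. Equation (3) is not reproduced here, so the α scaling of similarity edges is omitted; the gene names homolog_A and homolog_B and all edge distances below are made-up illustrative values (ybiC, yiaY and NC_007925 are taken from the case study later in the paper).

import networkx as nx

SYSTEM_ERROR = 0.06  # per-edge penalty selected by the ten-fold cross-validation above

G = nx.Graph()
# Similarity edge: E. coli ybiC to its assumed homolog in reference genome NC_007925.
G.add_edge(("E.coli", "ybiC"), ("NC_007925", "homolog_A"), weight=0.10 + SYSTEM_ERROR)
# Operon edge inside NC_007925 (distance = 1 - co-operon probability of 0.999).
G.add_edge(("NC_007925", "homolog_A"), ("NC_007925", "homolog_B"), weight=0.001 + SYSTEM_ERROR)
# Similarity edge from the second homolog back to the E. coli candidate gene yiaY.
G.add_edge(("NC_007925", "homolog_B"), ("E.coli", "yiaY"), weight=0.15 + SYSTEM_ERROR)

d = nx.shortest_path_length(G, ("E.coli", "ybiC"), ("E.coli", "yiaY"), weight="weight")
print("penalised linkage-graph distance ybiC -> yiaY:", round(d, 3))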
[SUBTITLE] Phylogeny-based distance [SUBSECTION] We also considered a more general class of functional relationships, defined in terms of the phylogenetic profiles of genes, which measure their co-evolutionary relationship [20,21]. Basically, the phylogenetic profile X of a gene against a set of n reference genomes is a binary string of length n, with the ith position being 1 if the gene has a homolog in the ith reference genome, and 0 otherwise. It has been found that two genes (of the same genome) are generally functionally related if their phylogenetic profiles are highly similar [20]. We have used a BLAST E-value cutoff of e-3 for determining the presence of a homolog in another genome [22]. We use the following to measure the similarity between two phylogenetic profiles, similar to that reported in [23]. Given the phylogenetic profiles Xi and Xj for genes xi and xj, their phylogeny-based distance is defined by Equation (4), where dhamming(Xi, Xj) is the Hamming distance between Xi and Xj, and Entropy(Xi, Xj) is the entropy of the common part of Xi and Xj, defined by Equation (5), with p being the frequency of 1's in the common positions between the two phylogenetic profiles. Note that the more similar two phylogenetic profiles are, the smaller their distance is.
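Equations (4) and (5) are not reproduced in this text, so the sketch below only shows the two stated ingredients, the Hamming distance between two profiles and the entropy of their common part (read here as the positions where the profiles agree, with p the fraction of those positions equal to 1), plus one plausible way of combining them. Both the reading of "common part" and the final combination are assumptions, not the paper's exact formula.

import math

def hamming(profile_a: str, profile_b: str) -> int:
    return sum(a != b for a, b in zip(profile_a, profile_b))

def common_entropy(profile_a: str, profile_b: str) -> float:
    # Entropy of the agreeing positions; p = fraction of those positions that are '1'.
    common = [a for a, b in zip(profile_a, profile_b) if a == b]
    if not common:
        return 0.0
    p = common.count("1") / len(common)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def phylo_distance(profile_a: str, profile_b: str) -> float:
    # Assumed combination: normalised Hamming distance weighted by the entropy term.
    return (hamming(profile_a, profile_b) / len(profile_a)) * common_entropy(profile_a, profile_b)

print(phylo_distance("1101001110", "1001001010"))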
[SUBTITLE] Rank functional relatedness of candidate genes [SUBSECTION] Our goal here is to rank all the genes in a target genome in terms of their possible relationship with a set of seed genes (known genes in a pathway), by fusing the path distance and the phylogeny-based distance. For a given pathway P, let its known gene set be G(P) and |G(P)| be the number of its genes. We define a path distance from P to a candidate gene xi (Equation 6), and similarly a phylogenetic distance from P to xi (Equation 7). Our experience has been that for both the path distance and the phylogenetic distance, the distances of the top-ranked genes tend to be more reliable. Hence only the top K candidate genes for each gene xj ∈ G(P) are considered and the remainder are ignored. For a seed gene, we only take the K closest genes measured by the reference-graph distance, where K ranges from 5 to 30. Similarly, only the top K (= 50 in this case) genes closest to a seed gene are considered for the phylogenetic distance [20]. As a result, some candidate genes may not have a path distance or a phylogenetic distance, due to their ranking. The final combined distance from any gene xi to pathway P is defined by Equation (8), where β is a scaling factor set to 5 based on the ten-fold cross-validation method (see Parameter Selection), and T is set to 2 if gene xi has both the path distance and the phylogenetic distance, and to 1 if it has only one distance defined. The candidate genes are ranked by their combined distance and the final top γ genes are output (γ = 10 in this study).
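Equations (6)-(8) are not reproduced in this text. In the sketch below the pathway-to-gene path distance is taken as the average of the surviving per-seed distances (the appearance of |G(P)| in the original definition suggests an average), and the combined score as (path distance + β·phylogenetic distance)/T, with T the number of distances defined; both choices, and all numeric values and the gene "hypothetical_gene", are assumptions for illustration only.

BETA, GAMMA = 5.0, 10

def pathway_distance(per_seed_distances):
    # Average over the seed genes whose distances survived the top-K filter.
    values = list(per_seed_distances)
    return sum(values) / len(values) if values else None

def combined_distance(d_path, d_phylo):
    defined = [d for d in (d_path, d_phylo) if d is not None]
    if not defined:
        return float("inf")
    weighted = (d_path if d_path is not None else 0.0) + \
               BETA * (d_phylo if d_phylo is not None else 0.0)
    return weighted / len(defined)            # T = number of distances defined

# gene -> (path distance or None, phylogenetic distance or None); values are made up.
candidates = {"yiaY": (pathway_distance([0.38, 0.44, 0.51]), 0.20),
              "eutD": (0.55, None),
              "hypothetical_gene": (None, 0.90)}
ranking = sorted(candidates, key=lambda g: combined_distance(*candidates[g]))
print(ranking[:GAMMA])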
[SUBTITLE] Parameter Selection and Validation Method [SUBSECTION] For a predicted target gene and a target pathway, the gene is considered a positive prediction (based on a partial gene list of the pathway) if it is part of the pathway. For the following assessments of our predictions, we use the standard notations TP for true positive predictions, TN for true negative predictions, FP for false positive predictions and FN for false negative predictions, and we use the standard measures of sensitivity (SE), specificity (SP) and positive predictive value (PPV) to assess the performance of our prediction method for missing genes (Equations 9-11). To assess the prediction performance against a set of pathways, we use the average of these three measures across all the pathways (Equations 12-14), where SPi, SEi and PPVi are the SP, SE and PPV for the ith pathway, respectively, and N is the number of pathways considered. For each to-be-determined parameter in our program, a ten-fold cross-validation procedure is used to derive the optimal value. Specifically, all the pathways are divided randomly into ten parts, nine for training and one for testing each time. The value with the best average is finally selected. The leave-one-out cross-validation procedure is used to assess the performance. For each pathway, its known genes are used as the seed-gene set. The procedure removes each gene from the pathway seed set one at a time, and then calculates the final combined distance from the remaining genes to the removed gene and to all the other genes of the target genome. If the removed gene appears in the final top γ genes, it is counted as a successful prediction.
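A small self-contained sketch of the evaluation measures and the leave-one-out protocol described above. The rank_candidates argument is a hypothetical stand-in for the combined-distance ranking; nothing here is tied to the authors' actual implementation.

def se_sp_ppv(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    return sensitivity, specificity, ppv

def leave_one_out_recovery(pathway_genes, genome_genes, rank_candidates, gamma=10):
    # Hold out each known pathway gene in turn and check whether the ranking,
    # seeded with the remaining genes, recovers it among the top gamma candidates.
    hits = 0
    for held_out in pathway_genes:
        seeds = [g for g in pathway_genes if g != held_out]
        candidates = [g for g in genome_genes if g not in seeds]
        if held_out in rank_candidates(seeds, candidates)[:gamma]:
            hits += 1
    return hits / len(pathway_genes)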
null
null
null
null
[ "Background", "High-level description of our algorithm", "Selection of reference genomes", "Calculation of homology-based distance", "Calculation of operon-based distance", "Reference graph and linkage graph", "Phylogeny-based distance", "Rank functional relatedness of candidate genes", "Parameter Selection and Validation Method", "Results", "Performance measure calculation", "Case study of the predicted pyruvate metabolism pathway", "Robustness analysis of parameters", "Discussion", "Concluding remark", "Competing interests", "Authors' contributions" ]
[ "Reconstruction of biological pathways is a fundamental problem in understanding the functional mechanisms of cellular organisms. Substantial efforts have been put into the elucidation of biological pathways, particularly for prokaryotic organisms, in a systematic manner based on high-throughput omic data and computational prediction. As a result, a number of pathway databases have been developed and are being widely used, such as KEGG and BioCyc [1-5]. These databases not only serve as an information resource for retrieving well-characterized pathways for specific organisms but also provide a set of pathway templates for reconstructing pathways for organisms that are not directly covered by the databases, as substantial portions of homologous pathways may be conserved across different organisms, particularly related organisms.\nA number of computer programs have been developed for pathway reconstruction through mapping known pathways from one organism to another. While some success has been reported on these programs, there has been a general issue associated with such homologous pathway mapping-based approaches, which is that homologous pathways are generally not identical and hence the mapped pathways could miss some parts not covered by their well-characterized homologous template pathways. This problem, called pathway holes or missing genes, has been widely recognized [6-9]. A number of methods have been developed to find such missing genes, based mainly on the idea of finding genes that are functionally associated with genes already in the mapped pathways. One class of such methods attempts to find enzyme-encoding genes missing in a mapped metabolic pathway based on multiple types of gene association information [8-10], taking advantage of the fact that genes encoding a metabolic pathway tend to group into clusters (e.g., operons). Another class of methods attempt to identify functional modules from some large gene association networks or groups [11-15], and then to suggest possible candidates for missing genes based on genes found in the same functional modules of genes already in mapped pathways. While these methods have provided useful information for searching for missing genes, there is clearly substantial room for improvement in terms of the functional specificity of their predicted candidates and the scope of applicability of the existing methods [16]. Among the various areas for further improvements, we identified a few we can possibly improve on using the currently available information: (i) there have not been reliable methods for consideration and inclusion of functionally uncharacterized genes (often referred to as hypothetical and conserved genes) into partially predicted pathway models (e.g, mapped pathways); (ii) while (conserved) genomic synteny has been utilized for prediction of functionally associated genes, its true usefulness, other than operon information, is yet to be well documented. Previous studies have shown that there is a strong link between genes in the same operons and genes working in the same biological pathways [17]. So full utilization of operon information should be a key direction for improving biological pathways, particular now as the state of the art prediction methods for operons have reached high accuracy (~90%) [17-19].\nWe present, in this paper, a novel computational method for identification and functional annotation of missing genes in a predicted pathway model, either through homologous pathway mapping or using other methods. 
The basic idea of the method can be outlined as follows. For any specified target genome, we define a distance between any pair of genes in the genome to measure the level of their functional relatedness in terms of a set of reference genomes. Specifically, two genes are functionally related if they (i) are homologous, (ii) share a common operon directly or through their homologs in a reference genome, (iii) are phylogenetically related, or (iv) deemed to be functionally related through combinations of the first three criteria. For any pair of functionally related genes in the target genome, their distance is defined essentially as the minimum number of applications of this recursive definition. Our algorithm identifies genes possibly involved in a target pathway based on their distances to genes already in the pathway. We have tested the algorithm on all characterized pathways of E. coli, using portions of the pathways as the initial pathway genes (called seeds), and found that the vast majority of the remaining genes of these known pathways are all within short distances to the seeds, confirming the effectiveness of our distance measure. Our study has also identified numerous genes with short distances to the known pathway genes, which we believe are highly promising candidates for addition to these known E. coli pathways. Limited analyses of the potential functional roles of these genes have been carried out, and reported in this paper.", "We first represent genes in the target genome or in a set of specified reference genomes, and their functional relatedness as a graph, called a reference graph, where each gene in any of the genomes is represented as a vertex, and two genes have an edge linking them if they are in the same operon or they are homologous. We then define a linkage graph for the target genome such that each gene is represented as a vertex and two genes have an edge if and only if there is a path linking the two genes in the reference graph, and the distance of the edge is defined as the distance of the shortest path between the two genes in the reference graph. We have augmented this distance by including two additive terms, one penalty factor (system(error)) used to model the reliability of a predicted functional relationship, and a phylogeny-based distance used to capture co-evolutionary relationships, more general than homology relationships among genes, between two genes. Our goal here is to find genes that have short distances, defined above, to genes in a known pathway, and predict that they are involved in this pathway if their distances are ranked among the top such genes. The whole procedure is summarized in Figure 1, with the detailed steps explained as follows.\nThe flow chart of the method. The method uses gene similarity and operon information to first construct a genome reference graph. It then hierarchically fuses the shortest path distance and phylogenetic distance to rank all candidate genes.", "Currently over 1,000 bacterial and archaean genomes have been sequenced and are publicly available (NCBI release of September 2009). 
From this set, we have selected 185 strains (non-redundant genomes and plasmids) (see Additional File 1) from 185 different genera using the following rule: for each genus, select the genome with the longest sequence.", "For each pair of genes xi, xj, in the target genome and the 185 reference genomes, we use the E-value of BLAST (with default parameters) to define their homology-based distance ds(xi, xj) as follows:(1)\nwhere ps(xi, xj) is the BLAST E-value for genes xi, xj, and 185 is a normalization factor since when the E-value is smaller than 1e-185, it is set as 0 in the BLAST program. Clearly ds(xi, xj) is between 0 and 1; and the more similar two genes are, the smaller the ds(xi, xj) value is.", "We have used the operons predicted using our own program [18], which is considered the most reliable operon prediction method in the public domain [17]. A probability calculated by this method represents the likelihood that two neighbouring genes are in the same operon. We apply this program to all of the 185 reference genomes and get the probability po(xi, xj) between two genes xi, xj in each genome. For any pair of neighbouring genes xi, xj in the same genome (target or reference), we define their operon-based distancedo(xi, xj) as follows:(2)\nwhere po(xi, xj) represents the probability that xi, xj are in the same operon as given in [18].", "We define a reference graph over all genes in the target as well as the reference genomes as follows. Each gene is represented as a vertex, and an edge between two genes is created if (i) the two genes are in the same operon, with their edge distance defined to be the operon-based distance between the two genes; or (ii) the two genes in different genomes are homologous, with their edge distance defined to be their homology-based distance. Based on the reference graph, we define a linkage graph on genes in the target genome. For any pair of genes, xi, xj, we define an edge between them if and only if there is a path xi, x1,x2, … xj in the reference graph, with its edge distance set to be the distance of the shortest path between the two genes (Figure 2). We intend to use an edge in this graph to capture a functional linkage relationship possibly through multiple steps of co-operon and homologous relationship. We recognize that the reliability of such defined edges could go down (largely independent of the reliability of individual operon and homology predictions) as the number of edges in the above path goes up. Hence we included a penalty factor, system (error), which is proportional to the number of edges in the path, and redefined the path distance of a gene pair as follows:(3)\nThe relationship path through operon edge and similarity edge. Given a reference pathway, its known genes are used as seeds to calculate the shortest distances to candidate genes. For example, gene1 and gene2 are connected with the same candidate gene. The path from gene1 to candidate gene (path1) is noted as solid line, and gene2 to candidate gene (path2) as dashed line. The paths are both constructed by operon edge (colour arrow) and similarity edge (solid or dashed line).\nwhere k is the number of edges in the path, and α is a scaling factor. In our current implementation, we set α = 380 and system(error) = 0.06 based on a ten-fold cross-validation method (see Parameter Selection). 
E(operon) and E(similarity) are the set of operon edges and the set of similarity edges, respectively.", "We also considered a more general class of functional relationship defined in terms of the phylogenetic profiles of genes, which measures their co-evolutionary relationship [20,21]. Basically, the phylogenetic profile X of a gene against a set of n reference genomes is a binary string of length n, with the ith position being 1, if the gene has a homolog in the ith reference genome, and 0 otherwise. It has been found that two genes (of the same genome) are generally functionally related if their phylogenetic profiles are highly similar [20]. We have used a BLAST E-value e-3 as the cutoff for determining the presence of a homolog in another genome [22]. We use the following to measure the similarity between two phylogenetic profiles, similar to that reported in [23]. Given the phylogenetic profiles Xi and Yj for genes xi and yj, their phylogeny-based distance is defined as follows:(4)\nwhere, dhamming(Xi, Xj) is the Hamming distance between Xi and Xj, and Entropy (Xi, Xj) is the entropy of the common part of Xi and Xj, defined as follows:(5)\nwith p being the frequency of 1’s in common positions between the two phylogenetic profiles. Note that the more similar two phylogenetic profiles are, the smaller their distance is.", "Our goal here is to rank all the genes in a target genome in terms of a possible relationship with a set of seed genes (known genes in a pathway), by fusing the path distance and the phylogeny-based distance. For a given pathway P, let its known gene set be G(P) and |G(P)| be the number of its genes. We define a distance from P to a candidate gene xi as(6)\nSimilarly, we define a phylogenetic distance from P to xias(7)\nOur experience has been that for both the path distance and the phylogenetic distance, the distance for the top ranked genes tend to be more reliable. Hence only the top K candidate genes to each gene xj ε G(P) are considered and the remaining is ignored. To a seed gene, we only take the K shortest genes measured by reference distance, where the K is ranged from 5 to 30. Similarly, only the top K( = 50) genes closed to a seed gene is considered for phylogenetic distance [20]. So some candidate genes may not have a path distance or phylogenetic distance, due to their ranking. The final combined distance from any gene xi to pathway P is defined as(8)\nwhere β is a scaling factor and set to 5, based on the ten-fold cross-validation method (see Parameter Selection); and T is set to be 2 if gene xi has both the path distance and phylogenetic distance, and as 1 if it has only one distance defined. The candidate genes are ranked by their combined distance and the final top γ genes are output (γ = 10 in this study).", "For a predicted target gene and a target pathway, the gene is considered a positive prediction (based on a partial gene list of the pathway) if it is part of the pathway. 
For the following assessments of our predictions, we use the standard notations TP for true positive predictions, TN for true negative predictions, FP for false positive predictions and FN for false negative predictions, and we use the standard measures of sensitivity (SE), specificity (SP) and positive predictive value (PPV) to assess the performance of our prediction method for missing genes (Equations 9-11).\nTo assess the prediction performance against a set of pathways, we use the average of these three measures across all the pathways (Equations 12-14), where SPi, SEi and PPVi are the SP, SE and PPV for the ith pathway, respectively, and N is the number of pathways considered.\nFor each to-be-determined parameter in our program, a ten-fold cross-validation procedure is used to derive the optimal value. Specifically, all the pathways are divided randomly into ten parts, nine for training and one for testing each time. The value with the best average is finally selected. The leave-one-out cross-validation procedure is used to assess the performance. For each pathway, its known genes are used as the seed-gene set. The procedure removes each gene from the pathway seed set one at a time, and then calculates the final combined distance from the remaining genes to the removed gene and to all the other genes of the target genome. If the removed gene appears in the final top γ genes, it is counted as a successful prediction.", "[SUBTITLE] Performance measure calculation [SUBSECTION] We first tested our ranking algorithm on all the 121 KEGG pathways of E. coli K12. We downloaded these pathways from KEGG (released in September 2009; see Additional File 2), of which 105 are metabolic pathways, 11 are involved in genetic information processing, and 5 are involved in environmental information processing. On these 121 pathways, the performance of our method was tested with different K ranging from 5 to 30. Figure 3 shows the accuracies of our algorithm for different K and for pathways with different numbers of assigned genes. It has nearly 90% prediction accuracy (PPV) for K = 5, and the accuracy increases as the number of genes in a pathway increases. We also noted that the PPV value generally decreases as K increases, suggesting that a higher level of noise is being included as K increases. We have also calculated the SP and SE values for different K on the 121 pathways; the detailed data are shown in Additional File 3. We noted that SE increases with increasing K, reaching nearly 78%, since only the top K shortest-distance genes were considered.\nFigure 3 caption: The PPV rate based on different numbers of known pathway genes. The average PPV rate of pathways P with |G(P)| ≥ x is calculated, where x is the threshold number, varied from 1 to 50; system(error) = 0.06, α = 380, β = 5, γ = 380, and K is set to 5, 10, 15, 20, 25 or 30. (PPI: Phylogenetic Profile Information)\nWhile the major contribution to the prediction accuracy of our method is from operon and homology information, we have also assessed the contribution from phylogenetic profiles. We noted that the phylogenetic profile gives a small increase in PPV (~4% for K = 5). When K increases, the contribution also increases (Figure 3). This shows that genes confirmed by the phylogenetic profile can reduce the number of mis-predicted genes from the graph-based prediction results and increase the PPV value.
This result suggests that the phylogenetic profile can detect some genes which cannot be found by operon or sequence similarity information alone.\nOne interesting observation we made is that our method gives rise to different performance levels for pathways in different functional categories. To fully investigate this observation, we tested our algorithm on 18 different functional categories of KEGG pathways, each of which has at least 5 (assigned) genes. Special care needs to be taken when assessing the prediction performance, as some KEGG pathways are predicted to form one "combined" pathway by our method. For example, all the pathways in Amino Acid Metabolism are put together into one combined "pathway". Hence we need to evaluate the performance of our method on this combined "pathway". The performance on the 18 categories of KEGG pathways is generally good except for the categories of Biosynthesis of Secondary Metabolism, Metabolism of Other Amino Acids, Transcription, and Xenobiotics Biodegradation and Metabolism (see Additional File 4). The reduced performance may be due to two reasons: (i) some correctly predicted genes are regarded as false positives since the combined pathway is incomplete; and (ii) the combined pathways may not be conserved across different genomes and hence cannot be inferred by our method. We also calculated the PPV values of individual pathways with at least 30 genes. They all have high prediction accuracy except for the Pyruvate Metabolism Pathway, which achieves only 40% prediction accuracy (see Additional File 5).
[SUBTITLE] Case study of the predicted pyruvate metabolism pathway [SUBSECTION] We have carefully analyzed our prediction results on the pyruvate metabolism pathway (eco00620) since it has the worst prediction performance among all 21 E. coli pathways with at least 30 (assigned) genes. This KEGG pathway currently consists of 41 annotated genes (released in September 2009); five of them (pflD, tdcE, pflB, accC, ybiW) are correctly predicted in the top 10 by our method. Among the "incorrect" top 10 predictions (tdcD, eutD, ybiY, prpE, yiaY), some have been reported in a number of published papers as genes genuinely involved in the pathway. For example, gene ybiY is predicted as a "pyruvate formate lyase activating enzyme" in the NCBI and KEGG databases. Furthermore, we find that three genes (tdcD, pflD, prpE) are all in the "Propanoate Metabolism" pathway (eco00640), which is directly related to the pyruvate metabolism pathway; in fact, 10 genes are common to both pathways.\nGene yiaY is annotated as an "Fe-containing alcohol dehydrogenase" and is predicted among the top 5 predictions by three known pathway genes (ybiC, adhE, fucO) through similarity connection paths or operon connection paths (Figure 4). Both gene ybiC and gene yiaY have homologous genes in genome NC_007925, and these homologs are reported to form an operon with high probability (≥ 0.999). The connections show that these two genes are structured as an operon in NC_007925, while they have diverged into different segments in E. coli. The gene eutD is ranked among the top 5 predictions by three pathway genes (maeA, maeB and pta) (Figure 5), connected by similarity connections. These results suggest that our method can produce a reasonable ranked gene list for a target pathway.\nFigure 4 caption: The connected paths to candidate gene yiaY.
The three paths from ybiC, adhE and fucO to yiaY in the genome reference graph are presented, annotated with the original operon probabilities and BLAST similarities. NCBI gene IDs are used for the connected genes and gene symbols are noted in brackets.\nFigure 5 caption: The connected paths to candidate gene eutD. The three paths from maeA, maeB and pta to eutD in the genome reference graph are presented, annotated with the original BLAST similarities. NCBI gene IDs are used for the connected genes and gene symbols are noted in brackets.\n[SUBTITLE] Robustness analysis of parameters [SUBSECTION] To test the robustness of our method, we calculated the change in the average PPV value when the parameters α, β and system(error) change. The initial parameter values are set as K = 5, α = 380 and system(error) = 0.06. 71 pathways (with at least 10 assigned genes) are used, and the final average PPV of the top 10 genes is calculated. For a parameter x, the change rate is defined as the relative change in x from its initial value, and the related PPV change rate as the corresponding relative change in the average PPV. For each parameter, the change rate ranges from -1 to 1. The results show that our method is very robust with respect to these three parameters. For example, when the change rate of system(error) is -1, the related PPV change rate is only 0.0449 (Figure 6).
", "We first tested our ranking algorithm on all 121 KEGG pathways of E. coli K12. We downloaded these pathways from KEGG (released in September 2009; see Additional File 2); 105 of them are metabolic pathways, 11 are involved in genetic information processing, and 5 are involved in environmental information processing. On these 121 pathways, the performance of our method was tested with K ranging from 5 to 30. Figure 3 shows the accuracy of our algorithm for different K and for pathways with different numbers of assigned genes. The method reaches nearly 90% prediction accuracy (PPV) for K = 5, and the accuracy increases as the number of genes in a pathway increases. We also noted that the PPV value generally decreases as K increases, suggesting that a higher level of noise is included for larger K. We have also calculated the SP and SE values for different K on the 121 pathways; the detailed data are shown in Additional File 3. SE increases with K, reaching nearly 78%, since only the top K closest genes are considered.\nThe PPV rate based on different numbers of known pathway genes. The average PPV of pathways P with |G(P)| ≥ x is calculated, where x is a threshold varied from 1 to 50. system(error) = 0.06, α = 380, β = 5, γ = 380, and K is set to 5, 10, 15, 20, 25 and 30. (PPI: Phylogenetic Profile Information)\nWhile the major contribution to the prediction accuracy of our method comes from operon and homology information, we have also assessed the contribution from phylogenetic profiles. The phylogenetic profile gives a small increase in PPV (~4% for K = 5).
As K increases, this contribution also increases (Figure 3). This shows that genes confirmed by the phylogenetic profile can reduce the number of mis-predicted genes in the graph-based prediction results and thereby increase the PPV value.
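As a concrete illustration of how the per-pathway accuracies in Figure 3 are aggregated, the short sketch below averages the PPV of the top predictions over all pathways whose number of assigned genes meets a threshold. The data layout (a dict mapping pathway ids to predicted and annotated gene lists) and the toy example are illustrative only.

# Sketch of the evaluation summarised in Figure 3: average the per-pathway
# PPV of the top predictions over all pathways with at least `min_genes`
# assigned genes. The data layout is an assumption, not the actual format.

def ppv(predicted, truth):
    predicted, truth = set(predicted), set(truth)
    return len(predicted & truth) / len(predicted) if predicted else 0.0

def average_ppv(results, min_genes=10):
    """results: pathway id -> (top predicted genes, annotated pathway genes)."""
    scores = [ppv(pred, truth) for pred, truth in results.values()
              if len(truth) >= min_genes]
    return sum(scores) / len(scores) if scores else 0.0

# Toy example with two small "pathways" and their top predictions.
toy = {
    "eco00620": (["pflD", "tdcE", "yiaY"], ["pflD", "tdcE", "pflB"]),
    "eco00640": (["prpE", "tdcD"], ["prpE", "tdcD", "pta"]),
}
print(round(average_ppv(toy, min_genes=2), 3))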
", "Our method provides new insights into finding missing genes and recruiting additional genes into partially predicted pathways in E. coli, by combining operon information and homology information across multiple genomes. Some “wrongly” predicted genes may indicate that two pathways are closely functionally related (as shown above, genes in eco00620 can recall many genes in eco00640); since pathway boundaries are defined somewhat arbitrarily by biologists, such cases suggest that the definitions of some pathways may need to be revisited. In some pathways, we noted that some genes form (connected) functional modules. We checked for this systematically by connecting two genes with a link if one gene can be recalled by the other among the top 10 predictions; using all the genes in eco00620, we reconstructed a new graph with 5 connected components (Figure 7). The biggest component includes 23 genes, represents the main functional module in pyruvate metabolism, and can be further partitioned into three smaller sub-modules (A1, A2 and A3); we found that each sub-module corresponds to a specific biochemical process (Figure 8). For example, part C includes three genes from the same operon that are involved in the conversion of pyruvate to acetyl-CoA in eco00620. The structures observed within individual pathways and between pathways provide further insights into the hierarchical organization of pathways and are consistent with earlier studies [24-26].\nReconstructed gene connections of Eco00620. Connected graph constructed from the recall information predicted by our program. If a gene can be recalled by another gene within the final top 10 ranks, an edge is drawn between the gene pair. 38 genes are recalled and connected, forming five submodules. The biggest module has 23 genes and can be further divided into three parts.
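A minimal sketch of this recall-graph construction is given below: pathway genes are linked whenever one appears in the other's top-10 prediction list, and connected components are then read off as candidate functional modules. The input format (a dict from each gene to its top-10 recalled genes) is an assumption for illustration.

# Sketch of the recall-graph construction used for Figure 7.
from collections import defaultdict

def recall_graph(pathway_genes, top_predictions):
    """top_predictions: gene -> list of its top-10 recalled genes."""
    adj = defaultdict(set)
    genes = set(pathway_genes)
    for g in genes:
        for h in top_predictions.get(g, []):
            if h in genes and h != g:          # keep only edges within the pathway
                adj[g].add(h)
                adj[h].add(g)
    return adj

def connected_components(adj):
    """Depth-first traversal returning the connected components (modules)."""
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.add(v)
            stack.extend(adj[v] - seen)
        components.append(comp)
    return components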
Mapped structures on the pathway of Eco00620 in KEGG. The five recalled modules map well onto the pathway Eco00620 as described in KEGG, and each module corresponds to a biochemical process.", "We present a method for finding pathway genes at the genome level, which can be used to fill pathway holes or to recruit new genes into existing pathways. The results show that our method achieves high prediction accuracy and is very robust. The main advantage of our method is that, by introducing the reference graph, we obtain a natural way to integrate different types of information, such as genomic structure information and sequence similarity information. More information could be added in future studies; for example, regulons [27] and gene fusion events [28] could easily be incorporated into our current program to provide an even more general framework for integrating different types of information. Besides finding new genes for pathways, our method can also be used for functional module inference, as some functional modules may be the union of existing pathways.", "The authors declare that they have no competing interests.", "Yong Chen produced the program and contributed to the planning and writing of the manuscript, particularly the Results section. Fenglou Mao and Guojun Li contributed to data preparation and parts of the results analysis. Ying Xu provided guidance and planning for the project. All authors read and approved the final manuscript." ]
[ "Background", "Methods", "High-level description of our algorithm", "Selection of reference genomes", "Calculation of homology-based distance", "Calculation of operon-based distance", "Reference graph and linkage graph", "Phylogeny-based distance", "Rank functional relatedness of candidate genes", "Parameter Selection and Validation Method", "Results", "Performance measure calculation", "Case study of the predicted pyruvate metabolism pathway", "Robustness analysis of parameters", "Discussion", "Concluding remark", "Competing interests", "Authors' contributions", "Supplementary Material" ]
[ "Reconstruction of biological pathways is a fundamental problem in understanding the functional mechanisms of cellular organisms. Substantial efforts have been put into the elucidation of biological pathways, particularly for prokaryotic organisms, in a systematic manner based on high-throughput omic data and computational prediction. As a result, a number of pathway databases have been developed and are being widely used, such as KEGG and BioCyc [1-5]. These databases not only serve as an information resource for retrieving well-characterized pathways for specific organisms but also provide a set of pathway templates for reconstructing pathways for organisms that are not directly covered by the databases, as substantial portions of homologous pathways may be conserved across different organisms, particularly related organisms.\nA number of computer programs have been developed for pathway reconstruction through mapping known pathways from one organism to another. While some success has been reported on these programs, there has been a general issue associated with such homologous pathway mapping-based approaches, which is that homologous pathways are generally not identical and hence the mapped pathways could miss some parts not covered by their well-characterized homologous template pathways. This problem, called pathway holes or missing genes, has been widely recognized [6-9]. A number of methods have been developed to find such missing genes, based mainly on the idea of finding genes that are functionally associated with genes already in the mapped pathways. One class of such methods attempts to find enzyme-encoding genes missing in a mapped metabolic pathway based on multiple types of gene association information [8-10], taking advantage of the fact that genes encoding a metabolic pathway tend to group into clusters (e.g., operons). Another class of methods attempt to identify functional modules from some large gene association networks or groups [11-15], and then to suggest possible candidates for missing genes based on genes found in the same functional modules of genes already in mapped pathways. While these methods have provided useful information for searching for missing genes, there is clearly substantial room for improvement in terms of the functional specificity of their predicted candidates and the scope of applicability of the existing methods [16]. Among the various areas for further improvements, we identified a few we can possibly improve on using the currently available information: (i) there have not been reliable methods for consideration and inclusion of functionally uncharacterized genes (often referred to as hypothetical and conserved genes) into partially predicted pathway models (e.g, mapped pathways); (ii) while (conserved) genomic synteny has been utilized for prediction of functionally associated genes, its true usefulness, other than operon information, is yet to be well documented. Previous studies have shown that there is a strong link between genes in the same operons and genes working in the same biological pathways [17]. So full utilization of operon information should be a key direction for improving biological pathways, particular now as the state of the art prediction methods for operons have reached high accuracy (~90%) [17-19].\nWe present, in this paper, a novel computational method for identification and functional annotation of missing genes in a predicted pathway model, either through homologous pathway mapping or using other methods. 
The basic idea of the method can be outlined as follows. For any specified target genome, we define a distance between any pair of genes in the genome to measure the level of their functional relatedness in terms of a set of reference genomes. Specifically, two genes are functionally related if they (i) are homologous, (ii) share a common operon directly or through their homologs in a reference genome, (iii) are phylogenetically related, or (iv) are deemed to be functionally related through combinations of the first three criteria. For any pair of functionally related genes in the target genome, their distance is defined essentially as the minimum number of applications of this recursive definition. Our algorithm identifies genes possibly involved in a target pathway based on their distances to genes already in the pathway. We have tested the algorithm on all characterized pathways of E. coli, using portions of the pathways as the initial pathway genes (called seeds), and found that the vast majority of the remaining genes of these known pathways are within short distances of the seeds, confirming the effectiveness of our distance measure. Our study has also identified numerous genes with short distances to the known pathway genes, which we believe are highly promising candidates for addition to these known E. coli pathways. Limited analyses of the potential functional roles of these genes have been carried out and are reported in this paper.", "[SUBTITLE] High-level description of our algorithm [SUBSECTION] We first represent the genes in the target genome and in a set of specified reference genomes, together with their functional relatedness, as a graph called a reference graph, where each gene in any of the genomes is represented as a vertex, and two genes are linked by an edge if they are in the same operon or they are homologous. We then define a linkage graph for the target genome such that each gene is represented as a vertex and two genes have an edge if and only if there is a path linking the two genes in the reference graph; the distance of the edge is defined as the distance of the shortest path between the two genes in the reference graph. We have augmented this distance with two additive terms: a penalty factor (system(error)) used to model the reliability of a predicted functional relationship, and a phylogeny-based distance used to capture co-evolutionary relationships between two genes that are more general than homology relationships. Our goal is to find genes that have short distances, as defined above, to genes in a known pathway, and to predict that they are involved in this pathway if their distances rank among the top such genes. The whole procedure is summarized in Figure 1, with the detailed steps explained as follows.\nThe flow chart of the method. The method uses gene similarity and operon information to first construct a genome reference graph. It then hierarchically fuses the shortest path distance and phylogenetic distance to rank all candidate genes.
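To make the graph construction concrete, the following sketch builds such a reference graph from precomputed pairwise inputs. The tuple layouts and the use of 1 - p as the operon edge weight follow the distance definitions given in the next subsections; the function and variable names are illustrative assumptions rather than the actual implementation.

# Minimal sketch of the reference-graph construction: every gene (target or
# reference genome) is a vertex; operon edges link neighbouring genes of the
# same genome and similarity edges link homologous genes across genomes.

def build_reference_graph(operon_pairs, homolog_pairs):
    """operon_pairs: iterable of (gene_a, gene_b, operon_probability)
       homolog_pairs: iterable of (gene_a, gene_b, homology_distance)
       Returns: gene -> {neighbour: (edge_distance, edge_type)}."""
    graph = {}

    def add_edge(a, b, weight, kind):
        graph.setdefault(a, {})[b] = (weight, kind)
        graph.setdefault(b, {})[a] = (weight, kind)

    for a, b, p in operon_pairs:
        add_edge(a, b, 1.0 - p, "operon")       # high operon probability -> small distance
    for a, b, d in homolog_pairs:
        add_edge(a, b, d, "similarity")         # homology-based distance in [0, 1]
    return graph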
[SUBTITLE] Selection of reference genomes [SUBSECTION] Currently over 1,000 bacterial and archaeal genomes have been sequenced and are publicly available (NCBI release of September 2009). From this set, we have selected 185 strains (non-redundant genomes and plasmids; see Additional File 1) from 185 different genera using the following rule: for each genus, select the genome with the longest sequence.\n[SUBTITLE] Calculation of homology-based distance [SUBSECTION] For each pair of genes xi, xj in the target genome and the 185 reference genomes, we use the E-value of BLAST (with default parameters) to define their homology-based distance ds(xi, xj) as follows: ds(xi, xj) = 1 + log10(ps(xi, xj))/185, clamped to the range [0, 1] (1)\nwhere ps(xi, xj) is the BLAST E-value for genes xi, xj, and 185 is a normalization factor, since E-values smaller than 1e-185 are reported as 0 by the BLAST program. Clearly ds(xi, xj) is between 0 and 1, and the more similar two genes are, the smaller the ds(xi, xj) value is.
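A small sketch of this distance is given below. It follows the reconstruction of equation (1) above (zero at or below the 1e-185 floor, growing towards 1 as the E-value approaches 1); the exact functional form used in the original implementation is an assumption.

# Homology-based distance from a BLAST E-value, following the reconstruction
# of equation (1) above. This exact functional form is an assumption.

import math

def homology_distance(e_value, floor_exponent=185.0):
    if e_value <= 0.0:                      # BLAST reports 0 for E-values below 1e-185
        return 0.0
    d = 1.0 + math.log10(e_value) / floor_exponent
    return min(1.0, max(0.0, d))            # clamp into [0, 1]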
[SUBTITLE] Calculation of operon-based distance [SUBSECTION] We have used the operons predicted by our own program [18], which is considered the most reliable operon prediction method in the public domain [17]. The probability calculated by this method represents the likelihood that two neighbouring genes are in the same operon. We applied this program to all 185 reference genomes to obtain the probability po(xi, xj) for neighbouring genes xi, xj in each genome. For any pair of neighbouring genes xi, xj in the same genome (target or reference), we define their operon-based distance do(xi, xj) as follows: do(xi, xj) = 1 - po(xi, xj) (2)\nwhere po(xi, xj) represents the probability that xi and xj are in the same operon, as given in [18].\n[SUBTITLE] Reference graph and linkage graph [SUBSECTION] We define a reference graph over all genes in the target as well as the reference genomes as follows. Each gene is represented as a vertex, and an edge between two genes is created if (i) the two genes are in the same operon, with the edge distance defined to be the operon-based distance between the two genes; or (ii) the two genes, in different genomes, are homologous, with the edge distance defined to be their homology-based distance. Based on the reference graph, we define a linkage graph on the genes of the target genome. For any pair of genes xi, xj, we define an edge between them if and only if there is a path xi, x1, x2, …, xj in the reference graph, with its edge distance set to the distance of the shortest path between the two genes (Figure 2). We intend an edge in this graph to capture a functional linkage relationship, possibly through multiple steps of co-operon and homology relationships. We recognize that the reliability of such edges could decrease (largely independently of the reliability of the individual operon and homology predictions) as the number of edges in the path increases. Hence we included a penalty factor, system(error), which is proportional to the number of edges in the path, and redefined the path distance of a gene pair as follows:(3)\nThe relationship path through operon edges and similarity edges. Given a reference pathway, its known genes are used as seeds to calculate the shortest distances to candidate genes. For example, gene1 and gene2 are connected to the same candidate gene. The path from gene1 to the candidate gene (path1) is drawn as a solid line, and the path from gene2 to the candidate gene (path2) as a dashed line. Both paths are constructed from operon edges (coloured arrows) and similarity edges (solid or dashed lines).\nwhere k is the number of edges in the path and α is a scaling factor. In our current implementation, we set α = 380 and system(error) = 0.06 based on a ten-fold cross-validation method (see Parameter Selection). E(operon) and E(similarity) are the set of operon edges and the set of similarity edges, respectively.
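A minimal sketch of this shortest-path computation over the reference graph is shown below. It adds a fixed system(error) penalty per edge, as described above, but omits the α scaling between operon and similarity edges, whose exact role in equation (3) is not reconstructed here; names and data layouts are illustrative.

# Sketch of the linkage-graph distance: the shortest path between two genes
# in the reference graph, with a fixed per-edge penalty modelling the
# decreasing reliability of long connection chains.

import heapq

def path_distance(graph, source, target, system_error=0.06):
    """graph: gene -> {neighbour: (edge_distance, edge_type)}."""
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if v == target:
            return d
        if d > best.get(v, float("inf")):
            continue                          # stale heap entry
        for u, (w, _kind) in graph.get(v, {}).items():
            nd = d + w + system_error         # edge distance plus per-edge penalty
            if nd < best.get(u, float("inf")):
                best[u] = nd
                heapq.heappush(heap, (nd, u))
    return float("inf")                       # no connecting path found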
[SUBTITLE] Phylogeny-based distance [SUBSECTION] We also considered a more general class of functional relationships, defined in terms of the phylogenetic profiles of genes, which measure their co-evolutionary relationship [20,21]. Basically, the phylogenetic profile X of a gene against a set of n reference genomes is a binary string of length n, with the ith position being 1 if the gene has a homolog in the ith reference genome, and 0 otherwise. It has been found that two genes (of the same genome) are generally functionally related if their phylogenetic profiles are highly similar [20]. We have used a BLAST E-value of 1e-3 as the cutoff for determining the presence of a homolog in another genome [22]. We use the following measure of similarity between two phylogenetic profiles, similar to that reported in [23]. Given the phylogenetic profiles Xi and Xj for genes xi and xj, their phylogeny-based distance is defined as follows:(4)\nwhere dhamming(Xi, Xj) is the Hamming distance between Xi and Xj, and Entropy(Xi, Xj) is the entropy of the common part of Xi and Xj, defined as Entropy(Xi, Xj) = -p log p - (1 - p) log(1 - p) (5)\nwith p being the frequency of 1's in the common positions of the two phylogenetic profiles. Note that the more similar two phylogenetic profiles are, the smaller their distance is.
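The profile comparison can be sketched as follows. The "common part" of two profiles is taken here as the positions where they agree, which is an assumption, and the combination of Hamming distance and entropy in equation (4) is not reconstructed; has_homolog is a hypothetical callable standing in for the BLAST cutoff test.

# Sketch of the phylogenetic-profile comparison: profiles are 0/1 vectors
# over the reference genomes; dissimilarity is measured by the Hamming
# distance, and equation (5) is taken as the usual binary entropy of the
# positions where the two profiles agree (assumption).

import math

def phylo_profile(gene, genomes, has_homolog):
    """has_homolog(gene, genome) -> bool, e.g. BLAST E-value below 1e-3."""
    return [1 if has_homolog(gene, g) else 0 for g in genomes]

def hamming(profile_a, profile_b):
    return sum(a != b for a, b in zip(profile_a, profile_b))

def common_entropy(profile_a, profile_b):
    common = [a for a, b in zip(profile_a, profile_b) if a == b]
    if not common:
        return 0.0
    p = sum(common) / len(common)             # frequency of 1's in common positions
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)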
[SUBTITLE] Rank functional relatedness of candidate genes [SUBSECTION] Our goal here is to rank all the genes in a target genome in terms of their possible relationship with a set of seed genes (the known genes of a pathway), by fusing the path distance and the phylogeny-based distance. For a given pathway P, let its known gene set be G(P) and |G(P)| be the number of its genes. We define a distance from P to a candidate gene xi as (6)\nSimilarly, we define a phylogenetic distance from P to xi as (7)\nOur experience has been that, for both the path distance and the phylogenetic distance, the distances of the top-ranked genes tend to be more reliable. Hence only the top K candidate genes for each gene xj ∈ G(P) are considered and the rest are ignored. For each seed gene, we take only the K closest genes measured by the path distance over the reference graph, where K ranges from 5 to 30. Similarly, only the top K (= 50) genes closest to a seed gene are considered for the phylogenetic distance [20]. So some candidate genes may not have a path distance or a phylogenetic distance, owing to this ranking cutoff. The final combined distance from any gene xi to pathway P is defined as (8)\nwhere β is a scaling factor set to 5 based on the ten-fold cross-validation method (see Parameter Selection), and T is set to 2 if gene xi has both the path distance and the phylogenetic distance, and to 1 if it has only one of the two distances defined. The candidate genes are ranked by their combined distance, and the final top γ genes are output (γ = 10 in this study).
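The fusion step can be sketched as follows. The pathway-level path and phylogenetic distances are taken here as precomputed inputs (how equations (6) and (7) aggregate over seed genes is not reconstructed); β weights the phylogenetic term and T counts how many of the two distances are defined. The exact combination rule of equation (8), and all names, are illustrative assumptions.

# Sketch of the fusion step: beta (= 5) scales the phylogenetic term, and the
# sum is divided by T, the number of distances actually defined.

def combined_rank(candidates, path_dist, phylo_dist, beta=5.0, top=10):
    """path_dist / phylo_dist: candidate gene -> distance to the pathway;
    a missing key means the candidate was not in the top-K of any seed."""
    scored = []
    for gene in candidates:
        parts = []
        if gene in path_dist:
            parts.append(path_dist[gene])
        if gene in phylo_dist:
            parts.append(beta * phylo_dist[gene])
        if not parts:
            continue                                     # no distance defined for this candidate
        scored.append((sum(parts) / len(parts), gene))   # divide by T
    return [gene for _, gene in sorted(scored)[:top]]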
[SUBTITLE] Parameter Selection and Validation Method [SUBSECTION] For a predicted target gene and a target pathway, the gene is considered a positive prediction (based on a partial gene list of the pathway) if it is part of the pathway. For the following assessments of our predictions, we use the standard notations TP for true positives, TN for true negatives, FP for false positives and FN for false negatives, and we use the standard measures of sensitivity (SE), specificity (SP) and positive predictive value (PPV) to assess the performance of our missing-gene prediction method: SE = TP/(TP + FN) (9), SP = TN/(TN + FP) (10), PPV = TP/(TP + FP) (11).\nTo assess the prediction performance against a set of pathways, we use the average of the above three measures across all the pathways: SE = (1/N) Σi SEi (12), SP = (1/N) Σi SPi (13), PPV = (1/N) Σi PPVi (14),\nwhere SEi, SPi and PPVi are the SE, SP and PPV of the ith pathway, respectively, and N is the number of pathways considered.\nFor each to-be-determined parameter in our program, a ten-fold cross-validation procedure is used to derive the optimal value. Specifically, all the pathways are divided randomly into ten parts, nine for training and one for testing each time; the value with the best average performance is finally selected. A leave-one-out cross-validation procedure is used to assess the performance. For each pathway, its known genes are used as the seed-gene set. The procedure removes each gene from the pathway seed set one at a time, and then calculates the final combined distance from the remaining genes to the removed gene and to all the other genes of the target genome.
If the removed gene is output among the final top γ genes, it is counted as a successful prediction.
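A compact sketch of this leave-one-out procedure is given below, assuming a hypothetical rank_candidates(seed_genes) function that implements the ranking pipeline described above.

# Sketch of the leave-one-out validation: each known pathway gene is held
# out in turn, candidates are re-ranked from the remaining seeds, and the
# held-out gene counts as recovered if it appears among the top gamma.

def leave_one_out(pathway_genes, rank_candidates, gamma=10):
    recovered = 0
    for gene in pathway_genes:
        seeds = [g for g in pathway_genes if g != gene]
        top = rank_candidates(seeds)[:gamma]
        if gene in top:
            recovered += 1
    return recovered / len(pathway_genes) if pathway_genes else 0.0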
Each gene is represented as a vertex, and an edge between two genes is created if (i) the two genes are in the same operon, with their edge distance defined to be the operon-based distance between the two genes; or (ii) the two genes in different genomes are homologous, with their edge distance defined to be their homology-based distance. Based on the reference graph, we define a linkage graph on genes in the target genome. For any pair of genes, xi, xj, we define an edge between them if and only if there is a path xi, x1,x2, … xj in the reference graph, with its edge distance set to be the distance of the shortest path between the two genes (Figure 2). We intend to use an edge in this graph to capture a functional linkage relationship possibly through multiple steps of co-operon and homologous relationship. We recognize that the reliability of such defined edges could go down (largely independent of the reliability of individual operon and homology predictions) as the number of edges in the above path goes up. Hence we included a penalty factor, system (error), which is proportional to the number of edges in the path, and redefined the path distance of a gene pair as follows:(3)\nThe relationship path through operon edge and similarity edge. Given a reference pathway, its known genes are used as seeds to calculate the shortest distances to candidate genes. For example, gene1 and gene2 are connected with the same candidate gene. The path from gene1 to candidate gene (path1) is noted as solid line, and gene2 to candidate gene (path2) as dashed line. The paths are both constructed by operon edge (colour arrow) and similarity edge (solid or dashed line).\nwhere k is the number of edges in the path, and α is a scaling factor. In our current implementation, we set α = 380 and system(error) = 0.06 based on a ten-fold cross-validation method (see Parameter Selection). E(operon) and E(similarity) are the set of operon edges and the set of similarity edges, respectively.", "We also considered a more general class of functional relationship defined in terms of the phylogenetic profiles of genes, which measures their co-evolutionary relationship [20,21]. Basically, the phylogenetic profile X of a gene against a set of n reference genomes is a binary string of length n, with the ith position being 1, if the gene has a homolog in the ith reference genome, and 0 otherwise. It has been found that two genes (of the same genome) are generally functionally related if their phylogenetic profiles are highly similar [20]. We have used a BLAST E-value e-3 as the cutoff for determining the presence of a homolog in another genome [22]. We use the following to measure the similarity between two phylogenetic profiles, similar to that reported in [23]. Given the phylogenetic profiles Xi and Yj for genes xi and yj, their phylogeny-based distance is defined as follows:(4)\nwhere, dhamming(Xi, Xj) is the Hamming distance between Xi and Xj, and Entropy (Xi, Xj) is the entropy of the common part of Xi and Xj, defined as follows:(5)\nwith p being the frequency of 1’s in common positions between the two phylogenetic profiles. Note that the more similar two phylogenetic profiles are, the smaller their distance is.", "Our goal here is to rank all the genes in a target genome in terms of a possible relationship with a set of seed genes (known genes in a pathway), by fusing the path distance and the phylogeny-based distance. For a given pathway P, let its known gene set be G(P) and |G(P)| be the number of its genes. 
We define a distance from P to a candidate gene xi as(6)\nSimilarly, we define a phylogenetic distance from P to xias(7)\nOur experience has been that for both the path distance and the phylogenetic distance, the distance for the top ranked genes tend to be more reliable. Hence only the top K candidate genes to each gene xj ε G(P) are considered and the remaining is ignored. To a seed gene, we only take the K shortest genes measured by reference distance, where the K is ranged from 5 to 30. Similarly, only the top K( = 50) genes closed to a seed gene is considered for phylogenetic distance [20]. So some candidate genes may not have a path distance or phylogenetic distance, due to their ranking. The final combined distance from any gene xi to pathway P is defined as(8)\nwhere β is a scaling factor and set to 5, based on the ten-fold cross-validation method (see Parameter Selection); and T is set to be 2 if gene xi has both the path distance and phylogenetic distance, and as 1 if it has only one distance defined. The candidate genes are ranked by their combined distance and the final top γ genes are output (γ = 10 in this study).", "For a predicted target gene and a target pathway, the gene is considered a positive prediction (based on a partial gene list of the pathway) if it is part of the pathway. For any of the following assessments of our prediction, we use the following (standard) notations: TP for true positive predictions; TN for true negative predictions; FP for false positive predictions and FN for false negative predictions (FN); and we use the following standard measures of sensitivity (SE), specificity (SP) and positive predictive value (PPV) to assess the performance of our prediction method of missing genes:(9)(10)(11)\nTo assess the prediction performance against a set of pathways, we use the average of the above three measures across all the pathways as follows:(12)(13)(14)\nwhere SPi, SEi and PPVi are SP, SE and PPV for the ith pathway, respectively, and N is the number of pathways considered.\nFor each to-be-determined parameter in our program, a ten-fold cross-validation procedure is used to derive the optimal value. Specifically, all the pathways are divided randomly into ten parts, nine for training and one for testing each time. The value with the best average is finally selected. The leave-one-out cross-validation procedure is used to assess the performance. For each pathway, its known genes are used as the seed-gene set. The procedure removes each gene from the pathway seed set one at a time, and then calculates the final combined distance from the remaining genes to the removed gene and all the left genes of the target genome. If the removed gene is output in the final top γ genes, it is counted as a successful prediction.", "[SUBTITLE] Performance measure calculation [SUBSECTION] We first tested our ranking algorithm on all the 121 KEGG pathways of E. coli K12. We have downloaded these pathways from KEGG (released in September of 2009; see Additional File 2), of which 105 are metabolic pathways, 11 are involved in genetic information processing, and 5 are involved in environmental information processing. On these 121 pathways, the performance of our method was tested with different K ranging from 5 to 30. Figure 3 shows the accuracies of our algorithm for different K and for pathways with different numbers of assigned genes. It has near 90% prediction accuracy (PPV) for K = 5, and the accuracy increases as the number of genes in a pathway increases. 
Also we noted that the PPV value decreases with the increase of the K value in general, suggesting a higher level of noise is being included as K increases. We have also calculated the SP and SE values for different K on 121 pathways; the detailed data is shown in Additional File 3. We noted that SE increases with the increase of K, achieving near 78% since only the top K shortest genes were considered.\nThe PPV rate based on a different number of pathway known genes. The average PPV rate of pathways P with |G(P)| ≥ x is calculated, and x is the threshold number, changed from 1 to 50. system(error) = 0.06, α = 380, β = 5, γ = 380, K is changed to 5, 10, 15, 20, 25, 30. (PPI: Phylogenetic Profile Information)\nWhile the major contribution to the prediction accuracy by our method is from operon and homology information, we have also assessed the contribution from phylogentic profiles. We noted that the phylogenetic profile gives a small increase for PPV (~ 4% for K = 5). When K increases, the contribution also increases (Figure 3). It shows that genes confirmed by the phylogenetic profile can reduce the mis-predicted genes from the graph-based prediction results, and increase the PPV value. This result suggests that phylogenetic profile can detect some genes which cannot be found by operon or sequence similarity alone.\nOne interesting observation we made is that our method gives rise to different performance levels for pathways in different functional categories. To fully investigate this observation, we have tested our algorithm on 18 different functional categories of KEGG pathways where each has at least 5 (assigned) genes. One special care needs to be taken when assessing the prediction performance as some KEGG pathways are predicted to form one “combined” pathway by our prediction. For example, all the pathways in Amino Acid Metabolism are put together into one combined “pathway”. Hence we need to evaluate the performance of our method on this combined “pathway”. The performance on the 18 categories of KEGG pathways is generally good except for the category of Biosynthesis of Secondary Metabolism, Metabolism of Other Amino Acids, Transcription and Xenobiotics Biodegradation and Metabolism (see Additional File 4). The reduced performance may be due to two reasons: (i) some correctly predicted genes are regarded as false positives since the combined pathway is incomplete; and (ii) the combined pathways may not be conserved across different genomes; and hence cannot be inferred by our method. We also calculated the PPV values of individual pathways whose number of genes is at least 30. They all have high prediction accuracy except for the Pyruvate Metabolism Pathway, which only gets 40% prediction accuracy (see Additional File 5).\nWe first tested our ranking algorithm on all the 121 KEGG pathways of E. coli K12. We have downloaded these pathways from KEGG (released in September of 2009; see Additional File 2), of which 105 are metabolic pathways, 11 are involved in genetic information processing, and 5 are involved in environmental information processing. On these 121 pathways, the performance of our method was tested with different K ranging from 5 to 30. Figure 3 shows the accuracies of our algorithm for different K and for pathways with different numbers of assigned genes. It has near 90% prediction accuracy (PPV) for K = 5, and the accuracy increases as the number of genes in a pathway increases. 
Also we noted that the PPV value decreases with the increase of the K value in general, suggesting a higher level of noise is being included as K increases. We have also calculated the SP and SE values for different K on 121 pathways; the detailed data is shown in Additional File 3. We noted that SE increases with the increase of K, achieving near 78% since only the top K shortest genes were considered.\nThe PPV rate based on a different number of pathway known genes. The average PPV rate of pathways P with |G(P)| ≥ x is calculated, and x is the threshold number, changed from 1 to 50. system(error) = 0.06, α = 380, β = 5, γ = 380, K is changed to 5, 10, 15, 20, 25, 30. (PPI: Phylogenetic Profile Information)\nWhile the major contribution to the prediction accuracy by our method is from operon and homology information, we have also assessed the contribution from phylogentic profiles. We noted that the phylogenetic profile gives a small increase for PPV (~ 4% for K = 5). When K increases, the contribution also increases (Figure 3). It shows that genes confirmed by the phylogenetic profile can reduce the mis-predicted genes from the graph-based prediction results, and increase the PPV value. This result suggests that phylogenetic profile can detect some genes which cannot be found by operon or sequence similarity alone.\nOne interesting observation we made is that our method gives rise to different performance levels for pathways in different functional categories. To fully investigate this observation, we have tested our algorithm on 18 different functional categories of KEGG pathways where each has at least 5 (assigned) genes. One special care needs to be taken when assessing the prediction performance as some KEGG pathways are predicted to form one “combined” pathway by our prediction. For example, all the pathways in Amino Acid Metabolism are put together into one combined “pathway”. Hence we need to evaluate the performance of our method on this combined “pathway”. The performance on the 18 categories of KEGG pathways is generally good except for the category of Biosynthesis of Secondary Metabolism, Metabolism of Other Amino Acids, Transcription and Xenobiotics Biodegradation and Metabolism (see Additional File 4). The reduced performance may be due to two reasons: (i) some correctly predicted genes are regarded as false positives since the combined pathway is incomplete; and (ii) the combined pathways may not be conserved across different genomes; and hence cannot be inferred by our method. We also calculated the PPV values of individual pathways whose number of genes is at least 30. They all have high prediction accuracy except for the Pyruvate Metabolism Pathway, which only gets 40% prediction accuracy (see Additional File 5).\n[SUBTITLE] Case study of the predicted pyruvate metabolism pathway [SUBSECTION] We have carefully analyzed our prediction results on the pyruvate metabolism pathway (eco00620) since it has the worst prediction performance among all the 21 E. coli pathways, each of which has at least 30 (assigned) genes. This KEGG pathway currently consists of 41 annotated genes (released in September 2009); five (pflD, tdcE, pflB, accC, ybiW) of them are correctly predicted in the top 10 by our method. Among the “incorrect” top 10 predictions (tdcD, eutD, ybiY, prpE, yiaY), some have been reported as correct genes involved in the pathway by a number of published papers. 
For example, gene ybiY is predicted as a “pyruvate formate lyase activating enzyme” in the NCBI and KEGG databases. Furthermore, we find that three genes (tdcD, pflD, prpE) all appear in the same “Propanoate Metabolism” pathway (eco00640), which is directly related to the pyruvate metabolism pathway; in fact, 10 genes are common to both pathways.\nGene yiaY is annotated as an “Fe-containing alcohol dehydrogenase” and is placed among the top 5 predictions by three known pathway genes (ybiC, adhE, fucO) through the similarity connection path or the operon connection path (Figure 4). Both gene ybiC and gene yiaY have homologous genes in genome NC_007925 and are reported as an operon with high probability (≥ 0.999). The connections show that these two genes are structured as an operon in NC_007925, whereas they have diverged into different genomic segments in E. coli. The gene eutD is ranked among the top 5 predictions by three pathway genes (maeA, maeB and pta) connected through similarity connections (Figure 5). These results suggest that our method can produce a reasonable gene rank list for a target pathway.\nThe connected paths to candidate gene yiaY. The three paths from ybiC, adhE and fucO to yiaY in the genome reference graph are shown, annotated with the original operon probability and the BLAST similarity. NCBI gene IDs are used for the connected genes, with gene symbols noted in brackets.\nThe connected paths to candidate gene eutD. The three paths from maeA, maeB and pta to eutD in the genome reference graph are shown, annotated with the original BLAST similarity. NCBI gene IDs are used for the connected genes, with gene symbols noted in brackets.\n[SUBTITLE] Robustness analysis of parameters [SUBSECTION] To test the robustness of our method, we calculated the change in the average PPV value when the parameters α, β and system(error) change. The initial parameter values are set as K = 5, α = 380 and system(error) = 0.06. 71 pathways (with the number of assigned genes ≥ 10) are used, and the final average PPV of the top 10 genes is calculated. For a parameter x with initial value x0, the change rate is defined as (x − x0)/x0, and the related PPV change rate is defined analogously as (PPV − PPV0)/PPV0. For each parameter, the change rate ranges from -1 to 1. The results show that our method is very robust with respect to these three parameters. For example, when the change rate of system(error) is -1, the related PPV change rate is only 0.0449 (Figure 6), a very small change compared with the change of system(error). This result also shows that system(error) contributes an extra 0.0449 to the final average PPV, suggesting that system(error) is useful in finding relationships in the reference graph. These results indicate that the genome reference graph is very useful and makes a major contribution to the final result.\nThe robustness of parameters α, β and system(error). The three parameters (X-axis) are changed by ±100% compared with the values used in our study, and the resulting accuracy change rates are shown on the Y-axis.", "We first tested our ranking algorithm on all the 121 KEGG pathways of E. coli K12. We have downloaded these pathways from KEGG (released in September of 2009; see Additional File 2), of which 105 are metabolic pathways, 11 are involved in genetic information processing, and 5 are involved in environmental information processing.
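For reference, the PPV (precision of the top-K candidates) and SE (recall against the held-out pathway genes) used in this evaluation can be computed for a single pathway as in the minimal sketch below. The hold-out protocol, function names and variable names here are illustrative assumptions, not the scripts used to produce the figures.

```python
from typing import Dict, List, Set

def top_k_metrics(ranked_candidates: List[str],
                  held_out_genes: Set[str],
                  k: int) -> Dict[str, float]:
    """Compute PPV and SE for the top-K ranked candidate genes.

    ranked_candidates: candidate genes sorted from best to worst score.
    held_out_genes: known pathway genes hidden from the predictor that
        should ideally be recovered in the top-K list.
    """
    top_k = ranked_candidates[:k]
    true_positives = sum(1 for gene in top_k if gene in held_out_genes)
    ppv = true_positives / k if k else 0.0                                # precision of the top-K
    se = true_positives / len(held_out_genes) if held_out_genes else 0.0  # recall of held-out genes
    return {"PPV": ppv, "SE": se}

# Toy usage (gene identifiers are made up):
ranking = ["b0001", "b0002", "b0003", "b0004", "b0005"]
hidden = {"b0002", "b0005", "b0100"}
print(top_k_metrics(ranking, hidden, k=5))   # PPV = 2/5, SE = 2/3
```

Averaging such per-pathway values over the 121 pathways and over different K would then give summaries of the kind reported below.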
On these 121 pathways, the performance of our method was tested with different K ranging from 5 to 30. Figure 3 shows the accuracies of our algorithm for different K and for pathways with different numbers of assigned genes. It has near 90% prediction accuracy (PPV) for K = 5, and the accuracy increases as the number of genes in a pathway increases. Also we noted that the PPV value decreases with the increase of the K value in general, suggesting a higher level of noise is being included as K increases. We have also calculated the SP and SE values for different K on 121 pathways; the detailed data is shown in Additional File 3. We noted that SE increases with the increase of K, achieving near 78% since only the top K shortest genes were considered.\nThe PPV rate based on a different number of pathway known genes. The average PPV rate of pathways P with |G(P)| ≥ x is calculated, and x is the threshold number, changed from 1 to 50. system(error) = 0.06, α = 380, β = 5, γ = 380, K is changed to 5, 10, 15, 20, 25, 30. (PPI: Phylogenetic Profile Information)\nWhile the major contribution to the prediction accuracy by our method is from operon and homology information, we have also assessed the contribution from phylogentic profiles. We noted that the phylogenetic profile gives a small increase for PPV (~ 4% for K = 5). When K increases, the contribution also increases (Figure 3). It shows that genes confirmed by the phylogenetic profile can reduce the mis-predicted genes from the graph-based prediction results, and increase the PPV value. This result suggests that phylogenetic profile can detect some genes which cannot be found by operon or sequence similarity alone.\nOne interesting observation we made is that our method gives rise to different performance levels for pathways in different functional categories. To fully investigate this observation, we have tested our algorithm on 18 different functional categories of KEGG pathways where each has at least 5 (assigned) genes. One special care needs to be taken when assessing the prediction performance as some KEGG pathways are predicted to form one “combined” pathway by our prediction. For example, all the pathways in Amino Acid Metabolism are put together into one combined “pathway”. Hence we need to evaluate the performance of our method on this combined “pathway”. The performance on the 18 categories of KEGG pathways is generally good except for the category of Biosynthesis of Secondary Metabolism, Metabolism of Other Amino Acids, Transcription and Xenobiotics Biodegradation and Metabolism (see Additional File 4). The reduced performance may be due to two reasons: (i) some correctly predicted genes are regarded as false positives since the combined pathway is incomplete; and (ii) the combined pathways may not be conserved across different genomes; and hence cannot be inferred by our method. We also calculated the PPV values of individual pathways whose number of genes is at least 30. They all have high prediction accuracy except for the Pyruvate Metabolism Pathway, which only gets 40% prediction accuracy (see Additional File 5).", "We have carefully analyzed our prediction results on the pyruvate metabolism pathway (eco00620) since it has the worst prediction performance among all the 21 E. coli pathways, each of which has at least 30 (assigned) genes. This KEGG pathway currently consists of 41 annotated genes (released in September 2009); five (pflD, tdcE, pflB, accC, ybiW) of them are correctly predicted in the top 10 by our method. 
Among the “incorrect” top 10 predictions (tdcD, eutD, ybiY, prpE, yiaY), some have been reported as correct genes involved in the pathway by a number of published papers. For example, gene ybiY is predicted as a “pyruvate formate lyase activating enzyme” in the NCBI and KEGG databases. Furthermore, we find three genes (tdcD, pflD, prpE) are all in the same “Propanoate Metabolism” pathway (eco00640), which is directly related to the pyruvate metabolism pathway. Actually, there are 10 genes that are common in both pathways.\nGene yiaY is annotated as “Fe-containing alcohol dehydregenas” and is predicted among the top 5 predictions by three known pathway genes (ybiC, adhE, fucO) with the similarity connection path or the operon connection path (Figure 4). Both gene ybiC and gene yiaY have homologous genes in genome (NC_007925) and are reported as an operon with high probability (≥ 0.999). The connections show that these two genes are structured as an operon in NC_007925, while they are diverged into different segments in E. coli. The gene eutD is ranked among the top 5 predictions by three pathway genes (maeA, maeB and pta) (Figure 5) connected by a similarity connection. These results suggested that our method can give a reasonable gene rank list to a target pathway.\nThe connected paths to candidate gene yiaY. The three paths from ybiC, adhE and fucO to yiaY in the genome reference graph are presented and noted with the original operon probability and the BLAST similarity. The NCBI gene id is used to the connected genes and the gene symbol is noted in bracket.\nThe connected paths to candidate gene eutD. The three paths from maeA, maeB and pta to eutD in the genome reference graph are presented and noted by original BLAST similarity. The NCBI gene id is used to the connected genes and the gene symbol is noted in bracket.", "To test the robustness of our method, we calculated the change in the average PPV value when the parameters α, β and system(error) change. The initial parameter values are set as K = 5, α = 380 and system(error) = 0.06. 71 pathways (with the number of assigned genes ≥ 10) are used, and the final average PPV of the top 10 genes are calculated. For parameter x, the change rate is defined as and the related PPV change rate is . For each parameter, the change rate ranges from -1 to 1. The results show that our method is very robust in terms of these three parameters. For example, when the change rate of system(error) is -1, the related PPV change rate is only 0.0449 (Figure 6). It is a very small change compared with the change of system(error). This result also shows that the system(error) can give an extra 0.0449 contribution to the final average PPV; and suggests that system(error) is useful in finding relationships in the reference graph. These results suggest that the genome reference graph is very useful and gives a major contribution to the final result.\nThe robustness of parameters α, β and system(error). Three parameters (X-axis) are changed ±100% compared with the value used in our study and the final accuracy change rates are described in the Y-axis.", "Our method provides new insights about finding missing genes and recruiting additional genes into partially predicted pathways in E. coli, through combining operon information and homology information across multiple genomes. 
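As a rough illustration of how such a reference graph can be queried, the toy sketch below ranks candidate genes by their shortest-path distance to the known genes of a target pathway, with edge lengths derived from operon probabilities and normalized BLAST similarities. The distance transform, the edge weighting and all gene names are simplifying assumptions made only for illustration; they do not reproduce the α, β, γ and system(error) weighting scheme used in the actual method.

```python
import math
import networkx as nx

def build_toy_reference_graph(operon_links, homology_links):
    """operon_links: (gene_a, gene_b, probability); homology_links: (gene_a, gene_b, similarity in [0, 1])."""
    g = nx.Graph()
    for a, b, p in operon_links:
        # Higher operon probability -> shorter edge (illustrative transform only).
        g.add_edge(a, b, length=-math.log(max(p, 1e-6)))
    for a, b, s in homology_links:
        length = -math.log(max(s, 1e-6))
        if not g.has_edge(a, b) or g[a][b]["length"] > length:
            g.add_edge(a, b, length=length)
    return g

def rank_candidates(graph, known_pathway_genes, candidates):
    """Rank candidates by the distance to their nearest known pathway gene."""
    sources = [g for g in known_pathway_genes if g in graph]
    dist = nx.multi_source_dijkstra_path_length(graph, sources, weight="length")
    scored = [(c, dist.get(c, float("inf"))) for c in candidates]
    return sorted(scored, key=lambda pair: pair[1])

# Toy usage with made-up links between real gene symbols:
graph = build_toy_reference_graph(
    operon_links=[("pflB", "ybiW", 0.99), ("ybiW", "ybiY", 0.95)],
    homology_links=[("pflB", "tdcE", 0.80), ("tdcE", "tdcD", 0.60)],
)
print(rank_candidates(graph, known_pathway_genes={"pflB"},
                      candidates=["ybiY", "tdcD", "ybiW"]))
```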
Some wrongly predicted genes may indicate that two pathways are in fact closely functionally related (as shown above, genes in eco00620 can recall many genes in eco00640). Since pathway boundaries are defined somewhat arbitrarily by biologists, such cases may prompt a rethinking of how some pathways are defined. In some pathways, we also noted that genes form connected functional modules. We checked for this systematically by linking two genes if one gene can be recalled by the other among the top 10 predictions, and used all the genes in eco00620 to reconstruct a new graph with 5 connected components (Figure 7). The biggest component includes 23 genes, is the main functional module in pyruvate metabolism, and can be further divided into three smaller sub-modules (A1, A2 and A3); we found that every sub-module corresponds to a specific biochemical process (Figure 8). For example, part C includes three genes from the same operon that are involved in the process of metabolizing pyruvate to acetyl-CoA in eco00620. The structures observed within individual pathways and between pathways provide further insight into the hierarchical organization of pathways and are consistent with earlier studies [24-26].\nReconstructed gene connections of Eco00620. The connected graph is built from the recall information produced by our program: if a gene can be recalled by another gene in the final top 10 ranks, an edge is placed between the gene pair. 38 genes are recalled and connected into five sub-modules. The biggest module has 23 genes and can be further divided into three parts.\nMapped structures on the pathway of Eco00620 in KEGG. The five recalled modules map well onto the Eco00620 pathway as described in KEGG, and each module corresponds to a biochemical process.", "We present a method to find pathway genes at the genome level, which can be used to fill pathway holes or recruit new genes into existing pathways. The results show that our method achieves high prediction accuracy and is very robust. The main advantage of our method is that the reference graph provides a natural way to integrate different types of information, such as genomic structure information and sequence similarity information. More information could be added in future studies; for example, regulons [27] and gene fusion events [28] could provide a more general framework for integrating different information and can easily be incorporated into our current program. Besides finding new genes for pathways, our method can also be used for functional module inference, as some functional modules may be the union of existing pathways.", "The authors declare that they have no competing interests.", "Yong Chen produced the program and contributed towards planning and writing of the manuscript, particularly producing the Results section. Fenglou Mao and Guojun Li contributed to preparing the data and to some of the results analysis. Ying Xu provided guidance and planning for the project. All authors read and approved the final manuscript.", "The strain names and NCBI IDs of 185 strains (genomes with plasmids). The 185 strains (non-redundant genomes and plasmids) with the longest sequence in each genus were selected from 185 different genera (NCBI release of 9.2009).\nNames and gene numbers of the 121 pathways. The 121 characterized pathways of E. coli K12 were downloaded from KEGG (released 9.2009).\nSP and SE value.
Average SP and SE calculated with system(error) = 0.06, α = 380, β = 5, and K set to 5, 10, 15, 20, 25 and 30.\nThe average PPV rate of E. coli pathways based on the 2nd level of KEGG orthology. Pathways with |G(P)| ≥ 5 are evaluated with system(error) = 0.06, K = 5, α = 380, β = 5, γ = 10.\nPPV values of individual pathways with |G(P)| ≥ 30. The PPV values are calculated with system(error) = 0.06, K = 5, α = 380, β = 5, γ = 10.
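The module reconstruction described above (linking two genes when one recalls the other among the top 10 predictions, then reading off connected components) can be reproduced with a few lines of code. The sketch below is a generic illustration; the variable names and the toy recall lists are assumptions.

```python
import networkx as nx

def recall_modules(top10_lists):
    """top10_lists maps each pathway gene to the genes it recalls in its
    final top-10 ranking. An edge is added whenever one gene of a pair
    recalls the other; modules are the connected components."""
    g = nx.Graph()
    g.add_nodes_from(top10_lists)
    for gene, recalled in top10_lists.items():
        for other in recalled:
            if other in top10_lists:          # keep only edges inside the pathway
                g.add_edge(gene, other)
    modules = [sorted(component) for component in nx.connected_components(g)]
    return sorted(modules, key=len, reverse=True)

# Toy usage (made-up recall lists):
example = {
    "pflB": ["tdcE", "ybiW"],
    "tdcE": ["pflB"],
    "ybiW": [],
    "adhE": ["fucO"],
    "fucO": ["adhE"],
}
for module in recall_modules(example):
    print(module)
```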
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Integrating multiple protein-protein interaction networks to prioritize disease genes: a Bayesian regression approach.
21342540
The identification of genes responsible for human inherited diseases is one of the most challenging tasks in human genetics. Recent studies based on phenotype similarity and gene proximity have demonstrated great success in prioritizing candidate genes for human diseases. However, most of these methods rely on a single protein-protein interaction (PPI) network to calculate similarities between genes, and thus greatly restrict the scope of application of such methods. Meanwhile, independently constructed and maintained PPI networks are usually quite diverse in coverage and quality, making the selection of a suitable PPI network inevitable but difficult.
BACKGROUND
We adopt a linear model to explain similarities between disease phenotypes using gene proximities that are quantified by diffusion kernels of one or more PPI networks. We solve this model via a Bayesian approach, and we derive an analytic form for Bayes factor that naturally measures the strength of association between a query disease and a candidate gene and thus can be used as a score to prioritize candidate genes. This method is intrinsically capable of integrating multiple PPI networks.
METHODS
We show that gene proximities calculated from PPI networks imply phenotype similarities. We demonstrate the effectiveness of the Bayesian regression approach on five PPI networks via large scale leave-one-out cross-validation experiments and summarize the results in terms of the mean rank ratio of known disease genes and the area under the receiver operating characteristic curve (AUC). We further show the capability of our approach in integrating multiple PPI networks.
RESULTS
The Bayesian regression approach can achieve much higher performance than the existing CIPHER approach and the ordinary linear regression method. The integration of multiple PPI networks can greatly improve the scope of application of the proposed method in the inference of disease genes.
CONCLUSIONS
[ "Bayes Theorem", "Computational Biology", "Humans", "Linear Models", "Phenotype", "Protein Interaction Mapping", "Proteins" ]
3044265
null
null
Methods
[SUBTITLE] Data sources [SUBSECTION] We propose to infer disease genes using gene proximity profiles derived from protein-protein interaction (PPI) networks, a phenotype similarity profile calculated using text mining technique, and known associations between disease phenotypes and genes extracted from the Online Mendelian Inheritance in Man (OMIM) database. There have been a few PPI networks with diverse coverage and quality. In our study, we adopt five widely-used PPI networks to calculate gene proximity profiles. First, the Human Protein Reference Database (HPRD) contains human protein-protein interactions that are manually extracted from the literature by expert biologists [28]. After removing duplications and self-linked interactions, we extract from release 8 of this database 36,634 interactions between 9,470 human genes. Second, the Biological General Repository for Interaction Datasets (BioGRID) contains protein and genetic interactions of major model organism species [29]. We extract from version 2.0.63 of this database 29,558 interactions between 9,043 human genes. Third, the Biomolecular Interaction Network Database (BIND) contains both high-throughput and manually curated interactions between biological molecules [30]. From this database, we collect 14,955 interactions between 6,089 human genes. Fourth, the IntAct molecular interaction database (IntAct) contains protein-protein interaction derived from literature [31]. From this database, we collect 30,030 interactions between 6,775 human genes. Finally, the Molecular INTeraction database (MINT) contains information about physical interactions between proteins [32]. From this database, we collect 15,902 interactions between 7,200 human proteins. Details about these five PPI networks are given in Table 1. Summary of the five protein-protein interaction networks. The phenotype similarity profile, which is obtained from an earlier work of van Driel et al[21], is represented as a matrix of pair-wise similarities between human disease phenotypes. Briefly, van Driel et al analyzed the full-text and clinical synopsis fields of all OMIM records, and used the anatomy and the disease sections of the medical subject headings vocabulary (MeSH) to extract terms from the OMIM records [21]. By doing this, they were able to characterize a phenotype using a feature vector that was composed of standardized and weighted phenotypic feature terms and further calculated a similarity score for a pair of phenotypes as the cosine of the angle of their feature vectors. Finally, they obtain a phenotype similarity profile that contains pair-wise similarity scores for 5,080 OMIM diseases. Known associations between disease phenotypes and genes are extracted from BioMart [33]. For genes in HPRD, we obtain 2,466 associations between 1,590 diseases and 1,440 genes. For BioGRID, we obtain 2,166 associations between 1,412 diseases and 1,247 genes. For BIND, we obtain 1,442 associations between 1,016 diseases and 811 genes. For IntAct, we obtain 1,622 associations between 1,094 diseases and 933 genes. For MINT, we obtain 1,231 associations between 889 diseases and 677 genes. We also summarize the above information in Table 1. We propose to infer disease genes using gene proximity profiles derived from protein-protein interaction (PPI) networks, a phenotype similarity profile calculated using text mining technique, and known associations between disease phenotypes and genes extracted from the Online Mendelian Inheritance in Man (OMIM) database. 
There have been a few PPI networks with diverse coverage and quality. In our study, we adopt five widely-used PPI networks to calculate gene proximity profiles. First, the Human Protein Reference Database (HPRD) contains human protein-protein interactions that are manually extracted from the literature by expert biologists [28]. After removing duplications and self-linked interactions, we extract from release 8 of this database 36,634 interactions between 9,470 human genes. Second, the Biological General Repository for Interaction Datasets (BioGRID) contains protein and genetic interactions of major model organism species [29]. We extract from version 2.0.63 of this database 29,558 interactions between 9,043 human genes. Third, the Biomolecular Interaction Network Database (BIND) contains both high-throughput and manually curated interactions between biological molecules [30]. From this database, we collect 14,955 interactions between 6,089 human genes. Fourth, the IntAct molecular interaction database (IntAct) contains protein-protein interaction derived from literature [31]. From this database, we collect 30,030 interactions between 6,775 human genes. Finally, the Molecular INTeraction database (MINT) contains information about physical interactions between proteins [32]. From this database, we collect 15,902 interactions between 7,200 human proteins. Details about these five PPI networks are given in Table 1. Summary of the five protein-protein interaction networks. The phenotype similarity profile, which is obtained from an earlier work of van Driel et al[21], is represented as a matrix of pair-wise similarities between human disease phenotypes. Briefly, van Driel et al analyzed the full-text and clinical synopsis fields of all OMIM records, and used the anatomy and the disease sections of the medical subject headings vocabulary (MeSH) to extract terms from the OMIM records [21]. By doing this, they were able to characterize a phenotype using a feature vector that was composed of standardized and weighted phenotypic feature terms and further calculated a similarity score for a pair of phenotypes as the cosine of the angle of their feature vectors. Finally, they obtain a phenotype similarity profile that contains pair-wise similarity scores for 5,080 OMIM diseases. Known associations between disease phenotypes and genes are extracted from BioMart [33]. For genes in HPRD, we obtain 2,466 associations between 1,590 diseases and 1,440 genes. For BioGRID, we obtain 2,166 associations between 1,412 diseases and 1,247 genes. For BIND, we obtain 1,442 associations between 1,016 diseases and 811 genes. For IntAct, we obtain 1,622 associations between 1,094 diseases and 933 genes. For MINT, we obtain 1,231 associations between 889 diseases and 677 genes. We also summarize the above information in Table 1. [SUBTITLE] Bayesian linear regression [SUBSECTION] We adopt a linear regression model to explain disease similarities in the phenotype similarity profile using gene similarities in one or more gene proximity profiles, and we solve this regression model via a Bayesian approach [34]. For a clear presentation, we first derive this method using a single gene proximity profile and then extend this model to include multiple profiles. A gene proximity profile contains pair-wise similarity measure of every two genes and is calculated as the diffusion kernel of the underlying PPI network. 
Given a network of n nodes, represented by an adjacency matrix A, we calculate the Laplacian of the network as L = D − A and the diffusion kernel as Z = e^(−γL), where D is a diagonal matrix containing node degrees, and 0 < γ < 1 is a free parameter that controls the magnitude of diffusion. With the kernel Z = (zij)n×n, we define the proximity of two genes i and j as the corresponding element zij in the kernel. Let ydd′ denote the similarity score between a query disease d and another disease d′. We define the phenotype similarity vector for disease d as y_d = (ydd1, ydd2, …, yddm)^T, i.e., the similarities between disease d and all m diseases d1,d2,…,dm in the phenotype similarity profile. Let zgg′ denote the proximity score between genes g and g′ in the gene proximity profile and G(d) the set of genes known to be associated with disease d. We define the proximity between gene g and disease d as the summation of proximity scores between gene g and all genes known to be associated with disease d, denoted by x(g, d) = Σ_{g′ ∈ G(d)} zgg′. We further define the gene proximity vector for gene g as x_g = (x(g, d1), x(g, d2), …, x(g, dm))^T, i.e., the proximities between gene g and all diseases d1,d2,…,dm in the phenotype similarity profile. We then explain the phenotype similarity vector for disease d using gene proximity vectors of all genes that are associated with the disease via a linear regression model y = Xβ + ε, where y = y_d is the response vector, X the design matrix, β the coefficient vector, and ε the residual vector. For disease d associated with a total of p genes, the design matrix X has p + 1 columns, with the first column being 1s for the purpose of incorporating the intercept. We solve this model using a Bayesian approach [34] and use the resulting Bayes factor to measure the strength of evidence for a candidate association. For the alternative model, we assume that y conditional on X is subject to a normal distribution, y | X, β, σ2 ~ N(Xβ, σ2I), with residuals independent and identically distributed, following a normal density with mean 0 and variance σ2. We set conjugate prior distributions for β and σ2, with β | σ2 ~ N(μ, σ2Σ), where μ is composed of prior means and σ2Σ of prior variances, Σ = diag(σ0^2, σ1^2, …, σp^2) being a diagonal matrix. The joint distribution of all random quantities y, β, and σ2 is then the product of the likelihood and the priors. Integrating out β and σ2, we obtain the marginal likelihood of y given X in closed form. On the other hand, for the null model, where y is independent of X, the marginal likelihood of y can be derived in a similar way. Then, the Bayes factor is calculated as the ratio of the marginal likelihoods under the alternative and the null hypotheses, BF = Pr(y | X, alternative) / Pr(y | null). Following the literature [34], we will use the parameter settings μ = 0, σ0 treated as +∞ in the calculation, and σi = 1 (for i ≥ 1) throughout this paper, though a grid search over other values of σi shows that the method is quite robust to this parameter. It has been shown that the parameter γ should take a small value [35-37]. In our study, we perform a grid search for this parameter and find results are quite robust when 0.1 ≤ γ ≤ 0.3. Therefore we use γ = 0.2 throughout this paper. Obviously, a larger Bayes factor indicates a stronger linear relationship between the disease similarity and the gene proximity. With this understanding, we propose the following schemes to prioritize candidate genes. First, given a query disease and a set of candidate genes, we calculate a Bayes factor for each candidate gene, with the assumption that the gene is the only one associated with the query disease.
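As a concrete but simplified illustration of this scoring step, the sketch below computes a Bayes factor by comparing closed-form marginal likelihoods of the alternative and null models under a conjugate normal–inverse-gamma prior, together with the diffusion-kernel computation on which the proximities are based. The inverse-gamma hyperparameters, the large finite prior variance used for the intercept (standing in for the effectively flat intercept prior), and all function names are assumptions made for the sketch; the exact closed form used in the paper is not reproduced here.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import multivariate_t

def diffusion_kernel(adjacency: np.ndarray, gamma: float = 0.2) -> np.ndarray:
    """Z = exp(-gamma * L) with L = D - A, as defined above."""
    degrees = np.diag(adjacency.sum(axis=1))
    laplacian = degrees - adjacency
    return expm(-gamma * laplacian)

def log_marginal(y: np.ndarray, X: np.ndarray, prior_sd: np.ndarray,
                 a: float = 2.0, b: float = 1.0) -> float:
    """Marginal log-likelihood of y under y = X beta + eps with
    beta | sigma^2 ~ N(0, sigma^2 diag(prior_sd^2)) and sigma^2 ~ Inv-Gamma(a, b).
    Marginally, y follows a multivariate t with 2a degrees of freedom."""
    n = y.shape[0]
    shape = (b / a) * (np.eye(n) + X @ np.diag(prior_sd ** 2) @ X.T)
    return float(multivariate_t.logpdf(y, loc=np.zeros(n), shape=shape, df=2 * a))

def bayes_factor(y: np.ndarray, proximity_vectors: np.ndarray) -> float:
    """proximity_vectors: (m diseases) x (p genes) matrix of x(g, d) values."""
    m, p = proximity_vectors.shape
    intercept = np.ones((m, 1))
    X_alt = np.hstack([intercept, proximity_vectors])
    # A large finite value approximates the flat prior on the intercept.
    sd_alt = np.array([1e3] + [1.0] * p)
    sd_null = np.array([1e3])
    log_bf = log_marginal(y, X_alt, sd_alt) - log_marginal(y, intercept, sd_null)
    return float(np.exp(log_bf))

# Toy usage: random numbers stand in for real similarity and proximity profiles,
# so diffusion_kernel is not invoked here.
rng = np.random.default_rng(0)
y = rng.normal(size=50)
x = rng.normal(size=(50, 1))
print(bayes_factor(y, x))
```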
Then, we rank candidate genes in non-increasing order according to their Bayes factors. This scheme mimics the situation in which we aim at inferring associations between genes and a "novel" disease that has not yet been studied. Second, for a disease that has been previously studied (and thus already has some genes associated), we can choose to calculate Bayes factors for candidate genes with the inclusion of the genes that are already known to be associated with the disease. This scheme is more suitable for inferring associations between genes and a disease that has been previously studied (and thus has known associated genes). In the case that multiple gene proximity profiles calculated from multiple PPI networks are available, we extend the regression model by incorporating additional gene proximity vectors into the design matrix. Suppose that disease d is associated with p genes and q gene proximity profiles are available; the design matrix X will then have pq + 1 columns, with column 1 for the intercept, columns 2 to p + 1 for the first profile, columns p + 2 to 2p + 1 for the second profile, and so on. With this extension, all the above reasoning remains unchanged.
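The multi-network extension just described amounts to concatenating one block of proximity columns per PPI network after the intercept column. A minimal sketch, with assumed variable names, could look like this:

```python
import numpy as np

def build_design_matrix(proximity_blocks):
    """proximity_blocks: a list with one (m x p) array per PPI network,
    where m is the number of diseases in the similarity profile and p the
    number of genes assumed to be associated with the query disease.
    Returns the (m x (p*q + 1)) design matrix with an intercept column."""
    m = proximity_blocks[0].shape[0]
    columns = [np.ones((m, 1))]              # column 1: intercept
    for block in proximity_blocks:           # columns 2..p+1, p+2..2p+1, ...
        columns.append(block)
    return np.hstack(columns)

# Toy usage: two networks, one candidate gene, five diseases.
rng = np.random.default_rng(1)
X = build_design_matrix([rng.random((5, 1)), rng.random((5, 1))])
print(X.shape)   # (5, 3), i.e. p*q + 1 = 1*2 + 1 columns
```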
[SUBTITLE] Validation methods and evaluation criteria [SUBSECTION] We adopt two large-scale leave-one-out cross-validation experiments to test how well the Bayesian regression approach performs in recovering known associations between diseases and genes. In the validation of random controls, we prioritize genes that are known to be associated with diseases against randomly selected genes. In each run of the validation, we select an association between a gene and a disease, assume that the association is unknown, and prioritize the gene against a set of 99 randomly selected control genes. In the validation of simulated linkage intervals, we simulate the real situation of identifying disease genes by prioritizing genes that are known to be associated with diseases against genes that are located around the disease genes. In each run of the validation, we select an association between a gene and a disease, assume that the association is unknown, and prioritize the gene against a set of control genes that are located within 10 Mbp upstream and downstream of this gene. In both experiments, we adopt the first scheme to mimic the situation of inferring associations between genes and novel diseases, for the purpose of achieving a stricter validation. In each of the above leave-one-out cross-validation experiments, we repeat the validation run for every known association between a disease and a gene, obtaining a number of ranking lists. We further normalize the ranks by dividing them by the total number of candidate genes in the ranking list to obtain rank ratios, and derive two criteria to measure the performance of a prioritization method. The first criterion is the mean rank ratio, which is simply the average of rank ratios over all disease genes in a cross-validation experiment. This criterion provides a summary of the ranks of all genes that are known to be associated with diseases, and the smaller the mean rank ratio, the better a method. The second criterion is the AUC, the area under the receiver operating characteristic (ROC) curve. Given a list of rank ratios and a predefined threshold, we define the sensitivity as the percentage of disease genes that are ranked above the threshold and the specificity as the percentage of control genes that are ranked below the threshold. Varying the threshold values, we are able to plot a ROC curve, which shows the relationship between sensitivity and 1-specificity. Calculating the area under the ROC curve, we obtain the AUC score, which provides an overall measure for the performance of a prioritization method.
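A small sketch of these two evaluation criteria, written against assumed inputs (one rank ratio per left-out disease gene and a pool of rank ratios for the control genes), is given below; it illustrates the definitions above rather than reproducing the exact evaluation script.

```python
import numpy as np

def mean_rank_ratio(disease_gene_rank_ratios):
    """Average rank ratio of the left-out disease genes (smaller is better)."""
    return float(np.mean(disease_gene_rank_ratios))

def auc_from_rank_ratios(disease_gene_rank_ratios, control_gene_rank_ratios,
                         num_thresholds=1000):
    """Sweep a threshold over rank ratios: sensitivity is the fraction of
    disease genes ranked above the threshold (rank ratio <= t), specificity
    the fraction of control genes ranked below it (rank ratio > t)."""
    pos = np.asarray(disease_gene_rank_ratios)
    neg = np.asarray(control_gene_rank_ratios)
    thresholds = np.linspace(0.0, 1.0, num_thresholds)
    sensitivity = np.array([(pos <= t).mean() for t in thresholds])
    one_minus_spec = np.array([(neg <= t).mean() for t in thresholds])
    # Trapezoidal integration of the ROC curve (x = 1 - specificity, y = sensitivity).
    return float(np.sum(np.diff(one_minus_spec)
                        * (sensitivity[1:] + sensitivity[:-1]) / 2.0))

# Toy usage: disease genes tend to rank near the top, controls are uniform.
rng = np.random.default_rng(2)
pos = rng.beta(1, 5, size=200)     # skewed towards small rank ratios
neg = rng.uniform(size=2000)
print(mean_rank_ratio(pos), auc_from_rank_ratios(pos, neg))
```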
null
null
null
null
[ "Background", "Data sources", "Bayesian linear regression", "Validation methods and evaluation criteria", "Results", "Gene proximity implying phenotype similarity", "Prioritization with individual PPI networks", "Prioritization with the integration of multiple PPI networks", "Conclusions and discussion", "Competing interests", "Authors' contributions" ]
[ "Inference of genes responsible for human inherited diseases has been one of the major tasks in modern human and medical genetics. Traditionally, associations between diseases and genes are pinpointed through statistical methods such as family-based linkage analysis and population-based association studies [1], which have been demonstrating remarkable successes in mapping disease genes. However, linkage analysis can only associate diseases with genetic regions that typically contain dozens to hundreds of genes, and association studies usually require carefully selected candidate genes that are biologically related to the disease of interest, making computational inference of causative genes from positional candidates and the selection of functional candidates indispensible [2,3].\nMost existing computational methods for inferring causative genes from candidates are formulated as a one-class novelty learning problem that is usually solved with the guilt-by-association principle, which suggests to compute a score from functional genomics data to quantify the strength of association between a query disease and a candidate gene, and then rank candidate genes according to their scores to facilitate the selection of susceptibility genes [4]. For this purpose, various genomic data, including protein sequences [5,6], gene expression profiles [6-8], functional annotations [6,8-11], literature descriptions [6,7,12], protein interactions [6,8,13,14], and many others [15] have been employed to characterize similarities between genes, with the assumption that genes similar in one or more characteristics are usually similar in their functions, and thus are likely to be associated with the same disease. Recent studies have also shown the modular nature of human genetic diseases [15-23], which suggests that diseases share common clinic characteristics are often caused by functionally related genes [24]. With this understanding, various methods have been proposed to utilize phenotype similarity and gene proximity for the inference of causative genes for human inherited diseases [14,25-27].\nIt has been shown that the Pearson's correlation coefficient of similarities between phenotypes and closeness of genes in a single protein-protein interaction (PPI) network can be used as a concordance score to facilitate the prioritization of candidate genes [25]. However, PPI networks are far from complete. For example, the Human Protein Reference Database (HPRD) [28], as one of the most comprehensive protein interaction databases, only covers less than half of human protein-coding genes. Therefore, relying on a single PPI network to infer disease genes will restrict the scope of application of such methods. Meanwhile, there have been a few protein interaction databases constructed and maintained independently. These databases are often quite diverse in coverage and quality, making the selection of a suitable PPI network inevitable. Moreover, although the naïve thinking of combining all available protein interactions into a single large network is straightforward, performance of methods based on such a combined network is questionable [25].\nWith these considerations, we propose a Bayesian regression approach that can be used with either a single PPI network or multiple networks to prioritize candidate genes. 
We adopt a linear model to explain disease similarity using gene proximity, and we solve this model via a Bayesian approach, which yields an analytic form of Bayes factor for measuring the strength of association between a query disease and a candidate gene. We then use Bayes factors as scores to prioritize candidate genes. We show the validity of assumptions of this approach, and we demonstrate the effectiveness of this approach on five PPI networks via large scale leave-one-out cross-validation experiments and comprehensive statistical analysis. We further show the capability of our approach in integrating multiple PPI networks.", "We propose to infer disease genes using gene proximity profiles derived from protein-protein interaction (PPI) networks, a phenotype similarity profile calculated using text mining technique, and known associations between disease phenotypes and genes extracted from the Online Mendelian Inheritance in Man (OMIM) database.\nThere have been a few PPI networks with diverse coverage and quality. In our study, we adopt five widely-used PPI networks to calculate gene proximity profiles. First, the Human Protein Reference Database (HPRD) contains human protein-protein interactions that are manually extracted from the literature by expert biologists [28]. After removing duplications and self-linked interactions, we extract from release 8 of this database 36,634 interactions between 9,470 human genes. Second, the Biological General Repository for Interaction Datasets (BioGRID) contains protein and genetic interactions of major model organism species [29]. We extract from version 2.0.63 of this database 29,558 interactions between 9,043 human genes. Third, the Biomolecular Interaction Network Database (BIND) contains both high-throughput and manually curated interactions between biological molecules [30]. From this database, we collect 14,955 interactions between 6,089 human genes. Fourth, the IntAct molecular interaction database (IntAct) contains protein-protein interaction derived from literature [31]. From this database, we collect 30,030 interactions between 6,775 human genes. Finally, the Molecular INTeraction database (MINT) contains information about physical interactions between proteins [32]. From this database, we collect 15,902 interactions between 7,200 human proteins. Details about these five PPI networks are given in Table 1.\nSummary of the five protein-protein interaction networks.\nThe phenotype similarity profile, which is obtained from an earlier work of van Driel et al[21], is represented as a matrix of pair-wise similarities between human disease phenotypes. Briefly, van Driel et al analyzed the full-text and clinical synopsis fields of all OMIM records, and used the anatomy and the disease sections of the medical subject headings vocabulary (MeSH) to extract terms from the OMIM records [21]. By doing this, they were able to characterize a phenotype using a feature vector that was composed of standardized and weighted phenotypic feature terms and further calculated a similarity score for a pair of phenotypes as the cosine of the angle of their feature vectors. Finally, they obtain a phenotype similarity profile that contains pair-wise similarity scores for 5,080 OMIM diseases.\nKnown associations between disease phenotypes and genes are extracted from BioMart [33]. For genes in HPRD, we obtain 2,466 associations between 1,590 diseases and 1,440 genes. For BioGRID, we obtain 2,166 associations between 1,412 diseases and 1,247 genes. 
For BIND, we obtain 1,442 associations between 1,016 diseases and 811 genes. For IntAct, we obtain 1,622 associations between 1,094 diseases and 933 genes. For MINT, we obtain 1,231 associations between 889 diseases and 677 genes. We also summarize the above information in Table 1.", "We adopt a linear regression model to explain disease similarities in the phenotype similarity profile using gene similarities in one or more gene proximity profiles, and we solve this regression model via a Bayesian approach [34]. For a clear presentation, we first derive this method using a single gene proximity profile and then extend this model to include multiple profiles.\nA gene proximity profile contains pair-wise similarity measure of every two genes and is calculated as the diffusion kernel of the underlying PPI network. Given a network of n nodes, represented by an adjacency matrix A, we calculate the Laplacian of the network as L = D – A and the diffusion kernel as Z = e-γL, where D is a diagonal matrix containing node degrees, and 0 <γ < 1 a free parameter that controls the magnitude of diffusion. With the kernel Z = (zij)n×n, we define the proximity of two genes i and j as the corresponding element zij in the kernel.\nLet ydd′ denote the similarity score between a query disease d and another disease d′. We define the phenotype similarity vector for disease d as , i.e., the similarities between disease d and all m diseases d1,d2,…,dm in the phenotype similarity profile. Let Zgg′ denote the proximity score between genes g and g′ in the gene proximity profile and G(d) the set of genes known as associated with disease d. We define the proximity between gene g and disease d as the summation of proximity scores between gene g and all genes known as associated with disease d, denoted by . We further define the gene proximity vector for gene g as , i.e., the proximities between gene g and all diseases d1,d2,…,dm in the phenotype similarity profile.\nWe then explain the phenotype similarity vector for disease d using gene proximity vectors of all genes that are associated with the disease via a linear regression model\n,\nwhere y = yd is the response vector, X the design matrix, β the coefficient vector, and the residual vector. For disease d associated with a total of p genes, the design matrix X has p + 1 columns, with the first column being 1s for the purpose of incorporating the intercept.\nWe solve this model using a Bayesian approach [34] and use the resulting Bayes factor to measure the strength of evidence for a candidate association. For the alternative model, we assume that y conditional on X is subject to a normal distribution, as\n,\nwith residuals independent and identically distributed, following normal density with mean 0 and variance σ2. We set conjugate prior distributions for β and σ2, as\n,\nwhere is composed of prior means, and σ2Σ prior variances with being a diagonal matrix. 
The joint distribution of all random quantities y, β, and σ2 is then given as.\nIntegrating out β and σ2, we obtain the marginal likelihood of y given X as\n,\nwhere and with and .\nOn the other hand, for the null model, where y is independent of X, the marginal likelihood of y can be derived in a similar way, as\n,\nwhere , and .\nThen, the Bayes factor is calculated as the ratio of the marginal likelihood under the alternative and the null hypotheses, respectively, as.\nFollowing literature [34], we will use the parameter setting (as +∞ in calculation) and σi = 1 (for i ≥ 1) throughout this paper, though a grid search for other values of σi shows that the method is quite robust to this parameter. It has been shown that the parameter γ should take a small value [35-37]. In our study, we perform a grid search for this parameter and find results are quite robust when 0.1 ≤ γ ≤ 0.3. Therefore we will use γ = 0.2 throughout this paper.\nObviously, a larger Bayes factor indicates a better exhibition of the linear relationship between the disease similarity and the gene proximity. With this understanding, we propose the following schemes to prioritize candidate genes. First, given a query disease and a set of candidate genes, we calculate a Bayes factor for each candidate gene, with the assumption that the gene is the only one associated with the query disease. Then, we rank candidate genes in non-increasing order according to their Bayes factors. This scheme mimics the situation in which we aim at inferring associations between genes and a \"novel\" disease that has yet not been previously studied. Second, for a disease that has been previously studied (and thus already has some genes associated), we can choose to calculate Bayes factors for candidate genes with the inclusion of the genes that are already known to be associated with the disease. This scheme is more suitable for inferring associations between genes and a disease that has been previously studied (and thus has known associated genes).\nIn the case that multiple gene proximity profiles calculated from multiple PPI networks are available, we extend the regression model by incorporating additional gene proximity vectors into the design matrix. Suppose that disease d is associated with p genes, and q gene proximity profiles are available, the design matrix X will have pq + 1 columns, with column 1 for the intercept, columns 2 to p + 1 for the first profile, columns p + 2 to 2p + 1 for the second profile, and so on. With this extension, all the above reasoning remains unchanged.", "We adopt two large scale leave-one-out cross-validation experiments to test how well the Bayesian regression approach performs in recovering known associations between diseases and genes. In the validation of random controls, we prioritize genes that are known as associated with diseases against randomly selected genes. In each run of the validation, we select an association between a gene and a disease, assume that the association is unknown, and prioritize the gene against a set of 99 randomly selected control genes. In the validation of simulated linkage intervals, we simulate the real situation of identify disease genes by prioritizing genes that are known as associated with diseases against genes that are located around the disease genes. 
In each run of the validation, we select an association between a gene and a disease, assume that the association is unknown, and prioritize the gene against a set of control genes that are located in 10Mbp upstream and downstream around this gene. In both experiments, we adopt the first scheme to mimic the situation of inferring associations between genes and novel diseases, for the purpose of achieving a more strict validation.\nIn each of the above leave-one-out cross-validation experiments, we repeat the validation run for every known association between a disease and a gene, obtaining a number of ranking lists. We further normalize the ranks by dividing them with the total number of candidate genes in the ranking list to obtain rank ratios and derive two criteria to measure the performance of a prioritization method. The first criterion is mean rank ratio, which is simply the average of rank ratios over all disease genes in a cross-validation experiment. This criterion provides a summary of the ranks of all genes that are known as associated with diseases, and the smaller the mean rank ratio, the better a method. The second criterion is AUC, the area under the receiver operating characteristic curve (ROC). Given a list of rank ratios and a predefined threshold, we define the sensitivity as the percentage of disease genes that are ranked above the threshold and the specificity as the percentage of control genes that are ranked below the threshold. Varying the threshold values, we are able to plot a ROC curve, which shows the relationship between sensitivity and 1-specificity. Calculating the area under the ROC curve, we obtain the AUC score, which provides an overall measure for the performance of a prioritization method.", "[SUBTITLE] Gene proximity implying phenotype similarity [SUBSECTION] The proposed approach for inferring disease genes is based on the assumption that phenotypically similar diseases are caused by functionally related genes that are usually proximal in a PPI network. Moreover, we assume the existence of a linear relationship between similarities of diseases and proximities of genes that are associated with the diseases. In order to validate this assumption, we compile from HPRD 2,466 associations between 1,590 diseases and 1,440 genes, calculate Bayes factors for these disease genes, and run a Wilcoxon signed rank test to check whether the resulting Bayes factors are significantly greater than 1 (the random case). Results show that the p-value is smaller than 2.2E-16, indicating that the similarities of diseases have a linear relationship with the proximities of disease genes.\nTo further substantiate this point, we perform a series of permutations towards disease-disease, disease-gene, and gene-gene relationships. First, we break disease-disease relationship by permuting the phenotype similarity profile. Second, we break disease-gene relationship by two methods: (1) permuting disease-gene associations and (2) replacing disease genes in known disease-gene associations with randomly selected genes. Third, we break gene-gene relationship by permuting connections in the underlying protein-protein interaction network while keeping node degrees and recalculating the diffusion kernel. 
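The third control (rewiring the PPI network while keeping node degrees) is commonly implemented with repeated double-edge swaps; a minimal sketch using networkx is shown below, where the swap budget and the toy input graph are arbitrary illustrative choices rather than the settings used in the study.

```python
import networkx as nx

def degree_preserving_permutation(graph: nx.Graph, seed: int = 0) -> nx.Graph:
    """Return a rewired copy of the PPI network with the same degree sequence."""
    permuted = graph.copy()
    n_edges = permuted.number_of_edges()
    # Each double-edge swap exchanges the endpoints of two edges, so node
    # degrees are preserved while the wiring is randomized.
    nx.double_edge_swap(permuted, nswap=10 * n_edges,
                        max_tries=100 * n_edges, seed=seed)
    return permuted

# Toy usage: a small random graph stands in for a real PPI network.
g = nx.gnm_random_graph(50, 150, seed=1)
g_perm = degree_preserving_permutation(g)
assert sorted(dict(g.degree()).values()) == sorted(dict(g_perm.degree()).values())
```

The diffusion kernel is then recomputed on the permuted network before the Bayes factors of the disease genes are recalculated.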
For each of the above permutations, we calculate Bayes factors of disease genes and present the results in Figure 1, from which we can clearly see that the median of Bayes factors based on the original data is much higher than those using permuted relationships.\nBayes factors of the original and permuted data. “original”, “permuted PPS”, “permuted seed”, “random seed”, “permuted PPI” denote the results obtained using original data, permuting phenotype similarity profile, permuting disease-gene associations, replacing disease genes in disease-gene associations with randomly selected genes, and permuting connections in the protein-protein interaction network, respectively.\nWe also perform similar studies using data extracted from BioGRID, BIND, IntAct, and MINT, and we obtain similar results as HPRD. From these comprehensive studies, we conclude that similarities between diseases can be explained using network proximities of genes that are associated with the diseases. In other words, gene proximity implies phenotype similarity.\nThe proposed approach for inferring disease genes is based on the assumption that phenotypically similar diseases are caused by functionally related genes that are usually proximal in a PPI network. Moreover, we assume the existence of a linear relationship between similarities of diseases and proximities of genes that are associated with the diseases. In order to validate this assumption, we compile from HPRD 2,466 associations between 1,590 diseases and 1,440 genes, calculate Bayes factors for these disease genes, and run a Wilcoxon signed rank test to check whether the resulting Bayes factors are significantly greater than 1 (the random case). Results show that the p-value is smaller than 2.2E-16, indicating that the similarities of diseases have a linear relationship with the proximities of disease genes.\nTo further substantiate this point, we perform a series of permutations towards disease-disease, disease-gene, and gene-gene relationships. First, we break disease-disease relationship by permuting the phenotype similarity profile. Second, we break disease-gene relationship by two methods: (1) permuting disease-gene associations and (2) replacing disease genes in known disease-gene associations with randomly selected genes. Third, we break gene-gene relationship by permuting connections in the underlying protein-protein interaction network while keeping node degrees and recalculating the diffusion kernel. For each of the above permutations, we calculate Bayes factors of disease genes and present the results in Figure 1, from which we can clearly see that the median of Bayes factors based on the original data is much higher than those using permuted relationships.\nBayes factors of the original and permuted data. “original”, “permuted PPS”, “permuted seed”, “random seed”, “permuted PPI” denote the results obtained using original data, permuting phenotype similarity profile, permuting disease-gene associations, replacing disease genes in disease-gene associations with randomly selected genes, and permuting connections in the protein-protein interaction network, respectively.\nWe also perform similar studies using data extracted from BioGRID, BIND, IntAct, and MINT, and we obtain similar results as HPRD. From these comprehensive studies, we conclude that similarities between diseases can be explained using network proximities of genes that are associated with the diseases. 
Prioritization with individual PPI networks

We design a series of large-scale leave-one-out cross-validation experiments to show the validity and effectiveness of the Bayesian regression approach on individual PPI networks. As described in the Methods section, in each run of the validation procedure we prioritize candidate genes according to their Bayes factors against two control sets, random controls and linkage intervals, with performance evaluated by mean rank ratios and AUC scores. Results are shown in Table 2 and Figure 2.

Table 2. Performance of the Bayesian regression approach on individual data sources. Results are obtained using the diffusion kernel (γ = 0.2) with Bayesian prior µ = 0 and σi = 1 (for i ≥ 1). Results for the validation with random controls are the mean (standard deviation) of 10 independent runs.

Figure 2. ROC curves of the five PPI networks on random controls (A) and linkage intervals (B). AUC scores for the validation with random controls are averages of 10 independent runs.
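For reference, the gene proximities underlying Table 2 are diffusion-kernel entries. The sketch below shows the computation for a small dense network using scipy.linalg.expm; this is illustrative only, since for networks with thousands of nodes a sparse or approximate computation would be needed.

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(adjacency, gamma=0.2):
    """Z = exp(-gamma * L) with L = D - A, the graph Laplacian of the PPI network."""
    degrees = np.diag(adjacency.sum(axis=1))
    laplacian = degrees - adjacency
    return expm(-gamma * laplacian)

def gene_disease_proximity(kernel, gene_idx, disease_gene_idx):
    """Proximity between one candidate gene and a disease: the sum of kernel
    entries between the candidate and every gene already linked to the disease."""
    return kernel[gene_idx, disease_gene_idx].sum()

# Toy example: a 4-gene network forming a single path 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Z = diffusion_kernel(A, gamma=0.2)
print(gene_disease_proximity(Z, gene_idx=0, disease_gene_idx=[2, 3]))
```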
From Table 2, we see that the mean rank ratios obtained with the five PPI networks are all below 0.17 and the AUC scores are all above 0.83, supporting the effectiveness of the Bayesian regression approach. The best performance is obtained with HPRD, for which the mean rank ratios against random controls and linkage intervals are 0.1349 and 0.1353, respectively, and the AUC scores are 0.8738 and 0.8720, respectively. From Figure 2, we see that the ROC curves for the HPRD data set lie above those of the other data sets, suggesting that the performance on HPRD is superior to that on the others. To understand this observation, we perform a one-sided Wilcoxon rank sum test of the hypothesis that Bayes factors of disease genes for the HPRD data set are greater than those for the other data sets. The Bayes factors for HPRD are indeed greater than those for BioGRID (p-value = 4.7E-2), BIND (p-value = 2.6E-3), IntAct (p-value = 1.9E-5), and MINT (p-value = 2.5E-5). We therefore conjecture that the performance of the proposed method depends on how strongly the linear relationship between disease similarity and gene proximity is exhibited in the data.

To further demonstrate the effectiveness of the proposed approach, we repeat the same leave-one-out cross-validation experiments using the existing CIPHER approach [25], which relies on Pearson's correlation coefficient between the disease similarity vector and the gene proximity vector to prioritize candidate genes. We compare the results of the two approaches in Figure 3, which shows that the Bayesian regression approach in general achieves lower mean rank ratios and higher AUC scores on all five data sets. For example, in the cross-validation for linkage intervals on the HPRD data set, the CIPHER approach achieves a mean rank ratio of 0.1746 and an AUC score of 0.8313, whereas the Bayesian approach achieves a mean rank ratio of 0.1353 and an AUC score of 0.8720, a clear improvement over CIPHER. Note that the CIPHER method calculates its gene proximity matrix by applying a Gaussian kernel to the shortest path distance matrix of the underlying network; we also try using the diffusion kernel matrix as the gene proximity matrix and find no obvious difference. These results strongly suggest that the Bayesian regression approach is superior to the CIPHER approach in prioritizing candidate genes.

Figure 3. Comparison with the CIPHER approach and the ordinary regression method. Subplots A and C illustrate mean rank ratios and AUC scores against random controls, respectively. Subplots B and D illustrate mean rank ratios and AUC scores against linkage intervals, respectively. Results for the validation with random controls are averages of 10 independent runs (variance not shown).

It is also of interest to compare the Bayesian approach with ordinary linear regression. For this purpose, we implement another method that relies on R², the coefficient of determination, to prioritize candidate genes. We repeat the leave-one-out cross-validation experiments for this method and present the results in Figure 3, which shows that the Bayesian regression approach in general achieves better performance than the ordinary regression method, in terms of both mean rank ratios and AUC scores, on all five data sets.
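The two baselines can be reduced to compact scoring functions. The sketch below shows a CIPHER-style Pearson correlation score and an ordinary least-squares R² score for a single candidate gene; both are simplified stand-ins written for illustration, not the original implementations.

```python
import numpy as np

def cipher_like_score(disease_sim, gene_prox):
    """Pearson correlation between the phenotype similarity vector of the query
    disease and the proximity vector of the candidate gene (CIPHER-style score)."""
    return float(np.corrcoef(disease_sim, gene_prox)[0, 1])

def r_squared_score(disease_sim, gene_prox):
    """Coefficient of determination of an ordinary least-squares fit of the
    disease similarities on the candidate's proximity vector (plus intercept)."""
    X = np.column_stack([np.ones(len(gene_prox)), gene_prox])
    beta, *_ = np.linalg.lstsq(X, disease_sim, rcond=None)
    residuals = disease_sim - X @ beta
    ss_res = float(residuals @ residuals)
    ss_tot = float(((disease_sim - disease_sim.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

# Candidates are then ranked by these scores in non-increasing order,
# exactly as is done with the Bayes factors.
```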
Prioritization with the integration of multiple PPI networks

The coverage of a single PPI network is in general not high. Even the largest network, HPRD, covers only 9,470 genes, fewer than half of the known human protein-coding genes. We therefore propose to use the Bayesian regression approach to integrate multiple PPI networks in order to improve coverage.

By taking the union of the genes in the individual PPI networks, we obtain 15,644 human genes. Focusing on these genes, we extract from BioMart 2,708 associations between 1,752 diseases and 1,621 genes. With this data set, we repeat the leave-one-out cross-validation experiments using the individual gene proximity profiles and present the results in Figure 4. Note that in this procedure we set the proximity of two genes to zero (the minimum proximity) if either gene is absent from the underlying network. The performance of the individual proximity profiles drops dramatically on this larger data set (compare with Table 2), simply because each PPI network covers only a fraction of the genes, and this scheme for handling missing data (setting proximities to zero) yields small Bayes factors for genes that are absent from the network.

Figure 4. Performance of the integration method. Subplot A illustrates mean rank ratios for the integration method and the individual PPI networks. Subplot B illustrates AUC scores for the integration method and the individual PPI networks. Results for the validation with random controls are averages of 10 independent runs (variance not shown).
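The missing-data convention described above, zero proximity whenever a gene is absent from the network, can be expressed in a few lines; the gene_index dictionary mapping gene identifiers to kernel row indices is an assumed data structure used only for illustration.

```python
def proximity_with_missing(kernel, gene_index, gene, disease_genes):
    """Sum kernel proximities between a candidate gene and a disease's known
    genes; pairs involving a gene absent from this PPI network contribute zero."""
    if gene not in gene_index:
        return 0.0
    i = gene_index[gene]
    total = 0.0
    for g in disease_genes:
        j = gene_index.get(g)
        if j is not None:
            total += kernel[i, j]
    return total
```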
We then use the Bayesian regression approach to integrate all five PPI networks by extending the design matrix to include gene proximities from multiple profiles. We repeat the leave-one-out cross-validation experiments and present the results in Figure 4, which clearly shows the better performance of the proposed approach when multiple PPI networks are used together. The mean rank ratios for random controls and linkage intervals are 0.1385 and 0.1380, respectively, with AUC scores of 0.8702 and 0.8692, respectively. In contrast, combining all genes and interactions from the individual PPI networks into one large network (15,644 nodes and 77,332 edges) and then applying CIPHER yields mean rank ratios of only 0.1850 and 0.1876 (AUC scores 0.8230 and 0.8180) for random controls and linkage intervals, respectively. Directly applying the Bayesian regression approach to the combined network yields mean rank ratios of 0.1462 and 0.1469 (AUC scores 0.8624 and 0.8601) for random controls and linkage intervals, respectively. Furthermore, we extract from the combined network the interactions that exist in at least two individual PPI networks, obtaining a high-confidence network (8,463 nodes and 28,617 edges). Focusing on the genes in this network, we extract from BioMart 2,219 associations between 1,441 diseases and 1,271 genes. Directly applying the Bayesian regression approach to this high-confidence network yields mean rank ratios of 0.1373 and 0.1380 (AUC scores 0.8717 and 0.8694) for random controls and linkage intervals, respectively. Although these results are slightly better than those of the Bayesian integration method, the coverage of the high-confidence network is much lower than that of the Bayesian integration approach. From these results, we conclude that the Bayesian regression approach is effective in integrating multiple PPI networks for prioritizing disease genes.
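The extension of the design matrix to multiple networks, referred to above, can be sketched as follows under the assumption that each network contributes one block of proximity columns; this is a schematic reconstruction rather than the authors' code.

```python
import numpy as np

def integrated_design_matrix(proximity_blocks):
    """Stack an intercept column with the gene-proximity columns computed from
    each PPI network. `proximity_blocks` is a list with one (m x p) array per
    network, where m is the number of diseases in the phenotype profile and p
    the number of genes assumed to be associated with the query disease."""
    m = proximity_blocks[0].shape[0]
    columns = [np.ones((m, 1))]
    columns.extend(np.asarray(block, dtype=float) for block in proximity_blocks)
    return np.hstack(columns)

# With q networks and p genes the result has p*q + 1 columns, matching the
# multi-profile extension of the single-network regression model.
```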
It is also of interest to see how much the individual PPI networks contribute to the integration approach. For this purpose, we repeat the cross-validation experiments integrating every combination of four data sources and examine how excluding the remaining network affects the prioritization results. We find that all data sources contribute positively to the integration approach, because mean rank ratios increase and AUC scores drop when any of the five data sources is excluded. The results also suggest an ordering of the data sources by their contributions (differences in cross-validation results): HPRD > BIND > BioGRID > IntAct > MINT. It is not surprising that HPRD contributes most to the integration method, because HPRD has the largest coverage and the highest performance in the previous validations. It is also not surprising that MINT contributes least and IntAct second least, because neither the coverage nor the performance of these two data sources is high. It is less obvious why BIND contributes more than BioGRID, since individually BioGRID has higher coverage and better performance than BIND. To understand this observation, we analyze the relationships among the five data sources and find that the Bayes factors calculated from HPRD and BioGRID are highly correlated (Pearson's correlation coefficient = 0.9770). Therefore, removing BioGRID when HPRD is present does not significantly affect the performance of the integration method.

Finally, we study whether the integration approach is biased toward well-characterized genes, that is, whether genes appearing in more data sources tend to receive higher ranks. We group all genes in a validation procedure into 10 categories according to their ranks, such that the i-th category contains genes ranked in the interval ((i - 1) × 10%, i × 10%]. Within each category, we group genes according to the number of PPI networks containing them, such that the j-th group contains genes appearing in exactly j networks. We then perform pairwise Pearson's chi-squared tests against the alternative hypothesis that the frequencies of genes appearing in different numbers of PPI networks differ across rank categories. For the cross-validation experiment on linkage intervals, the minimum p-value produced by this series of chi-squared tests is 0.0909, so we cannot reject the null hypothesis; similar results are obtained for the cross-validation experiment on random controls. We therefore conclude that the integration approach is not biased toward well-characterized genes; in other words, genes appearing in more data sources do not tend to receive higher ranks.
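A simplified version of this bias check is sketched below: genes are cross-tabulated by rank decile and by the number of networks covering them, and independence is tested with Pearson's chi-squared test via scipy. The pairwise testing described above is collapsed here into a single contingency-table test for brevity, so this is illustrative rather than an exact reproduction.

```python
import numpy as np
from scipy.stats import chi2_contingency

def bias_test(rank_ratios, network_counts, n_networks=5):
    """Cross-tabulate rank deciles against the number of PPI networks a gene
    appears in, then test independence with Pearson's chi-squared test."""
    deciles = np.minimum((np.asarray(rank_ratios) * 10).astype(int), 9)
    table = np.zeros((10, n_networks), dtype=int)
    for d, c in zip(deciles, network_counts):
        table[d, c - 1] += 1
    table = table[table.sum(axis=1) > 0]      # drop empty rank deciles
    table = table[:, table.sum(axis=0) > 0]   # drop unused network counts
    chi2, p_value, dof, _ = chi2_contingency(table)
    return chi2, p_value
```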
Conclusions and discussion

In this paper, we propose a Bayesian regression approach that relies on the linear relationship between disease similarity and gene proximity to prioritize candidate genes.
We show that gene proximity, measured as the diffusion distance in a PPI network, implies disease similarity, and we perform a series of leave-one-out cross-validation experiments to demonstrate the effectiveness of the proposed approach. The Bayesian regression approach achieves much higher performance than the existing CIPHER approach and the ordinary regression method. We also use the proposed approach to integrate multiple PPI networks, achieving higher coverage while maintaining superior performance. The contributions of this paper therefore lie in the following points: (1) a systematic validation of the assumption that gene proximity in a PPI network implies disease similarity; (2) a Bayesian regression approach that greatly improves the performance of disease gene prioritization in comparison with the previous CIPHER approach; (3) a detailed analysis of the effectiveness of five widely used PPI networks in prioritizing disease genes; and (4) a simple yet effective method to integrate multiple PPI networks into a single prioritization model.

Our approach can certainly be studied further from the following aspects. First, the main reason for using conjugate priors in the Bayesian regression model is to obtain analytic solutions and thus alleviate the computational burden of calculating Bayes factors. Although this formulation works well, the specification of priors is intrinsically complicated and subjective. The main consideration here is that the posterior mean and variance should not depend on the units in which the disease similarities are measured and should also be invariant to shifts of the response variable. One could therefore consider using a Jeffreys prior instead of the conjugate prior. In that case, a Markov chain Monte Carlo (MCMC) approach would be necessary for calculating the marginal likelihood, and the computational burden could be high.

Second, it is conceptually straightforward to extend the Bayesian regression model to infer the interactive effects of multiple genes on a query complex disease. For example, we could enumerate pairwise combinations of all candidate genes and calculate a Bayes factor for each combination to infer the interactive effect of two genes on a query disease. The challenge, however, is computational feasibility, because the number of combinations of even a small number of candidate genes is large.

Third, the way missing data are handled in the proposed approach (setting proximities to zero), though simple, is rather naïve. When more data sources are integrated, the overlap of genes between data sources will typically be lower, and a more effective method for dealing with missing data is desirable. One possible solution is to impute missing data using the mean or median of the observed data. Another possible solution is to rank candidate genes using each data source individually and then aggregate the ranks, as has been done in the existing literature [6]. Both methods have their own advantages and disadvantages, and a comprehensive comparison study is needed to understand these possible solutions in detail.
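The two alternatives mentioned for handling missing data can be illustrated with a short sketch, assuming missing proximities or rank ratios are encoded as NaN; both functions are simple illustrative stand-ins rather than recommended solutions.

```python
import numpy as np

def impute_with_median(proximities):
    """Replace missing proximities (encoded as NaN) with the median of the
    observed values for the same candidate gene."""
    values = np.asarray(proximities, dtype=float)
    median = np.nanmedian(values)
    return np.where(np.isnan(values), median, values)

def aggregate_ranks(per_network_rank_ratios):
    """Aggregate per-network rankings by averaging rank ratios, ignoring
    networks that do not cover the candidate gene (NaN entries)."""
    ratios = np.asarray(per_network_rank_ratios, dtype=float)
    return float(np.nanmean(ratios))
```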
Finally, a large amount of data has been produced by high-throughput techniques: the sequences of most human protein-coding genes are known, the mapping of human genes to Gene Ontology (GO) terms is available, and expression profiles of most human genes across various conditions have been obtained. How to extend the proposed approach to integrate these data sources will be one of the directions of our future work.

Competing interests

No conflicts of interest declared.

Authors' contributions

WZ implemented the method, collected the results, and wrote the manuscript. FS designed the research and wrote the manuscript. RJ designed the research and wrote the manuscript. All authors read and approved the final manuscript.
[ "Background", "Methods", "Data sources", "Bayesian linear regression", "Validation methods and evaluation criteria", "Results", "Gene proximity implying phenotype similarity", "Prioritization with individual PPI networks", "Prioritization with the integration of multiple PPI networks", "Conclusions and discussion", "Competing interests", "Authors' contributions" ]
[ "Inference of genes responsible for human inherited diseases has been one of the major tasks in modern human and medical genetics. Traditionally, associations between diseases and genes are pinpointed through statistical methods such as family-based linkage analysis and population-based association studies [1], which have been demonstrating remarkable successes in mapping disease genes. However, linkage analysis can only associate diseases with genetic regions that typically contain dozens to hundreds of genes, and association studies usually require carefully selected candidate genes that are biologically related to the disease of interest, making computational inference of causative genes from positional candidates and the selection of functional candidates indispensible [2,3].\nMost existing computational methods for inferring causative genes from candidates are formulated as a one-class novelty learning problem that is usually solved with the guilt-by-association principle, which suggests to compute a score from functional genomics data to quantify the strength of association between a query disease and a candidate gene, and then rank candidate genes according to their scores to facilitate the selection of susceptibility genes [4]. For this purpose, various genomic data, including protein sequences [5,6], gene expression profiles [6-8], functional annotations [6,8-11], literature descriptions [6,7,12], protein interactions [6,8,13,14], and many others [15] have been employed to characterize similarities between genes, with the assumption that genes similar in one or more characteristics are usually similar in their functions, and thus are likely to be associated with the same disease. Recent studies have also shown the modular nature of human genetic diseases [15-23], which suggests that diseases share common clinic characteristics are often caused by functionally related genes [24]. With this understanding, various methods have been proposed to utilize phenotype similarity and gene proximity for the inference of causative genes for human inherited diseases [14,25-27].\nIt has been shown that the Pearson's correlation coefficient of similarities between phenotypes and closeness of genes in a single protein-protein interaction (PPI) network can be used as a concordance score to facilitate the prioritization of candidate genes [25]. However, PPI networks are far from complete. For example, the Human Protein Reference Database (HPRD) [28], as one of the most comprehensive protein interaction databases, only covers less than half of human protein-coding genes. Therefore, relying on a single PPI network to infer disease genes will restrict the scope of application of such methods. Meanwhile, there have been a few protein interaction databases constructed and maintained independently. These databases are often quite diverse in coverage and quality, making the selection of a suitable PPI network inevitable. Moreover, although the naïve thinking of combining all available protein interactions into a single large network is straightforward, performance of methods based on such a combined network is questionable [25].\nWith these considerations, we propose a Bayesian regression approach that can be used with either a single PPI network or multiple networks to prioritize candidate genes. 
We adopt a linear model to explain disease similarity using gene proximity, and we solve this model via a Bayesian approach, which yields an analytic form of Bayes factor for measuring the strength of association between a query disease and a candidate gene. We then use Bayes factors as scores to prioritize candidate genes. We show the validity of assumptions of this approach, and we demonstrate the effectiveness of this approach on five PPI networks via large scale leave-one-out cross-validation experiments and comprehensive statistical analysis. We further show the capability of our approach in integrating multiple PPI networks.", "[SUBTITLE] Data sources [SUBSECTION] We propose to infer disease genes using gene proximity profiles derived from protein-protein interaction (PPI) networks, a phenotype similarity profile calculated using text mining technique, and known associations between disease phenotypes and genes extracted from the Online Mendelian Inheritance in Man (OMIM) database.\nThere have been a few PPI networks with diverse coverage and quality. In our study, we adopt five widely-used PPI networks to calculate gene proximity profiles. First, the Human Protein Reference Database (HPRD) contains human protein-protein interactions that are manually extracted from the literature by expert biologists [28]. After removing duplications and self-linked interactions, we extract from release 8 of this database 36,634 interactions between 9,470 human genes. Second, the Biological General Repository for Interaction Datasets (BioGRID) contains protein and genetic interactions of major model organism species [29]. We extract from version 2.0.63 of this database 29,558 interactions between 9,043 human genes. Third, the Biomolecular Interaction Network Database (BIND) contains both high-throughput and manually curated interactions between biological molecules [30]. From this database, we collect 14,955 interactions between 6,089 human genes. Fourth, the IntAct molecular interaction database (IntAct) contains protein-protein interaction derived from literature [31]. From this database, we collect 30,030 interactions between 6,775 human genes. Finally, the Molecular INTeraction database (MINT) contains information about physical interactions between proteins [32]. From this database, we collect 15,902 interactions between 7,200 human proteins. Details about these five PPI networks are given in Table 1.\nSummary of the five protein-protein interaction networks.\nThe phenotype similarity profile, which is obtained from an earlier work of van Driel et al[21], is represented as a matrix of pair-wise similarities between human disease phenotypes. Briefly, van Driel et al analyzed the full-text and clinical synopsis fields of all OMIM records, and used the anatomy and the disease sections of the medical subject headings vocabulary (MeSH) to extract terms from the OMIM records [21]. By doing this, they were able to characterize a phenotype using a feature vector that was composed of standardized and weighted phenotypic feature terms and further calculated a similarity score for a pair of phenotypes as the cosine of the angle of their feature vectors. Finally, they obtain a phenotype similarity profile that contains pair-wise similarity scores for 5,080 OMIM diseases.\nKnown associations between disease phenotypes and genes are extracted from BioMart [33]. For genes in HPRD, we obtain 2,466 associations between 1,590 diseases and 1,440 genes. 
For BioGRID, we obtain 2,166 associations between 1,412 diseases and 1,247 genes. For BIND, we obtain 1,442 associations between 1,016 diseases and 811 genes. For IntAct, we obtain 1,622 associations between 1,094 diseases and 933 genes. For MINT, we obtain 1,231 associations between 889 diseases and 677 genes. We also summarize the above information in Table 1.\nWe propose to infer disease genes using gene proximity profiles derived from protein-protein interaction (PPI) networks, a phenotype similarity profile calculated using text mining technique, and known associations between disease phenotypes and genes extracted from the Online Mendelian Inheritance in Man (OMIM) database.\nThere have been a few PPI networks with diverse coverage and quality. In our study, we adopt five widely-used PPI networks to calculate gene proximity profiles. First, the Human Protein Reference Database (HPRD) contains human protein-protein interactions that are manually extracted from the literature by expert biologists [28]. After removing duplications and self-linked interactions, we extract from release 8 of this database 36,634 interactions between 9,470 human genes. Second, the Biological General Repository for Interaction Datasets (BioGRID) contains protein and genetic interactions of major model organism species [29]. We extract from version 2.0.63 of this database 29,558 interactions between 9,043 human genes. Third, the Biomolecular Interaction Network Database (BIND) contains both high-throughput and manually curated interactions between biological molecules [30]. From this database, we collect 14,955 interactions between 6,089 human genes. Fourth, the IntAct molecular interaction database (IntAct) contains protein-protein interaction derived from literature [31]. From this database, we collect 30,030 interactions between 6,775 human genes. Finally, the Molecular INTeraction database (MINT) contains information about physical interactions between proteins [32]. From this database, we collect 15,902 interactions between 7,200 human proteins. Details about these five PPI networks are given in Table 1.\nSummary of the five protein-protein interaction networks.\nThe phenotype similarity profile, which is obtained from an earlier work of van Driel et al[21], is represented as a matrix of pair-wise similarities between human disease phenotypes. Briefly, van Driel et al analyzed the full-text and clinical synopsis fields of all OMIM records, and used the anatomy and the disease sections of the medical subject headings vocabulary (MeSH) to extract terms from the OMIM records [21]. By doing this, they were able to characterize a phenotype using a feature vector that was composed of standardized and weighted phenotypic feature terms and further calculated a similarity score for a pair of phenotypes as the cosine of the angle of their feature vectors. Finally, they obtain a phenotype similarity profile that contains pair-wise similarity scores for 5,080 OMIM diseases.\nKnown associations between disease phenotypes and genes are extracted from BioMart [33]. For genes in HPRD, we obtain 2,466 associations between 1,590 diseases and 1,440 genes. For BioGRID, we obtain 2,166 associations between 1,412 diseases and 1,247 genes. For BIND, we obtain 1,442 associations between 1,016 diseases and 811 genes. For IntAct, we obtain 1,622 associations between 1,094 diseases and 933 genes. For MINT, we obtain 1,231 associations between 889 diseases and 677 genes. 
We also summarize the above information in Table 1.\n[SUBTITLE] Bayesian linear regression [SUBSECTION] We adopt a linear regression model to explain disease similarities in the phenotype similarity profile using gene similarities in one or more gene proximity profiles, and we solve this regression model via a Bayesian approach [34]. For a clear presentation, we first derive this method using a single gene proximity profile and then extend this model to include multiple profiles.\nA gene proximity profile contains pair-wise similarity measure of every two genes and is calculated as the diffusion kernel of the underlying PPI network. Given a network of n nodes, represented by an adjacency matrix A, we calculate the Laplacian of the network as L = D – A and the diffusion kernel as Z = e-γL, where D is a diagonal matrix containing node degrees, and 0 <γ < 1 a free parameter that controls the magnitude of diffusion. With the kernel Z = (zij)n×n, we define the proximity of two genes i and j as the corresponding element zij in the kernel.\nLet ydd′ denote the similarity score between a query disease d and another disease d′. We define the phenotype similarity vector for disease d as , i.e., the similarities between disease d and all m diseases d1,d2,…,dm in the phenotype similarity profile. Let Zgg′ denote the proximity score between genes g and g′ in the gene proximity profile and G(d) the set of genes known as associated with disease d. We define the proximity between gene g and disease d as the summation of proximity scores between gene g and all genes known as associated with disease d, denoted by . We further define the gene proximity vector for gene g as , i.e., the proximities between gene g and all diseases d1,d2,…,dm in the phenotype similarity profile.\nWe then explain the phenotype similarity vector for disease d using gene proximity vectors of all genes that are associated with the disease via a linear regression model\n,\nwhere y = yd is the response vector, X the design matrix, β the coefficient vector, and the residual vector. For disease d associated with a total of p genes, the design matrix X has p + 1 columns, with the first column being 1s for the purpose of incorporating the intercept.\nWe solve this model using a Bayesian approach [34] and use the resulting Bayes factor to measure the strength of evidence for a candidate association. For the alternative model, we assume that y conditional on X is subject to a normal distribution, as\n,\nwith residuals independent and identically distributed, following normal density with mean 0 and variance σ2. We set conjugate prior distributions for β and σ2, as\n,\nwhere is composed of prior means, and σ2Σ prior variances with being a diagonal matrix. The joint distribution of all random quantities y, β, and σ2 is then given as.\nIntegrating out β and σ2, we obtain the marginal likelihood of y given X as\n,\nwhere and with and .\nOn the other hand, for the null model, where y is independent of X, the marginal likelihood of y can be derived in a similar way, as\n,\nwhere , and .\nThen, the Bayes factor is calculated as the ratio of the marginal likelihood under the alternative and the null hypotheses, respectively, as.\nFollowing literature [34], we will use the parameter setting (as +∞ in calculation) and σi = 1 (for i ≥ 1) throughout this paper, though a grid search for other values of σi shows that the method is quite robust to this parameter. It has been shown that the parameter γ should take a small value [35-37]. 
In our study, we perform a grid search for this parameter and find results are quite robust when 0.1 ≤ γ ≤ 0.3. Therefore we will use γ = 0.2 throughout this paper.\nObviously, a larger Bayes factor indicates a better exhibition of the linear relationship between the disease similarity and the gene proximity. With this understanding, we propose the following schemes to prioritize candidate genes. First, given a query disease and a set of candidate genes, we calculate a Bayes factor for each candidate gene, with the assumption that the gene is the only one associated with the query disease. Then, we rank candidate genes in non-increasing order according to their Bayes factors. This scheme mimics the situation in which we aim at inferring associations between genes and a \"novel\" disease that has yet not been previously studied. Second, for a disease that has been previously studied (and thus already has some genes associated), we can choose to calculate Bayes factors for candidate genes with the inclusion of the genes that are already known to be associated with the disease. This scheme is more suitable for inferring associations between genes and a disease that has been previously studied (and thus has known associated genes).\nIn the case that multiple gene proximity profiles calculated from multiple PPI networks are available, we extend the regression model by incorporating additional gene proximity vectors into the design matrix. Suppose that disease d is associated with p genes, and q gene proximity profiles are available, the design matrix X will have pq + 1 columns, with column 1 for the intercept, columns 2 to p + 1 for the first profile, columns p + 2 to 2p + 1 for the second profile, and so on. With this extension, all the above reasoning remains unchanged.\nWe adopt a linear regression model to explain disease similarities in the phenotype similarity profile using gene similarities in one or more gene proximity profiles, and we solve this regression model via a Bayesian approach [34]. For a clear presentation, we first derive this method using a single gene proximity profile and then extend this model to include multiple profiles.\nA gene proximity profile contains pair-wise similarity measure of every two genes and is calculated as the diffusion kernel of the underlying PPI network. Given a network of n nodes, represented by an adjacency matrix A, we calculate the Laplacian of the network as L = D – A and the diffusion kernel as Z = e-γL, where D is a diagonal matrix containing node degrees, and 0 <γ < 1 a free parameter that controls the magnitude of diffusion. With the kernel Z = (zij)n×n, we define the proximity of two genes i and j as the corresponding element zij in the kernel.\nLet ydd′ denote the similarity score between a query disease d and another disease d′. We define the phenotype similarity vector for disease d as , i.e., the similarities between disease d and all m diseases d1,d2,…,dm in the phenotype similarity profile. Let Zgg′ denote the proximity score between genes g and g′ in the gene proximity profile and G(d) the set of genes known as associated with disease d. We define the proximity between gene g and disease d as the summation of proximity scores between gene g and all genes known as associated with disease d, denoted by . 
We further define the gene proximity vector for gene g as , i.e., the proximities between gene g and all diseases d1,d2,…,dm in the phenotype similarity profile.\nWe then explain the phenotype similarity vector for disease d using gene proximity vectors of all genes that are associated with the disease via a linear regression model\n,\nwhere y = yd is the response vector, X the design matrix, β the coefficient vector, and the residual vector. For disease d associated with a total of p genes, the design matrix X has p + 1 columns, with the first column being 1s for the purpose of incorporating the intercept.\nWe solve this model using a Bayesian approach [34] and use the resulting Bayes factor to measure the strength of evidence for a candidate association. For the alternative model, we assume that y conditional on X is subject to a normal distribution, as\n,\nwith residuals independent and identically distributed, following normal density with mean 0 and variance σ2. We set conjugate prior distributions for β and σ2, as\n,\nwhere is composed of prior means, and σ2Σ prior variances with being a diagonal matrix. The joint distribution of all random quantities y, β, and σ2 is then given as.\nIntegrating out β and σ2, we obtain the marginal likelihood of y given X as\n,\nwhere and with and .\nOn the other hand, for the null model, where y is independent of X, the marginal likelihood of y can be derived in a similar way, as\n,\nwhere , and .\nThen, the Bayes factor is calculated as the ratio of the marginal likelihood under the alternative and the null hypotheses, respectively, as.\nFollowing literature [34], we will use the parameter setting (as +∞ in calculation) and σi = 1 (for i ≥ 1) throughout this paper, though a grid search for other values of σi shows that the method is quite robust to this parameter. It has been shown that the parameter γ should take a small value [35-37]. In our study, we perform a grid search for this parameter and find results are quite robust when 0.1 ≤ γ ≤ 0.3. Therefore we will use γ = 0.2 throughout this paper.\nObviously, a larger Bayes factor indicates a better exhibition of the linear relationship between the disease similarity and the gene proximity. With this understanding, we propose the following schemes to prioritize candidate genes. First, given a query disease and a set of candidate genes, we calculate a Bayes factor for each candidate gene, with the assumption that the gene is the only one associated with the query disease. Then, we rank candidate genes in non-increasing order according to their Bayes factors. This scheme mimics the situation in which we aim at inferring associations between genes and a \"novel\" disease that has yet not been previously studied. Second, for a disease that has been previously studied (and thus already has some genes associated), we can choose to calculate Bayes factors for candidate genes with the inclusion of the genes that are already known to be associated with the disease. This scheme is more suitable for inferring associations between genes and a disease that has been previously studied (and thus has known associated genes).\nIn the case that multiple gene proximity profiles calculated from multiple PPI networks are available, we extend the regression model by incorporating additional gene proximity vectors into the design matrix. 
Validation methods and evaluation criteria
We adopt two large-scale leave-one-out cross-validation experiments to test how well the Bayesian regression approach performs in recovering known associations between diseases and genes. In the validation against random controls, we prioritize genes that are known to be associated with diseases against randomly selected genes. In each run of the validation, we select an association between a gene and a disease, assume that the association is unknown, and prioritize the gene against a set of 99 randomly selected control genes. In the validation against simulated linkage intervals, we simulate the real situation of identifying disease genes by prioritizing genes that are known to be associated with diseases against genes located around the disease genes. In each run of the validation, we select an association between a gene and a disease, assume that the association is unknown, and prioritize the gene against the set of control genes located within 10 Mbp upstream and downstream of this gene. In both experiments, we adopt the first scheme, which mimics the situation of inferring associations between genes and novel diseases, for the purpose of achieving a stricter validation.
In each of the above leave-one-out cross-validation experiments, we repeat the validation run for every known association between a disease and a gene, obtaining a number of ranking lists. We further normalize the ranks by dividing them by the total number of candidate genes in the ranking list to obtain rank ratios, and we derive two criteria to measure the performance of a prioritization method. The first criterion is the mean rank ratio, which is simply the average of the rank ratios over all disease genes in a cross-validation experiment. This criterion provides a summary of the ranks of all genes that are known to be associated with diseases; the smaller the mean rank ratio, the better the method. The second criterion is the AUC, the area under the receiver operating characteristic (ROC) curve. Given a list of rank ratios and a predefined threshold, we define the sensitivity as the percentage of disease genes ranked above the threshold and the specificity as the percentage of control genes ranked below the threshold. Varying the threshold, we plot a ROC curve, which shows the relationship between sensitivity and 1 − specificity. Calculating the area under the ROC curve, we obtain the AUC score, which provides an overall measure of the performance of a prioritization method.
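A schematic sketch of the validation-against-random-controls loop is given below; the data structures, the scoring callback, and the use of scikit-learn's roc_auc_score to compute the AUC from rank ratios are illustrative choices rather than details taken from the original implementation.

import numpy as np
from sklearn.metrics import roc_auc_score

def leave_one_out_validation(associations, candidate_pool, score_fn, n_controls=99, seed=0):
    # For every known (disease, gene) association, hold the gene out, rank it against
    # n_controls randomly selected control genes using score_fn(disease, gene), and
    # record rank ratios for disease genes (label 1) and control genes (label 0).
    rng = np.random.default_rng(seed)
    ratios, labels = [], []
    for disease, true_gene in associations:
        controls = rng.choice([g for g in candidate_pool if g != true_gene],
                              size=n_controls, replace=False)
        genes = [true_gene] + list(controls)
        scores = np.array([score_fn(disease, g) for g in genes])
        order = np.argsort(-scores)                      # best score first
        ranks = np.empty(len(genes))
        ranks[order] = np.arange(1, len(genes) + 1)
        ratios.extend(ranks / len(genes))
        labels.extend([1] + [0] * n_controls)
    ratios, labels = np.array(ratios), np.array(labels)
    mean_rank_ratio = ratios[labels == 1].mean()
    auc = roc_auc_score(labels, -ratios)                 # smaller rank ratio = better rank
    return mean_rank_ratio, auc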
We propose to infer disease genes using gene proximity profiles derived from protein-protein interaction (PPI) networks, a phenotype similarity profile calculated using text-mining techniques, and known associations between disease phenotypes and genes extracted from the Online Mendelian Inheritance in Man (OMIM) database.
Several PPI networks with diverse coverage and quality are available. In our study, we adopt five widely used PPI networks to calculate gene proximity profiles. First, the Human Protein Reference Database (HPRD) contains human protein-protein interactions that are manually extracted from the literature by expert biologists [28]. After removing duplications and self-linked interactions, we extract from release 8 of this database 36,634 interactions between 9,470 human genes. Second, the Biological General Repository for Interaction Datasets (BioGRID) contains protein and genetic interactions of major model organism species [29]. We extract from version 2.0.63 of this database 29,558 interactions between 9,043 human genes. Third, the Biomolecular Interaction Network Database (BIND) contains both high-throughput and manually curated interactions between biological molecules [30]. From this database, we collect 14,955 interactions between 6,089 human genes. Fourth, the IntAct molecular interaction database (IntAct) contains protein-protein interactions derived from the literature [31]. From this database, we collect 30,030 interactions between 6,775 human genes.
Finally, the Molecular INTeraction database (MINT) contains information about physical interactions between proteins [32]. From this database, we collect 15,902 interactions between 7,200 human proteins. Details about these five PPI networks are given in Table 1.
Table 1. Summary of the five protein-protein interaction networks.
The phenotype similarity profile, which is obtained from earlier work by van Driel et al. [21], is represented as a matrix of pairwise similarities between human disease phenotypes. Briefly, van Driel et al. analyzed the full-text and clinical synopsis fields of all OMIM records and used the anatomy and disease sections of the Medical Subject Headings vocabulary (MeSH) to extract terms from the OMIM records [21]. By doing this, they were able to characterize each phenotype by a feature vector composed of standardized and weighted phenotypic feature terms, and they calculated a similarity score for a pair of phenotypes as the cosine of the angle between their feature vectors. The resulting phenotype similarity profile contains pairwise similarity scores for 5,080 OMIM diseases.
Known associations between disease phenotypes and genes are extracted from BioMart [33]. For genes in HPRD, we obtain 2,466 associations between 1,590 diseases and 1,440 genes. For BioGRID, we obtain 2,166 associations between 1,412 diseases and 1,247 genes. For BIND, we obtain 1,442 associations between 1,016 diseases and 811 genes. For IntAct, we obtain 1,622 associations between 1,094 diseases and 933 genes. For MINT, we obtain 1,231 associations between 889 diseases and 677 genes. This information is also summarized in Table 1.
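A network such as the ones described above could be assembled along the following lines; the file name, the two-column layout, and the use of pandas and NetworkX are hypothetical and serve only to illustrate the duplicate- and self-link removal step.

import pandas as pd
import networkx as nx

def load_ppi_network(path):
    # Build an undirected PPI network from a two-column interaction file,
    # dropping self-interactions; duplicated pairs collapse into a single edge.
    pairs = pd.read_csv(path, sep="\t", names=["gene_a", "gene_b"])
    pairs = pairs[pairs.gene_a != pairs.gene_b]
    graph = nx.Graph()
    graph.add_edges_from(pairs.itertuples(index=False, name=None))
    return graph

# g = load_ppi_network("hprd_interactions.tsv")   # hypothetical path
# print(g.number_of_nodes(), g.number_of_edges())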
Gene proximity implying phenotype similarity
The proposed approach for inferring disease genes is based on the assumption that phenotypically similar diseases are caused by functionally related genes that are usually proximal in a PPI network. Moreover, we assume the existence of a linear relationship between the similarities of diseases and the proximities of the genes that are associated with the diseases.
In order to validate this assumption, we compile from HPRD 2,466 associations between 1,590 diseases and 1,440 genes, calculate Bayes factors for these disease genes, and run a Wilcoxon signed-rank test to check whether the resulting Bayes factors are significantly greater than 1 (the random case). The resulting p-value is smaller than 2.2E-16, indicating that the similarities of diseases do have a linear relationship with the proximities of disease genes.
To further substantiate this point, we perform a series of permutations of the disease-disease, disease-gene, and gene-gene relationships. First, we break the disease-disease relationships by permuting the phenotype similarity profile. Second, we break the disease-gene relationships in two ways: (1) permuting disease-gene associations and (2) replacing disease genes in known disease-gene associations with randomly selected genes. Third, we break the gene-gene relationships by permuting connections in the underlying protein-protein interaction network while keeping node degrees, and recalculating the diffusion kernel. For each of these permutations, we calculate Bayes factors of the disease genes and present the results in Figure 1, from which we can clearly see that the median of the Bayes factors based on the original data is much higher than those based on the permuted relationships.
Figure 1. Bayes factors of the original and permuted data. “original”, “permuted PPS”, “permuted seed”, “random seed”, and “permuted PPI” denote the results obtained using the original data, permuting the phenotype similarity profile, permuting disease-gene associations, replacing disease genes in disease-gene associations with randomly selected genes, and permuting connections in the protein-protein interaction network, respectively.
We also perform similar studies using the data extracted from BioGRID, BIND, IntAct, and MINT and obtain results similar to those for HPRD. From these comprehensive studies, we conclude that similarities between diseases can be explained using network proximities of the genes that are associated with the diseases. In other words, gene proximity implies phenotype similarity.
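The signed-rank test and the degree-preserving network permutation can be sketched as follows; the Bayes-factor values shown are hypothetical, and the NetworkX-based edge swapping is an illustrative way to permute connections while preserving node degrees, not the original code.

import numpy as np
import networkx as nx
from scipy.stats import wilcoxon

# Test whether Bayes factors of known disease genes are significantly greater than 1.
# `bfs` would hold one Bayes factor per known disease-gene association (hypothetical values here).
bfs = np.array([3.2, 1.8, 0.9, 5.4, 2.1, 1.3, 7.7, 2.9])
stat, p_value = wilcoxon(bfs - 1.0, alternative="greater")
print(p_value)

def permute_network(graph, seed=0):
    # Degree-preserving permutation of a PPI network, used to break gene-gene relationships
    # before recomputing the diffusion kernel.
    g = graph.copy()
    nx.double_edge_swap(g, nswap=10 * g.number_of_edges(), max_tries=10**6, seed=seed)
    return g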
Prioritization with individual PPI networks
We design a series of large-scale leave-one-out cross-validation experiments to show the validity and effectiveness of the Bayesian regression approach on individual PPI networks. As described in the Methods section, in each run of the validation procedure we prioritize candidate genes according to their Bayes factors against two control sets, random controls and linkage intervals, with performance evaluated by mean rank ratios and AUC scores. The results are shown in Table 2 and Figure 2.
Table 2. Performance of the Bayesian regression approach on individual data sources. Results are obtained using the diffusion kernel (γ = 0.2) with Bayesian priors μ = 0 and σ_i = 1 (for i ≥ 1). Results for the validation against random controls are the mean (standard deviation) of 10 independent runs.
Figure 2. ROC curves of the five PPI networks on random controls (A) and linkage intervals (B). AUC scores for the validation against random controls are averages of 10 independent runs.
From Table 2, we see that the mean rank ratios obtained using the five PPI networks are all below 0.17 and the AUC scores are all above 0.83, suggesting the effectiveness of the Bayesian regression approach. The best performance is obtained using HPRD, for which the mean rank ratios against random controls and linkage intervals are 0.1349 and 0.1353, respectively, and the AUC scores are 0.8738 and 0.8720, respectively. From Figure 2, we see that the ROC curves for the HPRD data set lie above those of the other data sets, suggesting that the performance on HPRD is superior to that on the others. To understand this observation, we perform one-sided Wilcoxon rank-sum tests of the hypothesis that the Bayes factors of disease genes for the HPRD data set are greater than those for the other data sets. The results show that the Bayes factors of disease genes for the HPRD data set are indeed greater than those for BioGRID (p-value = 4.7E-2), BIND (p-value = 2.6E-3), IntAct (p-value = 1.9E-5), and MINT (p-value = 2.5E-5). We therefore conjecture that the performance of the proposed method depends on how strongly the linear relationship between disease similarity and gene proximity is exhibited in the data.
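The one-sided rank-sum comparison is available in SciPy as the Mann-Whitney U test; the Bayes-factor values below are hypothetical placeholders for the per-network results.

from scipy.stats import mannwhitneyu

# One-sided test of whether disease-gene Bayes factors from one network (e.g. HPRD)
# tend to be larger than those from another (values here are hypothetical).
bf_hprd    = [4.1, 2.7, 6.3, 1.9, 3.3, 5.0]
bf_biogrid = [2.2, 1.4, 3.1, 1.1, 2.5, 2.0]
stat, p = mannwhitneyu(bf_hprd, bf_biogrid, alternative="greater")
print(p)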
To further demonstrate the effectiveness of the proposed approach, we repeat the same leave-one-out cross-validation experiments using the existing CIPHER approach [25], which relies on Pearson's correlation coefficient between the disease similarity vector and the gene proximity vector to prioritize candidate genes. We compare the results of the two approaches in Figure 3, from which we see clearly that the Bayesian regression approach in general achieves lower mean rank ratios and higher AUC scores on all five data sets. For example, in the cross-validation against linkage intervals using the HPRD data set, the CIPHER approach achieves a mean rank ratio of 0.1746 and an AUC score of 0.8313, whereas the Bayesian approach achieves a mean rank ratio of 0.1353 and an AUC score of 0.8720, a clear improvement over the CIPHER approach. Note that the CIPHER method calculates the gene proximity matrix by applying a Gaussian kernel to the shortest-path distance matrix of the underlying network. We also try using the diffusion kernel matrix as the gene proximity matrix for CIPHER and find that the difference is negligible. These results strongly suggest that the Bayesian regression approach is superior to the CIPHER approach in prioritizing candidate genes.
Figure 3. Comparison with the CIPHER approach and the ordinary regression method. Subplots A and C illustrate mean rank ratios and AUC scores against random controls, respectively. Subplots B and D illustrate mean rank ratios and AUC scores against linkage intervals, respectively. Results for the validation against random controls are averages of 10 independent runs (variances not shown).
It is also of interest to compare the Bayesian approach with the ordinary linear regression method. For this purpose, we implement another method that relies on R², the coefficient of determination, to prioritize candidate genes. We repeat the leave-one-out cross-validation experiments for this method and present the results in Figure 3, from which we see clearly that the Bayesian regression approach in general achieves higher performance than the ordinary regression method in terms of both mean rank ratios and AUC scores on all five data sets.
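A minimal sketch of the two baseline scores discussed here, the CIPHER-style Pearson correlation and the ordinary-regression coefficient of determination, is shown below; the function names and the way candidates would subsequently be ranked are illustrative assumptions, not the original implementations.

import numpy as np
from scipy.stats import pearsonr

def cipher_score(y_d, x_g):
    # CIPHER-style concordance: Pearson correlation between the disease similarity
    # vector y_d and the candidate gene's proximity vector x_g.
    return pearsonr(y_d, x_g)[0]

def r_squared_score(y_d, x_g):
    # Coefficient of determination from an ordinary least-squares fit of y_d on x_g.
    X = np.column_stack([np.ones(len(y_d)), x_g])
    beta, *_ = np.linalg.lstsq(X, y_d, rcond=None)
    residuals = y_d - X @ beta
    return 1.0 - residuals.var() / y_d.var()

# Candidates would then be ranked by cipher_score or r_squared_score in place of the Bayes factor.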
Prioritization with the integration of multiple PPI networks
The coverage of a single PPI network is in general not high; even the largest network, HPRD, covers only 9,470 genes, less than half of the known human protein-coding genes.
We therefore propose to use the Bayesian regression approach to integrate multiple PPI networks, for the purpose of improving coverage.
By taking the union of the genes in the individual PPI networks, we obtain 15,644 human genes. Focusing on these genes, we extract from BioMart 2,708 associations between 1,752 diseases and 1,621 genes. With this data set, we repeat the leave-one-out cross-validation experiments using the individual gene proximity profiles and present the results in Figure 4. Note that in this procedure we set the proximity of two genes to zero (the minimum proximity) if either of the two genes is absent from the underlying network. We observe that the performance of the individual proximity profiles drops dramatically on this larger data set (in comparison with Table 2), simply because each PPI network covers only a fraction of the genes, and the scheme for handling missing data (setting proximities to zero) yields small Bayes factors for genes that are absent from the network.
Figure 4. Performance of the integration method. Subplot A illustrates mean rank ratios for the integration method and the individual PPI networks. Subplot B illustrates AUC scores for the integration method and the individual PPI networks. Results for the validation against random controls are averages of 10 independent runs (variances not shown).
We then use the Bayesian regression approach to integrate all five PPI networks by extending the design matrix to include gene proximities from multiple profiles. We repeat the leave-one-out cross-validation experiments and present the results in Figure 4, from which we clearly observe the better performance of the proposed approach with the integrated use of multiple PPI networks. The mean rank ratios for random controls and linkage intervals are 0.1385 and 0.1380, respectively, with AUC scores of 0.8702 and 0.8692, respectively. In contrast, combining all genes and interactions in the individual PPI networks into one large network (15,644 nodes and 77,332 edges) and then applying CIPHER yields mean rank ratios of only 0.1850 and 0.1876 (AUC scores 0.8230 and 0.8180) for random controls and linkage intervals, respectively. Directly applying the Bayesian regression approach to the combined network yields mean rank ratios of 0.1462 and 0.1469 (AUC scores 0.8624 and 0.8601) for random controls and linkage intervals, respectively. Furthermore, we also extract from the combined network the interactions that exist in at least two individual PPI networks, obtaining a high-confidence network (8,463 nodes and 28,617 edges). Focusing on the genes in this network, we extract from BioMart 2,219 associations between 1,441 diseases and 1,271 genes. Directly applying the Bayesian regression approach to this high-confidence network yields mean rank ratios of 0.1373 and 0.1380 (AUC scores 0.8717 and 0.8694) for random controls and linkage intervals, respectively. Although these results are slightly better than those of the Bayesian integration method, the coverage of this high-confidence network is much lower than that of the Bayesian integration approach. From these results, we conclude that the Bayesian regression approach is effective in integrating multiple PPI networks for prioritizing disease genes.
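A sketch of how the extended design matrix could be assembled from several proximity profiles, with absent genes assigned zero proximity, is given below; the data layout (a dictionary of per-network proximity maps) and the names are assumptions of this sketch.

import numpy as np

def build_design_matrix(disease_genes, diseases, proximity_profiles):
    # Design matrix with p*q + 1 columns: an intercept plus one proximity vector per
    # (associated gene, proximity profile) pair. `proximity_profiles` maps a network name
    # to a dict {(gene, disease): proximity}; pairs absent from a network get proximity 0.
    n = len(diseases)
    columns = [np.ones(n)]
    for profile in proximity_profiles.values():
        for gene in disease_genes:
            columns.append(np.array([profile.get((gene, d), 0.0) for d in diseases]))
    return np.column_stack(columns)

# X = build_design_matrix(["GENE_A", "GENE_B"], all_diseases,
#                         {"HPRD": prox_hprd, "BioGRID": prox_biogrid})   # hypothetical inputs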
It is of interest to see how much the individual PPI networks contribute to the integration approach. For this purpose, we repeat the cross-validation experiments integrating every combination of four data sources and examine how the exclusion of the remaining network affects the prioritization results. We find that all data sources contribute positively to the integration approach, because the mean rank ratios increase and the AUC scores drop when any of the five data sources is excluded. The results also suggest the following order of the data sources according to their contributions (differences in cross-validation results): HPRD > BIND > BioGRID > IntAct > MINT. It is not surprising that HPRD contributes most to the integration method, because HPRD has the largest coverage and the highest performance in the previous validations. It is also not surprising that MINT has the smallest contribution and IntAct the second smallest, because both the coverage and the performance of these two data sources are modest. However, it is not obvious why BIND contributes more than BioGRID, because individually BioGRID has higher coverage and performance than BIND. To understand this observation, we analyze the relationships among the five data sources and find that the Bayes factors calculated from HPRD and BioGRID are highly correlated (Pearson's correlation coefficient = 0.9770). Therefore, removing BioGRID when HPRD is present does not significantly affect the performance of the integration method.
Finally, we study whether the integration approach is biased toward well-characterized genes, that is, whether genes appearing in more data sources tend to receive higher ranks. We group all genes in a validation procedure into 10 categories according to their ranks, such that the i-th category contains genes ranked in ((i − 1)×10%, i×10%]. Within each category, we group genes according to the number of PPI networks containing them, such that the j-th group contains genes appearing in exactly j networks. We then perform pairwise Pearson's chi-squared tests against the alternative hypothesis that the frequencies of genes appearing in different numbers of PPI networks differ across rank categories. For the cross-validation experiment against linkage intervals, the minimum p-value produced by this series of chi-squared tests is 0.0909, so we cannot reject the null hypothesis. Similar results are obtained for the cross-validation experiment against random controls. We therefore conclude that the integration approach is not biased toward well-characterized genes; in other words, genes appearing in more data sources do not tend to receive higher ranks.
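For illustration, the following sketch performs a single overall chi-squared independence test between rank deciles and the number of source networks per gene (the analysis above uses a series of pair-wise tests instead); the per-gene rank ratios and source counts are assumed to be available from the validation runs.

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def bias_test(rank_ratios, n_sources):
    # Cross-tabulate rank deciles against the number of PPI networks containing each gene
    # and test whether the two groupings are independent.
    deciles = np.ceil(np.asarray(rank_ratios) * 10).clip(1, 10).astype(int)
    table = pd.crosstab(deciles, np.asarray(n_sources))
    chi2, p, dof, _ = chi2_contingency(table)
    return p

# A large p-value (e.g. > 0.05) means independence cannot be rejected, i.e. no evidence
# that genes covered by more networks receive systematically better ranks.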
In this paper, we propose a Bayesian regression approach that relies on the linear relationship between disease similarity and gene proximity to prioritize candidate genes. We show that gene proximity, defined via the diffusion kernel of a PPI network, implies disease similarity, and we perform a series of leave-one-out cross-validation experiments to demonstrate the effectiveness of the proposed approach. We show that the Bayesian regression approach achieves much higher performance than the existing CIPHER approach and the ordinary regression method. We also use the proposed approach to integrate multiple PPI networks, achieving higher coverage while maintaining superior performance. Our contributions in this paper therefore lie in the following points: (1) systematic validation of the assumption that gene proximity in a PPI network implies disease similarity; (2) a Bayesian regression approach that greatly improves the performance of disease gene prioritization in comparison with the previous CIPHER approach; (3) a detailed analysis of the effectiveness of five widely used PPI networks in prioritizing disease genes; and (4) a simple yet effective method to integrate multiple PPI networks into a single prioritization model.
Certainly, our approach can be studied further from the following aspects. First, the main reason for using conjugate priors in the Bayesian regression model is to obtain analytic solutions and thus alleviate the computational burden of calculating Bayes factors. Although this formulation is quite successful, the specification of a prior is intrinsically complicated and subjective. The main consideration here is that the posterior mean and variance should not depend on the units in which the disease similarities are measured and should also be invariant to shifts of the response variable. One could therefore consider using the Jeffreys prior instead of the conjugate prior. In that case, a Markov chain Monte Carlo (MCMC) approach would be necessary for calculating the marginal likelihood, and the computational burden could be high.
Second, it is conceptually straightforward to extend the Bayesian regression model to infer the interactive effects of multiple genes on a query complex disease. For example, we can enumerate pairwise combinations of all candidate genes and calculate a Bayes factor for each combination to infer the interactive effect of two genes on a query disease. Nevertheless, the challenge lies in computational feasibility, because the number of combinations of even a small number of candidate genes is large.
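As an illustration of this pairwise extension (a future direction, not part of the present method), candidate pairs could be enumerated and scored as sketched below, assuming a Bayes-factor routine that accepts a multi-column predictor matrix (a direct generalization of the single-gene sketch given earlier); the names are hypothetical.

import numpy as np
from itertools import combinations

def rank_gene_pairs(y_d, candidate_proximity, bayes_factor_fn):
    # Score every pair of candidate genes jointly: the predictor matrix gets one proximity
    # column per gene in the pair, and pairs are ranked by their Bayes factor against the null.
    scored = []
    for g1, g2 in combinations(candidate_proximity, 2):
        X_pair = np.column_stack([candidate_proximity[g1], candidate_proximity[g2]])
        scored.append(((g1, g2), bayes_factor_fn(y_d, X_pair)))
    return sorted(scored, key=lambda item: item[1], reverse=True)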
Nevertheless, the challenge will come from computational feasibility, because the number of combinations of even a small number of candidate genes will be large.\nThird, the means of dealing with missing data (setting to zeros) in the proposed approach, though simple, is rather naïve. When more data sources are integrated, the overlap of genes between data sources will typically be lower, and thus a more effective method for dealing with missing data is desired. One possible solution is to impute missing data using the mean or median of observed data. Another possible solution is to rank candidate genes using data sources individually and then aggregate the ranks, as is done in the existing literature [6]. Both methods have their own advantages and disadvantages, and a comprehensive comparison study is necessary in order to obtain a detailed understanding of these possible solutions.\nFinally, a large amount of large-scale data has been produced by high-throughput techniques. To mention a few, sequences of most human protein-coding genes are known; the mapping of human genes to Gene Ontology (GO) is available; expression profiles for most human genes across various conditions have been obtained. How to extend the proposed approach to integrate these data sources will be one of the directions of our future work.", "No conflicts of interest declared.", "WZ implemented the method, collected the results, and wrote the manuscript. FS designed the research and wrote the manuscript. RJ designed the research and wrote the manuscript. All authors read and approved the final manuscript." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null ]
[]
An efficient algorithmic approach for mass spectrometry-based disulfide connectivity determination using multi-ion analysis.
21342541
Determining the disulfide (S-S) bond pattern in a protein is often crucial for understanding its structure and function. In recent research, mass spectrometry (MS)-based analysis has been applied to this problem following protein digestion under both partial reduction and non-reduction conditions. However, this paradigm still awaits solutions to certain algorithmic problems, fundamental amongst which is the efficient matching of an exponentially growing set of putative S-S bonded structural alternatives to the large amounts of experimental spectrometric data. Current methods circumvent this challenge primarily through simplifications, such as by assuming only the occurrence of certain ion types (b-ions and y-ions) that predominate in the more popular dissociation methods, such as collision-induced dissociation (CID). Unfortunately, this can adversely impact the quality of results.
BACKGROUND
We present an algorithmic approach to this problem that can, with high computational efficiency, analyze multiple ion types (a, b, bo, b*, c, x, y, yo, y*, and z) and deal with complex bonding topologies, such as inter/intra bonding involving more than two peptides. The proposed approach combines an approximation algorithm-based search formulation with data-driven parameter estimation. This formulation considers only those regions of the search space where the correct solution resides with a high likelihood. Putative disulfide bonds thus obtained are finally combined in a globally consistent pattern to yield the overall disulfide bonding topology of the molecule. Additionally, each bond is associated with a confidence score, which aids in interpretation and assimilation of the results.
METHOD
The method was tested on nine different eukaryotic Glycosyltransferases possessing disulfide bonding topologies of varying complexity. Its performance was found to be characterized by high efficiency (in terms of time and the fraction of search space considered), sensitivity, specificity, and accuracy. The method was also compared with other techniques at the state-of-the-art. It was found to perform as well or better than the competing techniques. An implementation is available at: http://tintin.sfsu.edu/~whemurad/disulfidebond.
RESULTS
This research addresses some of the significant challenges in MS-based disulfide bond determination. To the best of our knowledge, this is the first algorithmic work that can consider multiple ion types in this problem setting while simultaneously ensuring polynomial time complexity and high accuracy of results.
CONCLUSIONS
[ "Algorithms", "Computational Biology", "Disulfides", "Glycosyltransferases", "Ions", "Mass Spectrometry", "Proteins" ]
3044266
null
null
Methods
We start the description of our method by providing, in Table 1, the key abbreviations used in the ensuing description and their respective definitions. In the first stage of the method, an Initial Match (IM) is said to be obtained when the difference between the detected mass of a targeted ion from the PMS and the calculated mass of a possible disulfide-bonded peptide structure from the DMS is found to be less than a threshold TIM. The second stage validates (or rejects) the initial matches. For each Initial Match, the validation occurs by searching for matches between product ions from the TMS and the theoretical spectra FMS. A Validation Match (VM) is said to occur when the difference between a precursor ion fragment mass from TMS and a disulfide-bonded fragment structure mass from FMS falls below a validation match threshold TVM. Abbreviations and their definitions Unfortunately, the sizes of both FMS and DMS grow exponentially. For a disulfide-bonded peptide structure consisting of k peptides, considering that there are f different fragment ion types possible, up to fk types of fragment arrangements may occur in the FMS. If the ith fragment ion consists of pi amino acid residues, then the complexity to compute the entire FMS for a disulfide-bonded peptide structure is using a brute-force approach. The DMS also grows exponentially. To understand this, let P = {p1, p2, …, pk} be the list of cysteine-containing peptides in a polypeptide chain. Further, let C = {c1, c2, …, ci} be the list of the number of cysteines per cysteine-containing peptide pi. If is the total number of cysteines in a protein, the number of possible disulfide connectivity patterns (DMS size) is [1,14]: . [SUBTITLE] The subset-sum formulation: towards polynomial-time matching [SUBSECTION] Given the growth characteristics of the DMS and the FMS, an exhaustive search-and-match strategy is clearly infeasible in the general case. This is especially true if multiple ion types are considered. Indexing [11,12] and filtering [15] are two possible approaches that have been considered for ameliorating this problem. In this paper we explore an alternative strategy that is based on the key insight that the entire search space (DMS or FMS) does not need to be generated to determine the matches. That is, we only want to generate the few disulfide bonded peptides whose mass is close to the (given) experimental spectra rather than generate all possible peptide combinations and subsequently testing and discarding most of these. This insight allows us to re-cast the DMS and FMS generation as instances of the subset-sum problem [16]. Recall, that given the pair (S, t), where S is a set of positive integers and t ∈ Z+, the subset-sum problem asks whether there exists a subset of S that adds up to t. While the subset-sum problem is itself NP-Complete, it can be solved using approximation strategies to obtain near-optimal solutions, in polynomial-time [16]. Given the growth characteristics of the DMS and the FMS, an exhaustive search-and-match strategy is clearly infeasible in the general case. This is especially true if multiple ion types are considered. Indexing [11,12] and filtering [15] are two possible approaches that have been considered for ameliorating this problem. In this paper we explore an alternative strategy that is based on the key insight that the entire search space (DMS or FMS) does not need to be generated to determine the matches. 
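The DMS-size expression itself did not survive extraction above, so it is not reproduced here. As a hedged illustration of why exhaustive enumeration is infeasible, the snippet below counts the complete pairings of an even number of bonded cysteines, (n − 1)!!, which grows factorially; this is a generic pairing count, not necessarily the exact expression of references [1,14].

```python
from math import factorial

def perfect_matchings(n_cysteines):
    """Number of ways to pair up an even number of bonded cysteines:
    (n - 1)!! = n! / (2**(n/2) * (n/2)!).  Illustrates how quickly the
    space of disulfide connectivity patterns grows."""
    if n_cysteines % 2:
        raise ValueError("need an even number of bonded cysteines")
    k = n_cysteines // 2
    return factorial(n_cysteines) // (2 ** k * factorial(k))

for n in (4, 6, 8, 10, 12):
    print(n, perfect_matchings(n))   # 3, 15, 105, 945, 10395
```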
That is, we only want to generate the few disulfide bonded peptides whose mass is close to the (given) experimental spectra rather than generate all possible peptide combinations and subsequently testing and discarding most of these. This insight allows us to re-cast the DMS and FMS generation as instances of the subset-sum problem [16]. Recall, that given the pair (S, t), where S is a set of positive integers and t ∈ Z+, the subset-sum problem asks whether there exists a subset of S that adds up to t. While the subset-sum problem is itself NP-Complete, it can be solved using approximation strategies to obtain near-optimal solutions, in polynomial-time [16]. [SUBTITLE] Polynomial time DMS mass list construction [SUBSECTION] Our strategy lies in obtaining an approximate solution to the subset-sum problem by trimming as many elements from DMS as possible based on a parameter ε. To trim the DMS set by ε means to remove as many elements from DMS as possible such that if DMS* is the resultant trimmed set, then for every element DMSi removed from DMS, there will remain an element DMSi* in DMS* which is “sufficiently” close in terms of its mass to the deleted element DMSi. Specifically,(1) The approximation algorithm for creating the partial DMS is described by the APPROX-DMS and TRIM routines (Figure 4). APPROX-DMS takes the following parameters: (1) a sorted list of cysteine-containing peptides mass values (CCP), (2) a target mass value from the PMS list (PMSval), (3) the trimming parameter ε, and (4) the Initial Match threshold (TIM). In lines 2-8 of Figure 4, all the variables and data structures are initialized. In lines 9-11, the theoretical disulfide-bonded peptide structures are formed and stored in a temporary set called TempSet. Line 10 excludes values greater than the PMSval plus a constant corresponding to the Initial Match threshold. The rationale behind this threshold is explained in the following section. Line 12 increments the DMS by invoking the routine MERGE, which returns a sorted set formed by merging the two sorted input sets DMS and TempSet, with duplicated values removed. In line 13, the TRIM routine is called to shorten the DMS set. Lines 14-15 examine if the largest mass value in the constructed DMS set is sufficiently close to the targeted mass PMSval. If so, an Initial Match occurs. Pseudo code for APROX-DMS and TRIM routines Table 2 presents an example showing the effectiveness of the APROX-DMS. In this specific case, 37.5% of the entire search space (all feasible combinations of cysteine-containing peptides) was successfully trimmed, while ensuring that the correct IM was not missed. Another example illustrating the action of APPROX-DMS on the Beta-LG protein is available as supplemental information (see Additional File 1). Running APROX-DMS on the ST8SiaIV C142-C292bond CCP: the mass values of all cysteine-containing peptides. PMSval: a disulfide-bonded precursor ion mass. TrimSet: all the disulfide-bonded structures trimmed from the set of feasible combinations of cysteine-containing peptides. For this example, 37.5% of the structures were trimmed and the correct IM was found. The complexity of both routines MERGE and TRIM is O(|DMS|+|TempSet|) and O(|DMS|), respectively. Further, for any fixed ε > 0, our algorithm is a (1 + ε)-approximation scheme. That is, for any fixed ε > 0, the algorithm runs in polynomial time. 
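To make the merge-and-trim idea concrete, here is a minimal sketch in the style of the classic approximate subset-sum scheme that APPROX-DMS and TRIM follow: candidate structure masses are grown one peptide at a time, trimmed so that every removed mass m leaves behind a survivor m* with m/(1 + ε) ≤ m* ≤ m, and an Initial Match is reported when the largest surviving mass falls within TIM of the precursor mass. The mass bookkeeping for the S-S linkage and the exact steps of Figure 4 are omitted, and the peptide masses below are invented for illustration.

```python
def trim(sorted_masses, eps):
    """Keep a value only if it exceeds the last kept value by a factor
    (1 + eps); every removed mass then has a close-by survivor."""
    kept = [sorted_masses[0]]
    for m in sorted_masses[1:]:
        if m > kept[-1] * (1.0 + eps):
            kept.append(m)
    return kept

def approx_dms(ccp_masses, pms_val, eps, t_im):
    """Grow candidate disulfide-bonded structure masses one cysteine-containing
    peptide at a time (merge), trim after every step, and report an Initial
    Match if the largest surviving mass lands within t_im of the precursor."""
    dms = [0.0]
    for ccp in sorted(ccp_masses):
        extended = [m + ccp for m in dms if m + ccp <= pms_val + t_im]
        dms = trim(sorted(set(dms + extended)), eps)
    best = max((m for m in dms if m > 0), default=None)
    return (best is not None and abs(best - pms_val) <= t_im), best

# Toy example: the precursor corresponds to two of the three peptides bonded together
# (the small mass shift from S-S bond formation is ignored in this sketch).
peptides = [1254.61, 911.45, 1530.77]
precursor = 1254.61 + 911.45
print(approx_dms(peptides, precursor, eps=1e-4, t_im=1.0))  # (True, 2166.06)
```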
The proof of the polynomial time complexity of APPROX-DMS can be obtained by direct analogy to the proof of the polynomial time complexity of the subset sum approximation algorithm from [16] and is outlined in Appendix A. Our strategy lies in obtaining an approximate solution to the subset-sum problem by trimming as many elements from DMS as possible based on a parameter ε. To trim the DMS set by ε means to remove as many elements from DMS as possible such that if DMS* is the resultant trimmed set, then for every element DMSi removed from DMS, there will remain an element DMSi* in DMS* which is “sufficiently” close in terms of its mass to the deleted element DMSi. Specifically,(1) The approximation algorithm for creating the partial DMS is described by the APPROX-DMS and TRIM routines (Figure 4). APPROX-DMS takes the following parameters: (1) a sorted list of cysteine-containing peptides mass values (CCP), (2) a target mass value from the PMS list (PMSval), (3) the trimming parameter ε, and (4) the Initial Match threshold (TIM). In lines 2-8 of Figure 4, all the variables and data structures are initialized. In lines 9-11, the theoretical disulfide-bonded peptide structures are formed and stored in a temporary set called TempSet. Line 10 excludes values greater than the PMSval plus a constant corresponding to the Initial Match threshold. The rationale behind this threshold is explained in the following section. Line 12 increments the DMS by invoking the routine MERGE, which returns a sorted set formed by merging the two sorted input sets DMS and TempSet, with duplicated values removed. In line 13, the TRIM routine is called to shorten the DMS set. Lines 14-15 examine if the largest mass value in the constructed DMS set is sufficiently close to the targeted mass PMSval. If so, an Initial Match occurs. Pseudo code for APROX-DMS and TRIM routines Table 2 presents an example showing the effectiveness of the APROX-DMS. In this specific case, 37.5% of the entire search space (all feasible combinations of cysteine-containing peptides) was successfully trimmed, while ensuring that the correct IM was not missed. Another example illustrating the action of APPROX-DMS on the Beta-LG protein is available as supplemental information (see Additional File 1). Running APROX-DMS on the ST8SiaIV C142-C292bond CCP: the mass values of all cysteine-containing peptides. PMSval: a disulfide-bonded precursor ion mass. TrimSet: all the disulfide-bonded structures trimmed from the set of feasible combinations of cysteine-containing peptides. For this example, 37.5% of the structures were trimmed and the correct IM was found. The complexity of both routines MERGE and TRIM is O(|DMS|+|TempSet|) and O(|DMS|), respectively. Further, for any fixed ε > 0, our algorithm is a (1 + ε)-approximation scheme. That is, for any fixed ε > 0, the algorithm runs in polynomial time. The proof of the polynomial time complexity of APPROX-DMS can be obtained by direct analogy to the proof of the polynomial time complexity of the subset sum approximation algorithm from [16] and is outlined in Appendix A. [SUBTITLE] Parameters estimation [SUBSECTION] APPROX-DMS depends on two important parameters, namely, the match threshold TIM and the trimming parameter ε. The match threshold is responsible for defining a “matching window”. This is necessary due to practical considerations such as the sensitivity of the instrument (i.e. 0.01Da, 0.1Da, and 1.0Da) and experimental noise, due to which an exact match is a rarity. 
We conducted an empirical study by using different values of TIM for all our datasets. Based on the results, the TIM value of ±1.0Da was found to minimize missing matches as well as the occurrence of false positives. Considering the smallest precursor ion mass involved, in these studies, the above value of TIM guaranteed a matching accuracy of 99.86%. The second parameter ε is much more important as it is crucial to the running time of the algorithm and its accuracy as evident from Eq. (1). To determine ε, we note that it is inversely proportional to the algorithm’s running time. However, a large value of ε would cause meaningful fragments to be left out of the DMS. At the same time, a small value for ε will lead to few data points being trimmed. Thus “guessing” appropriate values of ε can be complicated and suboptimal choices can significantly impact the quality of the results. We address the problem of data-driven estimation of ε using a regression framework where ε is treated as a dependent variable and based on the data, a functional relationship is obtained between it and the other (independent) variables. We model this functional relationship using the following independent variables: (1) the cysteine-containing peptides (CCP) mass range defined by CCPmax and CCPmin corresponding to the peptides with the highest and lowest mass respectively. (2) The number of cysteine-containing peptides k. A large k implies that the average difference in the mass of any two peptide fragments is small. Conversely, a small k implies fewer fragments with putatively larger differences in their masses. (3) The cysteine-containing peptides average mass value CCPaverage. The relationship between ε and these other variables is then obtained using multiple-variable regression. In our studies, the data for the regression was obtained using bootstrapping where groups of four proteins were randomly picked from the set of 9 proteins available to us. The functional relationship defining ε was obtained to be:(2) APPROX-DMS depends on two important parameters, namely, the match threshold TIM and the trimming parameter ε. The match threshold is responsible for defining a “matching window”. This is necessary due to practical considerations such as the sensitivity of the instrument (i.e. 0.01Da, 0.1Da, and 1.0Da) and experimental noise, due to which an exact match is a rarity. We conducted an empirical study by using different values of TIM for all our datasets. Based on the results, the TIM value of ±1.0Da was found to minimize missing matches as well as the occurrence of false positives. Considering the smallest precursor ion mass involved, in these studies, the above value of TIM guaranteed a matching accuracy of 99.86%. The second parameter ε is much more important as it is crucial to the running time of the algorithm and its accuracy as evident from Eq. (1). To determine ε, we note that it is inversely proportional to the algorithm’s running time. However, a large value of ε would cause meaningful fragments to be left out of the DMS. At the same time, a small value for ε will lead to few data points being trimmed. Thus “guessing” appropriate values of ε can be complicated and suboptimal choices can significantly impact the quality of the results. We address the problem of data-driven estimation of ε using a regression framework where ε is treated as a dependent variable and based on the data, a functional relationship is obtained between it and the other (independent) variables. 
We model this functional relationship using the following independent variables: (1) the cysteine-containing peptides (CCP) mass range defined by CCPmax and CCPmin corresponding to the peptides with the highest and lowest mass respectively. (2) The number of cysteine-containing peptides k. A large k implies that the average difference in the mass of any two peptide fragments is small. Conversely, a small k implies fewer fragments with putatively larger differences in their masses. (3) The cysteine-containing peptides average mass value CCPaverage. The relationship between ε and these other variables is then obtained using multiple-variable regression. In our studies, the data for the regression was obtained using bootstrapping where groups of four proteins were randomly picked from the set of 9 proteins available to us. The functional relationship defining ε was obtained to be:(2) [SUBTITLE] Polynomial time FMS construction [SUBSECTION] In creating the FMS, a strategy similar to the one used for generating the DMS can be used. This involves using an approximation algorithm, this time, to generate the theoretical spectra for all the IMs found during the first-stage matching. We define another trimming parameter δ to trim the FMS mass list. It can be expected that the functional form of δ depends on the fragments mass range, as well as their granularity (extent to which fragments are broken down into smaller ions). In a manner similar to the case for estimating ε, we used regression to obtain the specific functional form for the dependent variable δ in terms of the variables AAmax (the largest amino acid residue mass), AAmin (the smallest amino acid residue mass), AAaverage (the average amino acid residues mass), and ||p|| (average number of amino acid residues per fragment). Bootstrapping was once again utilized, resulting in the relationship shown in Eq. (3).(3) The pseudocode of the APPROX-FMS procedure used for generating the FMS is shown in Figure 5. The function GENFRAGS(.), in line 7, generates multiple fragment ions (a, b, bo, b*, c, x, y, yo, y*, and z) for peptide sequences in Pepsequences, which contains the disulfide-bonded peptides involved in the IM being analyzed. Next, for each element in the FMS and for each fragment in the FragSet (lines 8-11), new disulfide-bonded peptide fragment structures are formed. Line 10 rejects values greater than the TMSval, considering the Validation Match threshold. In line 12, the current FMS set is combined with the disulfide-bonded peptide fragments set TempSet using MERGE. In line 13, the FMS is trimmed using the TRIM routine. Lastly, a Validation Match VM is declared (lines 14-15) when a correspondence is found between the mass of the largest value in FMS and an experimentally determined mass value TMSval, given a Validation Match threshold. Pseudo code for APROX-FMS routine In creating the FMS, a strategy similar to the one used for generating the DMS can be used. This involves using an approximation algorithm, this time, to generate the theoretical spectra for all the IMs found during the first-stage matching. We define another trimming parameter δ to trim the FMS mass list. It can be expected that the functional form of δ depends on the fragments mass range, as well as their granularity (extent to which fragments are broken down into smaller ions). 
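As a concrete reading of the data-driven estimation of ε (and, analogously, δ), the sketch below fits a multiple-variable linear regression on repeated random subsets of per-protein features, mirroring the "groups of four proteins" resampling described in the text. The feature values, the target ε values, and the plain least-squares fit are illustrative assumptions only; the actual functional forms of Eqs. (2) and (3) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-protein features: [CCP_max - CCP_min, k, CCP_average]
features = np.array([
    [2100.0,  8, 1450.0],
    [1800.0,  5, 1600.0],
    [2600.0, 11, 1320.0],
    [1500.0,  4, 1700.0],
    [2300.0,  9, 1390.0],
    [1950.0,  6, 1550.0],
    [2450.0, 10, 1360.0],
    [1700.0,  5, 1620.0],
    [2200.0,  7, 1480.0],
])
# Hypothetical "best" epsilon per protein (e.g. found by grid search).
eps_best = np.array([3e-4, 8e-4, 2e-4, 1e-3, 3e-4, 6e-4, 2e-4, 9e-4, 4e-4])

coefs = []
for _ in range(200):                      # resample groups of four proteins
    idx = rng.choice(len(eps_best), size=4, replace=False)
    X = np.column_stack([np.ones(4), features[idx]])
    beta, *_ = np.linalg.lstsq(X, eps_best[idx], rcond=None)
    coefs.append(beta)
beta_hat = np.mean(coefs, axis=0)         # averaged regression coefficients

def predict_eps(ccp_max, ccp_min, k, ccp_avg):
    x = np.array([1.0, ccp_max - ccp_min, k, ccp_avg])
    return float(x @ beta_hat)

print(beta_hat)
print(predict_eps(2400.0, 1000.0, 9, 1400.0))
```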
In a manner similar to the case for estimating ε, we used regression to obtain the specific functional form for the dependent variable δ in terms of the variables AAmax (the largest amino acid residue mass), AAmin (the smallest amino acid residue mass), AAaverage (the average amino acid residues mass), and ||p|| (average number of amino acid residues per fragment). Bootstrapping was once again utilized, resulting in the relationship shown in Eq. (3).(3) The pseudocode of the APPROX-FMS procedure used for generating the FMS is shown in Figure 5. The function GENFRAGS(.), in line 7, generates multiple fragment ions (a, b, bo, b*, c, x, y, yo, y*, and z) for peptide sequences in Pepsequences, which contains the disulfide-bonded peptides involved in the IM being analyzed. Next, for each element in the FMS and for each fragment in the FragSet (lines 8-11), new disulfide-bonded peptide fragment structures are formed. Line 10 rejects values greater than the TMSval, considering the Validation Match threshold. In line 12, the current FMS set is combined with the disulfide-bonded peptide fragments set TempSet using MERGE. In line 13, the FMS is trimmed using the TRIM routine. Lastly, a Validation Match VM is declared (lines 14-15) when a correspondence is found between the mass of the largest value in FMS and an experimentally determined mass value TMSval, given a Validation Match threshold. Pseudo code for APROX-FMS routine [SUBTITLE] Determining the globally consistent bond topology [SUBSECTION] Once all the Initial Matches and Validation Matches are calculated, we have a “local” (putative bond-level) view of the possible disulfide connectivity. This local information needs to be integrated to obtain a globally consistent view. Our approach to this problem is motivated by Fariselli and Casadio [14]. Specifically, we model the location of the putative disulfide bonds by edges in an undirected graph G (V, E), where the set of vertices V corresponds to the set of cysteines. To each edge, we assign a match score. This score represents the combined importance of each single peak match within two spectra. Each specific peak match is weighted according to its intensity. The match score is given by:(4) In Eq. (4), the numerator corresponds to the sum of each validation match for a disulfide bond multiplied by the matched MS/MS fragment normalized intensity value (IN). Here, VMi is a binary value which is set to 1 if a confirmatory match was found for fragment i. The denominator similarly contains the sum of each experimental MS/MS fragment ion from TMS multiplied by IN. Here, TMSi is a binary variable which indicates the presence of a fragment i in the MS/MS spectrum. Next, the globally consistent bond topology is found by solving the maximum weight matching problem for the graph G. A matching M in the graph G is a set of pair-wise non-adjacent edges; that is, two edges do not share a common vertex. A maximum weight matching is defined as a matching M that contains the largest possible sum of the weights (match scores) of each possible edge (disulfide bond). We use the Gabow algorithm [17], as implemented in [18] for computing the maximum weight match. Once all the Initial Matches and Validation Matches are calculated, we have a “local” (putative bond-level) view of the possible disulfide connectivity. This local information needs to be integrated to obtain a globally consistent view. Our approach to this problem is motivated by Fariselli and Casadio [14]. 
Specifically, we model the location of the putative disulfide bonds by edges in an undirected graph G (V, E), where the set of vertices V corresponds to the set of cysteines. To each edge, we assign a match score. This score represents the combined importance of each single peak match within two spectra. Each specific peak match is weighted according to its intensity. The match score is given by:(4) In Eq. (4), the numerator corresponds to the sum of each validation match for a disulfide bond multiplied by the matched MS/MS fragment normalized intensity value (IN). Here, VMi is a binary value which is set to 1 if a confirmatory match was found for fragment i. The denominator similarly contains the sum of each experimental MS/MS fragment ion from TMS multiplied by IN. Here, TMSi is a binary variable which indicates the presence of a fragment i in the MS/MS spectrum. Next, the globally consistent bond topology is found by solving the maximum weight matching problem for the graph G. A matching M in the graph G is a set of pair-wise non-adjacent edges; that is, two edges do not share a common vertex. A maximum weight matching is defined as a matching M that contains the largest possible sum of the weights (match scores) of each possible edge (disulfide bond). We use the Gabow algorithm [17], as implemented in [18] for computing the maximum weight match.
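The local-to-global step can be sketched as an intensity-weighted match score per candidate bond, in the spirit of Eq. (4), followed by a maximum weight matching over the cysteine graph. The sketch below substitutes networkx's matching routine for the Gabow implementation of [17,18]; the per-bond evidence is invented, and the cut-off of 30 echoes the match-score threshold discussed in the Results. It is a sketch, not the authors' code.

```python
import networkx as nx

def match_score(vm_flags, intensities):
    """Intensity-weighted fraction of MS/MS peaks that were validated for a
    candidate bond (spirit of Eq. (4)); returns a 0-100 score."""
    num = sum(i for vm, i in zip(vm_flags, intensities) if vm)
    den = sum(intensities)
    return 100.0 * num / den if den else 0.0

# Hypothetical per-bond evidence: cysteine pair -> (VM flags, normalized intensities)
evidence = {
    ("C142", "C292"): ([1, 1, 0, 1], [0.9, 0.7, 0.4, 0.6]),
    ("C142", "C156"): ([0, 1, 0, 0], [0.9, 0.2, 0.4, 0.6]),
    ("C156", "C292"): ([1, 0, 0, 0], [0.5, 0.3, 0.8, 0.4]),
}

G = nx.Graph()
for (c1, c2), (vm, inten) in evidence.items():
    score = match_score(vm, inten)
    if score >= 30:                        # keep only reasonably supported bonds
        G.add_edge(c1, c2, weight=score)

topology = nx.max_weight_matching(G)       # globally consistent, non-overlapping bonds
print(sorted(tuple(sorted(e)) for e in topology))  # [('C142', 'C292')]
```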
null
null
null
null
[ "Background", "The subset-sum formulation: towards polynomial-time matching", "Polynomial time DMS mass list construction", "Parameters estimation", "Polynomial time FMS construction", "Determining the globally consistent bond topology", "Results", "Analysis of efficiency of the search", "Effects of incorporating multiple ion types: a case study", "Comparative studies with predictive techniques", "Comparative studies with MassMatrix", "Quantitative assessment and analysis of the method’s performance", "Conclusions", "Authors' contributions", "Competing interests", "Appendix A – Etudes of the proof of polynomial complexity" ]
[ "Disulfide (S-S) bonds are known to play an important role in protein structure and function. Among others, this includes: influencing protein folding and stabilization, formation of characteristic structural motifs such as the cysteine knot, mediation of thiol-disulfide interchange reactions, and regulation of enzymatic activity. Early computational approaches for S-S bond determination focused on two learning-driven formulations based on the protein primary structure [1]: residue classification (distinguish bonded and free cysteines) and connectivity prediction (determine the S-S connectivity pattern). In recent times, the increasing availability and accuracy of mass spectrometry [2] (MS) has opened up an alternate approach; its essence lies in matching the theoretical spectra of ionized peptide fragments with experimentally obtained spectra to identify the presence of specific S-S bonds. A diagrammatic representation of the key steps of a MS-based approach is presented in Figure 1, along with the different types of fragment ions that can be generated as an outcome of this process.\nMS-based approach diagrammatic representation. (A) Once a protein is digested, the theoretically possible disulfide bonded peptides are compared with experimentally obtained precursor ions. In order to confirm each correspondence, the possible disulfide bonded fragment ions are next compared with experimentally generated MS/MS spectra. (B) Most of the different fragment ions (and their nomenclature) that can be observed. Ions types not represented here include b and y ions which have either lost a water molecule (bo, yo) or have lost an ammonia molecule (b*, y*).\nMS-based methods generally outperform methods using sequence-based learning formulations, as showed by Lee and Singh [3]. However, a number of algorithmic challenges remain outstanding in realizing the potential of MS-based approaches. Salient among these are: (1) accounting for multiple ion types in the data [4,5]: To avoid an exponential increase in the search space, a common simplification is to limit the analysis to the spectra of b-ions and y-ions only [3,6,7]. However, this simplification may erroneously ignore the occurrence of other ions, such as: a, bo, b*, c, x, yo, y*, and z. While the occurrence of non-b/y ions is minimized (though not eliminated) in collision-induced dissociation (CID), some of these ions can be present with greater likelihood in dissociation methods such as electron capture dissociation (ECD), electron transfer dissociation (ETD), and electron-detachment dissociation (EDD). In fact these ions types should be considered even in CID as illustrated by the example in Figure 2. (2) Design of efficient search and matching algorithms: The search space of possible disulfide topologies increases rapidly not only with the number of ion types being analyzed but also with the number of cysteines as well as the types of connectivity patterns. Thus, it is imperative to have algorithms that can accommodate the richness of the entire problem domain. (3) Automated data-driven determination of parameters: Many advanced algorithms in this area are intrinsically parametric. Often, determining the optimal value of these parameters automatically is in itself, a complex problem. This places the practitioner at a significant disadvantage. Support for automated and data-driven strategies for estimation of crucial parameters is therefore crucial to the real-world success of a method in this problem domain.\nMultiple-ion spectra analysis. 
This figure illustrates the presence of multiple ions types (in green) after CID. In the first spectrum, note the presence of bo and yo ions with high intensity in the fragmentation of the precursor ion with sequence: FFLQGIQLNTILPDAR, for the protein Lysozyme [Swiss-Prot: P11279]. In the second spectrum, a, bo, b*, and yo ions (all with high intensity) can be observed after the fragmentation of a precursor ion existing in the protein Pratelet glycoprotein 4 [Swiss-Prot P16671].\nThe contributions of this paper in context of the aforementioned challenges include: (1) Development of a highly efficient strategy for multi-ion disulfide bond analysis by considering a, b, bo, b*, c, x, y, yo, y*, and z ion types. To the best of our knowledge, this is the first algorithmic work that has considered all these ion-types in S-S bond determination. (2) A fully polynomial-time algorithm that selectively generates only those regions of the search space where the correct solutions reside with a high likelihood. (3) A multiple-regression-based data driven method to calculate the critical parameters modulating the search, so as to ensure that the correct bonding topologies are not missed due to the truncation of the search space. At the same time, the parameter selection ensures that the search is focused on the most promising regions of the search-space, and (4) A local-to-global strategy that builds a globally consistent bonding pattern based on MS data at the level of individual bonds.\nThe proposed approach also implements the probability-based scoring model proposed in [8] for each specific disulfide bond based on the number of MS/MS matches and their respective abundance. These scores reflect the significance of the specific disulfide bond and can form the basis of analysis, such as that conducted in [9], to estimate the accuracy of peptide assignment to tandem mass spectra.\nAt a high-level, the proposed approach can be thought of as a two-stage database-based matching technique (see Figure 3). From this perspective, it shares similarities with [10], where cross-linked peptides were also identified using a two-level method. During the first stage of such two-stage methods, the mass values of the theoretically possible disulfide-bonded peptide structures are compared with precursor ion mass values derived from the MS-spectra. In the second (confirmatory) stage, the theoretical spectra from the disulfide-bonded peptide structures are compared with MS/MS experimental spectra. The confirmatory step is necessary since a disulfide bonded peptide may not actually correspond to a precursor ion, even if their mass values are similar. Our approach can be used to conduct this entire search process in (a low degree) polynomial time. This paper significantly extends our prior research where we had proposed efficient indexing strategies to speed-up the search [11,12] as well as our more recent work [13], where a polynomial time approximation algorithm using hand-crafted parameters was proposed for the first stage matching.\nTwo-stage matching spectra for protein ST8SiaIV. (A) In the first-stage (DMS vs. PMS), the theoretical disulfide-bonded structure is matched with the doubly charged precursor ion with highest intensity, whose m/z = 1082.9. (B) For this initial match, the disulfide-bonded peptide pair is fragmented and the fragments are matched with the MS/MS spectrum for the precursor ion (FMS vs. 
TMS), generating a list of validation matches.", "Given the growth characteristics of the DMS and the FMS, an exhaustive search-and-match strategy is clearly infeasible in the general case. This is especially true if multiple ion types are considered. Indexing [11,12] and filtering [15] are two possible approaches that have been considered for ameliorating this problem. In this paper we explore an alternative strategy that is based on the key insight that the entire search space (DMS or FMS) does not need to be generated to determine the matches. That is, we only want to generate the few disulfide bonded peptides whose mass is close to the (given) experimental spectra rather than generate all possible peptide combinations and subsequently testing and discarding most of these. This insight allows us to re-cast the DMS and FMS generation as instances of the subset-sum problem [16]. Recall, that given the pair (S, t), where S is a set of positive integers and t ∈ Z+, the subset-sum problem asks whether there exists a subset of S that adds up to t. While the subset-sum problem is itself NP-Complete, it can be solved using approximation strategies to obtain near-optimal solutions, in polynomial-time [16].", "Our strategy lies in obtaining an approximate solution to the subset-sum problem by trimming as many elements from DMS as possible based on a parameter ε. To trim the DMS set by ε means to remove as many elements from DMS as possible such that if DMS* is the resultant trimmed set, then for every element DMSi removed from DMS, there will remain an element DMSi* in DMS* which is “sufficiently” close in terms of its mass to the deleted element DMSi. Specifically,(1)\nThe approximation algorithm for creating the partial DMS is described by the APPROX-DMS and TRIM routines (Figure 4). APPROX-DMS takes the following parameters: (1) a sorted list of cysteine-containing peptides mass values (CCP), (2) a target mass value from the PMS list (PMSval), (3) the trimming parameter ε, and (4) the Initial Match threshold (TIM). In lines 2-8 of Figure 4, all the variables and data structures are initialized. In lines 9-11, the theoretical disulfide-bonded peptide structures are formed and stored in a temporary set called TempSet. Line 10 excludes values greater than the PMSval plus a constant corresponding to the Initial Match threshold. The rationale behind this threshold is explained in the following section. Line 12 increments the DMS by invoking the routine MERGE, which returns a sorted set formed by merging the two sorted input sets DMS and TempSet, with duplicated values removed. In line 13, the TRIM routine is called to shorten the DMS set. Lines 14-15 examine if the largest mass value in the constructed DMS set is sufficiently close to the targeted mass PMSval. If so, an Initial Match occurs.\nPseudo code for APROX-DMS and TRIM routines\nTable 2 presents an example showing the effectiveness of the APROX-DMS. In this specific case, 37.5% of the entire search space (all feasible combinations of cysteine-containing peptides) was successfully trimmed, while ensuring that the correct IM was not missed. Another example illustrating the action of APPROX-DMS on the Beta-LG protein is available as supplemental information (see Additional File 1).\nRunning APROX-DMS on the ST8SiaIV C142-C292bond\nCCP: the mass values of all cysteine-containing peptides. PMSval: a disulfide-bonded precursor ion mass. 
TrimSet: all the disulfide-bonded structures trimmed from the set of feasible combinations of cysteine-containing peptides. For this example, 37.5% of the structures were trimmed and the correct IM was found.\nThe complexity of both routines MERGE and TRIM is O(|DMS|+|TempSet|) and O(|DMS|), respectively. Further, for any fixed ε > 0, our algorithm is a (1 + ε)-approximation scheme. That is, for any fixed ε > 0, the algorithm runs in polynomial time. The proof of the polynomial time complexity of APPROX-DMS can be obtained by direct analogy to the proof of the polynomial time complexity of the subset sum approximation algorithm from [16] and is outlined in Appendix A.", "APPROX-DMS depends on two important parameters, namely, the match threshold TIM and the trimming parameter ε. The match threshold is responsible for defining a “matching window”. This is necessary due to practical considerations such as the sensitivity of the instrument (i.e. 0.01Da, 0.1Da, and 1.0Da) and experimental noise, due to which an exact match is a rarity. We conducted an empirical study by using different values of TIM for all our datasets. Based on the results, the TIM value of ±1.0Da was found to minimize missing matches as well as the occurrence of false positives. Considering the smallest precursor ion mass involved, in these studies, the above value of TIM guaranteed a matching accuracy of 99.86%.\nThe second parameter ε is much more important as it is crucial to the running time of the algorithm and its accuracy as evident from Eq. (1). To determine ε, we note that it is inversely proportional to the algorithm’s running time. However, a large value of ε would cause meaningful fragments to be left out of the DMS. At the same time, a small value for ε will lead to few data points being trimmed. Thus “guessing” appropriate values of ε can be complicated and suboptimal choices can significantly impact the quality of the results. We address the problem of data-driven estimation of ε using a regression framework where ε is treated as a dependent variable and based on the data, a functional relationship is obtained between it and the other (independent) variables. We model this functional relationship using the following independent variables: (1) the cysteine-containing peptides (CCP) mass range defined by CCPmax and CCPmin corresponding to the peptides with the highest and lowest mass respectively. (2) The number of cysteine-containing peptides k. A large k implies that the average difference in the mass of any two peptide fragments is small. Conversely, a small k implies fewer fragments with putatively larger differences in their masses. (3) The cysteine-containing peptides average mass value CCPaverage. The relationship between ε and these other variables is then obtained using multiple-variable regression. In our studies, the data for the regression was obtained using bootstrapping where groups of four proteins were randomly picked from the set of 9 proteins available to us. The functional relationship defining ε was obtained to be:(2)", "In creating the FMS, a strategy similar to the one used for generating the DMS can be used. This involves using an approximation algorithm, this time, to generate the theoretical spectra for all the IMs found during the first-stage matching. We define another trimming parameter δ to trim the FMS mass list. 
It can be expected that the functional form of δ depends on the fragments mass range, as well as their granularity (extent to which fragments are broken down into smaller ions). In a manner similar to the case for estimating ε, we used regression to obtain the specific functional form for the dependent variable δ in terms of the variables AAmax (the largest amino acid residue mass), AAmin (the smallest amino acid residue mass), AAaverage (the average amino acid residues mass), and ||p|| (average number of amino acid residues per fragment). Bootstrapping was once again utilized, resulting in the relationship shown in Eq. (3).(3)\nThe pseudocode of the APPROX-FMS procedure used for generating the FMS is shown in Figure 5. The function GENFRAGS(.), in line 7, generates multiple fragment ions (a, b, bo, b*, c, x, y, yo, y*, and z) for peptide sequences in Pepsequences, which contains the disulfide-bonded peptides involved in the IM being analyzed. Next, for each element in the FMS and for each fragment in the FragSet (lines 8-11), new disulfide-bonded peptide fragment structures are formed. Line 10 rejects values greater than the TMSval, considering the Validation Match threshold. In line 12, the current FMS set is combined with the disulfide-bonded peptide fragments set TempSet using MERGE. In line 13, the FMS is trimmed using the TRIM routine. Lastly, a Validation Match VM is declared (lines 14-15) when a correspondence is found between the mass of the largest value in FMS and an experimentally determined mass value TMSval, given a Validation Match threshold.\nPseudo code for APROX-FMS routine", "Once all the Initial Matches and Validation Matches are calculated, we have a “local” (putative bond-level) view of the possible disulfide connectivity. This local information needs to be integrated to obtain a globally consistent view. Our approach to this problem is motivated by Fariselli and Casadio [14]. Specifically, we model the location of the putative disulfide bonds by edges in an undirected graph G (V, E), where the set of vertices V corresponds to the set of cysteines. To each edge, we assign a match score. This score represents the combined importance of each single peak match within two spectra. Each specific peak match is weighted according to its intensity. The match score is given by:(4)\nIn Eq. (4), the numerator corresponds to the sum of each validation match for a disulfide bond multiplied by the matched MS/MS fragment normalized intensity value (IN). Here, VMi is a binary value which is set to 1 if a confirmatory match was found for fragment i. The denominator similarly contains the sum of each experimental MS/MS fragment ion from TMS multiplied by IN. Here, TMSi is a binary variable which indicates the presence of a fragment i in the MS/MS spectrum.\nNext, the globally consistent bond topology is found by solving the maximum weight matching problem for the graph G. A matching M in the graph G is a set of pair-wise non-adjacent edges; that is, two edges do not share a common vertex. A maximum weight matching is defined as a matching M that contains the largest possible sum of the weights (match scores) of each possible edge (disulfide bond). We use the Gabow algorithm [17], as implemented in [18] for computing the maximum weight match.", "The proposed method was validated utilizing experimental data obtained using a capillary liquid chromatography system coupled with a Thermo-Fisher LCQ ion trap mass spectrometer LC/ESI-MS/MS system. 
Details of the experimental protocols can be found in [19,20]. We used data from nine eukaryotic glycosyltransferases. These molecules and their Swiss-Prot ID were: ST8Sia IV [Q92187], Beta-lactoglobulin [P02754], FucT VII [Q11130], C2GnT-I [Q09324], Lysozyme [P00698], FT III [P21217], β1-4GalT [P08037], Aldolase [P00883], and Aspa [Q9R1T5].\nWe conducted five sets of experiments to investigate the proposed method and its efficacy. These experiments included: (1) Analysis of method’s efficiency, showing how the method successfully reduced the DMS and FMS search spaces. (2) Analysis of the effect of incorporating multiple ion types, demonstrating the importance of considering non-b/y ions in the determination of disulfide bonds. (3) Comparative analysis of the proposed method with established predictive techniques. (4) Comparative analysis of the method with MassMatrix, an established MS-based approach which can be used for determining S-S bonds. In both experiment 3 and experiment 4, the aforementioned set of glycosyltransferases and their known S-S bond topology provided us with the ground truth. (5) Analysis of the method in terms of established performance measures: Accuracy (Q2), Sensitivity (Qc), Specificity (Qnc), and Matthew’s correlation coefficient (c).\n[SUBTITLE] Analysis of efficiency of the search [SUBSECTION] One of the most important characteristics of the proposed method is its efficiency in terms of excluding significant portions of a large and rapidly expanding search space. In Table 3 we compare the size of the complete DMS (containing all the disulfide-bonded peptide structures generated for each protein) and the complete FMS (containing all the disulfide-bonded fragment ions) with the truncated DMS and FMS obtained using the proposed approach.\nDMS and FMS mass space sizes comparison\nIt may be noted that across the molecules, on an average, the proposed approach required examining about 78% of the entire DMS and only about 14% of the entire FMS. It is crucial to note that this reduction in search was achieved without impacting the accuracy and having considered all multiple fragment ion types (a, b, bo, b*, c, x, y, yo, y*, and z). The DMS decrease was less than the FMS decrease because the disulfide-bonded structures in the DMS were bigger and fewer in number and consequently dispersed across the spectra mass range. In Figure 6, we show the actual time taken to obtain a solution by generating the complete DMS and FMS, as well as their truncated counterparts, for each of the molecules.\nComparison of the computational time (in seconds) for the exhaustive and partial generation of DMS and FMS of the proteins from Table 3. On average there was a 49.5% decrease in time to compute the DMS and 88.7% decrease in time to compute the FMS. The computations were carried out on an Intel T2390 1.86 GHz single-core processor with 1GB RAM.\nOne of the most important characteristics of the proposed method is its efficiency in terms of excluding significant portions of a large and rapidly expanding search space. 
In Table 3 we compare the size of the complete DMS (containing all the disulfide-bonded peptide structures generated for each protein) and the complete FMS (containing all the disulfide-bonded fragment ions) with the truncated DMS and FMS obtained using the proposed approach.\nDMS and FMS mass space sizes comparison\nIt may be noted that across the molecules, on an average, the proposed approach required examining about 78% of the entire DMS and only about 14% of the entire FMS. It is crucial to note that this reduction in search was achieved without impacting the accuracy and having considered all multiple fragment ion types (a, b, bo, b*, c, x, y, yo, y*, and z). The DMS decrease was less than the FMS decrease because the disulfide-bonded structures in the DMS were bigger and fewer in number and consequently dispersed across the spectra mass range. In Figure 6, we show the actual time taken to obtain a solution by generating the complete DMS and FMS, as well as their truncated counterparts, for each of the molecules.\nComparison of the computational time (in seconds) for the exhaustive and partial generation of DMS and FMS of the proteins from Table 3. On average there was a 49.5% decrease in time to compute the DMS and 88.7% decrease in time to compute the FMS. The computations were carried out on an Intel T2390 1.86 GHz single-core processor with 1GB RAM.\n[SUBTITLE] Effects of incorporating multiple ion types: a case study [SUBSECTION] In this experiment, we investigated the effect of incorporating multiple ion types (a, b, bo, b*, c, x, y, yo, y*, and z) in determining the S-S bonds as opposed to considering only b/y-ions. We found that multiple instances of combinations between b/y ions and other ions types occurred by analyzing the confirmatory matches for the different disulfide bonds. These combinations are available as supplemental information (see Additional File 2).\nThe consideration of multiple ion types also contributed to the method’s accuracy in terms of determining specific S-S bonds. Disulfide bonds previously missed due to their low match score could be identified when all ten different ion types were considered. The tryptic-digested protein FucT VII (which underwent CID) constituted one such example. In FucT VII the bond C318-C321 was missed when considering only b/y ions (match score 29, pp=11, pp2 =15). However, as shown in Figure 7, this bond was identified when multiple ions types were included (match score 100, pp=31, pp2=70). The confidence measures pp and pp2 are described in the following section. To explain this improvement we note that C318-C321 was an intra-bond involving cysteines that were close together. Consequently, CID-based fragmentation was poor and the consideration of other ion types essentially improved the signal-to-background contrast. In this particular case, five other ion types - a4, a5, a6, bo7, y*7 - were present in the FucT VII MS/MS data besides the b ions represented in the spectrum on the right in Figure 7. In the following, we present details of how these ions contribute to the match score Vs (from Eq. (4)). We present the two cases: consideration of only b/y-ions (Eq. (5)) and consideration of multiple ion types (Eq. (6)). In the numerator we specify the contribution of each spectrum peak from Figure 7 (the ion corresponding to each VMi × IN term is showed in brackets).(5)(6)\nSpectra samples from tryptic digested protein FucT VII. Spectra (m/z vs. 
normalized intensity) illustrating the confirmatory matches (whose intensity values were at least 10% of the maximum intensity) found for the disulfide bond between cysteines C318-C321 in protein FucT VII. The spectrum in the left shows the matches found when multiple ions were considered. The spectrum in the right shows the matches when only b/y-ions were considered.\nWe also observed that consideration of multiple ion-types led to significant increase in the match scores of the true disulfide bonds, whereas only a modest increase was noticed for false positives. This allowed us to increase the threshold we use on the match score Vs to identify high-quality matches from 30 to 80 (a 166% increase). The positive effect of this increment on the specificity of the method can be illustrated by considering the protein Aldolase. In this molecule, consideration of only b/y ions led to a false positive S-S bond identification between cysteines C135-C202 (Vs=30.8, with (original) threshold 30) However, when the multiple ions-types were considered with the (increased) threshold on the match score, no S-S bond was found between C135-C202 (Vs= 53.2, (incremented) threshold 80).\nIn this experiment, we investigated the effect of incorporating multiple ion types (a, b, bo, b*, c, x, y, yo, y*, and z) in determining the S-S bonds as opposed to considering only b/y-ions. We found that multiple instances of combinations between b/y ions and other ions types occurred by analyzing the confirmatory matches for the different disulfide bonds. These combinations are available as supplemental information (see Additional File 2).\nThe consideration of multiple ion types also contributed to the method’s accuracy in terms of determining specific S-S bonds. Disulfide bonds previously missed due to their low match score could be identified when all ten different ion types were considered. The tryptic-digested protein FucT VII (which underwent CID) constituted one such example. In FucT VII the bond C318-C321 was missed when considering only b/y ions (match score 29, pp=11, pp2 =15). However, as shown in Figure 7, this bond was identified when multiple ions types were included (match score 100, pp=31, pp2=70). The confidence measures pp and pp2 are described in the following section. To explain this improvement we note that C318-C321 was an intra-bond involving cysteines that were close together. Consequently, CID-based fragmentation was poor and the consideration of other ion types essentially improved the signal-to-background contrast. In this particular case, five other ion types - a4, a5, a6, bo7, y*7 - were present in the FucT VII MS/MS data besides the b ions represented in the spectrum on the right in Figure 7. In the following, we present details of how these ions contribute to the match score Vs (from Eq. (4)). We present the two cases: consideration of only b/y-ions (Eq. (5)) and consideration of multiple ion types (Eq. (6)). In the numerator we specify the contribution of each spectrum peak from Figure 7 (the ion corresponding to each VMi × IN term is showed in brackets).(5)(6)\nSpectra samples from tryptic digested protein FucT VII. Spectra (m/z vs. normalized intensity) illustrating the confirmatory matches (whose intensity values were at least 10% of the maximum intensity) found for the disulfide bond between cysteines C318-C321 in protein FucT VII. The spectrum in the left shows the matches found when multiple ions were considered. 
The spectrum in the right shows the matches when only b/y-ions were considered.\nWe also observed that consideration of multiple ion-types led to significant increase in the match scores of the true disulfide bonds, whereas only a modest increase was noticed for false positives. This allowed us to increase the threshold we use on the match score Vs to identify high-quality matches from 30 to 80 (a 166% increase). The positive effect of this increment on the specificity of the method can be illustrated by considering the protein Aldolase. In this molecule, consideration of only b/y ions led to a false positive S-S bond identification between cysteines C135-C202 (Vs=30.8, with (original) threshold 30) However, when the multiple ions-types were considered with the (increased) threshold on the match score, no S-S bond was found between C135-C202 (Vs= 53.2, (incremented) threshold 80).\n[SUBTITLE] Comparative studies with predictive techniques [SUBSECTION] In this experiment we compared the proposed method with three well known predictive methods DiANNA [21], DISULFIND [22], and PreCys [23]. The results from each of the methods are shown in Table 4 along with the with the known disulfide bond linkages according to the Swiss-Prot knowledgebase. As it can be seen, in terms of correct identifications (as well as minimizing false positives), the proposed approach outperformed all the predictive techniques.\nComparison with predictive methods\nIn this experiment we compared the proposed method with three well known predictive methods DiANNA [21], DISULFIND [22], and PreCys [23]. The results from each of the methods are shown in Table 4 along with the with the known disulfide bond linkages according to the Swiss-Prot knowledgebase. As it can be seen, in terms of correct identifications (as well as minimizing false positives), the proposed approach outperformed all the predictive techniques.\nComparison with predictive methods\n[SUBTITLE] Comparative studies with MassMatrix [SUBSECTION] At the state-of-the-art MS2Assign [6] and MassMatrix [7] are two MS-based methods that can be applied to the problem of determining S-S bond connectivity. In our previous work [3], the MS2DB system developed by us was found to be comparable to MS2Assign [6], albeit, in limited testing. Since the proposed method improves upon MS2DB and due to space limitations, we only present detailed comparative results with MassMatrix [7] in Table 5. As part of this experiment, for each S-S bond, in addition to the empirical match score (Eq. (4)), a probability based scoring model proposed in [8] was implemented. This model provided two scores called pp and pp2 scores. The pp score helps to evaluate whether the number of VMs could be a random. The pp2 score evaluates whether the total abundance (intensity) of VMs could be a random. We refer the reader to [8] for a detailed description and formulae of the pp and pp2 scores. The reader may note that the proposed method had better pp and pp2 scores when compared to MassMatrix (higher pp and pp2 scores are better, indicating smaller p-values). While the match scores (Vs) obtained with the proposed method were also higher than those obtained with MassMatrix (V*s), no inferences should be drawn as these scores are calculated differently in each of these methods. As can be seen from Table 5, every bond correctly determined by MassMatrix was also found by us. 
However, there were S-S bonds in C2GnT-I and Lysozyme that were found by the proposed method but not by MassMatrix.
Comparison with MassMatrix
The score (Vs) of each disulfide bond and the confidence scores (pp and pp2 values) are shown in brackets, respectively.
Quantitative assessment and analysis of the method’s performance
If the set of disulfide bonds is denoted by P and the set of cysteines not forming disulfide bonds by N, then true positive (TP) predictions occur when disulfide bonds that exist are correctly predicted. False negative (FN) predictions occur when bonds that exist are not predicted as such. Similarly, a true negative (TN) prediction correctly identifies cysteine pairs that do not form a bond. Finally, a false positive (FP) prediction incorrectly assigns a disulfide link to a pair of cysteines that are not actually bonded. Based on these definitions, we use the following four standard measures to analyze the proposed method.
Sensitivity (Qc) = TP/P (7)
Specificity (Qnc) = TN/N (8)
Accuracy (Q2) = (TP + TN)/(P + N) (9)
Matthews correlation coefficient (c) = (TP·TN − FP·FN)/√((TP + FP)(TP + FN)(TN + FP)(TN + FN)) (10)
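As a sanity check on how these measures behave, here is a small Python sketch that computes them from raw counts; the function name and the choice to return the Matthews coefficient as 0 when its denominator vanishes are assumptions made here for illustration only.

```python
import math

def performance_measures(tp, tn, fp, fn):
    """Sensitivity, specificity, accuracy and Matthews correlation coefficient
    from raw counts, following Eqs. (7)-(10). Here P = tp + fn and N = tn + fp."""
    p, n = tp + fn, tn + fp
    sensitivity = tp / p if p else 0.0                    # Qc, Eq. (7)
    specificity = tn / n if n else 0.0                    # Qnc, Eq. (8)
    accuracy = (tp + tn) / (p + n) if (p + n) else 0.0    # Q2, Eq. (9)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0   # c, Eq. (10)
    return sensitivity, specificity, accuracy, mcc

# Hypothetical example: 3 true bonds, 2 recovered, no spurious bonds reported.
print(performance_measures(tp=2, tn=10, fp=0, fn=1))
```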
In Table 6 we present the results obtained for our framework. With maximum specificity and high accuracy (98% on average), the method correctly reported the connectivity for most of the proteins. The method failed to identify only three disulfide bonds. One intra-bond in the Beta-LG protein could not be found due to a blind spot caused by that same intra-bond, which made the protein’s fragmentation difficult. A blind spot occurs when the precursor ion fragmentation produces different fragments only at the outside boundaries of the intra-disulfide bond. This can cause too few product ions to be generated; the limited information can prevent accurate determination of disulfide bonds using MS-based methods. One cross-linked bond in the FT III protein also could not be identified, because this particular connectivity configuration creates a large disulfide-bonded structure, which is poorly fragmented by tandem mass spectrometry. One bond in the C2GnT-I protein could not be found, since the precursor ion cannot be formed by chymotryptic digestion, which was the digestion used for C2GnT-I. It is important to note that neither MassMatrix nor MS2Assign were able to identify these bonds.
Sensitivity, specificity, accuracy and Matthews correlation coefficient results for all nine proteins analyzed
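For readers who want to reproduce this kind of evaluation, the sketch below shows one simple way to derive the TP/FP/TN/FN counts for a protein by comparing predicted cysteine pairings against the known Swiss-Prot connectivity. Treating every unordered cysteine pair as the unit of counting, and the example positions used, are assumptions for illustration, not details taken from the paper.

```python
from itertools import combinations

def confusion_counts(cysteines, known_bonds, predicted_bonds):
    """Count TP/FP/TN/FN over all unordered cysteine pairs of one protein.

    cysteines       -- iterable of cysteine positions, e.g. [135, 202, 318, 321]
    known_bonds     -- set of frozensets: the annotated (Swiss-Prot) S-S bonds
    predicted_bonds -- set of frozensets: the bonds reported by the method
    """
    tp = fp = tn = fn = 0
    for pair in combinations(sorted(cysteines), 2):
        pair = frozenset(pair)
        real, pred = pair in known_bonds, pair in predicted_bonds
        if real and pred:
            tp += 1
        elif real:
            fn += 1
        elif pred:
            fp += 1
        else:
            tn += 1
    return tp, fp, tn, fn

# Hypothetical example for a protein with four cysteines and one annotated bond.
counts = confusion_counts(
    cysteines=[135, 202, 318, 321],
    known_bonds={frozenset({318, 321})},
    predicted_bonds={frozenset({318, 321})},
)
```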
Conclusions
We have presented an algorithmic framework for determining S-S bond topologies of molecules using MS/MS data. The proposed approach is computationally efficient, data-driven, and has high accuracy, sensitivity, and specificity. It is not limited either by the connectivity pattern or by the variability of product ion types generated during the fragmentation of precursor ions. Furthermore, the approach does not require user intervention and can form the basis for high-throughput S-S bond determination.
Authors' contributions
The algorithmic solution framework was designed by RS and implemented by WM. Computational studies and experiments were carried out by WM and RS. T-YY developed the experimental protocols and generated the data. The paper was written by RS and WM.
Competing interests
The authors declare that they have no competing interests.
Appendix A – Etudes of the proof of polynomial complexity
The proof that the proposed method is a fully polynomial approximation scheme consists of two parts. First, we need to show that each value returned by the APPROX-DMS function is within a factor of 1 + ε of the optimal solution. Second, we need to show that the running time of the method is fully polynomial. We refer the reader to [16] for the proof of the first part and focus in the following on analyzing the complexity of the method. To show that the method is a fully polynomial-time approximation scheme, we derive a bound on the length of a DMS set. After trimming, successive elements DMSi and DMSi+1 of DMS must satisfy DMSi+1/DMSi > 1 + ε. Therefore, each possible DMS set contains up to log1+ε PMSval values. Since x/(1 + x) ≤ ln(1 + x) ≤ x and 0 < ε < 1, it can be shown that:
log1+ε PMSval = ln PMSval / ln(1 + ε) ≤ (1 + ε) ln PMSval / ε ≤ 2 ln PMSval / ε (11)
As can be seen from Eq. (11), this bound is (explicitly) polynomial in the size of the input PMSval. It is also (implicitly) polynomial in the size of the set DMS, since ε is directly proportional to the number of cysteine-containing peptides k (per Eq. (2)) and these peptides are in turn combined to form each element of the DMS. A similar argument can be made for the APPROX-FMS routine, completing the proof that the proposed method is a fully polynomial-time approximation scheme.
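To connect the bound in Eq. (11) to the algorithm, the following Python sketch implements a trim step with the spacing rule used in the argument above. The function name and the list-based representation are illustrative rather than the authors' code, but any kept list produced this way grows geometrically by a factor greater than 1 + ε, which is exactly what limits its length to roughly log1+ε of the largest mass.

```python
def trim(sorted_masses, eps):
    """Thin a sorted mass list so that, per Eq. (1), every removed mass m keeps a
    representative m* with m* <= m <= m* * (1 + eps).

    Successive kept values then satisfy next/prev > 1 + eps, which bounds the
    length of the kept list by about log base (1 + eps) of the largest mass."""
    kept = []
    for m in sorted_masses:
        if not kept or m > kept[-1] * (1 + eps):
            kept.append(m)
    return kept

# Illustration: with eps = 0.1, a dense list collapses to a geometrically spaced one.
print(trim([100.0, 100.5, 101.0, 111.0, 112.0, 124.0], 0.1))  # -> [100.0, 111.0, 124.0]
```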
[ "Background", "Methods", "The subset-sum formulation: towards polynomial-time matching", "Polynomial time DMS mass list construction", "Parameters estimation", "Polynomial time FMS construction", "Determining the globally consistent bond topology", "Results", "Analysis of efficiency of the search", "Effects of incorporating multiple ion types: a case study", "Comparative studies with predictive techniques", "Comparative studies with MassMatrix", "Quantitative assessment and analysis of the method’s performance", "Conclusions", "Authors' contributions", "Competing interests", "Appendix A – Etudes of the proof of polynomial complexity", "Supplementary Material" ]
[ "Disulfide (S-S) bonds are known to play an important role in protein structure and function. Among others, this includes: influencing protein folding and stabilization, formation of characteristic structural motifs such as the cysteine knot, mediation of thiol-disulfide interchange reactions, and regulation of enzymatic activity. Early computational approaches for S-S bond determination focused on two learning-driven formulations based on the protein primary structure [1]: residue classification (distinguish bonded and free cysteines) and connectivity prediction (determine the S-S connectivity pattern). In recent times, the increasing availability and accuracy of mass spectrometry [2] (MS) has opened up an alternate approach; its essence lies in matching the theoretical spectra of ionized peptide fragments with experimentally obtained spectra to identify the presence of specific S-S bonds. A diagrammatic representation of the key steps of a MS-based approach is presented in Figure 1, along with the different types of fragment ions that can be generated as an outcome of this process.\nMS-based approach diagrammatic representation. (A) Once a protein is digested, the theoretically possible disulfide bonded peptides are compared with experimentally obtained precursor ions. In order to confirm each correspondence, the possible disulfide bonded fragment ions are next compared with experimentally generated MS/MS spectra. (B) Most of the different fragment ions (and their nomenclature) that can be observed. Ions types not represented here include b and y ions which have either lost a water molecule (bo, yo) or have lost an ammonia molecule (b*, y*).\nMS-based methods generally outperform methods using sequence-based learning formulations, as showed by Lee and Singh [3]. However, a number of algorithmic challenges remain outstanding in realizing the potential of MS-based approaches. Salient among these are: (1) accounting for multiple ion types in the data [4,5]: To avoid an exponential increase in the search space, a common simplification is to limit the analysis to the spectra of b-ions and y-ions only [3,6,7]. However, this simplification may erroneously ignore the occurrence of other ions, such as: a, bo, b*, c, x, yo, y*, and z. While the occurrence of non-b/y ions is minimized (though not eliminated) in collision-induced dissociation (CID), some of these ions can be present with greater likelihood in dissociation methods such as electron capture dissociation (ECD), electron transfer dissociation (ETD), and electron-detachment dissociation (EDD). In fact these ions types should be considered even in CID as illustrated by the example in Figure 2. (2) Design of efficient search and matching algorithms: The search space of possible disulfide topologies increases rapidly not only with the number of ion types being analyzed but also with the number of cysteines as well as the types of connectivity patterns. Thus, it is imperative to have algorithms that can accommodate the richness of the entire problem domain. (3) Automated data-driven determination of parameters: Many advanced algorithms in this area are intrinsically parametric. Often, determining the optimal value of these parameters automatically is in itself, a complex problem. This places the practitioner at a significant disadvantage. Support for automated and data-driven strategies for estimation of crucial parameters is therefore crucial to the real-world success of a method in this problem domain.\nMultiple-ion spectra analysis. 
This figure illustrates the presence of multiple ions types (in green) after CID. In the first spectrum, note the presence of bo and yo ions with high intensity in the fragmentation of the precursor ion with sequence: FFLQGIQLNTILPDAR, for the protein Lysozyme [Swiss-Prot: P11279]. In the second spectrum, a, bo, b*, and yo ions (all with high intensity) can be observed after the fragmentation of a precursor ion existing in the protein Pratelet glycoprotein 4 [Swiss-Prot P16671].\nThe contributions of this paper in context of the aforementioned challenges include: (1) Development of a highly efficient strategy for multi-ion disulfide bond analysis by considering a, b, bo, b*, c, x, y, yo, y*, and z ion types. To the best of our knowledge, this is the first algorithmic work that has considered all these ion-types in S-S bond determination. (2) A fully polynomial-time algorithm that selectively generates only those regions of the search space where the correct solutions reside with a high likelihood. (3) A multiple-regression-based data driven method to calculate the critical parameters modulating the search, so as to ensure that the correct bonding topologies are not missed due to the truncation of the search space. At the same time, the parameter selection ensures that the search is focused on the most promising regions of the search-space, and (4) A local-to-global strategy that builds a globally consistent bonding pattern based on MS data at the level of individual bonds.\nThe proposed approach also implements the probability-based scoring model proposed in [8] for each specific disulfide bond based on the number of MS/MS matches and their respective abundance. These scores reflect the significance of the specific disulfide bond and can form the basis of analysis, such as that conducted in [9], to estimate the accuracy of peptide assignment to tandem mass spectra.\nAt a high-level, the proposed approach can be thought of as a two-stage database-based matching technique (see Figure 3). From this perspective, it shares similarities with [10], where cross-linked peptides were also identified using a two-level method. During the first stage of such two-stage methods, the mass values of the theoretically possible disulfide-bonded peptide structures are compared with precursor ion mass values derived from the MS-spectra. In the second (confirmatory) stage, the theoretical spectra from the disulfide-bonded peptide structures are compared with MS/MS experimental spectra. The confirmatory step is necessary since a disulfide bonded peptide may not actually correspond to a precursor ion, even if their mass values are similar. Our approach can be used to conduct this entire search process in (a low degree) polynomial time. This paper significantly extends our prior research where we had proposed efficient indexing strategies to speed-up the search [11,12] as well as our more recent work [13], where a polynomial time approximation algorithm using hand-crafted parameters was proposed for the first stage matching.\nTwo-stage matching spectra for protein ST8SiaIV. (A) In the first-stage (DMS vs. PMS), the theoretical disulfide-bonded structure is matched with the doubly charged precursor ion with highest intensity, whose m/z = 1082.9. (B) For this initial match, the disulfide-bonded peptide pair is fragmented and the fragments are matched with the MS/MS spectrum for the precursor ion (FMS vs. 
TMS), generating a list of validation matches.", "We start the description of our method by providing, in Table 1, the key abbreviations used in the ensuing description and their respective definitions. In the first stage of the method, an Initial Match (IM) is said to be obtained when the difference between the detected mass of a targeted ion from the PMS and the calculated mass of a possible disulfide-bonded peptide structure from the DMS is found to be less than a threshold TIM. The second stage validates (or rejects) the initial matches. For each Initial Match, the validation occurs by searching for matches between product ions from the TMS and the theoretical spectra FMS. A Validation Match (VM) is said to occur when the difference between a precursor ion fragment mass from TMS and a disulfide-bonded fragment structure mass from FMS falls below a validation match threshold TVM.\nAbbreviations and their definitions\nUnfortunately, the sizes of both FMS and DMS grow exponentially. For a disulfide-bonded peptide structure consisting of k peptides, considering that there are f different fragment ion types possible, up to fk types of fragment arrangements may occur in the FMS. If the ith fragment ion consists of pi amino acid residues, then the complexity to compute the entire FMS for a disulfide-bonded peptide structure is using a brute-force approach. The DMS also grows exponentially. To understand this, let P = {p1, p2, …, pk} be the list of cysteine-containing peptides in a polypeptide chain. Further, let C = {c1, c2, …, ci} be the list of the number of cysteines per cysteine-containing peptide pi. If is the total number of cysteines in a protein, the number of possible disulfide connectivity patterns (DMS size) is [1,14]: .\n[SUBTITLE] The subset-sum formulation: towards polynomial-time matching [SUBSECTION] Given the growth characteristics of the DMS and the FMS, an exhaustive search-and-match strategy is clearly infeasible in the general case. This is especially true if multiple ion types are considered. Indexing [11,12] and filtering [15] are two possible approaches that have been considered for ameliorating this problem. In this paper we explore an alternative strategy that is based on the key insight that the entire search space (DMS or FMS) does not need to be generated to determine the matches. That is, we only want to generate the few disulfide bonded peptides whose mass is close to the (given) experimental spectra rather than generate all possible peptide combinations and subsequently testing and discarding most of these. This insight allows us to re-cast the DMS and FMS generation as instances of the subset-sum problem [16]. Recall, that given the pair (S, t), where S is a set of positive integers and t ∈ Z+, the subset-sum problem asks whether there exists a subset of S that adds up to t. While the subset-sum problem is itself NP-Complete, it can be solved using approximation strategies to obtain near-optimal solutions, in polynomial-time [16].\nGiven the growth characteristics of the DMS and the FMS, an exhaustive search-and-match strategy is clearly infeasible in the general case. This is especially true if multiple ion types are considered. Indexing [11,12] and filtering [15] are two possible approaches that have been considered for ameliorating this problem. In this paper we explore an alternative strategy that is based on the key insight that the entire search space (DMS or FMS) does not need to be generated to determine the matches. 
That is, we only want to generate the few disulfide bonded peptides whose mass is close to the (given) experimental spectra rather than generate all possible peptide combinations and subsequently testing and discarding most of these. This insight allows us to re-cast the DMS and FMS generation as instances of the subset-sum problem [16]. Recall, that given the pair (S, t), where S is a set of positive integers and t ∈ Z+, the subset-sum problem asks whether there exists a subset of S that adds up to t. While the subset-sum problem is itself NP-Complete, it can be solved using approximation strategies to obtain near-optimal solutions, in polynomial-time [16].\n[SUBTITLE] Polynomial time DMS mass list construction [SUBSECTION] Our strategy lies in obtaining an approximate solution to the subset-sum problem by trimming as many elements from DMS as possible based on a parameter ε. To trim the DMS set by ε means to remove as many elements from DMS as possible such that if DMS* is the resultant trimmed set, then for every element DMSi removed from DMS, there will remain an element DMSi* in DMS* which is “sufficiently” close in terms of its mass to the deleted element DMSi. Specifically,(1)\nThe approximation algorithm for creating the partial DMS is described by the APPROX-DMS and TRIM routines (Figure 4). APPROX-DMS takes the following parameters: (1) a sorted list of cysteine-containing peptides mass values (CCP), (2) a target mass value from the PMS list (PMSval), (3) the trimming parameter ε, and (4) the Initial Match threshold (TIM). In lines 2-8 of Figure 4, all the variables and data structures are initialized. In lines 9-11, the theoretical disulfide-bonded peptide structures are formed and stored in a temporary set called TempSet. Line 10 excludes values greater than the PMSval plus a constant corresponding to the Initial Match threshold. The rationale behind this threshold is explained in the following section. Line 12 increments the DMS by invoking the routine MERGE, which returns a sorted set formed by merging the two sorted input sets DMS and TempSet, with duplicated values removed. In line 13, the TRIM routine is called to shorten the DMS set. Lines 14-15 examine if the largest mass value in the constructed DMS set is sufficiently close to the targeted mass PMSval. If so, an Initial Match occurs.\nPseudo code for APROX-DMS and TRIM routines\nTable 2 presents an example showing the effectiveness of the APROX-DMS. In this specific case, 37.5% of the entire search space (all feasible combinations of cysteine-containing peptides) was successfully trimmed, while ensuring that the correct IM was not missed. Another example illustrating the action of APPROX-DMS on the Beta-LG protein is available as supplemental information (see Additional File 1).\nRunning APROX-DMS on the ST8SiaIV C142-C292bond\nCCP: the mass values of all cysteine-containing peptides. PMSval: a disulfide-bonded precursor ion mass. TrimSet: all the disulfide-bonded structures trimmed from the set of feasible combinations of cysteine-containing peptides. For this example, 37.5% of the structures were trimmed and the correct IM was found.\nThe complexity of both routines MERGE and TRIM is O(|DMS|+|TempSet|) and O(|DMS|), respectively. Further, for any fixed ε > 0, our algorithm is a (1 + ε)-approximation scheme. That is, for any fixed ε > 0, the algorithm runs in polynomial time. 
The proof of the polynomial time complexity of APPROX-DMS can be obtained by direct analogy to the proof of the polynomial time complexity of the subset sum approximation algorithm from [16] and is outlined in Appendix A.\nOur strategy lies in obtaining an approximate solution to the subset-sum problem by trimming as many elements from DMS as possible based on a parameter ε. To trim the DMS set by ε means to remove as many elements from DMS as possible such that if DMS* is the resultant trimmed set, then for every element DMSi removed from DMS, there will remain an element DMSi* in DMS* which is “sufficiently” close in terms of its mass to the deleted element DMSi. Specifically,(1)\nThe approximation algorithm for creating the partial DMS is described by the APPROX-DMS and TRIM routines (Figure 4). APPROX-DMS takes the following parameters: (1) a sorted list of cysteine-containing peptides mass values (CCP), (2) a target mass value from the PMS list (PMSval), (3) the trimming parameter ε, and (4) the Initial Match threshold (TIM). In lines 2-8 of Figure 4, all the variables and data structures are initialized. In lines 9-11, the theoretical disulfide-bonded peptide structures are formed and stored in a temporary set called TempSet. Line 10 excludes values greater than the PMSval plus a constant corresponding to the Initial Match threshold. The rationale behind this threshold is explained in the following section. Line 12 increments the DMS by invoking the routine MERGE, which returns a sorted set formed by merging the two sorted input sets DMS and TempSet, with duplicated values removed. In line 13, the TRIM routine is called to shorten the DMS set. Lines 14-15 examine if the largest mass value in the constructed DMS set is sufficiently close to the targeted mass PMSval. If so, an Initial Match occurs.\nPseudo code for APROX-DMS and TRIM routines\nTable 2 presents an example showing the effectiveness of the APROX-DMS. In this specific case, 37.5% of the entire search space (all feasible combinations of cysteine-containing peptides) was successfully trimmed, while ensuring that the correct IM was not missed. Another example illustrating the action of APPROX-DMS on the Beta-LG protein is available as supplemental information (see Additional File 1).\nRunning APROX-DMS on the ST8SiaIV C142-C292bond\nCCP: the mass values of all cysteine-containing peptides. PMSval: a disulfide-bonded precursor ion mass. TrimSet: all the disulfide-bonded structures trimmed from the set of feasible combinations of cysteine-containing peptides. For this example, 37.5% of the structures were trimmed and the correct IM was found.\nThe complexity of both routines MERGE and TRIM is O(|DMS|+|TempSet|) and O(|DMS|), respectively. Further, for any fixed ε > 0, our algorithm is a (1 + ε)-approximation scheme. That is, for any fixed ε > 0, the algorithm runs in polynomial time. The proof of the polynomial time complexity of APPROX-DMS can be obtained by direct analogy to the proof of the polynomial time complexity of the subset sum approximation algorithm from [16] and is outlined in Appendix A.\n[SUBTITLE] Parameters estimation [SUBSECTION] APPROX-DMS depends on two important parameters, namely, the match threshold TIM and the trimming parameter ε. The match threshold is responsible for defining a “matching window”. This is necessary due to practical considerations such as the sensitivity of the instrument (i.e. 0.01Da, 0.1Da, and 1.0Da) and experimental noise, due to which an exact match is a rarity. 
We conducted an empirical study by using different values of TIM for all our datasets. Based on the results, the TIM value of ±1.0Da was found to minimize missing matches as well as the occurrence of false positives. Considering the smallest precursor ion mass involved, in these studies, the above value of TIM guaranteed a matching accuracy of 99.86%.\nThe second parameter ε is much more important as it is crucial to the running time of the algorithm and its accuracy as evident from Eq. (1). To determine ε, we note that it is inversely proportional to the algorithm’s running time. However, a large value of ε would cause meaningful fragments to be left out of the DMS. At the same time, a small value for ε will lead to few data points being trimmed. Thus “guessing” appropriate values of ε can be complicated and suboptimal choices can significantly impact the quality of the results. We address the problem of data-driven estimation of ε using a regression framework where ε is treated as a dependent variable and based on the data, a functional relationship is obtained between it and the other (independent) variables. We model this functional relationship using the following independent variables: (1) the cysteine-containing peptides (CCP) mass range defined by CCPmax and CCPmin corresponding to the peptides with the highest and lowest mass respectively. (2) The number of cysteine-containing peptides k. A large k implies that the average difference in the mass of any two peptide fragments is small. Conversely, a small k implies fewer fragments with putatively larger differences in their masses. (3) The cysteine-containing peptides average mass value CCPaverage. The relationship between ε and these other variables is then obtained using multiple-variable regression. In our studies, the data for the regression was obtained using bootstrapping where groups of four proteins were randomly picked from the set of 9 proteins available to us. The functional relationship defining ε was obtained to be:(2)\nAPPROX-DMS depends on two important parameters, namely, the match threshold TIM and the trimming parameter ε. The match threshold is responsible for defining a “matching window”. This is necessary due to practical considerations such as the sensitivity of the instrument (i.e. 0.01Da, 0.1Da, and 1.0Da) and experimental noise, due to which an exact match is a rarity. We conducted an empirical study by using different values of TIM for all our datasets. Based on the results, the TIM value of ±1.0Da was found to minimize missing matches as well as the occurrence of false positives. Considering the smallest precursor ion mass involved, in these studies, the above value of TIM guaranteed a matching accuracy of 99.86%.\nThe second parameter ε is much more important as it is crucial to the running time of the algorithm and its accuracy as evident from Eq. (1). To determine ε, we note that it is inversely proportional to the algorithm’s running time. However, a large value of ε would cause meaningful fragments to be left out of the DMS. At the same time, a small value for ε will lead to few data points being trimmed. Thus “guessing” appropriate values of ε can be complicated and suboptimal choices can significantly impact the quality of the results. We address the problem of data-driven estimation of ε using a regression framework where ε is treated as a dependent variable and based on the data, a functional relationship is obtained between it and the other (independent) variables. 
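Equation (2) is the fitted relationship itself and its coefficients are not reproduced here. As a rough illustration of how such a data-driven estimate can be obtained, the sketch below fits ε by ordinary least squares on bootstrapped protein samples using the three predictors described above; the linear form, the feature construction, and the use of numpy.linalg.lstsq are assumptions made for illustration, not the authors' procedure.

```python
import numpy as np

def fit_epsilon_model(samples):
    """Least-squares fit of eps as a function of the predictors described in the
    text: the CCP mass range (CCPmax - CCPmin), the number of cysteine-containing
    peptides k, and the average CCP mass.

    samples -- iterable of (ccp_masses, observed_eps) pairs, e.g. drawn by
               bootstrapping groups of proteins with a known good eps.
    Returns the coefficient vector (intercept, range, k, average)."""
    X, y = [], []
    for ccp_masses, eps in samples:
        masses = np.asarray(ccp_masses, dtype=float)
        X.append([1.0,                           # intercept
                  masses.max() - masses.min(),   # CCP mass range
                  len(masses),                   # k
                  masses.mean()])                # CCPaverage
        y.append(eps)
    coeffs, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)
    return coeffs

def predict_epsilon(coeffs, ccp_masses):
    """Evaluate the fitted relationship for a new protein's CCP mass list."""
    masses = np.asarray(ccp_masses, dtype=float)
    features = np.array([1.0, masses.max() - masses.min(), len(masses), masses.mean()])
    return float(features @ coeffs)
```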
We model this functional relationship using the following independent variables: (1) the cysteine-containing peptides (CCP) mass range defined by CCPmax and CCPmin corresponding to the peptides with the highest and lowest mass respectively. (2) The number of cysteine-containing peptides k. A large k implies that the average difference in the mass of any two peptide fragments is small. Conversely, a small k implies fewer fragments with putatively larger differences in their masses. (3) The cysteine-containing peptides average mass value CCPaverage. The relationship between ε and these other variables is then obtained using multiple-variable regression. In our studies, the data for the regression was obtained using bootstrapping where groups of four proteins were randomly picked from the set of 9 proteins available to us. The functional relationship defining ε was obtained to be:(2)\n[SUBTITLE] Polynomial time FMS construction [SUBSECTION] In creating the FMS, a strategy similar to the one used for generating the DMS can be used. This involves using an approximation algorithm, this time, to generate the theoretical spectra for all the IMs found during the first-stage matching. We define another trimming parameter δ to trim the FMS mass list. It can be expected that the functional form of δ depends on the fragments mass range, as well as their granularity (extent to which fragments are broken down into smaller ions). In a manner similar to the case for estimating ε, we used regression to obtain the specific functional form for the dependent variable δ in terms of the variables AAmax (the largest amino acid residue mass), AAmin (the smallest amino acid residue mass), AAaverage (the average amino acid residues mass), and ||p|| (average number of amino acid residues per fragment). Bootstrapping was once again utilized, resulting in the relationship shown in Eq. (3).(3)\nThe pseudocode of the APPROX-FMS procedure used for generating the FMS is shown in Figure 5. The function GENFRAGS(.), in line 7, generates multiple fragment ions (a, b, bo, b*, c, x, y, yo, y*, and z) for peptide sequences in Pepsequences, which contains the disulfide-bonded peptides involved in the IM being analyzed. Next, for each element in the FMS and for each fragment in the FragSet (lines 8-11), new disulfide-bonded peptide fragment structures are formed. Line 10 rejects values greater than the TMSval, considering the Validation Match threshold. In line 12, the current FMS set is combined with the disulfide-bonded peptide fragments set TempSet using MERGE. In line 13, the FMS is trimmed using the TRIM routine. Lastly, a Validation Match VM is declared (lines 14-15) when a correspondence is found between the mass of the largest value in FMS and an experimentally determined mass value TMSval, given a Validation Match threshold.\nPseudo code for APROX-FMS routine\nIn creating the FMS, a strategy similar to the one used for generating the DMS can be used. This involves using an approximation algorithm, this time, to generate the theoretical spectra for all the IMs found during the first-stage matching. We define another trimming parameter δ to trim the FMS mass list. It can be expected that the functional form of δ depends on the fragments mass range, as well as their granularity (extent to which fragments are broken down into smaller ions). 
In a manner similar to the case for estimating ε, we used regression to obtain the specific functional form for the dependent variable δ in terms of the variables AAmax (the largest amino acid residue mass), AAmin (the smallest amino acid residue mass), AAaverage (the average amino acid residues mass), and ||p|| (average number of amino acid residues per fragment). Bootstrapping was once again utilized, resulting in the relationship shown in Eq. (3).(3)\nThe pseudocode of the APPROX-FMS procedure used for generating the FMS is shown in Figure 5. The function GENFRAGS(.), in line 7, generates multiple fragment ions (a, b, bo, b*, c, x, y, yo, y*, and z) for peptide sequences in Pepsequences, which contains the disulfide-bonded peptides involved in the IM being analyzed. Next, for each element in the FMS and for each fragment in the FragSet (lines 8-11), new disulfide-bonded peptide fragment structures are formed. Line 10 rejects values greater than the TMSval, considering the Validation Match threshold. In line 12, the current FMS set is combined with the disulfide-bonded peptide fragments set TempSet using MERGE. In line 13, the FMS is trimmed using the TRIM routine. Lastly, a Validation Match VM is declared (lines 14-15) when a correspondence is found between the mass of the largest value in FMS and an experimentally determined mass value TMSval, given a Validation Match threshold.\nPseudo code for APROX-FMS routine\n[SUBTITLE] Determining the globally consistent bond topology [SUBSECTION] Once all the Initial Matches and Validation Matches are calculated, we have a “local” (putative bond-level) view of the possible disulfide connectivity. This local information needs to be integrated to obtain a globally consistent view. Our approach to this problem is motivated by Fariselli and Casadio [14]. Specifically, we model the location of the putative disulfide bonds by edges in an undirected graph G (V, E), where the set of vertices V corresponds to the set of cysteines. To each edge, we assign a match score. This score represents the combined importance of each single peak match within two spectra. Each specific peak match is weighted according to its intensity. The match score is given by:(4)\nIn Eq. (4), the numerator corresponds to the sum of each validation match for a disulfide bond multiplied by the matched MS/MS fragment normalized intensity value (IN). Here, VMi is a binary value which is set to 1 if a confirmatory match was found for fragment i. The denominator similarly contains the sum of each experimental MS/MS fragment ion from TMS multiplied by IN. Here, TMSi is a binary variable which indicates the presence of a fragment i in the MS/MS spectrum.\nNext, the globally consistent bond topology is found by solving the maximum weight matching problem for the graph G. A matching M in the graph G is a set of pair-wise non-adjacent edges; that is, two edges do not share a common vertex. A maximum weight matching is defined as a matching M that contains the largest possible sum of the weights (match scores) of each possible edge (disulfide bond). We use the Gabow algorithm [17], as implemented in [18] for computing the maximum weight match.\nOnce all the Initial Matches and Validation Matches are calculated, we have a “local” (putative bond-level) view of the possible disulfide connectivity. This local information needs to be integrated to obtain a globally consistent view. Our approach to this problem is motivated by Fariselli and Casadio [14]. 
Specifically, we model the location of the putative disulfide bonds by edges in an undirected graph G (V, E), where the set of vertices V corresponds to the set of cysteines. To each edge, we assign a match score. This score represents the combined importance of each single peak match within two spectra. Each specific peak match is weighted according to its intensity. The match score is given by:(4)\nIn Eq. (4), the numerator corresponds to the sum of each validation match for a disulfide bond multiplied by the matched MS/MS fragment normalized intensity value (IN). Here, VMi is a binary value which is set to 1 if a confirmatory match was found for fragment i. The denominator similarly contains the sum of each experimental MS/MS fragment ion from TMS multiplied by IN. Here, TMSi is a binary variable which indicates the presence of a fragment i in the MS/MS spectrum.\nNext, the globally consistent bond topology is found by solving the maximum weight matching problem for the graph G. A matching M in the graph G is a set of pair-wise non-adjacent edges; that is, two edges do not share a common vertex. A maximum weight matching is defined as a matching M that contains the largest possible sum of the weights (match scores) of each possible edge (disulfide bond). We use the Gabow algorithm [17], as implemented in [18] for computing the maximum weight match.", "Given the growth characteristics of the DMS and the FMS, an exhaustive search-and-match strategy is clearly infeasible in the general case. This is especially true if multiple ion types are considered. Indexing [11,12] and filtering [15] are two possible approaches that have been considered for ameliorating this problem. In this paper we explore an alternative strategy that is based on the key insight that the entire search space (DMS or FMS) does not need to be generated to determine the matches. That is, we only want to generate the few disulfide bonded peptides whose mass is close to the (given) experimental spectra rather than generate all possible peptide combinations and subsequently testing and discarding most of these. This insight allows us to re-cast the DMS and FMS generation as instances of the subset-sum problem [16]. Recall, that given the pair (S, t), where S is a set of positive integers and t ∈ Z+, the subset-sum problem asks whether there exists a subset of S that adds up to t. While the subset-sum problem is itself NP-Complete, it can be solved using approximation strategies to obtain near-optimal solutions, in polynomial-time [16].", "Our strategy lies in obtaining an approximate solution to the subset-sum problem by trimming as many elements from DMS as possible based on a parameter ε. To trim the DMS set by ε means to remove as many elements from DMS as possible such that if DMS* is the resultant trimmed set, then for every element DMSi removed from DMS, there will remain an element DMSi* in DMS* which is “sufficiently” close in terms of its mass to the deleted element DMSi. Specifically,(1)\nThe approximation algorithm for creating the partial DMS is described by the APPROX-DMS and TRIM routines (Figure 4). APPROX-DMS takes the following parameters: (1) a sorted list of cysteine-containing peptides mass values (CCP), (2) a target mass value from the PMS list (PMSval), (3) the trimming parameter ε, and (4) the Initial Match threshold (TIM). In lines 2-8 of Figure 4, all the variables and data structures are initialized. 
In lines 9-11, the theoretical disulfide-bonded peptide structures are formed and stored in a temporary set called TempSet. Line 10 excludes values greater than the PMSval plus a constant corresponding to the Initial Match threshold. The rationale behind this threshold is explained in the following section. Line 12 increments the DMS by invoking the routine MERGE, which returns a sorted set formed by merging the two sorted input sets DMS and TempSet, with duplicated values removed. In line 13, the TRIM routine is called to shorten the DMS set. Lines 14-15 examine if the largest mass value in the constructed DMS set is sufficiently close to the targeted mass PMSval. If so, an Initial Match occurs.\nPseudo code for APROX-DMS and TRIM routines\nTable 2 presents an example showing the effectiveness of the APROX-DMS. In this specific case, 37.5% of the entire search space (all feasible combinations of cysteine-containing peptides) was successfully trimmed, while ensuring that the correct IM was not missed. Another example illustrating the action of APPROX-DMS on the Beta-LG protein is available as supplemental information (see Additional File 1).\nRunning APROX-DMS on the ST8SiaIV C142-C292bond\nCCP: the mass values of all cysteine-containing peptides. PMSval: a disulfide-bonded precursor ion mass. TrimSet: all the disulfide-bonded structures trimmed from the set of feasible combinations of cysteine-containing peptides. For this example, 37.5% of the structures were trimmed and the correct IM was found.\nThe complexity of both routines MERGE and TRIM is O(|DMS|+|TempSet|) and O(|DMS|), respectively. Further, for any fixed ε > 0, our algorithm is a (1 + ε)-approximation scheme. That is, for any fixed ε > 0, the algorithm runs in polynomial time. The proof of the polynomial time complexity of APPROX-DMS can be obtained by direct analogy to the proof of the polynomial time complexity of the subset sum approximation algorithm from [16] and is outlined in Appendix A.", "APPROX-DMS depends on two important parameters, namely, the match threshold TIM and the trimming parameter ε. The match threshold is responsible for defining a “matching window”. This is necessary due to practical considerations such as the sensitivity of the instrument (i.e. 0.01Da, 0.1Da, and 1.0Da) and experimental noise, due to which an exact match is a rarity. We conducted an empirical study by using different values of TIM for all our datasets. Based on the results, the TIM value of ±1.0Da was found to minimize missing matches as well as the occurrence of false positives. Considering the smallest precursor ion mass involved, in these studies, the above value of TIM guaranteed a matching accuracy of 99.86%.\nThe second parameter ε is much more important as it is crucial to the running time of the algorithm and its accuracy as evident from Eq. (1). To determine ε, we note that it is inversely proportional to the algorithm’s running time. However, a large value of ε would cause meaningful fragments to be left out of the DMS. At the same time, a small value for ε will lead to few data points being trimmed. Thus “guessing” appropriate values of ε can be complicated and suboptimal choices can significantly impact the quality of the results. We address the problem of data-driven estimation of ε using a regression framework where ε is treated as a dependent variable and based on the data, a functional relationship is obtained between it and the other (independent) variables. 
We model this functional relationship using the following independent variables: (1) the cysteine-containing peptides (CCP) mass range defined by CCPmax and CCPmin corresponding to the peptides with the highest and lowest mass respectively. (2) The number of cysteine-containing peptides k. A large k implies that the average difference in the mass of any two peptide fragments is small. Conversely, a small k implies fewer fragments with putatively larger differences in their masses. (3) The cysteine-containing peptides average mass value CCPaverage. The relationship between ε and these other variables is then obtained using multiple-variable regression. In our studies, the data for the regression was obtained using bootstrapping where groups of four proteins were randomly picked from the set of 9 proteins available to us. The functional relationship defining ε was obtained to be:(2)", "In creating the FMS, a strategy similar to the one used for generating the DMS can be used. This involves using an approximation algorithm, this time, to generate the theoretical spectra for all the IMs found during the first-stage matching. We define another trimming parameter δ to trim the FMS mass list. It can be expected that the functional form of δ depends on the fragments mass range, as well as their granularity (extent to which fragments are broken down into smaller ions). In a manner similar to the case for estimating ε, we used regression to obtain the specific functional form for the dependent variable δ in terms of the variables AAmax (the largest amino acid residue mass), AAmin (the smallest amino acid residue mass), AAaverage (the average amino acid residues mass), and ||p|| (average number of amino acid residues per fragment). Bootstrapping was once again utilized, resulting in the relationship shown in Eq. (3).(3)\nThe pseudocode of the APPROX-FMS procedure used for generating the FMS is shown in Figure 5. The function GENFRAGS(.), in line 7, generates multiple fragment ions (a, b, bo, b*, c, x, y, yo, y*, and z) for peptide sequences in Pepsequences, which contains the disulfide-bonded peptides involved in the IM being analyzed. Next, for each element in the FMS and for each fragment in the FragSet (lines 8-11), new disulfide-bonded peptide fragment structures are formed. Line 10 rejects values greater than the TMSval, considering the Validation Match threshold. In line 12, the current FMS set is combined with the disulfide-bonded peptide fragments set TempSet using MERGE. In line 13, the FMS is trimmed using the TRIM routine. Lastly, a Validation Match VM is declared (lines 14-15) when a correspondence is found between the mass of the largest value in FMS and an experimentally determined mass value TMSval, given a Validation Match threshold.\nPseudo code for APROX-FMS routine", "Once all the Initial Matches and Validation Matches are calculated, we have a “local” (putative bond-level) view of the possible disulfide connectivity. This local information needs to be integrated to obtain a globally consistent view. Our approach to this problem is motivated by Fariselli and Casadio [14]. Specifically, we model the location of the putative disulfide bonds by edges in an undirected graph G (V, E), where the set of vertices V corresponds to the set of cysteines. To each edge, we assign a match score. This score represents the combined importance of each single peak match within two spectra. Each specific peak match is weighted according to its intensity. The match score is given by:(4)\nIn Eq. 
(4), the numerator corresponds to the sum of each validation match for a disulfide bond multiplied by the matched MS/MS fragment normalized intensity value (IN). Here, VMi is a binary value which is set to 1 if a confirmatory match was found for fragment i. The denominator similarly contains the sum of each experimental MS/MS fragment ion from TMS multiplied by IN. Here, TMSi is a binary variable which indicates the presence of a fragment i in the MS/MS spectrum.\nNext, the globally consistent bond topology is found by solving the maximum weight matching problem for the graph G. A matching M in the graph G is a set of pair-wise non-adjacent edges; that is, two edges do not share a common vertex. A maximum weight matching is defined as a matching M that contains the largest possible sum of the weights (match scores) of each possible edge (disulfide bond). We use the Gabow algorithm [17], as implemented in [18] for computing the maximum weight match.", "The proposed method was validated utilizing experimental data obtained using a capillary liquid chromatography system coupled with a Thermo-Fisher LCQ ion trap mass spectrometer LC/ESI-MS/MS system. Details of the experimental protocols can be found in [19,20]. We used data from nine eukaryotic glycosyltransferases. These molecules and their Swiss-Prot ID were: ST8Sia IV [Q92187], Beta-lactoglobulin [P02754], FucT VII [Q11130], C2GnT-I [Q09324], Lysozyme [P00698], FT III [P21217], β1-4GalT [P08037], Aldolase [P00883], and Aspa [Q9R1T5].\nWe conducted five sets of experiments to investigate the proposed method and its efficacy. These experiments included: (1) Analysis of method’s efficiency, showing how the method successfully reduced the DMS and FMS search spaces. (2) Analysis of the effect of incorporating multiple ion types, demonstrating the importance of considering non-b/y ions in the determination of disulfide bonds. (3) Comparative analysis of the proposed method with established predictive techniques. (4) Comparative analysis of the method with MassMatrix, an established MS-based approach which can be used for determining S-S bonds. In both experiment 3 and experiment 4, the aforementioned set of glycosyltransferases and their known S-S bond topology provided us with the ground truth. (5) Analysis of the method in terms of established performance measures: Accuracy (Q2), Sensitivity (Qc), Specificity (Qnc), and Matthew’s correlation coefficient (c).\n[SUBTITLE] Analysis of efficiency of the search [SUBSECTION] One of the most important characteristics of the proposed method is its efficiency in terms of excluding significant portions of a large and rapidly expanding search space. In Table 3 we compare the size of the complete DMS (containing all the disulfide-bonded peptide structures generated for each protein) and the complete FMS (containing all the disulfide-bonded fragment ions) with the truncated DMS and FMS obtained using the proposed approach.\nDMS and FMS mass space sizes comparison\nIt may be noted that across the molecules, on an average, the proposed approach required examining about 78% of the entire DMS and only about 14% of the entire FMS. It is crucial to note that this reduction in search was achieved without impacting the accuracy and having considered all multiple fragment ion types (a, b, bo, b*, c, x, y, yo, y*, and z). The DMS decrease was less than the FMS decrease because the disulfide-bonded structures in the DMS were bigger and fewer in number and consequently dispersed across the spectra mass range. 
In Figure 6, we show the actual time taken to obtain a solution by generating the complete DMS and FMS, as well as their truncated counterparts, for each of the molecules.\nComparison of the computational time (in seconds) for the exhaustive and partial generation of DMS and FMS of the proteins from Table 3. On average there was a 49.5% decrease in time to compute the DMS and an 88.7% decrease in time to compute the FMS. The computations were carried out on an Intel T2390 1.86 GHz single-core processor with 1 GB RAM.\n[SUBTITLE] Effects of incorporating multiple ion types: a case study [SUBSECTION] In this experiment, we investigated the effect of incorporating multiple ion types (a, b, bo, b*, c, x, y, yo, y*, and z) in determining the S-S bonds, as opposed to considering only b/y-ions. By analyzing the confirmatory matches for the different disulfide bonds, we found multiple instances of combinations between b/y ions and other ion types. These combinations are available as supplemental information (see Additional File 2).\nThe consideration of multiple ion types also contributed to the method’s accuracy in terms of determining specific S-S bonds. Disulfide bonds previously missed due to their low match score could be identified when all ten ion types were considered. The tryptic-digested protein FucT VII (which underwent CID) constituted one such example. In FucT VII the bond C318-C321 was missed when considering only b/y ions (match score 29, pp=11, pp2=15). However, as shown in Figure 7, this bond was identified when multiple ion types were included (match score 100, pp=31, pp2=70). The confidence measures pp and pp2 are described in the following section. To explain this improvement we note that C318-C321 was an intra-bond involving cysteines that were close together. Consequently, CID-based fragmentation was poor and the consideration of other ion types essentially improved the signal-to-background contrast.
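Before looking at the specific ions involved in this example, it may help to see how the additional ion series arise. The sketch below enumerates singly charged a/b/y-type and neutral-loss fragment masses for a short linear peptide using standard approximate monoisotopic residue masses; it is a simplified, hypothetical illustration only (c, x, and z ions are omitted for brevity), and the theoretical spectra in this work are actually produced by the GENFRAGS(.) routine over disulfide-bonded peptide structures.

# Approximate monoisotopic residue masses (Da); values are standard but rounded.
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "C": 103.00919,
           "V": 99.06841, "L": 113.08406, "K": 128.09496, "R": 156.10111}
PROTON, H2O, NH3, CO = 1.00728, 18.01056, 17.02655, 27.99491

def fragment_ions(peptide):
    # Singly charged N-terminal (a, b, b-H2O, b-NH3) and C-terminal (y, y-H2O, y-NH3)
    # fragment m/z values for each backbone cleavage of a linear peptide.
    ions = {}
    for i in range(1, len(peptide)):
        n_mass = sum(RESIDUE[r] for r in peptide[:i])
        c_mass = sum(RESIDUE[r] for r in peptide[i:])
        b = n_mass + PROTON
        y = c_mass + H2O + PROTON
        ions[f"b{i}"] = b
        ions[f"a{i}"] = b - CO          # a ion: b minus CO
        ions[f"bo{i}"] = b - H2O        # b minus water
        ions[f"b*{i}"] = b - NH3        # b minus ammonia
        ions[f"y{len(peptide) - i}"] = y
        ions[f"yo{len(peptide) - i}"] = y - H2O
        ions[f"y*{len(peptide) - i}"] = y - NH3
    return ions

for name, mz in sorted(fragment_ions("ACGK").items()):
    print(f"{name:>4}: {mz:9.4f}")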
In this particular case, five other ion types - a4, a5, a6, bo7, y*7 - were present in the FucT VII MS/MS data besides the b ions represented in the spectrum on the right in Figure 7. In the following, we present details of how these ions contribute to the match score Vs (from Eq. (4)). We present the two cases: consideration of only b/y-ions (Eq. (5)) and consideration of multiple ion types (Eq. (6)). In the numerator we specify the contribution of each spectrum peak from Figure 7 (the ion corresponding to each VMi × IN term is shown in brackets).\nSpectra samples from tryptic-digested protein FucT VII. Spectra (m/z vs. normalized intensity) illustrating the confirmatory matches (whose intensity values were at least 10% of the maximum intensity) found for the disulfide bond between cysteines C318-C321 in protein FucT VII. The spectrum on the left shows the matches found when multiple ions were considered. The spectrum on the right shows the matches when only b/y-ions were considered.\nWe also observed that consideration of multiple ion types led to a significant increase in the match scores of the true disulfide bonds, whereas only a modest increase was noticed for false positives. This allowed us to increase the threshold we use on the match score Vs to identify high-quality matches from 30 to 80 (a 166% increase). The positive effect of this increment on the specificity of the method can be illustrated by considering the protein Aldolase. In this molecule, consideration of only b/y ions led to a false positive S-S bond identification between cysteines C135-C202 (Vs=30.8, with the original threshold of 30). However, when the multiple ion types were considered with the increased threshold on the match score, no S-S bond was found between C135-C202 (Vs=53.2, with the incremented threshold of 80).\n[SUBTITLE] Comparative studies with predictive techniques [SUBSECTION] In this experiment we compared the proposed method with three well known predictive methods: DiANNA [21], DISULFIND [22], and PreCys [23]. The results from each of the methods are shown in Table 4 along with the known disulfide bond linkages according to the Swiss-Prot knowledgebase. As can be seen, in terms of correct identifications (as well as minimizing false positives), the proposed approach outperformed all the predictive techniques.\nComparison with predictive methods\n[SUBTITLE] Comparative studies with MassMatrix [SUBSECTION] At the state of the art, MS2Assign [6] and MassMatrix [7] are two MS-based methods that can be applied to the problem of determining S-S bond connectivity. In our previous work [3], the MS2DB system developed by us was found to be comparable to MS2Assign [6], albeit in limited testing. Since the proposed method improves upon MS2DB and due to space limitations, we only present detailed comparative results with MassMatrix [7] in Table 5. As part of this experiment, for each S-S bond, in addition to the empirical match score (Eq. (4)), a probability-based scoring model proposed in [8] was implemented. This model provides two scores, called the pp and pp2 scores. The pp score evaluates whether the number of VMs could have occurred by chance. The pp2 score evaluates whether the total abundance (intensity) of the VMs could have occurred by chance. We refer the reader to [8] for a detailed description and formulae of the pp and pp2 scores.
The reader may note that the proposed method had better pp and pp2 scores when compared to MassMatrix (higher pp and pp2 scores are better, indicating smaller p-values). While the match scores (Vs) obtained with the proposed method were also higher than those obtained with MassMatrix (V*s), no inferences should be drawn as these scores are calculated differently in each of these methods. As can be seen from Table 5, every bond correctly determined by MassMatrix was also found by us. However, there were S-S bonds in C2GnT-I and Lysozyme that were found by the proposed method but not by MassMatrix.\nComparison with MassMatrix\nThe score (Vs) of each disulfide bond and the confidence scores (pp and pp2 values) are shown in brackets, respectively.\n[SUBTITLE] Quantitative assessment and analysis of the method’s performance [SUBSECTION] If the set of disulfide bonds is denoted by P and the set of cysteines not forming disulfide bonds by N, then true positive (TP) predictions occur when disulfide bonds that exist are correctly predicted. False negative (FN) predictions occur when bonds that exist are not predicted as such. Similarly, a true negative (TN) prediction correctly identifies cysteine pairs that do not form a bond. Finally, a false positive (FP) prediction incorrectly assigns a disulfide link to a pair of cysteines that are not actually bonded. Based on these definitions, we use the following four standard measures to analyze the proposed method.\nSensitivity (Qc) = TP/P (7)\nSpecificity (Qnc) = TN/N (8)\nAccuracy (Q2) = (TP + TN)/(P + N) (9)\nMatthews correlation coefficient (c) = (TP × TN − FP × FN)/√((TP + FP)(TP + FN)(TN + FP)(TN + FN)) (10)\nIn Table 6 we present the results obtained for our framework. With maximum specificity and high accuracy (98% average), the method correctly reported the connectivity for most of the proteins. The method only failed to identify three disulfide bonds.
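As a small illustration of Eqs. (7)-(10), the helper below computes the four measures from prediction counts; the example counts are invented for illustration and are not taken from Table 6.

from math import sqrt

def performance(tp, fn, tn, fp):
    # Sensitivity, specificity, accuracy and Matthews correlation coefficient
    # computed from true/false positive and negative counts (Eqs. (7)-(10)).
    p, n = tp + fn, tn + fp                        # bonded / non-bonded cysteine pairs
    qc = tp / p if p else 0.0                      # sensitivity
    qnc = tn / n if n else 0.0                     # specificity
    q2 = (tp + tn) / (p + n) if (p + n) else 0.0   # accuracy
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return qc, qnc, q2, mcc

# Hypothetical counts for one protein: 3 true bonds, 7 candidate non-bonded pairs.
print(performance(tp=3, fn=0, tn=7, fp=0))   # -> (1.0, 1.0, 1.0, 1.0)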
One intra-bond in the Beta-LG protein could not be found due to a blind spot caused by the same intra-bond, making the protein’s fragmentation difficult. A blind spot occurs when the precursor ion fragmentation produces different fragments only at the outside boundaries of the intra-disulfide bond. This can cause too few product ions to be generated; the limited information can prevent accurate determination of disulfide bonds using MS-based methods. One cross-linked bond in the FT III protein also could not be identified because this particular connectivity configuration creates a large disulfide-bonded structure, which is poorly fragmented by tandem mass spectrometry. One bond in the C2GnT-I protein could not be found, since the precursor ion cannot be formed by chymotryptic digestion, which was the digestion carried out for C2GnT-I. It is important to note that neither MassMatrix nor MS2Assign was able to identify these bonds.\nSensitivity, specificity, accuracy and Matthews correlation coefficient results for all nine proteins analyzed", "One of the most important characteristics of the proposed method is its efficiency in terms of excluding significant portions of a large and rapidly expanding search space.
In Table 3 we compare the size of the complete DMS (containing all the disulfide-bonded peptide structures generated for each protein) and the complete FMS (containing all the disulfide-bonded fragment ions) with the truncated DMS and FMS obtained using the proposed approach.\nDMS and FMS mass space sizes comparison\nIt may be noted that across the molecules, on an average, the proposed approach required examining about 78% of the entire DMS and only about 14% of the entire FMS. It is crucial to note that this reduction in search was achieved without impacting the accuracy and having considered all multiple fragment ion types (a, b, bo, b*, c, x, y, yo, y*, and z). The DMS decrease was less than the FMS decrease because the disulfide-bonded structures in the DMS were bigger and fewer in number and consequently dispersed across the spectra mass range. In Figure 6, we show the actual time taken to obtain a solution by generating the complete DMS and FMS, as well as their truncated counterparts, for each of the molecules.\nComparison of the computational time (in seconds) for the exhaustive and partial generation of DMS and FMS of the proteins from Table 3. On average there was a 49.5% decrease in time to compute the DMS and 88.7% decrease in time to compute the FMS. The computations were carried out on an Intel T2390 1.86 GHz single-core processor with 1GB RAM.", "In this experiment, we investigated the effect of incorporating multiple ion types (a, b, bo, b*, c, x, y, yo, y*, and z) in determining the S-S bonds as opposed to considering only b/y-ions. We found that multiple instances of combinations between b/y ions and other ions types occurred by analyzing the confirmatory matches for the different disulfide bonds. These combinations are available as supplemental information (see Additional File 2).\nThe consideration of multiple ion types also contributed to the method’s accuracy in terms of determining specific S-S bonds. Disulfide bonds previously missed due to their low match score could be identified when all ten different ion types were considered. The tryptic-digested protein FucT VII (which underwent CID) constituted one such example. In FucT VII the bond C318-C321 was missed when considering only b/y ions (match score 29, pp=11, pp2 =15). However, as shown in Figure 7, this bond was identified when multiple ions types were included (match score 100, pp=31, pp2=70). The confidence measures pp and pp2 are described in the following section. To explain this improvement we note that C318-C321 was an intra-bond involving cysteines that were close together. Consequently, CID-based fragmentation was poor and the consideration of other ion types essentially improved the signal-to-background contrast. In this particular case, five other ion types - a4, a5, a6, bo7, y*7 - were present in the FucT VII MS/MS data besides the b ions represented in the spectrum on the right in Figure 7. In the following, we present details of how these ions contribute to the match score Vs (from Eq. (4)). We present the two cases: consideration of only b/y-ions (Eq. (5)) and consideration of multiple ion types (Eq. (6)). In the numerator we specify the contribution of each spectrum peak from Figure 7 (the ion corresponding to each VMi × IN term is showed in brackets).(5)(6)\nSpectra samples from tryptic digested protein FucT VII. Spectra (m/z vs. 
normalized intensity) illustrating the confirmatory matches (whose intensity values were at least 10% of the maximum intensity) found for the disulfide bond between cysteines C318-C321 in protein FucT VII. The spectrum in the left shows the matches found when multiple ions were considered. The spectrum in the right shows the matches when only b/y-ions were considered.\nWe also observed that consideration of multiple ion-types led to significant increase in the match scores of the true disulfide bonds, whereas only a modest increase was noticed for false positives. This allowed us to increase the threshold we use on the match score Vs to identify high-quality matches from 30 to 80 (a 166% increase). The positive effect of this increment on the specificity of the method can be illustrated by considering the protein Aldolase. In this molecule, consideration of only b/y ions led to a false positive S-S bond identification between cysteines C135-C202 (Vs=30.8, with (original) threshold 30) However, when the multiple ions-types were considered with the (increased) threshold on the match score, no S-S bond was found between C135-C202 (Vs= 53.2, (incremented) threshold 80).", "In this experiment we compared the proposed method with three well known predictive methods DiANNA [21], DISULFIND [22], and PreCys [23]. The results from each of the methods are shown in Table 4 along with the with the known disulfide bond linkages according to the Swiss-Prot knowledgebase. As it can be seen, in terms of correct identifications (as well as minimizing false positives), the proposed approach outperformed all the predictive techniques.\nComparison with predictive methods", "At the state-of-the-art MS2Assign [6] and MassMatrix [7] are two MS-based methods that can be applied to the problem of determining S-S bond connectivity. In our previous work [3], the MS2DB system developed by us was found to be comparable to MS2Assign [6], albeit, in limited testing. Since the proposed method improves upon MS2DB and due to space limitations, we only present detailed comparative results with MassMatrix [7] in Table 5. As part of this experiment, for each S-S bond, in addition to the empirical match score (Eq. (4)), a probability based scoring model proposed in [8] was implemented. This model provided two scores called pp and pp2 scores. The pp score helps to evaluate whether the number of VMs could be a random. The pp2 score evaluates whether the total abundance (intensity) of VMs could be a random. We refer the reader to [8] for a detailed description and formulae of the pp and pp2 scores. The reader may note that the proposed method had better pp and pp2 scores when compared to MassMatrix (higher pp and pp2 scores are better, indicating smaller p-values). While the match scores (Vs) obtained with the proposed method were also higher than those obtained with MassMatrix (V*s), no inferences should be drawn as these scores are calculated differently in each of these methods. As can be seen from Table 5, every bond correctly determined by MassMatrix was also found by us. 
However, there were S-S bonds in C2GnT-I and Lysozyme that were found by the proposed method but not by MassMatrix.\nComparison with MassMatrix\nThe score (Vs) of each disulfide bond and the confidence scores (pp and pp2 values) are shown in brackets, respectively.", "If the set of disulfide bonds are denoted by P and the set of cysteines not forming disulfide bonds by N, then true positive (TP) predictions occur when disulfide bonds that exist are correctly predicted. False negative (FN) predictions occur when bonds that exist are not predicted as such. Similarly, a true negative (TN) prediction correctly identifies cysteine pairs that do not form a bond. Finally, a false positive (FP) prediction, incorrectly assigns a disulfide link to a pair of cysteines, which are not actually bonded. Based on these definitions, we use the following four standard measures to analyze the proposed method.\nSensitivity (Qc) = TP/P (7)\nSpecificity (Qnc) = TN/N (8)\nAccuracy (Q2) = TP + TN/P + N (9)\nIn Table 6 we present the results obtained for our framework. With maximum specificity and high accuracy (98% average), the method correctly reported the connectivity for most of the proteins. The method only failed to identify three disulfide bonds. One intra-bond in the Beta-LG protein could not be found due to a blind spot caused by the same intra-bond, making the protein’s fragmentation difficult. A blind spot occurs when the precursor ion fragmentation produces different fragments only at the outside boundaries of the intra-disulfide bond. This can cause too few product ions to be generated; the limited information can prevent accurate determination of disulfide bonds using MS-based methods. One cross-linked bond in the FT III protein also could not be identified because this particular connectivity configuration creates a large disulfide-bonded structure, which is poorly fragmented by tandem mass spectrometry. One bond in the C2GnT-I protein could not be found, since the precursor ion cannot be formed by chymotryptic digestion, which was the digestion carried for C2GnT-I. It is important to note that neither MassMatrix nor MS2Assign were able to identify these bonds.\nSensitivity, specificity, accuracy and Mathew’s correlation coefficient results for all nine proteins analyzed", "We have presented an algorithmic framework for determining S-S bond topologies of molecules using MS/MS data. The proposed approach is computationally efficient, data driven, and has high accuracy, sensitivity, and specificity. It is not limited either by the connectivity pattern or by the variability of product ion types generated during the fragmentation of precursor ions. Furthermore, the approach does not require user intervention and can form the basis for high-throughput S-S bond determination.", "The algorithmic solution framework was designed by RS and implemented by WM. Computational studies and experiments were carried out by WM and RS. T-YY developed the experimental protocols and generated the data. The paper was written by RS and WM.", "The authors declare that they have no competing interests.", "The proof that the proposed method is a fully polynomial approximation scheme consists of two parts. First, we need to show that each value returned by the APPROX-DMS function is within 1 + ε from the optimal solution. Second, we need to show that the running time of the method is fully polynomial. 
We refer the reader to [16] for the proof of the first part and focus in the following on analyzing the complexity of the method. To show that the method is a fully polynomial-time approximation scheme, we derive a bound on the length of a DMS set. After trimming, successive elements DMSi and DMSi+1 of DMS must satisfy DMSi+1/DMSi > 1 + ε. Therefore, each possible DMS set contains up to log1+ε PMSval values (the logarithm of PMSval to base 1 + ε). Since x/(1 + x) ≤ ln(1 + x) ≤ x and 0 < ε < 1, it can be shown that:\nlog1+ε PMSval = ln PMSval/ln(1 + ε) ≤ ((1 + ε) ln PMSval)/ε < (2 ln PMSval)/ε (11)\nAs can be seen from Eq. (11), this bound is (explicitly) polynomial in the size of the input PMSval. It is also (implicitly) polynomial in the size of the set DMS, since ε is directly proportional to the number of cysteine-containing peptides k (per Eq. (2)) and these peptides are in turn combined to form each element of the DMS. A similar argument can be made for the APPROX-FMS routine, thereby completing the proof that the proposed method is a fully polynomial-time approximation scheme.", "Action of APPROX-DMS on the protein Beta-LG. This example shows the effectiveness of the APPROX-DMS algorithm while trimming a DMS set generated for the protein Beta-LG using MS/MS data.\nCombination between b/y ions and other ion types in MS/MS data. This example shows that combinations between ion types other than just b and/or y ions do occur, even for proteins that underwent CID (CID is a dissociation method which produces mainly b/y ions)." ]
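To illustrate the trimming operation whose effect Eq. (11) bounds, the following is a minimal sketch of a TRIM step over a sorted mass list. The mass values and the choice of ε are hypothetical, and the actual APPROX-DMS/APPROX-FMS routines interleave this trimming with the MERGE and matching steps described earlier.

def trim(masses, eps):
    # Keep a sorted mass list sparse: after trimming, successive kept values
    # m[i] and m[i+1] satisfy m[i+1] > (1 + eps) * m[i], so the list length is
    # bounded by roughly log_{1+eps}(max mass), as analyzed in Eq. (11).
    if not masses:
        return []
    kept = [masses[0]]
    for m in masses[1:]:
        if m > kept[-1] * (1 + eps):   # m is not "represented" by the last kept value
            kept.append(m)
    return kept

# Hypothetical sorted candidate masses (Da) and trimming parameter.
candidates = [500.2, 500.4, 501.1, 760.3, 760.5, 1202.9, 1203.0, 1510.7]
print(trim(candidates, eps=0.001))   # -> [500.2, 501.1, 760.3, 1202.9, 1510.7]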
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Reconstructing phylogeny from metabolic substrate-product relationships.
21342557
Many approaches utilize metabolic pathway information to reconstruct the phyletic tree of fully sequenced organisms, but how metabolic networks can add information to the original genomic annotations has remained an open question.
BACKGROUND
We translated enzyme reactions assigned in 1075 organisms into substrate-product relationships to represent the metabolic information at a finer resolution than enzymes and compounds. Each organism was represented as a vector of substrate-product relationships and the phyletic tree was reconstructed by a simple hierarchical method. Obtained results were compared with several other approaches that use genome information and network properties.
METHODS
Phyletic trees reconstructed without consideration of network properties can already single out organisms living in anomalous environments. This efficient method can add insights to traditional genome-based phylogenetic reconstruction.
RESULTS
Structural relationships among metabolites can highlight parasitic or symbiotic species such as spirochaetes and chlamydia. The method assists the understanding of species-environment interactions when used in combination with traditional phylogenetic methods.
CONCLUSIONS
[ "Algorithms", "Archaea", "Bacteria", "Cluster Analysis", "Computational Biology", "Eukaryota", "Metabolic Networks and Pathways", "Phylogeny", "Software" ]
3044282
null
null
Methods
[SUBTITLE] Enzyme annotation for organisms [SUBSECTION] Enzyme annotations and corresponding EC reactions for 1075 organisms (895 bacteria, 67 archaea, and 113 eukaryotes) were obtained from the KEGG database through its application program interface. The number of EC reactions was 3116, covering as many as 154 pathway maps. Metabolic annotations in each species were represented as a set of substrate-product relationships by transforming all assigned EC reactions into a set of metabolite pairs (see the next section). Most EC-numbered entries correspond to multiple enzymatic reactions. For example, alcohol dehydrogenase (EC 1.1.1.1) can catalyze a multitude of compounds with a hydroxyl group. For such generic EC-numbered functions we manually integrated possible reactions to ensure the coverage of biochemical transformation shown in the metabolic maps. Enzyme annotations and corresponding EC reactions for 1075 organisms (895 bacteria, 67 archaea, and 113 eukaryotes) were obtained from the KEGG database through its application program interface. The number of EC reactions was 3116, covering as many as 154 pathway maps. Metabolic annotations in each species were represented as a set of substrate-product relationships by transforming all assigned EC reactions into a set of metabolite pairs (see the next section). Most EC-numbered entries correspond to multiple enzymatic reactions. For example, alcohol dehydrogenase (EC 1.1.1.1) can catalyze a multitude of compounds with a hydroxyl group. For such generic EC-numbered functions we manually integrated possible reactions to ensure the coverage of biochemical transformation shown in the metabolic maps. [SUBTITLE] Strategy for graph transformation [SUBSECTION] An enzymatic reaction usually has multiple inputs (substrates) and outputs (products). Although standard metabolic pathway charts are depicted as hypergraphs, substrate-product relationships must be specified for each reaction to transform it into a graph. A standard way is to use a fully connected bipartite graph [7,8,12]. The network connectivity then portrays the ‘reaction membership’; frequently occurring metabolites become hub nodes in the resulting graph. The representation, however, does not capture biochemical transformation between compounds because any two metabolites can be falsely linked through metabolic hubs regardless of their structures [11]. To avoid this bypassing effect, we employ the substrate-product decomposition of reactions [13]. In this scheme, each reaction is decomposed into a set of structurally related substrate-product pairs at the atomic scale. The data are also available from the RPAIR database [14], and the same method has been used in several recent works [15-17]. This representation avoids bias originating from currency metabolites. In other words, the method focuses on the variation of structural transformations, not the occurrence of each metabolite. The decomposition results of EC-numbered reactions are accessible at our wiki-based site: http://metabolomics.jp/wiki/Enzyme:[EC-number]. For example, the details of hexokinase can be accessed at http://metabolomics.jp/wiki/Enzyme:2.7.1.1 In the transformation, we replaced generic names such as alcohol or amino acids with concrete compound names. For hexokinase, as many as 15 reactions are included depending on hexose types. Through this decomposition, a set of enzymatic reactions becomes a set of substrate-product pairs. We did not consider the multiplicity of each pair in our analysis. 
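As a toy illustration of this decomposition (not the actual RPAIR-based procedure), the sketch below turns reactions annotated with their structurally related substrate-product pairs into the per-organism set of relationships used downstream; the reaction entries and organism names are hypothetical.

# Each reaction contributes only structurally related substrate-product pairs,
# so e.g. D-glucose is linked to D-glucose 6-phosphate and ATP to ADP, but
# D-glucose is never linked to ADP merely because they co-occur in one reaction.
reactions = {
    "R_hexokinase_glc": {
        "equation": "ATP + D-glucose -> ADP + D-glucose 6-phosphate",
        "main_pairs": [("D-glucose", "D-glucose 6-phosphate"), ("ATP", "ADP")],
    },
    "R_adh_ethanol": {
        "equation": "ethanol + NAD+ -> acetaldehyde + NADH",
        "main_pairs": [("ethanol", "acetaldehyde"), ("NAD+", "NADH")],
    },
}

# Hypothetical annotation: which reactions are assigned to which organism.
organism_reactions = {"org_A": ["R_hexokinase_glc", "R_adh_ethanol"],
                      "org_B": ["R_adh_ethanol"]}

def substrate_product_set(org):
    # Union of substrate-product pairs over all reactions annotated in an organism.
    pairs = set()
    for rid in organism_reactions[org]:
        pairs.update(reactions[rid]["main_pairs"])
    return pairs

print(sorted(substrate_product_set("org_A")))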
[SUBTITLE] Phyletic reconstruction [SUBSECTION] Phyletic trees were created by a hierarchical clustering method (pairwise complete linkage algorithm) using the Cluster 3.0 software program [18]. Each organism was represented as a vector of substrate-product pairs, where the absence/presence of each relationship was denoted as 0 or 1. For visualization, Dendroscope software program [19] was used to analyze and compare phyletic trees. The employed simple algorithm may be controversial for phyletic reconstruction, and will be discussed later.
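A minimal sketch of this reconstruction step, continuing the toy data above, is given below. SciPy's complete-linkage clustering stands in for Cluster 3.0, and the use of the Jaccard distance between the binary profiles is an assumption made only for this illustration, since the distance measure is not restated here.

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

# Hypothetical absence/presence (0/1) profiles over substrate-product pairs.
organisms = ["org_A", "org_B", "org_C", "org_D"]
pairs = ["glc->g6p", "atp->adp", "etoh->acald", "nad->nadh", "pyr->accoa"]
profiles = np.array([[1, 1, 1, 1, 1],
                     [0, 0, 1, 1, 1],
                     [1, 1, 0, 0, 1],
                     [0, 0, 1, 1, 0]]).astype(bool)

# Pairwise distances between the binary vectors, then complete-linkage clustering.
distances = pdist(profiles, metric="jaccard")
tree = linkage(distances, method="complete")
print(tree)   # linkage matrix; scipy.cluster.hierarchy.dendrogram(tree) draws the phyletic tree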
null
null
null
null
[ "Background", "Enzyme annotation for organisms", "Strategy for graph transformation", "Phyletic reconstruction", "Results", "Phyletic trees for multi-domains of life based on substrate-product relationships", "Phyletic trees with or without network connectivity", "Comparison with EC number-based classification", "Central metabolites", "Metabolic differences between bacteria, archaea, and eukaryotes", "Discussion", "Why can substrate-product relationships add insights?", "Algorithms to find phylogeny", "Sharing metabolic knowledge through wiki", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Understanding the phyletic relationship among living organisms has long been a fundamental challenge since the concept of evolution had emerged. Traditionally, molecular biologists constructed phylogenetic trees based on the sequence similarity of small subunit ribosomal RNA [1] or other single genes. As whole-genome sequencing technologies advance, vast amount of sequence data become available for download and analysis. Without question, the comparative analysis of whole genomes can provide more information to reconstruct the phylogeny than individual genes do. Consequently, numerous methods have been proposed to reconstruct the phylogenetic trees from whole genome features such as oligonucleotide compositions [2], genome fragment occurrence [3], and absence/presence of metabolic features [4].\nIn parallel with genomic comparisons, many studies focused on the similarity of metabolic processes. Metabolic profiles of a living organism are strongly related to its environment, and metabolism is adapted to balance compounds taken up from its surroundings [5,6]. Thus, metabolic consideration can add insights into species-environment interaction such as symbiosis or convergent adaptation to extreme environments. To analyze the phyletic relationship in metabolic capability, there are at least 3 approaches. The first is machine learning. Oh et al. used a distance computed by the exponential graph kernel, i.e., the weighted sum of similarities between adjacency matrices of 1-step neighbors, 2-step neighbors, and so on for 81 organisms [7]. The second is network comparison. Zhang et al. defined existence/absence of metabolic pathways and computed the network similarity measure for 47 organisms [8]. The last is EC-based classification. Clemente et al. used sets of EC numbers to define pathway similarity and compared metabolism of 8 bacteria [9].\nMetabolic data are well standardized in previous approaches because all works depended on the bulk-downloadable KEGG database [10]. Less concerned, however, was the strategy for transforming enzymatic reactions into graphs (or networks). Depending on the strategy, resulting networks are drastically different enough to change fundamental network centralities [11]. For example, Borenstein et al. converted each enzymatic reaction to a fully connected bipartite graph between substrates and products to enhance connectivity and defined ‘seed’ compounds for each organism as the union of essential metabolites in all environments [12]. This transformation is known to overestimate the ability to synthesize/degrade metabolites. On the other hand, using the EC numbers for pathway analysis tend to underestimate the metabolic network because the numbers are assigned to biochemical transformation, and not to enzyme themselves. We here propose a more suitable data representation, and elucidate the phylogenies across three domains of life. Its effectiveness is shown in comparison with previous approaches.", "Enzyme annotations and corresponding EC reactions for 1075 organisms (895 bacteria, 67 archaea, and 113 eukaryotes) were obtained from the KEGG database through its application program interface. The number of EC reactions was 3116, covering as many as 154 pathway maps. Metabolic annotations in each species were represented as a set of substrate-product relationships by transforming all assigned EC reactions into a set of metabolite pairs (see the next section). Most EC-numbered entries correspond to multiple enzymatic reactions. 
For example, alcohol dehydrogenase (EC 1.1.1.1) can catalyze a multitude of compounds with a hydroxyl group. For such generic EC-numbered functions we manually integrated possible reactions to ensure the coverage of biochemical transformation shown in the metabolic maps.", "An enzymatic reaction usually has multiple inputs (substrates) and outputs (products). Although standard metabolic pathway charts are depicted as hypergraphs, substrate-product relationships must be specified for each reaction to transform it into a graph. A standard way is to use a fully connected bipartite graph [7,8,12]. The network connectivity then portrays the ‘reaction membership’; frequently occurring metabolites become hub nodes in the resulting graph. The representation, however, does not capture biochemical transformation between compounds because any two metabolites can be falsely linked through metabolic hubs regardless of their structures [11].\nTo avoid this bypassing effect, we employ the substrate-product decomposition of reactions [13]. In this scheme, each reaction is decomposed into a set of structurally related substrate-product pairs at the atomic scale. The data are also available from the RPAIR database [14], and the same method has been used in several recent works [15-17]. This representation avoids bias originating from currency metabolites. In other words, the method focuses on the variation of structural transformations, not the occurrence of each metabolite. The decomposition results of EC-numbered reactions are accessible at our wiki-based site: http://metabolomics.jp/wiki/Enzyme:[EC-number]. For example, the details of hexokinase can be accessed at http://metabolomics.jp/wiki/Enzyme:2.7.1.1 In the transformation, we replaced generic names such as alcohol or amino acids with concrete compound names. For hexokinase, as many as 15 reactions are included depending on hexose types. Through this decomposition, a set of enzymatic reactions becomes a set of substrate-product pairs. We did not consider the multiplicity of each pair in our analysis.", "Phyletic trees were created by a hierarchical clustering method (pairwise complete linkage algorithm) using the Cluster 3.0 software program [18]. Each organism was represented as a vector of substrate-product pairs, where the absence/presence of each relationship was denoted as 0 or 1. For visualization, Dendroscope software program [19] was used to analyze and compare phyletic trees. The employed simple algorithm may be controversial for phyletic reconstruction, and will be discussed later.", "We compared results of our data representation with several recent, well known studies.\n[SUBTITLE] Phyletic trees for multi-domains of life based on substrate-product relationships [SUBSECTION] To compare with the phylogeny reconstruction based on the ‘seed’ metabolites [12], we reconstructed a phyletic tree for 478 species (the same number as the original article) using our substrate-product pairs. Figure 1 shows the summarized view of both trees of life. Both approaches clustered 6 main domains successfully, but the seed approach placed plants and fungi among bacteria. This is a serious artifact; since the seed approach focuses on essential metabolites, classification based on secondary metabolites becomes unstable. In both trees, a few seemingly dispersed clades (protists in eukaryota) existed. This is reasonable because the definition of protist is a structural simplicity regardless of its metabolic capability. 
Note here that our method correctly classified eukaryotes and also placed spirochaeta and chlamydia in a group separated from the other bacteria. This indicates these parasitic/pathogenic species exhibit anomalous metabolism in comparison with the other species, but further investigation is necessary to confirm its reason.\nReconstructed phlogeny. Left: Our reconstruction. Species in the same family were grouped into leaf nodes. Right: Reconstruction by Borenstein et al. [12] using the ‘seed’ metabolites. Reprinted with permission. Copyright (2008) National Academy of Sciences, U.S.A. Abbreviations: Bac, bacteria (orange); Arc, archaea (cyan); Pla, plants (light green); Ani, animals (navy blue); Fun, fungi (dark green); Pro, protists (light purple).\nAs the second comparison, our approach was compared with the golden standard tree, reconstructed by using concatenated alignment of 31 universal protein families covering 191 species [20] (Figure 2). Our method could clearly separate three main domains, bacteria, archaea and metazoan, except Nanoarchaeum equitans, which is an obligatory symbiont on Ignicoccus. It lacks many essential metabolic pathways and therefore became an orphan branch in our reconstruction. Similarly, the reconstruction reflected more on metabolic phenotypes rather than genetic evolution. For example, Mycoplasma spp. were located far from the other bacteria and closer to eukaryotes in our tree because they lack many metabolic pathways (higher animals lack many amino acid biosynthesis, for example). This defect was also observed in the comparison with the ‘seed’-based tree. Some invertebrate parasites were also grouped with Caenorhabditis elegans due to their metabolic similarity of unknown reason. Note that systematics of C. elegans is contentious and still unresolved because of its high evolutionary rate [21]. In summary, our method could reproduce comparable results with the standard tree. In addition, it could extract metabolically anomalous species which could not be easily found by simple genetic comparisons by comparing results with the standard phylogeny.\nReconstructed phylogeny. Left: Our reconstruction. Right: Reconstruction by Ciccarelli et al. [20] using the 31 universal protein families. Reprinted with permission from AAAS. Bacteria (purple), Archaea (green), thermo Archaea (yellow green), Metazoa (pink).\nTo compare with the phylogeny reconstruction based on the ‘seed’ metabolites [12], we reconstructed a phyletic tree for 478 species (the same number as the original article) using our substrate-product pairs. Figure 1 shows the summarized view of both trees of life. Both approaches clustered 6 main domains successfully, but the seed approach placed plants and fungi among bacteria. This is a serious artifact; since the seed approach focuses on essential metabolites, classification based on secondary metabolites becomes unstable. In both trees, a few seemingly dispersed clades (protists in eukaryota) existed. This is reasonable because the definition of protist is a structural simplicity regardless of its metabolic capability. Note here that our method correctly classified eukaryotes and also placed spirochaeta and chlamydia in a group separated from the other bacteria. This indicates these parasitic/pathogenic species exhibit anomalous metabolism in comparison with the other species, but further investigation is necessary to confirm its reason.\nReconstructed phlogeny. Left: Our reconstruction. Species in the same family were grouped into leaf nodes. 
Right: Reconstruction by Borenstein et al. [12] using the ‘seed’ metabolites. Reprinted with permission. Copyright (2008) National Academy of Sciences, U.S.A. Abbreviations: Bac, bacteria (orange); Arc, archaea (cyan); Pla, plants (light green); Ani, animals (navy blue); Fun, fungi (dark green); Pro, protists (light purple).\nAs the second comparison, our approach was compared with the golden standard tree, reconstructed by using concatenated alignment of 31 universal protein families covering 191 species [20] (Figure 2). Our method could clearly separate three main domains, bacteria, archaea and metazoan, except Nanoarchaeum equitans, which is an obligatory symbiont on Ignicoccus. It lacks many essential metabolic pathways and therefore became an orphan branch in our reconstruction. Similarly, the reconstruction reflected more on metabolic phenotypes rather than genetic evolution. For example, Mycoplasma spp. were located far from the other bacteria and closer to eukaryotes in our tree because they lack many metabolic pathways (higher animals lack many amino acid biosynthesis, for example). This defect was also observed in the comparison with the ‘seed’-based tree. Some invertebrate parasites were also grouped with Caenorhabditis elegans due to their metabolic similarity of unknown reason. Note that systematics of C. elegans is contentious and still unresolved because of its high evolutionary rate [21]. In summary, our method could reproduce comparable results with the standard tree. In addition, it could extract metabolically anomalous species which could not be easily found by simple genetic comparisons by comparing results with the standard phylogeny.\nReconstructed phylogeny. Left: Our reconstruction. Right: Reconstruction by Ciccarelli et al. [20] using the 31 universal protein families. Reprinted with permission from AAAS. Bacteria (purple), Archaea (green), thermo Archaea (yellow green), Metazoa (pink).\n[SUBTITLE] Phyletic trees with or without network connectivity [SUBSECTION] To investigate the information gain by considering metabolic network connectivity, we carefully compared our approach with the network topology-based approach [8]. There are few discrepancies between our and their results. In our approach, some proteobacteria and hyperthermophils were not properly grouped into the same sub-clusters (Figure 3). These clades are labeled as “other independent bacteria” and their proper positions are context-dependent. For this reason, we do not consider our classification inappropriate. On the other hand, we could correctly cluster Mycobacterium tuberculosis and M. leprae into Gram-positive bacteria. In addition, parasites and symbionts (spirochaete and clamydia) were classified more correctly in our method. In summary, although overall classification was similar, we could better, or at least equally, classify parasitic or symbiotic species in comparison with the results with another phyletic approach.\nReconstructed phylogeny. Left: Our reconstruction. Right: Reconstruction by Zhang et al.[8]. Reprinted under the BioMed Central Open License agreement (BMC Bioinformatics).\nTo investigate the information gain by considering metabolic network connectivity, we carefully compared our approach with the network topology-based approach [8]. There are few discrepancies between our and their results. In our approach, some proteobacteria and hyperthermophils were not properly grouped into the same sub-clusters (Figure 3). 
These clades are labeled as “other independent bacteria” and their proper positions are context-dependent. For this reason, we do not consider our classification inappropriate. On the other hand, we could correctly cluster Mycobacterium tuberculosis and M. leprae into Gram-positive bacteria. In addition, parasites and symbionts (spirochaete and clamydia) were classified more correctly in our method. In summary, although overall classification was similar, we could better, or at least equally, classify parasitic or symbiotic species in comparison with the results with another phyletic approach.\nReconstructed phylogeny. Left: Our reconstruction. Right: Reconstruction by Zhang et al.[8]. Reprinted under the BioMed Central Open License agreement (BMC Bioinformatics).\n[SUBTITLE] Comparison with EC number-based classification [SUBSECTION] Clemente et al. investigated the relationship among 8 photosynthetic bacteria using pseudo-alignment of over 60 metabolic pathways using the EC hierarchy [9]. Lastly, we compared their results with ours and found 2 differences from the EC-based phylogeny (Figure 4): the positions of Synechocystis (syn) and Synechococcus (syw), both of which belong to Chroococcales together with Thermosynechococcus elongates (tel). The misplacement of Chroococcales was observed in the work by Clemente et al. too and presumably results from the insufficiency of gene annotations in these species (Figure 4). In terms of metabolic similarity, our reconstruction seems more accurate because Gloeobacter violaceus (gvi) and tel were isolated from rocks and hot springs, respectively, whereas the remaining 6 species were isolated from fresh or sea water. Therefore, the two species should be regarded as metabolic out-groups as in our classification.\nReconstructed phylogeny. Left: Our reconstruction. Abbreviations: Anabaena (ana), Gloeobacter violaceus (gvi), Prochlorococcus marinus marinus (pma), P. marinus pastoris (pmm), P. marinus (pmt), Synechocystis (syn), Synechococcus (syw), Thermosynechococcus elongatus (tel). Most separated are the two photosynthetic eukaryotes: Arabidopsis thaliana (ath) and Cyanidioschyzon merolae (cme). Right: EC-number based classification by Clemente et al. [9] Reprinted with permission from Oxford University Press.\nClemente et al. investigated the relationship among 8 photosynthetic bacteria using pseudo-alignment of over 60 metabolic pathways using the EC hierarchy [9]. Lastly, we compared their results with ours and found 2 differences from the EC-based phylogeny (Figure 4): the positions of Synechocystis (syn) and Synechococcus (syw), both of which belong to Chroococcales together with Thermosynechococcus elongates (tel). The misplacement of Chroococcales was observed in the work by Clemente et al. too and presumably results from the insufficiency of gene annotations in these species (Figure 4). In terms of metabolic similarity, our reconstruction seems more accurate because Gloeobacter violaceus (gvi) and tel were isolated from rocks and hot springs, respectively, whereas the remaining 6 species were isolated from fresh or sea water. Therefore, the two species should be regarded as metabolic out-groups as in our classification.\nReconstructed phylogeny. Left: Our reconstruction. Abbreviations: Anabaena (ana), Gloeobacter violaceus (gvi), Prochlorococcus marinus marinus (pma), P. marinus pastoris (pmm), P. marinus (pmt), Synechocystis (syn), Synechococcus (syw), Thermosynechococcus elongatus (tel). 
Most separated are the two photosynthetic eukaryotes: Arabidopsis thaliana (ath) and Cyanidioschyzon merolae (cme). Right: EC-number based classification by Clemente et al. [9] Reprinted with permission from Oxford University Press.\n[SUBTITLE] Central metabolites [SUBSECTION] We previously argued that metabolic hubs are better identified in the substrate-product graph than in other graph representations, because the approach does not count the frequency of metabolite names in reactions but the number of structural transformations [11]. The number of transformations roughly reflects the structural variation of catalytic sites of respective enzymes, and therefore reflects the diversity of metabolic capabilities.\nTable 1 is the list of metabolites in the three domains which appear as the top 10 hubs in more than 20% organisms for each domain. The abundance of adenosine-related metabolites for all domains indicates the ancientry of purine-related metabolism, which coincides with the analysis on protein structures [22]. The presence of CO2 and NH3 is an unavoidable artifact of counting all decarboxylations and amino-transfers. High-degree metabolites are largely conserved. It can be seen that eukaryotes contain more reactions with glucuronate, glutathione, and galactose, which appear in drug metabolism. At the same time, eukaryotes use less L-aspartate- or 5-phospho-alpha-D-ribose 1-phosphate-dependent reactions. Archaea lack malonyl-acyl carrier proteins and coenzyme A, which often appear in lipid metabolism for eukaryotes and bacteria. Archaea also use L-glutamine more often than the other domains.\nMost differently transforming metabolites in the three domains. The full list is available at http://sarst.life.nthu.edu.tw/metabolic/SD.csv.\nHighlighted metabolites are mentioned in the main text. Abbreviations: P … phosphate; ACP … acyl carrier proteins.\n\nWe previously argued that metabolic hubs are better identified in the substrate-product graph than in other graph representations, because the approach does not count the frequency of metabolite names in reactions but the number of structural transformations [11]. The number of transformations roughly reflects the structural variation of catalytic sites of respective enzymes, and therefore reflects the diversity of metabolic capabilities.\nTable 1 is the list of metabolites in the three domains which appear as the top 10 hubs in more than 20% organisms for each domain. The abundance of adenosine-related metabolites for all domains indicates the ancientry of purine-related metabolism, which coincides with the analysis on protein structures [22]. The presence of CO2 and NH3 is an unavoidable artifact of counting all decarboxylations and amino-transfers. High-degree metabolites are largely conserved. It can be seen that eukaryotes contain more reactions with glucuronate, glutathione, and galactose, which appear in drug metabolism. At the same time, eukaryotes use less L-aspartate- or 5-phospho-alpha-D-ribose 1-phosphate-dependent reactions. Archaea lack malonyl-acyl carrier proteins and coenzyme A, which often appear in lipid metabolism for eukaryotes and bacteria. Archaea also use L-glutamine more often than the other domains.\nMost differently transforming metabolites in the three domains. The full list is available at http://sarst.life.nthu.edu.tw/metabolic/SD.csv.\nHighlighted metabolites are mentioned in the main text. 
Metabolic differences between bacteria, archaea, and eukaryotes

To elucidate the metabolic differences between the three domains of life, we created a heat map of the substrate-product relationships in 535 species. In Figure 5, the vertical and horizontal directions correspond to the hierarchically clustered organisms and the substrate-product relationships, respectively. Note that substrate-product relationships belonging to species-specific pathways tend to cluster in this scheme. Archaea and Mycoplasma lack fatty acid biosynthesis and many other pathways. However, many archaeal pathways are overlooked in the KEGG annotations (e.g. energy metabolism and ether-lipid metabolism for membrane synthesis), so their uniqueness is not easily discerned in this analysis. In contrast, the Plantae and Animalia kingdoms among the eukaryotes are easy to locate because animals possess drug and other secondary metabolism, and plants possess unique secondary biosynthetic pathways (Figure 5).

Figure 5. The heat map of substrate-product relationships in 535 organisms. The horizontal lines at the topmost right part correspond to animal- and plant-specific pathways (the rightmost line is plants; animals are the next line just below plants). The black horizontal line just below the eukaryotes (plants and animals) is Mycoplasma, which lack most pathways. Archaea are clustered at the bottom of the figure.
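A minimal sketch of this kind of visualization, assuming the absence/presence profiles are already assembled into a binary matrix, is given below. The matrix here is filled with random placeholder values rather than our 535-organism data, and the clustering/plotting calls (SciPy complete linkage on Jaccard distances, matplotlib imshow) are only one straightforward way to produce such a figure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list
import matplotlib.pyplot as plt

# Toy absence/presence matrix: rows = organisms, columns = substrate-product
# pairs.  The real matrix has 535 rows and one column per observed
# transformation; the values below are random placeholders.
rng = np.random.default_rng(0)
n_organisms, n_pairs = 20, 60
matrix = rng.integers(0, 2, size=(n_organisms, n_pairs)).astype(bool)

# Cluster organisms (rows) and transformations (columns) independently with
# complete linkage on the Jaccard distance between binary profiles.
row_order = leaves_list(linkage(matrix, method="complete", metric="jaccard"))
col_order = leaves_list(linkage(matrix.T, method="complete", metric="jaccard"))

# Reorder the matrix so that similar organisms and co-occurring
# transformations end up next to each other, then draw the heat map.
plt.imshow(matrix[np.ix_(row_order, col_order)], aspect="auto",
           cmap="Greys", interpolation="nearest")
plt.xlabel("substrate-product relationships")
plt.ylabel("organisms")
plt.savefig("heatmap.png", dpi=150)
```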
Discussion

Our reconstruction using substrate-product relationships efficiently extracted metabolically interesting species when compared with the standard phylogenetic approach. Previous approaches that used metabolic information could also produce informative results [7-9,12], but their findings were similar to those obtained by genetic comparisons [2-4]. This is understandable because, in those approaches, metabolic reactions correspond roughly one-to-one to enzymes or genes.

Why can substrate-product relationships add insights?

Our approach is more robust to pathway gaps (incomplete annotation) and currency metabolites because it evaluates each biochemical transformation with an equal weight. It is also robust to biases from the number of genes or their multiplicity. Standard phylogenetic methods elucidate evolutionary relationships, whereas our approach locates species with anomalous or otherwise interesting metabolism. The method is therefore useful in combination with (not as a replacement for) existing phyletic/phylogenetic clustering.

Our method is also computationally lightweight and scalable, requiring O(N²V) time to compute all pairwise similarities, where N is the number of organisms and V is the maximum number of reactions in one organism. In contrast, the exponential graph kernel, for example, requires O(NV³+N²V²) time to compute the similarity [7]. Our computational complexity is equivalent to that of the recently presented pathway alignment method [23], but that method exploits the graph topology, and its result is expected to be similar to the one by Zhang et al. [8]. Lastly, the ‘seed’ approach uses a heuristic to find metabolic seeds [12], and an accurate identification of metabolic seeds is NP-complete [24]. There is thus a large gap in scalability between our method and the other metabolic approaches.
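The O(N²V) behaviour follows from comparing every pair of organisms once, with each comparison being a set intersection that is roughly linear in the number of transformations. The sketch below illustrates this with Jaccard distances over hypothetical substrate-product sets; the organism names and pairs are placeholders, not our annotation data.

```python
from itertools import combinations

# Hypothetical organisms, each represented as a set of substrate-product pairs.
organisms = {
    "orgA": {("ATP", "ADP"), ("glucose", "glucose-6-P"), ("pyruvate", "lactate")},
    "orgB": {("ATP", "ADP"), ("glucose", "glucose-6-P"), ("pyruvate", "acetyl-CoA")},
    "orgC": {("ATP", "ADP"), ("acetate", "acetyl-CoA")},
}

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B|; a set intersection is linear in the set sizes."""
    union = a | b
    return 1.0 - (len(a & b) / len(union) if union else 1.0)

# All N*(N-1)/2 comparisons, each costing O(V): overall O(N² V).
distances = {(x, y): jaccard_distance(organisms[x], organisms[y])
             for x, y in combinations(sorted(organisms), 2)}
for (x, y), d in distances.items():
    print(f"{x} vs {y}: {d:.3f}")
```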
Algorithms to find phylogeny

Our method uses a simplistic complete linkage clustering algorithm to reconstruct the phylogeny. This may sound inappropriate, but it is grounded in our data representation. Because the substrate-product relationship disregards the occurrence of metabolites, a frequently appearing reaction type (e.g. ATP-dependent kinases) and a rare reaction type (e.g. sterol synthases) are given the same weight. For this reason, standard parsimony or evolutionary distances do not properly reflect the distance between species in our scheme. Since we wanted to focus on metabolic differences, the complete linkage method was employed. Other algorithms should nevertheless be systematically tested and evaluated for their appropriateness, which is left as future work.
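For completeness, a minimal sketch of this clustering step is shown below, assuming the binary profiles are already in a matrix (placeholder values here). Exporting the resulting hierarchy as a plain Newick topology is only one possible way to hand the tree to a viewer; we used Cluster 3.0 and Dendroscope rather than this code.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

# Placeholder absence/presence profiles for five hypothetical organisms.
labels = ["orgA", "orgB", "orgC", "orgD", "orgE"]
profiles = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1],
]).astype(bool)

# Complete linkage on Jaccard distances between the binary profiles.
Z = linkage(profiles, method="complete", metric="jaccard")

def to_newick(node, labels):
    """Convert a SciPy cluster tree into a Newick string (topology only)."""
    if node.is_leaf():
        return labels[node.id]
    left = to_newick(node.get_left(), labels)
    right = to_newick(node.get_right(), labels)
    return f"({left},{right})"

print(to_newick(to_tree(Z), labels) + ";")
```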
Sharing metabolic knowledge through wiki

We make the substrate-product relationships publicly available on a wiki-based site so that readers can check every detail of our analysis. This is especially important in the era of high-throughput data management, because research results increasingly become irreproducible when the underlying data are not published or the methods are incompletely described. To overcome this difficulty, traceability and transparency of data and analyses are important in the evaluation of research.

Conclusions

Phylogeny was reconstructed using the structural relationships between annotated metabolites. This method is robust to pathway gaps and gene copy numbers, and it can expose metabolically anomalous species when its result is compared with other phyletic or phylogenetic reconstructions. Through several comparisons, our method highlighted metabolic anomalies in chlamydiae and spirochaetes, both of which are well-known parasitic groups. Metabolic comparison thus assists the understanding of species-environment interactions in combination with other gene-oriented strategies.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

MA designed the research and CWC conducted it under the supervision of PCL. CWC and MA wrote the paper together.
[ "Understanding the phyletic relationship among living organisms has long been a fundamental challenge since the concept of evolution had emerged. Traditionally, molecular biologists constructed phylogenetic trees based on the sequence similarity of small subunit ribosomal RNA [1] or other single genes. As whole-genome sequencing technologies advance, vast amount of sequence data become available for download and analysis. Without question, the comparative analysis of whole genomes can provide more information to reconstruct the phylogeny than individual genes do. Consequently, numerous methods have been proposed to reconstruct the phylogenetic trees from whole genome features such as oligonucleotide compositions [2], genome fragment occurrence [3], and absence/presence of metabolic features [4].\nIn parallel with genomic comparisons, many studies focused on the similarity of metabolic processes. Metabolic profiles of a living organism are strongly related to its environment, and metabolism is adapted to balance compounds taken up from its surroundings [5,6]. Thus, metabolic consideration can add insights into species-environment interaction such as symbiosis or convergent adaptation to extreme environments. To analyze the phyletic relationship in metabolic capability, there are at least 3 approaches. The first is machine learning. Oh et al. used a distance computed by the exponential graph kernel, i.e., the weighted sum of similarities between adjacency matrices of 1-step neighbors, 2-step neighbors, and so on for 81 organisms [7]. The second is network comparison. Zhang et al. defined existence/absence of metabolic pathways and computed the network similarity measure for 47 organisms [8]. The last is EC-based classification. Clemente et al. used sets of EC numbers to define pathway similarity and compared metabolism of 8 bacteria [9].\nMetabolic data are well standardized in previous approaches because all works depended on the bulk-downloadable KEGG database [10]. Less concerned, however, was the strategy for transforming enzymatic reactions into graphs (or networks). Depending on the strategy, resulting networks are drastically different enough to change fundamental network centralities [11]. For example, Borenstein et al. converted each enzymatic reaction to a fully connected bipartite graph between substrates and products to enhance connectivity and defined ‘seed’ compounds for each organism as the union of essential metabolites in all environments [12]. This transformation is known to overestimate the ability to synthesize/degrade metabolites. On the other hand, using the EC numbers for pathway analysis tend to underestimate the metabolic network because the numbers are assigned to biochemical transformation, and not to enzyme themselves. We here propose a more suitable data representation, and elucidate the phylogenies across three domains of life. Its effectiveness is shown in comparison with previous approaches.", "[SUBTITLE] Enzyme annotation for organisms [SUBSECTION] Enzyme annotations and corresponding EC reactions for 1075 organisms (895 bacteria, 67 archaea, and 113 eukaryotes) were obtained from the KEGG database through its application program interface. The number of EC reactions was 3116, covering as many as 154 pathway maps. Metabolic annotations in each species were represented as a set of substrate-product relationships by transforming all assigned EC reactions into a set of metabolite pairs (see the next section). 
Most EC-numbered entries correspond to multiple enzymatic reactions. For example, alcohol dehydrogenase (EC 1.1.1.1) can catalyze a multitude of compounds with a hydroxyl group. For such generic EC-numbered functions we manually integrated possible reactions to ensure the coverage of biochemical transformation shown in the metabolic maps.\nEnzyme annotations and corresponding EC reactions for 1075 organisms (895 bacteria, 67 archaea, and 113 eukaryotes) were obtained from the KEGG database through its application program interface. The number of EC reactions was 3116, covering as many as 154 pathway maps. Metabolic annotations in each species were represented as a set of substrate-product relationships by transforming all assigned EC reactions into a set of metabolite pairs (see the next section). Most EC-numbered entries correspond to multiple enzymatic reactions. For example, alcohol dehydrogenase (EC 1.1.1.1) can catalyze a multitude of compounds with a hydroxyl group. For such generic EC-numbered functions we manually integrated possible reactions to ensure the coverage of biochemical transformation shown in the metabolic maps.\n[SUBTITLE] Strategy for graph transformation [SUBSECTION] An enzymatic reaction usually has multiple inputs (substrates) and outputs (products). Although standard metabolic pathway charts are depicted as hypergraphs, substrate-product relationships must be specified for each reaction to transform it into a graph. A standard way is to use a fully connected bipartite graph [7,8,12]. The network connectivity then portrays the ‘reaction membership’; frequently occurring metabolites become hub nodes in the resulting graph. The representation, however, does not capture biochemical transformation between compounds because any two metabolites can be falsely linked through metabolic hubs regardless of their structures [11].\nTo avoid this bypassing effect, we employ the substrate-product decomposition of reactions [13]. In this scheme, each reaction is decomposed into a set of structurally related substrate-product pairs at the atomic scale. The data are also available from the RPAIR database [14], and the same method has been used in several recent works [15-17]. This representation avoids bias originating from currency metabolites. In other words, the method focuses on the variation of structural transformations, not the occurrence of each metabolite. The decomposition results of EC-numbered reactions are accessible at our wiki-based site: http://metabolomics.jp/wiki/Enzyme:[EC-number]. For example, the details of hexokinase can be accessed at http://metabolomics.jp/wiki/Enzyme:2.7.1.1 In the transformation, we replaced generic names such as alcohol or amino acids with concrete compound names. For hexokinase, as many as 15 reactions are included depending on hexose types. Through this decomposition, a set of enzymatic reactions becomes a set of substrate-product pairs. We did not consider the multiplicity of each pair in our analysis.\nAn enzymatic reaction usually has multiple inputs (substrates) and outputs (products). Although standard metabolic pathway charts are depicted as hypergraphs, substrate-product relationships must be specified for each reaction to transform it into a graph. A standard way is to use a fully connected bipartite graph [7,8,12]. The network connectivity then portrays the ‘reaction membership’; frequently occurring metabolites become hub nodes in the resulting graph. 
The representation, however, does not capture biochemical transformation between compounds because any two metabolites can be falsely linked through metabolic hubs regardless of their structures [11].\nTo avoid this bypassing effect, we employ the substrate-product decomposition of reactions [13]. In this scheme, each reaction is decomposed into a set of structurally related substrate-product pairs at the atomic scale. The data are also available from the RPAIR database [14], and the same method has been used in several recent works [15-17]. This representation avoids bias originating from currency metabolites. In other words, the method focuses on the variation of structural transformations, not the occurrence of each metabolite. The decomposition results of EC-numbered reactions are accessible at our wiki-based site: http://metabolomics.jp/wiki/Enzyme:[EC-number]. For example, the details of hexokinase can be accessed at http://metabolomics.jp/wiki/Enzyme:2.7.1.1 In the transformation, we replaced generic names such as alcohol or amino acids with concrete compound names. For hexokinase, as many as 15 reactions are included depending on hexose types. Through this decomposition, a set of enzymatic reactions becomes a set of substrate-product pairs. We did not consider the multiplicity of each pair in our analysis.\n[SUBTITLE] Phyletic reconstruction [SUBSECTION] Phyletic trees were created by a hierarchical clustering method (pairwise complete linkage algorithm) using the Cluster 3.0 software program [18]. Each organism was represented as a vector of substrate-product pairs, where the absence/presence of each relationship was denoted as 0 or 1. For visualization, Dendroscope software program [19] was used to analyze and compare phyletic trees. The employed simple algorithm may be controversial for phyletic reconstruction, and will be discussed later.\nPhyletic trees were created by a hierarchical clustering method (pairwise complete linkage algorithm) using the Cluster 3.0 software program [18]. Each organism was represented as a vector of substrate-product pairs, where the absence/presence of each relationship was denoted as 0 or 1. For visualization, Dendroscope software program [19] was used to analyze and compare phyletic trees. The employed simple algorithm may be controversial for phyletic reconstruction, and will be discussed later.", "Enzyme annotations and corresponding EC reactions for 1075 organisms (895 bacteria, 67 archaea, and 113 eukaryotes) were obtained from the KEGG database through its application program interface. The number of EC reactions was 3116, covering as many as 154 pathway maps. Metabolic annotations in each species were represented as a set of substrate-product relationships by transforming all assigned EC reactions into a set of metabolite pairs (see the next section). Most EC-numbered entries correspond to multiple enzymatic reactions. For example, alcohol dehydrogenase (EC 1.1.1.1) can catalyze a multitude of compounds with a hydroxyl group. For such generic EC-numbered functions we manually integrated possible reactions to ensure the coverage of biochemical transformation shown in the metabolic maps.", "An enzymatic reaction usually has multiple inputs (substrates) and outputs (products). Although standard metabolic pathway charts are depicted as hypergraphs, substrate-product relationships must be specified for each reaction to transform it into a graph. A standard way is to use a fully connected bipartite graph [7,8,12]. 
The network connectivity then portrays the ‘reaction membership’; frequently occurring metabolites become hub nodes in the resulting graph. The representation, however, does not capture biochemical transformation between compounds because any two metabolites can be falsely linked through metabolic hubs regardless of their structures [11].\nTo avoid this bypassing effect, we employ the substrate-product decomposition of reactions [13]. In this scheme, each reaction is decomposed into a set of structurally related substrate-product pairs at the atomic scale. The data are also available from the RPAIR database [14], and the same method has been used in several recent works [15-17]. This representation avoids bias originating from currency metabolites. In other words, the method focuses on the variation of structural transformations, not the occurrence of each metabolite. The decomposition results of EC-numbered reactions are accessible at our wiki-based site: http://metabolomics.jp/wiki/Enzyme:[EC-number]. For example, the details of hexokinase can be accessed at http://metabolomics.jp/wiki/Enzyme:2.7.1.1 In the transformation, we replaced generic names such as alcohol or amino acids with concrete compound names. For hexokinase, as many as 15 reactions are included depending on hexose types. Through this decomposition, a set of enzymatic reactions becomes a set of substrate-product pairs. We did not consider the multiplicity of each pair in our analysis.", "Phyletic trees were created by a hierarchical clustering method (pairwise complete linkage algorithm) using the Cluster 3.0 software program [18]. Each organism was represented as a vector of substrate-product pairs, where the absence/presence of each relationship was denoted as 0 or 1. For visualization, Dendroscope software program [19] was used to analyze and compare phyletic trees. The employed simple algorithm may be controversial for phyletic reconstruction, and will be discussed later.", "We compared results of our data representation with several recent, well known studies.\n[SUBTITLE] Phyletic trees for multi-domains of life based on substrate-product relationships [SUBSECTION] To compare with the phylogeny reconstruction based on the ‘seed’ metabolites [12], we reconstructed a phyletic tree for 478 species (the same number as the original article) using our substrate-product pairs. Figure 1 shows the summarized view of both trees of life. Both approaches clustered 6 main domains successfully, but the seed approach placed plants and fungi among bacteria. This is a serious artifact; since the seed approach focuses on essential metabolites, classification based on secondary metabolites becomes unstable. In both trees, a few seemingly dispersed clades (protists in eukaryota) existed. This is reasonable because the definition of protist is a structural simplicity regardless of its metabolic capability. Note here that our method correctly classified eukaryotes and also placed spirochaeta and chlamydia in a group separated from the other bacteria. This indicates these parasitic/pathogenic species exhibit anomalous metabolism in comparison with the other species, but further investigation is necessary to confirm its reason.\nReconstructed phlogeny. Left: Our reconstruction. Species in the same family were grouped into leaf nodes. Right: Reconstruction by Borenstein et al. [12] using the ‘seed’ metabolites. Reprinted with permission. Copyright (2008) National Academy of Sciences, U.S.A. 
Abbreviations: Bac, bacteria (orange); Arc, archaea (cyan); Pla, plants (light green); Ani, animals (navy blue); Fun, fungi (dark green); Pro, protists (light purple).\nAs the second comparison, our approach was compared with the golden standard tree, reconstructed by using concatenated alignment of 31 universal protein families covering 191 species [20] (Figure 2). Our method could clearly separate three main domains, bacteria, archaea and metazoan, except Nanoarchaeum equitans, which is an obligatory symbiont on Ignicoccus. It lacks many essential metabolic pathways and therefore became an orphan branch in our reconstruction. Similarly, the reconstruction reflected more on metabolic phenotypes rather than genetic evolution. For example, Mycoplasma spp. were located far from the other bacteria and closer to eukaryotes in our tree because they lack many metabolic pathways (higher animals lack many amino acid biosynthesis, for example). This defect was also observed in the comparison with the ‘seed’-based tree. Some invertebrate parasites were also grouped with Caenorhabditis elegans due to their metabolic similarity of unknown reason. Note that systematics of C. elegans is contentious and still unresolved because of its high evolutionary rate [21]. In summary, our method could reproduce comparable results with the standard tree. In addition, it could extract metabolically anomalous species which could not be easily found by simple genetic comparisons by comparing results with the standard phylogeny.\nReconstructed phylogeny. Left: Our reconstruction. Right: Reconstruction by Ciccarelli et al. [20] using the 31 universal protein families. Reprinted with permission from AAAS. Bacteria (purple), Archaea (green), thermo Archaea (yellow green), Metazoa (pink).\nTo compare with the phylogeny reconstruction based on the ‘seed’ metabolites [12], we reconstructed a phyletic tree for 478 species (the same number as the original article) using our substrate-product pairs. Figure 1 shows the summarized view of both trees of life. Both approaches clustered 6 main domains successfully, but the seed approach placed plants and fungi among bacteria. This is a serious artifact; since the seed approach focuses on essential metabolites, classification based on secondary metabolites becomes unstable. In both trees, a few seemingly dispersed clades (protists in eukaryota) existed. This is reasonable because the definition of protist is a structural simplicity regardless of its metabolic capability. Note here that our method correctly classified eukaryotes and also placed spirochaeta and chlamydia in a group separated from the other bacteria. This indicates these parasitic/pathogenic species exhibit anomalous metabolism in comparison with the other species, but further investigation is necessary to confirm its reason.\nReconstructed phlogeny. Left: Our reconstruction. Species in the same family were grouped into leaf nodes. Right: Reconstruction by Borenstein et al. [12] using the ‘seed’ metabolites. Reprinted with permission. Copyright (2008) National Academy of Sciences, U.S.A. Abbreviations: Bac, bacteria (orange); Arc, archaea (cyan); Pla, plants (light green); Ani, animals (navy blue); Fun, fungi (dark green); Pro, protists (light purple).\nAs the second comparison, our approach was compared with the golden standard tree, reconstructed by using concatenated alignment of 31 universal protein families covering 191 species [20] (Figure 2). 
Our method could clearly separate three main domains, bacteria, archaea and metazoan, except Nanoarchaeum equitans, which is an obligatory symbiont on Ignicoccus. It lacks many essential metabolic pathways and therefore became an orphan branch in our reconstruction. Similarly, the reconstruction reflected more on metabolic phenotypes rather than genetic evolution. For example, Mycoplasma spp. were located far from the other bacteria and closer to eukaryotes in our tree because they lack many metabolic pathways (higher animals lack many amino acid biosynthesis, for example). This defect was also observed in the comparison with the ‘seed’-based tree. Some invertebrate parasites were also grouped with Caenorhabditis elegans due to their metabolic similarity of unknown reason. Note that systematics of C. elegans is contentious and still unresolved because of its high evolutionary rate [21]. In summary, our method could reproduce comparable results with the standard tree. In addition, it could extract metabolically anomalous species which could not be easily found by simple genetic comparisons by comparing results with the standard phylogeny.\nReconstructed phylogeny. Left: Our reconstruction. Right: Reconstruction by Ciccarelli et al. [20] using the 31 universal protein families. Reprinted with permission from AAAS. Bacteria (purple), Archaea (green), thermo Archaea (yellow green), Metazoa (pink).\n[SUBTITLE] Phyletic trees with or without network connectivity [SUBSECTION] To investigate the information gain by considering metabolic network connectivity, we carefully compared our approach with the network topology-based approach [8]. There are few discrepancies between our and their results. In our approach, some proteobacteria and hyperthermophils were not properly grouped into the same sub-clusters (Figure 3). These clades are labeled as “other independent bacteria” and their proper positions are context-dependent. For this reason, we do not consider our classification inappropriate. On the other hand, we could correctly cluster Mycobacterium tuberculosis and M. leprae into Gram-positive bacteria. In addition, parasites and symbionts (spirochaete and clamydia) were classified more correctly in our method. In summary, although overall classification was similar, we could better, or at least equally, classify parasitic or symbiotic species in comparison with the results with another phyletic approach.\nReconstructed phylogeny. Left: Our reconstruction. Right: Reconstruction by Zhang et al.[8]. Reprinted under the BioMed Central Open License agreement (BMC Bioinformatics).\nTo investigate the information gain by considering metabolic network connectivity, we carefully compared our approach with the network topology-based approach [8]. There are few discrepancies between our and their results. In our approach, some proteobacteria and hyperthermophils were not properly grouped into the same sub-clusters (Figure 3). These clades are labeled as “other independent bacteria” and their proper positions are context-dependent. For this reason, we do not consider our classification inappropriate. On the other hand, we could correctly cluster Mycobacterium tuberculosis and M. leprae into Gram-positive bacteria. In addition, parasites and symbionts (spirochaete and clamydia) were classified more correctly in our method. 
In summary, although overall classification was similar, we could better, or at least equally, classify parasitic or symbiotic species in comparison with the results with another phyletic approach.\nReconstructed phylogeny. Left: Our reconstruction. Right: Reconstruction by Zhang et al.[8]. Reprinted under the BioMed Central Open License agreement (BMC Bioinformatics).\n[SUBTITLE] Comparison with EC number-based classification [SUBSECTION] Clemente et al. investigated the relationship among 8 photosynthetic bacteria using pseudo-alignment of over 60 metabolic pathways using the EC hierarchy [9]. Lastly, we compared their results with ours and found 2 differences from the EC-based phylogeny (Figure 4): the positions of Synechocystis (syn) and Synechococcus (syw), both of which belong to Chroococcales together with Thermosynechococcus elongates (tel). The misplacement of Chroococcales was observed in the work by Clemente et al. too and presumably results from the insufficiency of gene annotations in these species (Figure 4). In terms of metabolic similarity, our reconstruction seems more accurate because Gloeobacter violaceus (gvi) and tel were isolated from rocks and hot springs, respectively, whereas the remaining 6 species were isolated from fresh or sea water. Therefore, the two species should be regarded as metabolic out-groups as in our classification.\nReconstructed phylogeny. Left: Our reconstruction. Abbreviations: Anabaena (ana), Gloeobacter violaceus (gvi), Prochlorococcus marinus marinus (pma), P. marinus pastoris (pmm), P. marinus (pmt), Synechocystis (syn), Synechococcus (syw), Thermosynechococcus elongatus (tel). Most separated are the two photosynthetic eukaryotes: Arabidopsis thaliana (ath) and Cyanidioschyzon merolae (cme). Right: EC-number based classification by Clemente et al. [9] Reprinted with permission from Oxford University Press.\nClemente et al. investigated the relationship among 8 photosynthetic bacteria using pseudo-alignment of over 60 metabolic pathways using the EC hierarchy [9]. Lastly, we compared their results with ours and found 2 differences from the EC-based phylogeny (Figure 4): the positions of Synechocystis (syn) and Synechococcus (syw), both of which belong to Chroococcales together with Thermosynechococcus elongates (tel). The misplacement of Chroococcales was observed in the work by Clemente et al. too and presumably results from the insufficiency of gene annotations in these species (Figure 4). In terms of metabolic similarity, our reconstruction seems more accurate because Gloeobacter violaceus (gvi) and tel were isolated from rocks and hot springs, respectively, whereas the remaining 6 species were isolated from fresh or sea water. Therefore, the two species should be regarded as metabolic out-groups as in our classification.\nReconstructed phylogeny. Left: Our reconstruction. Abbreviations: Anabaena (ana), Gloeobacter violaceus (gvi), Prochlorococcus marinus marinus (pma), P. marinus pastoris (pmm), P. marinus (pmt), Synechocystis (syn), Synechococcus (syw), Thermosynechococcus elongatus (tel). Most separated are the two photosynthetic eukaryotes: Arabidopsis thaliana (ath) and Cyanidioschyzon merolae (cme). Right: EC-number based classification by Clemente et al. 
[9] Reprinted with permission from Oxford University Press.\n[SUBTITLE] Central metabolites [SUBSECTION] We previously argued that metabolic hubs are better identified in the substrate-product graph than in other graph representations, because the approach does not count the frequency of metabolite names in reactions but the number of structural transformations [11]. The number of transformations roughly reflects the structural variation of catalytic sites of respective enzymes, and therefore reflects the diversity of metabolic capabilities.\nTable 1 is the list of metabolites in the three domains which appear as the top 10 hubs in more than 20% organisms for each domain. The abundance of adenosine-related metabolites for all domains indicates the ancientry of purine-related metabolism, which coincides with the analysis on protein structures [22]. The presence of CO2 and NH3 is an unavoidable artifact of counting all decarboxylations and amino-transfers. High-degree metabolites are largely conserved. It can be seen that eukaryotes contain more reactions with glucuronate, glutathione, and galactose, which appear in drug metabolism. At the same time, eukaryotes use less L-aspartate- or 5-phospho-alpha-D-ribose 1-phosphate-dependent reactions. Archaea lack malonyl-acyl carrier proteins and coenzyme A, which often appear in lipid metabolism for eukaryotes and bacteria. Archaea also use L-glutamine more often than the other domains.\nMost differently transforming metabolites in the three domains. The full list is available at http://sarst.life.nthu.edu.tw/metabolic/SD.csv.\nHighlighted metabolites are mentioned in the main text. Abbreviations: P … phosphate; ACP … acyl carrier proteins.\n\nWe previously argued that metabolic hubs are better identified in the substrate-product graph than in other graph representations, because the approach does not count the frequency of metabolite names in reactions but the number of structural transformations [11]. The number of transformations roughly reflects the structural variation of catalytic sites of respective enzymes, and therefore reflects the diversity of metabolic capabilities.\nTable 1 is the list of metabolites in the three domains which appear as the top 10 hubs in more than 20% organisms for each domain. The abundance of adenosine-related metabolites for all domains indicates the ancientry of purine-related metabolism, which coincides with the analysis on protein structures [22]. The presence of CO2 and NH3 is an unavoidable artifact of counting all decarboxylations and amino-transfers. High-degree metabolites are largely conserved. It can be seen that eukaryotes contain more reactions with glucuronate, glutathione, and galactose, which appear in drug metabolism. At the same time, eukaryotes use less L-aspartate- or 5-phospho-alpha-D-ribose 1-phosphate-dependent reactions. Archaea lack malonyl-acyl carrier proteins and coenzyme A, which often appear in lipid metabolism for eukaryotes and bacteria. Archaea also use L-glutamine more often than the other domains.\nMost differently transforming metabolites in the three domains. The full list is available at http://sarst.life.nthu.edu.tw/metabolic/SD.csv.\nHighlighted metabolites are mentioned in the main text. 
Abbreviations: P … phosphate; ACP … acyl carrier proteins.\n\n[SUBTITLE] Metabolic differences between bacteria, archaea, and eukaryotes [SUBSECTION] To elucidate the metabolic differences between the three domains of life, we created a heat map of the substrate-product relationships in 535 species. In Figure 5, the vertical and horizontal directions are the hierarchically clustered organisms and the substrate-product relationships, respectively. Note that substrate-product relationships in species-specific pathways tend to cluster in this scheme. Archaea and Mycoplasma lack the fatty acid biosynthesis and many other pathways. However, many archaeal pathways are overlooked in the KEGG annotations (e.g. energy metabolism and ether-lipid metabolism for membrane synthesis), and their uniqueness is not easily discerned in this analysis. In contrast, Plantae and Animalia kingdoms in eukaryotes are easy to locate because animals possess drug- and other secondary metabolism, and plants possess unique secondary biosynthetic pathways (Figure 5).\nThe heat map of substrate-product relationships in 535 organisms. The horizontal line at the topmost right part corresponds to animal- and plant-specific pathways (the rightmost line is plants. Animals are the next-right line just below plants. The black horizontal line just below the eukaryotes (plants and animals) is Mycoplasma, which lack most pathways. Archaea are clustered at the bottom of the figure.\nTo elucidate the metabolic differences between the three domains of life, we created a heat map of the substrate-product relationships in 535 species. In Figure 5, the vertical and horizontal directions are the hierarchically clustered organisms and the substrate-product relationships, respectively. Note that substrate-product relationships in species-specific pathways tend to cluster in this scheme. Archaea and Mycoplasma lack the fatty acid biosynthesis and many other pathways. However, many archaeal pathways are overlooked in the KEGG annotations (e.g. energy metabolism and ether-lipid metabolism for membrane synthesis), and their uniqueness is not easily discerned in this analysis. In contrast, Plantae and Animalia kingdoms in eukaryotes are easy to locate because animals possess drug- and other secondary metabolism, and plants possess unique secondary biosynthetic pathways (Figure 5).\nThe heat map of substrate-product relationships in 535 organisms. The horizontal line at the topmost right part corresponds to animal- and plant-specific pathways (the rightmost line is plants. Animals are the next-right line just below plants. The black horizontal line just below the eukaryotes (plants and animals) is Mycoplasma, which lack most pathways. Archaea are clustered at the bottom of the figure.", "To compare with the phylogeny reconstruction based on the ‘seed’ metabolites [12], we reconstructed a phyletic tree for 478 species (the same number as the original article) using our substrate-product pairs. Figure 1 shows the summarized view of both trees of life. Both approaches clustered 6 main domains successfully, but the seed approach placed plants and fungi among bacteria. This is a serious artifact; since the seed approach focuses on essential metabolites, classification based on secondary metabolites becomes unstable. In both trees, a few seemingly dispersed clades (protists in eukaryota) existed. This is reasonable because the definition of protist is a structural simplicity regardless of its metabolic capability. 
Note here that our method correctly classified eukaryotes and also placed spirochaeta and chlamydia in a group separated from the other bacteria. This indicates these parasitic/pathogenic species exhibit anomalous metabolism in comparison with the other species, but further investigation is necessary to confirm its reason.\nReconstructed phlogeny. Left: Our reconstruction. Species in the same family were grouped into leaf nodes. Right: Reconstruction by Borenstein et al. [12] using the ‘seed’ metabolites. Reprinted with permission. Copyright (2008) National Academy of Sciences, U.S.A. Abbreviations: Bac, bacteria (orange); Arc, archaea (cyan); Pla, plants (light green); Ani, animals (navy blue); Fun, fungi (dark green); Pro, protists (light purple).\nAs the second comparison, our approach was compared with the golden standard tree, reconstructed by using concatenated alignment of 31 universal protein families covering 191 species [20] (Figure 2). Our method could clearly separate three main domains, bacteria, archaea and metazoan, except Nanoarchaeum equitans, which is an obligatory symbiont on Ignicoccus. It lacks many essential metabolic pathways and therefore became an orphan branch in our reconstruction. Similarly, the reconstruction reflected more on metabolic phenotypes rather than genetic evolution. For example, Mycoplasma spp. were located far from the other bacteria and closer to eukaryotes in our tree because they lack many metabolic pathways (higher animals lack many amino acid biosynthesis, for example). This defect was also observed in the comparison with the ‘seed’-based tree. Some invertebrate parasites were also grouped with Caenorhabditis elegans due to their metabolic similarity of unknown reason. Note that systematics of C. elegans is contentious and still unresolved because of its high evolutionary rate [21]. In summary, our method could reproduce comparable results with the standard tree. In addition, it could extract metabolically anomalous species which could not be easily found by simple genetic comparisons by comparing results with the standard phylogeny.\nReconstructed phylogeny. Left: Our reconstruction. Right: Reconstruction by Ciccarelli et al. [20] using the 31 universal protein families. Reprinted with permission from AAAS. Bacteria (purple), Archaea (green), thermo Archaea (yellow green), Metazoa (pink).", "To investigate the information gain by considering metabolic network connectivity, we carefully compared our approach with the network topology-based approach [8]. There are few discrepancies between our and their results. In our approach, some proteobacteria and hyperthermophils were not properly grouped into the same sub-clusters (Figure 3). These clades are labeled as “other independent bacteria” and their proper positions are context-dependent. For this reason, we do not consider our classification inappropriate. On the other hand, we could correctly cluster Mycobacterium tuberculosis and M. leprae into Gram-positive bacteria. In addition, parasites and symbionts (spirochaete and clamydia) were classified more correctly in our method. In summary, although overall classification was similar, we could better, or at least equally, classify parasitic or symbiotic species in comparison with the results with another phyletic approach.\nReconstructed phylogeny. Left: Our reconstruction. Right: Reconstruction by Zhang et al.[8]. Reprinted under the BioMed Central Open License agreement (BMC Bioinformatics).", "Clemente et al. 
investigated the relationship among 8 photosynthetic bacteria using pseudo-alignment of over 60 metabolic pathways using the EC hierarchy [9]. Lastly, we compared their results with ours and found 2 differences from the EC-based phylogeny (Figure 4): the positions of Synechocystis (syn) and Synechococcus (syw), both of which belong to Chroococcales together with Thermosynechococcus elongates (tel). The misplacement of Chroococcales was observed in the work by Clemente et al. too and presumably results from the insufficiency of gene annotations in these species (Figure 4). In terms of metabolic similarity, our reconstruction seems more accurate because Gloeobacter violaceus (gvi) and tel were isolated from rocks and hot springs, respectively, whereas the remaining 6 species were isolated from fresh or sea water. Therefore, the two species should be regarded as metabolic out-groups as in our classification.\nReconstructed phylogeny. Left: Our reconstruction. Abbreviations: Anabaena (ana), Gloeobacter violaceus (gvi), Prochlorococcus marinus marinus (pma), P. marinus pastoris (pmm), P. marinus (pmt), Synechocystis (syn), Synechococcus (syw), Thermosynechococcus elongatus (tel). Most separated are the two photosynthetic eukaryotes: Arabidopsis thaliana (ath) and Cyanidioschyzon merolae (cme). Right: EC-number based classification by Clemente et al. [9] Reprinted with permission from Oxford University Press.", "We previously argued that metabolic hubs are better identified in the substrate-product graph than in other graph representations, because the approach does not count the frequency of metabolite names in reactions but the number of structural transformations [11]. The number of transformations roughly reflects the structural variation of catalytic sites of respective enzymes, and therefore reflects the diversity of metabolic capabilities.\nTable 1 is the list of metabolites in the three domains which appear as the top 10 hubs in more than 20% organisms for each domain. The abundance of adenosine-related metabolites for all domains indicates the ancientry of purine-related metabolism, which coincides with the analysis on protein structures [22]. The presence of CO2 and NH3 is an unavoidable artifact of counting all decarboxylations and amino-transfers. High-degree metabolites are largely conserved. It can be seen that eukaryotes contain more reactions with glucuronate, glutathione, and galactose, which appear in drug metabolism. At the same time, eukaryotes use less L-aspartate- or 5-phospho-alpha-D-ribose 1-phosphate-dependent reactions. Archaea lack malonyl-acyl carrier proteins and coenzyme A, which often appear in lipid metabolism for eukaryotes and bacteria. Archaea also use L-glutamine more often than the other domains.\nMost differently transforming metabolites in the three domains. The full list is available at http://sarst.life.nthu.edu.tw/metabolic/SD.csv.\nHighlighted metabolites are mentioned in the main text. Abbreviations: P … phosphate; ACP … acyl carrier proteins.\n", "To elucidate the metabolic differences between the three domains of life, we created a heat map of the substrate-product relationships in 535 species. In Figure 5, the vertical and horizontal directions are the hierarchically clustered organisms and the substrate-product relationships, respectively. Note that substrate-product relationships in species-specific pathways tend to cluster in this scheme. Archaea and Mycoplasma lack the fatty acid biosynthesis and many other pathways. 
To elucidate the metabolic differences between the three domains of life, we created a heat map of the substrate-product relationships in 535 species. In Figure 5, the vertical and horizontal directions correspond to the hierarchically clustered organisms and the substrate-product relationships, respectively; substrate-product relationships belonging to species-specific pathways tend to cluster in this scheme. Archaea and Mycoplasma lack fatty acid biosynthesis and many other pathways. However, many archaeal pathways are missing from the KEGG annotations (e.g. energy metabolism and the ether-lipid metabolism used for membrane synthesis), so their uniqueness is not easily discerned in this analysis. In contrast, the Plantae and Animalia kingdoms among the eukaryotes are easy to locate because animals possess drug and other secondary metabolism, and plants possess unique secondary biosynthetic pathways (Figure 5).

Figure 5. Heat map of substrate-product relationships in 535 organisms. The horizontal lines at the topmost right correspond to animal- and plant-specific pathways (the upper of these lines is plants; animals are the line just below plants). The black horizontal line just below the eukaryotes (plants and animals) is Mycoplasma, which lack most pathways. Archaea are clustered at the bottom of the figure.
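A minimal sketch of how such a heat map could be drawn, assuming the data are already available as a binary organism-by-transformation matrix (the matrix contents, labels and output file name below are illustrative assumptions, not the pipeline used for Figure 5):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical input: rows = organisms, columns = substrate-product pairs,
# entries = 1 if the transformation occurs among the organism's annotated reactions.
matrix = pd.DataFrame(
    [[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 0, 1]],
    index=["org1", "org2", "org3"],
    columns=["ATP>ADP", "glc>g6p", "pyr>acCoA", "chl_a>chl_b"],
)

# Cluster both organisms (rows) and transformations (columns), then plot.
sns.clustermap(matrix, metric="jaccard", method="complete",
               cmap="Greys", figsize=(6, 4))
plt.savefig("substrate_product_heatmap.png", dpi=150)
```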
Our reconstruction using substrate-product relationships efficiently highlighted metabolically interesting species when compared with the standard phylogenetic approach. Previous approaches using metabolic information also produced informative results [7-9,12], but their findings were similar to those obtained by genetic comparisons [2-4]. This is understandable because, in those approaches, metabolic reactions correspond roughly one-to-one to enzymes or genes.

Why can substrate-product relationships add insights?

Our approach is more robust to pathway gaps (incomplete annotation) and to currency metabolites because it evaluates each biochemical transformation with equal weight. It is also robust to biases arising from the number of genes or their multiplicity. Standard phylogenetic methods elucidate evolutionary relationships, whereas our approach locates species with anomalous or otherwise interesting metabolism by comparison. The method is therefore useful in combination with, rather than as a replacement for, existing phyletic/phylogenetic clustering.

Our method is also computationally lightweight and scalable, requiring O(N²V) time to compute all pairwise similarities, where N is the number of organisms and V is the maximum number of reactions in one organism. By contrast, the exponential graph kernel, for example, requires O(NV³ + N²V²) time [7]. Our complexity is equivalent to that of a recently presented pathway alignment method [23], but that method exploits graph topology and its results are expected to be similar to those of Zhang et al. [8]. Lastly, the 'seed' approach uses a heuristic to find metabolic seeds [12], whereas exact identification of metabolic seeds is NP-complete [24]. There is thus a large gap in scalability relative to the other metabolic approaches.

Algorithms for reconstructing the phylogeny

Our method uses a simple complete-linkage clustering algorithm to reconstruct the phylogeny. This may seem inappropriate, but it is grounded in our data representation. Because the substrate-product relationship disregards how often metabolites occur, a frequently appearing reaction type (e.g. an ATP kinase) and a rare one (e.g. a sterol synthase) are given the same weight. For this reason, standard parsimony or evolutionary distances do not properly reflect the distance between species in our scheme. Since we wanted to focus on metabolic differences, complete linkage was employed; however, other algorithms should be systematically tested and evaluated for their appropriateness, which is left as future work.
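A compact sketch of this pipeline under the above assumptions, with each organism represented as a set of substrate-product pairs (the organisms and reactions below are hypothetical): the nested pairwise loop makes the O(N²V) similarity cost explicit, and SciPy's complete-linkage routine then builds the tree from the resulting distances.

```python
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Hypothetical organisms, each reduced to a set of substrate-product transformations.
organisms = {
    "orgA": {("ATP", "ADP"), ("glucose", "glucose 6-P"), ("pyruvate", "acetyl-CoA")},
    "orgB": {("ATP", "ADP"), ("glucose", "glucose 6-P")},
    "orgC": {("ATP", "ADP"), ("chorismate", "prephenate")},
}
names = sorted(organisms)

# Pairwise Jaccard distances: O(N^2 V) for N organisms with at most V reactions each.
dist = np.zeros((len(names), len(names)))
for (i, a), (j, b) in combinations(enumerate(names), 2):
    shared = organisms[a] & organisms[b]
    union = organisms[a] | organisms[b]
    dist[i, j] = dist[j, i] = 1.0 - len(shared) / len(union)

# Complete-linkage clustering on the condensed distance matrix.
tree = linkage(squareform(dist, checks=False), method="complete")
leaves = dendrogram(tree, labels=names, no_plot=True)  # plot with matplotlib if desired
```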
Sharing metabolic knowledge through a wiki

We make the substrate-product relationships publicly available on a wiki-based site so that readers can check every detail of our analysis. This is especially important in the era of high-throughput data management, because more and more research results become irreproducible owing to insufficiently published data or incomplete descriptions of methods. Overcoming this difficulty requires traceability and transparency of the data and their analysis when research is evaluated.

Conclusions

We reconstructed a phylogeny using the structural relationships between annotated metabolites. The method is robust to pathway gaps and gene copy numbers, and it can extract metabolically anomalous species when its result is compared with other phyletic or phylogenetic reconstructions. Through several comparisons, our method highlighted the metabolic anomalies of the chlamydiae and spirochaetes, both well-known parasitic groups. Metabolic comparison thus assists the understanding of species-environment interactions, in combination with other gene-oriented strategies.

Competing interests

There are no competing interests.

Authors' contributions

MA designed the research, and CWC conducted it under the supervision of PCL. CWC and MA wrote the paper together.